\section*{Introduction} \PARstart{N}{umerical} semigroups have proven to be very useful in the study of one-point algebraic-geometry codes. On one hand, the arithmetic of the numerical semigroup associated to the distinguished point yields a good bound---called the order bound---on the minimum distance \cite{FeRa:dFR,HoLiPe:agc,KiPe:telescopic}. On the other hand, a close analysis of the numerical semigroup and of the decoding algorithm commonly used for one-point codes shows that significant improvements in rate may be achieved while maintaining a given error correction capability \cite{FeRa:improved}. In this article we discuss the order bound and improvements to the rate for codes constructed from Hermitian curves. Let us briefly recall the definition of one-point algebraic-geometry codes and fix the notation we will use. Suppose ${\mathbb F}$ is a finite field, $F/{\mathbb F}$ a function field and $P$ a rational point of $F/{\mathbb F}$. For $m\in{\mathbb N}_0$ let ${\mathcal L}(mP)$ be the vector space of functions in $F$ having poles only at $P$, of pole order at most $m$. Let $v_P$ be the valuation of $F$ associated with $P$ and let $\Lambda=\{-v_P(f): f\in \bigcup_m{\mathcal L}(mP),\ f\neq 0\}$. $\Lambda$ is a {\it numerical semigroup}, that is, a subset of ${\mathbb N}_0$, closed under addition, containing $0$ and with finite complement in ${\mathbb N}_0$. It is called the {\it Weierstrass semigroup} associated to $P$. Let $P_1, \dots, P_n$ be pairwise distinct rational points of $F/{\mathbb F}$, different from $P$, and let $\varphi$ be the map $\bigcup_m{\mathcal L}(mP)\rightarrow{\mathbb F}^n$ such that $f\mapsto(f(P_1),\dots,f(P_n))$. Suppose that $\Lambda=\{\lambda_0=0<\lambda_1<\lambda_2<\dots\}$. The {\it $i$-th one-point algebraic-geometry code} associated with $P$ and $P_1,\dots,P_n$ is $[\varphi({\mathcal L}(\lambda_i P))]^\perp$. Naturally, the semigroup which will give us information about the one-point codes on $P$ is the Weierstrass semigroup associated to $P$. The {\it Hermitian curve} over ${\mathbb F}_{q^2}$, where $q$ is a prime power, is defined by the affine equation $x^{q+1}=y^{q}+y.$ It has a single point $P_\infty$ at infinity and $q^3$ proper rational points $P_1,\dots,P_{q^3}$. The ring of functions on the curve with poles only at $P_\infty$ is generated, as a vector space over ${\mathbb F}_{q^2}$, by the set $\{x^iy^j: i\geq 0,\ 0\leq j< q\}$. Moreover, $v_{P_\infty}(x)=-q$ and $v_{P_\infty}(y)=-q-1$. Thus, the Weierstrass semigroup at $P_\infty$ is generated by $q$ and $q+1$. {\it Hermitian codes} are the one-point codes on the Hermitian curve associated with $P_\infty$ and $P_1,\dots,P_{q^3}$. For details on the Hermitian curve and Hermitian codes we refer to \cite{Stichtenoth:hermite,HoLiPe:agc,Geil:Norm-trace-codes}. The aim of this work is to analyze some aspects of Hermitian codes based on the Weierstrass semigroup at $P_\infty$. Since the only property of Hermitian codes we will use is that the associated numerical semigroup is generated by two consecutive integers, all the results can be stated more generally for all those one-point codes whose associated semigroup is generated by two consecutive integers.
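For instance, for $q=2$ the Hermitian curve over ${\mathbb F}_4$ is given by $x^3=y^2+y$; it has $q^3=8$ proper rational points, and the Weierstrass semigroup at $P_\infty$ is generated by $2$ and $3$, that is, $\Lambda=\{0,2,3,4,\dots\}$, whose only gap is $1$.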
In Section~\ref{sec:enum} we analyze the enumeration of semigroups generated by two consecutive integers and recall the known results on the sequence $\nu_i$ and the order bound. In Section~\ref{sec:red_st} we give formulas for the number of checks of optimal codes correcting all errors of a given weight, whenever the associated numerical semigroup is generated by two consecutive integers. In the case of Hermitian codes this answers an open question stated in \cite{PeTo}. In Section~\ref{sec:red_gen} we give formulas for the number of checks of optimal codes correcting all {\it generic} errors of a given weight.
\section{On the enumeration and the $\nu$-sequence of semigroups generated by two consecutive integers} \label{sec:enum} We start this section with a small survey of the nomenclature and notation we will use for numerical semigroups and, more specifically, for numerical semigroups generated by two consecutive integers. Then we analyze the enumeration of the latter semigroups and give the tools we will use in Section~\ref{sec:red_st} and Section~\ref{sec:red_gen}.
\subsection{Semigroups Generated by Two Consecutive Integers} By a {\it numerical semigroup} we mean a subset of ${\mathbb N}_0$ which contains $0$ and any sum of its elements, and whose complement in ${\mathbb N}_0$ is finite. Given a numerical semigroup $\Lambda$, the elements of its complement in ${\mathbb N}_0$ are called {\it gaps}. The {\it genus} $g$ of $\Lambda$ is the number of gaps, while its {\it conductor} $c$ is the largest gap plus one. The {\it enumeration} $\lambda$ of $\Lambda$ is the unique increasing bijective map $\lambda:{\mathbb N}_0\longrightarrow\Lambda$. We write $\lambda_i$ for $\lambda(i)$. Notice that if $\lambda_i$ is larger than or equal to the conductor or, equivalently, $i\geq c-g$, then $\lambda_i=i+g$. In this work we deal only with numerical semigroups generated by two consecutive integers. If the consecutive integers are $a,a+1$ then the numerical semigroup consists of the elements $ia+j(a+1)$ with $i,j\in{\mathbb N}_0$. By the properties of semigroups generated by two integers \cite{HoLiPe:agc}, the genus of this semigroup is $g=\frac{(a-1)a}{2}$ and its conductor is $c=(a-1)a$. Furthermore, the semigroup generated by $a,a+1$ admits two alternative descriptions. The first one is given by the disjoint union $\{0\}\sqcup\{a,a+1\}\sqcup\{2a,2a+1,2a+2\}\sqcup\dots\sqcup\{(a-2)a,(a-2)a+1,\dots,(a-2)a+a-2\}\sqcup\{i:i\geq (a-1)a\}.$ The second one was proved in \cite{GaRo:interval} and is given in the next lemma. \begin{lemma} \label{lemma:GaRo} The numerical semigroup generated by $a,a+1$ is the set of all nonnegative integers whose remainder upon division by $a$ is at most the quotient. \end{lemma}
\subsection{Enumeration} As one can see from Lemma~\ref{lemma:GaRo}, numerical semigroups generated by two consecutive integers are closely related to the set of pairs ${\mathcal P}=\{(x,y):x,y\in{\mathbb N}_0,\ y\leq x\}$. In fact, the numerical semigroup generated by $a,a+1$ is the image of the map $$\begin{array}{crcl} \alpha_a:& {\mathcal P} & \rightarrow & {\mathbb N}_0\\ & (x,y) & \mapsto & ax+y.\\ \end{array} $$ It turns out that this map is one-to-one on the pairs $(x,y)$ for which $\alpha_a(x,y)$ is strictly less than $a(a+1)$. Indeed, if $l<a(a+1)$ and $(x,y)\in\alpha_a^{-1}(l)$ then $x$ must be less than or equal to $a$ and $y$ must be strictly less than $a$. So $x$ and $y$ are the quotient and the remainder of the Euclidean division of $l$ by $a$, which are unique. In particular, $\alpha_a$ is one-to-one on the pairs for which $\alpha_a(x,y)$ is less than or equal to the conductor of the semigroup, which is $c=a(a-1)$.
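For instance, for $a=3$ the semigroup generated by $3,4$ is $\{0,3,4,6,7,8,9,\dots\}$, with gaps $1,2,5$, genus $g=3$ and conductor $c=6$. Here $7=2\cdot 3+1$ has remainder $1$, which is at most the quotient $2$, so $7\in\Lambda$, whereas $5=1\cdot 3+2$ has remainder $2$, larger than the quotient $1$, so $5$ is a gap; accordingly, $7=\alpha_3(2,1)$ while $5\not\in\alpha_3({\mathcal P})$.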
Furthermore, the total order $$(x,y)<(x',y')\mbox{ if }\left\{\begin{array}{l}x<x',\mbox{ or}\\x=x'\mbox{ and }y<y',\end{array}\right.$$ is compatible with the natural order of the semigroup for all those values in the semigroup which are less than $a(a+1)$. That is, for any $l,l'\in\Lambda$ with $l,l'<a(a+1)$ we have $l<l'$ if and only if $\alpha_a^{-1}(l)<\alpha_a^{-1}(l')$. Now, since $\sum_{j=0}^{k}j=\frac{k(k+1)}{2}$, the sequence $a_k=\frac{k(k+1)}{2}$ is increasing and $a_{k+1}-a_k=k+1$. So any integer $i\in{\mathbb N}_0$ can be written uniquely as $i=\frac{x(x+1)}{2}+y$ for some $x\in{\mathbb N}_0$ and some $0\leq y\leq x$. Thus, the map $$\begin{array}{crcl} \beta:& {\mathcal P} & \rightarrow & {\mathbb N}_0\\ & (x,y) & \mapsto & \frac{x(x+1)}{2}+y\\ \end{array} $$ is one-to-one everywhere and it is also compatible with the former total order. As a conclusion, and taking into consideration that the genus and the conductor of the numerical semigroup generated by $a,a+1$ are, respectively, $\frac{(a-1)a}{2}$ and $(a-1)a$, one can see that the map $\lambda:{\mathbb N}_0\longrightarrow\Lambda$ with $$\lambda(i)=\left\{\begin{array}{ll} \alpha_a\circ\beta^{-1}(i) & \mbox{ if }i\leq\frac{(a-1)a}{2},\\ i+\frac{(a-1)a}{2} & \mbox{ otherwise,}\\ \end{array}\right.$$ is increasing and one-to-one. Hence, it is exactly the enumeration of the semigroup generated by $a,a+1$.
\subsection{The $\nu$-Sequence and the Order Bound} Given a numerical semigroup $\Lambda$ with enumeration $\lambda$, define the sequence $\nu_i$ by $$\nu_i=\lvert\{j\in{\mathbb N}_0:\lambda_i-\lambda_j\in\Lambda\}\rvert.$$ The sequence $\nu_i$ is used to define the {\it order bound} on the minimum distance of one-point algebraic-geometry codes: $$\delta_i=\min\{\nu_j:j>i\}.$$ The order bound, also known as the Feng-Rao bound, is a lower bound on the minimum distance of the $i$-th one-point code on $P$, where the numerical semigroup involved is the Weierstrass semigroup associated to $P$. Details can be found in \cite{FeRa:dFR,HoLiPe:agc,KiPe:telescopic}. The Feng-Rao improved codes \cite{FeRa:improved} are defined by means of the sequence $\nu_i$ as well. First a set of functions on the curve $\{z_i: i\in {\mathbb N}_0\}$ having poles only at $P$ is considered such that the valuation of $z_i$ at $P$ is $-\lambda_i$. The Feng-Rao code designed to correct $t$ errors then has as parity checks the evaluations at certain points of the curve of the functions $z_i$ for all $i$ with $\nu_i<2t+1$. In this subsection we derive the sequence $\nu_i$ as well as the order bound for numerical semigroups generated by two consecutive integers. For Hermitian codes this information has appeared previously (see \cite{PeTo,HoLiPe:agc,MuRa}). We choose to include our own proofs since our methods are new and will be needed later in the analysis of improved codes. From now on, let $\Lambda$ be the semigroup generated by $a$ and $a+1$, let $g$ and $c$ be respectively its genus and its conductor, and let $\lambda$ be its enumeration. In order to compute the values in the sequence $\nu_i$ we need to distinguish the elements $\lambda_i\in\Lambda$ for which $\lambda_i=ax+y$ for unique nonnegative integers $x,y$ with $y\leq x$ from those for which $x,y$ are not unique. Let us denote by $\Lambda^x$ the subset of $\Lambda$ consisting of the elements $l=ax+y$ with $0\leq y\leq x$. Then $l$ is uniquely expressible as $l=ax+y$ for nonnegative integers $x,y$ with $y\leq x$ if and only if $l\in\Lambda^x\setminus(\cup_{x'\not=x}\Lambda^{x'})$.
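For example, for $a=4$ we have $\Lambda^0=\{0\}$, $\Lambda^1=\{4,5\}$, $\Lambda^2=\{8,9,10\}$, $\Lambda^3=\{12,\dots,15\}$, $\Lambda^4=\{16,\dots,20\}$, $\Lambda^5=\{20,\dots,25\}$, and so on. The first overlap occurs at $20=4\cdot 5+0=4\cdot 4+4$, which lies in $\Lambda^4\cap\Lambda^5$, while $21$, $22$ and $23$ are uniquely expressible.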
Suppose $l=ax+y\in\Lambda^x$. Then $l=a(x-1)+a+y$, and $l\in\Lambda^{x-1}$ if and only if $a+y\leq x-1$, i.e., $y\leq x-a-1$. Similarly, $l=a(x+1)-a+y$, and $l\in\Lambda^{x+1}$ if and only if $y-a\geq 0$, i.e., $y\geq a$. From this argument we have that $ax+y$ with $y\leq x$ is in $\Lambda^x\setminus(\cup_{x'\not=x}\Lambda^{x'})$ if and only if $x-a\leq y\leq a-1$. \begin{lemma} \label{lemma: nu en Lambda isolada} Let $\lambda_i\in\Lambda$ and suppose that the Euclidean division of $\lambda_i$ by $a$ has quotient $x$ and remainder $y$. If $x-a\leq y\leq a-1$, then $\nu_i=(x-y+1)(y+1)=xy-y^2+x+1.$ \end{lemma} \begin{proof} Suppose $\lambda_i=\lambda_j+\lambda_k$. It is easy to check that if $\lambda_i\in\Lambda^x\setminus(\cup_{z\not=x}\Lambda^z)$ for some $x$, then $\lambda_j\in\Lambda^{x'}\setminus(\cup_{z\not=x'}\Lambda^z)$ and $\lambda_k\in\Lambda^{x''}\setminus(\cup_{z\not=x''}\Lambda^z)$ for some $x',x''$. So, \begin{eqnarray*} \nu_i&=&\lvert\{(x',y')\in{\mathcal P}: \lambda_i-ax'-y'\in\Lambda\}\rvert\\ &=&\lvert\{(x',y')\in{\mathcal P}:(x-x',y-y')\in{\mathcal P} \}\rvert\\ &=&\lvert\{(x',y')\in{\mathbb N}_0\times{\mathbb N}_0:\\ &&\phantom{mmm}x'\leq x,\ y'\leq y,\ y'\leq x',\ y'\geq x'-x+y\}\rvert\\ &=&\sum_{0\leq x'\leq x}\lvert\{y': \max\{0,y+x'-x\}\leq y'\leq \min\{y,x'\}\}\rvert. \end{eqnarray*} This last number is the number of integer points in a parallelogram with base $x-y+1$ and height $y+1$ (see Figure~\ref{fig:paral}). Hence it is equal to $(x-y+1)(y+1)$. \begin{figure}[ht] \setlength\unitlength{4mm} \tiny \begin{center} \begin{picture}(16,15) \thicklines \put(3,1){\vector(0,1){12}} \put(3,13.5){\makebox(0,0){$y'$}} \put(2.5,7){\vector(1,0){11}} \put(14.5,7){\makebox(0,0){$x'$}} \thinlines \put(3,7){\line(1,1){5}} \put(6,12.5){$y'=x'$} \put(3,2){\line(1,1){10}} \put(10,12.5){$y'=x'-x+y$} \put(3,11){\line(1,0){10}} \put(0.8,2){$-x+y$} \put(2.2,11){$y$} \put(1.8,7){$0$} \put(7,6){$x-y$} \put(3,7){\makebox(0,0){\circle{0.2}}} \put(4,7){\makebox(0,0){\circle{0.2}}} \put(5,7){\makebox(0,0){\circle{0.2}}} \put(6,7){\makebox(0,0){\circle{0.2}}} \put(7,7){\makebox(0,0){\circle{0.2}}} \put(8,7){\makebox(0,0){\circle{0.2}}} \put(4,8){\makebox(0,0){\circle{0.2}}} \put(5,8){\makebox(0,0){\circle{0.2}}} \put(6,8){\makebox(0,0){\circle{0.2}}} \put(7,8){\makebox(0,0){\circle{0.2}}} \put(8,8){\makebox(0,0){\circle{0.2}}} \put(9,8){\makebox(0,0){\circle{0.2}}} \put(5,9){\makebox(0,0){\circle{0.2}}} \put(6,9){\makebox(0,0){\circle{0.2}}} \put(7,9){\makebox(0,0){\circle{0.2}}} \put(8,9){\makebox(0,0){\circle{0.2}}} \put(9,9){\makebox(0,0){\circle{0.2}}} \put(10,9){\makebox(0,0){\circle{0.2}}} \put(6,10){\makebox(0,0){\circle{0.2}}} \put(7,10){\makebox(0,0){\circle{0.2}}} \put(8,10){\makebox(0,0){\circle{0.2}}} \put(9,10){\makebox(0,0){\circle{0.2}}} \put(10,10){\makebox(0,0){\circle{0.2}}} \put(11,10){\makebox(0,0){\circle{0.2}}} \put(7,11){\makebox(0,0){\circle{0.2}}} \put(8,11){\makebox(0,0){\circle{0.2}}} \put(9,11){\makebox(0,0){\circle{0.2}}} \put(10,11){\makebox(0,0){\circle{0.2}}} \put(11,11){\makebox(0,0){\circle{0.2}}} \put(12,11){\makebox(0,0){\circle{0.2}}} \end{picture} \end{center} \caption{Parallelogram in proof of Lemma~\ref{lemma: nu en Lambda isolada}.} \label{fig:paral} \end{figure} \end{proof}
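As an illustration, take $a=4$ and $\lambda_i=13=3\cdot 4+1$, so that $x=3$, $y=1$ and $x-a=-1\leq y\leq 3=a-1$. The lemma predicts $\nu_i=(3-1+1)(1+1)=6$, and indeed the elements $\lambda_j$ with $13-\lambda_j\in\Lambda$ are exactly $0,4,5,8,9,13$.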
To approach the case in which $\lambda_i=ax+y=ax'+y'$ with $x\neq x'$, $y\neq y'$, we need a result from \cite{Farran:symmetric}. It says that if a numerical semigroup $\Lambda$ is such that its conductor $c$ is two times its genus, then for all $\lambda_i\in\Lambda$ such that $\lambda_i-c+1\in\Lambda$ we have $\nu_i=\lambda_i-c+1$. We already know that for the numerical semigroup generated by $a,a+1$ the conductor is two times the genus. Let us check that if $\lambda_i\in\Lambda^{x}\cap\Lambda^{x+1}$ then $\lambda_i-c+1\in\Lambda$. Indeed, suppose $\lambda_i\in\Lambda^{x}\cap\Lambda^{x+1}$. Since $\lambda_i\in\Lambda^{x+1}$, $\lambda_i=(x+1)a+y$ with $y\leq x+1$. Now, since $\lambda_i\in\Lambda^x$ and $\lambda_i=xa+(a+y)$, we have $a+y\leq x$. Thus, $\lambda_i-c+1=(x+1)a+y-a(a-1)+1= a(x-a+2)+y+1$ with $y+1\leq x-a+2$, and so $\lambda_i-c+1\in\Lambda$. Consequently, if $\lambda_i=ax+y=ax'+y'$ with $x\neq x'$, $y\neq y'$, then $\nu_i=\lambda_i-c+1$. The next theorem is a consequence of the former arguments. \begin{theorem} \label{th:nu} Let $\lambda_i\in\Lambda$ and suppose that the Euclidean division of $\lambda_i$ by $a$ has quotient $x$ and remainder $y$. Then, $$ \nu_i=\left\{\begin{array}{ll} (x-y+1)(y+1)&\mbox{ if }x-a\leq y \leq a-1,\\ \lambda_i-c+1&\mbox{ otherwise.}\\ \end{array}\right.\\ $$ \end{theorem}
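Continuing the example $a=4$: for $\lambda_i=20$ the Euclidean division by $4$ gives $x=5$ and $y=0$, which violates $x-a\leq y$, so the theorem gives $\nu_i=\lambda_i-c+1=20-12+1=9$; one can check directly that the nine elements $\lambda_j$ with $20-\lambda_j\in\Lambda$ are $0,4,5,8,10,12,15,16,20$.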
Once we have found a formula for the values in the sequence $\nu_i$, the next step is to find a formula for the values of the order bound $\delta_i=\min\{\nu_j:j>i\}$. Notice that this definition depends strongly on where the sequence $\nu_i$ fails to be increasing. From Theorem~\ref{th:nu} we deduce that $\nu_i$ is quadratic in $y$ for the indices $i$ corresponding to the values $\lambda_i=ax+y$ in $\Lambda^x$ with $x-a\leq y \leq a-1$, while it is increasing elsewhere. See Figure~\ref{fig:nuob4}, Figure~\ref{fig:nuob8} and Figure~\ref{fig:nuob16}. By analyzing the parabola we see that $\nu_i$ is increasing for $y\leq\frac{x}{2}$ and decreasing for $y\geq\frac{x}{2}$, being symmetric with respect to $y=\frac{x}{2}$. In the case when $x<a$, all values $ax+y\in\Lambda^x$ satisfy $x-a\leq y \leq a-1$. Then the first and last elements in $\Lambda^x$ (i.e., $y=0$ and $y=x$) have the same value of $\nu_i$, namely $x+1$, which is minimal. In the case when $x\geq a$, the first element (i.e., $y=x-a$) attains the minimal value of $\nu_i$, which is $ax-a^2+x+1$; the second and last elements (i.e., $y=x-a+1$ and $y=a-1$) have the same value of $\nu_i$, namely $a(x-a+2)$, which is minimal once the first element is taken away. Thus, \begin{itemize} \item If $x<a$ then \begin{itemize} \item $\Lambda^x\cap\Lambda^{x'}=\emptyset$ for any $x'\neq x$ and \begin{equation}\label{eq1}\min\{\nu_i:\lambda_i\in\Lambda^x\}=x+1,\end{equation} \item if $\lambda_i\in\Lambda^x$ and $\lambda_i\neq ax+x$ then $$\min\{\nu_j:j>i \mbox{ and }\lambda_j\in\Lambda^x\}=x+1.$$ \end{itemize} \item If $a\leq x< 2a$ then \begin{itemize} \item $\Lambda^x\cap\Lambda^{x+1}\neq\emptyset$, $\Lambda^x\setminus(\cup_{x'\neq x}\Lambda^{x'})\neq \emptyset$, and \begin{equation}\label{eq2}\begin{array}{l}\min\{\nu_i:\lambda_i=ax+y\in\Lambda^x,\ x-a\leq y \leq a-1\}=\\\phantom{mmm}(a+1)x-a^2+1,\end{array}\end{equation} \item if $\lambda_i=ax+y\in\Lambda^x$ and $x-a\leq y < a-1$ then \begin{equation}\label{eq3}\begin{array}{l}\min\{\nu_j:j>i,\lambda_j=ax+y\in\Lambda^x,\\\phantom{mmm}x-a\leq y \leq a-1\}= a(x-a+2),\end{array}\end{equation} \item $\min\{\nu_i:\lambda_i\in\Lambda^x\cap\Lambda^{x+1}\}=\min\{\nu_i:a(x+1)\leq\lambda_i\leq ax+x\}=\nu_{\lambda^{-1}(a(x+1))}=a(x+1)-a(a-1)+1$, \item if $\lambda_i\in\Lambda^x\cap\Lambda^{x+1}$ and $\lambda_i\neq ax+x$, then\\ $\min\{\nu_j:j>i\mbox{ and }\lambda_j\in\Lambda^x\cap\Lambda^{x+1}\}=\lambda_{i+1}-c+1=\lambda_i-c+2.$ \end{itemize} \item If $x\geq 2a$ then $\Lambda^x\setminus(\cup_{x'\neq x}\Lambda^{x'})= \emptyset$. \end{itemize} Finally, one can easily check the inequalities \begin{itemize} \item$\min\{\nu_i: \lambda_i\in\Lambda^{x-1}\cap\Lambda^{x}\}\leq\min\{\nu_i: \lambda_i\in{\Lambda}^x\setminus(\cup_{x'\neq x}\Lambda^{x'})\}\leq\min\{\nu_i: \lambda_i\in\Lambda^x\cap\Lambda^{x+1}\},$ \item$a(x-a+2)\leq\min\{\nu_i:\lambda_i\in\Lambda^x\cap\Lambda^{x+1}\},$ \item$\lambda_i-c+2\leq\min\{\nu_j: \lambda_j\in{\Lambda}^{x+1}\setminus(\cup_{x'\neq x+1}\Lambda^{x'})\}$ for any $\lambda_i\in\Lambda^x\cap\Lambda^{x+1}$ with $\lambda_i\neq ax+x$. \end{itemize} With these inequalities it is easy to prove the following theorem. We leave the details to the reader. \begin{theorem} \label{th:ob} Let $\lambda_i\in\Lambda$ and suppose that the Euclidean division of $\lambda_i$ by $a$ has quotient $x$ and remainder $y$. Then, $$ \delta_i=\left\{\begin{array}{ll} x+1&\mbox{ if } x< a\mbox{ and }y\neq x,\\ x+2&\mbox{ if } x< a\mbox{ and }y= x,\\ a(x-a+2)&\mbox{ if } x\geq a \mbox{ and }x-a\leq y <a-1,\\ \lambda_i-c+2&\mbox{ otherwise.}\\ \end{array}\right.\\ $$ \end{theorem} The graphs in Figure~\ref{fig:nuob4}, Figure~\ref{fig:nuob8}, and Figure~\ref{fig:nuob16} show the first values of $\nu_i$ and $\delta_i$ for the Hermitian codes over ${\mathbb F}_{4^2}$, ${\mathbb F}_{8^2}$, and ${\mathbb F}_{16^2}$, respectively. In fact, it is proven in \cite{YaKu,HoLiPe:agc} that for Hermitian codes the order bound on the minimum distance is exactly the true minimum distance of the codes.
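For instance, for $a=4$ and $\lambda_i=13$ (so $x=3$, $y=1$) the theorem gives $\delta_i=x+1=4$; the minimum is attained at $\lambda_j=15$, for which $\nu_j=(3-3+1)(3+1)=4$.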
\section{Minimizing redundancy} \label{sec:red_st} The decoding algorithm commonly used for one-point codes is an adaptation of the Berlekamp-Massey-Sakata algorithm \cite{Sakata} together with the majority voting algorithm of Feng-Rao-Duursma \cite{FeRa,Duursma:maj,HoLiPe:agc}. By analyzing majority voting, one realizes that only some of the parity checks are really necessary to perform correction of a given number of errors. New codes can be defined with just these few checks, yielding larger dimensions while keeping the same correction capability as standard codes \cite{FeRa:improved,HoLiPe:agc}. These codes are often called Feng-Rao improved codes. The redundancy of standard one-point codes correcting a given number $t$ of errors is $$r(t)=\lambda^{-1}(\max\{\lambda_i\in\Lambda:\nu_i < 2t+1\})+1,$$ where the enumeration $\lambda$ and the sequence $\nu$ are derived from the Weierstrass semigroup of the distinguished point. The redundancy of the Feng-Rao improved codes correcting the same number of errors is $$\tilde{r}(t)=\lvert \{i\in{\mathbb N}_0:\nu_i < 2t+1\}\rvert.$$ This section is devoted to finding explicit formulae for these redundancies in the case when the associated Weierstrass semigroup is generated by two consecutive integers $a,a+1$. Recall that this is the case of Hermitian codes. \begin{theorem} Let $a>1$. Then, {\tiny \begin{eqnarray*} r(t) &=& \left\{ \begin{array}{ll} t(2t+1)&\mbox{ if }t\leq a/2,\\ (a^2-a)/2+(a+1)\lfloor\frac{2t}{a+1}\rfloor&\mbox{ if }a/2< t<a(\lfloor\frac{2t}{a+1}\rfloor+1)/2,\\ (a^2-a)/2+2t&\mbox{ if }t\geq a(\lfloor\frac{2t}{a+1}\rfloor+1)/2.\\ \end{array}\right.\\ \tilde{r}(t) &=& \left\{ \begin{array}{l} t(2t+1)-\sum_{x'=\lceil2\sqrt{2t+1}-2\rceil}^{2t-1}(\lfloor \sqrt{{x'}^2+4x'-8t}\rfloor+\delta_{x't})\\\hfill{\mbox{ if }t\leq a/2,}\\ (a^2-a)/2+(a+1)\lfloor\frac{2t}{a+1}\rfloor\\\phantom{mm}-\sum_{x'=\lceil2\sqrt{2t+1}-2\rceil}^{a-2+\lfloor\frac{2t}{a+1}\rfloor} (\lfloor \sqrt{{x'}^2+4x'-8t}\rfloor+\delta_{x't})\\\hfill{\mbox{ if }a/2< t<a(\lfloor\frac{2t}{a+1}\rfloor+1)/2,}\\ (a^2-a)/2+2t-\sum_{x'=\lceil2\sqrt{2t+1}-2\rceil}^{a-1+\lfloor\frac{2t}{a+1}\rfloor} (\lfloor\sqrt{{x'}^2+4x'-8t}\rfloor+\delta_{x't})\\\hfill{\mbox{ if }a(\lfloor\frac{2t}{a+1}\rfloor+1)/2\leq t\leq \frac{a(a+1)}{2},}\\ (a^2-a)/2+2t\\\hfill{\mbox{ if }t> \frac{a(a+1)}{2}},\\ \end{array}\right.\\ \end{eqnarray*} } where {\tiny $$\delta_{x't}=\left\{\begin{array}{ll} 1&\mbox{ if }x'\equiv\lfloor \sqrt{{x'}^2+4x'-8t}\rfloor \mbox{ mod } 2,\\ 0&\mbox{ otherwise,}\\ \end{array}\right.\mbox{ that is, }\delta_{x't}=x'+\lfloor \sqrt{{x'}^2+4x'-8t}\rfloor+1 \mbox{ mod }2. $$} \end{theorem} \begin{proof} By the arguments in the previous section, the maximum non-gap whose $\nu$ value is bounded by a given constant must be either 1) the last element of a parabola, that is, $ax+x$ for some $x<a$ or $ax+a-1$ for some $x\geq a$; 2) the first element of a parabola for some $x\geq a$, that is, $ax+x-a$; or 3) some value in $\Lambda^{x'}\cap\Lambda^{x'+1}$ for some $x'$. In cases 1) and 2), $x$ is the largest integer such that $\Lambda^x\setminus(\cup_{x'\neq x}\Lambda^{x'})\neq \emptyset$ and such that the minimum $\nu$ value in $\Lambda^x\setminus(\cup_{x'\neq x}\Lambda^{x'})$ is at most $2t$; that is, the corresponding parabola is non-empty and its minimum value is at most $2t$. In case 3), if the largest such integer $x$ satisfies $x<2a-1$, then $x'=x$; otherwise, $x'\geq x$. By formulas (\ref{eq1}) and (\ref{eq2}), the set of all minimum $\nu$ values among all non-empty parabolas is $$\begin{array}{lll} M&=&\{\min\{\nu_i:\lambda_i\in\Lambda^{x'}\setminus(\cup_{x''\neq x'}\Lambda^{x''})\}: \\&&\hfill{\Lambda^{x'}\setminus(\cup_{x''\neq x'}\Lambda^{x''})\neq\emptyset\}}\\ &=&\{x'+1:0\leq x'\leq a-1\}\cup\{(a+1)x'-a^2+1: \\&&\hfill{a\leq x'<2a\}}\\ &=&\{z:1\leq z\leq a\}\cup\{z(a+1):1\leq z\leq a\}. \end{array}$$
Now, the maximum among these values which is at most $2t$ is $ \begin{array}{l} \max\{m\in M:m\leq 2t\}=\\\hfill{\left\{\begin{array}{ll} 2t &\mbox{ if } 2t\leq a,\\ \lfloor\frac{2t}{a+1}\rfloor(a+1) & \mbox{ if }a+1\leq 2t\leq a(a+1),\\ a(a+1) & \mbox{ if } 2t>a(a+1).\\ \end{array}\right.} \end{array} $ Therefore, $$x=\left\{\begin{array}{ll} 2t-1&\mbox{ if }2t\leq a,\\ \lfloor\frac{2t}{a+1}\rfloor+a-1&\mbox{ if }a+1\leq 2t\leq a(a+1),\\ 2a-1 & \mbox{ if } 2t>a(a+1).\\ \end{array}\right.$$ If $2t\leq a$ then $\Lambda^x\cap\Lambda^{x+1}=\emptyset$ and we are in case 1). Otherwise, if $2t> a$ then $\Lambda^x\cap\Lambda^{x+1}\neq\emptyset$. If $2t<a(x-a+2)$ then, by formulas (\ref{eq2}) and (\ref{eq3}), we are in case 2). Otherwise, we are either in case 1) or in case 3). Consequently, {\small \begin{eqnarray*} r(t) &=& \left\{ \begin{array}{l} \lambda^{-1}(ax+x)+1\\\hfill{\mbox{ if }2t\leq a,}\\ \lambda^{-1}(ax+x-a)+1\\\hfill{\mbox{ if }a< 2t<a(x-a+2),}\\ \lambda^{-1}(ax+a-1)+1+\lvert\{\lambda_i\in\cup_{x'> x}\Lambda^{x'}:\nu_i\leq 2t\}\rvert\\\hfill{\mbox{ if }2t\geq a(x-a+2).}\\ \end{array}\right.\\ \end{eqnarray*} } Replacing $x$ by its value and taking into consideration that the value $\nu_i$ increases by exactly one at each step within $\{\lambda_i\in\cup_{x'> x}\Lambda^{x'}:\nu_i\leq 2t\}$, we obtain {\small \begin{eqnarray*} r(t) &=&\left\{ \begin{array}{l} t(2t+1)\\\hfill{\mbox{ if }t\leq a/2,}\\ (a^2-a)/2+(a+1)\lfloor\frac{2t}{a+1}\rfloor\\\hfill{\phantom{mmmmmmm}\mbox{ if }a/2< t<a(\lfloor\frac{2t}{a+1}\rfloor+1)/2,}\\ (a^2-a)/2+2t\\\hfill{\mbox{ if }t\geq a(\lfloor\frac{2t}{a+1}\rfloor+1)/2.}\\ \end{array}\right.\\ \end{eqnarray*} } For the result on $\tilde{r}(t)$, recall that the parabola $(x-y+1)(y+1)$ gives the values of $\nu_i$ for the non-gaps $\lambda_i=ax+y$ with $x-a\leq y\leq a-1$. For fixed $x$, the maximum over $y$ of $(x-y+1)(y+1)$ is attained at $y=x/2$ and equals $x^2/4+x+1$. From the values $\lambda_i$ with $i< r(t)$ we want to take away all those whose corresponding $\nu_i$ is larger than $2t$. Our first aim is to identify which parabolas have nonempty intersection with the line at height $2t+1$, that is, for which $x^2/4+x+1\geq 2t+1$. These are exactly the parabolas with $x\geq \lceil2\sqrt{2t+1}-2\rceil$. Next, for each such parabola we need to count the integers $y$ for which the $\nu_i$ corresponding to $\lambda_i=ax+y$ is at least $2t+1$. Since the parabola $(x-y+1)(y+1)$ is symmetric with respect to $y=x/2$, there will be an odd number of such integers if $x$ is even and an even number if $x$ is odd. The real values $y$ where the parabola equals $2t+1$ are the solutions of $-y^2+xy+x+1=2t+1$, namely $\frac{x\pm\sqrt{x^2+4x-8t}}{2}$. Thus, the length of the real interval where the parabola is at least $2t+1$ is $\sqrt{x^2+4x-8t}$. From this interval we only want its integer values, and it is easy to check that their number is $\lfloor\sqrt{{x}^2+4x-8t}\rfloor+\delta_{xt}$. \end{proof}
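As a check, take $a=4$ and $t=2$, so that $t\leq a/2$: the theorem gives $r(2)=t(2t+1)=10$, and the only term in the sum for $\tilde{r}(2)$ is $x'=3$, contributing $\lfloor\sqrt{3^2+4\cdot 3-16}\rfloor+\delta_{x't}=2+0=2$, so $\tilde{r}(2)=8$: the improved code needs two checks fewer.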
\section{Minimizing redundancy for correcting generic errors} \label{sec:red_gen} In \cite{O'Sullivan:hermite-beyond} another improvement on one-point codes is described. Under the Berlekamp-Massey-Sakata algorithm with majority voting, an error vector whose weight is larger than half the minimum distance of the code is often correctable. In particular this occurs for {\it generic errors} (also called independent errors in \cite{Pellikaan:independent_errors,JeNiHo}), whose technical algebraic definition can be found in \cite{BrOS:AAECC}. Generic errors of weight $t$ can be a very large proportion of all possible errors of weight $t$, as in the case of the examples worked out in \cite{O'Sullivan:hermite-beyond}. This suggests designing a code to correct only generic errors of weight $t$ rather than all error words of weight $t$. Under this restriction, one obtains new codes with much larger dimension than that of standard one-point codes correcting the same number of errors. In \cite{BrOS:AAECC}, the redundancy of standard one-point codes correcting all generic errors of weight up to $t$ is shown to be $$r^*(t)=\lambda^{-1}(\max( \Lambda \setminus \{ \lambda_i + \lambda_j: i, j \geq t\}))+1.$$ However, taking full advantage of the Feng-Rao improvements due to the majority voting step \cite{FeRa:improved}, one can get optimal codes correcting all generic errors of weight up to $t$ with redundancy $$\tilde{r}^*(t)=\lvert \Lambda \setminus \{ \lambda_i + \lambda_j: i, j \geq t\}\rvert.$$ This section is devoted to finding explicit formulae for these redundancies. It is easy to check that if $t$ is such that $\lambda_t$ is larger than or equal to the conductor, then both $r^*(t)$ and $\tilde{r}^*(t)$ are equal to $\lambda_t+t$. If $c$ is the conductor and $g$ is the genus, $\lambda_t\geq c$ is equivalent to $t\geq c-g$. More specifically, for the semigroup generated by $a,a+1$ this is equivalent to $\lambda_t\in \Lambda^x$ for some $x\geq a-1$. In the next theorem we deal with the case when $\lambda_t$ is strictly less than the conductor, that is, when $\lambda_t\in\Lambda^x$ with $x<a-1$. \begin{theorem} \label{theorem:red-hermite} Suppose $t=\frac{x(x+1)}{2}+y$ with $0\leq y\leq x<a-1$. That is, $\lambda_t=xa+y$ with $0\leq y\leq x<a-1$. Then, \begin{eqnarray*} r^*(t)&=&\left\{ \begin{array}{l} 2x^2+x\\\hfill{\mbox{ if } 2x<a,\ y=0,}\\ 2x^2+3x+y+1\\\hfill{\mbox{ if } 2x<a,\ y>0,}\\ 2xa+y-\frac{a^2-3a}{2}\\\hfill{\mbox{ if } 2x\geq a,\ y>2x-a+1,}\\ 2xa+2y-\frac{a^2-a}{2}\\\hfill{\phantom{mmmmmm}\mbox{ if } 2x\geq a,\ y\leq 2x-a+1.}\\ \end{array} \right. \\ \tilde{r}^*(t)&=&\left\{ \begin{array}{l} 2x^2+x+3y\\\hfill{\mbox{ if } 2x<a,}\\ 2xa+3y-2x-\frac{a^2-3a}{2}-1\\\hfill{\mbox{ if } 2x\geq a,\ y>2x-a+1,}\\ 2xa+2y-\frac{a^2-a}{2}\\\hfill{\phantom{mmmmmm}\mbox{ if } 2x\geq a,\ y\leq2x-a+1.}\\ \end{array} \right. \end{eqnarray*} \end{theorem} \begin{proof} We have $\{\lambda_i+\lambda_j: i,j\geq t\}= \{l\in\Lambda^{2x}: l\geq 2xa+2y\} \cup \{l\in\Lambda^{2x+1}: l\geq (2x+1)a+y\} \cup(\cup_{x'\geq 2x+2}\Lambda^{x'}).$ Notice that $\{l\in\Lambda^{2x+1}: l< (2x+1)a+y\} \cap\Lambda^{2x+2}=\emptyset$ because $y<a$. So, $\Lambda\setminus\{\lambda_i+\lambda_j:i,j\geq t\}= \{l\in\Lambda: l<2xa+2y\} \sqcup (\{l\in\Lambda^{2x+1}: l<(2x+1)a+y\} \setminus\Lambda^{2x}).$ Let \begin{eqnarray*} A&=&\{l\in\Lambda: l<2xa+2y\},\\ B&=&\{l\in\Lambda^{2x+1}: l<(2x+1)a+y\} \setminus\Lambda^{2x}. \end{eqnarray*} If $2x<a$ then $\lvert A\rvert=\frac{2x(2x+1)}{2}+2y$ and $\lvert B\rvert=y$ because $\Lambda^{2x}\cap\Lambda^{2x+1}=\emptyset$. So, $ \begin{array}{lll} \tilde{r}^*(t)&=& \lvert\Lambda\setminus\{\lambda_i+\lambda_j:i,j\geq t\}\rvert\\ &=&\lvert A\rvert + \lvert B\rvert\\&=&2x^2+x+3y,\\ r^*(t)&=&\left\{\begin{array}{l} \frac{2x(2x+1)}{2}=2x^2+x\\\hfill{ \mbox{\ if\ } y=0,}\\ \frac{(2x+1)(2x+2)}{2}+y=2x^2+3x+y+1\\\hfill{ \mbox{\ if\ } y>0.}\\ \end{array}\right. \end{array} $ If $2x\geq a$, then all elements in $\Lambda^{2x}$ are larger than the conductor and $\lvert A\rvert=2xa+2y-g=2xa+2y-\frac{a^2-a}{2}.$ In order to compute $\lvert B\rvert$, notice that $\lvert \{l\in\Lambda^{2x+1}: l<(2x+1)a+y\}\rvert=y$, while $\lvert \Lambda^{2x}\cap\Lambda^{2x+1}\rvert =2x-a+1.$ Now, if $y>2x-a+1$, then $\Lambda^{2x}\cap\Lambda^{2x+1}\subseteq \{l\in\Lambda^{2x+1}: l<(2x+1)a+y\}$, so $\lvert B\rvert=y-2x+a-1$ and \begin{eqnarray*} \tilde{r}^*(t)&=&\lvert A\rvert+\lvert B\rvert=2xa+3y-2x-\frac{a^2-3a}{2}-1,\\ r^*(t)&=&2xa+y-\frac{a^2-3a}{2}. \end{eqnarray*} Otherwise, if $y\leq 2x-a+1$, then $\Lambda^{2x}\cap\Lambda^{2x+1}\supseteq \{l\in\Lambda^{2x+1}: l<(2x+1)a+y\}$, so $\lvert B\rvert=0$ and \begin{eqnarray*} \tilde{r}^*(t)&=&\lvert A\rvert=2xa+2y-\frac{a^2-a}{2},\\ r^*(t)&=&\lvert A\rvert=2xa+2y-\frac{a^2-a}{2}. \end{eqnarray*} \end{proof}
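To illustrate the theorem, take $a=4$ and $t=5$, so that $\lambda_5=10=2\cdot 4+2$, i.e., $x=y=2$. Here $2x\geq a$ and $y=2>2x-a+1=1$, so $r^*(5)=2xa+y-\frac{a^2-3a}{2}=16$ and $\tilde{r}^*(5)=2xa+3y-2x-\frac{a^2-3a}{2}-1=15$. Indeed, $\Lambda\setminus\{\lambda_i+\lambda_j:i,j\geq 5\}$ consists of the $14$ non-gaps smaller than $20$ together with $21$.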
\mut{ Let us see the behavior of $r(t)$, $\tilde{r}(t)$, $r^*(t)$ and $\tilde{r}^*(t)$ for some examples of Hermitian curves. Notice that in general $r(t)>\tilde{r}(t)>r^*(t)>\tilde{r}^*(t)$ and that the differences are largest for small values of $t$. \begin{center} \input{../GraphDirectory/red-hermite} \end{center} Note that the graphs of $r(t)$, $\tilde{r}(t)$, $r^*(t)$ and $\tilde{r}^*(t)$ seem to become more regular as the cardinality of the finite field increases. }
\begin{figure}
\begin{center}
% Plot data omitted: values of $\nu_i$ ($\circ$) and $\delta_i$ ($\times$) plotted against $i$, with reference marks at $i=\frac{a(a-1)}{2}$ and at height $a(a-1)$.
\end{center}
\caption{Graph of $\nu_i$ and $\delta_i$ for the Hermitian code over ${\mathbb F}_{4^2}$.}
\label{fig:nuob4}
\end{figure}
\begin{figure}
\begin{center}
% Plot data omitted: values of $\nu_i$ ($\circ$) and $\delta_i$ ($\times$) plotted against $i$, with reference marks at $i=\frac{a(a-1)}{2}$ and at height $a(a-1)$.
\end{center}
\caption{Graph of $\nu_i$ and $\delta_i$ for the Hermitian code over ${\mathbb F}_{8^2}$.}
\label{fig:nuob8}
\end{figure}
\begin{figure}
\begin{center}
% Plot data omitted: values of $\nu_i$ ($\circ$) and $\delta_i$ ($\times$) plotted against $i$, with reference marks at $i=\frac{a(a-1)}{2}$ and at height $a(a-1)$.
\end{center}
\caption{Graph of $\nu_i$ and $\delta_i$ for the Hermitian code over ${\mathbb F}_{16^2}$.}
\label{fig:nuob16}
\end{figure}
\put(121.532034,32.785515){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(122.284123,32.785515){\circle{3}} \put(122.284123,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(123.036213,44.066853){\circle{3}} \put(123.036213,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(123.788302,53.844011){\circle{3}} \put(123.788302,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(124.540391,62.116992){\circle{3}} \put(124.540391,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(125.292480,68.885794){\circle{3}} \put(125.292480,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(126.044569,74.150418){\circle{3}} \put(126.044569,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(126.796658,77.910864){\circle{3}} \put(126.796658,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(127.548747,80.167131){\circle{3}} \put(127.548747,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(128.300837,80.919221){\circle{3}} \put(128.300837,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(129.052926,80.167131){\circle{3}} \put(129.052926,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(129.805015,77.910864){\circle{3}} \put(129.805015,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(130.557104,74.150418){\circle{3}} \put(130.557104,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(131.309193,68.885794){\circle{3}} \put(131.309193,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(132.061282,62.116992){\circle{3}} \put(132.061282,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(132.813371,53.844011){\circle{3}} \put(132.813371,44.066853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(133.565461,44.066853){\circle{3}} \put(133.565461,44.818942){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(134.317550,44.818942){\circle{3}} \put(134.317550,45.571031){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(135.069639,45.571031){\circle{3}} \put(135.069639,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(135.821728,56.100279){\circle{3}} \put(135.821728,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(136.573817,65.125349){\circle{3}} \put(136.573817,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(137.325906,72.646240){\circle{3}} \put(137.325906,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(138.077995,78.662953){\circle{3}} \put(138.077995,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(138.830085,83.175488){\circle{3}} \put(138.830085,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(139.582174,86.183845){\circle{3}} \put(139.582174,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(140.334263,87.688023){\circle{3}} \put(140.334263,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(141.086352,87.688023){\circle{3}} \put(141.086352,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(141.838441,86.183845){\circle{3}} 
\put(141.838441,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(142.590530,83.175488){\circle{3}} \put(142.590530,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(143.342619,78.662953){\circle{3}} \put(143.342619,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(144.094709,72.646240){\circle{3}} \put(144.094709,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(144.846798,65.125349){\circle{3}} \put(144.846798,56.100279){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(145.598887,56.100279){\circle{3}} \put(145.598887,56.852368){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(146.350976,56.852368){\circle{3}} \put(146.350976,57.604457){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(147.103065,57.604457){\circle{3}} \put(147.103065,58.356546){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(147.855154,58.356546){\circle{3}} \put(147.855154,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(148.607243,68.133705){\circle{3}} \put(148.607243,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(149.359333,76.406686){\circle{3}} \put(149.359333,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(150.111422,83.175488){\circle{3}} \put(150.111422,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(150.863511,88.440112){\circle{3}} \put(150.863511,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(151.615600,92.200558){\circle{3}} \put(151.615600,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(152.367689,94.456825){\circle{3}} \put(152.367689,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(153.119778,95.208914){\circle{3}} \put(153.119778,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(153.871867,94.456825){\circle{3}} \put(153.871867,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(154.623957,92.200558){\circle{3}} \put(154.623957,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(155.376046,88.440112){\circle{3}} \put(155.376046,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(156.128135,83.175488){\circle{3}} \put(156.128135,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(156.880224,76.406686){\circle{3}} \put(156.880224,68.133705){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(157.632313,68.133705){\circle{3}} \put(157.632313,68.885794){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(158.384402,68.885794){\circle{3}} \put(158.384402,69.637883){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(159.136491,69.637883){\circle{3}} \put(159.136491,70.389973){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(159.888581,70.389973){\circle{3}} \put(159.888581,71.142062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(160.640670,71.142062){\circle{3}} \put(160.640670,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(161.392759,80.167131){\circle{3}} \put(161.392759,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(162.144848,87.688023){\circle{3}} 
\put(162.144848,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(162.896937,93.704736){\circle{3}} \put(162.896937,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(163.649026,98.217271){\circle{3}} \put(163.649026,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(164.401115,101.225627){\circle{3}} \put(164.401115,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(165.153205,102.729806){\circle{3}} \put(165.153205,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(165.905294,102.729806){\circle{3}} \put(165.905294,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(166.657383,101.225627){\circle{3}} \put(166.657383,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(167.409472,98.217271){\circle{3}} \put(167.409472,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(168.161561,93.704736){\circle{3}} \put(168.161561,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(168.913650,87.688023){\circle{3}} \put(168.913650,80.167131){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(169.665739,80.167131){\circle{3}} \put(169.665739,80.919221){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(170.417829,80.919221){\circle{3}} \put(170.417829,81.671310){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(171.169918,81.671310){\circle{3}} \put(171.169918,82.423399){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(171.922007,82.423399){\circle{3}} \put(171.922007,83.175488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(172.674096,83.175488){\circle{3}} \put(172.674096,83.927577){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(173.426185,83.927577){\circle{3}} \put(173.426185,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(174.178274,92.200558){\circle{3}} \put(174.178274,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(174.930363,98.969360){\circle{3}} \put(174.930363,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(175.682453,104.233984){\circle{3}} \put(175.682453,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(176.434542,107.994430){\circle{3}} \put(176.434542,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(177.186631,110.250697){\circle{3}} \put(177.186631,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(177.938720,111.002786){\circle{3}} \put(177.938720,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(178.690809,110.250697){\circle{3}} \put(178.690809,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(179.442898,107.994430){\circle{3}} \put(179.442898,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(180.194987,104.233984){\circle{3}} \put(180.194987,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(180.947077,98.969360){\circle{3}} \put(180.947077,92.200558){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(181.699166,92.200558){\circle{3}} \put(181.699166,92.952647){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(182.451255,92.952647){\circle{3}} 
\put(182.451255,93.704736){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(183.203344,93.704736){\circle{3}} \put(183.203344,94.456825){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(183.955433,94.456825){\circle{3}} \put(183.955433,95.208914){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(184.707522,95.208914){\circle{3}} \put(184.707522,95.961003){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(185.459611,95.961003){\circle{3}} \put(185.459611,96.713093){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(186.211701,96.713093){\circle{3}} \put(186.211701,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(186.963790,104.233984){\circle{3}} \put(186.963790,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(187.715879,110.250697){\circle{3}} \put(187.715879,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(188.467968,114.763232){\circle{3}} \put(188.467968,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(189.220057,117.771589){\circle{3}} \put(189.220057,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(189.972146,119.275767){\circle{3}} \put(189.972146,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(190.724235,119.275767){\circle{3}} \put(190.724235,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(191.476325,117.771589){\circle{3}} \put(191.476325,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(192.228414,114.763232){\circle{3}} \put(192.228414,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(192.980503,110.250697){\circle{3}} \put(192.980503,104.233984){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(193.732592,104.233984){\circle{3}} \put(193.732592,104.986073){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(194.484681,104.986073){\circle{3}} \put(194.484681,105.738162){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(195.236770,105.738162){\circle{3}} \put(195.236770,106.490251){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(195.988859,106.490251){\circle{3}} \put(195.988859,107.242341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(196.740949,107.242341){\circle{3}} \put(196.740949,107.994430){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(197.493038,107.994430){\circle{3}} \put(197.493038,108.746519){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(198.245127,108.746519){\circle{3}} \put(198.245127,109.498608){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(198.997216,109.498608){\circle{3}} \put(198.997216,116.267410){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(199.749305,116.267410){\circle{3}} \put(199.749305,116.267410){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(200.501394,121.532034){\circle{3}} \put(200.501394,116.267410){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(201.253483,125.292480){\circle{3}} \put(201.253483,116.267410){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(202.005573,127.548747){\circle{3}} \put(202.005573,116.267410){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(202.757662,128.300837){\circle{3}} \put(202.757662,116.267410){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(203.509751,127.548747){\circle{3}} \put(203.509751,116.267410){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(204.261840,125.292480){\circle{3}} \put(204.261840,116.267410){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(205.013929,121.532034){\circle{3}} \put(205.013929,116.267410){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(205.766018,116.267410){\circle{3}} \put(205.766018,117.019499){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(206.518107,117.019499){\circle{3}} \put(206.518107,117.771589){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(207.270197,117.771589){\circle{3}} \put(207.270197,118.523678){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(208.022286,118.523678){\circle{3}} \put(208.022286,119.275767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(208.774375,119.275767){\circle{3}} \put(208.774375,120.027856){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(209.526464,120.027856){\circle{3}} \put(209.526464,120.779945){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(210.278553,120.779945){\circle{3}} \put(210.278553,121.532034){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(211.030642,121.532034){\circle{3}} \put(211.030642,122.284123){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(211.782731,122.284123){\circle{3}} \put(211.782731,128.300837){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(212.534821,128.300837){\circle{3}} \put(212.534821,128.300837){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(213.286910,132.813371){\circle{3}} \put(213.286910,128.300837){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(214.038999,135.821728){\circle{3}} \put(214.038999,128.300837){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(214.791088,137.325906){\circle{3}} \put(214.791088,128.300837){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(215.543177,137.325906){\circle{3}} \put(215.543177,128.300837){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(216.295266,135.821728){\circle{3}} \put(216.295266,128.300837){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(217.047355,132.813371){\circle{3}} \put(217.047355,128.300837){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(217.799445,128.300837){\circle{3}} \put(217.799445,129.052926){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(218.551534,129.052926){\circle{3}} \put(218.551534,129.805015){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(219.303623,129.805015){\circle{3}} \put(219.303623,130.557104){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(220.055712,130.557104){\circle{3}} \put(220.055712,131.309193){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(220.807801,131.309193){\circle{3}} \put(220.807801,132.061282){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(221.559890,132.061282){\circle{3}} \put(221.559890,132.813371){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(222.311979,132.813371){\circle{3}} 
\put(222.311979,133.565461){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(223.064069,133.565461){\circle{3}} \put(223.064069,134.317550){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(223.816158,134.317550){\circle{3}} \put(223.816158,135.069639){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(224.568247,135.069639){\circle{3}} \put(224.568247,140.334263){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(225.320336,140.334263){\circle{3}} \put(225.320336,140.334263){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(226.072425,144.094709){\circle{3}} \put(226.072425,140.334263){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(226.824514,146.350976){\circle{3}} \put(226.824514,140.334263){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(227.576603,147.103065){\circle{3}} \put(227.576603,140.334263){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(228.328693,146.350976){\circle{3}} \put(228.328693,140.334263){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(229.080782,144.094709){\circle{3}} \put(229.080782,140.334263){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(229.832871,140.334263){\circle{3}} \put(229.832871,141.086352){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(230.584960,141.086352){\circle{3}} \put(230.584960,141.838441){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(231.337049,141.838441){\circle{3}} \put(231.337049,142.590530){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(232.089138,142.590530){\circle{3}} \put(232.089138,143.342619){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(232.841227,143.342619){\circle{3}} \put(232.841227,144.094709){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(233.593317,144.094709){\circle{3}} \put(233.593317,144.846798){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(234.345406,144.846798){\circle{3}} \put(234.345406,145.598887){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(235.097495,145.598887){\circle{3}} \put(235.097495,146.350976){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(235.849584,146.350976){\circle{3}} \put(235.849584,147.103065){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(236.601673,147.103065){\circle{3}} \put(236.601673,147.855154){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(237.353762,147.855154){\circle{3}} \put(237.353762,152.367689){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(238.105851,152.367689){\circle{3}} \put(238.105851,152.367689){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(238.857941,155.376046){\circle{3}} \put(238.857941,152.367689){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(239.610030,156.880224){\circle{3}} \put(239.610030,152.367689){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(240.362119,156.880224){\circle{3}} \put(240.362119,152.367689){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(241.114208,155.376046){\circle{3}} \put(241.114208,152.367689){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(241.866297,152.367689){\circle{3}} \put(241.866297,153.119778){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(242.618386,153.119778){\circle{3}} \put(242.618386,153.871867){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(243.370475,153.871867){\circle{3}} \put(243.370475,154.623957){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(244.122565,154.623957){\circle{3}} \put(244.122565,155.376046){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(244.874654,155.376046){\circle{3}} \put(244.874654,156.128135){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(245.626743,156.128135){\circle{3}} \put(245.626743,156.880224){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(246.378832,156.880224){\circle{3}} \put(246.378832,157.632313){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(247.130921,157.632313){\circle{3}} \put(247.130921,158.384402){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(247.883010,158.384402){\circle{3}} \put(247.883010,159.136491){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(248.635099,159.136491){\circle{3}} \put(248.635099,159.888581){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(249.387189,159.888581){\circle{3}} \put(249.387189,160.640670){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(250.139278,160.640670){\circle{3}} \put(250.139278,164.401115){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(250.891367,164.401115){\circle{3}} \put(250.891367,164.401115){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(251.643456,166.657383){\circle{3}} \put(251.643456,164.401115){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(252.395545,167.409472){\circle{3}} \put(252.395545,164.401115){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(253.147634,166.657383){\circle{3}} \put(253.147634,164.401115){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(253.899723,164.401115){\circle{3}} \put(253.899723,165.153205){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(254.651813,165.153205){\circle{3}} \put(254.651813,165.905294){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(255.403902,165.905294){\circle{3}} \put(255.403902,166.657383){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(256.155991,166.657383){\circle{3}} \put(256.155991,167.409472){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(256.908080,167.409472){\circle{3}} \put(256.908080,168.161561){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(257.660169,168.161561){\circle{3}} \put(257.660169,168.913650){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(258.412258,168.913650){\circle{3}} \put(258.412258,169.665739){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(259.164347,169.665739){\circle{3}} \put(259.164347,170.417829){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(259.916437,170.417829){\circle{3}} \put(259.916437,171.169918){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(260.668526,171.169918){\circle{3}} \put(260.668526,171.922007){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(261.420615,171.922007){\circle{3}} \put(261.420615,172.674096){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(262.172704,172.674096){\circle{3}} 
\put(262.172704,173.426185){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(262.924793,173.426185){\circle{3}} \put(262.924793,176.434542){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(263.676882,176.434542){\circle{3}} \put(263.676882,176.434542){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(264.428971,177.938720){\circle{3}} \put(264.428971,176.434542){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(265.181061,177.938720){\circle{3}} \put(265.181061,176.434542){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(265.933150,176.434542){\circle{3}} \put(265.933150,177.186631){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(266.685239,177.186631){\circle{3}} \put(266.685239,177.938720){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(267.437328,177.938720){\circle{3}} \put(267.437328,178.690809){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(268.189417,178.690809){\circle{3}} \put(268.189417,179.442898){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(268.941506,179.442898){\circle{3}} \put(268.941506,180.194987){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(269.693595,180.194987){\circle{3}} \put(269.693595,180.947077){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(270.445685,180.947077){\circle{3}} \put(270.445685,181.699166){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(271.197774,181.699166){\circle{3}} \put(271.197774,182.451255){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(271.949863,182.451255){\circle{3}} \put(271.949863,183.203344){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(272.701952,183.203344){\circle{3}} \put(272.701952,183.955433){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(273.454041,183.955433){\circle{3}} \put(273.454041,184.707522){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(274.206130,184.707522){\circle{3}} \put(274.206130,185.459611){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(274.958219,185.459611){\circle{3}} \put(274.958219,186.211701){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(275.710309,186.211701){\circle{3}} \put(275.710309,188.467968){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(276.462398,188.467968){\circle{3}} \put(276.462398,188.467968){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(277.214487,189.220057){\circle{3}} \put(277.214487,188.467968){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(277.966576,188.467968){\circle{3}} \put(277.966576,189.220057){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(278.718665,189.220057){\circle{3}} \put(278.718665,189.972146){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(279.470754,189.972146){\circle{3}} \put(279.470754,190.724235){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(280.222843,190.724235){\circle{3}} \put(280.222843,191.476325){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(280.974933,191.476325){\circle{3}} \put(280.974933,192.228414){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(281.727022,192.228414){\circle{3}} \put(281.727022,192.980503){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(282.479111,192.980503){\circle{3}} \put(282.479111,193.732592){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(283.231200,193.732592){\circle{3}} \put(283.231200,194.484681){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(283.983289,194.484681){\circle{3}} \put(283.983289,195.236770){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(284.735378,195.236770){\circle{3}} \put(284.735378,195.988859){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(285.487467,195.988859){\circle{3}} \put(285.487467,196.740949){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(286.239557,196.740949){\circle{3}} \put(286.239557,197.493038){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(286.991646,197.493038){\circle{3}} \put(286.991646,198.245127){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(287.743735,198.245127){\circle{3}} \put(287.743735,198.997216){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(288.495824,198.997216){\circle{3}} \put(288.495824,200.501394){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(289.247913,200.501394){\circle{3}} \put(289.247913,200.501394){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(290.000002,200.501394){\circle{3}} \put(290.000002,201.253483){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \end{picture} } \caption{Graph of $\nu_i$ and $\delta_i$ for the Hermitian code over ${\mathbb F}_{16^2}$.} \label{fig:nuob16} \end{figure} \begin{figure} \resizebox{\columnwidth}{!}{ \def\circle{3}{\circle{3}} \def\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}{\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \begin{picture}(300.000009,210.121055) \put(20.000000,20.000000){\vector(0,1){190.121055}} \put(20.000000,20.000000){\vector(1,0){275.000009}} \put(297.500009,20.000000){\makebox(0,0){\phantom{mm}\small{$i$}}} \put(110.060528,18.000000){\line(0,1){4}} \put(110.060528,10.000000){\makebox(0,0){\tiny{$\frac{a(a-1)}{2}$}}} \put(18.000000,200.121055){\line(1,0){4}} \put(0.000000,200.121055){\makebox(0,0){\tiny{$a(a-1)$}}} \put(150.000005,0.000000){\makebox(0,0){\shortstack[c]{\phantom{mmmmm}$\circ$ $\nu_i$\\\phantom{mmmmm}$\times$ $\delta_i$}}}\put(20.000000,20.181574){\circle{3}} \put(20.000000,20.363147){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(20.181574,20.363147){\circle{3}} \put(20.181574,20.363147){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(20.363147,20.363147){\circle{3}} \put(20.363147,20.544721){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(20.544721,20.544721){\circle{3}} \put(20.544721,20.544721){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(20.726295,20.726295){\circle{3}} \put(20.726295,20.544721){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(20.907868,20.544721){\circle{3}} \put(20.907868,20.726295){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(21.089442,20.726295){\circle{3}} \put(21.089442,20.726295){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(21.271016,21.089442){\circle{3}} \put(21.271016,20.726295){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(21.452589,21.089442){\circle{3}} \put(21.452589,20.726295){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(21.634163,20.726295){\circle{3}} 
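The circles in Figure~\ref{fig:nuob16}, and in the analogous plot below for general $a$, can be recomputed directly from the membership test of Lemma~\ref{lemma:GaRo}. The following Python sketch is an illustration added here, not part of the original construction; it assumes the usual definition $\nu_i=\#\{j\in{\mathbb N}_0 : \lambda_i-\lambda_j\in\Lambda\}$ and prints the first values of the $\nu$-sequence of the semigroup generated by $a$ and $a+1$. The sequence $\delta_i$, plotted as crosses, is not reproduced here.
\begin{verbatim}
# A minimal sketch (ours, not from the original construction):
# recompute the Feng-Rao numbers nu_i for the semigroup
# generated by a and a+1, using the remainder/quotient test:
# n is in the semigroup iff (n mod a) <= (n div a).

def nu_sequence(a, count):
    """Return [nu_0, ..., nu_{count-1}] for the semigroup <a, a+1>."""
    def in_semigroup(n):
        return n >= 0 and n % a <= n // a

    # lam[i] = lambda_i, the increasing enumeration of the semigroup.
    lam = [n for n in range(a * (a + 1) + 2 * count)
           if in_semigroup(n)][:count]

    # nu_i = #{ j : lambda_i - lambda_j is in the semigroup };
    # only j <= i can contribute, since lambda is increasing.
    return [sum(1 for lj in lam[:i + 1] if in_semigroup(lam[i] - lj))
            for i in range(count)]

# Example: semigroup generated by 4 and 5.
# Prints [1, 2, 2, 3, 4, 3, 4, 6, 6, 4].
print(nu_sequence(4, 10))
\end{verbatim}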
\begin{figure}
\resizebox{\columnwidth}{!}{
\begin{picture}(300.000009,210.121055)
\put(20.000000,20.000000){\vector(0,1){190.121055}}
\put(20.000000,20.000000){\vector(1,0){275.000009}}
\put(297.500009,20.000000){\makebox(0,0){\phantom{mm}\small{$i$}}}
\put(110.060528,18.000000){\line(0,1){4}}
\put(110.060528,10.000000){\makebox(0,0){\tiny{$\frac{a(a-1)}{2}$}}}
\put(18.000000,200.121055){\line(1,0){4}}
\put(0.000000,200.121055){\makebox(0,0){\tiny{$a(a-1)$}}}
\put(150.000005,0.000000){\makebox(0,0){\shortstack[c]{\phantom{mmmmm}$\circ$ $\nu_i$\\\phantom{mmmmm}$\times$ $\delta_i$}}}
% [Plot data for the circles ($\nu_i$) and crosses ($\delta_i$) elided.]
\put(60.854070,23.813047){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(61.035644,35.433760){\circle{3}} \put(61.035644,23.813047){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(61.217217,33.073302){\circle{3}} \put(61.217217,23.813047){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(61.398791,30.349698){\circle{3}} \put(61.398791,23.813047){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(61.580365,27.262946){\circle{3}} \put(61.580365,23.813047){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(61.761938,23.813047){\circle{3}} \put(61.761938,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(61.943512,23.994620){\circle{3}} \put(61.943512,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(62.125085,27.626093){\circle{3}} \put(62.125085,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(62.306659,30.894419){\circle{3}} \put(62.306659,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(62.488233,33.799597){\circle{3}} \put(62.488233,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(62.669806,36.341628){\circle{3}} \put(62.669806,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(62.851380,38.520512){\circle{3}} \put(62.851380,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(63.032954,40.336248){\circle{3}} \put(63.032954,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(63.214527,41.788837){\circle{3}} \put(63.214527,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(63.396101,42.878279){\circle{3}} \put(63.396101,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(63.577675,43.604574){\circle{3}} \put(63.577675,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(63.759248,43.967721){\circle{3}} \put(63.759248,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(63.940822,43.967721){\circle{3}} \put(63.940822,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(64.122396,43.604574){\circle{3}} \put(64.122396,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(64.303969,42.878279){\circle{3}} \put(64.303969,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(64.485543,41.788837){\circle{3}} \put(64.485543,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(64.667116,40.336248){\circle{3}} \put(64.667116,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(64.848690,38.520512){\circle{3}} \put(64.848690,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(65.030264,36.341628){\circle{3}} \put(65.030264,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(65.211837,33.799597){\circle{3}} \put(65.211837,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(65.393411,30.894419){\circle{3}} \put(65.393411,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(65.574985,27.626093){\circle{3}} \put(65.574985,23.994620){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(65.756558,23.994620){\circle{3}} 
\put(65.756558,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(65.938132,24.176194){\circle{3}} \put(65.938132,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(66.119706,27.989240){\circle{3}} \put(66.119706,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(66.301279,31.439140){\circle{3}} \put(66.301279,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(66.482853,34.525892){\circle{3}} \put(66.482853,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(66.664427,37.249496){\circle{3}} \put(66.664427,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(66.846000,39.609954){\circle{3}} \put(66.846000,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(67.027574,41.607264){\circle{3}} \put(67.027574,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(67.209148,43.241426){\circle{3}} \put(67.209148,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(67.390721,44.512442){\circle{3}} \put(67.390721,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(67.572295,45.420310){\circle{3}} \put(67.572295,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(67.753868,45.965031){\circle{3}} \put(67.753868,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(67.935442,46.146605){\circle{3}} \put(67.935442,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(68.117016,45.965031){\circle{3}} \put(68.117016,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(68.298589,45.420310){\circle{3}} \put(68.298589,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(68.480163,44.512442){\circle{3}} \put(68.480163,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(68.661737,43.241426){\circle{3}} \put(68.661737,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(68.843310,41.607264){\circle{3}} \put(68.843310,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(69.024884,39.609954){\circle{3}} \put(69.024884,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(69.206458,37.249496){\circle{3}} \put(69.206458,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(69.388031,34.525892){\circle{3}} \put(69.388031,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(69.569605,31.439140){\circle{3}} \put(69.569605,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(69.751179,27.989240){\circle{3}} \put(69.751179,24.176194){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(69.932752,24.176194){\circle{3}} \put(69.932752,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(70.114326,24.357767){\circle{3}} \put(70.114326,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(70.295899,28.352388){\circle{3}} \put(70.295899,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(70.477473,31.983861){\circle{3}} \put(70.477473,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(70.659047,35.252186){\circle{3}} 
\put(70.659047,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(70.840620,38.157364){\circle{3}} \put(70.840620,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(71.022194,40.699395){\circle{3}} \put(71.022194,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(71.203768,42.878279){\circle{3}} \put(71.203768,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(71.385341,44.694016){\circle{3}} \put(71.385341,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(71.566915,46.146605){\circle{3}} \put(71.566915,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(71.748489,47.236047){\circle{3}} \put(71.748489,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(71.930062,47.962341){\circle{3}} \put(71.930062,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(72.111636,48.325489){\circle{3}} \put(72.111636,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(72.293210,48.325489){\circle{3}} \put(72.293210,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(72.474783,47.962341){\circle{3}} \put(72.474783,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(72.656357,47.236047){\circle{3}} \put(72.656357,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(72.837930,46.146605){\circle{3}} \put(72.837930,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(73.019504,44.694016){\circle{3}} \put(73.019504,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(73.201078,42.878279){\circle{3}} \put(73.201078,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(73.382651,40.699395){\circle{3}} \put(73.382651,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(73.564225,38.157364){\circle{3}} \put(73.564225,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(73.745799,35.252186){\circle{3}} \put(73.745799,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(73.927372,31.983861){\circle{3}} \put(73.927372,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(74.108946,28.352388){\circle{3}} \put(74.108946,24.357767){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(74.290520,24.357767){\circle{3}} \put(74.290520,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(74.472093,24.539341){\circle{3}} \put(74.472093,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(74.653667,28.715535){\circle{3}} \put(74.653667,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(74.835241,32.528581){\circle{3}} \put(74.835241,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(75.016814,35.978481){\circle{3}} \put(75.016814,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(75.198388,39.065233){\circle{3}} \put(75.198388,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(75.379962,41.788837){\circle{3}} \put(75.379962,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(75.561535,44.149295){\circle{3}} 
\put(75.561535,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(75.743109,46.146605){\circle{3}} \put(75.743109,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(75.924682,47.780768){\circle{3}} \put(75.924682,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(76.106256,49.051783){\circle{3}} \put(76.106256,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(76.287830,49.959651){\circle{3}} \put(76.287830,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(76.469403,50.504372){\circle{3}} \put(76.469403,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(76.650977,50.685946){\circle{3}} \put(76.650977,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(76.832551,50.504372){\circle{3}} \put(76.832551,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(77.014124,49.959651){\circle{3}} \put(77.014124,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(77.195698,49.051783){\circle{3}} \put(77.195698,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(77.377272,47.780768){\circle{3}} \put(77.377272,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(77.558845,46.146605){\circle{3}} \put(77.558845,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(77.740419,44.149295){\circle{3}} \put(77.740419,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(77.921993,41.788837){\circle{3}} \put(77.921993,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(78.103566,39.065233){\circle{3}} \put(78.103566,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(78.285140,35.978481){\circle{3}} \put(78.285140,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(78.466713,32.528581){\circle{3}} \put(78.466713,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(78.648287,28.715535){\circle{3}} \put(78.648287,24.539341){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(78.829861,24.539341){\circle{3}} \put(78.829861,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(79.011434,24.720915){\circle{3}} \put(79.011434,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(79.193008,29.078682){\circle{3}} \put(79.193008,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(79.374582,33.073302){\circle{3}} \put(79.374582,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(79.556155,36.704775){\circle{3}} \put(79.556155,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(79.737729,39.973101){\circle{3}} \put(79.737729,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(79.919303,42.878279){\circle{3}} \put(79.919303,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(80.100876,45.420310){\circle{3}} \put(80.100876,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(80.282450,47.599194){\circle{3}} \put(80.282450,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(80.464024,49.414930){\circle{3}} 
\put(80.464024,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(80.645597,50.867520){\circle{3}} \put(80.645597,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(80.827171,51.956961){\circle{3}} \put(80.827171,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(81.008744,52.683256){\circle{3}} \put(81.008744,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(81.190318,53.046403){\circle{3}} \put(81.190318,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(81.371892,53.046403){\circle{3}} \put(81.371892,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(81.553465,52.683256){\circle{3}} \put(81.553465,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(81.735039,51.956961){\circle{3}} \put(81.735039,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(81.916613,50.867520){\circle{3}} \put(81.916613,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(82.098186,49.414930){\circle{3}} \put(82.098186,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(82.279760,47.599194){\circle{3}} \put(82.279760,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(82.461334,45.420310){\circle{3}} \put(82.461334,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(82.642907,42.878279){\circle{3}} \put(82.642907,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(82.824481,39.973101){\circle{3}} \put(82.824481,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(83.006055,36.704775){\circle{3}} \put(83.006055,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(83.187628,33.073302){\circle{3}} \put(83.187628,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(83.369202,29.078682){\circle{3}} \put(83.369202,24.720915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(83.550775,24.720915){\circle{3}} \put(83.550775,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(83.732349,24.902488){\circle{3}} \put(83.732349,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(83.913923,29.441830){\circle{3}} \put(83.913923,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(84.095496,33.618023){\circle{3}} \put(84.095496,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(84.277070,37.431070){\circle{3}} \put(84.277070,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(84.458644,40.880969){\circle{3}} \put(84.458644,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(84.640217,43.967721){\circle{3}} \put(84.640217,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(84.821791,46.691326){\circle{3}} \put(84.821791,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(85.003365,49.051783){\circle{3}} \put(85.003365,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(85.184938,51.049093){\circle{3}} \put(85.184938,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(85.366512,52.683256){\circle{3}} 
\put(85.366512,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(85.548086,53.954271){\circle{3}} \put(85.548086,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(85.729659,54.862140){\circle{3}} \put(85.729659,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(85.911233,55.406861){\circle{3}} \put(85.911233,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(86.092807,55.588434){\circle{3}} \put(86.092807,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(86.274380,55.406861){\circle{3}} \put(86.274380,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(86.455954,54.862140){\circle{3}} \put(86.455954,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(86.637527,53.954271){\circle{3}} \put(86.637527,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(86.819101,52.683256){\circle{3}} \put(86.819101,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(87.000675,51.049093){\circle{3}} \put(87.000675,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(87.182248,49.051783){\circle{3}} \put(87.182248,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(87.363822,46.691326){\circle{3}} \put(87.363822,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(87.545396,43.967721){\circle{3}} \put(87.545396,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(87.726969,40.880969){\circle{3}} \put(87.726969,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(87.908543,37.431070){\circle{3}} \put(87.908543,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(88.090117,33.618023){\circle{3}} \put(88.090117,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(88.271690,29.441830){\circle{3}} \put(88.271690,24.902488){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(88.453264,24.902488){\circle{3}} \put(88.453264,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(88.634838,25.084062){\circle{3}} \put(88.634838,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(88.816411,29.804977){\circle{3}} \put(88.816411,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(88.997985,34.162744){\circle{3}} \put(88.997985,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(89.179558,38.157364){\circle{3}} \put(89.179558,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(89.361132,41.788837){\circle{3}} \put(89.361132,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(89.542706,45.057163){\circle{3}} \put(89.542706,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(89.724279,47.962341){\circle{3}} \put(89.724279,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(89.905853,50.504372){\circle{3}} \put(89.905853,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(90.087427,52.683256){\circle{3}} \put(90.087427,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(90.269000,54.498992){\circle{3}} 
\put(90.269000,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(90.450574,55.951582){\circle{3}} \put(90.450574,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(90.632148,57.041023){\circle{3}} \put(90.632148,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(90.813721,57.767318){\circle{3}} \put(90.813721,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(90.995295,58.130465){\circle{3}} \put(90.995295,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(91.176869,58.130465){\circle{3}} \put(91.176869,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(91.358442,57.767318){\circle{3}} \put(91.358442,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(91.540016,57.041023){\circle{3}} \put(91.540016,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(91.721589,55.951582){\circle{3}} \put(91.721589,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(91.903163,54.498992){\circle{3}} \put(91.903163,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(92.084737,52.683256){\circle{3}} \put(92.084737,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(92.266310,50.504372){\circle{3}} \put(92.266310,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(92.447884,47.962341){\circle{3}} \put(92.447884,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(92.629458,45.057163){\circle{3}} \put(92.629458,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(92.811031,41.788837){\circle{3}} \put(92.811031,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(92.992605,38.157364){\circle{3}} \put(92.992605,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(93.174179,34.162744){\circle{3}} \put(93.174179,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(93.355752,29.804977){\circle{3}} \put(93.355752,25.084062){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(93.537326,25.084062){\circle{3}} \put(93.537326,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(93.718900,25.265636){\circle{3}} \put(93.718900,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(93.900473,30.168124){\circle{3}} \put(93.900473,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(94.082047,34.707465){\circle{3}} \put(94.082047,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(94.263621,38.883659){\circle{3}} \put(94.263621,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(94.445194,42.696706){\circle{3}} \put(94.445194,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(94.626768,46.146605){\circle{3}} \put(94.626768,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(94.808341,49.233357){\circle{3}} \put(94.808341,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(94.989915,51.956961){\circle{3}} \put(94.989915,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(95.171489,54.317419){\circle{3}} 
\put(95.171489,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(95.353062,56.314729){\circle{3}} \put(95.353062,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(95.534636,57.948892){\circle{3}} \put(95.534636,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(95.716210,59.219907){\circle{3}} \put(95.716210,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(95.897783,60.127775){\circle{3}} \put(95.897783,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(96.079357,60.672496){\circle{3}} \put(96.079357,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(96.260931,60.854070){\circle{3}} \put(96.260931,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(96.442504,60.672496){\circle{3}} \put(96.442504,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(96.624078,60.127775){\circle{3}} \put(96.624078,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(96.805652,59.219907){\circle{3}} \put(96.805652,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(96.987225,57.948892){\circle{3}} \put(96.987225,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(97.168799,56.314729){\circle{3}} \put(97.168799,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(97.350372,54.317419){\circle{3}} \put(97.350372,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(97.531946,51.956961){\circle{3}} \put(97.531946,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(97.713520,49.233357){\circle{3}} \put(97.713520,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(97.895093,46.146605){\circle{3}} \put(97.895093,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(98.076667,42.696706){\circle{3}} \put(98.076667,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(98.258241,38.883659){\circle{3}} \put(98.258241,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(98.439814,34.707465){\circle{3}} \put(98.439814,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(98.621388,30.168124){\circle{3}} \put(98.621388,25.265636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(98.802962,25.265636){\circle{3}} \put(98.802962,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(98.984535,25.447209){\circle{3}} \put(98.984535,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(99.166109,30.531271){\circle{3}} \put(99.166109,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(99.347683,35.252186){\circle{3}} \put(99.347683,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(99.529256,39.609954){\circle{3}} \put(99.529256,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(99.710830,43.604574){\circle{3}} \put(99.710830,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(99.892403,47.236047){\circle{3}} \put(99.892403,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(100.073977,50.504372){\circle{3}} 
\put(100.073977,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(100.255551,53.409551){\circle{3}} \put(100.255551,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(100.437124,55.951582){\circle{3}} \put(100.437124,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(100.618698,58.130465){\circle{3}} \put(100.618698,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(100.800272,59.946202){\circle{3}} \put(100.800272,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(100.981845,61.398791){\circle{3}} \put(100.981845,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(101.163419,62.488233){\circle{3}} \put(101.163419,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(101.344993,63.214527){\circle{3}} \put(101.344993,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(101.526566,63.577675){\circle{3}} \put(101.526566,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(101.708140,63.577675){\circle{3}} \put(101.708140,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(101.889714,63.214527){\circle{3}} \put(101.889714,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(102.071287,62.488233){\circle{3}} \put(102.071287,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(102.252861,61.398791){\circle{3}} \put(102.252861,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(102.434435,59.946202){\circle{3}} \put(102.434435,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(102.616008,58.130465){\circle{3}} \put(102.616008,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(102.797582,55.951582){\circle{3}} \put(102.797582,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(102.979155,53.409551){\circle{3}} \put(102.979155,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(103.160729,50.504372){\circle{3}} \put(103.160729,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(103.342303,47.236047){\circle{3}} \put(103.342303,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(103.523876,43.604574){\circle{3}} \put(103.523876,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(103.705450,39.609954){\circle{3}} \put(103.705450,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(103.887024,35.252186){\circle{3}} \put(103.887024,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(104.068597,30.531271){\circle{3}} \put(104.068597,25.447209){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(104.250171,25.447209){\circle{3}} \put(104.250171,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(104.431745,25.628783){\circle{3}} \put(104.431745,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(104.613318,30.894419){\circle{3}} \put(104.613318,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(104.794892,35.796907){\circle{3}} \put(104.794892,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(104.976466,40.336248){\circle{3}} 
\put(104.976466,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(105.158039,44.512442){\circle{3}} \put(105.158039,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(105.339613,48.325489){\circle{3}} \put(105.339613,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(105.521186,51.775388){\circle{3}} \put(105.521186,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(105.702760,54.862140){\circle{3}} \put(105.702760,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(105.884334,57.585744){\circle{3}} \put(105.884334,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(106.065907,59.946202){\circle{3}} \put(106.065907,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(106.247481,61.943512){\circle{3}} \put(106.247481,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(106.429055,63.577675){\circle{3}} \put(106.429055,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(106.610628,64.848690){\circle{3}} \put(106.610628,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(106.792202,65.756558){\circle{3}} \put(106.792202,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(106.973776,66.301279){\circle{3}} \put(106.973776,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(107.155349,66.482853){\circle{3}} \put(107.155349,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(107.336923,66.301279){\circle{3}} \put(107.336923,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(107.518497,65.756558){\circle{3}} \put(107.518497,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(107.700070,64.848690){\circle{3}} \put(107.700070,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(107.881644,63.577675){\circle{3}} \put(107.881644,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(108.063217,61.943512){\circle{3}} \put(108.063217,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(108.244791,59.946202){\circle{3}} \put(108.244791,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(108.426365,57.585744){\circle{3}} \put(108.426365,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(108.607938,54.862140){\circle{3}} \put(108.607938,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(108.789512,51.775388){\circle{3}} \put(108.789512,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(108.971086,48.325489){\circle{3}} \put(108.971086,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(109.152659,44.512442){\circle{3}} \put(109.152659,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(109.334233,40.336248){\circle{3}} \put(109.334233,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(109.515807,35.796907){\circle{3}} \put(109.515807,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(109.697380,30.894419){\circle{3}} \put(109.697380,25.628783){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(109.878954,25.628783){\circle{3}} 
\put(109.878954,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(110.060528,25.810357){\circle{3}} \put(110.060528,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(110.242101,31.257566){\circle{3}} \put(110.242101,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(110.423675,36.341628){\circle{3}} \put(110.423675,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(110.605248,41.062543){\circle{3}} \put(110.605248,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(110.786822,45.420310){\circle{3}} \put(110.786822,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(110.968396,49.414930){\circle{3}} \put(110.968396,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(111.149969,53.046403){\circle{3}} \put(111.149969,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(111.331543,56.314729){\circle{3}} \put(111.331543,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(111.513117,59.219907){\circle{3}} \put(111.513117,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(111.694690,61.761938){\circle{3}} \put(111.694690,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(111.876264,63.940822){\circle{3}} \put(111.876264,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(112.057838,65.756558){\circle{3}} \put(112.057838,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(112.239411,67.209148){\circle{3}} \put(112.239411,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(112.420985,68.298589){\circle{3}} \put(112.420985,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(112.602559,69.024884){\circle{3}} \put(112.602559,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(112.784132,69.388031){\circle{3}} \put(112.784132,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(112.965706,69.388031){\circle{3}} \put(112.965706,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(113.147280,69.024884){\circle{3}} \put(113.147280,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(113.328853,68.298589){\circle{3}} \put(113.328853,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(113.510427,67.209148){\circle{3}} \put(113.510427,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(113.692000,65.756558){\circle{3}} \put(113.692000,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(113.873574,63.940822){\circle{3}} \put(113.873574,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(114.055148,61.761938){\circle{3}} \put(114.055148,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(114.236721,59.219907){\circle{3}} \put(114.236721,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(114.418295,56.314729){\circle{3}} \put(114.418295,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(114.599869,53.046403){\circle{3}} \put(114.599869,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(114.781442,49.414930){\circle{3}} 
\put(114.781442,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(114.963016,45.420310){\circle{3}} \put(114.963016,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(115.144590,41.062543){\circle{3}} \put(115.144590,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(115.326163,36.341628){\circle{3}} \put(115.326163,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(115.507737,31.257566){\circle{3}} \put(115.507737,25.810357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(115.689311,25.810357){\circle{3}} \put(115.689311,25.991930){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(115.870884,25.991930){\circle{3}} \put(115.870884,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(116.052458,31.620713){\circle{3}} \put(116.052458,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(116.234031,36.886349){\circle{3}} \put(116.234031,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(116.415605,41.788837){\circle{3}} \put(116.415605,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(116.597179,46.328178){\circle{3}} \put(116.597179,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(116.778752,50.504372){\circle{3}} \put(116.778752,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(116.960326,54.317419){\circle{3}} \put(116.960326,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(117.141900,57.767318){\circle{3}} \put(117.141900,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(117.323473,60.854070){\circle{3}} \put(117.323473,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(117.505047,63.577675){\circle{3}} \put(117.505047,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(117.686621,65.938132){\circle{3}} \put(117.686621,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(117.868194,67.935442){\circle{3}} \put(117.868194,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(118.049768,69.569605){\circle{3}} \put(118.049768,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(118.231342,70.840620){\circle{3}} \put(118.231342,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(118.412915,71.748489){\circle{3}} \put(118.412915,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(118.594489,72.293210){\circle{3}} \put(118.594489,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(118.776062,72.474783){\circle{3}} \put(118.776062,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(118.957636,72.293210){\circle{3}} \put(118.957636,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(119.139210,71.748489){\circle{3}} \put(119.139210,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(119.320783,70.840620){\circle{3}} \put(119.320783,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(119.502357,69.569605){\circle{3}} \put(119.502357,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(119.683931,67.935442){\circle{3}} 
\put(119.683931,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(119.865504,65.938132){\circle{3}} \put(119.865504,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(120.047078,63.577675){\circle{3}} \put(120.047078,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(120.228652,60.854070){\circle{3}} \put(120.228652,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(120.410225,57.767318){\circle{3}} \put(120.410225,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(120.591799,54.317419){\circle{3}} \put(120.591799,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(120.773373,50.504372){\circle{3}} \put(120.773373,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(120.954946,46.328178){\circle{3}} \put(120.954946,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(121.136520,41.788837){\circle{3}} \put(121.136520,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(121.318094,36.886349){\circle{3}} \put(121.318094,31.620713){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(121.499667,31.620713){\circle{3}} \put(121.499667,31.802287){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(121.681241,31.802287){\circle{3}} \put(121.681241,31.983861){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(121.862814,31.983861){\circle{3}} \put(121.862814,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(122.044388,37.431070){\circle{3}} \put(122.044388,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(122.225962,42.515132){\circle{3}} \put(122.225962,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(122.407535,47.236047){\circle{3}} \put(122.407535,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(122.589109,51.593814){\circle{3}} \put(122.589109,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(122.770683,55.588434){\circle{3}} \put(122.770683,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(122.952256,59.219907){\circle{3}} \put(122.952256,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(123.133830,62.488233){\circle{3}} \put(123.133830,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(123.315404,65.393411){\circle{3}} \put(123.315404,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(123.496977,67.935442){\circle{3}} \put(123.496977,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(123.678551,70.114326){\circle{3}} \put(123.678551,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(123.860125,71.930062){\circle{3}} \put(123.860125,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(124.041698,73.382651){\circle{3}} \put(124.041698,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(124.223272,74.472093){\circle{3}} \put(124.223272,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(124.404845,75.198388){\circle{3}} \put(124.404845,37.431070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(124.586419,75.561535){\circle{3}} 
% Plot point data elided: this span of the source consisted of several hundred
% machine-generated \put commands placing, point by point, the two data series
% of the figure (marked with circles and crosses). Only this placeholder is
% kept; no caption, axis labels, or other recoverable text appeared in the span.
\put(226.812381,137.115001){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(226.993954,137.115001){\circle{3}} \put(226.993954,137.296574){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(227.175528,137.296574){\circle{3}} \put(227.175528,137.478148){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(227.357102,137.478148){\circle{3}} \put(227.357102,137.659721){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(227.538675,137.659721){\circle{3}} \put(227.538675,137.841295){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(227.720249,137.841295){\circle{3}} \put(227.720249,138.022869){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(227.901823,138.022869){\circle{3}} \put(227.901823,138.204442){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(228.083396,138.204442){\circle{3}} \put(228.083396,138.386016){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(228.264970,138.386016){\circle{3}} \put(228.264970,138.567590){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(228.446544,138.567590){\circle{3}} \put(228.446544,138.749163){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(228.628117,138.749163){\circle{3}} \put(228.628117,138.930737){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(228.809691,138.930737){\circle{3}} \put(228.809691,139.112311){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(228.991265,139.112311){\circle{3}} \put(228.991265,139.293884){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(229.172838,139.293884){\circle{3}} \put(229.172838,139.475458){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(229.354412,139.475458){\circle{3}} \put(229.354412,139.657032){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(229.535985,139.657032){\circle{3}} \put(229.535985,139.838605){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(229.717559,139.838605){\circle{3}} \put(229.717559,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(229.899133,142.017489){\circle{3}} \put(229.899133,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(230.080706,143.833225){\circle{3}} \put(230.080706,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(230.262280,145.285815){\circle{3}} \put(230.262280,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(230.443854,146.375256){\circle{3}} \put(230.443854,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(230.625427,147.101551){\circle{3}} \put(230.625427,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(230.807001,147.464698){\circle{3}} \put(230.807001,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(230.988575,147.464698){\circle{3}} \put(230.988575,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(231.170148,147.101551){\circle{3}} \put(231.170148,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(231.351722,146.375256){\circle{3}} \put(231.351722,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(231.533296,145.285815){\circle{3}} \put(231.533296,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(231.714869,143.833225){\circle{3}} \put(231.714869,142.017489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(231.896443,142.017489){\circle{3}} \put(231.896443,142.199063){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(232.078017,142.199063){\circle{3}} \put(232.078017,142.380636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(232.259590,142.380636){\circle{3}} \put(232.259590,142.562210){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(232.441164,142.562210){\circle{3}} \put(232.441164,142.743784){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(232.622737,142.743784){\circle{3}} \put(232.622737,142.925357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(232.804311,142.925357){\circle{3}} \put(232.804311,143.106931){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(232.985885,143.106931){\circle{3}} \put(232.985885,143.288504){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(233.167458,143.288504){\circle{3}} \put(233.167458,143.470078){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(233.349032,143.470078){\circle{3}} \put(233.349032,143.651652){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(233.530606,143.651652){\circle{3}} \put(233.530606,143.833225){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(233.712179,143.833225){\circle{3}} \put(233.712179,144.014799){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(233.893753,144.014799){\circle{3}} \put(233.893753,144.196373){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(234.075327,144.196373){\circle{3}} \put(234.075327,144.377946){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(234.256900,144.377946){\circle{3}} \put(234.256900,144.559520){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(234.438474,144.559520){\circle{3}} \put(234.438474,144.741094){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(234.620048,144.741094){\circle{3}} \put(234.620048,144.922667){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(234.801621,144.922667){\circle{3}} \put(234.801621,145.104241){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(234.983195,145.104241){\circle{3}} \put(234.983195,145.285815){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(235.164768,145.285815){\circle{3}} \put(235.164768,145.467388){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(235.346342,145.467388){\circle{3}} \put(235.346342,145.648962){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(235.527916,145.648962){\circle{3}} \put(235.527916,145.830535){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(235.709489,145.830535){\circle{3}} \put(235.709489,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(235.891063,147.827846){\circle{3}} \put(235.891063,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(236.072637,149.462008){\circle{3}} \put(236.072637,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(236.254210,150.733024){\circle{3}} \put(236.254210,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(236.435784,151.640892){\circle{3}} 
\put(236.435784,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(236.617358,152.185613){\circle{3}} \put(236.617358,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(236.798931,152.367187){\circle{3}} \put(236.798931,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(236.980505,152.185613){\circle{3}} \put(236.980505,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(237.162079,151.640892){\circle{3}} \put(237.162079,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(237.343652,150.733024){\circle{3}} \put(237.343652,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(237.525226,149.462008){\circle{3}} \put(237.525226,147.827846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(237.706799,147.827846){\circle{3}} \put(237.706799,148.009419){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(237.888373,148.009419){\circle{3}} \put(237.888373,148.190993){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(238.069947,148.190993){\circle{3}} \put(238.069947,148.372567){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(238.251520,148.372567){\circle{3}} \put(238.251520,148.554140){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(238.433094,148.554140){\circle{3}} \put(238.433094,148.735714){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(238.614668,148.735714){\circle{3}} \put(238.614668,148.917287){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(238.796241,148.917287){\circle{3}} \put(238.796241,149.098861){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(238.977815,149.098861){\circle{3}} \put(238.977815,149.280435){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(239.159389,149.280435){\circle{3}} \put(239.159389,149.462008){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(239.340962,149.462008){\circle{3}} \put(239.340962,149.643582){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(239.522536,149.643582){\circle{3}} \put(239.522536,149.825156){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(239.704110,149.825156){\circle{3}} \put(239.704110,150.006729){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(239.885683,150.006729){\circle{3}} \put(239.885683,150.188303){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(240.067257,150.188303){\circle{3}} \put(240.067257,150.369877){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(240.248831,150.369877){\circle{3}} \put(240.248831,150.551450){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(240.430404,150.551450){\circle{3}} \put(240.430404,150.733024){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(240.611978,150.733024){\circle{3}} \put(240.611978,150.914598){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(240.793551,150.914598){\circle{3}} \put(240.793551,151.096171){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(240.975125,151.096171){\circle{3}} \put(240.975125,151.277745){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(241.156699,151.277745){\circle{3}} \put(241.156699,151.459318){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(241.338272,151.459318){\circle{3}} \put(241.338272,151.640892){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(241.519846,151.640892){\circle{3}} \put(241.519846,151.822466){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(241.701420,151.822466){\circle{3}} \put(241.701420,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(241.882993,153.638202){\circle{3}} \put(241.882993,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(242.064567,155.090791){\circle{3}} \put(242.064567,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(242.246141,156.180233){\circle{3}} \put(242.246141,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(242.427714,156.906528){\circle{3}} \put(242.427714,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(242.609288,157.269675){\circle{3}} \put(242.609288,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(242.790862,157.269675){\circle{3}} \put(242.790862,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(242.972435,156.906528){\circle{3}} \put(242.972435,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(243.154009,156.180233){\circle{3}} \put(243.154009,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(243.335582,155.090791){\circle{3}} \put(243.335582,153.638202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(243.517156,153.638202){\circle{3}} \put(243.517156,153.819776){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(243.698730,153.819776){\circle{3}} \put(243.698730,154.001349){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(243.880303,154.001349){\circle{3}} \put(243.880303,154.182923){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(244.061877,154.182923){\circle{3}} \put(244.061877,154.364497){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(244.243451,154.364497){\circle{3}} \put(244.243451,154.546070){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(244.425024,154.546070){\circle{3}} \put(244.425024,154.727644){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(244.606598,154.727644){\circle{3}} \put(244.606598,154.909218){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(244.788172,154.909218){\circle{3}} \put(244.788172,155.090791){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(244.969745,155.090791){\circle{3}} \put(244.969745,155.272365){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(245.151319,155.272365){\circle{3}} \put(245.151319,155.453939){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(245.332893,155.453939){\circle{3}} \put(245.332893,155.635512){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(245.514466,155.635512){\circle{3}} \put(245.514466,155.817086){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(245.696040,155.817086){\circle{3}} \put(245.696040,155.998660){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(245.877613,155.998660){\circle{3}} \put(245.877613,156.180233){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(246.059187,156.180233){\circle{3}} 
\put(246.059187,156.361807){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(246.240761,156.361807){\circle{3}} \put(246.240761,156.543380){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(246.422334,156.543380){\circle{3}} \put(246.422334,156.724954){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(246.603908,156.724954){\circle{3}} \put(246.603908,156.906528){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(246.785482,156.906528){\circle{3}} \put(246.785482,157.088101){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(246.967055,157.088101){\circle{3}} \put(246.967055,157.269675){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(247.148629,157.269675){\circle{3}} \put(247.148629,157.451249){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(247.330203,157.451249){\circle{3}} \put(247.330203,157.632822){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(247.511776,157.632822){\circle{3}} \put(247.511776,157.814396){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(247.693350,157.814396){\circle{3}} \put(247.693350,159.448559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(247.874924,159.448559){\circle{3}} \put(247.874924,159.448559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(248.056497,160.719574){\circle{3}} \put(248.056497,159.448559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(248.238071,161.627443){\circle{3}} \put(248.238071,159.448559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(248.419645,162.172163){\circle{3}} \put(248.419645,159.448559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(248.601218,162.353737){\circle{3}} \put(248.601218,159.448559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(248.782792,162.172163){\circle{3}} \put(248.782792,159.448559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(248.964365,161.627443){\circle{3}} \put(248.964365,159.448559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(249.145939,160.719574){\circle{3}} \put(249.145939,159.448559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(249.327513,159.448559){\circle{3}} \put(249.327513,159.630132){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(249.509086,159.630132){\circle{3}} \put(249.509086,159.811706){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(249.690660,159.811706){\circle{3}} \put(249.690660,159.993280){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(249.872234,159.993280){\circle{3}} \put(249.872234,160.174853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(250.053807,160.174853){\circle{3}} \put(250.053807,160.356427){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(250.235381,160.356427){\circle{3}} \put(250.235381,160.538001){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(250.416955,160.538001){\circle{3}} \put(250.416955,160.719574){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(250.598528,160.719574){\circle{3}} \put(250.598528,160.901148){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(250.780102,160.901148){\circle{3}} \put(250.780102,161.082722){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(250.961676,161.082722){\circle{3}} \put(250.961676,161.264295){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(251.143249,161.264295){\circle{3}} \put(251.143249,161.445869){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(251.324823,161.445869){\circle{3}} \put(251.324823,161.627443){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(251.506396,161.627443){\circle{3}} \put(251.506396,161.809016){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(251.687970,161.809016){\circle{3}} \put(251.687970,161.990590){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(251.869544,161.990590){\circle{3}} \put(251.869544,162.172163){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(252.051117,162.172163){\circle{3}} \put(252.051117,162.353737){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(252.232691,162.353737){\circle{3}} \put(252.232691,162.535311){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(252.414265,162.535311){\circle{3}} \put(252.414265,162.716884){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(252.595838,162.716884){\circle{3}} \put(252.595838,162.898458){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(252.777412,162.898458){\circle{3}} \put(252.777412,163.080032){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(252.958986,163.080032){\circle{3}} \put(252.958986,163.261605){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(253.140559,163.261605){\circle{3}} \put(253.140559,163.443179){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(253.322133,163.443179){\circle{3}} \put(253.322133,163.624753){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(253.503707,163.624753){\circle{3}} \put(253.503707,163.806326){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(253.685280,163.806326){\circle{3}} \put(253.685280,165.258915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(253.866854,165.258915){\circle{3}} \put(253.866854,165.258915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(254.048427,166.348357){\circle{3}} \put(254.048427,165.258915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(254.230001,167.074652){\circle{3}} \put(254.230001,165.258915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(254.411575,167.437799){\circle{3}} \put(254.411575,165.258915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(254.593148,167.437799){\circle{3}} \put(254.593148,165.258915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(254.774722,167.074652){\circle{3}} \put(254.774722,165.258915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(254.956296,166.348357){\circle{3}} \put(254.956296,165.258915){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(255.137869,165.258915){\circle{3}} \put(255.137869,165.440489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(255.319443,165.440489){\circle{3}} \put(255.319443,165.622063){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(255.501017,165.622063){\circle{3}} \put(255.501017,165.803636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(255.682590,165.803636){\circle{3}} 
\put(255.682590,165.985210){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(255.864164,165.985210){\circle{3}} \put(255.864164,166.166784){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(256.045738,166.166784){\circle{3}} \put(256.045738,166.348357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(256.227311,166.348357){\circle{3}} \put(256.227311,166.529931){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(256.408885,166.529931){\circle{3}} \put(256.408885,166.711505){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(256.590458,166.711505){\circle{3}} \put(256.590458,166.893078){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(256.772032,166.893078){\circle{3}} \put(256.772032,167.074652){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(256.953606,167.074652){\circle{3}} \put(256.953606,167.256226){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(257.135179,167.256226){\circle{3}} \put(257.135179,167.437799){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(257.316753,167.437799){\circle{3}} \put(257.316753,167.619373){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(257.498327,167.619373){\circle{3}} \put(257.498327,167.800946){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(257.679900,167.800946){\circle{3}} \put(257.679900,167.982520){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(257.861474,167.982520){\circle{3}} \put(257.861474,168.164094){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(258.043048,168.164094){\circle{3}} \put(258.043048,168.345667){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(258.224621,168.345667){\circle{3}} \put(258.224621,168.527241){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(258.406195,168.527241){\circle{3}} \put(258.406195,168.708815){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(258.587769,168.708815){\circle{3}} \put(258.587769,168.890388){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(258.769342,168.890388){\circle{3}} \put(258.769342,169.071962){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(258.950916,169.071962){\circle{3}} \put(258.950916,169.253536){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(259.132490,169.253536){\circle{3}} \put(259.132490,169.435109){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(259.314063,169.435109){\circle{3}} \put(259.314063,169.616683){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(259.495637,169.616683){\circle{3}} \put(259.495637,169.798257){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(259.677210,169.798257){\circle{3}} \put(259.677210,171.069272){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(259.858784,171.069272){\circle{3}} \put(259.858784,171.069272){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(260.040358,171.977140){\circle{3}} \put(260.040358,171.069272){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(260.221931,172.521861){\circle{3}} \put(260.221931,171.069272){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(260.403505,172.703435){\circle{3}} \put(260.403505,171.069272){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(260.585079,172.521861){\circle{3}} \put(260.585079,171.069272){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(260.766652,171.977140){\circle{3}} \put(260.766652,171.069272){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(260.948226,171.069272){\circle{3}} \put(260.948226,171.250846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(261.129800,171.250846){\circle{3}} \put(261.129800,171.432419){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(261.311373,171.432419){\circle{3}} \put(261.311373,171.613993){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(261.492947,171.613993){\circle{3}} \put(261.492947,171.795567){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(261.674521,171.795567){\circle{3}} \put(261.674521,171.977140){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(261.856094,171.977140){\circle{3}} \put(261.856094,172.158714){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(262.037668,172.158714){\circle{3}} \put(262.037668,172.340288){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(262.219241,172.340288){\circle{3}} \put(262.219241,172.521861){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(262.400815,172.521861){\circle{3}} \put(262.400815,172.703435){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(262.582389,172.703435){\circle{3}} \put(262.582389,172.885008){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(262.763962,172.885008){\circle{3}} \put(262.763962,173.066582){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(262.945536,173.066582){\circle{3}} \put(262.945536,173.248156){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(263.127110,173.248156){\circle{3}} \put(263.127110,173.429729){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(263.308683,173.429729){\circle{3}} \put(263.308683,173.611303){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(263.490257,173.611303){\circle{3}} \put(263.490257,173.792877){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(263.671831,173.792877){\circle{3}} \put(263.671831,173.974450){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(263.853404,173.974450){\circle{3}} \put(263.853404,174.156024){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(264.034978,174.156024){\circle{3}} \put(264.034978,174.337598){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(264.216552,174.337598){\circle{3}} \put(264.216552,174.519171){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(264.398125,174.519171){\circle{3}} \put(264.398125,174.700745){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(264.579699,174.700745){\circle{3}} \put(264.579699,174.882319){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(264.761272,174.882319){\circle{3}} \put(264.761272,175.063892){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(264.942846,175.063892){\circle{3}} \put(264.942846,175.245466){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(265.124420,175.245466){\circle{3}} \put(265.124420,175.427040){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(265.305993,175.427040){\circle{3}} 
\put(265.305993,175.608613){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(265.487567,175.608613){\circle{3}} \put(265.487567,175.790187){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(265.669141,175.790187){\circle{3}} \put(265.669141,176.879629){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(265.850714,176.879629){\circle{3}} \put(265.850714,176.879629){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(266.032288,177.605923){\circle{3}} \put(266.032288,176.879629){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(266.213862,177.969071){\circle{3}} \put(266.213862,176.879629){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(266.395435,177.969071){\circle{3}} \put(266.395435,176.879629){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(266.577009,177.605923){\circle{3}} \put(266.577009,176.879629){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(266.758583,176.879629){\circle{3}} \put(266.758583,177.061202){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(266.940156,177.061202){\circle{3}} \put(266.940156,177.242776){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(267.121730,177.242776){\circle{3}} \put(267.121730,177.424350){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(267.303304,177.424350){\circle{3}} \put(267.303304,177.605923){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(267.484877,177.605923){\circle{3}} \put(267.484877,177.787497){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(267.666451,177.787497){\circle{3}} \put(267.666451,177.969071){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(267.848024,177.969071){\circle{3}} \put(267.848024,178.150644){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(268.029598,178.150644){\circle{3}} \put(268.029598,178.332218){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(268.211172,178.332218){\circle{3}} \put(268.211172,178.513791){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(268.392745,178.513791){\circle{3}} \put(268.392745,178.695365){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(268.574319,178.695365){\circle{3}} \put(268.574319,178.876939){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(268.755893,178.876939){\circle{3}} \put(268.755893,179.058512){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(268.937466,179.058512){\circle{3}} \put(268.937466,179.240086){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(269.119040,179.240086){\circle{3}} \put(269.119040,179.421660){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(269.300614,179.421660){\circle{3}} \put(269.300614,179.603233){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(269.482187,179.603233){\circle{3}} \put(269.482187,179.784807){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(269.663761,179.784807){\circle{3}} \put(269.663761,179.966381){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(269.845335,179.966381){\circle{3}} \put(269.845335,180.147954){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(270.026908,180.147954){\circle{3}} \put(270.026908,180.329528){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(270.208482,180.329528){\circle{3}} \put(270.208482,180.511102){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(270.390055,180.511102){\circle{3}} \put(270.390055,180.692675){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(270.571629,180.692675){\circle{3}} \put(270.571629,180.874249){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(270.753203,180.874249){\circle{3}} \put(270.753203,181.055822){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(270.934776,181.055822){\circle{3}} \put(270.934776,181.237396){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(271.116350,181.237396){\circle{3}} \put(271.116350,181.418970){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(271.297924,181.418970){\circle{3}} \put(271.297924,181.600543){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(271.479497,181.600543){\circle{3}} \put(271.479497,181.782117){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(271.661071,181.782117){\circle{3}} \put(271.661071,182.689985){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(271.842645,182.689985){\circle{3}} \put(271.842645,182.689985){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(272.024218,183.234706){\circle{3}} \put(272.024218,182.689985){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(272.205792,183.416280){\circle{3}} \put(272.205792,182.689985){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(272.387366,183.234706){\circle{3}} \put(272.387366,182.689985){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(272.568939,182.689985){\circle{3}} \put(272.568939,182.871559){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(272.750513,182.871559){\circle{3}} \put(272.750513,183.053133){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(272.932086,183.053133){\circle{3}} \put(272.932086,183.234706){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(273.113660,183.234706){\circle{3}} \put(273.113660,183.416280){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(273.295234,183.416280){\circle{3}} \put(273.295234,183.597853){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(273.476807,183.597853){\circle{3}} \put(273.476807,183.779427){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(273.658381,183.779427){\circle{3}} \put(273.658381,183.961001){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(273.839955,183.961001){\circle{3}} \put(273.839955,184.142574){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(274.021528,184.142574){\circle{3}} \put(274.021528,184.324148){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(274.203102,184.324148){\circle{3}} \put(274.203102,184.505722){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(274.384676,184.505722){\circle{3}} \put(274.384676,184.687295){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(274.566249,184.687295){\circle{3}} \put(274.566249,184.868869){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(274.747823,184.868869){\circle{3}} \put(274.747823,185.050443){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(274.929397,185.050443){\circle{3}} 
\put(274.929397,185.232016){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(275.110970,185.232016){\circle{3}} \put(275.110970,185.413590){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(275.292544,185.413590){\circle{3}} \put(275.292544,185.595164){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(275.474118,185.595164){\circle{3}} \put(275.474118,185.776737){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(275.655691,185.776737){\circle{3}} \put(275.655691,185.958311){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(275.837265,185.958311){\circle{3}} \put(275.837265,186.139885){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(276.018838,186.139885){\circle{3}} \put(276.018838,186.321458){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(276.200412,186.321458){\circle{3}} \put(276.200412,186.503032){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(276.381986,186.503032){\circle{3}} \put(276.381986,186.684605){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(276.563559,186.684605){\circle{3}} \put(276.563559,186.866179){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(276.745133,186.866179){\circle{3}} \put(276.745133,187.047753){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(276.926707,187.047753){\circle{3}} \put(276.926707,187.229326){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(277.108280,187.229326){\circle{3}} \put(277.108280,187.410900){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(277.289854,187.410900){\circle{3}} \put(277.289854,187.592474){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(277.471428,187.592474){\circle{3}} \put(277.471428,187.774047){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(277.653001,187.774047){\circle{3}} \put(277.653001,188.500342){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(277.834575,188.500342){\circle{3}} \put(277.834575,188.500342){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(278.016149,188.863489){\circle{3}} \put(278.016149,188.500342){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(278.197722,188.863489){\circle{3}} \put(278.197722,188.500342){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(278.379296,188.500342){\circle{3}} \put(278.379296,188.681916){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(278.560869,188.681916){\circle{3}} \put(278.560869,188.863489){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(278.742443,188.863489){\circle{3}} \put(278.742443,189.045063){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(278.924017,189.045063){\circle{3}} \put(278.924017,189.226636){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(279.105590,189.226636){\circle{3}} \put(279.105590,189.408210){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(279.287164,189.408210){\circle{3}} \put(279.287164,189.589784){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(279.468738,189.589784){\circle{3}} \put(279.468738,189.771357){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(279.650311,189.771357){\circle{3}} \put(279.650311,189.952931){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(279.831885,189.952931){\circle{3}} \put(279.831885,190.134505){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(280.013459,190.134505){\circle{3}} \put(280.013459,190.316078){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(280.195032,190.316078){\circle{3}} \put(280.195032,190.497652){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(280.376606,190.497652){\circle{3}} \put(280.376606,190.679226){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(280.558180,190.679226){\circle{3}} \put(280.558180,190.860799){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(280.739753,190.860799){\circle{3}} \put(280.739753,191.042373){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(280.921327,191.042373){\circle{3}} \put(280.921327,191.223947){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(281.102900,191.223947){\circle{3}} \put(281.102900,191.405520){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(281.284474,191.405520){\circle{3}} \put(281.284474,191.587094){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(281.466048,191.587094){\circle{3}} \put(281.466048,191.768667){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(281.647621,191.768667){\circle{3}} \put(281.647621,191.950241){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(281.829195,191.950241){\circle{3}} \put(281.829195,192.131815){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(282.010769,192.131815){\circle{3}} \put(282.010769,192.313388){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(282.192342,192.313388){\circle{3}} \put(282.192342,192.494962){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(282.373916,192.494962){\circle{3}} \put(282.373916,192.676536){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(282.555490,192.676536){\circle{3}} \put(282.555490,192.858109){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(282.737063,192.858109){\circle{3}} \put(282.737063,193.039683){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(282.918637,193.039683){\circle{3}} \put(282.918637,193.221257){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(283.100211,193.221257){\circle{3}} \put(283.100211,193.402830){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(283.281784,193.402830){\circle{3}} \put(283.281784,193.584404){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(283.463358,193.584404){\circle{3}} \put(283.463358,193.765978){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(283.644931,193.765978){\circle{3}} \put(283.644931,194.310699){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(283.826505,194.310699){\circle{3}} \put(283.826505,194.310699){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(284.008079,194.492272){\circle{3}} \put(284.008079,194.310699){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(284.189652,194.310699){\circle{3}} \put(284.189652,194.492272){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(284.371226,194.492272){\circle{3}} \put(284.371226,194.673846){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(284.552800,194.673846){\circle{3}} 
\put(284.552800,194.855419){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(284.734373,194.855419){\circle{3}} \put(284.734373,195.036993){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(284.915947,195.036993){\circle{3}} \put(284.915947,195.218567){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(285.097521,195.218567){\circle{3}} \put(285.097521,195.400140){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(285.279094,195.400140){\circle{3}} \put(285.279094,195.581714){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(285.460668,195.581714){\circle{3}} \put(285.460668,195.763288){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(285.642242,195.763288){\circle{3}} \put(285.642242,195.944861){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(285.823815,195.944861){\circle{3}} \put(285.823815,196.126435){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(286.005389,196.126435){\circle{3}} \put(286.005389,196.308009){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(286.186963,196.308009){\circle{3}} \put(286.186963,196.489582){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(286.368536,196.489582){\circle{3}} \put(286.368536,196.671156){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(286.550110,196.671156){\circle{3}} \put(286.550110,196.852730){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(286.731683,196.852730){\circle{3}} \put(286.731683,197.034303){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(286.913257,197.034303){\circle{3}} \put(286.913257,197.215877){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(287.094831,197.215877){\circle{3}} \put(287.094831,197.397450){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(287.276404,197.397450){\circle{3}} \put(287.276404,197.579024){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(287.457978,197.579024){\circle{3}} \put(287.457978,197.760598){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(287.639552,197.760598){\circle{3}} \put(287.639552,197.942171){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(287.821125,197.942171){\circle{3}} \put(287.821125,198.123745){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(288.002699,198.123745){\circle{3}} \put(288.002699,198.305319){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(288.184273,198.305319){\circle{3}} \put(288.184273,198.486892){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(288.365846,198.486892){\circle{3}} \put(288.365846,198.668466){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(288.547420,198.668466){\circle{3}} \put(288.547420,198.850040){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(288.728994,198.850040){\circle{3}} \put(288.728994,199.031613){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(288.910567,199.031613){\circle{3}} \put(288.910567,199.213187){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(289.092141,199.213187){\circle{3}} \put(289.092141,199.394761){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(289.273714,199.394761){\circle{3}} \put(289.273714,199.576334){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} 
\put(289.455288,199.576334){\circle{3}} \put(289.455288,199.757908){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(289.636862,199.757908){\circle{3}} \put(289.636862,200.121055){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(289.818435,200.121055){\circle{3}} \put(289.818435,200.121055){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \put(290.000009,200.121055){\circle{3}} \put(290.000009,200.302629){\makebox(0,0){\resizebox{5\unitlength}{5\unitlength}{$\times$}}} \end{picture} } \caption{Graph of $\nu_i$ and $\delta_i$ for the Hermitian code over ${\mathbb F}_{32^2}$.} \label{fig:nuob32} \end{figure} \def$'${$'$}
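The data plotted in Fig.~\ref{fig:nuob32} can be regenerated with a few lines of code. The following Python sketch (ours, added for illustration) computes the sequence $\nu_i=\#\{(j,k)\in{\mathbb N}_0^2:\lambda_j+\lambda_k=\lambda_i\}$ for the semigroup generated by $32$ and $33$, testing membership via Lemma~\ref{lemma:GaRo}; taking $\delta_i$ to be the order bound $\min_{j>i}\nu_j$ is our assumption about the second curve of the figure.
\begin{verbatim}
q = 32                       # Hermitian curve over F_{q^2}
c = q * (q - 1)              # conductor of the semigroup <q, q+1>
limit = 3 * c                # enumerate well past the conductor

# Lemma: n lies in <q, q+1> iff its remainder mod q is at most its quotient.
member = [x % q <= x // q for x in range(limit + 1)]
lam = [x for x in range(limit + 1) if member[x]]   # enumeration lambda_i

def nu(i):
    # Number of ordered pairs (j, k) with lambda_j + lambda_k = lambda_i.
    return sum(member[lam[i] - l] for l in lam if l <= lam[i])

nus = [nu(i) for i in range(2 * c)]
# Assumed reading of the second curve: the order bound min_{j>i} nu_j.
# (nu_j is eventually increasing, so the truncation is harmless for i
# well below the end of the computed range.)
deltas = [min(nus[i + 1:]) for i in range(2 * c - 1)]
\end{verbatim}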
\section{Introduction} Frustrated quantum antiferromagnetism can give rise to extraordinarily rich physics \cite{Reviews}. It is not only supposed to play a crucial role in the properties of high-$T_c$ superconductors, but it is also an interesting subject in its own right, and has potential applications in topological quantum information processing and storage \cite{NayakEtAl08}. Apart from the possibility of retaining a classical N\'eel-ordered (staggered or spiral) spin configuration with long-range order, the spins of a quantum antiferromagnet can also form singlet pairs (``valence bonds''), such that the spin-rotation symmetry is not broken. These singlets either order spatially to form a valence bond solid, or the system's state is a superposition of many singlet coverings, breaking neither translational nor spin-rotational symmetry. The latter is termed a resonating valence bond spin liquid (SL). A gapped SL with an exponential decay of spin correlations is expected to exhibit non-local topological order that is immune to local perturbations. It can also possess anyonic excitations. This makes such \emph{topological} SL states candidates for robust quantum memories and processors \cite{NayakEtAl08}. Alternatively, a \emph{critical} SL is characterized by a huge density of low-lying excitations and a power-law decay of spin correlations. Since frustrated quantum antiferromagnets are hard to simulate (path-integral Monte-Carlo methods fail) and clean solid-state realizations are not available, it is desirable to study these exotic many-body systems with ultracold atoms in optical lattice potentials \cite{OpticalLattices}, which provide both clean conditions and far-reaching control. The most straightforward cold-atom implementation of a quantum magnet is to create a Mott insulator of fermions in two different internal states forming a pseudo spin. However, the necessary low temperatures (smaller than the weak superexchange spin coupling) have not yet been achieved. Spinless fermions at filling 2/3 in a trimerized Kagom\'e lattice (not yet realized) would also resemble a quantum magnet \cite{DamskiEtAl05}. In this letter we propose a different strategy for the realization of a frustrated quantum system with ultracold atoms that --- in contrast to the aforementioned approaches --- can be pursued in existing experimental setups, at temperatures already reached. Our idea is to consider spinless ultracold bosonic atoms in a triangular optical lattice and to induce frustration via a sign change of the matrix elements describing tunneling between adjacent potential minima. As will be shown, such a sign change can be achieved effectively by dressing the system with a high-frequency elliptical lattice acceleration. In the hard-core boson limit of strong repulsive interaction, the physics is then described by the antiferromagnetic spin-1/2 $XY$-model on the triangular lattice. For certain regimes of anisotropic coupling this model is expected to show gapped SL phases \cite{SchmiedEtAl08}. This letter is organized as follows. We start with a discussion of the frustrated positive-hopping Bose-Hubbard model that describes the system to be realized experimentally. In order to sketch the expected phase diagram we combine (i) recently published numerical data (based on PEPS as well as exact simulations) \cite{SchmiedEtAl08}, valid in the limit of strong interaction, with (ii) results obtained by starting from the limit of weak interaction and systematically including quantum fluctuations (beyond Bogoliubov).
Then we discuss the experimental realization of the model, putting emphasis on how to change the sign of the tunneling matrix elements via a fast elliptical lattice acceleration. This is followed by a part devoted to the preparation of the frustrated model's ground state in the presence of a trapping potential. Finally, before giving a brief conclusion, we discuss possible experimental signatures of the expected phases. \section{Positive-hopping Bose-Hubbard model on a triangular lattice} Consider a sample of ultracold bosonic atoms in a deep triangular optical lattice that is forced inertially by moving the lattice rapidly along an elliptical orbit. As explained in the following section, the time evolution of such a system has a simple description. Integrating out the fast oscillatory motion on the short time scale $T=2\pi/\omega$ of the elliptical forcing, one finds the system's evolution on longer time scales governed by the time-independent effective Bose-Hubbard Hamiltonian \begin{equation}\label{eq:Heff} \hat{H}_\text{eff} = \sum_{\langle ij\rangle}J^\text{eff}_{ij} \hat{b}^{\dag}_i\hat{b}^{\phantom\dag}_j +\sum_i \left[\frac{U}{2}\hat{n}_i(\hat{n}_i-1)-\mu_i\hat{n}_i\right]. \end{equation} Here $\hat{b}_i$ and $\hat{n}_i$ are the bosonic annihilation and number operators for Wannier states localized at the minima ${\bm r}_i$ of the triangular lattice potential. The first term comprises tunneling between adjacent sites $i$ and $j$ with --- this is the crucial point --- matrix elements $J_{ij}^\text{eff}$ that are smoothly tunable from negative to positive values by variation of the forcing strength.\footnote{In our convention $\langle ij\rangle$ denotes an oriented pair of neighboring sites, $\langle ij\rangle\ne\langle ji\rangle$.} The on-site terms are characterized by the positive interaction parameter $U$ and the local chemical potential $\mu_i\equiv\mu-V_i$ including the trapping potential $V_i$. We consider the anisotropic lattice shown in Fig.~\ref{fig:lattice}(a) with the $J_{ij}^\text{eff}$ equal to either $J$ or $J'\equiv\alpha J$ (assuming $\alpha\ge0$). \begin{figure}[t]\centering \includegraphics[width = 1\linewidth]{Fig1} \caption{\label{fig:lattice} (color online) (a) Anisotropic triangular lattice considered, with primitive vectors ${\bm a}_1\equiv d{\bm e}_x$, ${\bm a}_2\equiv d[(1/2){\bm e}_x+(\sqrt{3}/2){\bm e}_y]$, as well as ${\bm a}_3\equiv -{\bm a}_1+{\bm a}_2$. The tunneling matrix elements $J_{ij}^\text{eff}$ take values $J$ and $J'\equiv\alpha J$ (with $\alpha\ge0$) along the solid and dashed bonds, respectively. (b) Reciprocal lattice with $b=(4\pi/\sqrt{3})d^{-1}$. The first Brillouin zone, centered at ${\bm p}={\bm 0}$, is shaded. Considering antiferromagnetic coupling $J>0$, we have marked the ordering vector ${\bm q}$ describing a N\'eel SF in the limit of weak interaction: For $\alpha\ge\alpha_0$, ${\bm q}$ lies on one of the x-shaped crosses (being equivalent modulo reciprocal lattice vectors). This corresponds to a staggered configuration of the local phase angles $\varphi_i$ on the rhombic lattice of $J'$-bonds [shown in (c) with the $\varphi_i$ visualized by pointers]. Lowering $\alpha$, at $\alpha=\alpha_0$ the position of ${\bm q}$ splits continuously into two non-equivalent possible positions that separate symmetrically along the arrows drawn in (b).
The phases $\varphi_i$ assume a spiral pattern with two possible chiralities; subfigure (d) corresponds to the isotropic lattice ($\alpha=1<\alpha_0$) with ${\bm q}$ lying on one of the corners of the first Brillouin zone. Finally, in the 1D limit ($\alpha=0$) only $q_x$ has a well-defined value, marked by the dashed lines in (b). The phase pattern is staggered along the 1D chains of $J$-bonds [as sketched in (e)].} \end{figure} The homogeneous model ($\mu_i=\mu$) interpolates between a classical rotor and a quantum spin model: For weak interaction $U\ll n|J|$, with a mean filling of $n$ particles per site, the superfluid (SF) ground state can (locally) be approximated by $\prod_i\exp(\psi_i\hat{b}^{\dag}_i)|\text{vacuum}\rangle$ with discrete order parameter $\psi_i =\sqrt{n_i}\exp(\mathrm{i}\varphi_i)$. A homogeneous density $n_i=n$ is favored and the local phases $\varphi_i$ play the role of classical rotors assuming a configuration $\varphi_i\equiv{\bm q}\cdot{\bm r}_i$ described by the ordering vector ${\bm q}$. {Antiferromagnetic coupling $J>0$ implies N\'eel ordered phases $\varphi_i$ as depicted in Fig.~\ref{fig:lattice}(b-e). We call such a state a N\'eel SF. When $\alpha$ exceeds a value $\alpha_0$, spiral order continuously transforms into staggered N\'eel order. While $\alpha_0$ equals 2 for $U/J=0$, it slightly decreases with increasing interaction, cf.\ Fig.~\ref{fig:app}(a).} In the opposite limit of strong interaction $U\gg n|J|$, there are only two energetically favored site occupations, $n_i=[n]\equiv g$ (the largest integer smaller than $n$) and $n_i=g+1$. Associating them with ``spin up'' and ``spin down'', respectively, gives the Bloch-sphere representation $|\vartheta_i,\varphi_i\rangle\equiv\cos(\vartheta_i/2)|g\rangle_i +\sin(\vartheta_i/2)\exp(\mathrm{i}\varphi_i)|g+1\rangle_i$ at each site. Replacing $(g+1)^{-1/2}\hat{b}^{\phantom\dag}_i$ by the spin lowering operator \mbox{$(\hat{\sigma}^{x}_i-\mathrm{i}\hat{\sigma}^{y}_i)/2$}, one arrives at the $XY$-model \begin{equation}\label{eq:Hxy} \hat{H}_{XY} = \sum_{\langle i j\rangle}J^{XY}_{ij} (\hat{\sigma}^{x}_i\hat{\sigma}^{x}_j+\hat{\sigma}^{y}_i\hat{\sigma}^{y}_j) +\sum_i h_i \hat{\sigma}^{z}_i \end{equation} with $h_i\equiv\frac{1}{2}(\mu_i-Ug)$, $J^{XY}_{ij}\equiv \frac{g+1}{4}J^\text{eff}_{ij}$, and $\hat{\sigma}^{x}_i$, $\hat{\sigma}^{y}_i$, $\hat{\sigma}^{z}_i$ being spin-1/2 Pauli operators at site $i$. The ground state of $\hat{H}_{XY}$ cannot be a product state like $\prod_i|\vartheta_i,\varphi_i\rangle$ with definite local phases $\varphi_i$ anymore, since $|\vartheta_i,\varphi_i\rangle$ cannot be an eigenstate of both $\hat{\sigma}^{x}_i$ and $\hat{\sigma}^{y}_i$. Viewed from the Bose-Hubbard perspective, increasing interparticle repulsion increases the fluctuations of the local phases $\varphi_i$. While for ferromagnetic coupling $J<0$ the classical phase configuration is supposed to survive the presence of quantum fluctuations in the spin-1/2 limit $U\gg n|J|$, for antiferromagnetic coupling $J>0$ recent simulations suggest that (for $\sum_i\langle\hat{\sigma}^{z}_i\rangle=0$) classical N\'eel order is not necessarily preserved \cite{SchmiedEtAl08}: Along the $\alpha$-axis different N\'eel phases are separated by gapped SL phases with exponentially decaying spin correlations. The results of Ref.~\cite{SchmiedEtAl08} are displayed along the upper edge of the phase diagram shown in Fig.~\ref{fig:diagram}.
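As a simple cross-check of the classical limit, the ordering vector can be obtained by minimizing the single-particle dispersion $\varepsilon({\bm p})$ (quoted explicitly in the strong-coupling discussion below) with respect to ${\bm q}$. The following minimal numerical sketch does this in Python; the units $d=J=1$, the restriction to $q_y=0$ and the chosen $\alpha$ values and bracketing interval are assumptions of this illustration, not part of the model definition:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Classical (U/J -> 0) ordering vector of the antiferromagnetic (J > 0)
# anisotropic triangular lattice: minimize the single-particle dispersion
# eps(p) = 2J[cos(d p_x) + 2 alpha cos(d p_x/2) cos(sqrt(3) d p_y/2)]
# along q_y = 0, in units d = J = 1 (assumptions of this sketch).
def eps(qx, alpha):
    return 2.0 * (np.cos(qx) + 2.0 * alpha * np.cos(0.5 * qx))

for alpha in [0.5, 1.0, 1.5, 2.0, 2.5]:
    res = minimize_scalar(eps, bounds=(np.pi, 2.0 * np.pi),
                          args=(alpha,), method='bounded')
    print(f"alpha = {alpha:4.2f}   d*qx = {res.x:6.4f}")
\end{verbatim}
For $\alpha<2$ the minimum lies at $dq_x=2\arccos(-\alpha/2)$, reaching the corner of the Brillouin zone ($dq_x=4\pi/3$, spiral order) at $\alpha=1$, whereas for $\alpha\ge2$ it sits at $dq_x=2\pi$, i.e.\ at the staggered configuration on the rhombic lattice of $J'$-bonds; the sketch thus reproduces the classical value $\alpha_0=2$ quoted above.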
\begin{figure}[t]\centering \includegraphics[width = 0.9\linewidth]{Fig2} \caption{\label{fig:diagram}(color online) Sketch of the phase diagram of the anisotropic positive-hopping Bose-Hubbard model on the triangular lattice for half-odd-integer filling. The parameter plane is spanned by interaction strength $U/(nJ)$ and anisotropy ratio $\alpha=J'/J$. The data (in green) for the spin-1/2 limit [$U/(nJ)\gg1$] are taken from Ref.~\cite{SchmiedEtAl08}. We assume the SL phases to survive at small finite values of $J/U$, since they are protected by a gap. The behaviour at small $U/(nJ)$ corresponds to results obtained within a generalized Bogoliubov theory, cf.~Fig.~\ref{fig:app}(a).} \end{figure} In order to gain further insight into the physics of the frustrated positive-hopping Bose-Hubbard model (\ref{eq:Heff}), we start from the classical limit of weak interaction $U\ll n|J|$ (assuming a homogeneous system) and include quantum fluctuations by using the generalized Bogoliubov approach introduced in Ref.~\cite{MoraCastin03}. For filling already moderately larger than 1, we can replace $\hat{b}^{\phantom\dag}_i\simeq \exp[\mathrm{i}(\varphi_i+\delta\hat{\varphi}_i)]\sqrt{n_i+\delta\hat{n}_i}$, where $\delta\hat{n}_i=\delta\hat{n}_i^\dag$ and $\delta\hat{\varphi}_i\simeq\delta\hat{\varphi}_i^\dag$ describe quantum fluctuations of the local particle numbers $n_i$ and phases $\varphi_i$, respectively, with $[\delta\hat{n}_i,\delta\hat{\varphi}_j]\simeq\mathrm{i}\delta_{i,j}$. While $\langle(\delta\hat{n}_i)^2\rangle/n^2\ll1$ can be assumed, the phase fluctuations $\langle(\delta\hat{\varphi}_i)^2\rangle$ diverge in the 1D limit ($\alpha=0$) (as well as at finite temperatures) where only quasi-long-range order is possible. However, one can still expect the fluctuation of the \emph{relative} phases $\langle(\delta\hat{\varphi}_i-\delta\hat{\varphi}_j)^2\rangle$ between neighboring sites $i$ and $j$ to be small. Expanding the Hamiltonian (\ref{eq:Heff}) up to second order in $\delta\hat{n}_i/n_i$ and $(\delta\hat{\varphi}_i-\delta\hat{\varphi}_j)$, it becomes quadratic in terms of the new bosonic operators $\hat{d}^{\phantom\dag}_i\equiv\sqrt{n_i}[\delta\hat{n}_i/(2n_i) + \mathrm{i}\delta\hat{\varphi}_i]$ and $\hat{d}^{\dag}_i$, and can be diagonalized by a Bogoliubov transform (keeping $\langle\delta\hat{n}_i\rangle=0$). When computing, e.g., correlations $\langle\hat{b}^{\dag}_i\hat{b}^{\phantom\dag}_j\rangle$ between distant sites $i$ and $j$, one cannot treat $(\delta\hat{\varphi}_i-\delta\hat{\varphi}_j)$ as a small quantity, but has to use Wick's theorem to evaluate expectation values of all powers of $\delta\hat{\varphi}_i$ \cite{MoraCastin03}. We augment this analysis by taking into account also the (Wick-decomposed) quartic corrections to the Hamiltonian when minimizing the ground-state energy with respect to both the Bogoliubov coefficients and the ordering vector ${\bm q}$.\footnote{In the case of spiral order ($\pi<d|q_x|<2\pi$ with $q_y=0$), one finds two solutions, ${\bm q}$ and ${\bm q}'=-{\bm q}$, and has to choose one of them.} This self-consistent above-Bogoliubov correction, which we include using a numerical iteration scheme, is necessary in order to explain a shift of $\alpha_0$ with increasing interaction.\footnote{ Taking into account self-consistently the quartic terms does not lead to a spurious gap in the quasiparticle spectrum, as happens within the standard Bogoliubov treatment \cite{Griffin96}.
This gaplessness is, thus, a feature of the generalized Bogoliubov expansion \cite{MoraCastin03} in terms of density and relative-phase fluctuations.} Assuming homogeneous filling $n_i=n$ as well as $q_y=0$, the method sketched in the above paragraph leads to the following results: With increasing interaction/quantum fluctuations, $\alpha_0$ decreases, i.e.\ the $\alpha$-domain of rhombic-staggered N\'eel order grows [thick line in Fig.~\ref{fig:app}(a)]. This {``order by disorder'' phenomenon \cite{Reviews}} is in accordance with the spin-1/2 results of Ref.~\cite{SchmiedEtAl08} (cf.\ upper edge of Fig.~\ref{fig:diagram}). In contrast, a finite $\alpha$-interval of staggered 1D quasi-long-range N\'eel order, also predicted for the spin model, is not found. We have used these findings to draw the lower part of the phase diagram of Fig.~\ref{fig:diagram}. The quasiparticle dispersion relation is gapless and phonon-like for quasimomentum wave numbers ${\bm p}$ around ${\bm p}={\bm q}$. However, whenever spiral order is found, the dispersion is symmetric with respect to $({\bm p}-{\bm q})\to-({\bm p}-{\bm q})$ only in the limit of small $|{\bm p}-{\bm q}|$. In contrast, the zero-temperature quasimomentum distribution, having sharp peaks at ${\bm p}={\bm q}$ plus reciprocal lattice vectors, possesses reflection symmetry with respect to ${\bm p}={\bm q}$. Further results, for $n=3.5$, are shown in Fig.~\ref{fig:app}(a). An estimate for the range of validity of the approximation is given by the dotted and dashed lines. Above them the relative phase fluctuations between neighboring sites separated by ${\bm a}_1$ and ${\bm a}_2$, respectively, exceed a value taken to be $\pi/4$. The fact that the dotted line does not approach zero in the limit of decoupled 1D chains ($\alpha\to0$) indicates that in this limit the approximation still captures the physics in ${\bm a}_1$-direction (along the chains). Moreover, the dip around $\alpha=\alpha_0$ can be interpreted as a precursor of the SL phase predicted in Ref.~\cite{SchmiedEtAl08} (cf.~Fig.~\ref{fig:diagram}). Finally, the thin solid line marks the interaction where the condensate fraction $f_c\equiv\lim_{|{\bm r}_{ij}|\to\infty} |\langle\hat{b}^{\dag}_i\hat{b}^{\phantom\dag}_j\rangle|/n$ is reduced to 0.75. Again, a sharp dip at $\alpha=\alpha_0$ hints at a quantum disordered phase in the limit of large interaction. \begin{figure}[t]\centering \includegraphics[width = 1\linewidth]{Fig3} \caption{\label{fig:app} (color online) (a) Generalized Bogoliubov theory for $n=3.5$. Values of $U/(n|J|)$ at which: spiral changes to rhombic-staggered N\'eel order (thick line), relative phase fluctuations reach $\pi/4$ for sites separated by ${\bm a}_1$ (dotted line) and ${\bm a}_2$ (dashed line), the condensate fraction has dropped to $0.75$ (thin solid line). (b) Boundaries of the MI phases with integer filling $n$ in the $\mu/U$-$J/U$-plane, both in 2nd-order strong-coupling (solid lines) and meanfield (dashed lines) approximation. As a consequence of frustration, the MI double-lobes are larger on the antiferromagnetic side ($J>0$) of the phase diagram. The grey bubbles between the MI regions, indicating the expected gapped SL phases at half-odd-integer filling, are just sketched. } \end{figure} \section{{Proposal for an} experimental realization} Having discussed the physics of the triangular-lattice positive-hopping Bose-Hubbard Hamiltonian, let us turn to the realization of the model with ultracold atoms in a deep optical lattice.
The sign change of the tunneling matrix element, from negative to positive values, shall be induced by dressing the system with a fast time-periodic lattice acceleration. For hypercubic lattices, such a dynamical modification of tunneling has been predicted theoretically not only for single particles \cite{DynamicLocalization}, but also for many interacting particles \cite{EckardtEtAl05II}. Moreover, it has been observed experimentally with ultracold atoms both in the weakly interacting regime (via the expansion of a Bose-Einstein condensate \cite{LignierEtAl07}) and in the strong-coupling regime, where it has been used to induce the quantum phase transition from a SF to a Mott insulator (MI) and back \cite{EckardtEtAl05II,ZenesiniEtAl09}. However, the linear driving scheme used in the work just mentioned, with the system being forced sinusoidally along a single direction (chosen to be diagonal with respect to all symmetry axes in the case of a square or a cubic lattice), is not suitable for the triangular lattice geometry. In order to be able to manipulate the system in a symmetric way with respect to the three non-orthogonal lattice directions ${\bm a}_1$, ${\bm a}_2$, and ${\bm a}_3$ [Fig.~\ref{fig:lattice}(a)], here we propose to use elliptical forcing. This includes isotropic circular as well as linear forcing. The driving scheme to be considered can be realized inertially by moving the lattice along an elliptical orbit ${\bm x}(t)=\Delta x_c\cos(\omega t){\bm e}_c+\Delta x_s \sin(\omega t){\bm e}_s$ in space, with angular frequency $\omega$, orthogonal unit vectors ${\bm e}_c$ and ${\bm e}_s$, as well as amplitudes $\Delta x_c$ and $\Delta x_s$. The resulting inertial force in the lattice frame of reference reads ${\bm F}(t)= -m\ddot{{\bm x}} = F_c\cos(\omega t){\bm e}_c + F_s\sin(\omega t){\bm e}_s$, where $m$ is the boson mass and $F_{c/s}=m\omega^2\Delta x_{c/s}$. Choosing $\omega$ and $F_{c/s}$ small enough to exclude transitions from the lowest to higher Bloch bands, one can describe the system in the lattice frame of reference by the driven Bose-Hubbard model \begin{equation}\label{eq:dbh} \hat{H}(t) = \sum_{\langle ij\rangle}J_{ij}\hat{b}^{\dag}_i\hat{b}^{\phantom\dag}_j +\frac{U}{2}\sum_i \hat{n}_i(\hat{n}_i-1) + \sum_i[v_i(t)-\mu_i]\hat{n}_i. \end{equation} Here $J_{ij}<0$ are the bare tunneling matrix elements and $v_i(t)\equiv-{\bm r}_i\cdot{\bm F}(t)$ the oscillating on-site energies. We assume that $\hbar\omega$ is large compared to the energy scales given by interaction ($U$), tunneling ($n|J_{ij}|$), and trapping ($|\mu_i-\mu_j|$, with neighbors $i$ and $j$), i.e.\ that the forcing is fast with respect to the time scales governing the undriven model. Under these conditions the time evolution of the driven system's state $|\psi(t)\rangle$ will be, to a good approximation, of the form \begin{equation} |\psi(t)\rangle \approx \hat{U}(t)|\psi_\text{eff}(t)\rangle. \end{equation} The unitary operator \begin{equation} \hat{U}(t)\equiv \exp\Big(-\frac{\mathrm{i}}{\hbar}\sum_i \hat{n}_i W_i(t)\Big), \end{equation} where \begin{equation} W_i(t)\equiv\int_0^t\!\mathrm{d}\tau\,v_i(\tau) -\frac{1}{T}\int_0^T\!\mathrm{d} t'\int_0^{t'}\!\mathrm{d}\tau\,v_i(\tau), \end{equation} just describes a periodically time-dependent shift by $-m\dot{{\bm x}}$ of the whole system in quasimomentum, $W_i={\bm r}_i\cdot m{\dot{{\bm x}}}$.
On top of this simple oscillatory motion on the short time scale $T=2\pi/\omega$, the time evolution on longer times is governed by the effective time-independent Hamiltonian $\hat{H}_\text{eff}$ shown in Eq.~(\ref{eq:Heff}), namely \begin{equation} |\psi_\text{eff}(t)\rangle =\exp\bigg(-\frac{\mathrm{i}}{\hbar}\hat{H}_\text{eff}t\bigg) |\psi_\text{eff}(0)\rangle. \end{equation} The dressed tunneling matrix elements are given by \begin{equation}\label{eq:Jeff} J_{ij}^\text{eff}= J_{ij}\mathrm{J}_0\bigg(\frac{K_{ij}}{\hbar\omega}\bigg). \end{equation} Here $\mathrm{J}_0$ is the zeroth-order Bessel function and $K_{ij}\equiv\sqrt{(F_c{\bm e}_c\cdot{\bm r}_{ij})^2+(F_s{\bm e}_s\cdot{\bm r}_{ij})^2}$ the amplitude of the potential modulation between sites $i$ and $j$, where ${\bm r}_{ij}\equiv{\bm r}_i-{\bm r}_j$. Thus, in the lattice frame, apart from the superimposed fast oscillation in quasimomentum, the system behaves as the one described by $\hat{H}_\text{eff}$. When measuring the momentum distribution of the system in the laboratory frame by taking time-of-flight absorption images, one will encounter the periodic quasimomentum distribution of $|\psi_\text{eff}\rangle$ at rest, being enveloped by the momentum distribution of the Wannier wave function oscillating like $m\dot{{\bm x}}$. The result presented in the preceding paragraph relies on the separation of time scales as well as on time averaging. We have obtained it within the framework of quantum Floquet theory \cite{Floquet} by generalizing the approach introduced in Refs.~\cite{EckardtEtAl05II,EckardtHolthaus08b} to elliptical forcing. The derivation is based on stationary degenerate-state perturbation theory on the level of an extended Hilbert space including time as a coordinate. Here we just give a simple argument making the $\hat{H}_\text{eff}$-description plausible: Transforming $|\psi'\rangle=\hat{U}^\dag|\psi\rangle$ leads to the new Hamiltonian $\hat{H}'=\hat{U}^\dag\hat{H}\hat{U}-\mathrm{i}\hbar\hat{U}^\dag(\mathrm{d}_t\hat{U})$. Accordingly, $\hat{H}'$ is obtained from $\hat{H}$ by subtracting the oscillating potential terms $\propto v_i(t)$ and replacing $J_{ij}\to J_{ij}\exp(\mathrm{i}[W_i-W_j]/\hbar)$. Now the rapidly oscillating phase factors in the tunneling terms of $\hat{H}'$ can approximately be taken into account on time average, $\hat{H}'(t)\to\frac{1}{T}\int_0^T\!\mathrm{d} t\,\hat{H}'(t)=\hat{H}_\text{eff}$, giving $|\psi'\rangle\approx|\psi_\text{eff}\rangle$. A 2D triangular optical lattice can be realized by superimposing three laser beams, all polarized in $z$-direction, at an angle of $2\pi/3$ in the $xy$-plane, while a standing light wave in $z$-direction is used to create a stack of effectively two-dimensional systems. A further beam in $z$-direction allows one to modify the trapping potential in the $xy$-plane. The lattice motion can be realized by varying the relative frequencies of the beams by means of acousto-optical modulators. For the purposes described above, an orbit ${\bm x}(t)=\Delta x_c\cos(\omega t){\bm e}_c+\Delta x_s \sin(\omega t){\bm e}_s$ is required, with $\Delta x_{s/c}$ on the order of a lattice constant and $\omega/(2\pi)$ being a few kHz. Starting from an isotropic undriven lattice with bare tunneling matrix elements $J_{ij}=\bar{J}<0$ and choosing ${\bm e}_{c/s}={\bm e}_{x/y}$, one obtains effective tunneling matrix elements (\ref{eq:Jeff}) distributed as depicted in Fig.~\ref{fig:lattice}(a).
Namely $K_{ij}$ reads $K\equiv d|F_c|$ and $K'\equiv d\sqrt{F_c^2+3F_s^2}/2$ along the solid and dashed bonds, respectively, giving $J=\bar{J}\mathrm{J}_0\big(K/(\hbar\omega)\big)$ and $J'=\bar{J}\mathrm{J}_0\big(K'/(\hbar\omega)\big)$ according to Eq.~(\ref{eq:Jeff}). This allows for any value of the anisotropy parameter $\alpha=J'/J$. {We have already implemented a triangular optical lattice in the laboratory, loaded it with ultracold $^{87}$Rb atoms, and observed the transition from a SF to a MI. Also a controlled motion of the lattice has been achieved.} \section{State preparation and role of trapping potential} For elliptical forcing there are no instants in time where $\hat{U}(t)$ is equal to the identity (i.e.\ with $\dot{{\bm x}}=0$), as is the case for linear forcing ($F_s=0$) at integer $t\omega/(2\pi)$. Thus, it is not possible to ``map'' the state $|\psi\rangle$ of an initially unforced system on $|\psi_\text{eff}\rangle$ by suddenly switching on the forcing. However, one can smoothly switch on the drive. According to the adiabatic principle for quantum Floquet states \cite{BreuerHolthaus89II}, $|\psi_\text{eff}\rangle$ can follow adiabatically when $\hat{H}_\text{eff}$ is modified by the forcing, starting from $|\psi_\text{eff}\rangle=|\psi\rangle$ in the undriven limit \cite{EckardtEtAl05II,EckardtHolthaus08b}. Before passing from the ground state of the undriven system ($J_{ij}^\text{eff}=J_{ij}<0$) to the positive-hopping regime ($J_{ij}^\text{eff}>0$) in the presence of a trapping potential, the lattice should be tuned very deep, such that $U\gg n_0|J|$ with filling $n_0$ in the trap center. The system will form MI regions \cite{Mott} with an integer number $g$ of particles (depending on $\mu_i/U$) localized at each site. Different MI regions will be separated only by tiny intermediate domains of non-integer filling. In the MI phases the state can follow smoothly through the sign change of $J$ when the lattice acceleration is ramped up continuously in the next step. Moreover, in a deep lattice unwanted interband transitions are strongly suppressed. When the desired strength of the forcing is reached, the MI in the center of the trap has to be melted. This can be achieved both by decreasing the lattice depth (without leaving the regime of strong correlation $U\sim n_0|J|$) and by tuning the chemical potential in the trap center. The latter can be done by varying the trap, such that atoms are pushed into or pulled out of the center. We have studied the MI-to-SF ($J<0$) and MI-to-N\'eel SF ($J>0$) transitions in the triangular lattice theoretically. In the parameter plane spanned by $\mu/U$ and $J/U$, a strong-coupling expansion as described in Ref.~\cite{FreericksMonien94} gives the upper and lower boundary, $\mu_\text{p}/U$ and $\mu_\text{h}/U$, of the MI phase with integer filling $n=g$. One finds $\mu_\text{p}/U=g -(g+1)\eta-gc_\text{p}\eta^2+\mathcal{O}(\eta^3)$ and $\mu_\text{h}/U= (g-1)+g\eta+(g+1)c_\text{h}\eta^2+\mathcal{O}(\eta^3)$. The expansion parameter is given by $\eta\equiv-\varepsilon({\bm q})/U= w|J|/U$ with $\varepsilon({\bm q})\equiv-|J|w$ being the single-particle dispersion relation $\varepsilon({\bm p})\equiv 2J[\cos(dp_x)+2\alpha\cos(dp_x/2)\cos(\sqrt{3}dp_y/2)]$ evaluated at its minimum ${\bm q}$. The coefficients read $c_\text{p} \equiv g+1 - (5g+4)(1+2\alpha^2)/w^2$ and $c_\text{h} \equiv g - (5g+1)(1+2\alpha^2)/w^2$.
Here $w$ directly reflects frustration; while $w=4\alpha+2$ for ferromagnetic $J<0$, it is smaller for antiferromagnetic $J>0$, namely $w=\alpha^2+2$ for $0\le\alpha\le2$ and $w=4\alpha-2$ for $\alpha\ge2$. As a consequence, the MI regions extend to larger values of $|J|/U$ on the frustrated side of the $J$-$\mu$ plane. This can also be observed in Fig.~\ref{fig:app}(b) displaying the phase diagram for $\alpha=1.3$. Moreover, the transition from 1D-like concave phase boundaries ($c_\text{p/h}<0$) to square-lattice-like convex ones ($c_\text{p/h}>0$) happens at noticeably larger $\alpha$ in the case of frustration. Namely, it occurs for $\alpha$ between 0.03 and 0.13 when $J<0$, and between 2.3 and 4.2 when $J>0$. For convex boundaries, also $\mu_\text{p/h}/U \simeq \frac{1}{2}\{2g-1-\eta \pm [1-2(2g+1)\eta +\eta^2]^{1/2}\}$ obtained within meanfield approximation (cf.\ Refs.~\cite{Mott}) can be expected to provide a reasonable description. The phase diagram plotted in Fig.~\ref{fig:app}(b) shows: the smaller $n|J|/U$ gets, the smaller the intervals of $\mu/U$ with non-integer filling (i.e.\ outside the MI lobes) become. In order to reach the strong-coupling limit $U\gg n|J|$ at non-integer filling $g<n<g+1$ [where the spin-1/2 description (\ref{eq:Hxy}) with non-trivial polarization applies], the variation of $\mu_i$ (i.e.\ of $V_i$) must be smaller than $\mu_\text{h}^{(g+1)}-\mu_\text{p}^{(g)}\sim2(g+1)w|J|$ over an appreciable number of sites. Such a situation, where also the gapped SL phases are supposed to appear, can be achieved in the center of a shallow trap. Note that the presence of a (shallow) controllable trapping potential is definitely desirable: tuning its depth allows one to manipulate the chemical potential/filling in the trap center. With respect to the chemical potential, the gapped SL phases, expected for large interaction and $\alpha$ near 0.5 or 1.3 [cf.~Fig.~\ref{fig:diagram}], would appear as incompressible regions at half-odd-integer filling. In Fig.~\ref{fig:app}(b) we have sketched these phases (shaded in grey); they show up as ``bubbles'' on the frustrated side between the MI regions. \section{Experimental signatures {of frustration}} Experimental signatures of the N\'eel SF are sharp quasimomentum peaks at ${\bm p}={\bm q}$ plus reciprocal lattice vectors {[cf.\ Fig.~\ref{fig:lattice}(b)]}. In the case of spiral order, the ordering vector ${\bm q}$ can take two different values; when measured (or already before), the system will spontaneously choose one of them. For a whole stack of 2D systems, the measurement will average over both quasimomentum distributions, unless there remains a finite coupling between the 2D layers establishing the same order everywhere. Also the predicted downshift of the anisotropy ratio $\alpha_0$ (where spiral order continuously transforms into rhombic-staggered N\'eel order) with increasing interaction/lattice depth [cf.\ Figs.\ \ref{fig:diagram} and \ref{fig:app}(a)] can be investigated experimentally. {The growth of the staggered $\alpha$-domain with increasing quantum fluctuations is an example of ``order by disorder'' \cite{Reviews}.} Another measurable consequence of frustration is the extension of the MI phases to larger values of $|J|/U$ [cf.\ Fig.~\ref{fig:app}(b)]. {Due to the lack of long-range order, the MI does not show sharp peaks in the single- but rather in the two-particle momentum distribution (noise correlations) \cite{NoiseCorrelation}. This is also true for the gapped SL phases, which would be the most striking implication of frustration.
Thus, in order to distinguish the SL from the MI experimentally, one should search for structures in the momentum distribution beyond sharp peaks. The SL can feature a pattern in the momentum distribution on the scale of a Brillouin zone (i.e.\ the inverse lattice spacing $\pi/d$), reflecting delocalization of particles on pairs of neighboring sites forming ``singlets''. Apart from that, single-site resolved measurements \cite{SingleSite} clearly distinguish between MI and SL by number fluctuations.} \section{Conclusion and Outlook} We have proposed to realize the positive-hopping Bose-Hubbard model with a system of ultracold spinless atoms in a deep triangular optical lattice dressed by a rapid elliptical acceleration. Our scheme makes it possible to investigate experimentally the physics of a frustrated quantum system under the clean and controlled conditions provided by ultracold atoms. Since frustration is induced in motional bosonic degrees of freedom, it is experimentally possible to reach temperatures that are low compared to the energy scales governing the system. The model smoothly approaches a quantum spin-1/2 $XY$-model in the deep-lattice limit of strong interaction. In order to draw the phase diagrams shown in Figs.~\ref{fig:diagram} and \ref{fig:app}(b) we have combined results from different approaches: (i) numerical simulations applying to the spin-1/2 limit of strong interaction at half-odd-integer filling (published recently in Ref.~\cite{SchmiedEtAl08}), (ii) an above-Bogoliubov theory valid in the limit of weak interaction, and (iii) analytical strong-coupling as well as meanfield results for the limit of strong interaction at integer filling. Expected are superfluid phases showing staggered or spiral N\'eel order, Mott insulator phases having integer filling, and gapped spin-liquid phases at half-odd-integer filling. We have also described how the positive-hopping regime can be reached adiabatically, if initially the system is prepared in the usual negative-hopping ground state. Finally, experimental signatures of the different phases have been discussed. In conclusion, using an existing setup, the experiment proposed here can provide novel information about a frustrated quantum system. We have restricted our analysis to the triangular lattice geometry that we have already implemented in the laboratory. However, the route described here, namely (i) realizing a positive-hopping Bose-Hubbard model with ultracold atoms by dressing the system by a fast elliptical lattice acceleration and (ii) approaching the physics of a spin-1/2 $XY$-model in the limit of strong interaction, applies equally to other two-dimensional non-bipartite lattices such as the Kagom\'e lattice. This opens perspectives for interesting future research. \acknowledgments We thank R.\ Schmied, T.\ Roscilde, V.\ Murg, D.\ Porras, and I.\ Cirac for discussions on the spin model. Support by the Spanish MCI [FIS2008-00784, FIS2007-29996-E (ESF-EUROQUAM project FERMIX)], the Alexander von Humboldt foundation, Caixa Manresa, and through ERC grant QUAGATUA as well as through the EU STREP NAMEQUAM is gratefully acknowledged.
\section{Introduction} \label{S:1} \justify Students often encounter formulas for sums of powers of the first $n$ positive integers as examples of statements that can be proved using the Principle of Mathematical Induction and, perhaps less often nowadays, in Riemann sums during an introduction to definite integration. In either situation, they usually see only the first three such sum formulas, \begin{equation*} 1+2+3+\cdots+n=\frac{n(n+1)}{2} \end{equation*} \begin{equation*} 1^2+2^2+3^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6} \end{equation*}and \begin{equation*} 1^3+2^3+3^3+\cdots+n^3=\frac{n^2(n+1)^2}{4} \end{equation*} for any positive integer $n$.\\[3mm] Formulas for sums of integer powers were first given in generalizable form in the West by Thomas Harriot (c. 1560-1621) of England. At about the same time, Johann Faulhaber (1580-1635) of Germany gave formulas for these sums up to the $17^{th}$ power, far higher than anyone before him, but he did not make clear how to generalize them. Pierre de Fermat (1601-1665) often is credited with the discovery of formulas for sums of integer powers, but his fellow French mathematician Blaise Pascal (1623-1662) gave the formulas much more explicitly. The Swiss mathematician Jakob Bernoulli (1654-1705) is perhaps best and most deservedly known for presenting formulas for sums of integer powers to the European mathematical community. His was the most useful and generalizable formulation to date because he gave by far the most explicit and succinct instructions for finding the coefficients of the formulas.\\ More generally, the present paper provides an identity that evaluates one sum of powers (with complex $a$ and $d$) without depending on the other sums; see Section~\ref{S:3}. \section{Preliminary} \label{S:2} In the following Theorems $1$ and $3$, formulas $(1)$ and $(2)$ generalize the well-known formulas for $a=d=1$ \cite{8}. We note that equation $(2)$ was found by Wiener \cite{10}; see also \cite{2}. Bachmann \cite{1} found a recurrence for $L_{k,n}(a,d)$ involving only $L_{j,n}$ for $j=1,2,3,\cdots,k-1.$ This paper presents formulas for $L_{k,n}(a,d)$ and $T_{k,n}(a,d)$ from equations $(1)$ and $(2)$ without the involvement of $L_{j,n}$ for $j=1,2,3,\cdots,k-1$ and $T_{j,n}$ for $j=1,2,3,\cdots,k-1$, respectively. \begin{theorem} For all $n,k \in \mathbb{N}$, and $a,d\in{\mathbb{C}}$, where $d\ne0$, \begin{equation} \sum_{j=0}^{k}\binom{k+1}jd^{k+1-j}L_{j,n}(a,d)=(a+nd)^{k+1}-(a)^{k+1} \end{equation}where \begin{equation*} \binom {k+1} j=\frac{(k+1)!}{j!(k+1-j)!} \end{equation*}and \begin{equation*} L_{j,n}(a,d)=a^j+(a+d)^j+(a+2d)^j+\cdots+(a+(n-1)d)^j \end{equation*} \end{theorem} \begin{theorem} The above theorem can be rewritten in the following system-of-equations form for $n=1,2,3,\cdots$ \begin{equation*} AX=B \end{equation*}where \begin{equation*} A=\begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & 0\\[2mm] \binom 2 0 d^2& \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\[2mm] \binom 3 0 d^3& \binom 3 1 d^2& \binom 3 2 d & 0 & 0 & 0 & 0 & 0 & 0 & 0\\[2mm] \binom 4 0 d^4& \binom 4 1 d^3 & \binom 4 2 d^2 & \binom 4 3 d & 0 & 0 & 0 & 0 & 0 & 0 \\[2mm] \binom 5 0 d^5& \binom 5 1 d^4 &\binom 5 2 d^3 & \binom 5 3 d^2 & \binom 5 4 d & 0 & 0 & 0 & 0 & 0\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 &0\\[2mm] . & . &. &.&.& . & . & 0 & 0 &0\\[2mm] . & . &. &.&.& . & . & . & 0 &0\\[2mm] \binom {k} 0d^k & \binom {k} 1 d^{k-1}&\binom {k} 2 d^{k-2} & \binom {k} 3 d^{k-3} & . & . & . & .
& \binom {k} {k-1}d & 0\\[2mm] \binom {k+1} 0 d^{k+1} & \binom {k+1} 1 d^{k}&\binom {k+1} 2 d^{k-1}& \binom {k+1} 3 d^{k-2} & . & . & . & . &\binom {k+1} {k-1}d^2 & \binom {k+1} {k}d\\[2mm] \end{pmatrix} \end{equation*} \begin{equation*} X= \begin{pmatrix} L_{0,n}(a,d)\\[2mm] L_{1,n}(a,d)\\[2mm] L_{2,n}(a,d)\\[2mm] L_{3,n}(a,d)\\[2mm] . \\[2mm] .\\[2mm] .\\[2mm] L_{k-1,n}(a,d)\\[2mm] L_{k,n}(a,d) \end{pmatrix} \text{ , } B=\begin{pmatrix} (a+nd)-a\\[2mm] (a+nd)^{2}-a^{2}\\[2mm] (a+nd)^{3}-a^{3}\\[2mm] (a+nd)^{4}-a^{4}\\[2mm] .\\[2mm] .\\[2mm] .\\[2mm] (a+nd)^{k}-a^{k}\\[2mm] (a+nd)^{k+1}-a^{k+1} \end{pmatrix} \end{equation*} Thus, to solve for $X$, we can use Cramer's rule for solving linear systems of equations, since $\det(A)\neq 0$. \begin{theorem} For all $n,k \in \mathbb{N}$, and $a,d\in{\mathbb{C}}$, where $d\ne0$, \begin{equation} \sum_{j=0}^{k}(-1)^j\binom{k+1}jd^{k+1-j}T_{j,n}(a,d)=(-1)^k\bigg[(a+nd-d)^{k+1}-(a-d)^{k+1}\bigg] \end{equation}where \begin{equation*} \binom {k+1} j=\frac{(k+1)!}{j!(k+1-j)!} \end{equation*}and \begin{equation*} T_{j,n}(a,d)=a^j-(a+d)^j+(a+2d)^j-\cdots+(-1)^{(n-1)}(a+(n-1)d)^j \end{equation*} \end{theorem} \section{Main Result} \label{S:3} \begin{theorem} For all $k=3,4,5,\cdots$ and $n=k+1,k+2,k+3,\cdots$ \begin{equation} S^{k-2}_n=-\binom n {k-1}\frac{1}{k}d^{n-k}S^{k-3}_k+S^{k-3}_{n} \end{equation}where \begin{equation*} S^0_{n}=S_n=\bigg(\frac{n}{2}-1\bigg)kd^{n}-\frac{n}{2}d^{n-2}((a+kd)^2-a^2)+(a+kd)^{n}-a^{n} \end{equation*} \end{theorem} \begin{proof}Let $J^r=(a+nd)^r-a^r$ for $r=1,2,3,\cdots,k+1$ and \begin{equation*} M_1=\begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & J\\[2mm] \binom 2 0 d^2& \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & J^2\\[2mm] \binom 3 0 d^3& \binom 3 1 d^2& \binom 3 2 d & 0 & 0 & 0 & 0 & 0 & 0 & J^3\\[2mm] \binom 4 0 d^4& \binom 4 1 d^3 & \binom 4 2 d^2 & \binom 4 3 d & 0 & 0 & 0 & 0 & 0 & J^4 \\[2mm] \binom 5 0 d^5& \binom 5 1 d^4 &\binom 5 2 d^3 & \binom 5 3 d^2 & \binom 5 4 d & 0 & 0 & 0 & 0 & J^5\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & . & 0 &.\\[2mm] \binom {k} 0d^k & \binom {k} 1 d^{k-1}&\binom {k} 2 d^{k-2} & \binom {k} 3 d^{k-3} & . & . & . & . & \binom {k} {k-1}d & J^k\\[2mm] \binom {k+1} 0 d^{k+1} & \binom {k+1} 1 d^{k}&\binom {k+1} 2 d^{k-1}& \binom {k+1} 3 d^{k-2} & . & . & . & . &\binom {k+1} {k-1}d^2 & J^{k+1}\\[2mm] \end{pmatrix} \end{equation*} Let us apply the following elementary row operations to determine the determinant of matrix $M_1$. Apply \begin{equation*} -d^{n-1}R_1+R_n\longrightarrow R_n \text{ for }n=2,3,\cdots,k+1. \end{equation*} Then we get the matrix \begin{equation*} M_2=\begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & J\\[2mm] 0& \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & -d^{1}J+J^2\\[2mm] 0& \binom 3 1 d^2& \binom 3 2 d & 0 & 0 & 0 & 0 & 0 & 0 & -d^{2}J+J^3\\[2mm] 0& \binom 4 1 d^3 & \binom 4 2 d^2 & \binom 4 3 d & 0 & 0 & 0 & 0 & 0 & -d^{3}J+J^4 \\[2mm] 0& \binom 5 1 d^4 &\binom 5 2 d^3 & \binom 5 3 d^2 & \binom 5 4 d & 0 & 0 & 0 & 0 & -d^{4}J+J^5\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & . & 0 &.\\[2mm] 0& \binom {k} 1 d^{k-1}&\binom {k} 2 d^{k-2} & \binom {k} 3 d^{k-3} & . & . & . & . & \binom {k} {k-1}d & -d^{k-1}J+J^k\\[2mm] 0 & \binom {k+1} 1 d^{k}&\binom {k+1} 2 d^{k-1}& \binom {k+1} 3 d^{k-2} & . & . & . & .
&\binom {k+1} {k-1}d^2 & -d^{k}J+J^{k+1}\\[2mm] \end{pmatrix} \end{equation*} Apply the elementary row operation on matrix $M_2$: \begin{equation*} -\binom n 1\frac{1}{2}d^{n-2} R_2+R_n\longrightarrow R_n \text{ for }n=3,4,\cdots,k+1, \end{equation*} to get the matrix \begin{equation*} \begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & J\\[2mm] 0& \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & -d^{1}J+J^2\\[2mm] 0& 0& \binom 3 2 d & 0 & 0 & 0 & 0 & 0 & 0 &-\frac{3}{2}d^{1}(-d^{1}J+J^2) -d^{2}J+J^3\\[2mm] 0& 0& \binom 4 2 d^2 & \binom 4 3 d & 0 & 0 & 0 & 0 & 0 & -\frac{4}{2}d^{2}(-d^{1}J+J^2)-d^{3}J+J^4 \\[2mm] 0& 0 &\binom 5 2 d^3 & \binom 5 3 d^2 & \binom 5 4 d & 0 & 0 & 0 & 0 &-\frac{5}{2}d^{3}(-d^{1}J+J^2)-d^{4}J+J^5\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & . & 0 &.\\[2mm] 0& 0&\binom {k} 2 d^{k-2} & \binom {k} 3 d^{k-3} & . & . & . & . & \binom {k} {k-1}d & -\frac{k}{2}d^{k-2}(-d^{1}J+J^2)-d^{(k-1)}J+J^k\\[2mm] 0 & 0&\binom {k+1} 2 d^{k-1}& \binom {k+1} 3 d^{k-2} & . & . & . & . &\binom {k+1} {k-1}d^2 & -\frac{k+1}{2}d^{k-1}(-d^{1}J+J^2)-d^{k}J+J^{k+1}\\[2mm] \end{pmatrix} \end{equation*} Let \begin{equation*} S_1=J \end{equation*} \begin{equation*} S_2=-d^{1}J+J^2 \end{equation*} \begin{equation*} S_3= \frac{3}{2}d^{2}J-\frac{3}{2}d^{1}J^2-d^{2}J+J^3 \end{equation*} \begin{equation*} S_4=\frac{4}{2}d^{3}J-\frac{4}{2}d^{2}J^2-d^{3}J+J^4 \end{equation*} \begin{equation*} S_5=\frac{5}{2}d^{4}J-\frac{5}{2}d^{3}J^2 -d^{4}J+J^5 \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} S_{k}=\frac{k}{2}d^{k-1}J-\frac{k}{2}d^{k-2}J^2-d^{k-1}J+J^k \end{equation*} \begin{equation*} S_{(k+1)}=\frac{k+1}{2}d^{k}J-\frac{k+1}{2}d^{k-1}J^2-d^{k}J+J^{k+1} \end{equation*} Therefore matrix $M_4$ becomes \begin{equation*}M_4= \begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & S_1\\[2mm] 0& \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & S_2\\[2mm] 0& 0& \binom 3 2 d & 0 & 0 & 0 & 0 & 0 & 0 & S_3\\[2mm] 0& 0& \binom 4 2 d^2 & \binom 4 3 d & 0 & 0 & 0 & 0 & 0 & S_4 \\[2mm] 0& 0 &\binom 5 2 d^3 & \binom 5 3 d^2 & \binom 5 4 d & 0 & 0 & 0 & 0 &S_5\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & . & 0 &.\\[2mm] 0& 0&\binom {k} 2 d^{k-2} & \binom {k} 3 d^{k-3} & . & . & . & . & \binom {k} {k-1}d & S_k\\[2mm] 0 & 0&\binom {k+1} 2 d^{k-1}& \binom {k+1} 3 d^{k-2} & . & . & . & . &\binom {k+1} {k-1}d^2 & S_{(k+1)}\\[2mm] \end{pmatrix} \end{equation*} Apply the elementary row operation on matrix $M_4$: \begin{equation*} -\binom n 2\frac{1}{3}d^{n-3} R_3+R_n\longrightarrow R_n \text{ for }n=4,5,\cdots,k+1. \end{equation*} Then we get the matrix \begin{equation*}M_5= \begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & S_1\\[2mm] 0& \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & S_2\\[2mm] 0& 0& \binom 3 2 d & 0 & 0 & 0 & 0 & 0 & 0 & S_3\\[2mm] 0& 0& 0 & \binom 4 3 d & 0 & 0 & 0 & 0 & 0 & -\binom 4 2\frac{1}{3}d^{1} S_3+S_4 \\[2mm] 0& 0 &0 & \binom 5 3 d^2 & \binom 5 4 d & 0 & 0 & 0 & 0 &-\binom 5 2\frac{1}{3}d^{2} S_3+S_5\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & . & 0 &.\\[2mm] 0& 0&0 & \binom {k} 3 d^{k-3} & . & . & . & . & \binom {k} {k-1}d & -\binom k 2\frac{1}{3}d^{k-3} S_3+S_k\\[2mm] 0 & 0&0& \binom {k+1} 3 d^{k-2} & . & . & . & .
&\binom {k+1} {k-1}d^2 & -\binom {k+1} 2\frac{1}{3}d^{k-2} S_3+S_{(k+1)}\\[2mm] \end{pmatrix} \end{equation*} Let \begin{equation*} S^1_4=-\binom 4 2\frac{1}{3}d^{1} S_3+S_4 \end{equation*} \begin{equation*} S^1_5=-\binom 5 2\frac{1}{3}d^{2} S_3+S_5 \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} S^1_{k}=-\binom k 2\frac{1}{3}d^{k-3} S_3+S_k \end{equation*} \begin{equation*} S^1_{(k+1)}=-\binom {k+1} 2\frac{1}{3}d^{k-2} S_3+S_{(k+1)} \end{equation*} Therefore \begin{equation*}M_5= \begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & S_1\\[2mm] 0& \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & S_2\\[2mm] 0& 0& \binom 3 2 d & 0 & 0 & 0 & 0 & 0 & 0 & S_3\\[2mm] 0& 0& 0 & \binom 4 3 d & 0 & 0 & 0 & 0 & 0 & S^1_4 \\[2mm] 0& 0 &0 & \binom 5 3 d^2 & \binom 5 4 d & 0 & 0 & 0 & 0 &S^1_5\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & . & 0 &.\\[2mm] 0& 0&0 & \binom {k} 3 d^{k-3} & . & . & . & . & \binom {k} {k-1}d & S^1_k\\[2mm] 0 & 0&0& \binom {k+1} 3 d^{k-2} & . & . & . & . &\binom {k+1} {k-1}d^2 & S^1_{(k+1)}\\[2mm] \end{pmatrix} \end{equation*} Apply the elementary row operation on matrix $M_5$: \begin{equation*} -\binom n 3\frac{1}{4}d^{n-4}R_4+R_n\longrightarrow R_n \text{ for }n=5,6,\cdots,k+1. \end{equation*} Then we get the matrix \begin{equation*}M_6= \begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & S_1\\[2mm] 0& \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & S_2\\[2mm] 0& 0& \binom 3 2 d & 0 & 0 & 0 & 0 & 0 & 0 & S_3\\[2mm] 0& 0& 0 & \binom 4 3 d & 0 & 0 & 0 & 0 & 0 & S^1_4 \\[2mm] 0& 0 &0 & 0 & \binom 5 4 d & 0 & 0 & 0 & 0 &-\binom 5 3\frac{1}{4}d^{1} S^1_4+S^1_5\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & . & 0 &.\\[2mm] 0& 0&0 & 0 & . & . & . & . & \binom {k} {k-1}d &-\binom k 3\frac{1}{4}d^{k-4} S^1_4 +S^1_k\\[2mm] 0 & 0&0& 0 & . & . & . & . &\binom {k+1} {k-1}d^2 &-\binom {k+1} 3\frac{1}{4}d^{k-3} S^1_4+ S^1_{(k+1)}\\[2mm] \end{pmatrix} \end{equation*} Let \begin{equation*} S^2_5=-\binom 5 3\frac{1}{4}d^{1} S^1_4+S^1_5 \end{equation*} \begin{equation*} S^2_6=-\binom 6 3\frac{1}{4}d^{2} S^1_4+S^1_6 \end{equation*} \begin{equation*} S^2_7=-\binom 7 3\frac{1}{4}d^{3} S^1_4+S^1_7 \end{equation*} \begin{equation*} S^2_8=-\binom 8 3\frac{1}{4}d^{4} S^1_4+S^1_8 \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} S^2_{k}=-\binom {k} 3\frac{1}{4}d^{k-4} S^1_4 +S^1_k \end{equation*} \begin{equation*} S^2_{(k+1)}=-\binom {k+1} 3\frac{1}{4}d^{k-3} S^1_4+ S^1_{(k+1)} \end{equation*} Therefore \begin{equation*}M_6= \begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & S_1\\[2mm] 0& \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & S_2\\[2mm] 0& 0& \binom 3 2 d & 0 & 0 & 0 & 0 & 0 & 0 & S_3\\[2mm] 0& 0& 0 & \binom 4 3 d & 0 & 0 & 0 & 0 & 0 & S^1_4 \\[2mm] 0& 0 &0 & 0 & \binom 5 4 d & 0 & 0 & 0 & 0 &S^2_5\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & 0 & 0 &.\\[2mm] . & . &. &.&.& . & . & . & 0 &.\\[2mm] 0& 0&0 & 0 & . & . & . & . & \binom {k} {k-1}d &S^2_k\\[2mm] 0 & 0&0& 0 & . & . & . & .
&\binom {k+1} {k-1}d^2 & S^2_{(k+1)}\\[2mm] \end{pmatrix} \end{equation*} By applying elementary row operations repeatedly, we get the matrix \begin{equation*}M_{k+2}= \begin{pmatrix} \binom 1 0 d & 0 & 0 & 0 & 0 & 0 & 0 & 0 &0 & S_1\\[2mm] 0 & \binom 2 1 d& 0 & 0 & 0 & 0 & 0 & 0 & 0 & S_2\\[2mm] 0 & 0& \binom 3 2 d& 0 & 0 & 0 & 0 & 0 & 0 & S_3\\[2mm] 0 & 0 & 0 & \binom 4 3d & 0 & 0 & 0 & 0 & 0 &S^1_4 \\[2mm] 0 & 0 & 0 & 0 & \binom 5 4 d& 0 & 0 & 0 & 0 &S^2_5\\[2mm] . & . &. &.&.& .& 0 & 0 & 0 & S^3_6\\[2mm] . & . &. &.&.& . & . & 0 & 0 & S^4_7\\[2mm] . & . &. &.&.& . & . & . & 0 & S^5_8\\[2mm] . & . &. &.&.& . & . & . & . & .\\[2mm] . & . &. &.&.& . & . & . & . & .\\[2mm] . & . &. &.&.& . & . & . & . & .\\[2mm] 0 & 0 & 0 & 0 & . & . & . & . & \binom {k} {k-1}d &S^{(k-3)}_{k}\\[2mm] 0 & 0 & 0 & 0 & . & . & . & . & 0 & S^{(k-2)}_{(k+1)}\\[2mm] \end{pmatrix} \end{equation*} Therefore we get the identity for $k=3,4,5,\cdots$ \begin{equation*} S^{k-2}_n=-\binom n {k-1}\frac{1}{k}d^{n-k}S^{k-3}_k+S^{k-3}_{n}\text{ for }n=k+1,k+2,k+3,\cdots \end{equation*} Hence, from the properties of determinants, \begin{equation*} |M_1|=|M_{(k+2)}|=S^{(k-2)}_{(k+1)}d^k\prod_{i=1}^{k}(k+1-i)=k!d^{k}S^{(k-2)}_{(k+1)} \end{equation*} \end{proof} \begin{theorem} For $n=4,5,6,7,\cdots$ and $m=0,1,2,\cdots,n-3$, \begin{equation} S^{(n-3)}_n=\sum_{i=0}^{m}\binom m i \bigg(\frac{d}{2}\bigg)^i\frac{n!}{(n-i)!}(-1)^iS_{n-i}^{n-3-m} \end{equation} \end{theorem} \begin{proof} \begin{equation*} S^{(n-3)}_n=-\frac{n}{2}dS_{n-1}^{n-4}+S_n^{n-4}=-\frac{n}{2}d\bigg[-\frac{n-1}{2}dS_{n-2}^{n-5}+S_{n-1}^{n-5}\bigg]-\frac{n}{2}dS_{n-1}^{n-5}+S_n^{n-5} \end{equation*} \begin{equation*} =\frac{n(n-1)}{2^2}d^2S_{n-2}^{n-5}-\frac{n}{2}dS_{n-1}^{n-5}-\frac{n}{2}dS_{n-1}^{n-5}+S_n^{n-5} \end{equation*} \begin{equation*} =\frac{n(n-1)}{2^2}d^2\bigg[-\frac{n-2}{2}dS_{n-3}^{n-6}+S_{n-2}^{n-6}\bigg]-2\frac{n}{2}d\bigg[ -\frac{n-1}{2}dS_{n-2}^{n-6}+S_{n-1}^{n-6} \bigg]-\frac{n}{2}dS_{n-1}^{n-6}+S_{n}^{n-6} \end{equation*} \begin{equation*} =-\frac{n(n-1)(n-2)}{2^3}d^3S_{n-3}^{n-6}+\frac{n(n-1)}{2^2}d^2 S_{n-2}^{n-6}+2\bigg[\frac{n(n-1)}{2^2}d^2S_{n-2}^{n-6}-\frac{n}{2}d S_{n-1}^{n-6} \bigg]-\frac{n}{2}dS_{n-1}^{n-6}+S_{n}^{n-6} \end{equation*} \begin{equation*} =\bigg[-\frac{n(n-1)(n-2)}{2^3}d^3S_{n-3}^{n-6}\bigg]+3\bigg[\frac{n(n-1)}{2^2}d^2S_{n-2}^{n-6}-\frac{n}{2}d S_{n-1}^{n-6} \bigg]+S_{n}^{n-6} \end{equation*} \begin{eqnarray*} =-\frac{n(n-1)(n-2)}{2^3}d^3\bigg[-\frac{n-3}{2} dS_{n-4}^{n-7}+S_{n-3}^{n-7}\bigg]+3\frac{n(n-1)}{2^2}d^2\bigg(-\frac{n-2}{2} dS_{n-3}^{n-7}+S_{n-2}^{n-7}\bigg)\\[2mm]-3\frac{n}{2}d\bigg(-\frac{(n-1)}{2} dS_{n-2}^{n-7}+S_{n-1}^{n-7} \bigg)-\frac{n}{2}d S_{n-1}^{n-7}+S_{n}^{n-7} \end{eqnarray*} \begin{eqnarray*} =\frac{n(n-1)(n-2)(n-3)}{2^4}d^4 S_{n-4}^{n-7}-4\frac{n(n-1)(n-2)}{2^3}d^3S_{n-3}^{n-7}+6\frac{n(n-1)}{2^2}d^2S_{n-2}^{n-7}-4\frac{n}{2}d S_{n-1}^{n-7}+S_{n}^{n-7} \end{eqnarray*} \begin{eqnarray*} =-\frac{n(n-1)(n-2)(n-3)(n-4)}{2^5} d^5S_{n-5}^{n-8}+5\frac{n(n-1)(n-2)(n-3)}{2^4}d^4S_{n-4}^{n-8}\\[2mm]-10\frac{n(n-1)(n-2)}{2^3}d^3S_{n-3}^{n-8}+10\frac{n(n-1)}{2^2}d^2 S_{n-2}^{n-8}-5\frac{n}{2}d S_{n-1}^{n-8}+S_n^{n-8} \end{eqnarray*} \begin{equation*} \cdots \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} \cdots \end{equation*} \begin{equation*} =\sum_{i=0}^{m}\binom m i \frac{\prod_{j=0}^{i-1}(n-j)}{2^i}(-1)^id^iS_{n-i}^{n-3-m} =\sum_{i=0}^{m}\binom m i \bigg(\frac{d}{2}\bigg)^i\frac{n!}{(n-i)!}(-1)^iS_{n-i}^{n-3-m} \end{equation*} \end{proof} \begin{theorem}Solutions of Theorem $2$, or solutions of
equation $(3)$: \begin{equation} \sum_{r=1}^{k}(a+(r-1)d)^{n-1}=\frac{1}{nd}\bigg[\sum_{i=0}^{m}\binom m i \bigg(\frac{d}{2}\bigg)^i\frac{n!}{(n-i)!}(-1)^iS_{n-i}^{n-3-m}\bigg] \end{equation} \end{theorem} \begin{proof}This follows easily from Cramer's rule for solving linear systems of equations: \begin{equation*} \sum_{r=1}^{k}(a+(r-1)d)^{n-1}=\frac{(n-1)!d^{(n-1)}}{n!d^n}S^{(n-3)}_n=\frac{1}{nd}S^{(n-3)}_n \end{equation*} \begin{equation*} =\frac{1}{nd}\bigg[\sum_{i=0}^{m}\binom m i \bigg(\frac{d}{2}\bigg)^i\frac{n!}{(n-i)!}(-1)^iS_{n-i}^{n-3-m}\bigg] \end{equation*} \begin{equation*} \Longrightarrow \sum_{r=1}^{k}(a+(r-1)d)^{n-1}=\frac{1}{nd}\bigg[\sum_{i=0}^{m}\binom m i \bigg(\frac{d}{2}\bigg)^i\frac{n!}{(n-i)!}(-1)^iS_{n-i}^{n-3-m}\bigg] \end{equation*} \end{proof} \begin{corollary} \begin{equation} \frac{d}{(n-1)!(n-3)!}\sum_{r=1}^{k}(a+(r-1)d)^{n-1}=\sum_{i=0}^{n-3}\frac{1}{i!(n-i)!(n-3-i)!} \bigg(\frac{d}{2}\bigg)^i(-1)^iS_{n-i} \end{equation} where \begin{equation*} S_{n-i}^0=S_{n-i}=\frac{n-i}{2}d^{n-i-1}J-\frac{n-i}{2}d^{n-i-2}J^2-d^{n-i-1}J+J^{n-i} \end{equation*} \begin{equation} =\bigg(\frac{n-i}{2}-1\bigg)kd^{n-i}-\frac{n-i}{2}d^{n-i-2}((a+kd)^2-a^2)+(a+kd)^{n-i}-a^{n-i} \end{equation} \end{corollary} \begin{proof} This follows from equation $(5)$ for $m=n-3$. \end{proof} \begin{example} Solve for \begin{equation*} \sum_{r=1}^{k}r^{3000} \end{equation*} \end{example} {\bf{Solution.}} From equation $(5)$ with $a=d=1$, we have \begin{equation*} \sum_{r=1}^{k}r^{3000}=\frac{1}{3001}\bigg[\sum_{i=0}^{m}\binom m i \frac{1}{2^i}\frac{3001!}{(3001-i)!}(-1)^iS_{3001-i}^{3001-3-m} \bigg] \end{equation*} \begin{equation*} =3000!\bigg[\sum_{i=0}^{2998}\binom {2998} i \frac{1}{2^i}\frac{1}{(3001-i)!}(-1)^iS_{3001-i}^{0} \bigg] \end{equation*} where, for $i=0,1,\cdots,2998$, we have \begin{equation*} S_{3001-i}^0=S_{3001-i}=\bigg(\frac{3001-i}{2}-1\bigg)k-\frac{3001-i}{2}(k^2+2k)+(1+k)^{3001-i}-1 \end{equation*} \begin{example}Let $d=1$ and $a=x+\mathrm{i}y$, where $x,y\in\mathbb{R}$ and $\mathrm{i}=\sqrt{-1}$. Then \begin{equation*} \frac{1}{(n-1)!(n-3)!}\sum_{r=1}^{k}(x+\mathrm{i}y+r-1)^{n-1}=\sum_{i=0}^{n-3}\frac{1}{i!(n-i)!(n-3-i)!} \bigg(\frac{1}{2}\bigg)^i(-1)^iS_{n-i} \end{equation*} where \begin{equation*} S_{n-i}=\bigg(\frac{n-i}{2}-1\bigg)k-\frac{n-i}{2}(2(x+\mathrm{i}y)k+k^2)+(x+\mathrm{i}y+k)^{n-i}-(x+\mathrm{i}y)^{n-i} \end{equation*} \end{example} \begin{theorem}Solutions of Theorem $3$. \begin{equation} \sum_{r=1}^{k}(-1)^{(r-1)}(a+(r-1)d)^{n-1}=\frac{1}{nd}\bigg[\sum_{i=0}^{m}\binom m i \bigg(\frac{d}{2}\bigg)^i\frac{n!}{(n-i)!}(-1)^{(i+1)}S_{n-i}^{n-3-m}\bigg] \end{equation} \end{theorem} \begin{proof} The proof is analogous to that of the solutions of Theorem $2$ above. \end{proof} \begin{corollary} \begin{equation} \sum_{r=1}^{k}(-1)^{(r-1)}(a+(r-1)d)^{n-1}=\frac{1}{nd}\bigg[\sum_{i=0}^{n-3}\binom {n-3} i \bigg(\frac{d}{2}\bigg)^i\frac{n!}{(n-i)!}(-1)^{(i+1)}S_{n-i}\bigg] \end{equation} where \begin{equation*} S_{n-i}^0=S_{n-i}=\frac{n-i}{2}d^{n-i-1}J-\frac{n-i}{2}d^{n-i-2}J^2-d^{n-i-1}J+J^{n-i} \end{equation*} \begin{equation} =\bigg(\frac{n-i}{2}-1\bigg)kd^{n-i}+\frac{n-i}{2}d^{n-i-2}((a+kd-d)^2-(a-d)^2)+(-1)^{(n-i-1)}[(a+kd-d)^{n-i}-(a-d)^{n-i}] \end{equation} \end{corollary} \begin{proof} This follows from equation $(8)$ for $m=n-3$. \end{proof}
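The identity of Theorem $4$, together with the relation $\sum_{r=1}^{k}(a+(r-1)d)^{n-1}=S^{(n-3)}_n/(nd)$ used in the proof above, lends itself directly to implementation. The following minimal sketch (in Python; the function name and the test values are our own choices for illustration) seeds the elimination with $S_m$ from equation $(7)$, applies the recursion round by round, and cross-checks the result against direct summation:
\begin{verbatim}
from math import comb

def power_sum(n, k, a, d):
    # sum_{r=1}^{k} (a+(r-1)d)**(n-1) for n >= 3, via the recursion
    #   S^{j-2}_m = -C(m, j-1) * (1/j) * d**(m-j) * S^{j-3}_j + S^{j-3}_m
    # of Theorem 4 and the relation  sum = S^{(n-3)}_n / (n d).
    # Seed values S_m = S^0_m are taken from equation (7).
    S = {m: (m/2 - 1)*k*d**m
            - (m/2)*d**(m-2)*((a + k*d)**2 - a**2)
            + (a + k*d)**m - a**m
         for m in range(3, n + 1)}
    for j in range(3, n):                  # elimination rounds
        for m in range(j + 1, n + 1):      # update rows below pivot row j
            S[m] += -comb(m, j - 1) * d**(m - j) * S[j] / j
    return S[n] / (n * d)

# spot checks against direct summation (test values are arbitrary;
# complex a is allowed, cf. Example 2)
for (n, k, a, d) in [(4, 2, 0, 1), (5, 3, 1, 1),
                     (9, 7, 2.0, 3.0), (6, 5, 2 + 1j, 1)]:
    direct = sum((a + (r - 1)*d)**(n - 1) for r in range(1, k + 1))
    print(n, k, a, d, abs(power_sum(n, k, a, d) - direct))
\end{verbatim}
Up to floating-point rounding, the printed differences vanish.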
\section{ISR effects without invisible particle emission}\label{sec:isrorig} We first review briefly the main results of ref.~\cite{Papaefstathiou:2009hp}. Consider the emission of ISR partons from incoming partons $a$ and $b$, as in Fig.~\ref{fig:feynman}. If all the products of the hard subprocess and all the ISR emitted at angles greater than $\theta_c$ are detected, the resummed differential cross section may be written as: \begin{equation}\label{eq:Sigma} M^2\frac{d\sigma_{ab}}{dM^2 dY} = \int dx_1\,dx_2\,K_{a'a}(x_1/\bar x_1) f_a(\bar x_1,Q_c)\,K_{b'b}(x_2/\bar x_2)f_b(\bar x_2,Q_c)\,\hat\sigma_{a'b'}(x_1x_2S)\;\;, \end{equation} or equivalently \begin{equation}\label{eq:SigmaDef} S\frac{d\sigma_{ab}}{dM^2 dY} = \int dz_1\,dz_2\,K_{a'a}(z_1) f_a(\bar x_1,Q_c)\,K_{b'b}(z_2)f_b(\bar x_2,Q_c)\,\hat\sigma_{a'b'}(z_1z_2M^2)\;\;, \end{equation} where $M$ and $Y$ are the invariant mass and rapidity of the detected system (the `visible' mass and rapidity) and \begin{equation}\label{eq:xbar} \bar x_1 = \frac{M}{\sqrt S}e^{Y}\;,\;\;\; \bar x_2 = \frac{M}{\sqrt S}e^{-Y}\;, \end{equation} $\sqrt S$ being the overall c.m.\ energy. The hard subprocess cross section $\hat\sigma_{a'b'}$ is evaluated at c.m.\ energy squared $Q^2=x_1x_2S=z_1z_2M^2$, where $z_i=x_i/\bar x_i$. The parton distribution functions (PDFs) $f_{a,b}$ are evaluated at the lower scale $Q_c\sim \theta_c Q$.\footnote{We find that results are somewhat sensitive to the constant of proportionality here. We actually use $Q_c = Q\exp(-\eta_{\rm max})$ where $\eta_{\rm max}=-\ln\tan(\theta_c/2)$ is the maximum visible pseudorapidity.} The kernel functions $K_{a'a}$ and $K_{b'b}$ describe the evolution of the PDFs from scale $Q_c$ to $Q$. They satisfy an evolution equation like that of the parton distributions themselves: \begin{equation}\label{eq:evolK} Q\frac{\partial}{\partial Q}K_{b'b}(z) =\frac{\alpha_{\rm S}(Q)}\pi\int\frac{dz'}{z'} P_{b'a}(z') K_{ab}(z/z')\;, \end{equation} with the initial condition that $K_{ab}(z)=\delta_{ab}\delta(1-z)$ at $Q=Q_c$. We describe in appendix~\ref{app:mellin} the method that we used to compute the kernel functions. The main conclusions from the study in ref.~\cite{Papaefstathiou:2009hp} are that, in the absence of invisible particles, the above analytical results are in good agreement with those of Monte Carlo simulations, and that the distribution of the new variable (\ref{eq:smin_def}) is indeed determined primarily by that of the visible mass $M$. \section{ISR effects including invisible particle emission}\label{sec:isrinvis} Suppose now that an invisible 4-momentum $p^{\mu}_{inv}$ is emitted from the hard subprocess. If we define the total lab-frame 4-momentum of the incoming partons $a$ and $b$ as $P^{\mu} = (E,\vec{P})$, \begin{equation}\label{eq:total4mom} P^{\mu} = \frac{1}{2} \sqrt{S} [ (\bar{x}_1 + \bar{x}_2), 0, 0, (\bar{x}_1 - \bar{x}_2) ] \;, \end{equation} then the visible 4-momentum will be $P^{\mu} - p^{\mu}_{inv}$. By definition, the visible mass is then given by: \begin{equation}\label{eq:vismassdef} M^2 = (P - p_{inv})^2 = P^{\mu} P_{\mu} + p^{\mu}_{inv} p_{inv,\mu} - 2 p^{\mu}_{inv} P_{\mu}\;. \end{equation} Equation~(\ref{eq:vismassdef}) demonstrates the interplay between two effects: on the one hand, ISR increases the `true' scale of the hard process, $Q$, to the `apparent' scale $M$ by contaminating the detector with extra particles; on the other hand, invisible particle emission decreases $M$ through the loss of particles.
In the case of gluino pair-production both effects are equally important, as we will show. Substituting from eq.~(\ref{eq:total4mom}) into eq.~(\ref{eq:vismassdef}) and defining $p^{\pm}_{inv} = p^0_{inv} \pm p^3_{inv}$, we obtain \begin{equation}\label{eq:vismass3} M^2 = \bar{x}_1 \bar{x}_2 S + m_{inv}^2 - \sqrt{S} [ \bar{x}_1 p^-_{inv} + \bar{x}_2 p^+_{inv} ]\;, \end{equation} where $m_{inv}$ represents the total invariant mass of the invisibles, $m_{inv}^2 = p^{\mu}_{inv} p_{inv,\mu}$. The momenta $p^\mu_{inv}$ are defined in the lab frame, relative to which the c.m.\ frame of the hard subprocess is boosted by an amount defined by the momentum fractions $x_1$ and $x_2$ of the partons entering the subprocess. This implies that the $p^{\pm}_{inv}$ transform as: \begin{eqnarray}\label{eq:pplusminus} p^+_{inv} = \sqrt{\frac{x_1}{x_2}} q^+_{inv} \;,\;\;\; p^-_{inv} = \sqrt{\frac{x_2}{x_1}} q^-_{inv}\;, \end{eqnarray} where $q^{\pm}_{inv} = q^0_{inv} \pm q^3_{inv}$, defined in terms of the invisible momentum, $q^{\mu}_{inv}$, in the c.m.\ frame of the hard subprocess. Substituting the expressions of eq.~(\ref{eq:pplusminus}) into eq.~(\ref{eq:vismass3}), we find an expression for the visible invariant mass: \begin{eqnarray} M^2 = m_{inv}^2 + \bar{x}_1 \bar{x}_2S \left[ 1 - z_1 f^+_{inv} - z_2 f^-_{inv} \right]\;, \label{eq:invmsq} \end{eqnarray} where we have defined $f^{\pm}_{inv} = q^{\pm}_{inv}/Q$ and used $Q^2 = \bar{x}_1 \bar{x}_2 z_1 z_2 S$. We may now solve eq.~(\ref{eq:invmsq}) to obtain $Q^2$ in terms of $M^2$: \begin{eqnarray}\label{eq:s} Q^2 = \frac{ z_1 z_2 (M^2 - m_{inv}^2)}{1 - z_1 f^+_{inv} - z_2 f^-_{inv}}\;. \end{eqnarray} The above expression for the hard subprocess scale now becomes the argument of the parton-level cross section, $\hat{\sigma}_{a'b'}$, in eq.~(\ref{eq:SigmaDef}): \begin{equation}\label{eq:SigmaDefinv} S\frac{d\sigma_{ab}}{dM^2 dY} = \int dz_1\,dz_2\,K_{a'a}(z_1)f_a(\bar x_1,Q_c)\,K_{b'b}(z_2)f_b(\bar x_2,Q_c) \,\hat\sigma_{a'b'}\left( \frac{ z_1 z_2 (M^2 - m_{inv}^2)}{1 - z_1 f^+_{inv} - z_2 f^-_{inv}} \right)\,. \end{equation} The functions $f^{\pm}_{inv}$, which are related to the invisible particle four-momenta, remain to be determined. The visible system rapidity, $Y$, is also modified by the presence of invisible particles: \begin{equation}\label{eq:Yinv} Y = \frac{1}{2} \log \left( \frac{\bar{x}_1 ( 1 - z_1 f^+_{inv} )} { \bar{x}_2 ( 1 - z_2 f^-_{inv} ) } \right)\;, \end{equation} and therefore eqs.~(\ref{eq:xbar}) for $\bar{x}_{1,2}$ become \begin{eqnarray}\label{eq:xbarinv} \bar x_1 &=& \sqrt{\frac{(M^2 - m_{inv}^2)(1 - z_2 f^-_{inv})}{S(1 - z_1 f^+_{inv} - z_2 f^-_{inv})(1- z_1 f^+_{inv})}} e^{Y}\;,\nonumber\\ \bar x_2 &=& \sqrt{\frac{(M^2 - m_{inv}^2)(1 - z_1 f^+_{inv})}{S(1 - z_1 f^+_{inv} - z_2 f^-_{inv})(1- z_2 f^-_{inv})}} e^{-Y}\;. \end{eqnarray} The kinematic constraints restrict $Q^2$ to be greater than the threshold energy squared for the process, and the true invariant mass squared, $M_{true}^2 \equiv \bar{x}_1 \bar{x}_2 S = Q^2 / (z_1 z_2)$, to be greater than the visible invariant mass squared, $M^2$. These result in the following constraints on $Q^2$: \begin{equation} Q^2 > Q_{threshold}^2 \;,\;\;\; Q^2 > z_1 z_2 M^2 \;.
\end{equation}
\subsection{Single-invisible decays}\label{sec:1inv} The benchmark scenario for a single invisible decay originating from the hard process is $t\bar{t}$ production in which one of the two tops decays into $bqq'$ (hadronic) and the other into $b\ell\nu$ (semi-leptonic), the neutrino comprising the missing four-momentum. Excluding the proton remnants, we assume that all other particles within the pseudorapidity coverage are detected. We will refer to the neutrino as the invisible particle and the $W$ as the intermediate particle in the $t\bar{t}$ case, but the treatment is readily applicable to the gluino case where the invisible particle is the $\chi_1^0$ and the intermediate particle is a squark (treated in section~\ref{sec:2inv}). To calculate the functions $f^{\pm}_{inv}$ and obtain $Q^2$, we need to calculate the neutrino four-momentum in the hard process frame. This is done by choosing the neutrino four-momentum in the frame of its parent $W$ and then applying two successive Lorentz boosts: one going from the $W$ frame to the top frame, and one from the top frame to the hard process frame. The decay chain is shown in Fig.~\ref{fig:decchain}. Each of these boosts involves two angular variables which originate from the `decay' of the parent particle. Hence the four-momentum $q^{\mu}_{inv}$ of the neutrino may be written as
\begin{equation}\label{eq:qinv} q^{\mu}_{inv} = \Lambda^{\mu}_\kappa\left(Q, \hat{\theta}, \hat{\phi} \right) \Lambda^{\kappa}_\lambda \left( \tilde{\theta} , \tilde{\phi} \right) \bar{p}^\lambda_\nu(\bar{\theta}, \bar{\phi})\;, \end{equation}
where the $\Lambda$'s are Lorentz boost matrices and where quantities with a hat refer to the hard process frame, quantities with a tilde refer to the top frame and quantities with a bar refer to the $W$ frame. The angles $\theta$ and $\phi$ represent the usual polar angles, defined with respect to the direction of the `sister' particle (see Fig.~\ref{fig:decchain}). For example, in the case $W^+ \rightarrow \ell ^+ \nu_\ell$, where the $W^+$ was produced from the top decay along with a bottom quark, the angles $(\bar{\theta},\bar{\phi})$ are defined with respect to the direction of motion of the $b$ in the $W^+$ frame. The two boost vectors have magnitudes given by $| \vec{\beta}_i| = |\vec{p}_i| / E_i$ ($i=t,W$), the ratio of the parent's 3-momentum magnitude to its energy. The boosts, as well as the magnitude of the invisible particle four-momentum, can be obtained by considering the kinematics in each frame:
\begin{align} &\bar{p}^\lambda_\nu(\bar{\theta}, \bar{\phi}) = \frac{m_W}{2} ( 1, \vec{\bar{r}}) \;,\\\nonumber &\vec{\beta}_W = \frac{m_t^2-m_W^2}{m_t^2 + m_W^2} \vec{\tilde{r}} \;,\\\nonumber &\vec{\beta}_t = \sqrt{ 1 - \frac{4 m_t^2}{Q^2}} \vec{\hat{r}} \;, \end{align}
where $\vec{r} = (\sin\theta \cos \phi, \sin\theta \sin \phi, \cos \theta)$ is the unit vector in spherical polar coordinates in the appropriate frame and $m_{W}$, $m_t$ are the $W$ and top quark masses respectively. The functions $f^{\pm}_{inv}$ are then obtained as $f^{\pm}_{inv} = q^{\pm}_{inv}/Q$. Since the boost $\vec{\beta}_t$ depends on $Q$, the functions $f^{\pm}_{inv}$ are themselves functions of $Q^2$, giving an implicit equation for $Q^2$.
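The whole single-invisible construction, including the numerical solution of this implicit equation (described in detail below), can be summarised in a short sketch. This is an illustration of ours rather than published code: the masses are representative, and SciPy's \texttt{brentq} stands in for the bracketing root finder used in the actual calculation:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

mt, mW = 173.0, 80.4   # illustrative top and W masses in GeV

def unit(theta, phi):
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])

def boost(p, beta):
    # boost the 4-vector p = (E, px, py, pz) by velocity vector beta
    b2 = beta @ beta
    if b2 == 0.0:
        return p
    g = 1.0/np.sqrt(1.0 - b2)
    bp = beta @ p[1:]
    return np.concatenate(([g*(p[0] + bp)],
                           p[1:] + ((g - 1.0)*bp/b2 + g*p[0])*beta))

def q_inv(Q, ang):
    # eq. (qinv): nu momentum, W frame -> top frame -> hard frame
    th_hat, ph_hat, th_til, ph_til, th_bar, ph_bar = ang
    p_nu = 0.5*mW*np.concatenate(([1.0], unit(th_bar, ph_bar)))
    beta_W = (mt**2 - mW**2)/(mt**2 + mW**2)*unit(th_til, ph_til)
    beta_t = np.sqrt(1.0 - 4.0*mt**2/Q**2)*unit(th_hat, ph_hat)
    return boost(boost(p_nu, beta_W), beta_t)

def solve_Q2(M, z1, z2, ang, S=14000.0**2):
    # root of Q^2 = z1 z2 M^2/(1 - z1 f+ - z2 f-), one massless invisible
    def g(Q2):
        q = q_inv(np.sqrt(Q2), ang)
        fp, fm = (q[0] + q[3])/np.sqrt(Q2), (q[0] - q[3])/np.sqrt(Q2)
        return Q2 - z1*z2*M**2/(1.0 - z1*fp - z2*fm)
    lo, hi = 4.0*mt**2*(1.0 + 1e-9), z1*z2*S
    if g(lo)*g(hi) > 0.0:
        return None    # this configuration is kinematically disallowed
    return brentq(g, lo, hi)

print(solve_Q2(M=500.0, z1=0.9, z2=0.8,
               ang=(0.7, 0.1, 1.2, 2.0, 0.5, 3.0)))
\end{verbatim}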
To make this more explicit, we re-write eq.~(\ref{eq:s}):
\begin{equation}\label{eq:implicits} Q^2 = \frac{ z_1 z_2 \left[M^2 - m_{inv}^2(Q^2, \Omega)\right]}{1 - z_1 f^+_{inv}(Q^2, \Omega) - z_2 f^-_{inv} (Q^2, \Omega)} \;, \end{equation}
and analogously for eq.~(\ref{eq:xbarinv}), where $\Omega$ represents the set of all angular variables. In the present case $m_{inv}(Q^2, \Omega)=m_\nu\simeq 0$, but for multiple invisible particles it becomes a nontrivial function of the indicated variables. Equation (\ref{eq:implicits}) needs to be solved numerically for $Q^2$, for each set $(z_1, z_2, \Omega)$, in the interval $(4m_{t/\tilde{g}}^2, z_1 z_2 S)$, where $S$ is the square of the proton centre-of-mass energy, along with the restriction that the visible invariant mass should be lower than the `true' invariant mass, $M \leq M_{true}$. The numerical solution was found using the Van Wijngaarden--Dekker--Brent method~\cite{brent, gslmanual}, a bracketing method for finding roots of one-dimensional equations. Since $Q$ is not uniquely determined for each $M$, different values of the `true' centre-of-mass energy $Q$ contribute to the cross section. Note that not all possible configurations $(z_1, z_2, \Omega)$ are kinematically allowed to contribute to the cross section at $M$ and hence some configurations do not yield roots of eq.~(\ref{eq:implicits}). Once $Q^2$ is obtained, the parton-level cross section for the hard process partons, $\hat{\sigma}_{a'b'}(Q^2)$, is calculated. This result is then multiplied by the parton distribution functions for the incoming partons, $f_{a,b}(\bar{x}_{1,2}, Q_c)$, and the kernels for evolution from incoming partons $a$ and $b$ to hard process partons $a'$ and $b'$ ($K_{a'a}(z_1)$ and $K_{b'b}(z_2)$). We then integrate over all possible values of $z_1$ and $z_2$, according to eq.~(\ref{eq:SigmaDefinv}). Finally, to obtain the full resummed result we have to integrate over the distribution of the angular variables $\Omega$.
\begin{figure} \centering \includegraphics[scale=0.70]{figure2.eps} \caption{The sequential two-body decay chain under consideration in the invisible particle treatment. The relevant production angles in the parent centre-of-mass frame are also shown in parentheses.} \label{fig:decchain} \end{figure}
Notice that the visible invariant mass distribution becomes non-zero below the threshold for production, $M < 2 m_{t/\tilde{g}}$, owing to the loss of invisible particles.
\subsection{Double-invisible decays}\label{sec:2inv} We now turn to the case where both particles produced in the hard process decay invisibly. We consider sequential decays of the gluino: $\tilde{g} \rightarrow \tilde{q} q \rightarrow \chi^0_1 qq$. Although this decay mode is generally not the dominant one, it is useful for illustrating the procedure.
We extend the treatment given in the semi-leptonic/hadronic top case by writing out functions related to the two invisible particle four-momenta in the decay chains (which we call $\chi$ and $\chi'$),
\begin{eqnarray}\label{eq:qnuqnubar} q^{\mu}_{\chi} = \Lambda^{\mu}_\kappa\left(Q, \hat{\theta}, \hat{\phi} \right) \Lambda^{\kappa}_\lambda \left( \tilde{\theta} , \tilde{\phi} \right) \bar{p}^\lambda_\chi(\bar{\theta}, \bar{\phi})\;,\\ q^{\mu}_{\chi'} = \Lambda^{\mu}_\kappa\left(Q, \hat{\theta'}, \hat{\phi'} \right) \Lambda^{\kappa}_\lambda \left( \tilde{\theta'} , \tilde{\phi'} \right) \bar{p}^\lambda_{\chi'}(\bar{\theta'}, \bar{\phi'})\;, \end{eqnarray}
where the primed quantities now distinguish between the two invisibles. Since both of these four-vectors are defined in the hard subprocess frame, we have simply
\begin{equation}\label{eq:finv2} f^{\pm}_{inv} = \frac{1}{Q}\left( q^{\pm}_{\chi} + q^{\pm}_{\chi'} \right)\;. \end{equation}
The rest of the treatment is identical to the one-invisible case: an implicit equation has to be solved to obtain $Q^2$ for each $(z_1, z_2, \Omega)$ set and then an integral over $\Omega$ is taken to obtain the resummed result.
\subsection{Angular distributions}\label{sec:angular} The distributions of the angular variables $\Omega = (\hat{\theta},\hat{\phi}, \tilde{\theta}, \tilde{\phi}, \bar{\theta}, \bar{\phi})$, appearing in the treatment of invisibles given in the previous sections, are process-dependent. They represent the angles at which the daughter particle is emitted in the frame of the parent particle. We investigated the angular distributions using a Monte Carlo event generator (\texttt{Herwig++} 2.4.0~\cite{Bahr:2008pv}) and subsequently used the results in calculating the $f^{\pm}_{inv}$ functions. The results for SPS1a gluino pair-production are shown in Fig.~\ref{fig:ggangles}, where the uniform distributions are shown for comparison (red horizontal line). Figure~\ref{fig:ttangles} shows the distributions as obtained for top pair-production. The neutrino angle in the $W$ frame is also compared to the analytic calculation. As expected, all the $\phi$ angles, in both cases, were found to be uniform (not shown). The form of all the distributions can be justified using general spin considerations:
\begin{itemize} \item[$\hat{\theta}_i$:] The angular distribution of the angle $\hat{\theta}_i$ at which the fermions are produced in the hard process frame is expected to have the form $\sim 1 + \beta \cos^2 \hat{\theta}_i$, where $\beta$ is a process-dependent constant. \item[$\tilde{\theta}_i$:] The angle $\tilde{\theta}_i$ is defined between the direction of the daughter boson ($W$ or $\tilde{q}$) and the direction of polarization of the parent ($t$ or $\tilde{g}$). The angular distribution for a spin-up fermion parent is then given by~\cite{Hubaut:2005er}:
\begin{equation} \frac{1}{N_\uparrow} \frac{\mathrm{d}N_\uparrow}{\mathrm{d} \cos \tilde{\theta} _i} = \frac{1}{2} ( 1 + P \alpha _i \cos \tilde{\theta} _i ) \;\;, \end{equation}
where $\alpha _i$ is a constant and $P$ is the modulus of the polarization of the parent. Since the production processes for both $t\bar{t}$ and $\tilde{g} \tilde{g}$ are parity conserving, there is also an equal spin-down ($N_\downarrow$) contribution to the total distribution with the sign of $\alpha _i$ reversed. This results in a uniform distribution for $\cos \tilde{\theta}_i$.
\begin{figure}[htb] \centering \vspace{1.0cm} \hspace{2.9cm} \begin{picture}(300,120) \put(0,0){\includegraphics[scale=0.34, angle=90]{gg_hat_costheta_091009.eps}} \put(150,0){\includegraphics[scale=0.34, angle=90]{gg_tilde_costheta_091009.eps}} \put(300,0){\includegraphics[scale=0.34, angle=90]{gg_bar_costheta_091009.eps}} \put(-25.5,-8.5){\small{$\hat{}$}} \put(124.5,-8.5){\small{$\tilde{}$}} \put(274.5,-8.5){\small{$\bar{}$}} \end{picture} \caption{Monte Carlo results for the gluino pair-production decay chain angles. From left to right: the production angle of the gluino in the hard process frame, the angle of the outgoing squark in the gluino frame and the angle of the outgoing neutralino in the squark frame. The uniform distributions are shown for comparison.} \label{fig:ggangles} \end{figure} \item[$\bar{\theta}_i$:] In gluino pair-production, the decay products of the squark, $\tilde{q}$, which is a scalar, are uniformly distributed in $\cos \bar{\theta}$. In top pair-production, on the other hand, the decay $W\rightarrow \ell \nu _{\ell}$ is parity-violating and the distribution of $\cos \bar{\theta}$ is forward-backward asymmetric in the $W$ frame~\cite{qcdcollider}. The angle $\bar{\theta}$ (sometimes called $\Psi$, see e.g.~\cite{Aad:2009wy}) is used experimentally to infer helicity information on the $W$. The distribution may be written as \begin{eqnarray} \frac{1}{N} \frac{\mathrm{d} N} { \mathrm{d} \cos \bar{\theta} } = \frac{3}{2} \left[ F_0 \left( \frac{ \sin \bar{\theta} } { \sqrt{2} } \right)^2 + F_L \left( \frac{ 1 - \cos \bar{ \theta} } { 2} \right) ^2 + F_R \left( \frac{ 1 + \cos \bar{ \theta} } { 2} \right) ^2 \right]\;\;,\nonumber \\ \end{eqnarray} where $F_L$, $F_R$ and $F_0$ are the probabilities for left-handed, right-handed and longitudinal helicities of the $W$ in top quark decay respectively. The SM predictions, $(F_L, F_R, F_0) = (0.304, 0.001, 0.695)$, yield the curve shown on the right in Fig.~\ref{fig:ttangles}. \end{itemize} \begin{figure}[htb] \centering \vspace{1.0cm} \hspace{2.9cm} \begin{picture}(300,120) \put(0,0){\includegraphics[scale=0.34, angle=90]{tt_hat_costheta_091009.eps}} \put(150,0){\includegraphics[scale=0.34, angle=90]{tt_tilde_costheta_081209.eps}} \put(300,0){\includegraphics[scale=0.34, angle=90]{tt_bar_costheta_091009.eps}} \put(-25.5,-8.5){\small{$\hat{}$}} \put(124.5,-8.5){\small{$\tilde{}$}} \put(274.5,-8.5){\small{$\bar{}$}} \end{picture} \caption{Monte Carlo results for the top pair-production decay chain angles. From left to right: the production angle of the top in the hard process frame, the angle of the outgoing $W$ boson in the top frame and the angle of the outgoing neutrino in the $W$ frame. The uniform distributions are shown for comparison. The neutrino angle in the $W$ frame is also compared to the analytic calculation. } \label{fig:ttangles} \end{figure} The spins of the two produced fermions (tops or gluinos) are correlated and this may cause a degree of correlation between the distributions of particles in the decay chains. We investigated whether these correlations play an important role in the calculation of the invisible particle effects on the visible mass. By comparing the invariant mass distributions with and without the spin correlations in the Monte Carlo we concluded that the effect is small in both top and gluino pair-production and can be safely neglected. 
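As an illustration, the $\cos\bar{\theta}$ density above is easily sampled by rejection. The following sketch is ours; it uses the quoted SM helicity fractions, and any bounded density on $[-1,1]$ could be treated the same way:
\begin{verbatim}
import numpy as np

def pdf(c, F0=0.695, FL=0.304, FR=0.001):
    # normalised cos(theta_bar) density for given W helicity fractions
    return 1.5*(F0*(1.0 - c**2)/2.0
                + FL*((1.0 - c)/2.0)**2
                + FR*((1.0 + c)/2.0)**2)

def sample(n, seed=0):
    # simple rejection sampling against a flat envelope
    rng = np.random.default_rng(seed)
    fmax = pdf(np.linspace(-1.0, 1.0, 2001)).max()
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, fmax) < pdf(c):
            out.append(c)
    return np.array(out)

cbar = sample(100000)
print(cbar.mean())   # about -0.15: the SM fractions skew backwards
\end{verbatim}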
\section{Results}\label{sec:invresults} We present the resummed distributions obtained for $t\bar{t}$ and $\tilde{g}\tilde{g}$ production according to eq.~(\ref{eq:SigmaDefinv}). All results are for the LHC at design energy, i.e.\ $pp$ collisions at $\sqrt s = 14$ TeV. We have integrated over the visible system rapidity, $Y$, in the range $|Y|<5$. We first compare our results to those obtained using the \texttt{Herwig++} event generator at parton level (i.e.\ no hadronization or underlying event) and excluding the proton remnants.\footnote{We verified using the event generator that the contribution of the proton remnants to the total invariant mass in the considered rapidity range is negligible.} In sections~\ref{sec:hadronization} and \ref{sec:MPI} we examine the effects of hadronization and the underlying event. Parton-level top and gluino pair-production cross section formulae are given in appendix~\ref{app:cross-sections}. The PDF set used both in the calculation and in \texttt{Herwig++} is the MRST LO** (MRSTMCal) set~\cite{Sherstnev:2007nd, Sherstnev:2008dm}.
\subsection{Top quark pair production}\label{sec:topresults} We present resummed results in comparison to Monte Carlo for Standard Model $t\bar{t}$ production, where we include particles with pseudorapidity up to $\eta_{\rm max} = 5$. In Fig.~\ref{fig:tte5} we show separate results for combinations of hadronic and semi-leptonic decays of the top, leading to zero, one or two invisible neutrinos from the hard process. The effect of the invisibles in both the fully semi-leptonic case and the hadronic/semi-leptonic case is small compared to the effects of hadronization, to be discussed in section~\ref{sec:hadronization}. The differences between the Monte Carlo and resummed curves in Fig.~\ref{fig:tte5} may be attributed to sensitivity to the behaviour of the PDFs and parton showering at low scales, and the precise definition of $Q_c$ in terms of $\eta_{\rm max}$, since $Q_c$ can be as low as $2 m_t \times e^{-5} \sim 2~{\rm GeV}$ in the case of $t\bar{t}$ production.
\begin{figure} \centering \vspace{0.5cm} \includegraphics[scale=0.34, angle=90]{plot_1tte5.eps} \hspace{0.5cm} \includegraphics[scale=0.34, angle=90]{plot_2tte5.eps} \caption{The $t\bar{t}$ visible mass distributions for a pseudorapidity cut $\eta_{\rm max}=5$. Left: comparing hadronic (no invisibles) and semi-leptonic (one invisible) decays. Right: comparing hadronic (no invisibles) and fully leptonic (two invisibles) decays. The leading order $t\bar{t}$ invariant mass distribution is shown (red dot-dashes) for comparison.} \label{fig:tte5} \end{figure}
\subsection{Gluino pair production}\label{sec:gluinoresults} We focus on the SPS1a point~\cite{Allanach:2002nj}, which has gluino and lightest neutralino masses $m_{\tilde{g}} = 604.5 ~{\rm GeV}$ and $m_{\chi_1^0} = 97.0 ~{\rm GeV}$ respectively (and see table~\ref{tb:masses} for the squark masses). For simplicity we set the squark mass in the invisible particle treatment to $550~{\rm GeV}$. We also present results for a modified SPS1a point, with $m_{\tilde{g}} = 800 ~{\rm GeV}$. In this process only the two-invisibles case is realistic, but for comparison we also show results for no invisibles, i.e.\ imagining that the two lightest neutralinos are also detected.
\begin{table}[htb] \begin{center} \begin{tabular} {|c|c|c|c|} \hline Particle& Mass (GeV)& Particle& Mass (GeV) \\ \hline $\tilde{g}$&604.5&$\tilde{s}_L$&570.7 \\ \hline $\chi _1 ^0$& 97.0&$\tilde{s}_R$&547.9 \\ \hline $\tilde{u}_L$&562.3&$\tilde{b}_1$&515.3 \\ \hline $\tilde{u}_R$&548.2&$\tilde{b}_2$&547.7 \\ \hline $\tilde{d}_L$&570.7&$\tilde{t}_1$&400.7 \\ \hline $\tilde{d}_R$&547.9&$\tilde{t}_2$&586.3 \\ \hline \end{tabular} \end{center} \caption{The relevant particle masses in the supersymmetric model used in the invisible study, SPS1a. The modified SPS1a point differs in that it has $m_{\tilde{g}} = 800 ~{\rm GeV}$.} \label{tb:masses} \end{table}
When $\eta_{\rm max} = 5$ or $3$, there is fairly good agreement between the Monte Carlo and resummation predictions in both the two-invisibles and no-invisibles cases, and for both gluino masses, as shown in Figs.~\ref{fig:gge53} and~\ref{fig:gg800e53}, where one should compare the dashed histograms (Monte Carlo) to the solid curves of the same colour (resummation). The shift in the peak of the visible mass distribution in going from no to two invisibles is much larger than that in top pair production, amounting to 600--700 GeV, roughly independent of $\eta_{\rm max}$ and the gluino mass. This results mainly from the higher masses of the intermediate particles in the decays ($m_{\tilde q}\simeq 550$ GeV vs. $m_W=80$ GeV), which imply a higher energy release, rather than the masses of the invisible particles themselves ($m_{\chi_1^0} = 97$ GeV vs. $m_\nu=0$). One of the assumptions of the resummation is that all the visible hard process decay products are detected, which is not true when the maximum pseudorapidity $\eta_{\rm max}$ is restricted to lower values. When $\eta_{\rm max} \sim 2$ in the Monte Carlo analysis, a significant number of hard process particles begin to be excluded and hence the curves shift to lower values than the resummed predictions. Figure~\ref{fig:gprodrap} shows the pseudorapidity distribution of the decay products of the gluino at parton level for $m_{\tilde{g}} = 604.5 ~{\rm GeV}$. For the case shown, cuts of $\eta_{\rm max} = 5,3,2$ and $1.4$ correspond to exclusion of, respectively, $\sim$0.002\%, 1.1\%, 7.5\% and 20.0\% of the gluino decay products from the detector. The effect of this appears in Figs.~\ref{fig:gge214} and~\ref{fig:gg800e2}, where the Monte Carlo distributions are narrower and peak at lower masses than the resummed predictions. The variation between the resummed $\eta_{\rm max} = 2$ and 1.4 curves is smaller than that between $\eta_{\rm max} = 5$ and 3, since they correspond to smaller differences in $Q_c$. The heavy and light gluino scenarios exhibit similar behaviour when varying the pseudorapidity coverage and the number of invisibles, showing the lack of dependence of the resummation on the mass of the pair-produced particle. The sensitivity to low-scale PDF behaviour and showering is reduced compared to the $t\bar{t}$ case since we are considering higher centre-of-mass energies, with the lowest possible $Q_c$ now being of the order $2 m_{\tilde{g}} \times e^{-5} \sim 8 ~{\rm GeV}$. The position of the curves is also sensitive to the precise definition of $Q_c$ in terms of $\eta_{\rm max}$.
\begin{table}[h!] \begin{center} \begin{tabular} {|c|c|c|c|} \hline $m_{\tilde{g}}$ (GeV)& $\eta_{\mathrm{max}}$& MC (GeV) (0 inv./2 inv.) & Resum. (GeV) (0 inv./2 inv.)
\\ \hline 604.5&5& 2280/1560 & 1785/1620 \\ \hline 604.5&3& 1680/1080& 1593/1204 \\ \hline 604.5&2& 1440/840& 1497/1204\\ \hline 604.5&1.4& 1380/660 & 1497/1204 \\ \hline 800.0&5& 2820/2100& 2569/1870 \\ \hline 800.0&3& 2220/1620& 2128/1684 \\ \hline 800.0&2& 1920/1380& 1865/1683 \\ \hline 800.0&1.4& 1740/1140& 1865/1683 \\ \hline \end{tabular} \end{center} \caption{Summary of the positions of the peaks of the gluino pair-production visible mass distributions as given by the Monte Carlo and the resummation, for different values of the maximum pseudorapidity and for no and two invisibles.} \label{tb:peaks} \end{table}
\begin{figure} \centering \vspace{1.2cm} \includegraphics[scale=0.34, angle=90]{plot_gg604_e5.eps} \hspace{0.5cm} \includegraphics[scale=0.34, angle=90]{plot_gg604_e3.eps} \caption{The SPS1a gluino pair-production visible mass distributions for pseudorapidity cuts $\eta_{\rm max}=5$ (left) and $\eta_{\rm max}=3$ (right). The leading order distribution is shown (red dot-dashes) for comparison.} \label{fig:gge53} \end{figure}
\begin{figure} \centering \vspace{1.2cm} \includegraphics[scale=0.34, angle=90]{plot_gg800_e5.eps} \hspace{0.6cm} \includegraphics[scale=0.34, angle=90]{plot_gg800_e3.eps} \caption{The modified SPS1a gluino pair-production (with $m_{\tilde{g}} = 800 ~{\rm GeV}$) results for pseudorapidity cuts $\eta_{\rm max}=5$ (left) and $\eta_{\rm max}=3$ (right). The leading order distribution is shown (red) for comparison.} \label{fig:gg800e53} \end{figure}
\begin{figure} \centering \vspace{1.5cm} \includegraphics[scale=0.35, angle=90]{plot_gprodrap.eps} \caption{The SPS1a gluino pair-production pseudorapidity distribution for $m_{\tilde{g}} = 604.5 ~{\rm GeV}$.} \label{fig:gprodrap} \end{figure}
\begin{figure} \centering \vspace{1.2cm} \includegraphics[scale=0.34, angle=90]{plot_gg604_e2.eps} \hspace{0.5cm} \includegraphics[scale=0.34, angle=90]{plot_gg604_e14.eps} \caption{The SPS1a gluino pair-production results for pseudorapidity cuts $\eta_{\rm max}=2$ (left) and $\eta_{\rm max}=1.4$ (right). The leading order distribution is shown (red dot-dashes) for comparison.} \label{fig:gge214} \end{figure}
\begin{figure} \centering \vspace{1.2cm} \includegraphics[scale=0.34, angle=90]{plot_gg800_e2.eps} \hspace{0.5cm} \includegraphics[scale=0.34, angle=90]{plot_gg800_e14.eps} \caption{The modified SPS1a gluino pair-production (with $m_{\tilde{g}} = 800 ~{\rm GeV}$) results for pseudorapidity cuts $\eta_{\rm max}=2$ (left) and $\eta_{\rm max}=1.4$ (right). The leading order distribution is shown (red) for comparison.} \label{fig:gg800e2} \end{figure}
Table~\ref{tb:peaks} shows a summary of the peak positions for all cases and different pseudorapidity cuts. For the higher values of $\eta_{\rm max}$, the agreement between the Monte Carlo and resummation is satisfactory. There is a large difference in the peak positions for no invisibles and $\eta_{\rm max}=5$, but this is mainly due to the broad shape of the peak in this case, while the overall distributions agree better. For $\eta_{\rm max}\leq 2$ there is a growing discrepancy, especially for the realistic case of two invisibles, due to the loss of particles coming from the hard process.
\subsection{\boldmath Hadronization effects}\label{sec:hadronization} We have assumed that ISR partons emitted at pseudorapidities above $\eta_{\rm max}$ do not contribute to the visible invariant mass. This would be true if the hadronization process were perfectly local in angle.
However, as a result of hadronization, high-rapidity ISR partons can produce lower-rapidity hadrons and thus `contaminate' the detector and shift the visible mass to higher values. The hadronization model employed in the \texttt{Herwig++} Monte Carlo is a refinement of the cluster model described in ref.~\cite{Webber:1983if}. The model involves clustering of partons into colour-singlet objects that decay into hadrons, resulting in a smearing of the pseudorapidity distribution which causes the increase in the visible mass described above. The effect is shown in Fig.~\ref{fig:hadroniz} for gluino and top pair-production (excluding the invisible particles from the hard process). The effect was found to be larger for $t\bar{t}$ production where the mass distribution is shifted significantly, whereas in gluino pair production the shift is negligible.
\begin{figure}[htb] \centering \vspace{1.0cm} \includegraphics[scale=0.30, angle=90]{plot_hadroniz_ttlept.eps} \hspace{1.0cm} \includegraphics[scale=0.30, angle=90]{plot_hadroniz2.eps} \caption{The $t\bar{t}$ fully semi-leptonic (left) and SPS1a gluino pair-production (right, with $m_{\tilde{g}} = 604.5 ~{\rm GeV}$) visible mass distributions for a pseudorapidity cut $\eta_{\rm max}=5$ with and without hadronization (black and red respectively).} \label{fig:hadroniz} \end{figure}
\subsection{Underlying event}\label{sec:MPI}
\begin{figure}[htb] \centering \vspace{1.0cm} \includegraphics[scale=0.30, angle=90]{plot_mpi_ttlept_e5_recon.eps} \hspace{1.0cm} \includegraphics[scale=0.30, angle=90]{plot_mpi_ttlept_e3.eps} \caption{The $t\bar{t}$ fully hadronic visible mass distributions for pseudorapidity cuts $\eta_{\rm max}=5$ (left) and $\eta_{\rm max}=3$ (right), with and without multiple parton interactions (black and red respectively) and the reconstructed curves (blue dot-dashes). The $\eta_{\rm max} = 5$ curve was reconstructed using the resummed results for the visible mass and rapidity, whereas the $\eta_{\rm max} = 3$ curve was reconstructed using the Monte Carlo visible mass and rapidity.} \label{fig:mpi_tt} \end{figure}
\begin{figure}[htb] \centering \vspace{1.0cm} \includegraphics[scale=0.30, angle=90]{plot_mpi_gg.eps} \hspace{1.0cm} \includegraphics[scale=0.30, angle=90]{plot_mpi_gg_e3.eps} \caption{The SPS1a gluino pair-production (with $m_{\tilde{g}} = 604.5 ~{\rm GeV}$) visible mass distributions for pseudorapidity cuts $\eta_{\rm max}=5$ (left) and $\eta_{\rm max}=3$ (right), with and without multiple parton interactions (black and red respectively) and the reconstructed curves from the Monte Carlo visible masses and rapidities (blue dot-dashes).} \label{fig:mpi_gg} \end{figure}
The underlying event, which is thought to arise from multiple soft interactions of spectator partons, is a further source of non-perturbative contributions to the visible mass. If $P_H^\mu$ represents the ``hard'' visible 4-momentum studied in earlier sections and $P_U^\mu$ represents that due to the underlying event, the total visible mass is given by
\begin{eqnarray}\label{eq:Mtot} M^2 = (P_H+P_U)^2 &=& M_H^2 + M_U^2 +2 (E_HE_U-P_{zH} P_{zU})\nonumber\\ &=& M_H^2 + M_U^2 +2 M_U\sqrt{M_H^2+\not \!\! E_T^2} \cosh(Y_H-Y_U)\;, \end{eqnarray}
where we neglect transverse momentum associated with the underlying event. Thus, even if the visible invariant mass due to the underlying event is small, its effect on the overall visible mass may be enhanced through the last term on the right-hand side.
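The convolution implied by eq.~(\ref{eq:Mtot}) is conveniently performed by Monte Carlo sampling. In the sketch below (ours) the input distributions are toy stand-ins only; in practice $(M_H, Y_H)$ would be drawn from the resummed result of eq.~(\ref{eq:SigmaDefinv}) and $(M_U, Y_U)$ from an underlying-event sample:
\begin{verbatim}
import numpy as np

def total_visible_mass(MH, YH, MU, YU, met=0.0):
    # eq. (Mtot): combine hard and underlying-event contributions
    return np.sqrt(MH**2 + MU**2
                   + 2.0*MU*np.sqrt(MH**2 + met**2)*np.cosh(YH - YU))

rng = np.random.default_rng(1)
n = 100000
# toy stand-ins for the two inputs, for illustration only:
MH = rng.normal(1600.0, 200.0, n)   # hard visible mass [GeV]
YH = rng.normal(0.0, 1.0, n)
MU = rng.gamma(2.0, 150.0, n)       # underlying-event mass [GeV]
YU = rng.normal(0.0, 1.5, n)

M_total = total_visible_mass(MH, YH, MU, YU)
print(np.median(M_total) - np.median(MH))   # upward shift from the UE
\end{verbatim}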
The underlying event is simulated in \texttt{Herwig++} by a multiple parton interaction model along the lines of ref.~\cite{Butterworth:1996zw}. In this model, for the rapidity ranges considered here, the underlying event is approximately process-independent and exhibits little correlation with the rest of the event. Therefore, to a good approximation, the distributions of the variables related to the underlying event, $Y_U$ and $M_U$, can be determined once and for all at each collider energy. The process-dependence comes primarily through the dependence on $Y_H$ and $M_H$, which can be calculated using the resummation formula given in eq.~(\ref{eq:SigmaDefinv}). The overall visible mass distribution can then be obtained by convolution using eq.~(\ref{eq:Mtot}). The effects of including the underlying event in the visible mass distribution are shown in Figs.~\ref{fig:mpi_tt} and \ref{fig:mpi_gg} for $t\bar t$ and gluino pair production, respectively. The multiple parton interactions push the peak value to substantially higher masses. The shift amounts to about 250 GeV at $\eta_{\rm max}=3$ and 1.2 TeV at $\eta_{\rm max}=5$, and is roughly process-independent. However, since the underlying event is approximately uncorrelated with the hard process, the visible mass distributions can be reconstructed well by the convolution procedure outlined above, as shown by the blue dot-dashed curves in Figs.~\ref{fig:mpi_tt} and \ref{fig:mpi_gg}. These features of the underlying event will need to be validated by LHC data on a variety of processes. Accurate modelling of the underlying event is important for practically all aspects of hadron collider physics.
\section{Conclusions}\label{sec:conc} In this paper we have presented detailed predictions for the total invariant mass $M$ of the final-state particles registered in a detector, as a function of its pseudorapidity coverage $\eta_{\rm max}$. This quantity provides the dominant contribution to many global inclusive observables such as the new variable $\hat{s}^{1/2}_{\rm min}$ in (\ref{eq:smin_def}), which can provide information on the energy scales of hard processes. We have extended the resummation method presented in~\cite{Papaefstathiou:2009hp} to include the effects of invisible particle emission from the hard process. We have considered the case of one or two invisible particles and presented results for Standard Model top quark pair production and SPS1a gluino pair production, obtained using a numerical Mellin moment inversion method. In the case of $t\bar{t}$ production the invisible particles are neutrinos from $W$-boson decays and their effect on the visible invariant mass distribution is small, even when both decays are leptonic. This is mainly a consequence of the small $W$-boson mass compared to the overall invariant mass, rather than the negligible neutrino mass. For gluino pair production the invisibles are a pair of massive LSPs from squark decays. The LSP mass is again small compared to the overall invariant mass, but the squark masses are not, leading to a substantial downward shift in the visible mass distribution, of the order of the squark mass. In both cases the resummed predictions are in fair agreement with Monte Carlo estimates of the position of the peak in the distribution, provided the pseudorapidity range covered by the detector is large enough ($\eta_{\rm max}\raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$} 3$).
For $\eta_{\rm max}\sim 3$, the difference between the Monte Carlo and resummed predictions is of the order of 100 GeV for both the heavy and light gluino SPS1a points. The agreement becomes worse when the pseudorapidity range is restricted, due to particle loss from the hard process. Table~\ref{tb:peaks} shows the positions of the peaks of the distributions for the Monte Carlo results from \texttt{Herwig++} and the resummation. These comparisons were made with Monte Carlo visible mass distributions at parton level. We found that non-perturbative effects, especially the underlying event, tend to shift the invariant mass distributions to significantly higher values than expected from a purely perturbative calculation. According to the underlying event model used in \texttt{Herwig++}, the shift amounts to about 250 GeV at $\eta_{\rm max}=3$ and 1.2 TeV at $\eta_{\rm max}=5$. This effect is also expected in other observables sensitive to longitudinal momentum components, such as $\hat{s}^{1/2}_{\rm min}$. However, in this model the underlying event is only weakly correlated with the rest of the event and hence its effects can be determined once and for all at each collider energy. The modelling of the underlying event is an important feature of the Monte Carlo event generators that needs to be validated by comparison with experiment. Once this has been done, a wide range of global inclusive observables, including the visible invariant mass, will be reliably predicted and useful for establishing the scales of contributing hard subprocesses.
\section*{Acknowledgements} BW is grateful to the CERN Theory Group and the Aspen Center for Physics for hospitality during parts of this work, which was supported in part by the UK Science and Technology Facilities Council and the European Union Marie Curie Research Training Network MCnet (contract MRTN-CT-2006-035606).
\def\mycite#1{\cite{#1}}
\def\citeBB#1{\mycite{BB#1}}

\begin{document}

\title{Certifying Machine Code Safe from Hardware Aliasing}
\subtitle{RISC is not necessarily risky}
\author{Peter~T.~Breuer\inst{1} \and Jonathan~P.~Bowen\inst{2}}
\institute{Department of Computer Science, University of Birmingham, UK\\
\email{ptb@cs.bham.ac.uk}
\and
Department of Informatics, London South Bank University, UK\\
\email{jonathan.bowen@lsbu.ac.uk}}
\maketitle

\begin{abstract}
Sometimes machine code turns out to be a better target for verification than source code. RISC machine code is especially advantaged with respect to source code in this regard because it has only two instructions that access memory. That architecture forms the basis here for an inference system that can prove machine code safe against `hardware aliasing', an effect that occurs in embedded systems. There are programming memes that ensure code is safe from hardware aliasing, but we want to certify that a given machine code is provably safe.
\end{abstract}

\pagestyle{plain}

\section{Introduction}
\label{sec:Introduction}

In a computer system, `software' aliasing occurs when different logical addresses simultaneously or sporadically reference the same physical location in memory. We are all familiar with it and think nothing of it, because the same physical memory is nowadays reused millisecond by millisecond for different user-space processes with different addressing maps, and we expect the operating system kernel to weave the necessary illusion of separation. The kernel programmer has to be aware that different logical addresses from different or even the same user-space process may alias the same physical location, but the application programmer may proceed unawares.
We are interested in a converse situation, called `hardware' aliasing, where different physical locations in memory are sporadically bound to the same logical address. If software aliasing is likened to one slave at the beck of two masters, hardware aliasing is like identical twins slaved to one master who cannot tell which is which. In this paper we will investigate the safety of machine code in the light of hardware aliasing issues. Aliasing has been studied before \mycite{sato1997speculative} and is the subject of some patents \mycite{fischer2002memory,wing1999method}. There appears to be no theoretical treatment published, although the subject is broadly treated in most texts on computer architecture (see, for example, Chapter 6 of \mycite{Barr98}) and is common lore in operating systems kernel programming. The `hardware' kind of aliasing arises particularly in embedded systems where the arithmetic components of the processor are insufficient to fill all the address lines. Suppose, for example, that the memory has 64-bit addressing but the processor only has 40-bit arithmetic. The extra lines might be grounded, or sent high, and this varies from platform to platform. They may be connected to 64-bit address registers in the processor, so their values change from moment to moment as the register is filled. In that case, it is up to the software to set the `extra' bits reliably to zero, or one, or some consistent value, in order that computing an address may yield a consistent result. We first encountered the phenomenon in the context of the KPU \citeBB{13}, a general purpose `crypto-processor', i.e., a processor that performs its computations in encrypted form in order to provide security against observation and protection from malware. Because real encryptions are one-to-many, the result of the encrypted calculation of the address $1+1$ will always mean `$2$' when decrypted, but may be different from another encryption of $2$. If the two different {\em physical aliases} are used as addresses, then two different memory cell contents are accessed and the result is chaotic. The same effect occurs in the embedded system that has processor arithmetic with fewer bits than there are address lines; add $1+1$ in the processor and instead of $2$, $\rm 0xff01000000000002$ may be returned. If those two aliases of the arithmetic `$2$' are used as addresses, they access different memory cells. The upshot is that what is meant both times to be `$2$' accesses different locations according to criteria beyond the programmer's control. There are programming memes that are successful in an aliasing environment: if a pointer is needed again in a routine, it must be copied exactly and saved for the next use; when an array or string element is accessed, the address must always be calculated in exactly the same way. But whatever the programmer says, the compiler may implement as it prefers and ultimately it is the machine code that has to be checked in order to be sure that aliasing is not a risk at run-time. Indeed, in an embedded environment it is usual to find the programmer writing in assembler precisely in order to control the machine code emitted. The Linux kernel consists of about $5\%$ hand-written assembly code, for example (but rarely in segments of more than 10-15 lines each). One of our long term objectives is to be able to boot a Linux kernel on an embedded platform with aliasing, the KPU in particular. 
That requires both modifying a compiler and checking the hand-written machine-level code in the open source archive. An inference system will be set out here that can guarantee a (RISC \mycite{bowen1990formal,Pat85}) machine code program safe against hardware aliasing as described. The idea is to map a stack machine onto the machine code. We will reason about what assembly language instructions for the stack machine do computationally. Choosing an inference rule to apply to a machine code instruction is equivalent to choosing a stack machine assembly language \cite{BB11} instruction to which it disassembles \cite{BB93,BB94}. The choice must be such that a resulting proof tree is well-formed, and that acts as a guide. The stack machine is aliasing-proof when operated within its intended parameters so verifying alias-safety means verifying that the stack machine assembly language code obtained by disassembly of the RISC machine code does not cause the stack machine to overstep certain bounds at run-time. The RISC machine code we can check in this way is {\em ipso facto} restricted to that which we can disassemble. At the moment, that means code that uses string or string-like data structures and arrays which do not contain further pointers, and which uses machine code `jump and link' and `jump register' instructions only for subroutine call and return respectively, and in which subroutines make their own local frame and do not access the caller's frame (arguments are passed to subroutines in registers). These restrictions are not fundamental, but in any case there are no functional limitations implied by them; one call convention is functionally as good as another and data structures may always be laid out flat, as they are in a relational DB. Mistakes in disassembly are possible: if a `jump register' instruction, for example, were in fact used to implement a computed goto and not a subroutine return, it could still be treated as a subroutine return by the verification, which would end prematurely, possibly missing an error further along and returning a false negative. A mistaken return as just described would always fail verification in our system, but other such situations are conceivable in principle. So a human needs to check and certify that the proposed disassembly is not wrongheaded. The practice is not difficult because, as noted above, hand-written machine code at a professional standard consists of short, concise, commented segments. The difficulty is that there is often a great deal of it to be checked and humans tire easily. But our system reduces the burden to checking the disassembly proposed by the system against the comments in the code. This paper is structured as follows: after an illustration of programming against aliasing in Section~\ref{Sec:Getting it right: programming memes} and a discussion of disassembly in Section~\ref{sec:Disassembly}, code annotation is introduced in sections \ref{Sec:annotation}, \ref{sec:Stack, string and array pointers} and \ref{sec:Formal logic}, with a worked example in Section~\ref{sec:Is any substantive program allowed?}. Section~\ref{sec:Preventing unsafe memory access} argues that code annotation gives rise to the formal assurance that aliasing cannot occur. \section{Programming memes} \label{Sec:Getting it right: programming memes} We model aliasing as being introduced when memory addresses are calculated in different ways. 
That model says that a memory address may be {\em copied} exactly and used again without hazard, but if even $0$ is added to it, then a different alias of the address may result, and reads from the new alias do not return data deposited at the old alias of the address. Arithmetically the aliases are equivalent in the processor; they will test as equal but they are not identical, and using them as addresses shows that up. \begin{floatingtable}[l] \begin{tabular}{@{}|@{~}l@{~}|@{~}l@{~}|@{}} \hline \hfill $\skull$\hfill &\hfill $\checkmark$\hfill \\[1ex] \begin{minipage}[b]{0.20\textwidth} \begin{leftprogram} foo:\\ \quad \\ \quad sp -{\bf=} 32 \\ \quad \mbox{\rm\dots {code} \dots}\\ \quad sp +{\bf=} 32\\ \quad return \end{leftprogram} \end{minipage} & \begin{minipage}[b]{0.20\textwidth} \begin{leftprogram} foo:\\ \quad gp ~{\bf=} sp\\ \quad sp -{\bf=} 32\\ \quad \mbox{\rm\dots {code} \dots}\\ \quad sp ~{\bf=} gp\\ \quad return \end{leftprogram} \end{minipage} \\ \hline \end{tabular} } \caption{Aliasing in function \emph{foo}.} \label{tab:1} \end{floatingtable} That is particularly a problem for the way in which a compiler -- or an assembly language programmer -- renders machine code for the stack pointer movement around a function call. Classically, a subroutine starts by decrementing the stack pointer to make room on the stack for its local frame. Just before return, it increments the stack pointer back to its original value. The pseudo-code is shown on the left in Table~\ref{tab:1}. In an aliasing context, the attempt at arithmetically restoring the pointer puts an alias of the intended address in the \reg{sp} register, and the caller may receive back a stack pointer that no longer points to the data. The code on the right in Table~\ref{tab:1} works correctly; it takes an extra register (\reg{gp}) and instruction, but the register content may be moved to the stack and restored before return, avoiding the loss of the slot. \begin{floatingtable}[l] \begin{tabular}{@{}|@{\,}l@{\,}|@{~}l@{~}|@{~}l@{~}|@{~}l@{~}|@{}} \hline \em string & \hfill $\skull$\hfill & \hfill $\skull$\hfill & \hfill $\checkmark$\hfill \\[1ex] \em array & \hfill $\checkmark$\hfill & \hfill $\skull$\hfill & \hfill $\skull$\hfill \\[1ex] & \begin{minipage}[b]{0.155\textwidth} \begin{leftprogram} \quad\\ \quad x = s[2] \end{leftprogram} \end{minipage} & \begin{minipage}[b]{0.155\textwidth} \begin{leftprogram} \quad s+= 2\\ \quad x = *s \end{leftprogram} \end{minipage} & \begin{minipage}[b]{0.155\textwidth} \begin{leftprogram} \quad s++; s++\\ \quad x = *s \end{leftprogram} \end{minipage} \\ \hline \end{tabular} } \caption{Aliasing while accessing a string or array. } \label{tab:2} \end{floatingtable} Strings and arrays are also problematic in an aliasing environment because different calculations for the address of the same element cause aliasing. To avoid it, the strategy we will follow is that elements of `string-like' structures will be accessed by incrementing the base address in constant steps (see the pseudo-code at right in Table~\ref{tab:2}) and array elements will be accessed via a unique offset from the array base address (see the pseudo-code at left in Table~\ref{tab:2}). This technique ensures that there is only one calculation possible for the address of each string element (it is $((s{+}1){+}1){+}0$ in Table~\ref{tab:2}) or array element ($s{+}2$ in Table~\ref{tab:2}), so aliasing cannot occur. 
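These memes can be made concrete in a toy model of the effect (ours, not a description of any particular platform): say the processor has 40-bit arithmetic feeding 64-bit address lines whose upper bits are deterministic but uncontrolled, so that repeating exactly the same calculation always yields the same alias, while a different calculation of the same arithmetic value generally does not:
\begin{verbatim}
import hashlib

MASK40 = (1 << 40) - 1          # 40-bit processor arithmetic (hypothetical)

def junk(*parts):
    # deterministic but uncontrolled upper 24 bits of the address lines
    h = hashlib.blake2b(repr(parts).encode(), digest_size=3).digest()
    return int.from_bytes(h, 'big') << 40

def add(a, b):                  # address arithmetic in the aliasing model
    return ((a + b) & MASK40) | junk('add', a, b)

mem = {}

# Copying exactly is safe; recalculating is not:
two_a = add(1, 1)               # one alias of '2'
mem[two_a] = 42
two_b = add(add(1, 0), 1)       # another route to '2': a different alias
assert two_a & MASK40 == two_b & MASK40   # arithmetically equal...
assert mem.get(two_b) is None             # ...but the access goes astray
assert mem[two_a] == 42                   # the exact copy always works

# String meme (safe): step one pointer in constant increments, keeping
# the exact stepped value; each element address has one calculation.
s = add(0x1000, 0)
mem[add(s, 1)] = ord('i')
assert mem[add(s, 1)] == ord('i')         # same calculation, same alias

# Array meme (safe): element i is always reached as base + unique offset.
base = add(0x2000, 0)
mem[add(base, 8)] = 7
assert mem[add(base, 8)] == 7

# Hazard: a second route to the 'same' element goes astray.
assert mem.get(add(add(base, 4), 4)) is None
\end{verbatim}
In this model the function-call meme of Table~\ref{tab:1} is precisely the statement that \reg{sp} must be restored by copying a saved value, never by re-adding the frame size.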
The middle code in Table~\ref{tab:2} gives address $(s{+}2){+}0$ which matches exactly neither string nor array calculations. The decision over whether to treat a memory area like a string or an array depends on the mode of access to be used. \section{Disassembly} \label{sec:Disassembly} \begin{table}[tb] \fbox{ \begin{minipage}{0.97\textwidth} \begin{scriptsize} A RISC machine code processor consists of 32 (32-bit) integer registers $R$, a vector of $2^{32}$ (32-bit) integer memory locations $M$, and the program counter $p$. The latter gives the address of the current instruction. The {\reg{ra}} register is used to hold a subroutine call return address. Only two instructions, \opcode{sw} and \opcode{lw}, access memory. \end{scriptsize} \begin{center} \scriptsize \begin{tabular}{@{}l|l|l@{}} instruction &mnemonic & semantics\\ \hline&&\\[-2ex] ${\bf sw}~r_1~k(r_2)$ &store word &$M'=M\oplus\{R\,r_2\mathop{\mbox{\bf+}} k\mapsto R\,r_1\};~R'=R;~p'=p{+}4$\\ ${\bf lw}~r_1~k(r_2)$ &load word &$M'=M;~R'=R\oplus\{r_1 \mapsto M(R\,r_2\mathop{\mbox{\bf+}} k)\};~p'=p{+}4$\\ ${\bf move}~r_1~r_2$ &move/copy &$M'=M;~R'=R\oplus\{r_1\mapsto R\,r_2\};~p'=p{+}4$\\ ${\bf li}~r_1~k$ &load immediate &$M'=M;~R'=R\oplus\{r_1\mapsto k\};~p'=p{+}4$\\ ${\bf addiu}~r_1~r_2~k$ &add immediate &$M'=M;~R'=R\oplus\{r_1\mapsto R\,r_2 \mathop{\mbox{\bf+}} k\};~p'=p{+}4$\\ ${\bf addu}~r_1~r_2~r_3$ &add variable &$M'=M;~R'=R\oplus\{r_1\mapsto R\,r_2 \mathop{\mbox{\bf+}} R\,r_3\};~p'=p{+}4$\\ ${\bf nand}~r_1~r_2~r_3$ &bitwise not-and&$M'=M;~R'=R\oplus\{r_1\mapsto R\,r_2 \mathop{\overline{\mbox{\bf\&}}} R\,r_3\};~p'=p{+}4$\\ ${\bf beq}~r_1~r_2~k$ &branch-if-equal&$M'=M;~R'=R;~{\bf if}\,(R\,r_1\mathop{\mbox{\bf=}} R\,r_2)~p'=k~{\bf else}~p'=p{+}4$\\ ${\bf jal}~k$ &jump-and-link &$M'=M;~R'=R\oplus\{{\bf ra}\mapsto p{+}4\};~p'=k$\\ ${\bf jr}~r$ &jump-register &$M'=M;~R'=R;~p'=R\,r$\\ \end{tabular} \end{center} \begin{scriptsize} \begin{notation} $M\oplus\{a\mapsto v\}$ means the vector $M$ overwritten at index $a$ with the value $v$; the processor arithmetic (bold font `$\mathop{\mbox{\bf+}}$') is distinguished from the instruction addressing arithmetic (light font `$+$'); $r_1$, $r_2$ are register names or indices; $k$ is a signed 16-bit integer; $x$ and $x'$ are respectively initial and final value after the instruction has acted. \end{notation} \end{scriptsize} \end{minipage} \vspace{2ex} \defBox{Box} \def3{1} \addtocounter{table}{-1} \caption{RISC machine code instructions and their underlying semantics.} \label{Tab:3} \end{table} \begin{floatingtable}{ \kern-7pt\hbox to 0.5\textwidth { \vbox to 1.05in{ \fbox{ \begin{minipage}{0.48\textwidth} \medskip \scriptsize Say that the stack pointer $s$ is in the stack pointer register \reg{sp} in the machine code processor. A corresponding abstract stack machine state is a 4-tuple $({\mathcal R}, {\mathcal K}, {\mathcal H}, p)$, where ${\mathcal R}$ consists of the $31$ registers excluding the stack pointer register, the stack ${\mathcal K}$ consists of the top part of memory above the stack pointer value $s$, the heap ${\mathcal H}$ consists of the bottom part of memory below the stack pointer, and the address $p$ is that of the current instruction. 
\begin{align*} {\mathcal K} \,k &= M (s \mathop{\mbox{\bf+}} k) &s &= R\,\reg{sp},~ k \ge 0 \\ {\mathcal R} \,r &= R\,r &r &\ne \reg{sp},~ r \in \{0, \dots 31\} \\ {\mathcal H} \,a &= M\,a &a &<s \end{align*} The (hidden) stack pointer value $s$ is needed to recreate the machine code processor state $(R,M,p)$ from the stack machine state $({\mathcal R}, {\mathcal K}, {\mathcal H}, p)$, so the latter is more abstract. \end{minipage} } } \hss } \kern2pt } \defBox{Box} \def3{2} \addtocounter{table}{-1} \caption{Relation of processor to stack machine.} \label{tab:rel} \end{floatingtable} Nothing in the machine code indicates which register holds a subroutine return address, and that affects wh\-ich machine code instructions may be interpreted as a return from a subroutine call. To deal with this and similar issues in an organised manner, we describe rules of reasoning about programs both in terms of the machine code instruction to which they apply and an assembly language instruction for a more abstract {\em stack machine} that the machine code instruction may be disassembled to and which we imagine the programmer is targeting. The core RISC machine code instructions are listed in Box~\ref{Tab:3}, where their semantics are given as state-to-state transformations on the three components of a RISC processor: 32 32-bit registers $R$, memory $M$ and a 32-bit program counter $p$. The corresponding abstract stack machine is described in Box~\ref{tab:rel}. The stack pointer address $s$ in the machine code processor notionally divides memory $M$ into two components: stack ${\mathcal K}$ above and heap ${\mathcal H}$ below. The stack machine manipulates the stack directly via instructions that operate at the level of stack operations, and they are implemented in the machine code processor via instructions that act explicitly on the stack pointer. No stack pointer is available in the abstract machine. Its registers ${\mathcal R}$ consist of the set $R$ in the machine code processor {\em minus} the register that contains the stack pointer, usually the \reg{sp} register. The program counter $p$ is the same in the abstract stack machine as in the machine code processor, because instructions correspond one-to-one between programs for each machine. However, there is usually a choice of more than one abstract stack machine instruction that each machine code instruction could have been disassembled to, even though only one is chosen. \begin{table}[tb] \caption{ Stack machine instructions: the $n$ are small integers, the $r$ are register names or indices, and the $a$ are relative or absolute addresses. 
} \label{tab:3} \[ \begin{array}{lcll} s&\mathop{{:}{:}{=}} & \opcode{cspt}~$r$ ~|~ \opcode{cspf}~$r$ ~|~ \opcode{rspf}~$r$ ~|~ \opcode{push}~$n$ & \mbox{// stack pointer movement} \\ &|& \opcode{get}~$r$~$n$ ~|~ \opcode{put}~$r$~$n$ ~|~ \dots & \mbox{// stack access} \\ &|& \opcode{newx}~$r$~$a$~$n$ ~|~ \opcode{stepx}~$r$~$n$ ~|~ \opcode{getx}~$r$~$n(r)$ ~|~ \opcode{putx}~$r$~$n(r)$ ~|~ \dots & \mbox{// string operations} \\ &|& \opcode{newh}~$r$~$a$~$n$ ~|~ \opcode{lwfh}~$r$~$n(r)$ ~|~ \opcode{swth}~$r$~$n(r)$ ~|~ \dots & \mbox{// array operations} \\ &|& \opcode{gosub}~$a$ ~|~ \opcode{return} ~|~ \opcode{goto}~$a$ ~|~ \opcode{ifnz}~$r$~$a$ ~|~ \dots & \mbox{// control operations} \\ &|& \opcode{mov}~$r$~$r$ ~|~ \opcode{addaiu}~$r$~$r$~$n$ ~|~ \dots & \mbox{// arithmetic operations} \\[-6ex] \end{array} \] \end{table}
\begin{table}[tb] \caption{ Machine code may be disassembled to one of several alternate assembly language instructions for a stack machine. } \begin{center} \begin{tabular}[t]{|l|l|} \hline machine code & assembly language \\[0.5ex] \hline \opcode{move} $r_1$ $r_2$ & \begin{tabular}{l} \opcode{cspt} $r_1$\\ \opcode{cspf} $r_2$\\ \opcode{rspf} $r_2$\\ \opcode{mov} $r_1$ $r_2$ \end{tabular}\\ \hline \opcode{addiu} $r$ $r$ $n$ & \begin{tabular}{l} \opcode{push} $\mbox{-}n$\\ \opcode{stepx} $r$ $n$\\ \opcode{addaiu} $r$ $r$ $n$ \end{tabular}\\ \hline \opcode{lw} $r_1$ $n(r_2)$ & \begin{tabular}{l} \opcode{get} $r_1$ $n$\\ \opcode{lwfh} $r_1$ $n(r_2)$\\ \opcode{getx} $r_1$ $n(r_2)$ \end{tabular}\\ \hline \opcode{sw} $r_1$ $n(r_2)$ & \begin{tabular}{l} \opcode{put} $r_1$ $n$\\ \opcode{swth} $r_1$ $n(r_2)$\\ \opcode{putx} $r_1$ $n(r_2)$ \end{tabular}\\ \hline \end{tabular} \qquad \begin{tabular}[t]{|l|l|} \hline machine code & assembly language \\[0.5ex] \hline \opcode{lb} $r_1$ $n(r_2)$ & \begin{tabular}{l} \opcode{getb} $r_1$ $n$\\ \opcode{lbfh} $r_1$ $n(r_2)$\\ \opcode{getbx} $r_1$ $n(r_2)$ \end{tabular}\\[0.5ex] \hline \opcode{sb} $r_1$ $n(r_2)$ & \begin{tabular}{l} \opcode{putb} $r_1$ $n$\\ \opcode{sbth} $r_1$ $n(r_2)$\\ \opcode{putbx} $r_1$ $n(r_2)$ \end{tabular}\\[0.5ex] \hline \opcode{jal}~$a$ & \begin{tabular}{l} \opcode{gosub}~$a$ \end{tabular}\\[0.5ex] \hline \opcode{jr}~$r$ & \begin{tabular}{l} \opcode{return} \end{tabular}\\[0.5ex] \hline \opcode{j}~$a$ & \begin{tabular}{l} \opcode{goto}~$a$ \end{tabular}\\[0.45ex] \hline \opcode{li}~$r$ $a$& \begin{tabular}{l} \opcode{newx}~$r$~$a$~$n$\\ \opcode{newh}~$r$~$a$~$n$ \end{tabular}\\[0.5ex] \hline \opcode{bnez}~$r$ $a$& \begin{tabular}{l} \opcode{ifnz}~$r$~$a$ \end{tabular}\\[0.5ex] \hline \end{tabular} \label{tab:4} \end{center} \vspace{-6ex} \end{table}%
For example, several different stack machine instructions may all be thought of as manipulating the hidden stack pointer, register \reg{sp} in the machine code processor, and they all are implemented as a \opcode{move} (`copy') machine code instruction. Thus the \opcode{move} instruction disassembles to one of several stack machine instructions as follows: \medskip \begin{enumerate} \item The \opcode{cspt}~$r_1$ (`copy stack pointer to') instruction saves a copy of the stack pointer in register $r_1$. It corresponds to the \opcode{move}~$r_1$~\reg{sp} machine code processor instruction. \item The \opcode{cspf}~$r_1$ (`copy stack pointer from') instruction {\em refreshes} the stack pointer from a copy in $r_1$ that has the same value and was saved earlier (we will not explore here the reasons why a compiler might issue such a `refresh' instruction).
It corresponds to the \opcode{move}~\reg{sp}~$r_1$ machine code instruction. \item The \opcode{rspf}~$r_1$ (`restore stack pointer from') instruction returns the stack pointer to a value that it held previously by copying an old saved value from $r_1$. It also corresponds to \opcode{move}~\reg{sp}~$r_1$. \end{enumerate} A fourth disassembly of the machine code \opcode{move} instruction, to the stack machine \opcode{mov} instruction, encompasses the case when the stack pointer is not involved at all; it does a straight copy of a word from one register to another at the stack machine level. The full set of stack machine instructions is listed in Table~\ref{tab:3}, and their correspondence with RISC machine code instructions is shown in Table~\ref{tab:4}. We will not work through all the instructions and disassembly options in detail here, but note the important \opcode{push}~$n$ instruction in the stack machine, which can be thought of as decrementing the hidden stack pointer by $n$, extending the stack downwards. It corresponds to the \opcode{addiu}~\reg{sp}~\reg{sp}~$m$ machine code instruction, with $m=-n$. Also, the stack machine instructions \opcode{put}~$r_1$~$n$ and \opcode{get}~$r_1$~$n$ access the stack for a word at offset $n$ bytes, and they correspond to the machine code \opcode{sw}~$r_1$~$n(\reg{sp})$ and \opcode{lw}~$r_1$~$n(\reg{sp})$ instructions, respectively. The very same machine code instructions may also be interpreted as stack machine instructions that manipulate not the stack but either a `string-like' object or an array. Strings/arrays are read with \opcode{getx}/\opcode{lwfh} and written with \opcode{putx}/\opcode{swth}. Table~\ref{tab:4} shows that these are implemented by \opcode{lw}/\opcode{sw} in the machine code processor, applied to a base register $r_2\ne \reg{sp}$. Stepping through a string is done with the \opcode{stepx} instruction in the stack machine, which is implemented by \opcode{addiu} in the machine code processor. Introducing the address of a string/array in the stack machine needs \opcode{newx}/\opcode{newh} and those are both implemented by the \opcode{li} (`load immediate') instruction in the machine code processor. There are also `b' (`byte-sized') versions of the \opcode{get}, \opcode{lwfh}, \opcode{getx} stack machine instructions named \opcode{getb}, \opcode{lbfh}, \opcode{getbx} respectively. These are implemented by \opcode{lb} in the machine code processor. For \opcode{put}, \opcode{swth}, \opcode{putx} we have byte versions \opcode{putb}, \opcode{sbth}, \opcode{putbx}. \section{Introducing annotations and annotated types} \label{Sec:annotation} \begin{floatingtable}[l] \quad \begin{minipage}[b]{0.26\textwidth} \begin{leftprogram} foo:\\ \quad move gp sp\\ \quad addiu sp sp -32\\ \quad \mbox{\rm\dots {code} \dots}\\ \quad move sp gp\\ \quad jr ra \end{leftprogram} \end{minipage} } \caption{Non-aliasing subroutine machine code.} \label{tab:5} \end{floatingtable} Consider the `good' pseudo-code of Table~\ref{tab:1} implemented as machine code and shown in Table~\ref{tab:5}. How do we show it is aliasing-safe? Our technique is to {\em annotate} the code in a style akin to verification using Hoare logic, but the annotation logic is based on the stack machine abstraction of what the machine code does. We begin with an annotation that says the \reg{sp} register is bound to a particular {\em annotation type} on entry: \[ \{\,\reg{sp} = \type{c}!0!4!8\,\} \] The `\type{c}' as base signifies a variable pointer value is in register \reg{sp}.
It is the stack pointer value. The `!0!4!8' means that that particular value has been used as the base address for writes to memory at offsets 0, 4 and 8 bytes from it, respectively. The first instruction in subroutine \emph{foo} copies the stack pointer to register \reg{gp} and we infer that register \reg{gp} also gets the `$\type{c}$' annotation, using a Hoare-triple-like notation: \[ \{\,\reg{sp}^*= \type{c}!0!4!8\,\} ~ \mbox{\tt move gp sp} ~ \{\,\reg{sp}^*, \reg{gp}= \type{c}!0!4!8\,\} \] The stack pointer location (in the \reg{sp} register) should always be indicated by an asterisk. The arithmetic done by the next instruction destroys the offset information. It cannot yet be said that anything has been written at some offset from the new address, which is 32 distant from the old only up to an arithmetic equivalence in the processor: \[ \{\,\reg{sp}^*, \reg{gp} = \type{c}!0!4!8\,\} ~ \mbox{\tt addiu sp sp -32} ~ \{\,\reg{gp} = \type{c}!0!4!8; ~ \reg{sp}^* = \type{c}\,\} \] Suppose the annotation on the \reg{gp} register is still valid at the end of subroutine \emph{foo}, so the stack pointer register is finally refreshed by the \opcode{move} instruction with the same annotation as at the start: \[ \{\,\reg{sp}^*=\type{c};~\reg{gp}= \type{c}!0!4!8;\,\} ~ \mbox{\tt move sp gp} ~ \{\,\reg{sp}^*, \reg{gp} = \type{c}!0!4!8\,\} \] The return (\opcode{jr}~\reg{ra}) instruction does not change these annotations. So the calling code has returned as stack pointer a value that is annotated as having had values saved at offsets 0, 4, 8 from it, and the caller can rely on accessing data stored at those offsets. That does not guarantee that the {\em same} value of the stack pointer is returned to the caller, however. It will be shown below how this system of annotations may be coaxed into providing stronger guarantees. \medskip \medskip \section{Types for stack, string and array pointers} \label{sec:Stack, string and array pointers} The annotation discussed above is not complete. The {\em size} in bytes of the local stack frame needs to be recorded by following the `\type{c}' with the frame size as a superscript. Suppose that on entry there is a local stack frame of size 12 words, or 48 bytes. Then here is the same annotation with superscripts on, written as a derivation in which the appropriate disassembly of each machine code instruction is written to the right of the machine code as the `justification' for the derivation: \begin{center} \begin{minipage}{0.99\textwidth} \begin{prooftree} \AxiomC{\hbox to 1.7in{\{$\reg{sp}^* = \type{c}^{48}!0!4!8$\}\hfil}} \RightLabel{\hbox to 0.95in{\opcode{move} gp sp\hfil}/~\opcode{cspt} gp} \UnaryInfC{\hbox to 1.7in{\{$\reg{sp}^*,\reg{gp} = \type{c}^{48}!0!4!8$\}\hfil}} \RightLabel{\hbox to 0.95in{\opcode{addiu} sp sp -32\hfil}/~\opcode{push} 32} \UnaryInfC{\hbox to 1.7in{\{$ \reg{sp}^*=\type{c}^{32^{48}};~\reg{gp}=\type{c}^{48}!0!4!8$\}\hfil}} \noLine \UnaryInfC{\hbox to 1.7in{\hfil\vdots\hfil}} \noLine \UnaryInfC{\hbox to 1.7in{\{$\reg{sp}^*=\type{c}^{32^{48}};~ \reg{gp}=\type{c}^{48}!0!4!8$\}\hfil}} \RightLabel{\hbox to 0.95in{\opcode{move} sp gp\hfil}/~\opcode{rspf} gp} \UnaryInfC{\hbox to 1.7in{\{$\reg{sp}^*, \reg{gp}=\type{c}^{48}!0!4!8$\}\hfil}} \end{prooftree} \end{minipage} \end{center} The \opcode{push}~32 abstract stack machine instruction makes a {\em new} local stack frame of $8$ words or $32$ bytes. It does not increase the size of the current frame. Accordingly, the $32$ `pushes up' the $48$ in the annotation so that $32^{48}$ is shown. 
This makes the size of the previous stack frame available to the annotation logic. \begin{floatingtable}{ \fbox{ \begin{minipage}{0.4\textwidth} \begin{scriptsize} Annotations $a$ assert a binding of registers $r$ or stack slots $(n)$ to an {\em annotated type} $t$. One of the register names may be starred to indicate the stack pointer position. A type is either `uncalculated', \type{u}, or `calculated', \type{c}. Either may be decorated with `$!n$' annotations indicating historical writes at that offset from the typed value when used as an address. A \type{c} base type may also be superscripted by a `tower' of natural numbers $n$ denoting `frame sizes' (see text), while a \type{u} base type may have a single superscript (also denoting size). We also use $\ddot{1}$ for a tower $1^{1^{.^{.^.}\kern-5pt}}$ of undetermined extent and a single repeated size. Also, formal type variables $\typevar{x}$, $\typevar{y}$, etc are valid stand-ins for annotated types, and formal `set of offsets variables' $\typevar{X}$, $\typevar{Y}$, etc are valid stand-ins for sets of offsets. \end{scriptsize} \[ \begin{array}{lcl} a&\mathop{{:}{:}{=}} & r^{[\text{\bf*}]}\text{\bf,}\,\dots\text{\bf,}\,\text{\bf(}n\text{\bf)}\text{\bf,}\,\dots \,\text{\bf=}\,t\text{\bf;}\,\dots\\ t&\mathop{{:}{:}{=}} & \type{c}^{[n^{.^{.^.}\kern-5pt}\kern3pt]}\text{\bf!}n\text{\bf!}\dots ~|~ \type{u}^{[n]}\text{\bf!}n\text{\bf!}\dots \\[-4ex] \end{array} \] \vspace{1ex} \end{minipage} } \vspace{1ex} \def\tablename{Box} \addtocounter{table}{-1} \caption{Syntax of annotations and types.} \label{tab:ann} \end{floatingtable} A different disassembly of \opcode{addiu}~$r$~$r$~$n$ is required when $r$ contains a string pointer, not the stack pointer, which means that register $r$ lacks the asterisk in the annotation. The disassembly as a step along a string is written \opcode{stepx}~$r$~$n$, and requires $n$ to be positive. In this case, the string pointer in $r$ will be annotated with the type \[ \type{c}^{\ddot{1}} \] meaning that it is a `calculatable' value that may be altered by adding $1$ to it repeatedly. The form $\type{c}^{\ddot{1}}$ hints that a string is regarded as a stack $\type{c}^{1^{.^{.^.}\kern-5pt}}$ that starts `pre-charged' with an indefinite number of frames of 1 byte each, which one may step up through by `popping the stack' one frame, and one byte, at a time. So annotation types may be either like $\type{c}^{32^{48}}$ or $\type{c}^{\ddot{1}}$ and these may be followed by offsets $!0!4!8!\dots$. There is just one more base form, described below, completing the list in Box~\ref{tab:ann}. The RISC instruction \opcode{lw}~$r_1$~$n(r_2)$ is also disassembled differently according to the annotated type in $r_2$. As \opcode{get}~$r_1$~$n$ it retrieves a value previously stored at offset $n$ in the stack, when $n\ge 0$ and $r_2$ is the stack pointer register. As \opcode{lwfh}~$r_1$~$n(r_2)$ it retrieves an element in an array from the {\em heap} area. In that case, $r_2$ will be annotated \[ \type{u}^m \] meaning an `unmodifiable' pointer to an array of size $m$ bytes, and $m-4\ge n\ge 0$. A third possibility is disassembly as retrieval from a string-like object in the heap, when, as $\opcode{getx}~r_1~n(r_2)$, register $r_2$ will have a `string-like' annotation of the form $\type{c}^{\ddot{m}}$, meaning that it must be stepped through in increments of $m$ bytes.
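To fix intuitions before turning to stores, the choice among these readings of \opcode{lw} can be pictured as a dispatch on the annotated type of the base register. The following Python fragment is purely our illustration and is not part of the verification tool; the names \texttt{AnnType} and \texttt{choose\_lw} are hypothetical:

\begin{verbatim}
# Hypothetical model of the annotated types described above (illustration).
from dataclasses import dataclass

@dataclass
class AnnType:
    base: str                # 'c' (calculated) or 'u' (uncalculated)
    tower: tuple = ()        # frame sizes for 'c', current frame first
    size: int = 0            # m for an array type u^m
    step: int = 0            # m for a string type (repeated superscript m)
    offsets: frozenset = frozenset()   # the '!n' historical write marks

def choose_lw(r2: AnnType, is_sp: bool, n: int) -> str:
    """Pick the stack-machine reading of 'lw r1 n(r2)' from r2's type."""
    if is_sp and r2.base == 'c' and r2.tower:
        return 'get'     # stack read; needs 0 <= n <= frame size - 4
    if r2.base == 'u' and r2.size:
        return 'lwfh'    # array read from the heap; 0 <= n <= size - 4
    if r2.base == 'c' and r2.step:
        return 'getx'    # string read; 0 <= n <= step - 4
    raise ValueError('no admissible disassembly for this access')
\end{verbatim}

Exactly one case applies to well-typed code, which is what makes the choice of disassembly deterministic in the derivations that follow.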
Similarly the RISC \opcode{sw}~$r_1$~$n(r_2)$ instruction can be disassembled as \opcode{put}~$r_1$~$n$ of a value at offset $n$ to the stack, or \opcode{swth}~$r_1$~$n(r_2)$ to an array, or $\opcode{putx}~r_1~n(r_2)$ to a string, depending on the type bound to register $r_2$. These register types drive the disassembly. \begin{table}[t] \caption{Possible disassemblies of machine code instructions as constrained by the stack pointer register location changes (SP$\leftarrow$SP) or absence ($\times$), and changes to the stack content (`delta'). } \medskip \subtable{ \begin{tabular}{|l||c|c||l|} \hline move $r_1$ $r_2$~~~~~~& $r_1$&$r_2$ & stack delta\\ \hline \hline rspf $r_2$ & SP $\circlearrowright$ & $\times$ & yes\\ \hline cspf $r_2$ & SP $\circlearrowright$ & $\times$ & no\\ \hline cspt $r_1$ & $\times$ &SP $\circlearrowright$ & no \\ \hline mspt $r_1$ &\multicolumn{2}{c||}{SP$\longleftarrow$SP} & no\\ \hline mov $r_1$ $r_2$& $\times$ & $\times$ & no\\ \hline \end{tabular} } \hfill \subtable{ \begin{tabular}{|l||c|c||l|} \hline addiu $r_1$ $r_2$ $m$& $r_1$&$r_2$ & stack delta\\ \hline \hline step $r$ $m$ & \multicolumn{2}{c||}{$\times$}& no\\ \hline stepto $r_1$ $r_2$ $m$ & ~~ $\times$ ~~& ~~$\times$ ~~& no\\ \hline push ${-}m$ & \multicolumn{2}{c||}{SP $\circlearrowright$ }& yes\\ \hline pushto $r_1$ ${-}m$ & \multicolumn{2}{c||}{~SP$\longleftarrow$SP~~}& yes\\ \hline addaiu $r_1$ $r_2$ $m$& ~~ $\times$ ~~& ~~$\times$ ~~& no\\ \hline \end{tabular} } \begin{center} ~\begin{tabular}{|l||c|c||l|} \hline lw $r_1$ $m$($r_2$) & $r_1$ & $r_2$ & stack delta\\ \hline \hline get $r_1$ $m$ & ~~~$\times$~~~ & SP $\circlearrowright$ & no \\ \hline lwfh $r_1$ $m$($r_2$)& ~~~$\times$~~~ & ~~~$\times$~~~ & no \\ \hline getx $r_1$ $m$($r_2$)& ~~~$\times$~~~ & ~~~$\times$~~~ & no \\ \hline \end{tabular} \hfill \begin{tabular}{|l||c|c||l|} \hline \strut sw $r_1$ $m$($r_2$) & $r_1$ & $r_2$ & stack delta\\ \hline \hline put $r_1$ $m$ & ~~~$\times$~~~ & SP $\circlearrowright$ & no\\ \hline \strut swth $r_1$ $m$($r_2$)& ~~~$\times$~~~ & ~~~$\times$~~~ & no \\ \hline \strut putx $r_1$ $m$($r_2$)& ~~~$\times$~~~ & ~~~$\times$~~~ & no \\ \hline \end{tabular}~~ \end{center} \vspace{-2ex} \label{tab:6} \end{table} \section{Formal logic} \label{sec:Formal logic} We can now write down formal rules for the logic of annotations introduced informally in the `derivation' laid out in the previous section. Readers who would prefer to see a worked example first should jump directly to Section~\ref{sec:Is any substantive program allowed?}. We start with a list of so-called `small-step' program annotations justified by individual stack machine instructions, each the disassembly of a machine code instruction. The small-step rules relate the annotation before each machine code instruction to the annotation after. Table~\ref{tab:6} helps to reduce {\em a priori} the number of possible disassemblies for each machine code instruction. In principle, however, disassembly to stack machine code does not have to be done first; it can be left until the last possible moment during the annotation process, as each disassembly choice corresponds to the application of a different rule of inference about which annotation comes next. If the corresponding inference rule cannot be applied, then that disassembly choice is impossible. Here is how to read Table~\ref{tab:7}. Firstly, `offsets variables' $\typevar{X}$, $\typevar{Y}$, etc, stand in for sets of offset annotations `$!k$'.
For example, the $\opcode{put}~\reg{gp}~4$ instruction is expected to start with a prior annotation pattern $\reg{sp}^*=\type{c}^{f}!\typevar{X}$ for the stack pointer register. Secondly, the stack pointer register is indicated by an asterisk. Thirdly, $f$ in the table stands for some particular stack frame tower of integers; it is not a variable, being always some constant in any particular instance. In the case of the $\opcode{put}~\reg{gp}~4$ instruction, $f$ must start with some particular number at least 8 in size, in order to accommodate the 4-byte word written at offset 4 bytes within the local stack frame. Just `$8$' on its own would do for $f$ here. Lastly, `type variables' \typevar{x}, \typevar{y}, etc, where they appear, stand in for full types. The table relates annotations before and after each instruction. So, in the case of the $\opcode{put}~\reg{gp}~4$ instruction, if the prior annotation for the stack pointer register is $\reg{sp}^*=\type{c}^{f}!\typevar{X}$, then the post annotation is $\reg{sp}^*=\type{c}^{f}!4!\typevar{X}$, meaning that 4 is one of the offsets at which a write has been made. It may be that 4 is also a member of the set denoted by \typevar{X} (which may contain other offsets too), or it may not be in \typevar{X}. That is not decided by the formula, which merely says that whatever other offsets there are in the annotation, `4' is put there by this instruction. At any rate, the annotation pattern for the $\opcode{put}~\reg{gp}~4$ instruction is: \[ \{ \dots;\reg{sp}^*=\type{c}^{f}!\typevar{X};\dots \} ~\opcode{put}~\reg{gp}~4~ \{ \dots;\reg{sp}^*=\type{c}^{f}!4!\typevar{X};\dots \} \] and considering the effect on the \reg{gp} register (which may be supposed to have the type denoted by the formal type variable \typevar{x} initially) and the stack slot denoted by `(4)' gives \[ \{ \reg{gp}{=}\typevar{x};\reg{sp}^*{=}\type{c}^{f}!\typevar{X} \} ~\opcode{put}~\reg{gp}~4~ \{ \reg{sp}^*{=}\type{c}^{f}!4!\typevar{X};\reg{gp}{,}(4){=}\typevar{x} \} \] because whatever the description \typevar{x} of the data in register \reg{gp} before the instruction runs, since the data is transferred to stack slot `($4$)', the latter gains the same description. Generalising the stack offset `$4$' back to $n$, and generalising registers \reg{gp} and \reg{sp} to $r_1$ and $r_2$ respectively, one obtains exactly the small-step signature listed for instruction $\opcode{put}~r_1~n$. Registers whose annotations are not mentioned in this signature have bindings that are unaffected by the instruction. \begin{table}[tb] \caption{ `Small-step' annotations on assembly instructions. } \label{tab:7} \[ \begin{array}{@{}rcl@{~~}p{2in}@{}} \{~ \} &\opcode{newx}~r~n & \{ r\,\,{=}\type{c}^{\ddot{n}}!\typevar{X} \} &// \small Set reg. $r$ content\\ \{r_1{=}\type{c}^{f_1}!\typevar{Y}{;\,}r_2{=}\type{u}^{f_2}!\typevar{X} \} &\opcode{putx}~r_1~n(r_2) & \{ r_1{=}\type{c}^{f_1}!\typevar{Y} {;}\, r_2{=}\type{u}^{f_2}!n!\typevar{X}\} &// \small Store word to string\\ \{ r_2{=}\type{u}^{f}!n!\typevar{X} \} &\opcode{getx}~r_1~n(r_2) & \{ r_1{{=}}\type{c}^0 {;}\, r_2{=}\type{u}^{f}!n!\typevar{X} \} &// \small Load word from string\\ \{r{=}\type{c}^{n^f}!\typevar{X} \} &\opcode{stepx}~r~n & \{ r{=}\type{c}^f!\typevar{Y}\} &// \small Step along string\\ \{~ \} &\opcode{newh}~r~n & \{ r\,\,{=}\type{u}^n!\typevar{X} \} &// \small Set reg.
$r$ content\\ \{ r_1{=}\type{c}^{f_1}!\typevar{Y} {;}\, r_2{=}\type{u}^{f_2}!\typevar{X} \} &\opcode{swth}~r_1~n(r_2) & \{ r_1{=}\type{c}^{f_1}!\typevar{Y} {;}\, r_2{=}\type{u}^{f_2}!n!\typevar{X} \} &// \small Store word to array\\ \{ r_2{=}\type{u}^f!n!\typevar{X} \} &\opcode{lwfh}~r_1~n(r_2) & \{ r_1{{=}}\type{c}^0 {;}\, r_2{=}\type{u}^f!n!\typevar{X}\} &// \small Load word from array\\ \{r_1{=}\typevar{x} {;}\, r_2^*{=}\type{c}^f!\typevar{X} \} &\opcode{put}~r_1~n & \{ r_1{,}(n){=}\typevar{x} {;}\, r_2^*{=}\type{c}^f!n!\typevar{X} \} &// \small Store word to stack\\ \{ r_2^*{=}\type{c}^f!n!\typevar{X} {;}\, (n){=}\typevar{x} \} &\opcode{get}~r_1~n & \{ r_1{,}(n){=}\typevar{x} {;}\, r_2^*{=}\type{c}^f!n!\typevar{X} \} &// \small Load word from stack\\ \{r^*{=}\type{c}^{f}!\typevar{X} \} &\opcode{push}~n & \{ r^*{=}\type{c}^{n^f}\} &// \small New frame\\ \{r_2^*{=}\type{c}^f!\typevar{X} \} &\opcode{cspt}~r_1 & \{ r_1{,}r_2^*{=}\type{c}^f!\typevar{X}\} &// \small Copy SP to reg. $r_1$\\ \{r_1^*{=}\type{c}^f!\typevar{Y}{;\,}r_2{=}\type{c}^f!\typevar{X} \} &\opcode{cspf}~r_2 & \{ r_1^*{,}r_2{=}\type{c}^f!\typevar{X}\} &// \small Copy SP from reg. $r_2$\\ \{r_1^*{=}\type{c}^{n^f}!\typevar{Y}{;\,}r_2{=}\type{c}^f!\typevar{X} \} &\opcode{rspf}~r_2 & \{ r_1^*{,}r_2{=}\type{c}^f!\typevar{X}\} &// \small Restore SP from reg. $r_2$\\ \{~ \} &\opcode{nop} & \{ ~\} &// \small No-op{,} do nothing\\ \{r_2{=}\typevar{x} \} &\opcode{mov}~r_1~r_2 & \{ r_1{,}r_2{=}\typevar{x}\} &// \small Copy from reg. $r_2$\\ \{r_2{=}\type{c}^f!\typevar{X} \} &\opcode{addaiu}~r_1~r_2~n & \{ r_1{=}\type{c}^0 {;}\, r_2{=}\type{c}^f!\typevar{X} \} &// \small Arithmetic add \\ \end{array} \] \begin{scriptsize} \begin{notation} The \typevar{X}, \typevar{Y}, etc stand for a set of offsets $!n_1!n_2!\dots$, for literal natural numbers $n$. The stack frame size (or `tower of stack frame sizes') $f$ is a literal natural number (or finite sequence of natural numbers). The \typevar{x}, \typevar{y}, etc stand for any type (something that can appear on the right of an equals sign). \end{notation} \end{scriptsize} \end{table} Small-step annotations $\{ \Theta \}~\kappa~\{ \Psi \}$ for an instruction $\iota$ at address $a$ with a disassembly $\kappa$ generate a so-called `big step' rule \[ \frac{ T~\triangleright~\{\Psi\}~a+4~\{\Phi\} }{ T~\triangleright~\{\Theta\} ~a~ \{\Phi\} }[a~|~\iota ~/~ \kappa] \] in which $\Phi$ is the final annotation at program end and $T$ denotes a list of big-step annotations $\{ \Psi\}~a~\{\Phi\}$, one for each instruction address $a$ in the program (note that, in consequence, branches within the program must get the same annotation at convergence as there is only one annotation there). Thus the big-step rule is an inference about what \emph{theory} $T$ contains. The rule above says that if $\{\Psi\}~a+4~\{\Phi\}$ is in theory $T$, then so is $\{\Theta\} ~a~ \{\Phi\}$. The label justifies the inference by the fact that instruction $\iota$ is at address $a$, and disassembly $\kappa$ has been chosen for it. The big-step rules aim to generate a `covering' theory $T$ for each program. That is, an annotation before every (reachable) instruction, and thus an annotation {\em between} every instruction. The rule above tells one how to extend by one further instruction a theory that is growing from the back of the program towards the front. Where does theory construction start? It is with the big-step rule for the final \opcode{jr}~\reg{ra} instruction that classically ends a subroutine. 
The action of this instruction is to jump back to the `return address' stored in the \reg{ra} register (or another designated register). The annotation for it says that there was a program address (an `uncalculatable value', $\type{u}^0$) in the \reg{ra} register before it ran (and it is still there after), and requires no hypotheses: $$ \frac{ }{ T~\triangleright~ \{r{=}\type{u}^0\} ~a~ \{r{=}\type{u}^0\} }\mbox{[$a~|$~ \opcode{jr} $r$ / \opcode{return}]} $$ The `$0$' superscript indicates that the address may not be used as a base for offset memory accesses; that would access program instructions if it were allowed. Calling code conventionally places the return address in the \reg{ra} register prior to each subroutine call. There are just three more big-step rules, corresponding to each of the instructions that cause changes in the flow of control in a program. Jumps (unconditional branches) are handled by a rule that refers back to the target of the jump: \begin{align*} \frac{ T ~\triangleright~ \{\Theta\} ~b~ \{\Phi\} }{ T ~\triangleright~ \{\Theta\} ~a~ \{\Phi\} }\mbox{[$a$ $|$ \opcode{j} $b$ / \opcode{goto} $b$]} \end{align*} This rule propagates the annotation at the target $b$ of the jump back to the source $a$. At worst, a guess at the fixpoint is needed. The logic of branch instructions (conditional jumps) at $a$ says that the outcome of going down a branch to $b$ or continuing at $a+4$ must be the same. But the instruction \opcode{bnez}~$r$~$b$ (`branch to address $b$ if register $r$ is nonzero, else continue') and variants first require the value in the register $r$ to be tested, so it is pre-marked with \type{c} (`calculatable'): \begin{align*} \frac{ T~\triangleright~ \{r{=}\type{c}^f!\typevar{X};\Theta\} ~b~ \{\Phi\} \quad T ~\triangleright~ \{r{=}\type{c}^f!\typevar{X};\Theta\} ~a+4~ \{\Phi\} }{ T~\triangleright~ \{r{=}\type{c}^f!\typevar{X};\Theta\} ~a~ \{\Phi\} }\mbox{[$a$ $|$ \opcode{bnez} $r$ $b$ / \opcode{ifnz} $r$ $b$]} \end{align*} The case $b< a$ (backward branch) requires a guess at a fixpoint, as it does for jumps. The frame-size history $f$ (possibly empty) of the value in the tested register and the set of offsets \typevar{X} already written to are both irrelevant here, but they are maintained through the rule. The RISC \opcode{jal}~$b$ machine code instruction implements standard imperative programming language subroutine calls. It puts the address of the next instruction in the \reg{ra} register (the `return address') and jumps to the subroutine at address $b$. The calling code will have saved the current return address on the stack before the call. The callee code will return to the caller by jumping to the address in the \reg{ra} register with \opcode{jr}~\reg{ra}, and the calling code will then restore its own return address from the stack. Because of \opcode{jal}'s action in filling register \reg{ra} with a program address, \reg{ra} on entry to the subroutine at $b$ must already have a $\type{u}^0$ annotation, indicating an unmodifiable value that cannot even be used for memory access. And because the same subroutine can be called from many different contexts, we need to distinguish the annotations per call site, and so we use a throwaway lettering $T'$ to denote those annotations that derive from the call of $b$ from site $a$.
The general rule is: \begin{align*} \frac{ T'~\triangleright~ \{\reg{ra}{=}\type{u}^0; \Psi\} ~b~ \{\Theta\} \qquad T ~\triangleright~ \{\Theta\} ~a+4~ \{\Phi\} }{ T~\triangleright~ \{\Psi\} ~a~ \{\Phi\} }\mbox{[$a$ $|$ \opcode{jal} $b$ / \opcode{gosub} $b$]} \end{align*} The `$0$' superscript means that memory accesses via the return address as base address for \opcode{lw}/\opcode{sw} are not allowed; that would access the program instructions. The stack pointer register has not been named, but it must be distinct from the \reg{ra} register. We have found it useful to apply extra constraints at subroutine calls. We require (i) that each subroutine return the stack to the same state it acquired it in (this is not a universal convention), and (ii) that a subroutine make and unmake all of its own local stack frame (again, not a universal convention). That helps a Prolog implementation of the verification logic start from a definitely known state at the end of each subroutine independent of the call context -- namely, that the local stack frame at subroutine end (and beginning) is size zero. These constraints may be built into the \opcode{jal} rule as follows: \begin{align*} \frac{ T'~\triangleright~ \{\reg{ra} {=} \type{u}^0; r^*{=}\type{c}^{0}!\typevar{X};\,\Psi\} ~b~ \{r^*{=}\type{c}^{0}!\typevar{Y}; \Theta\} \quad T ~\triangleright~ \{r^*{=}\type{c}^{f}!\typevar{Y}; \Theta\} ~a{+}4~ \{\Phi\} }{ T ~\triangleright~ \{r^*{=}\type{c}^{f}!\typevar{X}; \Psi\} ~a~ \{\Phi\} }\mbox{[$a$ $|$ \opcode{jal} $b$ / \opcode{gosub} $b$]} \end{align*} The requirement (i) is implemented by returning the stack pointer in the same register ($r^*$ with the same $r$ on entry and return) and with no stack cells visible in the local stack frame handed to the subroutine and handed back by the subroutine (the two $0$s). The requirement (ii) is implemented by setting the local stack frame on entry to contain no stack, just the general purpose registers, which forces the subroutine to make its own stack frame to work in. Other calling conventions require other rule refinements. As noted, the small-step and big-step rules can be read as a Prolog program, with the bold-faced offsets variables $\typevar{X}$, $\typevar{Y}$, etc, and type variables $\typevar{x}$, $\typevar{y}$, etc, as the Prolog variables. \section{Example annotation} \label{sec:Is any substantive program allowed?} Below is the annotation of the simple main routine of a Hello World program that calls `printstr' with the Hello World string address as argument, then calls `halt'. The code was emitted by a standard compiler ({\em gcc}) and modified by hand to be safe against aliasing, so some compiler `quirks' are still visible. The compiler likes to preserve the \reg{fp} register content across subroutine calls, for example, even though it is not used here. The functionality is not at issue here, but, certainly, knowing what each instruction does allows the annotation to be inferred by an annotator without reference to rules and axioms. The \opcode{li}~\reg{a0} instruction sets the \reg{a0} (`$0$th argument') register, for example, so the only change in the annotation after the instruction is to the \reg{a0} column. The annotator introduces the string type, $\type{c}^{\ddot{1}}$, into the annotation there, since the instruction sets \reg{a0} to the address of the Hello World string. The annotator assumes that the stack pointer starts in the \reg{sp} register and that `main' is called (likely from a set-up routine) with a return address in the \reg{ra} register.
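Before the annotated listing, the inference process itself can be mimicked mechanically. The following Python fragment is a sketch of our own (the actual implementation is the Prolog program mentioned above); it replays the small-step rules of Table~\ref{tab:7} over the opening instructions of `main', with the state kept as a map from registers and stack slots to annotations:

\begin{verbatim}
# Sketch: replay small-step rules on main's prologue (names are ours).
# A binding is (base, tower-of-frame-sizes, set-of-'!'-offsets);
# 'x' stands for an unknown type variable.
state = {'sp': ('c', (0,), set()),   # sp* = c^0 on entry
         'ra': ('u', (0,), set()),   # return address: u^0
         'fp': 'x'}                  # caller's fp: type variable x

def cspt(r):                         # move r sp   /  cspt r
    state[r] = state['sp']

def push(n):                         # addiu sp sp -n  /  push n
    base, tower, _ = state['sp']
    state['sp'] = (base, (n,) + tower, set())  # offsets are forgotten

def put(r, n):                       # sw r n(sp)  /  put r n
    base, tower, offs = state['sp']
    assert 0 <= n <= tower[0] - 4    # write must fit the local frame
    state['sp'] = (base, tower, offs | {n})
    state['(%d)' % n] = state[r]     # slot (n) takes r's annotation

cspt('gp'); push(32); put('ra', 28); put('fp', 24)
# state['sp'] is now ('c', (32, 0), {24, 28}), matching the rows below.
\end{verbatim}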
Changes are marked in grey: \begin{center}\scriptsize \begin{longtable}{lll|@{~}c@{~}|@{~}c@{~}|@{~}c@{~}|@{~}c@{~}|@{~}c@{~}|@{~}c@{~}|@{~}c@{~}|@{~}c@{~}|@{~}c@{~}|@{~}c@{~}} & & & \(\reg{sp}\sp*\) & \reg{ra} & \reg{a0} & \reg{fp} & \reg{gp} & \reg{v0} & \reg{v1} & (16) & (24) & (28) \\ \hline &&&&&&&&&&&\\[-1ex] \underline{main}: &\tt & & \(\type{c}\sp{0}\)& \(\type{u}\sp{0}\)& & \typevar{x}& & \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp{0}\)& & \\ \tt move gp sp &\rm cspt gp && \(\type{c}\sp{0}\)& \(\type{u}\sp{0}\)& & \typevar{x}& \cellcolor[gray]{0.8}\(\type{c}\sp{0}\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp{0}\)& && \\ \tt addiu sp sp -32 &\rm push 32 && \cellcolor[gray]{0.8} \(\type{c}\sp{32\sp0}\)& \(\type{u}\sp{0}\)& & \typevar{x}& \(\type{c}\sp{0}\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp{0}\)& && \\ \tt sw ra 28(sp) &\rm put ra 28 && \(\type{c}\sp{32\sp0} \cellcolor[gray]{0.8} !28\)& \(\type{u}\sp{0}\)& & \typevar{x}& \(\type{c}\sp{0}\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp{0}\)& && \cellcolor[gray]{0.8} \(\type{u}\sp{0}\) \\ \tt sw fp 24(sp) &\rm put fp 24 && \(\type{c}\sp{32\sp0} \cellcolor[gray]{0.8} !24!28\)& \(\type{u}\sp{0}\)& & \typevar{x}& \(\type{c}\sp{0}\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp{0}\)& & \cellcolor[gray]{0.8} \typevar{x}& \(\type{u}\sp{0}\) \\ \tt move fp sp &\rm cspt fp && \(\type{c}\sp{32\sp0}!24!28\)& \(\type{u}\sp{0}\)& & \cellcolor[gray]{0.8}\(\type{c}\sp{32\sp0}!24!28\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp{0}\)& & \typevar{x}& \(\type{u}\sp{0}\) \\ \tt sw gp 16(sp) &\rm put gp 16 && \(\type{c}\sp{32\sp0}!16!24!28\)& \(\type{u}\sp{0}\)& & \(\type{c}\sp{32\sp0}!24!28\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp{0}\)& \cellcolor[gray]{0.8}\(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \tt li a0 <{\rm helloworld}>&\rm newx a0 \dots 1&& \(\type{c}\sp{32\sp0}!16!24!28\)& \(\type{u}\sp{0}\)& \cellcolor[gray]{0.8}\(\type{c}\sp{\ddot{1}}\) & \(\type{c}\sp{32\sp0}!24!28\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \tt jal <{\rm printstr}> &\rm gosub \dots && \(\type{c}\sp{32\sp0}!16!24!28\)& \cellcolor[gray]{0.8}\(\type{u}\sp{0}\)& \cellcolor[gray]{0.8}\(\type{c}\sp{0}\)& \(\type{c}\sp{32\sp0}!24!28\)& \cellcolor[gray]{0.8}\(\type{c}\sp{0}\)& \cellcolor[gray]{0.8}\(\type{c}\sp{0}\)& \cellcolor[gray]{0.8}\(\type{u}\sp{1}!0\)& \(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \tt lw gp 16(sp) &\rm get gp 16 && \(\type{c}\sp{32\sp0}!16!24!28\)& \(\type{u}\sp{0}\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{32\sp0}!24!28\)& \cellcolor[gray]{0.8}\(\type{c}\sp{0}\)& \(\type{c}\sp{0}\)& \(\type{u}\sp{1}!0\)& \(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \tt jal <{\rm halt}> &\rm gosub \dots && \(\type{c}\sp{32\sp0}!16!24!28\)& \cellcolor[gray]{0.8}\(\type{u}\sp{0}\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{32\sp0}!24!28\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{0}\)& \cellcolor[gray]{0.8}\(\type{u}\sp{1}!0\)& \(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \tt nop &\rm & &&&&&&&&&\\ \tt lw gp 16(sp) &\rm get gp 16 && \(\type{c}\sp{32\sp0}!16!24!28\)& \(\type{u}\sp{0}\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{32\sp0}!24!28\)& \cellcolor[gray]{0.8}\(\type{c}\sp{0}\)& \(\type{c}\sp{0}\)& \(\type{u}\sp{1}!0\)& \(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \tt nop &\rm & &&&&&&&&&\\ \tt lw ra 28(sp) &\rm get ra 28 && \(\type{c}\sp{32\sp0}!16!24!28\)& \cellcolor[gray]{0.8}\(\type{u}\sp{0}\)& \(\type{c}\sp{0}\)& 
\(\type{c}\sp{32\sp0}!24!28\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{0}\)& \(\type{u}\sp{1}!0\)& \(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \tt lw fp 24(sp) &\rm get fp 24 && \(\type{c}\sp{32\sp0}!16!24!28\)& \(\type{u}\sp{0}\)&\(\type{c}\sp{0}\)& \cellcolor[gray]{0.8}\(\typevar{x}\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{0}\)& \(\type{u}\sp{1}!0\)& \(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \tt move sp gp &\rm rspf gp && \cellcolor[gray]{0.8}\(\type{c}\sp{0}\)& \(\type{u}\sp{0}\)& \(\type{c}\sp{0}\)& \(\typevar{x}\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{0}\)& \(\type{u}\sp{1}!0\)& \(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \tt jr ra &\rm return && \(\type{c}\sp{0}\)& \cellcolor[gray]{0.8}\(\type{u}\sp{0}\)& \(\type{c}\sp{0}\)& \(\typevar{x}\)& \(\type{c}\sp{0}\)& \(\type{c}\sp{0}\)& \(\type{u}\sp{1}!0\)& \(\type{c}\sp{0}\)& \typevar{x}& \(\type{u}\sp{0}\) \\ \underline{helloworld}: &\rm $\langle$string data$\rangle$ &&&&&&&&&&&\\[-4ex] \end{longtable} \end{center} That the `!' annotation offsets are always less than the bottom element of the tower in the stack pointer annotation means that no aliasing occurs. Reads are at an offset already marked with a `!', hence within the same range that writes are constrained to. The `halt' subroutine does not use the stack pointer; its function is to write a single byte to the hard-coded I/O-mapped address of a system peripheral. The annotation for register \reg{v1} on output is the taint left by that write. \begin{scriptsize} \begin{leftprogram} \fns \underline{halt}: \#\rm \(\reg{zero}=\type{c}\sp0;\reg{ra}=\type{u}\sp0\) \fns\tt li v1 0xb0000010 newh v1 ... 1 \#\rm \(\reg{v1}=\type{u}\sp1;\reg{zero}=\type{c}\sp0;\reg{ra}=\type{u}\sp0\) \fns\tt sb zero 0(v1) sbth zero 0(v1) \#\rm \(\reg{v1}=\type{u}\sp1!0;\reg{zero}=\type{c}\sp0;\reg{ra}=\type{u}\sp0\) \fns\tt jr ra return \#\rm \(\reg{v1}=\type{u}\sp1!0;\reg{zero}=\type{c}\sp0;\reg{ra}=\type{u}\sp0\) \end{leftprogram} \end{scriptsize} \noindent The \reg{zero} register is conventionally kept filled with the zero word in RISC architectures. The \emph{printstr} routine takes a string pointer as argument in register \reg{a0}. A requirement that registers \reg{v0}, \reg{v1} have certain types on entry is an artifact of annotation. Since `\$B' comes after writes to \reg{v0}, \reg{v1}, those two registers are bound to types at that point. The forward jump (\opcode{j}) to `\$B' forces the same annotations at the jump instruction as at the target. But, at the jump, no write to \reg{v0}, \reg{v1} has yet taken place, so we are obliged to provide the types of \reg{v0}, \reg{v1} at entry. The table below is constructed using the same display convention as the table for {\it main}.
\begin{center} \scriptsize \begin{longtable}{@{}lllc|c|c|c|c|c|c|c|c|c|c@{}} && & \reg{sp}$^*$& \reg{fp}& \reg{ra}& \reg{a0}& \reg{gp}& \reg{v0}& \reg{v1}& (12)& (20)& (24)& (28) \\ \hline &&& &&&&&&&&& \\[-1.5ex] \underline{printstr}:& &\#&\rm \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& & \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& &&& \\ \tt move gp sp & cspt gp &\#&\rm \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& &&& \\ \tt addiu sp sp -32 & push 32 &\#&\rm \cellcolor[gray]{0.8}\(\type{c}\sp{32\sp0}\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& &&& \\ \tt sw ra 24(sp) & put ra 24 &\#&\rm \cellcolor[gray]{0.8}\(\type{c}\sp{32\sp0}!24\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& && \cellcolor[gray]{0.8}\(\type{u}\sp0\)& \\ \tt sw fp 20(sp) & put fp 20 &\#&\rm \cellcolor[gray]{0.8}\(\type{c}\sp{32\sp0}!20!24\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& & \cellcolor[gray]{0.8}\typevar{x}& \(\type{u}\sp0\)& \\ \tt move fp sp & cspt fp &\#&\rm \(\type{c}\sp{32\sp0}!20!24\)& \cellcolor[gray]{0.8}\(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& & \typevar{x}& \(\type{u}\sp0\)& \\ \tt sw gp 12(sp) & put gp 12 &\#&\rm \cellcolor[gray]{0.8}\(\type{c}\sp{32\sp0}!12!20!24\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \\ \tt sw a0 28(sp) & put a0 28 &\#&\rm \cellcolor[gray]{0.8}\(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp{\ddot{1}}!0\) \\ \tt move a0 zero & mov a0 zero&\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt j \(\rm\langle\$B\rangle\) & j \(\rm\langle\$B\rangle\) &\#&\rm & & & & & &&&& \\ \underline{\$A}: & &\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt lw v0 28(sp) & get v0 28 &\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt nop & nop &\#&\rm & & & & & &&&& \\ \tt lb v0 0(v0) & getbx v0 0(v0) \kern-2.5pt&\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& 
\(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt move v1 v0 & mov v1 v0 &\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt lw v0 28(sp) & get v0 28 &\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt addiu v0 v0 1 & step v0 1 & \#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt sw v0 28(sp) & put v0 28 &\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt move a0 v1 & mov a0 v1 &\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt jal \(\rm\langle{\rm printchar}\rangle\) &gosub printchar & \#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \cellcolor[gray]{0.8}\(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \cellcolor[gray]{0.8}\(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt lw gp 12(sp) & get gp 12 &\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \underline{\$B}: & & \#&\rm & & & & & &&&& \\ \tt lw v0 28(sp) & get v0 28 &\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp{\ddot{1}}!0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt lb v0 0(v0) & getbx v0 0(v0)&\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt bnez v0 \(\rm\langle\$A\rangle\)&bnez v0 \(\rm\langle\$A\rangle\) &\#&\rm \(\type{c}\sp{32\sp0}!12!20!24!28\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt move sp fp & cspf fp &\#&\rm \cellcolor[gray]{0.8}\(\type{c}\sp{32\sp0}!20!24\)& \(\type{c}\sp{32\sp0}!20!24\)& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt lw ra 24(sp) & get ra 24 &\#&\rm 
\(\type{c}\sp{32\sp0}!20!24\)& \(\type{c}\sp{32\sp0}!20!24\)& \cellcolor[gray]{0.8}\(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt lw fp 20(sp) & get fp 20 &\#&\rm \(\type{c}\sp{32\sp0}!20!24\)& \cellcolor[gray]{0.8}\typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt move sp gp & rspf gp &\#&\rm \cellcolor[gray]{0.8}\(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \\ \tt jr ra & return &\#&\rm \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{c}\sp0\)& \(\type{u}\sp1!0\)& \(\type{c}\sp0\)& \typevar{x}& \(\type{u}\sp0\)& \(\type{c}\sp{\ddot{1}}!0\) \end{longtable} \end{center} \noindent The `printchar' subroutine writes a character received in register \reg{a0} to the hard-coded address of a printer device: \begin{scriptsize} \begin{leftprogram} \fns \underline{printchar}: \#\rm \(\reg{a0}=\type{c}\sp0;\reg{ra}=\type{u}\sp0\) \fns\tt li v1 0xb0000000 newh v1 ... 1 \#\rm \(\reg{v1}=\type{u}\sp1;\reg{a0}=\type{c}\sp0;\reg{ra}=\type{u}\sp0\) \fns\tt sb a0 0(v1) sbth a0 0(v1) \#\rm \(\reg{v1}=\type{u}\sp1!0;\reg{a0}=\type{c}\sp0;\reg{ra}=\type{u}\sp0\) \fns\tt jr ra return \#\rm \(\reg{v1}=\type{u}\sp1!0;\reg{a0}=\type{c}\sp0;\reg{ra}=\type{u}\sp0\) \end{leftprogram} \end{scriptsize} \noindent Like \emph{halt}, it does not use the stack pointer. \section{How does annotation ensure aliasing does not happen?} \label{sec:Preventing unsafe memory access} How to ensure memory aliasing does not happen is intuitively simple: make sure that each address used can have been calculated in only one way. There are in principle two constraints that can be enforced directly via annotation and which will have this effect: \begin{enumerate}[(i)] \item Both stack reads and writes with \opcode{get} and \opcode{put} may be restricted to offsets $n$ that lie in the range permitted by the local stack frame size (look for a stack pointer tower $m^{{.^{.^.}\kern-5pt}}$ on the annotation before the instruction, with $0 \le n\le m-4$); \item stack reads with \opcode{get} may be restricted to offsets $n$ at which writes with \opcode{put} have already taken place (look for a $!n$ mark on the annotation before the instruction). \end{enumerate} Similarly for strings and arrays. It is (i) that makes memory aliasing impossible, but (ii) is also useful because it (a) reduces (i) to be required on writes alone, and (b) prevents `read before write' faults. Without (i), code could validly try to access an element of the caller's frame, and that would fail because of aliasing via two distinct calculations for the same address, from caller's and callee's frames respectively. If these constraints are satisfied, we argue as follows that memory-aliasing cannot occur. 
The base address used for access via the RISC \opcode{lw} or \opcode{sw} instructions is either: \begin{enumerate} \item The stack pointer (disassembly of the access instruction is to \opcode{put}, \opcode{get}, \opcode{putb}, \opcode{getb}); \item the base address of a string, incremented several times by the string increment (the disassembly is to \opcode{putx}, \opcode{getx}, \opcode{putbx}, \opcode{getbx}); \item the base address of an array (the disassembly is to \opcode{swth}, \opcode{sbth}, \opcode{lwfh}, \opcode{lbfh}). \end{enumerate} and the offset in the instruction is in the first case less than the stack frame size, in the second case less than the string increment, and in the third case less than the array size. Why are these and no other cases possible? Firstly, if the program is annotated, then every use of a base address for the underlying machine code \opcode{lw} and \opcode{sw} instructions matches exactly one of these cases, because the annotation rules have no other option. Next we claim that the annotations on a program are {\em sound}. This is a technical claim that we cannot formally substantiate here; it says that in an annotated program the annotations around each instruction reflect what the instruction does computationally. The full statement requires a model of each instruction's semantics as a state-to-state transformation (given in Appendix~\ref{Sec:Motivating semantics}) and a proof that the big-step rules of Section~\ref{sec:Formal logic} express those semantics. Given that, the three cases above for the base address used in a \opcode{lw} and \opcode{sw} instruction may be characterized thus: \begin{enumerate} \item It is the stack pointer, which is marked with an asterisk in the annotation and typed with $\type{c}^f$ where the tower $f$ consists of the sizes of the current and calling stack frames; \item it is a string pointer, which is typed with $\type{c}^{\ddot{m}}$ in the annotation and is equal to the base address of the string plus a finite number of increments $m$; \item it is an array pointer, which is typed with $\type{u}^m$ in the annotation and is equal to the base address of the array, which is of size $m$. \end{enumerate} \noindent In each of those three cases, the offset used in the \opcode{lw} or \opcode{sw} instruction is only permitted by the annotation to lie in the range $0$ to $m-4$, where $m$ is respectively the current frame size, the string step size, and the array size. The first of these cases implements condition (i), and the second and third implement the equivalent condition for strings and arrays respectively. That is, there is only one calculation possible for each address used. Similar arguments hold for byte-wise access via \opcode{lb} and \opcode{sb}. In addition, however, one must require that memory areas accessed via these instructions are not also accessed via \opcode{lw} and \opcode{sw}, in order to avoid different calculations for the addresses of the individual bytes in a word. The simplest way to ensure that is to forbid the use of \opcode{lb} and \opcode{sb} entirely, relying instead on \opcode{lw} and \opcode{sw} plus arithmetic operations to extract the byte. The next simplest alternative is to allow \opcode{lb} and \opcode{sb} only on strings with step size less than $4$ and arrays of size less than $4$, which word-wise instructions are forbidden from accessing by the annotation rules.
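The offset side conditions in the three cases above condense into a single per-access check. The following hypothetical Python sketch shows the test an annotator would apply at each \opcode{lw}/\opcode{sw}; it is again our illustration, and the field names on the annotation object \texttt{ann} are assumptions:

\begin{verbatim}
# Condensed form of conditions (i) and (ii); 'ann' is the base register's
# annotation with hypothetical fields kind/tower/step/size/offsets.
def access_ok(ann, n, is_write):
    if ann.kind == 'stack':        # case 1: sp*, typed c^f
        m = ann.tower[0]           # bottom of the tower: local frame size
    elif ann.kind == 'string':     # case 2: c with repeated step m
        m = ann.step
    elif ann.kind == 'array':      # case 3: u^m
        m = ann.size
    else:
        return False               # e.g. a u^0 return address: no access
    if not (0 <= n <= m - 4):      # condition (i): one admissible range
        return False
    if not is_write and n not in ann.offsets:
        return False               # condition (ii): read only after write
    return True
\end{verbatim}

Word accesses that pass this check admit exactly one address calculation, which is the substance of the argument above.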
\section{Conclusion and Future Work} We have set out a method of annotation that can ensure that a RISC machine-code program is safe against `hardware' aliasing. We model aliasing as introduced by the use of different arithmetic calculations for the same memory address, and successful annotation guarantees that a unique calculation will be used at run-time for the address of each execution stack, string or array element accessed by the program. Annotation also means disassembling the machine code to a slightly higher level assembly language, for a stack machine, and a human being is required to certify that the disassembly matches the programmer's intentions. Note that one may add disassembly rules to the system that are (deliberately) semantically wrong, with the aim of correcting the code. For example, one may choose to (incorrectly) disassemble the RISC $\opcode{addiu}~\reg{sp}~\reg{sp}~32$ instruction to a stack machine \opcode{pop} instruction. The RISC instruction is not a correct implementation of the higher level instruction in an aliasing context, although it was likely intended to be. But one may then replace the original RISC code with a correct implementation. Also note that the equational annotations here may be generalised to quite arbitrary first-order predicates. It also appears that our system of types may be generalised to arrays of arrays and strings of strings, etc, which offers the prospect of a static analysis technology that can follow pointers. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Remarkable features of Generative Adversarial Networks (GANs) such as impressive sample quality and smooth latent interpolation have drawn enormous attention from the community, but what we have enjoyed with little gratitude claims its worth in a data-limited regime. As naive training of GANs with small datasets often fails in terms of both fidelity and diversity, many have proposed novel approaches specifically designed for few-shot image synthesis. Among the most successful are those adapting a pretrained source generator to the target domain \cite{mo2020freeze,ojha2021few,li2020few} and those seeking generalization to unseen categories through feature fusion \cite{gu2021lofgan,hong2020matchinggan}. Despite their impressive synthesis quality, these approaches are often critically constrained in practice, as they all require a semantically related large source-domain dataset to pretrain on~\cite{ojha2021few}, as illustrated in \cref{fig:cdc}. For some domains like abstract art paintings, medical images and cartoon illustrations, it is very difficult to collect thousands of samples, and at the same time, finding an adequate source domain to transfer from is not straightforward either. To train GANs from scratch with limited data, several augmentation techniques~\cite{zhao2020differentiable,karras2020training} and model architectures~\cite{liu2020towards} have been proposed. Although these methods have presented promising results on low-shot benchmarks consisting of hundreds to thousands of training images, they fall short for few-shot generation, where the dataset is even more constrained (\textit{e.g.,} $n=10$). GANs trained with small datasets typically display one of two behaviors: severe quality degradation~\cite{zhao2020differentiable,karras2020training} or near-perfect memorization~\cite{feng2021gans}, as visible from \cref{fig:smooth_interp} (\textit{left}). Hence, producing \textit{novel} samples of \textit{reasonable} quality is the ultimate goal of few-shot generative models. We note that memorization differs from the classic mode collapse problem, as the former is not just a lack of diversity, but the \textit{fundamental inability to generate unseen samples}. As directly combatting memorization with as few as 10 training samples is extremely difficult if not impossible, we choose to tackle a surrogate problem instead. Our key observation is that strongly overfitted generators are only capable of producing a limited set of samples, resulting in discontinuous transitions in the image space under latent interpolation. We call this the \textit{stairlike latent space phenomenon}, which has been pointed out by previous works~\cite{radford2015unsupervised,brock2018large} as an indicator of memorization. \cref{fig:smooth_interp} (\textit{right}) demonstrates that previous methods designed for diversity preservation~\cite{benaim2017one} or low-shot synthesis~\cite{liu2020towards} all display such behavior under the few-shot setting ($n=10$). Therefore, instead of pursuing the seemingly insurmountable task of suppressing memorization, we directly target the \textit{stairlike latent space problem} and propose effective distance regularizations to explicitly \textit{smooth} the latent space of the generator (G) and the discriminator (D), which we empirically show amounts to fighting memorization.
\begin{figure}[t] \includegraphics[width=0.9\linewidth]{Figure/fig0-rearr.jpg} \caption{ \textbf{Cross Domain Correspondence~\cite{ojha2021few} adaptation of an FFHQ source generator on various target domains} (10-shot). Finding a semantically similar source domain is crucial for CDC, as a large domain gap greatly harms the transfer performance. We later show that our method outperforms CDC without any source domain pretraining, even on the semantically related domains. } \label{fig:cdc} \end{figure} Our high-level idea is to maximally exploit the scarce data points by continuously exploring their semantic mixups~\cite{zhang2017mixup}. The discriminator overfitted to a few real samples, however, shows overly confident and abrupt decision boundaries, leaving the generator with no choice but to faithfully replicate them in order to convince the opponent. This results in the aforementioned \textit{stairlike latent space} for both G and D, rendering smooth semantic mixups impossible. To tackle this problem, we explore G's latent space with randomly sampled interpolation coefficients $\mathbf{c}$, enforcing relative semantic distances between samples to follow the mixup ratio. By simultaneously imposing a similar regularization on D's feature space, we prohibit the discriminator from embedding images at arbitrary locations for its convenience of memorizing, and guide its feature space to be aligned by semantic distances. Our objective is inspired by the formulation of \cite{ojha2021few} that aims to transfer diversity information from a source domain to a target domain. We tailor it for our single-domain setting, where no source domain is available to import diversity from, and show that our method is able to produce diverse novel samples of convincing quality even with as few as 10 training images. \begin{figure}[t] \includegraphics[width=\columnwidth]{Figure/fig2_rename.jpg} \caption{ Training GANs with as few as 10 real samples typically results in either complete collapse or severe memorization \textit{(left)}. Strongly overfitted generators can only generate a limited set of images, hence displaying \textit{stairlike} latent interpolation \textit{(right)}. } \label{fig:smooth_interp} \end{figure} We further observe that models trained with our regularizations resist mode collapse surprisingly well even with no special augmentation. We believe that our distance regularizations encourage the model to preserve the inherent diversity present in early stages throughout the course of training. Resistance to overfitting and mode collapse combined opens up doors for sample diversity under rigorous data constraints, which we demonstrate later with experimental results. Our contributions can be summarized as follows: \begin{itemize} \item We propose a two-sided distance regularization that encourages learning of a smooth and mode-preserved latent space through controlled latent interpolation. \item We introduce a simple framework for few-shot image generation without a large source domain dataset that is compatible with existing architectures and augmentation techniques. \item We evaluate our approach on a wide range of datasets and demonstrate its effectiveness in generating diverse samples with convincing quality. \end{itemize} \section{Related Works} \noindent \textbf{One-shot image generation} In order to create diverse outcomes from a single image, SinGAN~\cite{shaham2019singan} leverages the inherent ambiguity present in the downsampled image.
Based on SinGAN, ConSinGAN~\cite{hinz2021improved} proposes a technique to control the trade-off between fidelity and diversity. One-Shot GAN~\cite{sushko2021one} uses a dual-branch discriminator where each head identifies context and layout, respectively. As one-shot image generation methods focus on exploiting a single image, they are not directly applicable to few-shot image generation tasks, where the generator must learn the underlying distribution of a collection of images. \noindent \textbf{Low-shot image generation} Given a limited amount of training data, the discriminator in a conventional GAN can easily overfit. To mitigate this problem, DiffAugment~\cite{zhao2020differentiable} imposes differentiable data augmentation on both real and fake samples, while ADA~\cite{karras2020training} devises non-leaking adaptive discriminator augmentation. FastGAN~\cite{liu2020towards} suggests a skip-layer excitation module and a self-supervised discriminator, which saves computational cost and stabilizes low-shot training. GenCo~\cite{cui2021genco} shows impressive results on the low-shot image generation task by using multiple discriminators to alleviate overfitting. Despite their promising performances on low-shot benchmarks, these methods often show significant instability under stricter data constraints, namely in the \textit{few-shot} setting. \noindent \textbf{Few-shot generation with auxiliary dataset} Thus far, the \textit{few-shot} image generation task ($n\approx10$) has mostly required pretraining on a larger dataset with similar semantics~\cite{wang2018transferring,wang2020minegan,zhao2020leveraging,robb2020few}, mainly due to its inherent difficulty. A group of works~\cite{gu2021lofgan,hong2020matchinggan,hong2020f2gan,bartunov2018few} learns transferable generation ability on \textit{seen categories} and seeks generalization to \textit{unseen categories} through fusion-based methods. FreezeD~\cite{mo2020freeze} and EWC~\cite{li2020few} further improve the transfer learning framework for GANs. Meanwhile, CDC~\cite{ojha2021few} computes the similarities between samples within each domain and encourages the corresponding similarity distributions to resemble each other. It aims to directly transfer the structural diversity of the source domain to the target, yielding impressive performance. In this paper, we modify the formulation of CDC and propose a novel few-shot generation framework that does not require any auxiliary data or separate pretraining step. \noindent \textbf{Generative diversity} Mode collapse has been a long-standing obstacle in GAN training. \cite{arjovsky2017wasserstein,mao2017least} introduce divergence metrics that are effective at stabilizing GAN training, while \cite{durugkar2016generative,ghosh2018multi} tackle this problem by training multiple networks. Another group of works \cite{liu2019normalized,mao2019mode,tran2018dist,yang2019diversity,benaim2017one} proposes regularization methods to preserve distances in the generated output space. Unlike these works, we consider the few-shot setting, where diversity is restricted mainly due to memorization, and introduce an interpolation-based distance regularization as an effective remedy. \noindent \textbf{Latent mixup} Since \cite{zhang2017mixup}, mixup methods have been actively explored to enforce smooth behaviors between training samples~\cite{berthelot2019mixmatch,verma2021interpolation,berthelot2019remixmatch}.
In generative models, \cite{radford2015unsupervised} emphasizes the importance of smooth latent transitions as counterevidence for memorization, but as state-of-the-art GAN models trained with sufficient data naturally possess such a property \cite{karras2020analyzing,brock2018large}, it has been mainly studied with autoencoders. \cite{berthelot2018understanding,oring2020autoencoder} regularize autoencoders to learn smooth latent spaces, while \cite{wertheimer2020augmentation,sainburg2018generative} explore their potential as generative models through interpolation. \section{Approach} \begin{figure}[t] \centering \includegraphics[width=0.85\textwidth]{Figure/main_fig_eccv_rename.jpg} \caption{Overview of our \textbf{Mixup-based Distance Learning (MixDL)}. We sample mixup coefficients from a Dirichlet distribution and generate an anchor point $z_0$ through interpolation. Then we enforce pairwise similarities between intermediate generator activations to follow the interpolation coefficients. A similar regularization is imposed on the discriminator's penultimate activation, which is linearly projected before similarity calculation. The proposed regularization terms can be added on top of any traditional adversarial framework.} \label{fig:overview} \end{figure} We consider the situation where only a few training examples (e.g., $n=10$) are available with no semantically similar source domain. Hence, we would like to train a generative model from scratch, \textit{i.e.,} with no auxiliary dataset or separate pretraining step, using only a handful of images. Under such challenging constraints, overfitting greatly restricts a model's ability to learn the data distribution and produce diverse samples. We identify its byproduct, the \textit{stairlike latent space}, as the core obstacle, as it not only indicates memorization but also prohibits hallucination through semantic mixup. We observe that both the generator and the discriminator suffer from the problem with insufficient data, evidenced by discontinuous latent interpolation and an overly confident decision boundary, respectively. To this end, we propose a mixup-based distance learning (MixDL) framework that guides the two players to form soft latent spaces and leverage them to generate diverse samples. We further discover that our proposed regularizers effectively combat mode collapse, a problem particularly devastating with a small dataset, by preserving the diversity present in early training stages. As our formulation is inspired by \cite{ojha2021few}, we first introduce their approach in \cref{subsection:CDC}, and formally state our methods in \cref{subsection:G} and \cref{subsection:D}. Our final learning framework and the corresponding details can be found in \cref{subsection:Final}. \subsection{Cross-Domain Correspondence} \label{subsection:CDC} In CDC~\cite{ojha2021few}, the authors propose to transfer the relationship learned in a source domain to a target domain. They define a probability distribution from pairwise similarities of generated samples in both domains and bind the latter to the former. Formally, they define the distributions as \begin{align} p^l &= \text{softmax}(\{\text{sim}(G^l_{s}(z_0), G^l_{s}(z_i))\}_{i=1}^N) \\ q^l &= \text{softmax}(\{\text{sim}(G^l_{s \rightarrow t}(z_0), G^l_{s \rightarrow t}(z_i))\}_{i=1}^N) \end{align} where $G^l$ is the generator activation at the $l^{th}$ layer and $\{z_i\}_{i=0}^N$ are latent vectors.
Note that $G_{s}$ and $G_{s \rightarrow t}$ correspond to the source and target domain generators, respectively, and $p^l$, $q^l$ are $N$-way discrete probability distributions consisting of $N$ pairwise similarities. Then, along with the adversarial objective $\mathcal{L}_{adv}$, they impose a KL-divergence-based regularization of the following form: \begin{equation} \mathcal{L}_{dist} = \mathbb{E}_{z \sim p_z(z)}[D_{KL}(q^l||p^l)]. \label{eqn:loss_dist} \end{equation} The benefits of this auxiliary objective are twofold: it prevents distance collapse in the target domain and transfers diversity from the source to the target via one-to-one correspondence. However, as visible from \cref{fig:cdc}, the synthesis quality is greatly affected by the semantic distance between the source and the target. Hence, we propose MixDL, which modifies CDC for pretraining-free few-shot image synthesis and provides consistent performance gains across different benchmarks. \subsection{Generator Latent Mixup} \label{subsection:G} In \cite{ojha2021few}, the anchor point $z_0$ could be chosen arbitrarily from the prior distribution $p_z(z)$ since they were transferring the rich structural diversity of the source domain to the target latent space. As this is no longer applicable in our setting, we propose to resort to diverse \textit{combinations} of the given samples. Hence, preserving the modes and learning an interpolable latent space are our two main desiderata. To this end, we define our anchor point using the Dirichlet distribution as follows: \begin{equation} z_0 = \sum_{i=1}^N c_i z_i, \quad \mathbf{c} \sim Dir(\alpha_1, \cdots , \alpha_N) \label{eqn:latent_interpolation} \end{equation} where $\mathbf{c} \triangleq [c_1, \cdots, c_N]^T$. Using \cref{eqn:latent_interpolation}, the latent space can be navigated in a quantitatively controlled manner. Defining the probability distribution of pairwise similarities as in \cite{ojha2021few}, we bind it to the interpolation coefficients $\mathbf{c}$ instead. The proposed distance loss is defined as follows: \begin{align} \mathcal{L}_{dist}^G &= \mathbb{E}_{z \sim p_z(z), \mathbf{c} \sim Dir(\mathbf{\alpha})}[D_{KL}(q^l||p)], \\ \label{eqn:ours_source} q^l &= \text{softmax}(\{\text{sim}(G^l(z_0), G^l(z_i))\}_{i=1}^N), \\ p &= \text{softmax}(\{c_i\}_{i=1}^N), \end{align} where $Dir(\mathbf{\alpha})$ denotes the Dirichlet distribution with parameters $\mathbf{\alpha}=(\alpha_1, \cdots, \alpha_N)$. This efficiently accomplishes our desiderata. Intuitively, unlike naive generators that gradually converge to a few modes, our regularization forces the generated samples to differ from each other by a controlled amount, making mode collapse very difficult. At the same time, we constantly explore our latent space with the continuous coefficient vector $\mathbf{c}$, explicitly enforcing smooth latent interpolation. An anchor point similar to \cite{ojha2021few} can be obtained with one-hot coefficients $\mathbf{c}$. \subsection{Discriminator Feature Space Alignment} \label{subsection:D} While the generator distance regularization can alleviate mode collapse and the stairlike latent space problem surprisingly well, the root cause of constrained diversity still remains unresolved, \textit{i.e.,}\ discriminator overfitting. As long as the discriminator delivers overconfident gradient signals to the generator based on the few examples it observes, generator outputs will be strongly pulled towards the small set of observed data.
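For concreteness, a minimal PyTorch-style sketch of the generator-side regularization of \cref{subsection:G} is given below. It is illustrative only: the helper name \texttt{mixdl\_g\_loss} is ours, cosine similarity is assumed for $\text{sim}(\cdot,\cdot)$, and activations from a single generator layer are used for brevity.
\begin{verbatim}
import torch
import torch.nn.functional as F

def mixdl_g_loss(G, z, alpha=1.0):
    # Illustrative sketch of the generator distance loss; helper
    # names and the single feature layer are our assumptions,
    # not the authors' exact implementation.
    N = z.size(0)
    conc = torch.full((N,), alpha, device=z.device)
    c = torch.distributions.Dirichlet(conc).sample()  # c ~ Dir(alpha)
    z0 = (c @ z).unsqueeze(0)          # anchor z_0 = sum_i c_i z_i
    # Forward the anchor and the N latents together; G is assumed
    # to return an intermediate activation per input.
    feat = G(torch.cat([z0, z], dim=0)).flatten(start_dim=1)
    sims = F.cosine_similarity(feat[0:1], feat[1:], dim=1)
    q = F.softmax(sims, dim=0)         # similarity distribution q
    log_p = F.log_softmax(c, dim=0)    # target p from coefficients
    return F.kl_div(log_p, q, reduction='sum')  # D_KL(q || p)
\end{verbatim}
The discriminator-side regularization introduced in this subsection reuses the same target distribution $p$, with similarities computed on a linear projection of the discriminator's penultimate features.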
To encourage the discriminator to provide smooth signals to the generator based on reasoning about continuous semantic distances, rather than simply memorizing the data points, we impose a similar regularization on its feature space. Formally, we define our discriminator as $D(x) = (d^{(2)}\circ d^{(1)})(x)$, where $d^{(2)}(x)$ refers to the final FC layer that outputs \{real, fake\}. When a set of generated samples $\{G(z_i)\}_{i=1}^N$ and the interpolated sample $G({z_0})$ are provided to $D$, we construct an $N$-way distribution similar to \cref{eqn:ours_source} as \begin{equation} r = \text{softmax}(\{\text{sim}(proj(d_0^{(1)}), proj(d_i^{(1)}))\}_{i=1}^N) \end{equation} where $proj$ refers to a linear projection layer widely used in the self-supervised learning literature \cite{chen2020simple,chen2021exploring,grill2020bootstrap} and $d_j^{(1)} \triangleq d^{(1)}(G(z_j))$. Without the linear projector, we found the constraint so rigid that it harms the overall output quality. We define our distance regularization for the discriminator as \begin{equation} \mathcal{L}_{dist}^D = \mathbb{E}_{z \sim p_z(z), \mathbf{c} \sim Dir(\mathbf{\alpha})} [D_{KL}(r||p)]. \end{equation} This regularization penalizes the discriminator for storing memorized real samples in arbitrary locations in the feature space and encourages the space to be aligned with relative semantic distances. Thus it makes memorization harder while guiding the discriminator to provide smoother and more semantically meaningful signals to the generator. \subsection{Final Objective} \label{subsection:Final} \cref{fig:overview} shows the overall concept of our method. Our final objective takes the form: \begin{equation} \mathcal{L}^G = \mathcal{L}_{adv}^G + \lambda_G\mathcal{L}_{dist}^G \end{equation} \begin{equation} \mathcal{L}^D = \mathcal{L}_{adv}^D + \lambda_D\mathcal{L}_{dist}^D \end{equation} where we generally set $\lambda_G=1000$ and $\lambda_D=1$. As our method is largely independent of the model architecture, we apply it to two existing models, StyleGAN2\footnote{https://github.com/rosinality/stylegan2-pytorch}\cite{karras2020analyzing} and FastGAN\cite{liu2020towards}. We keep their objective functions as they are and simply add our regularization terms. For StyleGAN2, we interpolate in $\mathcal{W}$ rather than $\mathcal{Z}$, as $\mathcal{W}$ has been shown to have better properties such as disentanglement \cite{wang2021high,zhu2020improved,alaluf2021restyle}. The mixup coefficients $\mathbf{c}$ are sampled from a Dirichlet distribution with all parameters equal to one. Patch-level discrimination \cite{isola2017image,ojha2021few} is applied to mixup images to encourage our generator to be \textit{creative} while exploring the latent space. \section{Experiments} \begin{figure}[t] \centering \includegraphics[width=0.95\textwidth]{Figure/main_rename.jpg} \caption{10-shot image generation results. While baseline methods either collapse or \textbf{simply replicate the training samples (yellow box)}, our method actively encourages the generator to explore semantic mixups of given samples, which enables synthesis of various unseen samples. } \label{fig:generated_samples} \end{figure} \noindent \textbf{Baselines} We mainly apply our method to the state-of-the-art unconditional GAN model, StyleGAN2 \cite{karras2020analyzing}.
Data augmentation techniques introduced by \cite{zhao2020differentiable} and \cite{karras2020training} show promising performance on the low-shot image generation task, so we evaluate them along with ours and refer to them as \textit{DA} and \textit{ADA}, respectively. We additionally apply our method to FastGAN~\cite{liu2020towards}, which is a lightweight GAN architecture that allows faster convergence with limited data. Although methods designed for alleviating mode collapse~\cite{benaim2017one,liu2019normalized,mao2019mode} are not directly targeted at the data-limited setting, we further adopt them as baselines considering the similarity in objective formulation. We implement them on StyleGAN2 for better synthesis quality and fair comparison. Transfer-based methods such as EWC~\cite{li2020few} and CDC~\cite{ojha2021few} fundamentally differ from ours as they require large-scale pretraining and thus are not directly comparable. However, we include CDC~\cite{ojha2021few} since our method adjusts it for a more general single-domain setting. \noindent \textbf{Datasets} For quantitative evaluation, we use Animal-Face Dog \cite{si2011learning}, Oxford-flowers \cite{nilsback2006visual}, FFHQ-babies \cite{karras2019style}, face sketches \cite{wang2008face}, Obama and Grumpy Cat \cite{zhao2020differentiable}, anime face \cite{liu2020towards} and Pokemon (pokemon.com, \cite{liu2020towards}). The aforementioned datasets contain 100 to 8189 samples, so we simulate the few-shot setting by randomly sampling 10 images, unless stated otherwise. For qualitative evaluation, we further experiment on paintings of Amedeo Modigliani \cite{yaniv2019face}, landscape drawings \cite{ojha2021few} and web-crawled images of Totoro. All images are $256 \times 256$. Additional synthesis results and information about the datasets can be found in the supplementary. \begin{figure}[t] \includegraphics[width=\linewidth]{Figure/low-shot.jpg} \centering \caption{ Uncurated collection of samples sharing the same training image as nearest neighbor. Images from the baselines are largely identical, but those produced by ours are all different. Numbers in parentheses indicate the dataset size. } \label{fig:div_under_const} \end{figure} \noindent \textbf{Evaluation Metrics} We measure \textit{Fréchet Inception Distance} (FID) \cite{heusel2017gans}, sFID~\cite{nash2021generating} and precision/recall~\cite{zhu2020improved} for datasets containing a sufficient number ($\geq100$) of samples, along with pairwise \textit{Learned Perceptual Image Patch Similarity} (LPIPS) \cite{zhang2018unreasonable}. For simulated few-shot tasks, the FID and sFID are computed against the full dataset as in \cite{li2020few,ojha2021few}. We further use LPIPS as a distance metric for demonstrating interpolation smoothness and mode preservation. \subsection{Qualitative Result} \cref{fig:generated_samples} shows generated samples from 10-shot training. We observe that baseline methods either collapse to a few modes or severely overfit to the training data, resulting in an inability to generate novel samples. Ours is the only method that produces a variety of convincing samples that are not present in the training set. Our method combines visual attributes such as hairstyle, beard and glasses in a natural way, producing distinctive samples under a harsh data constraint. The difference becomes more pronounced on closer inspection. In \cref{fig:div_under_const} we display uncurated sets of generated images along with their nearest-neighbor real images.
Samples from \textit{DistanceGAN}~\cite{benaim2017one} and \textit{FastGAN}~\cite{liu2020towards} are either defective or largely identical to the corresponding GT, but our method generates unique samples with recognizable visual features. We believe this is because our distance regularization enforces outputs from different latent vectors to differ from each other, proportionally to the relative distances in the latent space. \begin{table}[t] \caption{\textbf{Quantitative results on the 10-shot generation task.} FID and sFID are computed against the full dataset and LPIPS is calculated between generated samples. The best and second-best scores are in bold and underlined, respectively. Although CDC$^\dagger$ is not directly comparable as it leverages a pretrained generator (FFHQ), we include it for its relevance to our method. Clear performance drops are observed with an increased domain gap (\textit{e.g.,} FFHQ $\rightarrow$ Dogs). } \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l|ccc|ccc|ccc|ccc|ccc} \Xhline{3\arrayrulewidth} \multicolumn{1}{c|}{\multirow{2}{*}{Method}} & \multicolumn{3}{c|}{Anime Face} & \multicolumn{3}{c|}{Animal-Face Dog} & \multicolumn{3}{c|}{Oxford Flowers} & \multicolumn{3}{c|}{Face Sketches} & \multicolumn{3}{c}{Pokemon}\\ \cline{2-16} & FID $\downarrow$ & sFID $\downarrow$ & LPIPS $\uparrow$ & FID $\downarrow$ & sFID $\downarrow$ & LPIPS $\uparrow$ & FID $\downarrow$ & sFID $\downarrow$ & LPIPS $\uparrow$ & FID $\downarrow$ & sFID $\downarrow$ & LPIPS $\uparrow$ & FID $\downarrow$ & sFID $\downarrow$ & LPIPS $\uparrow$\\ \hline\hline FastGAN~\cite{liu2020towards} & 123.7 & 127.9 & 0.341 & 103.0 & 117.4 & 0.633 & 182.7 & 111.2 & 0.667 & 76.3 & 81.8 & 0.148 & 123.5 & 105.7 & 0.578 \\ StyleGAN2~\cite{karras2019style} & 166.0 & 111.4 & 0.363 & 177.5 & 127.7 & 0.569 & 177.3 & 143.0 & 0.537 & 94.2 & 84.4 & 0.435 & 257.6 & 136.5 & 0.439 \\ StyleGAN2 + DA~\cite{zhao2020differentiable} & 162.0 & 96.8 & 0.204 & 136.1 & 123.5 & 0.559 & 187.0 & 154.4 & 0.687 & 43.1 & 59.9 & 0.438 & 280.1 & 148.9 & 0.179 \\ StyleGAN2 + ADA~\cite{karras2020training} & 130.2 & 108.0 & 0.288 & 236.5 & 126.2 & 0.636 & 167.8 & 83.5 & 0.719 & 62.8 & 67.3 & 0.399 & 214.3 & 95.5 & 0.496 \\ \hline FastGAN + Ours & 107.6 & 98.5 & 0.478 & 99.8 & 111.7 & 0.625 & 180.5 & 75.5 & 0.657 & 45.0 & 58.0 & 0.416 & 144.0 & 118.3 & \underline{0.584} \\ StyleGAN2 + Ours & \underline{73.1} & \textbf{92.8} & 0.548 & 96.0 & \underline{99.9} & \underline{0.682} & 136.6 & 67.6 & \underline{0.734} & 39.4 & \textbf{43.3} & \underline{0.479} & \underline{117.0} & \textbf{57.7} & 0.539 \\ StyleGAN2 + DA + Ours & \textbf{70.2} & \underline{94.1} & \underline{0.551} & \underline{96.4} & 107.6 & \underline{0.682} & \underline{129.9} & \underline{66.9} & 0.705 & \textbf{35.6} & 50.1 & 0.471 & \textbf{114.3} & 79.0 & \textbf{0.607} \\ StyleGAN2 + ADA + Ours & 75.0 & 96.5 & \textbf{0.571} & \textbf{94.1} & \textbf{96.6} & \textbf{0.684} & \textbf{127.7} & \textbf{52.5} & \textbf{0.763} & \underline{39.2} & \underline{45.7} & \textbf{0.482} & 155.5 & \underline{65.7} & 0.544 \\ \hline StyleGAN2 + CDC$^\dagger$~\cite{ojha2021few} & 93.4 & 107.4 & 0.469 & 206.7 & 110.1 & 0.545 & 107.5 & 99.9 & 0.518 & 45.7 & 46.1 & 0.428 & 126.6 & 79.1 & 0.342\\ \Xhline{3\arrayrulewidth} \end{tabular} } \label{tab:quantitative} \end{table} \subsection{Quantitative Evaluation} \cref{tab:quantitative} shows FID, sFID and LPIPS scores for several low-shot generation methods~\cite{zhao2020differentiable,karras2020training,liu2020towards} on the 10-shot image
generation task. We can see that our method consistently outperforms the baselines, often with significant margins. Moreover, our regularizations can be applied concurrently with data augmentations to obtain further performance gains. Note that while StyleGAN2 armed with advanced data augmentations fails to converge from time to time, our method guarantees stable convergence to a better optimum across all datasets. Surprisingly, ours outperforms CDC~\cite{ojha2021few} on all metrics even when the two domains are closely related, \textit{e.g.,} \textit{anime face} and \textit{face sketches}. For dissimilar domains like \textit{pokemon}, CDC tends to sacrifice diversity (\textit{i.e.,} LPIPS) for better fidelity, which nevertheless falls short overall. We present training snapshots in the supplementary. \begin{table}[t] \caption{Quantitative comparison with diversity preservation methods on the 10-shot image generation task. MixDL is equivalent to \textit{StyleGAN2+Ours}.} \centering \resizebox{0.9\linewidth}{!}{ \begin{tabular}{c|ccc|ccc|ccc} \Xhline{3\arrayrulewidth} \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Anime Face} & \multicolumn{3}{c|}{Animal-Face Dog} & \multicolumn{3}{c}{FFHQ-babies} \\ \cline{2-10} & FID $\downarrow$ & sFID $\downarrow$ & LPIPS $\uparrow$ & FID $\downarrow$ & sFID $\downarrow$ & LPIPS $\uparrow$ & FID $\downarrow$ & sFID $\downarrow$ & LPIPS $\uparrow$ \\ \hline \hline N-Div~\cite{liu2019normalized} & 175.4 & 176.4 & 0.425 & 150.4 & 153.6 & 0.632 & 177.1 & 177.1 & 0.510 \\ MSGAN~\cite{mao2019mode} & 138.6 & 100.5 & 0.536 & 165.7 & 123.0 & 0.630 & 165.4 & 120.1 & 0.569 \\ DistanceGAN~\cite{benaim2017one} & 84.1 & 93.0 & 0.543 & 102.6 & 114.2 & 0.678 & 105.7 & 102.9 & 0.640 \\ MixDL (ours) & \textbf{73.1} & \textbf{92.8} & \textbf{0.548} & \textbf{96.0} & \textbf{99.9} & \textbf{0.682} & \textbf{83.4} & \textbf{73.9} & \textbf{0.643} \\ \Xhline{3\arrayrulewidth} \end{tabular} } \label{tab:quant_div} \end{table} \begin{figure}[t] \begin{floatrow} \capbtabbox{ \resizebox{1.0\linewidth}{!}{\scriptsize \begin{tabular}{c|cc|c|cc} \Xhline{3\arrayrulewidth} Dataset & Obama & Cat & Flowers & Obama & Cat\\ \hline Shot & 100 & 100 & 100 & 10 & 10\\ LPIPS & 0.615 & 0.613 & 0.795 & 0.598 & 0.598 \\ \hline StyleGAN2 & 63.1 & 43.3 & 192.2 & 174.7 & 76.4 \\ + DA & 46.9 & 27.1 & 91.6 & 66.8 & 45.6 \\ + Ours & 58.4 & 26.6 & 82.0 & 62.7 & 41.1 \\ + DA + Ours & \textbf{45.4} & \textbf{26.5} & \textbf{64.0} & \textbf{57.9} & \textbf{39.3} \\ \Xhline{3\arrayrulewidth} \end{tabular} }} {\caption{{\small FID comparison on low-shot benchmarks. LPIPS measures in-domain diversity.}} \label{tab:lowshot-benchmark} } \capbtabbox{% \resizebox{0.89\linewidth}{!}{\scriptsize \begin{tabular}{c|cc|cc} \Xhline{3\arrayrulewidth} \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Obama} & \multicolumn{2}{c}{Cat} \\ \cline{2-5} & Prec. & Rec. & Prec. & Rec. \\ \hline StyleGAN2 & 0.47 & 0.07 & 0.15 & 0.12 \\ +MixDL & \textbf{0.52} & \textbf{0.32} & \textbf{0.86} & \textbf{0.50} \\ \hline FastGAN & 0.90 & 0.36 & 0.90 & 0.43 \\ +MixDL & \textbf{0.91} & \textbf{0.47} & \textbf{0.91} & \textbf{0.50} \\ \Xhline{3\arrayrulewidth} \end{tabular} }} {\caption{Precision and recall metrics on 100-shot benchmarks.} \label{tab:prec-recall} } \end{floatrow} \end{figure} Additional quantitative comparison with diversity-preserving methods is displayed in \cref{tab:quant_div}. Although these methods have some similarities with ours, especially with MixDL-G, we can observe steady improvements with MixDL.
As the baselines are simply designed to minimize mode collapse, we believe they are relatively prone to memorization, which is a far more devastating issue in the few-shot setting. While the pretraining-free 10-shot image synthesis task has not been studied much, several works \cite{liu2020towards,zhao2020differentiable} have previously explored generative modeling with as few as 100 samples. We present quantitative evaluations on popular low-shot benchmarks in Table \ref{tab:lowshot-benchmark}. We observe that our method consistently improves the baseline, and the margin is larger for more challenging tasks, \textit{i.e.,} datasets with greater diversity or fewer training samples. We discuss experiments on these benchmarks in depth in \cref{sec:discussion}. \cref{tab:prec-recall} shows precision and recall~\cite{kynkaanniemi2019improved} for these benchmarks, where MixDL boosts scores especially in terms of diversity. \subsection{Ablation Study} We further evaluate the effects of the proposed regularizations, MixDL-G (generator) and MixDL-D (discriminator), through ablation under different settings. In \cref{tab:method_ablation}, we observe that in general, our regularizations both contribute to better quality and diversity, while in some special cases, only adding MixDL-G leads to a better FID score. We conjecture that aligning the discriminator's feature vectors with the interpolation coefficients can impose an overly strict constraint for some datasets. We nonetheless observe consistent improvements in diversity. \cref{fig:shot-ablation} shows the evaluation across different subset sizes. Since FFHQ-babies and Oxford-flowers contain more than 2,000 and 8,000 images respectively, we randomly sample subsets of size 10, 100 and 1,000. We can see that the performance of StyleGAN2 steadily improves with more training samples, but it consistently benefits from MixDL. Hence, we believe that with limited data in general, our method can be broadly used to improve model performance. Lastly, in \cref{tab:dirichlet_ablation}, the effect of using different Dirichlet concentration parameters and sampling distributions for mixup is illustrated. We find that setting $\alpha=1$ yields the best performance, so we uniformly use this throughout the experiments. \begin{figure}[t] \begin{floatrow} \capbtabbox{ \resizebox{1.0\linewidth}{!}{\scriptsize \begin{tabular}{cc|cc|cc|cc} \Xhline{3\arrayrulewidth} \multicolumn{2}{c|}{MixDL} & \multicolumn{2}{c|}{Dog (10-shot)} & \multicolumn{2}{c|}{Babies (100-shot)} & \multicolumn{2}{c}{Flowers (100-shot)}\\ \hline $G$ & $D$ & FID $\downarrow$ & LPIPS $\uparrow$ & FID $\downarrow$ & LPIPS $\uparrow$ & FID $\downarrow$ & LPIPS $\uparrow$ \\ \hline & & 177.5 & 0.569 & 131.0 & 0.574 & 192.2 & 0.747\\ & \checkmark & 118.4 & 0.649 & 83.4 & 0.638 & 94.1 & 0.775\\ \checkmark & & \textbf{95.4} & 0.673 & 71.7 & 0.638 & 84.0 & 0.780\\ \checkmark & \checkmark & 96.0 & \textbf{0.682} & \textbf{63.4} & \textbf{0.647} & \textbf{82.0} & \textbf{0.782}\\ \Xhline{3\arrayrulewidth} \end{tabular} }} {\caption{{Ablation on MixDL-G and MixDL-D.
The two regularizations combined generally yield the best performance.}} \label{tab:method_ablation} } \capbtabbox{% \resizebox{0.94\linewidth}{!}{\scriptsize \renewcommand{\arraystretch}{1.6} \begin{tabular}{c|ccc|c|c} \Xhline{3\arrayrulewidth} Distribution & \multicolumn{3}{c|}{Dirichlet} & Gaussian & Uniform \\ \hline Parameter & $\alpha=0.1$ & $\alpha=1$ & $\alpha=10$ & standard & - \\ \hline FID ($\downarrow$) & 76.4 & \textbf{73.1} & 80.8 & 76.0 & 74.8\\ LPIPS ($\uparrow$) & 0.536 & \textbf{0.548} & 0.532 & \textbf{0.548} & 0.546\\ \Xhline{3\arrayrulewidth} \end{tabular} \renewcommand{\arraystretch}{1} }} {\caption{Mixup coefficient sampling distribution ablation. We adopt $\alpha=1$ for simplicity.} \label{tab:dirichlet_ablation} } \end{floatrow} \end{figure} \begin{figure}[t] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Figure/chart.jpg} \caption{FID scores for different dataset sizes.} \label{fig:shot-fid} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Figure/chart-2.jpg} \caption{LPIPS for different dataset sizes.} \label{fig:shot-lpips} \end{subfigure} \caption{Shot ablation results. \textcolor{red}{Red} indicates FFHQ-babies and \textcolor{blue}{blue} represents flowers. Our method consistently improves both metrics with limited data.} \label{fig:shot-ablation} \end{figure} \subsection{Latent Space Smoothness} \label{sec:latent_smoothness} Smooth latent space interpolation is an important property of generative models that disproves overfitting and allows synthesis of novel data samples. As our proposed method focuses on diversity through latent smoothing, we quantitatively evaluate this using a variant of the Perceptual Path Length (PPL) proposed by \cite{karras2019style}. PPL was originally introduced as a measure of latent space disentanglement, under the assumption that a more disentangled latent space would show smoother interpolation behavior \cite{karras2019style}. As we wish to directly quantify latent space smoothness, we slightly modify the metric by taking 10 subintervals between any two latent vectors and measuring their perceptual distances. \cref{tab:ppl_uniform} reports the subinterval mean, standard deviation, and the mean for the full path (\textit{End}). Note that as PPL is a quadratic measure, the sum of the subinterval means can be smaller than the endpoint mean. All four models show similar endpoint means, suggesting that the overall total perceptual distance is consistent, while ours displays the lowest PPL standard deviation. As low PPL variance across subintervals is a direct sign of perceptually uniform latent transitions, we can verify the effectiveness of our method in smoothing the latent space. A similar insight can be found in \cref{fig:latent_interp}, where the baselines display \textit{stairlike} latent transitions while ours shows smooth semantic interpolation. More details on PPL computation can be found in the supplementary materials. \subsection{Preserving Diversity} \begin{figure}[t] \includegraphics[width=\linewidth]{Figure/interp_rename.jpg} \captionsetup{width=\linewidth} \caption{\textbf{Latent space interpolation result.} Ours shows smooth transitions with high quality while others show defective or abrupt transitions.} \label{fig:latent_interp} \end{figure} \begin{table}[t] \caption{\textbf{Perceptual Path Length uniformity.} We generate 5000 latent interpolation paths and subdivide each into 10 subintervals to compute perceptual distances.
The standard deviation (std) is computed across the subintervals, indicating the perceptual uniformity of latent transitions.} \centering \resizebox{0.8\linewidth}{!}{ \begin{tabular}{>{\centering}p{0.2\linewidth}|>{\centering}p{0.1\linewidth}>{\centering}p{0.1\linewidth}>{\centering}p{0.1\linewidth}|>{\centering}p{0.1\linewidth}>{\centering}p{0.1\linewidth}>{\centering\arraybackslash}p{0.1\linewidth}} \Xhline{3\arrayrulewidth} \multicolumn{1}{c|}{Dataset} & \multicolumn{3}{c|}{Landscape} & \multicolumn{3}{c}{Totoro}\\ \hline \multicolumn{1}{c|}{Metric} & Mean & Std. & End & Mean & Std. & End \\ \hline StyleGAN2 & 21.91 & 12.66 & 60.90 & 16.43 & 15.39 & 56.53 \\ DistanceGAN & 23.07 & 21.53 & 70.71 & 16.76 & 14.82 & 61.50 \\ FastGAN & 15.49 & 15.00 & 67.75 & 10.03 & 12.14 & 54.16 \\ MixDL & 12.82 & \textbf{4.19} & 64.28 & 11.75 & \textbf{6.44} & 56.83 \\ \hline \end{tabular} } \label{tab:ppl_uniform} \end{table} \begin{figure}[t] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Figure/chart-3.jpg} \caption{LPIPS in early iterations.} \label{fig:diversity1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Figure/chart_1.jpg} \caption{Number of unique NN training samples.} \label{fig:diversity2} \end{subfigure} \caption{Analysis on sample diversity. (a) shows that our method produces samples with greater diversity. (b) indicates the number of unique training samples that are the nearest neighbor to any of the generated samples. We generate 500 samples for the analysis. Since we train our model with 10 samples, the upper bound is 10. Training snapshots are available in the supplementary materials.} \label{fig:diversity} \end{figure} As opposed to \cite{ojha2021few}, which preserves diversity in the source domain, our method can be interpreted as preserving the diversity inherently present in the early stages throughout the course of training, by constantly exploring the latent space and enforcing relative similarities/differences between samples. To validate our hypothesis, we keep track of the pairwise LPIPS of generated samples and the number of \textit{modes} in the early iterations. \cref{fig:diversity} shows the result, where the number of \textit{modes} is represented by the number of unique training samples (real images) that are the nearest neighbor to any of the generated images. In \cref{fig:diversity1}, we can see that vanilla StyleGAN2 and our method show similar LPIPS in the beginning, but the baseline quickly loses diversity, as opposed to ours, which maintains a relatively high level of diversity throughout training. \cref{fig:diversity2} delivers a similar implication, namely that FastGAN trained with our method better preserves modes, and thus diversity, compared to the baseline. Combined with the latent space smoothness explained in \cref{sec:latent_smoothness}, generators equipped with MixDL learn a rich mode-preserving latent space with a smooth, interpolable landscape. This naturally allows for generative diversity, which is particularly appreciated under the constraint of extremely limited data. \section{Discussion} \label{sec:discussion} The trade-off between fidelity and diversity in GANs has been noted by many~\cite{brock2018large,karras2019style}. The truncation trick, a technique widely used in generative models, essentially shows that diversity can be traded for fidelity.
In the few-shot generation task, it is very straightforward to obtain near-perfect fidelity at the expense of diversity, as one can simply overfit the model, while generating diverse \textit{unseen} data points is very challenging. This implies that with only a handful of data, diversity should be credited no less than fidelity. However, we believe that the widely used low-shot benchmarks, e.g., 100-shot Obama and Grumpy Cat, inherently favor faithful reconstruction over audacious exploration. The main limitations we find in these datasets are twofold: (i) the intra-diversity is too limited, as they contain photos of a single person or object, evidenced by the low LPIPS in \cref{tab:lowshot-benchmark}, and (ii) FID is computed based on the 100 samples that were used for training. We acknowledge that (ii) is a common practice in generative models, but the problem with these benchmarks is that the number of samples is too limited, making it possible for some models to simply \textit{memorize} a large portion of them. Combined, these two factors result in benchmarks that allow relatively easy replication and reward it generously at the same time. In other words, we believe that a model's capacity to explore the continuous image manifold and \textit{be creative} can potentially backfire in these benchmarks. To address these limitations, in \cref{tab:lowshot-benchmark} we extend the benchmark with three additional datasets: 100-shot Oxford-flowers, 10-shot Obama and Grumpy Cat. The first one challenges the model with greater diversity, while the last two evaluate its capacity to learn the distribution in a generalizable manner, as the FID is still computed against the full 100 images. As our method mainly aims for modeling diversity, we observe marginal performance gains on the traditional benchmarks. However, on the extended benchmarks, our proposed method shows significant contributions, confirming that it excels at learning diversity even under challenging situations. \section{Conclusion} We propose MixDL, a set of distance regularizations that can be directly added on top of existing models for few-shot image generation. Unlike previous works, MixDL enables high-quality synthesis of novel images with as few as 5 to 10 training samples, even without any source domain pretraining. Thorough evaluations on diverse benchmarks consistently demonstrate the effectiveness of our framework. We hope our work facilitates future research on data-efficient generative modeling, which we believe has great upside in both academia and practical applications. \clearpage \renewcommand\thesection{\Alph{section}} \newcommand{\beginsupplement}{% \setcounter{table}{0} \renewcommand{\thetable}{S\arabic{table}}% \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}}% } \title{Supplementary Materials} \begin{comment} \titlerunning{ECCV-22 submission ID \ECCVSubNumber} \authorrunning{ECCV-22 submission ID \ECCVSubNumber} \author{Anonymous ECCV submission} \institute{Paper ID \ECCVSubNumber} \end{comment} \titlerunning{Few-shot Image Generation with MixDL} \author{} \institute{} \maketitle \beginsupplement \newcommand\nj[1]{\textcolor{red}{#1}} \section{Implementation Details} \noindent \textbf{StyleGAN2} We adopt the standard StyleGAN2 architecture\footnote{https://github.com/rosinality/stylegan2-pytorch} for $256\times256$ resolution images, with 8 fully connected layers in the mapping network.
We keep the hyperparameters, such as the learning rate and the regularization weights and frequency, untouched, and only add our proposed MixDL terms. \noindent \textbf{DiffAug} We essentially follow the official configuration\footnote{https://github.com/mit-han-lab/data-efficient-gans} for \textit{low-shot} generation, including the two-layer mapping network and three data augmentation methods. We also tried a standard 8-FC-layer mapping network and observed significant drops in overall performance, as shown in \cref{tab:diffaug-lowshot}. \begin{table}[h] \caption{FID for DiffAug with varying number of FC layers} \centering \begin{tabular}{c|cc} \Xhline{3\arrayrulewidth} FC layers & Obama (100-shot) & Grumpy Cat (100-shot)\\ \hline 2 & 46.87 & 26.52 \\ 8 & 71.13 & 38.42 \\ \hline \end{tabular} \label{tab:diffaug-lowshot} \end{table} \noindent \textbf{FastGAN} We use the official FastGAN implementation\footnote{https://github.com/odegeasslbc/FastGAN-pytorch} for $256\times256$ images. As FastGAN does not have a separate mapping network, we interpolate in the $\mathcal{Z}$ space. \noindent \textbf{Diversity Preservation Methods} Baselines such as \textit{Normalized Diversification (N-Div)} [28], \textit{Mode Seeking GAN (MSGAN)} [29] and \textit{DistanceGAN} [4] propose distance-preserving objectives to combat mode collapse. We train these models with the StyleGAN2 architecture for better synthesis quality and fair comparison. \noindent \textbf{MixDL} For MixDL, we alternate between the normal adversarial training step and the interpolation/regularization step. In the former, we go through normal image-level discrimination, and in the latter, we apply patch-level discrimination on the mixup samples and compute the losses for MixDL-G and MixDL-D. For patch discrimination, we largely adopt the implementation of Cross-domain Correspondence (CDC)\footnote{https://github.com/utkarshojha/few-shot-gan-adaptation}. Our linear projection layer for the discriminator operates on 512 dimensions. \noindent \textbf{Perceptual Path Length} For PPL computation, we mainly follow the implementation in StyleGAN2. The difference is that we subdivide a latent interpolation path into 10 subintervals and compute the perceptual distance for each line segment. Since the original PPL computation divides the perceptual distance by the squared step size, we divide each subinterval length by $0.1^2$. For clear demonstration, we divide the endpoint mean by $0.1^2$ as well. Note that the overall procedure is equivalent to calculating LPIPS multiplied by a factor of 100. The standard deviation is computed across the subintervals and averaged over the interpolation paths. \noindent \textbf{Number of Modes} We generate 500 samples and compute their perceptual distances to the 10 training samples. We record the index of the real sample with the smallest perceptual distance and report the unique count. It is visually apparent from \cref{fig:snapshots1} that our method boosts mode diversity. \section{Datasets} We present the datasets used in our work along with their sizes. \begin{table}[h] \caption{Number of shots used in each dataset.
\textbf{Names of datasets} are presented in the first and third rows, and their corresponding \textbf{numbers of shots} used in this paper are described in the second and fourth rows.} \centering \resizebox{0.95\linewidth}{!}{ \renewcommand{\arraystretch}{1.3} \begin{tabular}{c|c|c|c|c|c} \Xhline{3\arrayrulewidth} \makecell{Animal-Face\\Dog} & \makecell{Oxford Flowers} & \makecell{FFHQ Babies} & \makecell{Sketches} & \makecell{Obama} & \makecell{Grumpy\\Cat} \\ \hline 10 & 10, 100, 1000, 8192 & 10, 100, 1000, 2479 & 5, 10 & 10, 100 & 10, 100 \\ \hline \makecell{Pokemon} & \makecell{Amedeo Modigliani} & \makecell{Anime Face} & \makecell{Landscape} & \makecell{Totoro} & \makecell{} \\ \hline 10 & 10 & 10 & 10 & 5 & \\ \Xhline{3\arrayrulewidth} \end{tabular} \renewcommand{\arraystretch}{1.0} } \vspace{-2mm} \label{tab:dataset} \end{table} \section{Additional Evaluations with CDC [34]} We provide evaluation results for CDC [34] on two popular low-shot benchmarks, Obama and Cat (\cref{tab:additional}). To simulate the few-shot setting, we randomly sample 10 images from each dataset. Since CDC is pretrained on FFHQ, the domain gap is relatively small, especially for the Obama dataset. Nevertheless, we observe superior performance with MixDL. \begin{table}[h] \caption{FID, precision and recall are computed against the full dataset (with 100 images) while LPIPS is computed among the generated samples.} \centering \resizebox{0.95\linewidth}{!}{ \begin{tabular}{c|cccc|cccc} \Xhline{3\arrayrulewidth} & \multicolumn{4}{c|}{Obama (10-shot)} & \multicolumn{4}{c}{Cat (10-shot)} \\ \hline Model & FID($\downarrow$) & LPIPS($\uparrow$) & Prec.($\uparrow$) & Rec.($\uparrow$) & FID($\downarrow$) & LPIPS($\uparrow$) & Prec.($\uparrow$) & Rec.($\uparrow$) \\ \hline CDC & 75.0 & 0.490 & 0.47 & 0.07 & 45.3 & 0.451 & 0.52 & 0.10 \\ MixDL & \textbf{62.7} & \textbf{0.601} & \textbf{0.53} & \textbf{0.09} & \textbf{41.1} & \textbf{0.590} & \textbf{0.78} & \textbf{0.11} \\ \hline \end{tabular} } \label{tab:additional} \end{table} \section{Additional Baseline Comparisons} We present quantitative evaluation results with concurrent competitive baselines~\cite{tseng2021regularizing,cui2021genco} in combination with different data augmentations in \cref{tab:add_baselines}. We observe consistent benefits from MixDL. \begin{table}[h] \caption{Comparison with additional baselines.
MixDL consistently outperforms the others even without advanced augmentations.} \centering \resizebox{0.95\linewidth}{!}{\scriptsize \begin{tabular}{l|cc|cc|cc|cc} \Xhline{3\arrayrulewidth} \multicolumn{1}{c|}{Dataset} & \multicolumn{2}{c|}{Anime-face} & \multicolumn{2}{c|}{Dog} & \multicolumn{2}{c|}{Flower} & \multicolumn{2}{c}{Baby} \\ \hline \multicolumn{1}{c|}{Metric} & \multicolumn{1}{c}{FID} & LPIPS & \multicolumn{1}{c}{FID} & LPIPS & \multicolumn{1}{c}{FID} & LPIPS & \multicolumn{1}{c}{FID} & LPIPS \\ \hline LeCam + DA & \multicolumn{1}{c}{286.7} & 0.130 & \multicolumn{1}{c}{129.7} & 0.593 & \multicolumn{1}{c}{189.2} & 0.688 & \multicolumn{1}{c}{127.7} & 0.588 \\ GenCo + DA & \multicolumn{1}{c}{222.4} & 0.082 & \multicolumn{1}{c}{147.2} & 0.565 & \multicolumn{1}{c}{186.1} & 0.702 & \multicolumn{1}{c}{119.3} & 0.605 \\ MixDL + DA & \multicolumn{1}{c}{\textbf{70.2}} & \textbf{0.551} & \multicolumn{1}{c}{\textbf{96.4}} & \textbf{0.682} & \multicolumn{1}{c}{\textbf{129.9}} & \textbf{0.705} & \multicolumn{1}{c}{-} & - \\ \hline LeCam + ADA & \multicolumn{1}{c}{111.6} & 0.405 & \multicolumn{1}{c}{239.0} & 0.378 & \multicolumn{1}{c}{191.0} & 0.659 & \multicolumn{1}{c}{178.3} & 0.451 \\ GenCo + ADA & \multicolumn{1}{c}{93.7} & 0.450 & \multicolumn{1}{c}{112.4} & 0.652 & \multicolumn{1}{c}{194.0} & 0.673 & \multicolumn{1}{c}{103.8} & 0.570 \\ MixDL + ADA & \multicolumn{1}{c}{\textbf{75.0}} & \textbf{0.571} & \multicolumn{1}{c}{\textbf{94.1}} & \textbf{0.684} & \multicolumn{1}{c}{\textbf{127.7}} & \textbf{0.763} & \multicolumn{1}{c}{-} & - \\ \hline MixDL (no aug.) & \multicolumn{1}{c}{73.1} & 0.548 & \multicolumn{1}{c}{96.0} & 0.682 & \multicolumn{1}{c}{136.6} & 0.734 & \multicolumn{1}{c}{\textbf{83.4}} & \textbf{0.643} \\ \hline \end{tabular} } \label{tab:add_baselines} \vspace{-0.5mm} \end{table} \section{Training Snapshots} We provide training snapshots for FastGAN and StyleGAN2 for a visual demonstration of diversity and interpolation smoothness. \cref{fig:snapshots1} clearly shows that, as opposed to vanilla FastGAN, which rapidly loses diversity and converges to a few prototypes, MixDL successfully alleviates this behavior. \cref{fig:snapshots2} displays interpolation snapshots for StyleGAN2. In early training iterations, it does show relatively smooth latent transitions, but the sample quality is very unsatisfactory. As the training proceeds, the sample quality improves as the model \textit{overfits}, but consequently the interpolation smoothness is quickly lost. This describes the classic dilemma in few-shot generative modeling. In contrast, \cref{fig:snapshots3} shows that as MixDL is effective at maintaining latent space smoothness, it provides a sweet spot where reasonable sample quality and smooth latent transitions coexist. Note that models with MixDL do inevitably overfit in the end, but we can find a reasonable stopping point that produces diverse unseen samples with satisfactory visual quality. \section{Additional Generated Samples} We present latent interpolation results in \cref{fig:supp_interp2} and \cref{fig:supp_interp}. \cref{fig:supp_interp2} shows that MixDL yields smoother latent interpolation compared to baseline methods, which show the typical \textit{stairlike latent space}. \cref{fig:supp_interp} reaffirms this observation on various datasets. We note that the images of the Japanese animation character Totoro were crawled from the web, and 5 real samples were used.
Additional synthesis results from face paintings of Amedeo Modigliani and illustrations of Totoro are displayed in \cref{fig:supp_gen1} and \cref{fig:supp_gen3}, respectively. \section{Sample Images from Low-shot Benchmarks} In \cref{fig:low-shot-sample}, we present samples from the Obama and Grumpy Cat datasets. As they contain images of a single character, the intra-diversity is inherently very limited, which is also demonstrated by the LPIPS measure in Tab. 3 of the main paper. \section{Naive Application of GAN Adaptation} We display results from a naive application of CDC. Since it is very difficult to find a semantically similar source domain for datasets like Pokemon, we naively leverage the source generator trained on FFHQ. As the source and the target are semantically different, the adaptation does not yield satisfactory outcomes, as expected. We can observe the dilemma here as well: in the early iterations, the face shape learned in the source domain is clearly visible, while in later stages the face shape is no longer visible and the model collapses altogether. As CDC preserves distances in the target domain through the correspondence to the source domain, it is not applicable to domains that lack an adequate source dataset to transfer from. MixDL, on the other hand, improves upon CDC in that it enables training generative models with minimal overfitting and mode collapse, without leveraging source domain pretraining. Quantitative evaluations further support this claim, as in Tab. 1 of the main paper. \begin{figure}[t] \includegraphics[width=\linewidth]{supp/sup2_rename.jpg} \caption{ Training snapshots for FastGAN and FastGAN+MixDL in early iterations. As opposed to the base FastGAN, which rapidly loses diversity, our regularizations help preserve the modes throughout the course of training. Numbers on the left indicate training iterations. } \label{fig:snapshots1} \end{figure} \begin{figure*} \includegraphics[width=0.9\linewidth]{supp/interp_snap1.jpg} \caption{ Interpolation snapshots for StyleGAN2. Numbers on the left indicate training iterations. } \label{fig:snapshots2} \end{figure*} \begin{figure*} \includegraphics[width=0.9\linewidth]{supp/interp_snap2.jpg} \caption{ Interpolation snapshots for StyleGAN2+MixDL. } \label{fig:snapshots3} \end{figure*} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{supp/supp_interp3.jpg} \caption{ Interpolation examples. The baselines clearly display \textit{stairlike} latent transitions while ours shows smooth interpolation. } \label{fig:supp_interp2} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{supp/supp_interp.jpg} \caption{ More interpolation examples from MixDL. Numbers in the parentheses represent the number of training samples used for each dataset. } \label{fig:supp_interp} \end{figure} \begin{figure*} \includegraphics[width=\linewidth]{supp/supp_gen.jpg} \caption{ Samples from face paintings of Amedeo Modigliani. While the baselines simply replicate the given images, ours produces diverse unseen face images. \textit{Ours} represents samples from StyleGAN2+MixDL. } \label{fig:supp_gen1} \end{figure*} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{supp/supp_gen3.jpg} \caption{ MixDL generation results from 5-shot training on Totoro. Although there are only 5 training samples, it combines visual features in a natural way to produce diverse novel samples.
} \label{fig:supp_gen3} \end{figure} \begin{figure*} \includegraphics[width=0.95\linewidth]{supp/sup1-rearr.jpg} \captionsetup{width=\linewidth} \caption{ Random samples from low-shot benchmark datasets, Obama and Grumpy Cat. Since they contain photos of a single character, the intra-diversity is inherently constrained, rendering these benchmarks inappropriate to evaluate generative diversity. } \label{fig:low-shot-sample} \end{figure*} \begin{figure*} \includegraphics[width=0.95\linewidth]{supp/CDC1.jpg} \centering \captionsetup{width=\linewidth} \caption{ Naive application of CDC from FFHQ to Pokemon. As the authors have pointed out, the adaptation performance degrades when the two domains are semantically different, but it is not straightforward to find a transferable source domain for datasets like Pokemon. We observe clear human face shapes in the early stages \textit{(left)} and mode collapse in later stages \textit{(right)} where the face shape is no longer visible. } \label{fig:CDC1} \end{figure*} \clearpage \bibliographystyle{splncs04}
\section{Introduction} \label{sec:intro} Orthogonal frequency-division multiplexing (OFDM) is a multicarrier technique which has been widely used in many high-data-rate wireless communication standards such as Wireless Fidelity (Wi-Fi), Mobile Broadband Wireless Access (MBWA), Worldwide Interoperability for Microwave Access (WiMax), terrestrial digital TV systems, 3GPP Long Term Evolution (LTE), etc. A major problem of OFDM is its large peak-to-mean envelope power ratio (PMEPR) for uncoded signals. PMEPR reduction from a coding perspective can be achieved by designing a large codebook whose codewords, e.g., in the form of sequences, have low PMEPR values. This paper aims to reduce the PMEPR via codebooks consisting of complementary sequences, which are introduced in the sequel. A Golay complementary pair (GCP), introduced by M. J. E. Golay in \cite{golay1961}, refers to a pair of sequences whose aperiodic autocorrelation functions (AACFs) sum to zero at each non-zero time shift. Either sequence from a GCP is called a Golay sequence. The idea of GCPs was extended to complementary sets (CSs) by Tseng and Liu in \cite{chinchong}, where each CS consists of two or more constituent sequences, called complementary sequences. A PMEPR reduction method was introduced by Davis and Jedwab in \cite{Davis1999} to construct standard $2^h$-ary ($h$ is a positive integer) Golay sequences of length $2^m$ ($m$ is a positive integer) using second-order generalized Boolean functions (GBFs), comprising second-order cosets of the generalized first-order Reed-Muller (RM) code $RM_{2^h}(1,m)$. By applying the constructed Golay sequences to encode OFDM signals with a PMEPR of at most 2, Davis and Jedwab obtained $\frac{m!}{2}2^{h(m+1)}$ codewords, called the Golay-Davis-Jedwab (GDJ) code in this paper, for phase-shift keying (PSK) modulated OFDM signals, with good error-correcting capabilities and efficient encoding and decoding. Subsequently, Paterson employed complementary sequences to enlarge the code rate by relaxing the PMEPR of the OFDM signal in \cite{pater2000}. Specifically, Paterson showed that each coset of $RM_q(1,m)$ inside $RM_q(2,m)$ ($q$ is an even number no less than $2$) can be partitioned into CSs of size $2^{k+1}$ (where $k$ is a non-negative integer depending only on $G(Q)$, a graph naturally associated with the quadratic form $Q$ in $m$ variables which defines the coset) and provided an upper bound on the PMEPR of arbitrary second-order cosets of $RM_q(1,m)$. The construction given in \cite[Th. 12]{pater2000}\footnote{The full statement of \cite[Th. 12]{pater2000} is given in \textit{Lemma} \ref{lemmad}.} was unable to provide a tight PMEPR bound for all cases. By giving an improved version of \cite[Th. 12]{pater2000} in \cite[Th. 24]{pater2000}\footnote{The full statement of \cite[Th. 24]{pater2000} is given in \textit{Lemma} \ref{lemmae}.}, Paterson left the following question: \newline \textit{``What is the strongest possible generalization of \cite[Th. 12]{pater2000}?''} In \cite[Th. 24]{pater2000}, it was shown that after deleting $k$ vertices in $G(Q)$, if the resulting graph contains a path and one isolated vertex, then $Q+RM_q(1,m)$ can be partitioned into CSs of size $2^{k+1}$ instead of $2^{k+2}$, i.e., there is no need to delete the isolated vertex. Later, a generalization of \cite[Th. 12]{pater2000} was made by Schmidt in \cite{Schmid2007} to establish a construction of complementary sequences that are contained in higher-order generalized RM codes.
Schmidt showed in \cite{Schmid2007} that a GBF gives rise to a CS of a given size if the graphs of all \textit{restricted Boolean functions}\footnote{A restricted Boolean function of a GBF is obtained by fixing some variables of the GBF to some constants. If we restrict a GBF of $m$ variables over $k$ ($k<m$) fixed variables, the restriction can be done in $2^k$ ways. Corresponding to the $2^k$ restricted Boolean functions, there are $2^k$ graphs if the restricted Boolean functions are of order $2$.} of the GBF are paths. In Schmidt's construction, however, a CS cannot be generated from a GBF if at least one restricted Boolean function (among all the restricted Boolean functions of the GBF) has a graph that is not a path. In this case, further restrictions need to be carried out until the graphs of all restricted Boolean functions become paths. As a result, the CS set size increases and so does the PMEPR. Because of this, a considerable number of sequences were excluded from Schmidt's coding scheme. Hence, an improved version of \cite[Th. 5]{Schmid2007}\footnote{The full statement of \cite[Th. 5]{Schmid2007} is given in \textit{Lemma} \ref{lemmaf}.} or a more generalized version of \cite[Th. 12]{pater2000} is expected to extend the range of coding options with a good PMEPR bound for practical applications of OFDM. More constructions of CSs with low PMEPR have been proposed in the literature. In \cite{fiedler2008}, a framework has been presented to identify known Golay sequences and pairs of length $2^m$ ($m>4$) over $\mathbb{Z}_{2^h}$ in explicit algebraic normal form. \cite{tarokh} presents a lower bound on the PMEPR of a constant energy code as a function of its rate, distance, and length. The results in \cite{fiedler2008} and \cite{tarokh} provide better PMEPR bounds than the results in \cite{pater2000} and \cite{Schmid2007}. For multi-carrier code division multiple access (MC-CDMA), Liu \emph{et~al.} presented in \cite{liumc} a new class of mutually orthogonal CSs whose column sequences have PMEPR of at most 2, when each CS is arranged as a two-dimensional matrix (called a complementary matrix) whose rows constitute all of its complementary sequences in order. The low PMEPR property in Liu's construction is achieved by designing CSs such that every column sequence of a complementary matrix is a Golay sequence. Nowadays, besides polyphase complementary sequences, the design of quadrature amplitude modulation (QAM) complementary sequences with low PMEPR is also an interesting research topic. In \cite{imli}, QAM Golay sequences were introduced based on the quadrature phase-shift keying (QPSK) GDJ code. Later, Liu \emph{et~al.} constructed QAM Golay sequences by using properly selected Gaussian integer pairs \cite{liug}. Recently, some constructions of complementary or near-complementary sequences have been reported in \cite{sdas,Sdas_lett,pskaccess,psktcom,ara_bzcp_2018}. These sequences may also be applicable in OFDM systems to deal with the PMEPR problem, in addition to their applications in scenarios such as asynchronous communications and channel estimation. In this paper, we propose a construction to generate new polyphase CSs with low PMEPR and high code rate for OFDM systems by allowing both paths and isolated vertices in the graphs of certain restricted versions of higher-order GBFs. In our proposed construction, we restrict only a small number of vertices to obtain a tighter PMEPR bound.
For example, we obtain CSs with maximum PMEPR of $2^{k+1}$ and $2^{k+2}-2M$ in the presence of isolated vertices, whereas the PMEPR upper bound obtained from Schmidt's construction for the same sequences is at least $2^{k+p+1}$ (where $p$ is the number of isolated vertices present in the graphs of certain restricted Boolean functions). The introduction of ``isolated vertices'' is essential as it gives rise to higher sequence design flexibility and hence more complementary sequences for a larger code rate, as compared to Schmidt's construction. By moving to higher-order RM codes, we not only provide a partial answer to the aforementioned question raised by Paterson, but also extend the range of coding options for practical applications of OFDM. It is shown that our proposed construction includes Schmidt's construction, Paterson's construction, and the GDJ code construction as special cases. The remainder of the paper is organized as follows. In Section II, some useful notations and definitions are given. In Section III, a generalized construction of CSs is presented. Section IV contains some results which are obtained from our proposed construction. A graphical analysis of our proposed construction is presented in Section V. Section VI compares our proposed construction with existing constructions. Finally, concluding remarks are drawn in Section VII. \section{Preliminary} \label{sec:back} \subsection{Definitions of Correlations and Sequences} Let $\textbf{a}=(a_0,a_1,\hdots, a_{L-1})$ and $\textbf{b}=(b_0,b_1,\hdots,b_{L-1})$ be two complex-valued sequences of equal length $L$ and let $\tau$ be an integer. Define \begin{equation}\label{equ:cross} \begin{split} C(\textbf{a}, \textbf{b})(\tau)=\begin{cases} \sum_{i=0}^{L-1-\tau}a_{i+\tau}b^{*}_i, & 0 \leq \tau < L, \\ \sum_{i=0}^{L+\tau -1} a_ib^{*}_{i-\tau}, & -L< \tau < 0, \\ 0, & \textnormal{otherwise}, \end{cases} \end{split} \end{equation} and $A(\textbf{a})(\tau)=C(\textbf{a},\textbf{a})(\tau)$. The above-mentioned functions are called the aperiodic cross-correlation function between $\textbf{a}$ and $\textbf{b}$ and the AACF of $\textbf{a}$, respectively. \begin{definition} A set of $n$ sequences $\textbf{a}^0,\textbf{a}^1, \hdots ,\textbf{a}^{n-1}$, each of equal length $L$, is said to be a CS if \begin{equation} \begin{split} A(\textbf{a}^0)(\tau)\!+\!A(\textbf{a}^1)(\tau)\!+\!\hdots \!+\!A(\textbf{a}^{n-1})(\tau)\!\!=\!\! \begin{cases} nL, & \tau=0,\\ 0, & \textnormal{otherwise}. \end{cases} \end{split} \end{equation} \end{definition} A CS of size two is called a GCP. \subsection{PMEPR of OFDM signal} For $q$-PSK modulation, the OFDM signal for the codeword $\textbf{a}=(a_0,a_1,\hdots, a_{L-1})$ (where $a_i\in \mathbb{Z}_q$) can be modeled as the real part of \begin{equation}\nonumber S(\textbf{a})(t)=\displaystyle\sum_{j=0}^{L-1}\omega_q^{a_{j}} e^{2\pi \sqrt{-1}(f_0+jf_s)t}, \end{equation} where $\omega_q=\exp(2\pi \sqrt{-1}/q)$ is a complex $q$th root of unity and $f_0+jf_s$ $(0\leq j< L)$ is the $j$th carrier frequency of the OFDM signal. We define the instantaneous envelope power of the OFDM signal as \cite{pater2000} \begin{equation} \nonumber P(\textbf{a})(t)=|S(\textbf{a})(t)|^2.
\end{equation} From the above expression, it is easy to derive that \begin{equation} \begin{split} P(\textbf{a})(t)&=\displaystyle\sum_{\tau=1-L}^{L-1}A(\textbf{a})(\tau)\exp(2\pi \sqrt{-1}\tau f_s t)\\ &=A(\textbf{a})(0)+2\cdot \text{Re}\left\{ \displaystyle\sum_{\tau=1}^{L-1}A(\textbf{a})(\tau)\exp(2\pi \sqrt{-1}\tau f_s t)\right\}, \end{split} \end{equation} where $\text{Re}\{x\}$ denotes the real part of a complex number $x$. We define the PMEPR of the signal $S(\textbf{a})(t)$ as \begin{equation}\nonumber \textnormal{PMEPR}(\textbf{a})= \frac{1}{L}\displaystyle \sup_{0\leq f_s t<1}P(\textbf{a})(t). \end{equation} The largest value that the PMEPR of an $L$-subcarrier OFDM signal can take is $L$. \subsection{Generalized Boolean Functions} Let $f$ be a function of $m$ variables $x_0,x_1,\hdots,x_{m-1}$ over $\mathbb{Z}_q$. A monomial of degree $r$ is defined as the product of any $r$ distinct variables among $x_0,x_1,\hdots, x_{m-1}$. There are $2^m$ distinct monomials over $m$ variables, listed below: $1,x_0,x_1,\hdots,x_{m-1},x_0x_1,x_0x_2,\hdots,x_{m-2}x_{m-1},\hdots,$ $x_0x_1\hdots x_{m-1}$. A function $f$ is said to be a GBF of order $r$ if it can be uniquely expressed as a linear combination of monomials of degree at most $r$, where the coefficient of each monomial is drawn from $\mathbb{Z}_q$. A GBF of order $r$ can be expressed as \begin{equation}\label{rorder} f=Q+\sum_{i=0}^{m-1}g_ix_i+g', \end{equation} where \begin{equation} \begin{split} Q\!=\!\sum_{p=2}^{r}\sum_{0\leq \alpha_0\!<\!\alpha_1\!<\!\hdots\!<\!\alpha_{p-1}\!<\!m}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!a_{\alpha_0,\alpha_1,\hdots,\alpha_{p-1}}x_{\alpha_0}x_{\alpha_1}\hdots x_{\alpha_{p-1}}, \end{split} \end{equation} and $g_i,g',a_{\alpha_0,\alpha_1,\hdots,\alpha_{p-1}}\in \mathbb{Z}_q$. \subsection{Quadratic Forms and Graphs} Let $f$ be an $r$th-order GBF of $m$ variables over $\mathbb{Z}_q$. Assume $\textbf{x}=(x_{j_0},x_{j_1},\hdots,x_{j_{k-1}})$ and $\textbf{c}=(c_0,c_1,\hdots,c_{k-1})$. Then $f\big\arrowvert_{\textbf{x}=\textbf{c}}$ is obtained by substituting $x_{j_\alpha}=c_\alpha$ ($\alpha=0,1,\hdots,k-1$) in $f$. If $f\big\arrowvert_{\textbf{x}=\textbf{c}}$ is a quadratic GBF, then $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ denotes a graph with $V=\{x_0,x_1,\hdots,x_{m-1}\}\setminus \{x_{j_0},x_{j_1},\hdots,x_{j_{k-1}}\}$ as its set of vertices. The graph $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ is obtained by joining the vertices $x_{\alpha_1}$ and $x_{\alpha_2}$ by an edge if there is a term $q_{\alpha_1\alpha_2}x_{\alpha_1}x_{\alpha_2}$ ($0\leq \alpha_1<\alpha_2<m$, $x_{\alpha_1}$, $x_{\alpha_2}\in V$) in the GBF $f\big\arrowvert_{\textbf{x}=\textbf{c}}$ with $q_{\alpha_1\alpha_2}\neq 0$ ($q_{\alpha_1\alpha_2}\in \mathbb{Z}_q$). For $k=0$, $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ is nothing but $G(f)$. \subsection{Sequence Corresponding to a Boolean Function} Corresponding to a GBF $f$, we define a complex-valued vector (or sequence) $\psi(f)$ as follows: \begin{equation} \psi(f)=(\omega_q^{f_0}, \omega_q^{f_1},\hdots, \omega_q^{f_{2^m-1}}), \end{equation} where $f_i=f(i_0,i_1,\hdots,i_{m-1})$ and $(i_0,i_1,\hdots,i_{m-1})$ is the binary vector representation of the integer $i$ ($i=\sum_{\alpha=0}^{m-1}i_\alpha 2^\alpha$). Again, we define $\psi(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ as a complex-valued sequence whose $i$th component is $\omega_q^{f(i_0,i_1,\hdots,i_{m-1})}$ if $i_{j_\alpha}=c_\alpha$ for each $0\leq \alpha <k$, and zero otherwise.
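To make the above definitions concrete, the following NumPy sketch builds the sequence $\psi(f)$ of a GBF, verifies the complementary property by summing AACFs, and estimates the PMEPR of a codeword by densely sampling $|S(\textbf{a})(t)|^2$ via an oversampled DFT. The helper names are ours, and the example pair is the standard length-$16$ quaternary construction of \cite{Davis1999}; the sketch illustrates the definitions only and is not part of the proposed construction.
\begin{verbatim}
import numpy as np

def psi(f, m, q):
    # psi(f): the i-th entry is w_q^{f(i_0,...,i_{m-1})}, where
    # (i_0,...,i_{m-1}) is the binary expansion of i.
    w = np.exp(2j * np.pi / q)
    bits = [(np.arange(2**m) >> a) & 1 for a in range(m)]
    return w ** (f(*bits) % q)

def aacf(a):
    # A(a)(tau) for tau = 0, 1, ..., L-1 (cf. the AACF above).
    L = len(a)
    return np.array([np.sum(a[t:] * np.conj(a[:L - t]))
                     for t in range(L)])

def pmepr(a, oversample=16):
    # Approximate PMEPR over one period, 0 <= f_s t < 1.
    L = len(a)
    s = np.fft.fft(a, oversample * L)
    return np.max(np.abs(s) ** 2) / L

m, q = 4, 4  # path x_0 - x_1 - x_2 - x_3 with edge weight q/2
f  = lambda x0, x1, x2, x3: (q // 2) * (x0*x1 + x1*x2 + x2*x3)
fp = lambda x0, x1, x2, x3: f(x0, x1, x2, x3) + (q // 2) * x0

a, b = psi(f, m, q), psi(fp, m, q)
print(np.round(aacf(a) + aacf(b), 8))  # 2L = 32 at tau = 0, else 0
print(pmepr(a))                        # at most 2 for a Golay sequence
\end{verbatim}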
\begin{definition}[Effective-Degree of a GBF \cite{Schmid2007}] The effective-degree of a GBF $f:\{0,1\}^m\rightarrow \mathbb{Z}_{2^h}$ is defined as \begin{equation} \max_{0\leq i<h}[\deg\left(f\mod 2^{i+1}\right)-i]. \end{equation} \end{definition} Let $\mathcal{F}(r,m,h)$ be the set of all GBFs $f:\{0,1\}^m\rightarrow \mathbb{Z}_{2^h}$ of effective-degree at most $r$. Then the number of GBFs in $\mathcal{F}(r,m,h)$, denoted by $|\mathcal{F}(r,m,h)|$, is given by \cite{Schmid2007} \begin{equation} \log_2|\mathcal{F}(r,m,h)|=\sum_{i=0}^r h\binom{m}{i}+\sum_{i=1}^{h-1}(h-i)\binom{m}{r+i}. \end{equation} \begin{definition}[Effective-Degree RM Code \cite{Schmid2007}] For $0\leq r\leq m$, the effective-degree RM code is denoted by ERM$(r,m,h)$ and defined as \begin{equation} \textnormal{ERM}(r,m,h)=\{\psi(f): f\in \mathcal{F}(r,m,h)\}. \end{equation} \end{definition} \begin{definition}[Lee Weight and Squared Euclidean Weight] Let $\textbf{a}=(a_0,a_1,\hdots, a_{L-1})$ be a $\mathbb{Z}_{2^h}$-valued sequence. The Lee weight of $\textbf{a}$ is denoted by $wt_L(\textbf{a})$ and defined as \begin{equation} wt_L(\textbf{a})=\sum_{i=0}^{L-1}\min\{a_i,2^h-a_i\}. \end{equation} \end{definition} The squared Euclidean weight of $\textbf{a}$ (when the entries of $\textbf{a}$ are mapped onto a $2^h$-ary PSK constellation, i.e., $q=2^h$) is denoted by $wt^2_E(\textbf{a})$ and given by \begin{equation} wt^2_E(\textbf{a})=\sum_{i=0}^{L-1}|\omega_q^{a_i}-1|^2. \end{equation} Let $d_L(\textbf{a},\textbf{b})=wt_L(\textbf{a}-\textbf{b})$ and $d^2_E(\textbf{a},\textbf{b})=wt^2_E(\textbf{a}-\textbf{b})$ be the Lee and squared Euclidean distance between $\textbf{a},\textbf{b}\in \mathbb{Z}^L_{2^h}$, respectively. The symbols $d_L(\mathcal{C})$ and $d^2_E(\mathcal{C})$ will be used to denote the minimum distances (taken over all pairs of distinct sequences) of a code $\mathcal{C}\subseteq \mathbb{Z}^L_{2^h}$. Next, we present some lemmas which will be used in our proposed construction. \begin{lemma}[\cite{pater2000}]\label{lemmac} Let $f,g$ be GBFs of $m$ variables. Let $0\leq j_0<j_1<\hdots<j_{k-1}<m$ be a list of $k$ indices, and let $\textbf{c}=(c_0,c_1,\hdots, c_{k-1})$ and $\textbf{d}=(d_0,d_1,\hdots,d_{k-1})$ be two binary vectors. Write $\textbf{x}=(x_{j_0},x_{j_1},\hdots,x_{j_{k-1}})$, and let $0\leq i_0<i_1<\cdots< i_{l-1}<m$ be a set of indices disjoint from $\{j_0,j_1,\hdots,j_{k-1}\}$. Let $\textbf{y}=(x_{i_0},x_{i_1},\hdots,x_{i_{l-1}})$; then \begin{equation} \begin{split} C&\left (\psi(f\arrowvert_{\textbf{x}=\textbf{c}}),\psi(g\arrowvert_{\textbf{x}=\textbf{d}})\right )(\tau)\\&=\displaystyle\sum_{\textbf{c}_1,\textbf{c}_2} C\left(\psi(f\arrowvert_{\textbf{xy}=\textbf{cc}_1}),\psi(g\arrowvert_{\textbf{xy}=\textbf{dc}_2})\right)(\tau).
\end{split} \end{equation} where the sum is over all $\textbf{c}_1,\textbf{c}_2\in \{0,1\}^l$. \end{lemma} \begin{lemma}[\cite{stinch}] \label{lemmaa} Suppose that there are two GBFs $f$ and $f'$ of $m$ variables $x_0,x_1,\hdots,x_{m-1}$ over $\mathbb{Z}_q$, such that for some $k$ $(k\leq m-3)$ restricting variables $\textbf{x}=(x_{j_0},x_{j_1},\hdots, x_{j_{k-1}})$, $f\big\arrowvert _{\textbf{x}=\textbf{c}}$ and $f'\big\arrowvert_{\textbf{x}=\textbf{c}}$ are given by \begin{equation} \begin{split} f\big\arrowvert _{\textbf{x}=\textbf{c}}&=P+L+g_lx_l+g,\\ f'\big\arrowvert _{\textbf{x}=\textbf{c}}&=P+L+g_lx_l+\frac{q}{2}x_a+g, \end{split} \end{equation} where $L=\sum_{\alpha =0}^{m-k-2}g_\alpha x_\alpha$, both $G(f\big\arrowvert _{\textbf{x}=\textbf{c}})$ and $G(f'\big\arrowvert _{\textbf{x}=\textbf{c}})$ consist of a path over $m-k-1$ vertices, given by $G(P)$, $x_a$ is an end vertex of this path, $x_l$ is an isolated vertex, and $g_l, g\in \mathbb{Z}_q$. Then for fixed $\textbf{c}$ and $d_1\neq d_2$ $(d_1, d_2\in\{0,1\})$,\\ $C(f\big\arrowvert _{\textbf{x}x_l=\textbf{c}d_1},f\big\arrowvert _{\textbf{x}x_l=\textbf{c}d_2})(\tau)+ C(f'\big\arrowvert _{\textbf{x}x_l=\textbf{c}d_1},f'\big\arrowvert _{\textbf{x}x_l=\textbf{c}d_2})(\tau)$ \begin{equation} \begin{split} = \begin{cases} \omega_q^{(d_1-d_2)g_l}2^{m-k}, & \tau=(d_2-d_1)2^l, \qquad\qquad\qquad\qquad\qquad\qquad\\ 0, & \textnormal{otherwise}. \end{cases} \end{split} \end{equation} \end{lemma} \begin{lemma}[\cite{rati}]\label{lemmab} Let $\mathbf{d},\textbf{c}_1,\textbf{c}_2$ $\in \{0,1\}^k$. If $\textbf{c}_1\neq \textbf{c}_2$, then $\displaystyle\sum_{\textbf{d}}(-1)^{\textbf{d}\cdot(\textbf{c}_1+\textbf{c}_2)}=0$. \end{lemma} \begin{lemma}[\cite{Davis1999}] Let $f:\{0,1\}^m\rightarrow \mathbb{Z}_q$ (where $q=2^h$) be a quadratic GBF of $m$ variables. Suppose further that $G(f)$ is a path with $2^{h-1}$ being the weight of every edge, and let $x_a$ be an end vertex of this path. Then for any choice of $c,c'\in$ $\mathbb{Z}_{2^h}$, the pair $$\left(f+c,f+2^{h-1}x_a+c'\right)$$ forms a GCP. \end{lemma} \begin{lemma}[{\cite[Th. 12]{pater2000}}]\label{lemmad} Let $f:\{0,1\}^m\rightarrow \mathbb{Z}_q$ be a quadratic GBF of $m$ variables. Suppose further that $G(f)$ contains a set of $k$ distinct vertices labeled $j_0, j_1, \hdots, j_{k-1}$ with the property that deleting those $k$ vertices and their corresponding edges results in a path, and let $x_a$ be an end vertex of this path. Then for any choice of $g_i$, $g'\in $ $\mathbb{Z}_q$ \begin{equation} \left\{f+\frac{q}{2}\left(\sum_{\alpha=0}^{k-1}d_{\alpha}x_{j_{\alpha}}+dx_a \right): d_{\alpha}, d \in \{0,1\} \right\} \end{equation} is a CS of size $2^{k+1}$. \end{lemma} \begin{lemma}[{\cite[Th. 24]{pater2000}}]\label{lemmae} Let $f:\{0,1\}^m\rightarrow \mathbb{Z}_q$ be a quadratic GBF of $m$ variables. In addition, suppose that $G(f)$ contains a set of $k$ distinct vertices labeled $j_0, j_1, \hdots, j_{k-1}$ with the property that deleting those $k$ vertices and all their edges results in a path on $m-k-1$ vertices and an isolated vertex. Suppose further that all edges in the original graph between the isolated vertex and the $k$ deleted vertices are weighted by $q/2$. Let $x_a$ be an end vertex of this path. Then for any choice of $g_i$, $g'\in $ $\mathbb{Z}_q$ \begin{equation} \left\{f+\frac{q}{2}\left(\sum_{\alpha=0}^{k-1}d_{\alpha}x_{j_{\alpha}}+dx_a \right): d_{\alpha}, d \in \{0,1\} \right\} \end{equation} is a CS of size $2^{k+1}$. \end{lemma} \begin{lemma}[{\cite[Th. 5]{Schmid2007}}]\label{lemmaf} Let $f:\{0,1\}^m\rightarrow \mathbb{Z}_q$ be a GBF of $m$ variables.
Suppose further that for each $\textbf{c}\in \{0,1\}^k$, the graph $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ is a path on $m-k$ vertices. Suppose further that $q/2$ is the weight of each edge, and let $x_{\textbf{c}}$ be an end vertex of the path $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$. Then for any choice of $g_i$, $g'\in $ $\mathbb{Z}_q$ \begin{equation} \left\{f+\frac{q}{2}\left(\sum_{\alpha=0}^{k-1}d_{\alpha}x_{j_{\alpha}}+dx_{\textbf{c}} \right): d_{\alpha}, d \in \{0,1\} \right\} \end{equation} is a CS of size $2^{k+1}$, and hence $\psi(f)$ lies in a CS of size $2^{k+1}$. \end{lemma} \begin{lemma}[{\cite[Th. 9]{Schmid2007}}] \begin{equation} \begin{split} d_L(\textnormal{ERM}(r,m,h))&=2^{m-r},\\ d_E^2(\textnormal{ERM}(r,m,h))&=2^{m-r+2}\sin^2\left(\frac{\pi}{2^h}\right). \end{split} \end{equation} \end{lemma} \section{Proposed Constructions} In this section, we present a generalized construction of CS. For ease of presentation, whenever the context is clear, we use $C(f,g)(\tau)$ to denote $C(\psi(f),\psi(g))(\tau)$ for any two GBFs $f$ and $g$. The same shorthand is applied to restricted Boolean functions as well. \begin{theorem}\label{Theorem1} Let $f$ be a GBF of $m$ variables over $\mathbb{Z}_q$ with the property that there exist $M$ values of $\textbf{c}$ for which $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ is a path over $m-k$ vertices and there exist $N_i$ values of $\textbf{c}$ for which $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ consists of a path over $m-k-1$ vertices and one isolated vertex $x_{l_i}$, such that $M,N_i\geq 0$ and $M+\displaystyle\sum_{i=1}^pN_i=2^k$. Suppose further that all the relevant edges in $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ (for all $\textbf{c}$) have identical weight of $q/2$. Then for any choice of $g_i,g'\in \mathbb{Z}_q$, $\psi(f)$ lies in a set $S$ of size $2^{k+1}$ with the following aperiodic auto-correlation property: \begin{equation}\label{autocorrelationofS} \begin{split} A(S)(\tau)\!\!=\!\!\begin{cases} 2^{m+1}\displaystyle\sum_{i=1}^pN_i+2^{m+1}M, & \tau=0,\\ \omega_q^{g_{l_i}}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{L^{l_i}_{\textbf{c}}}, & \tau\!=\!2^{l_i},i\!\!=\!\!1,2,\hdots,p,\\ \omega_q^{-g_{l_i}}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{-L^{l_i}_{\textbf{c}}}, & \tau=-2^{l_i},i\!\!=\!\!1,2,\!\hdots\!,p,\\ 0, & \textnormal{otherwise}, \end{cases} \end{split} \end{equation} where $A(S)(\tau)$ denotes the sum of the AACFs of the sequences in $S$, $g_{l_i}$ ($\in \mathbb{Z}_q$, $i=1,2,\hdots,p$) is the coefficient of $x_{l_i}$ in $f$, $S_{N_i}$ contains all those $\textbf{c}$ for which $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ consists of a path over $m-k-1$ vertices and one isolated vertex labeled $l_i$ ($l_i\in\{0,1,\hdots,m-1\}\setminus\{j_0,j_1,\hdots,j_{k-1}\}$, and $l_1,l_2,\hdots,l_p$ are all distinct), and \begin{equation}\nonumber \begin{split} L^{l_i}_{\textbf{c}}=&\displaystyle\sum_{r=1}^k\sum_{0\leq i_1<i_2<\cdots<i_r<k}\!\!\!\!\!\!\!\!\!\!\!\!\!\! \varrho^{l_i}_{i_1,i_2,\hdots,i_r}c_{i_1}c_{i_2}\cdots c_{i_r}\!\quad\!(\varrho^{l_i}_{i_1,i_2,\hdots,i_r}\textnormal{'s}\in \mathbb{Z}_q). \end{split} \end{equation} \end{theorem} \begin{IEEEproof} See Appendix A. \end{IEEEproof} We have introduced $M$ and $N_i$ ($i=1,2,\hdots,p$) in \textit{Theorem} \ref{Theorem1} with the condition $M+\displaystyle\sum_{i=1}^pN_i=2^k$, $M,N_i\geq 0$. Therefore, $M$ and the $N_i$'s range from $0$ to $2^k$.
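The auto-correlation property (\ref{autocorrelationofS}) is easy to check numerically. The sketch below (our own illustrative Python code; the function names are ours, and the truth-table convention $i=\sum_\alpha i_\alpha 2^\alpha$ follows Section II) sums the AACFs of the set $S$ for a given GBF; running it on the GBF of \textit{Example} \ref{example_theorem1} below reproduces $A(S)(0)=64$ and $A(S)(\pm 2^3)=16$, with all other shifts vanishing.
\begin{verbatim}
import numpy as np
from itertools import product

def aacf(a, tau):
    # Aperiodic auto-correlation for 0 <= tau < L;
    # A(a)(-tau) is the complex conjugate of A(a)(tau).
    L = len(a)
    return np.sum(a[tau:] * np.conj(a[:L - tau]))

def set_aacf(f, m, q, offsets):
    # A(S)(tau) for S = { f + (q/2) * g : g in offsets }, where f and
    # each g map a bit tuple (i_0, ..., i_{m-1}) to Z_q.
    L = 2 ** m
    bits = [tuple((i >> al) & 1 for al in range(m)) for i in range(L)]
    seqs = [np.exp(2j * np.pi / q
                   * np.array([(f(b) + (q // 2) * g(b)) % q for b in bits]))
            for g in offsets]
    return [sum(aacf(s, tau) for s in seqs) for tau in range(L)]

# The GBF of Example 1: f = x0x1x3 + x0x2x3 + x0x1x2 + x1x2 over Z_2,
# with S = { f + d0 x0 + d x2 : d0, d in {0, 1} } (here q/2 = 1).
f = lambda x: (x[0]*x[1]*x[3] + x[0]*x[2]*x[3]
               + x[0]*x[1]*x[2] + x[1]*x[2]) % 2
S = [(lambda x, d0=d0, d=d: d0 * x[0] + d * x[2])
     for d0, d in product((0, 1), (0, 1))]
print([round(v.real) for v in set_aacf(f, 4, 2, S)])
# -> 64 at tau = 0, 16 at tau = 2^3 = 8, and 0 at every other shift.
\end{verbatim}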
\begin{remark}[Explicit Form of GBFs as Defined in \textit{Theorem} \ref{Theorem1}] The GBFs $f$, as defined in \textit{Theorem} \ref{Theorem1}, can be expressed as \begin{equation}\label{cgbf} \begin{split} \frac{q}{2}\sum_{\textbf{c}\in S_M}\sum_{i=0}^{m-k-2}x_{\pi_{\textbf{c}}(i)} x_{\pi_{\textbf{c}}(i+1)}\prod_{\alpha=0}^{k-1}x_{j_\alpha}^{c_\alpha}(1-x_{j_\alpha})^{(1-c_\alpha)}\\ +\frac{q}{2}\sum_{\delta=1}^p\sum_{\textbf{c}\in S_{N_{\delta}}}\sum_{i=0}^{m-k-3}x_{\pi^\delta_{\textbf{c}}(i)} x_{\pi^\delta_{\textbf{c}}(i+1)}\\ \prod_{\alpha=0}^{k-1}x_{j_\alpha}^{c_\alpha}(1-x_{j_\alpha})^{(1-c_\alpha)}\\+ \sum_{\delta=1}^p\sum_{r=1}^k\sum_{0\leq i_1<i_2<\cdots<i_r<k}\!\!\!\!\!\!\!\!\!\!\!\!\!\varrho^{l_\delta}_{i_1,i_2,\hdots,i_r}x_{j_{i_1}}x_{j_{i_2}}\cdots x_{j_{i_r}}x_{l_\delta}\\+ \sum_{r=2}^k\sum_{0\leq i_1<i_2<\cdots<i_r<k}\!\!\!\!\!\!\!\!\!\!\!\!\!\alpha_{i_1,i_2,\hdots,i_r}x_{j_{i_1}}x_{j_{i_2}}\cdots x_{j_{i_r}}\\+ \sum_{i=0}^{m-1}g_ix_i+g', \end{split} \end{equation} where $\pi^\delta_{\textbf{c}}$ are $N_\delta$ permutations of $\{0,1,\hdots,m-1\}\setminus \{j_0,j_1,\hdots,j_{k-1},l_\delta\}$ ($\delta=1,2,\hdots,p$), $\pi_{\textbf{c}}$ are $M$ permutations of $\{0,1,\hdots,m-1\}\setminus\{j_0,j_1,\hdots,j_{k-1}\}$, and the $\alpha_{i_1,i_2,\hdots,i_r}$'s belong to $\mathbb{Z}_q$. \end{remark} We illustrate \textit{Theorem} \ref{Theorem1} by the example below. \begin{example}\label{example_theorem1} Let $f$ be a GBF of $4$ variables over $\mathbb{Z}_2$, given by \begin{equation} \begin{split} f(x_0,x_1,x_2,x_3)&=x_0x_1x_3+x_0x_3x_2+x_0x_2x_1+x_1x_2. \end{split} \end{equation} The restricted Boolean functions $f\big\arrowvert_{x_0=0}$ and $f\big\arrowvert_{x_0=1}$ are given by \begin{equation} \begin{split} f\big\arrowvert_{x_0=0}&=x_1x_2,\\ f\big\arrowvert_{x_0=1}&=x_1x_3+x_3x_2, \end{split} \end{equation} respectively. \begin{figure}[!t] \centering \includegraphics[height=4cm]{cropped_graph_th13.pdf} \caption{The $G(f\big\arrowvert_{x_0=0})$ and $G(f\big\arrowvert_{x_0=1})$ of \textit{Example 1}.} \end{figure} Fig. 1 (a) and Fig. 1 (b) represent $G(f\big\arrowvert_{x_0=1})$ and $G(f\big\arrowvert_{x_0=0})$, respectively. It is clear that $G(f\big\arrowvert_{x_0=1})$ is a path over the variables $x_1$, $x_2$, and $x_3$. $G(f\big\arrowvert_{x_0=0})$ contains a path over the variables $x_1$, $x_2$ and one isolated vertex $x_3$. Therefore, $M=1$, $N_1=1$, $S_{N_1}=\{0\}$, $\varrho_0^3=0$, and $L^3_0=0$. By using \textit{Theorem} \ref{Theorem1}, we obtain the set $S$ given by \begin{equation} \begin{split} S&=\left \{f+\left(d_0x_0+dx_2\right): d_0,d\in \{0,1\}\right\}\\ &=\begin{bmatrix} + + + + + + - + + + + - + - - +\\ + - + - + - - - + - + + + + - -\\ + + + + - - + - + + + - - + + -\\ + - + - - + + + + - + + - - + + \end{bmatrix} \end{split} \end{equation} The AACF of $S$ is given by \begin{equation} \begin{split} A(S)(\tau)=\begin{cases} 64, & \tau=0,\\ 16, & \tau=\pm 2^3=\pm 8,\\ 0, & \textnormal{otherwise}. \end{cases} \end{split} \end{equation} \end{example} \begin{remark}\label{remark1} Let $f$ be a quadratic GBF with the property that for all $\textbf{c}\in \{0,1\}^k$, $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ is a path on $m-k$ vertices. Then from \textit{Theorem} \ref{Theorem1}, we have $M=2^{k}$ and \begin{equation} \begin{split} A(S)(\tau)=\begin{cases} 2^{m+k+1}, & \tau=0,\\ 0, & \textnormal{otherwise.} \end{cases} \end{split} \end{equation} Hence, $S$ is a CS of size $2^{k+1}$, and therefore Paterson's construction \cite[Th. 12]{pater2000} turns out to be a special case of our proposed one.
\end{remark} \begin{remark} From \textit{Remark} \ref{remark1}, for $k=0$, $S$ is a CS of size $2$, i.e., $S$ is a GCP, and thus the GDJ code in \cite{Davis1999} is also a special case of \textit{Theorem} \ref{Theorem1}. \end{remark} \begin{remark} Let $f$ be a quadratic GBF with the property that for all $\textbf{c}\in \{0,1\}^k$, $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ contains a path on $m-k-1$ vertices and one isolated vertex $x_{l_1}$. We also assume that all edges in the original graph between the isolated vertex and the $k$ deleted vertices are weighted by $q/2$. Then, from \textit{Theorem} \ref{Theorem1}, we have $N_1=2^{k}$, $S_{N_1}=\{0,1\}^k$, $L^{l_1}_{\textbf{c}}=\frac{q}{2}\sum_{\alpha=0}^{k-1}c_\alpha$, and \begin{equation} \begin{split} A(S)(\tau)&=\begin{cases} 2^{m+k+1}, & \tau=0,\\ \omega_q^{g_{l_1}}2^{m}\displaystyle\sum_{\textbf{c}\in S_{N_1}}\omega_q^{L^{l_1}_{\textbf{c}}}, & \tau=2^{l_1},\\ \omega_q^{-g_{l_1}}2^{m}\displaystyle\sum_{\textbf{c}\in S_{N_1}}\omega_q^{-L^{l_1}_{\textbf{c}}}, & \tau=-2^{l_1},\\ 0, & \textnormal{otherwise}, \end{cases}\\ &=\begin{cases} 2^{m+k+1}, & \tau=0,\\ 0, & \textnormal{otherwise}. \end{cases} \end{split} \end{equation} Therefore, $\psi(f)$ lies in a CS of size $2^{k+1}$, and the result given by Paterson in \cite[Th. 24]{pater2000} turns out to be a special case of \textit{Theorem} \ref{Theorem1}. \end{remark} \begin{remark} Let $f$ be a GBF with the property that for all $\textbf{c}\in \{0,1\}^k$, $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ is a path on $m-k$ vertices. Then from \textit{Theorem} \ref{Theorem1}, we have $M=2^{k}$ and \begin{equation}\label{stinspecial} \begin{split} A(S)(\tau)=\begin{cases} 2^{m+k+1}, & \tau=0,\\ 0, & \textnormal{otherwise.} \end{cases} \end{split} \end{equation} From (\ref{stinspecial}), it is clear that $\psi(f)$ lies in a CS of size $2^{k+1}$ and hence the PMEPR of $\psi(f)$ is at most $2^{k+1}$. Therefore, the result given by Schmidt in \cite[Th. 5]{Schmid2007} is a special case of \textit{Theorem} \ref{Theorem1}. \end{remark} \section{Construction of Complementary Sequences with Low PMEPR} In this section, we present two constructions of CSs which are derived from \textit{Theorem} \ref{Theorem1} and provide tighter PMEPR upper bounds than the bound obtained from Schmidt's construction \cite[Th. 5]{Schmid2007}. \begin{corollary}\label{corr_1} Let $f$ be a GBF as defined in \textit{Theorem} \ref{Theorem1} with the property that $N_i\equiv 0 \pmod 2$ ($i=1,2,\hdots,p$) and there exist $N_i/2$ values of $\textbf{c}$ in $S_{N_i}$ for which $L^{l_i}_{\textbf{c}}\equiv 0 \pmod q$, and $L^{l_i}_{\textbf{c}}\equiv \frac{q}{2} \pmod q$ for the remaining $N_i/2$ values of $\textbf{c}$ in $S_{N_i}$. Then for any choice of $g_i$, $g'\in$ $\mathbb{Z}_q$, \begin{equation} \left \{f+\frac{q}{2}\left(\textbf{d}\cdot\textbf{x}+dx_{\textbf{c}}\right):\textbf{d}\in \{0,1\}^k,d\in\{0,1\}\right\}, \end{equation} is a CS of size $2^{k+1}$. \begin{IEEEproof} Let \begin{equation} \begin{split} S=\left \{f+\frac{q}{2}\left(\textbf{d}\cdot\textbf{x}+dx_{\textbf{c}}\right):\textbf{d}\in \{0,1\}^k,d\in\{0,1\}\right\}.
\end{split} \end{equation} By using \textit{Theorem 1}, we have \begin{equation}\label{autocorrelation_comple_1} \begin{split} A(S)(\tau)\!\!=\!\!\begin{cases} 2^{m+1}\displaystyle\sum_{i=1}^pN_i+2^{m+1}M, & \tau=0,\\ \omega_q^{g_{l_i}}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{L^{l_i}_{\textbf{c}}}, & \tau\!=\!2^{l_i},i\!\!=\!\!1,2,\hdots,p,\\ \omega_q^{-g_{l_i}}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{-L^{l_i}_{\textbf{c}}}, & \tau=-2^{l_i},i\!\!=\!\!1,2,\!\hdots\!,p,\\ 0, & \textnormal{otherwise}. \end{cases} \end{split} \end{equation} Since there exist $N_i/2$ values of $\textbf{c}$ in $S_{N_i}$ for which $L^{l_i}_{\textbf{c}}\equiv 0 \pmod q$, the term $\omega_q^{L^{l_i}_{\textbf{c}}}$ takes the value $1$ for $N_i/2$ of them; similarly, $\omega_q^{L^{l_i}_{\textbf{c}}}$ takes the value $-1$ for the remaining $N_i/2$. Therefore, $\sum_{\textbf{c}\in S_{N_i}}\omega_q^{L^{l_i}_{\textbf{c}}}=0$. In the same way, we can show that $\sum_{\textbf{c}\in S_{N_i}}\omega_q^{-L^{l_i}_{\textbf{c}}}=0$. Hence, from (\ref{autocorrelation_comple_1}), we have \begin{equation}\label{comple_11} \begin{split} A(S)(\tau)=\begin{cases} 2^{m+k+1}, & \tau=0,\\ 0, & \textnormal{otherwise}. \end{cases} \end{split} \end{equation} From (\ref{comple_11}), $S$ is a CS of size $2^{k+1}$ and hence the PMEPR of each sequence lying in $S$ is at most $2^{k+1}$ \cite{pater2000}. \end{IEEEproof} \end{corollary} \begin{remark}[Explicit Form of GBFs as Defined in \textit{Corollary} \ref{corr_1}] To construct the GBFs as defined in \textit{Corollary} \ref{corr_1}, we only need to take care of the following term in (\ref{cgbf}): $$\displaystyle\sum_{\delta=1}^p\sum_{r=1}^k\sum_{0\leq i_1<i_2<\cdots<i_r<k}\!\!\!\!\!\!\!\!\!\!\!\!\!\varrho^{l_\delta}_{i_1,i_2,\hdots,i_r} x_{j_{i_1}}x_{j_{i_2}}\cdots x_{j_{i_r}}x_{l_\delta}$$ or, equivalently, $\sum_{\delta=1}^pL^{l_\delta}_{\mathbf{x}}x_{l_\delta}$. In this \textit{Remark}, we define $L^{l_\delta}_{\mathbf{x}}$ so that the GBFs associated with $L^{l_\delta}_{\mathbf{x}}$ meet the condition given in \textit{Corollary} \ref{corr_1}. To define $L^{l_\delta}_{\mathbf{x}}$, first we need to define some vectors as follows: $\textbf{c}^{l_\delta}_{\phi_t}=(c^{l_\delta}_{0,\phi_t},c^{l_\delta}_{1,\phi_t},\hdots,c^{l_\delta}_{k-1,\phi_t})\in S_{N_\delta}$, where $t=1,2,\hdots, N_\delta/2$, $\delta=1,2,\hdots,p$. Therefore, $\textbf{c}^{l_\delta}_{\phi_1}, \textbf{c}^{l_\delta}_{\phi_2},\hdots,\textbf{c}^{l_\delta}_{\phi_{N_\delta/2}}$ are any $N_\delta/2$ distinct elements in $S_{N_\delta}$. Now, we define \begin{equation}\label{lterm} L^{l_\delta}_{\mathbf{x}}=\frac{q}{2}\displaystyle\sum_{t=1}^{N_\delta/2}\prod_{\alpha=0}^{k-1}x_{j_\alpha}^{c^{l_\delta}_{\alpha,\phi_t}}(1-x_{j_\alpha})^{(1-c^{l_\delta}_{\alpha,\phi_t})}. \end{equation} From the above equation, it is clear that $L^{l_\delta}_{\mathbf{x}}=q/2$ for $\mathbf{x}=\textbf{c}^{l_\delta}_{\phi_t}$, $t=1,2,\hdots,N_\delta/2$, and $L^{l_\delta}_{\mathbf{x}}=0$ for the remaining $N_\delta/2$ elements in $S_{N_\delta}$. Therefore, the GBFs whose $L^{l_\delta}_{\mathbf{x}}$ terms are as defined in (\ref{lterm}) satisfy the conditions given in \textit{Corollary} \ref{corr_1}. \end{remark} \begin{remark} The construction of CSs given in \textit{Corollary \ref{corr_1}} is based on GBFs of any order. It is observed that \textit{Corollary \ref{corr_1}} can provide a tighter PMEPR upper bound than that given by Schmidt \cite[Th.
5]{Schmid2007} for a sequence corresponding to a GBF which satisfies the property given in \textit{Corollary} \ref{corr_1}. Below, we present an example to illustrate \textit{Corollary \ref{corr_1}}. \end{remark} \begin{example} Let $f$ be a GBF of $5$ variables over $\mathbb{Z}_4$, given by \begin{equation} \begin{split} f(x_0,x_1,x_2,x_3,x_4)=2\left(x_0x_1x_2+x_0x_1x_3+x_1x_3\right.\\\left.+x_3x_2+x_0x_4\right)+x_1+2x_2+2x_3+2x_4+3. \end{split} \end{equation} From the GBF $f$, we obtain the restricted Boolean functions as follows: \begin{equation}\label{exam_corr_1} \begin{split} f\big\arrowvert_{x_0=0}&=2(x_1x_3+x_3x_2)+x_1+2x_2+2x_3+2x_4+3,\\ f\big\arrowvert_{x_0=1}&=2(x_1x_2+x_2x_3)+x_1+2x_2+2x_3+3. \end{split} \end{equation} From (\ref{exam_corr_1}), it is observed that $G(f\big\arrowvert_{x_0=0})$ and $G(f\big\arrowvert_{x_0=1})$ both contain a path over the vertices $x_1,x_2,x_3$ and one isolated vertex $x_4$. Therefore, $p=1$, $N_1=2$, $S_{N_1}=\{0,1\}$, $\varrho_0^4=2$, $L^4_0=0$, and $L^4_1=2$. By \textit{Corollary \ref{corr_1}}, \begin{equation} \begin{split} S\!=\!&\left\{2\left(x_0x_1x_2\!+\!x_0x_1x_3+x_1x_3+x_3x_2+x_0x_4\right)+x_1\right.\\ &\left.+2x_2+2x_3+2x_4+3+2(d_0x_0+dx_1): d_0,d\in \{0,1\} \right\}, \end{split} \end{equation} is a CS of size $4$. Therefore, the PMEPR of $\psi(f)$ is at most $4$, whereas from Schmidt's construction the PMEPR upper bound of $\psi(f)$ is $8$. \end{example} \begin{corollary}\label{corr_2} Let $f$ be a GBF as defined in \textit{Theorem \ref{Theorem1}}, not necessarily satisfying the condition of \textit{Corollary} \ref{corr_1}. Then for any choice of $g_i$, $g'\in$ $\mathbb{Z}_q$, \begin{equation} \left \{f\!+\!\frac{q}{2}\left(\textbf{d}\!\cdot\!\textbf{x}\!+\!d'\sum_{i=1}^px_{l_i}\!+\!dx_{\textbf{c}}\right):\textbf{d}\!\in\! \{0,1\}^k,d,d'\!\in\!\{0,1\}\right\}, \end{equation} is a CS of size $2^{k+2}$ with PMEPR at most $2^{k+2}-2M$. \begin{IEEEproof} The set $S$ can be expressed as $S=S_1\cup S_2$, where \begin{equation} \begin{split} S_1&\!=\!\left \{f\!+\!\frac{q}{2}\left(\textbf{d}\!\cdot\!\textbf{x}\!+\!dx_{\textbf{c}}\right):\textbf{d}\!\in\! \{0,1\}^k,d\!\in\!\{0,1\}\right\},\\ S_2&\!=\!\left \{f\!+\!\frac{q}{2}\left(\textbf{d}\!\cdot\!\textbf{x}\!+\!\sum_{i=1}^px_{l_i}\!+\!dx_{\textbf{c}}\right)\!:\!\textbf{d}\!\in\! \{0,1\}^k,d\!\in\!\{0,1\}\right\}.
\end{split} \end{equation} By \textit{Theorem \ref{Theorem1}}, we have \begin{equation}\label{T2A1} \begin{split} A(S_1)(\tau)\!\!=\!\!\begin{cases} 2^{m+1}\displaystyle\sum_{i=1}^pN_i+2^{m+1}M, & \tau=0,\\ \omega_q^{g_{l_i}}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{L^{l_i}_{\textbf{c}}}, & \tau\!=\!2^{l_i},i\!\!=\!\!1,2,\hdots,p,\\ \omega_q^{-g_{l_i}}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{-L^{l_i}_{\textbf{c}}}, & \tau\!=\!-2^{l_i},i\!\!=\!\!1,2,\!\hdots\!,p,\\ 0, & \textnormal{otherwise}, \end{cases} \end{split} \end{equation} and \begin{equation}\label{T2A2} \begin{split} A(S_2)(\tau)\!\!&=\!\!\begin{cases} 2^{m+1}\displaystyle\sum_{i=1}^pN_i+2^{m+1}M, & \tau=0,\\ \omega_q^{\frac{q}{2}+g_{l_i}}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{L^{l_i}_{\textbf{c}}}, & \tau\!=\!2^{l_i},i\!\!=\!\!1,2,\hdots,p,\\ \omega_q^{-(\frac{q}{2}+g_{l_i})}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{-L^{l_i}_{\textbf{c}}}, & \tau\!=\!-2^{l_i},i\!\!=\!\!1,2,\!\hdots\!,p,\\ 0, & \textnormal{otherwise}, \end{cases}\\ \!\!&=\!\!\begin{cases} 2^{m+1}\displaystyle\sum_{i=1}^pN_i+2^{m+1}M, & \tau=0,\\ -\omega_q^{g_{l_i}}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{L^{l_i}_{\textbf{c}}}, & \tau\!=\!2^{l_i},i\!\!=\!\!1,2,\hdots,p,\\ -\omega_q^{-g_{l_i}}2^m\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{-L^{l_i}_{\textbf{c}}}, & \tau\!=\!-2^{l_i},i\!\!=\!\!1,2,\!\hdots\!,p,\\ 0, & \textnormal{otherwise}. \end{cases} \end{split} \end{equation} From (\ref{T2A1}) and (\ref{T2A2}), we have \begin{equation} \begin{split} A(S_1)(\tau)+A(S_2)(\tau)=\begin{cases} 2^{m+k+2}, & \tau=0,\\ 0, & \textnormal{otherwise}. \end{cases} \end{split} \end{equation} Therefore, $S$ is a CS of size $2^{k+2}$. From (\ref{T2A1}) and (\ref{T2A2}), it is observed that the PMEPR of the sequences that lie in $S_1$ and $S_2$ is at most $2^{k+1}+2\sum_{i=1}^pN_i$, i.e., $2^{k+2}-2M$. Since the set $S$ is the union of the two sets $S_1$ and $S_2$, the PMEPR of $S$ is at most $2^{k+2}-2M$. \end{IEEEproof} \end{corollary} \begin{remark} It is observed that \textit{Corollary} \ref{corr_2} can provide a tighter PMEPR upper bound than that of \cite[Th. 5]{Schmid2007} for a sequence corresponding to a GBF which satisfies the properties introduced in \textit{Corollary} \ref{corr_2}. \end{remark} \begin{example} Let $f$ be a GBF of $5$ variables $x_0,x_1,x_2,x_3,x_4$ over $\mathbb{Z}_2$, given by \begin{equation} f(x_0,x_1,x_2,x_3,x_4)=x_0x_1x_3+x_0x_3x_4+x_1x_3+x_3x_2. \end{equation} The restricted Boolean functions $f\big\arrowvert_{x_0=0}$ and $f\big\arrowvert_{x_0=1}$ are \begin{equation}\label{exa_2_eq} \begin{split} f\big\arrowvert_{x_0=0}&=x_1x_3+x_3x_2,\\ f\big\arrowvert_{x_0=1}&=x_4x_3+x_3x_2, \end{split} \end{equation} respectively. From (\ref{exa_2_eq}), it is clear that $G(f\big\arrowvert_{x_0=0})$ contains a path over the vertices $x_1,x_2,x_3$ with $x_4$ as an isolated vertex, and $G(f\big\arrowvert_{x_0=1})$ contains a path over the vertices $x_2,x_3,x_4$ with $x_1$ as an isolated vertex. Hence $p=2$, $M=0$, $N_1=1$, $N_2=1$, $S_{N_1}=\{0\}$, $S_{N_2}=\{1\}$, $\varrho_0^4=0$, $\varrho_0^1=0$, $L^4_0=0$, and $L^1_1=0$. By using \textit{Corollary \ref{corr_2}}, the set \begin{equation} \left\{f+\frac{q}{2}\left(d_0x_0+d'(x_1+x_4)+dx_2\right):d_0,d',d\in\{0,1\}\right\} \end{equation} is a CS of size $8$, where $x_2$ is a common end vertex of the two paths. Hence, by using \textit{Corollary \ref{corr_2}}, the PMEPR upper bound for $\psi(f)$ is $8$, whereas Schmidt's construction provides a PMEPR upper bound of $16$.
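This example can also be verified by computer; the short sketch below (our own illustrative Python code, using the truth-table convention $i=\sum_\alpha i_\alpha 2^\alpha$ of Section II) sums the AACFs of the eight sequences of the set above and confirms the CS property.
\begin{verbatim}
import numpy as np
from itertools import product

def aacf(a, tau):
    # Aperiodic auto-correlation for 0 <= tau < L.
    return np.sum(a[tau:] * np.conj(a[:len(a) - tau]))

# f of this example over Z_2 (so q/2 = 1 and entries are signs (-1)^f).
f = lambda x: (x[0]*x[1]*x[3] + x[0]*x[3]*x[4]
               + x[1]*x[3] + x[3]*x[2]) % 2
bits = [tuple((i >> al) & 1 for al in range(5)) for i in range(32)]
S = [np.array([(-1.0) ** ((f(b) + d0*b[0] + dp*(b[1] + b[4]) + d*b[2]) % 2)
               for b in bits])
     for d0, dp, d in product((0, 1), repeat=3)]
sums = [sum(aacf(s, tau) for s in S) for tau in range(32)]
print([round(v.real) for v in sums])  # 8 * 32 = 256 at tau = 0, 0 elsewhere
\end{verbatim}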
\end{example} \begin{example}\label{example_corr_th2} Let $f$ be a GBF of $6$ variables over $\mathbb{Z}_4$, given by \begin{equation}\label{example_2_eq} \begin{split} f(x_0,x_1,\hdots,x_5)&=2(x_0x_2x_3+x_0x_3x_4+x_0x_4x_5+x_0x_2x_4\\&+x_0x_1x_4+x_0x_1x_3+x_0x_3x_5+x_2x_4\\&+x_4x_1+x_1x_3+x_3x_5). \end{split} \end{equation} The restricted Boolean functions $f\big\arrowvert_{x_0=0}$ and $f\big\arrowvert_{x_0=1}$ are given by \begin{equation} \begin{split} f\big\arrowvert_{x_0=0}&=2(x_2x_4+x_4x_1+x_1x_3+x_3x_5),\\ f\big\arrowvert_{x_0=1}&=2(x_2x_3+x_3x_4+x_4x_5), \end{split} \end{equation} respectively. \begin{figure}[!t] \centering \includegraphics[height=10cm]{new_Paper_graph_1pd.pdf} \caption{The graphs of the restricted Boolean functions obtained from $f$.} \end{figure} It is clear that $G(f\big\arrowvert_{x_0=0})$ is a path and $G(f\big\arrowvert_{x_0=1})$ contains a path and the isolated vertex $x_1$. Therefore, $p=1$, $M=1$, $N_1=1$, $S_{N_1}=\{1\}$, and $L^1_1=0$. By using \textit{Corollary \ref{corr_2}}, the set \begin{equation} \begin{split} S\!=\!\left\{f+2(d_0x_0+d'x_1+dx_2): d_0,d,d'\in \{0,1\} \right\}, \end{split} \end{equation} is a complementary set of size $8$, and the PMEPR upper bound of the sequences lying in $S$ is $2^{k+2}-2M=6$. The $G(f\big\arrowvert_{x_0=0})$ and $G(f\big\arrowvert_{x_0=1})$ are represented by Fig. 2 (a) and Fig. 2 (b), respectively. Since $G(f\big\arrowvert_{x_0=1})$ contains the isolated vertex $x_1$, Schmidt's construction suggests deleting the isolated vertex $x_1$. After deleting the isolated vertex, i.e., restricting $x_1$, we obtain the following restricted Boolean functions: $f\big\arrowvert_{(x_0,x_1)=(0,0)}$, $f\big\arrowvert_{(x_0,x_1)=(0,1)}$, $f\big\arrowvert_{(x_0,x_1)=(1,0)}$ and $f\big\arrowvert_{(x_0,x_1)=(1,1)}$. The graphs $G(f\big\arrowvert_{(x_0,x_1)=(0,0)})$ and $G(f\big\arrowvert_{(x_0,x_1)=(0,1)})$ are represented by Fig. 2 (c), and $G(f\big\arrowvert_{(x_0,x_1)=(1,0)})$ and $G(f\big\arrowvert_{(x_0,x_1)=(1,1)})$ are represented by Fig. 2 (d). Again, a deletion step needs to be performed following Schmidt's construction. But after performing another deletion of vertices, the resulting graphs of the restricted Boolean functions will be represented by Fig. 2 (e) and Fig. 2 (g). The deletion process continues until the graph of every restricted Boolean function is a path or contains a single vertex. Therefore, from Schmidt's construction, the PMEPR upper bound of $\psi(f)$ is $64$, whereas from \textit{Corollary \ref{corr_2}}, the PMEPR upper bound of $\psi(f)$ is $6$. \end{example} \section{Graphical Analysis of the Proposed Construction} In this section, we present a relationship between our proposed construction and graphs. A graph can be represented by a pair of sets $(V,E)$, where $V$ is the set of vertices and $E$ is the set of edges present in the graph. As an example, the graph given in Fig. 1 (a) can also be expressed by $(V,E)$, where $V=\{x_1,x_2,x_3\}$ and $E=\{x_1x_3, x_2x_3\}$. The term $x_1x_3$ represents the edge between the vertices $x_1$ and $x_3$. Similarly, $x_2x_3$ represents the edge between the vertices $x_2$ and $x_3$. We say a graph $(V,E)$ is a path if the edges in $E$ form a path over all the vertices present in $V$. On the other hand, if there are vertices in $V$ which are not associated with any edge present in $E$, we call them isolated vertices of the graph $(V,E)$. As an example, in Fig. 1 (b), $V=\{x_1,x_2,x_3\}$ and $E=\{x_1x_2\}$, where the set $E$ does not contain any edge which includes the vertex $x_3$.
Hence, for Fig. 1 (b), we say $(V,E)$ is a graph containing a path and an isolated vertex. \begin{figure}[!h] \centering \includegraphics[height=7cm]{New_drawing_CS_2.pdf} \caption{The graphs of the restricted Boolean functions corresponding to the GBF given in (\ref{cgbf}).} \end{figure} As a generalization, in Fig. 3, $(V^M,E_{\textbf{c}}^M)=G(f\arrowvert_{\mathbf{x}=\textbf{c}})$, where $f$ is the GBF given in (\ref{cgbf}), $\textbf{c}\in S_M$, $V^M=\{x_0,x_1,\hdots,x_{m-1}\}\setminus \{x_{j_0},x_{j_1},\hdots,x_{j_{k-1}}\}$, and $E_{\textbf{c}}^M=\{x_{\pi_\textbf{c}(i)}x_{\pi_\textbf{c}(i+1)}:i=0,1,\hdots,m-k-2\}$. For any two distinct $\textbf{c}_1$, $\textbf{c}_2\in S_M$, the graphs $(V^M,E_{\textbf{c}_1}^M)$ ($=G(f\arrowvert_{\mathbf{x}=\textbf{c}_1})$) and $(V^M,E_{\textbf{c}_2}^M)$ ($=G(f\arrowvert_{\mathbf{x}=\textbf{c}_2})$) will be the same if the permutations $\pi_{\textbf{c}_1}$ and $\pi_{\textbf{c}_2}$ are equal. Otherwise, $E_{\textbf{c}_1}^M\neq E_{\textbf{c}_2}^M$, and hence $(V^M,E_{\textbf{c}_1}^M)$ and $(V^M,E_{\textbf{c}_2}^M)$ represent two different graphs. Similarly, $(V^{N_\delta},E_{\textbf{c}}^{N_\delta})=G(f\arrowvert_{\mathbf{x}=\textbf{c}})$, $\textbf{c}\in S_{N_\delta}$ ($\delta=1,2,\hdots,p$), $V^{N_\delta}=\{x_0,x_1,\hdots,x_{m-1}\}\setminus \{x_{j_0},x_{j_1},\hdots,x_{j_{k-1}},x_{l_\delta}\}$, and $E_{\textbf{c}}^{N_\delta}=\{x_{\pi_\textbf{c}^\delta(i)}x_{\pi_\textbf{c}^\delta(i+1)}:i=0,1,\hdots,m-k-3\}$, where $\pi_{\textbf{c}}^\delta$ ($\textbf{c}\in S_{N_\delta}$, $\delta=1,2,\hdots,p$) are as defined in (\ref{cgbf}). If a GBF has the same graphical property as given in Fig. 3 and also satisfies the conditions given in \textit{Corollary} \ref{corr_1}, the sequence corresponding to the GBF lies in a CS of size $2^{k+1}$, and so its PMEPR is upper bounded by $2^{k+1}$. Similarly, if a GBF meets the condition given in \textit{Corollary} \ref{corr_2} and also has the same graphical property as in Fig. 3, the sequence corresponding to the GBF lies in a CS of size $2^{k+2}$ with PMEPR at most $2^{k+2}-2M$. Now, we define the following sets of vertices: $P^M_\textbf{c}=\{x_{\pi_{\textbf{c}}(0)},x_{\pi_{\textbf{c}}(m-k-1)}\}$, $\textbf{c}\in S_M$, and $I^{N_\delta}_{\textbf{c}}=\{x_{\pi_{\textbf{c}}^\delta(0)},x_{\pi_{\textbf{c}}^\delta(m-k-2)}\}$, $\textbf{c}\in S_{N_\delta}$, $\delta=1,2,\hdots,p$. Schmidt's construction provides a PMEPR upper bound of $2^{k+p+1}$ for the sequences corresponding to the GBFs which have the following properties: \begin{itemize} \item The restricted Boolean functions of the GBF have the graphical properties given in Fig. 3. \item $x_{l_\delta}\in P^M_\textbf{c}~\forall \textbf{c}\in S_M,\delta=1,2,\hdots,p$. \item $x_{l_\delta}\in I^{N_{\delta_1}}_{\textbf{c}}~\forall\textbf{c}\in S_{N_{\delta_1}}, \delta_1\in \{1,2,\hdots,p\}\setminus\{\delta\}, \delta=1,2,\hdots,p$. \end{itemize} Otherwise, the PMEPR upper bound provided by Schmidt's construction will be strictly greater than $2^{k+p+1}$.
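The path/isolated-vertex terminology above is easy to operationalize. The sketch below (our own illustrative Python helper; the function name and return convention are ours) classifies a graph $(V,E)$ of a restricted Boolean function as a path, a path plus isolated vertices, or neither, and reproduces the classification of Fig. 1.
\begin{verbatim}
def classify(V, E):
    # Classify the graph (V, E) of a restricted GBF: returns whether the
    # non-isolated vertices form a single path, the list of isolated
    # vertices, and the end vertices of the path.
    adj = {v: set() for v in V}
    for u, w in E:
        adj[u].add(w)
        adj[w].add(u)
    isolated = sorted(v for v in V if not adj[v])
    path_part = [v for v in V if adj[v]]
    ends = sorted(v for v in path_part if len(adj[v]) == 1)
    ok = len(ends) == 2 and all(len(adj[v]) <= 2 for v in path_part)
    if ok:  # walk from one end; a path visits every non-isolated vertex
        seen, prev, cur = {ends[0]}, None, ends[0]
        while True:
            nxt = [w for w in adj[cur] if w != prev]
            if not nxt:
                break
            prev, cur = cur, nxt[0]
            seen.add(cur)
        ok = len(seen) == len(path_part)
    return ok, isolated, ends

# Fig. 1 (a): a path over x1, x3, x2 -> (True, [], ['x1', 'x2']).
print(classify({'x1', 'x2', 'x3'}, [('x1', 'x3'), ('x2', 'x3')]))
# Fig. 1 (b): path x1-x2 plus isolated x3 -> (True, ['x3'], ['x1', 'x2']).
print(classify({'x1', 'x2', 'x3'}, [('x1', 'x2')]))
\end{verbatim}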
\begin{table} \centering \caption{PMEPR Upper Bound for Different Values of $M$ and $p$.} \begin{tabular}{|l|l|l|l|l|l|} \hline $k$ & Construction & $M$ & $p$ & \multicolumn{2}{l|}{PMEPR upper bound} \\ \hline \multirow{6}{*}{1} & \multirow{2}{*}{\textit{Corollary} \ref{corr_1}} & \multirow{2}{*}{0} & \multirow{2}{*}{1} & Proposed & \cite{Schmid2007} \\ \cline{5-6} & & & & $4$ & $8$ \\ \cline{2-6} & \multirow{4}{*}{\textit{Corollary} \ref{corr_2}} & \multirow{2}{*}{0} & $1$ & $8$ & $8$ \\ \cline{4-6} & & & $2$ & $8$ & $\geq 16$ \\ \cline{3-6} & & $1$ & $1$ & $6$ & $\geq 8$ \\ \cline{3-6} & & $2$ & $0$ & $4$ & $4$ \\ \hline \multirow{14}{*}{2} & \multirow{3}{*}{\textit{Corollary} \ref{corr_1}} & \multirow{2}{*}{0} & $1$ & $8$ & $16$ \\ \cline{4-6} & & &$2$ & $8$ & $\geq 32$\\ \cline{3-6} & & $1$ &$1$ &$8$ & $\geq 16$\\ \cline{2-6} & \multirow{11}{*}{\textit{Corollary} \ref{corr_2}} & \multirow{4}{*}{0} & $1$ & $16$ & $16$ \\ \cline{4-6} & & & $2$ & $16$ & $\geq 32$ \\ \cline{4-6} & & & $3$ & $16$ & $\geq 64$ \\ \cline{4-6} & & & $4$ & $16$ & $\geq 128$ \\ \cline{3-6} & & \multirow{3}{*}{1} & $1$ & $14$ & $\geq 16$ \\ \cline{4-6} & & & $2$ & $14$ & $\geq 32$ \\ \cline{4-6} & & & $3$ & $14$ & $\geq 128$ \\ \cline{3-6} & & \multirow{2}{*}{2} & $1$ & $12$ & $\geq 16$ \\ \cline{4-6} & & & $2$ & $12$ & $\geq 32$ \\ \cline{3-6} & & $3$ & $1$ &$10$ & $\geq 16$ \\ \cline{3-6} & & $4$ & $0$ & $8$ & $8$ \\ \hline \end{tabular} \end{table} For different values of $M$ and $p$, we compare the PMEPR upper bounds obtained from \textit{Corollary} \ref{corr_1} and \textit{Corollary} \ref{corr_2} with \cite{Schmid2007} in TABLE I. \section{Code Rate Comparison With Existing Works} In this section, we compare our proposed construction with the constructions given in \cite{pater2000} and \cite{Schmid2007} in terms of code rate and PMEPR. \subsection{Comparison With \cite{pater2000}} In this subsection, we give a comparison of our proposed construction with \cite{pater2000} to show that the proposed construction can generate more sequences, i.e., a higher code rate, with a tighter PMEPR upper bound. It is observed that by using \textit{Corollary \ref{corr_1}}, we get at least $$\frac{m!}{2}\left[\frac{(m-2)!}{2}-1\right]q^{2m-3}(q-1)^2$$ complementary sequences of length $2^m$ with PMEPR at most $4$, and $$\frac{3m!}{4}\left[\frac{(m-3)!}{2}-1\right]q^{3m-8}(q-1)^2$$ complementary sequences of length $2^m$ with PMEPR at most $8$. By using \textit{Corollary \ref{corr_2}}, we get at least $$\left[2m!+\frac{m!(m-2)!(m-3)}{4}\right]q^{2m-2}(q-1)^2$$ complementary sequences with PMEPR at most $6$, and at least $$m(m-2)\left[\frac{(m-2)!}{2}\right]^2q^{2m-3}(q-1)^2$$ complementary sequences with PMEPR at most $8$. Now we define three codebooks $\mathcal{S}_1, \mathcal{S}_2, \mathcal{S}_3$, where $\mathcal{S}_1, \mathcal{S}_2$, and $ \mathcal{S}_3$ contain codewords of length $2^m$ over $\mathbb{Z}_q$ with PMEPR at most $4$, $6$, and $8$, respectively, as given in TABLE II. \begin{table}[h!]
\tabcolsep=0.1cm \centering \caption{PMEPRs and enumerations for codebooks $\mathcal{S}_1, \mathcal{S}_2, \mathcal{S}_3$} \begin{tabular}{|c|| c| c|} \hline Codebook & \makecell{PMEPR \\upper bound} & Size of Codebook\\ [0.5ex] \hline\hline $\mathcal{S}_1$ & 4 & $\frac{m!}{2}\left[\frac{(m-2)!}{2}-1\right]q^{2m-3}(q-1)^2$\\[0.5ex] \hline $\mathcal{S}_2$ & 6 & $\left[2m!+\frac{m!(m-2)!(m-3)}{4}\right]q^{2m-2}(q-1)^2$\\[0.5ex] \hline $\mathcal{S}_3$ & 8 & $\makecell{\frac{3m!}{4}\left[\frac{(m-3)!}{2}-1\right]q^{3m-8}(q-1)^2\\+m(m-2)\left[\frac{(m-2)!}{2}\right]^2q^{2m-3}(q-1)^2}$\\[0.5ex] \hline \end{tabular} \end{table} The code rate of a code-keying OFDM is defined as $\mathcal{R}(\mathcal{C}):=\frac{\log_2 |\mathcal{C}|}{L}$, where $|\mathcal{C}|$ and $L$ denote the size of the codebook $\mathcal{C}$ and the number of subcarriers, respectively. In TABLE III and TABLE V, code-rate comparisons with \cite{pater2000} are given. TABLE IV contains code-rates for the binary and quaternary cases with PMEPR at most $6$. \begin{table}[h!] \centering \caption{Code-rate comparison with codebook in \cite{pater2000} with PMEPR at most $4$ Over $\mathbb{Z}_q$} \begin{tabular}{|l||*{4}{c|}}\hline \diagbox[dir=NW]{$L=2^m$}{$\mathbb{Z}_q$} &\multicolumn{2}{c|}{\thead{$q=2$}}&\multicolumn{2}{c|}{\thead{$q=4$}}\\ \cline{2-5} &{\thead{Proposed}}&{\thead{\cite{pater2000}}} & {\thead{Proposed}}&{\thead{\cite{pater2000}}}\\\hline\hline $m=5$ &$0.4346$&$0.3440$&$0.7524$&$0.3750$\\\hline $m=6$ &$0.3274$&$0.2660$&$0.5175$&$0.2420$\\\hline $m=7$ &$ 0.2202$&$0.1800$&$0.3309$&$0.1480$\\\hline $m=8$ &$0.1398$&$0.1130$&$0.2030$&$0.0880$\\\hline $m=9$ &$0.0855$&$0.0660$&$0.1210$&$0.0510$\\\hline $m=10$ &$0.0509$&$0.0380$&$0.0706$&$0.0290$\\\hline \end{tabular} \end{table} \begin{table}[h!] \centering \caption{Code-rate for OFDM codes with PMEPR at most $6$ over $\mathbb{Z}_q$} \begin{tabular}{|l||*{2}{c|}}\hline \diagbox{$L=2^m$}{$\mathbb{Z}_q$} &\makebox[8em]{$q=2$}&\makebox[8em]{$q=4$}\\\hline\hline $m=4$ &$0.7442$&$1.3173 $\\\hline $m=5$ &$0.5384$&$0.8875$\\\hline $m=6$ &$0.3721$&$0.5779$\\\hline $m=7$ &$0.2440$&$0.3625$\\\hline $m=8$ &$0.1528 $&$0.2199$\\\hline $m=9$ &$0.0925 $&$0.1299$\\\hline $m=10$ &$0.0546$&$0.0753$\\\hline \end{tabular} \end{table} \begin{table}[h!] \tabcolsep=0.6cm \centering \caption{Code-rate comparison with codebook in \cite{pater2000} with PMEPR at Most $8$ Over $\mathbb{Z}_2$} \begin{tabular}{|l||*{2}{c|}}\hline \diagbox{$L=2^m$}{$\mathbb{Z}_q$} &\multicolumn{2}{c|}{\thead{$q=2$}} \\ \cline{2-3} &{\thead{Proposed}}&{\thead{\cite{pater2000}}}\\\hline\hline $m=7$ &$0.2371$&$0.1720$\\\hline $m=8$ &$0.1501$&$0.1170$\\\hline $m=9$ &$0.0916$&$0.072$\\\hline $m=10$ &$0.0544$&$0.043$\\\hline \end{tabular} \end{table} \subsection{Comparison With \cite{Schmid2007}} In this subsection, we present a comparison of our proposed construction with \cite{Schmid2007} to show that the proposed construction can provide a higher code rate for the same PMEPR upper bound. For $0\leq k<m$, $0\leq r\leq k+1$, and $h\geq 1$, a linear code $\mathcal{A}(k,r,m,h)$ \cite{Schmid2007} is defined to be the set of codewords corresponding to the set of polynomials \begin{equation} \begin{split} \left \{ \sum_{i=0}^{m-k-1}x_i g_i(x_{m-k},\hdots,x_{m-1})+g(x_{m-k},\hdots,x_{m-1}):\right.\\ \left. g_0,\hdots,g_{m-k-1}\in \mathcal{F}(r-1,k,h),g\in\mathcal{F}(r,k,h)\right \}.
\end{split} \end{equation} The number of codewords in $\mathcal{A}(k,r,m,h)$ is equal to $2^{s_k}$, where $$s_k=(m-k)\log_2|\mathcal{F}(r-1,k,h)|+\log_2|\mathcal{F}(r,k,h)|.$$ Now, $\mathcal{R}(k,m,h)$ \cite{Schmid2007} is defined to be the set of codewords associated with the following polynomials over $\mathbb{Z}_{2^h}$: \begin{equation} \begin{split} 2^{h-1}\sum_{\textbf{c}\in \{0,1\}^k}\sum_{i=0}^{m-k-2}x_{\pi_{\textbf{c}}(i)} x_{\pi_{\textbf{c}}(i+1)}\qquad\qquad\qquad\\ \prod_{j=0}^{k-1}x_{m-k+j}^{c_j}(1-x_{m-k+j})^{(1-c_j)}, \end{split} \end{equation} where $\pi_{\textbf{c}}$ are $2^k$ permutations of $\{0,1,\hdots,m-k-1\}$. For $m-k>1$ and $r>2-h$, the set $\mathcal{R}(k,m,h)$ contains $[ (m-k)!/2]^{2^{\min\{r+h-3,k\}}}$ codewords corresponding to a GBF of effective-degree at most $r$. The sequences in the cosets of $\mathcal{A}(k,r,m,h)$ with coset representatives in $\mathcal{R}(k,m,h)$ have PMEPR at most $2^{k+1}$, and the code has minimum Lee and squared Euclidean distances equal to $2^{m-r}$ and $2^{m-r+2}\sin^2(\frac{\pi}{2^h})$, respectively. \subsubsection{Code Construction by Using \textit{Corollary} \ref{corr_1}} For $0\leq k<m-1$, $0\leq r\leq k+1$, $0\leq l_1\leq m-k-1$ and $h\geq 1$, we define a linear code $\mathcal{A}_1(k,r,m,h)$ corresponding to the set of polynomials \begin{equation} \begin{split} \left \{ \sum_{i=0,\, i\neq l_1}^{m-k-1}x_i g_i(x_{m-k},\hdots,x_{m-1})+g(x_{m-k},\hdots,x_{m-1}):\right.\\ \left. g_i\in \mathcal{F}(r-1,k,h),g\in\mathcal{F}(r,k,h)\right \}. \end{split} \end{equation} $\mathcal{A}_1(k,r,m,h)$ contains $2^{s_{1,k}}$ codewords, where $$s_{1,k}=(m-k-1)\log_2|\mathcal{F}(r-1,k,h)|+\log_2|\mathcal{F}(r,k,h)|.$$ Since $\mathcal{A}_1(k,r,m,h)\subset \mathcal{A}(k,r,m,h)$, the minimum distances of $\mathcal{A}_1(k,r,m,h)$ can be lower bounded by $2^{m-r}$ and $2^{m-r+2}\sin^2(\frac{\pi}{2^h})$. Now, let $\mathcal{R}_1(k,m,h)$ be the set of codewords associated with the following polynomials: \begin{equation} \begin{split} 2^{h-1}\sum_{\textbf{c}\in \{0,1\}^k}\sum_{i=0}^{m-k-3}x_{\pi_{\textbf{c}}(i)} x_{\pi_{\textbf{c}}(i+1)}\\ \prod_{j=0}^{k-1}x_{m-k+j}^{c_j}(1-x_{m-k+j})^{(1-c_j)}\\+2^{h-1}x_{l_1}(e_0x_{m-1}+\cdots+e_{k-1}x_{m-k}), \end{split} \end{equation} where $\pi_{\textbf{c}}$ are $2^k$ permutations of $\{0,1,\hdots,m-k-1\}\setminus \{l_1\}$ and $e_0,\hdots,e_{k-1}\in \{0,1\}$, not all zero simultaneously. For $m-k>2$ and $r>2-h$, it can be shown that the set $\mathcal{R}_1(k,m,h)$ contains $(2^k-1)[(m-k-1)!/2]^{2^{\min(r+h-3,k)}}$ codewords corresponding to a GBF of effective-degree at most $r$. \begin{note}\label{note_1} Assume $m-k>2$. Let $2\leq r\leq k+2$ when $h=1$, $1\leq r\leq k+1$ when $h>1$, and $r'=\min\{r,k+1\}$. By using \textit{Corollary} \ref{corr_1}, it can be shown that any coset of $\mathcal{A}_1(k,r',m,h)$ with a coset representative in $\mathcal{R}_1(k,m,h)$ has PMEPR at most $2^{k+1}$. Now take the union of $(2^k-1)[(m-k-1)!/2]^{2^{\min(r+h-3,k)}}$ distinct cosets of $\mathcal{A}_1(k,r',m,h)$, each containing a word in $\mathcal{R}_1(k,m,h)$ with effective-degree at most $r$. The PMEPR of the corresponding polyphase codewords in this code is at most $2^{k+1}$. Since the code is a subcode of $\textnormal{ERM}(r,m,h)$, its minimum Lee and squared Euclidean distances are lower bounded by $2^{m-r}$ and $2^{m-r+2}\sin^2(\frac{\pi}{2^h})$, respectively.
\end{note} \subsubsection{Code Construction by Using \textit{Corollary} \ref{corr_2}} In this part, we consider the case $p\geq 2$, $M=0$ of \textit{Corollary} \ref{corr_2}. Let $\mathcal{R}_2(k,m,h)$ be the set of codewords associated with the following polynomials: \begin{equation} \begin{split} 2^{h-1}\sum_{\alpha=1}^p\sum_{\textbf{c}_\alpha\in S_{N_{\alpha}}}\sum_{i=0}^{m-k-3}x_{\pi_{\textbf{c}_\alpha}(i)} x_{\pi_{\textbf{c}_\alpha}(i+1)}\\ \prod_{j=0}^{k-1}x_{m-k+j}^{c^{\alpha}_j}(1-x_{m-k+j})^{(1-c^{\alpha}_j)}, \end{split} \end{equation} where $\textbf{c}_\alpha=(c^\alpha_0,\hdots,c^\alpha_{k-1})$, $\pi_{\textbf{c}_\alpha}$ are $N_\alpha$ permutations of $\{0,1,\hdots,m-k-1\}\setminus \{l_\alpha\}$, and $\sum_{\alpha=1}^p N_\alpha=2^k$. For $m-k>2$ and $r>2-h$, it can be shown that the set $\mathcal{R}_2(k,m,h)$ contains \begin{equation}\nonumber \begin{split} [(m\!\!-\!\!k\!\!-\!\!1)!/2]^{\min(2^{r+h-3},N_1)}\times[(m\!\!-\!\!k\!\!-\!\!1)!/2]^{\min(2^{r+h-3},N_2)}\times\\ \hdots\times [(m\!\!-\!\!k\!\!-\!\!1)!/2]^{\min(2^{r+h-3},N_p)} \end{split} \end{equation} codewords corresponding to a GBF of effective-degree at most $r$. \begin{note}\label{note_2} Assume $m-k>2$. Let $2\leq r\leq k+2$ when $h=1$, $1\leq r\leq k+1$ when $h>1$, and $r'=\min\{r,k+1\}$. By using \textit{Corollary} \ref{corr_2}, it can be shown that any coset of $\mathcal{A}(k,r',m,h)$ with a coset representative in $\mathcal{R}_2(k,m,h)$ has PMEPR at most $2^{k+2}$. It is also observed that the minimum Lee and squared Euclidean distances of the code $$\displaystyle\bigcup_{\textbf{a}\in \mathcal{R}_2(k,m,h)}\left(\textbf{a}+\mathcal{A}(k,r,m,h)\right)$$ are lower bounded by $2^{m-r}$ and $2^{m-r+2}\sin^2(\frac{\pi}{2^h})$, respectively. \end{note} \subsubsection{Code Construction With Maximum PMEPR $4$ and $8$} In this part, we construct codes with maximum PMEPR $4$ and $8$ by using the above discussed codes. \begin{corollary}[Code With Maximum PMEPR $4$]\label{corr_4} Assume $m>3$. Let $2\leq r\leq 3$ when $h=1$, $1\leq r\leq 2$ when $h>1$, and $r'=\min\{r,2\}$. Now, consider \begin{equation} \begin{split} \mathcal{C}\!\!=\!\!\left[\bigcup_{\textbf{a}_1\in \mathcal{R}(1,m,h)}\!\!\!\!\!\!\!\!\!\!\textbf{a}_1\!\!+\!\!\mathcal{A}(1,r',m,h)\right]\!\!\cup\!\! \left[\bigcup_{\textbf{a}_2\in \mathcal{R}_1(1,m,h)}\!\!\!\!\!\!\!\!\!\!\!\!\textbf{a}_2\!\!+\!\!\mathcal{A}_1(1,r',m,h)\right]. \end{split} \end{equation} The code $\mathcal{C}$ contains codewords (sequences) with PMEPR at most $4$; hence, the maximum PMEPR of $\mathcal{C}$ is $4$. We denote the number of codewords in the code by $|\mathcal{C}|$, where \begin{equation}\label{pmepr4} \begin{split} |\mathcal{C}|=\left(2^{s_1}\times[ (m-1)!/2]^{2^{\min\{r+h-3,1\}}}\right)\qquad\qquad\qquad\qquad\\ \qquad\qquad+\left(2^{s_{1,1}}\times[(m-2)!/2]^{2^{\min(r+h-3,1)}}\right). \end{split} \end{equation} Since $\mathcal{C}$ is a subcode of $\textnormal{ERM}(r,m,h)$, the minimum Lee and squared Euclidean distances of the code are lower bounded by $2^{m-r}$ and $2^{m-r+2}\sin^2(\frac{\pi}{2^h})$, respectively. \begin{table}[h!]
\centering \caption{Code-rate comparison with codebook in \cite{Schmid2007} with maximum PMEPR $4$ Over $\mathbb{Z}_{2^h}$} \begin{tabular}{ |c|c|c|c|c|c|c| } \hline $m$ & $h$ & $r$ &Proposed&\cite{Schmid2007}& $d_L$ & $d_E^2$ \\ \hline \hline $4$&$1$&$2$&$0.6060$&$0.5990$&$4$&$16.00$\\ & &$3$&$0.7010$&$0.6980$&$2$&$8.00$\\ &$2$&$1$&$0.9150$&$0.9120$&$8$&$16.00$\\ & &$2$&$1.2000$&$1.1980$&$4$&$8.00$\\ \hline $5$&$1$&$2$&$0.4270$&$0.4250$&$8$&$32.00$\\ & & $3$&$0.5373$&$0.5366$&$4$&$16.00$\\ &$2$&$1$&$0.6134$&$0.6120$&$16$&$32.00$\\ & &$2$&$0.8492$&$0.8491$&$8$&$16.00$\\ \hline $6$&$1$&$2$&$0.2809$&$0.2798$&$16$&$64.00$\\ & &$3$&$0.3723$&$0.3721$&$8$&$32.00$\\ &$2$&$1$&$0.3897$&$0.3892$&$32$&$64.00$\\ & & $2$&$0.5596$&$0.5596$&$16$&$32.00$\\ \hline \end{tabular} \end{table} From (\ref{pmepr4}), it is clear that the set size of the sequences with maximum PMEPR $4$ obtained from our proposed construction is larger than the set size given in \cite{Schmid2007}. In TABLE VI, we have compared the code rate of sequences with maximum PMEPR $4$ obtained from our proposed construction with that of the construction given in \cite{Schmid2007}. \end{corollary} \begin{corollary}[Code With Maximum PMEPR $8$]\label{corr_5} Suppose $m>4$. Let $2\leq r\leq 4$ when $h=1$, $1\leq r\leq 3$ when $h>1$, and $r''=\min\{r,3\}$. For the case $2\leq r\leq 3$ when $h=1$ and $1\leq r\leq 2$ when $h>1$, we consider the code $\mathcal{C}_1$, defined by \begin{equation} \begin{split} \mathcal{C}_1\!\!=\!\!\left[\bigcup_{\textbf{b}_1\in \mathcal{R}(2,m,h)}\!\!\!\!\!\!\!\!\!\!\textbf{b}_1\!\!+\!\!\mathcal{A}(2,r'',m,h)\!\!\right]\!\!\cup\!\! \left[\bigcup_{\textbf{b}_2\in \mathcal{R}_1(2,m,h)}\!\!\!\!\!\!\!\!\!\!\!\!\textbf{b}_2\!\!+\!\!\mathcal{A}_1(2,r'',m,h)\!\!\right]\\ \cup\left[\bigcup_{\textbf{b}_3\in \mathcal{R}_2(1,m,h)}\!\!\!\!\!\!\!\!\!\!\!\!\textbf{b}_3\!\!+\!\!\mathcal{A}(1,r',m,h)\!\!\right], \end{split} \end{equation} where \begin{equation}\label{pmepr81} \begin{split} |\mathcal{C}_1|=\left(2^{s_2}\times[ (m-2)!/2]^{2^{\min\{r+h-3,2\}}}\right)\qquad\qquad\qquad\qquad\\ \qquad\qquad+\left(3\times 2^{s_{1,2}}\times[(m-3)!/2]^{2^{\min(r+h-3,2)}}\right)\\ +\left(2^{s_1}\times[ (m-2)!/2]^{2\times \min\{2^{r+h-3},1\}} \right). \end{split} \end{equation} Since $\mathcal{C}_1$ is a subcode of $\textnormal{ERM}(r,m,h)$, the minimum Lee and squared Euclidean distances of the code are lower bounded by $2^{m-r}$ and $2^{m-r+2}\sin^2(\frac{\pi}{2^h})$, respectively. For $r=4$ when $h=1$ and $r=3$ when $h>1$, we consider the code $\mathcal{C}_2$, defined by \begin{equation} \begin{split} \mathcal{C}_2\!\!=\!\!\left[\bigcup_{\textbf{b}_1\in \mathcal{R}(2,m,h)}\!\!\!\!\!\!\!\!\!\!\textbf{b}_1\!\!+\!\!\mathcal{A}(2,r'',m,h)\!\!\right]\!\!\cup\!\! \left[\bigcup_{\textbf{b}_2\in \mathcal{R}_1(2,m,h)}\!\!\!\!\!\!\!\!\!\!\!\!\textbf{b}_2\!\!+\!\!\mathcal{A}_1(2,r'',m,h)\!\!\right], \end{split} \end{equation} where \begin{equation}\label{pmepr82} \begin{split} |\mathcal{C}_2|=\left(2^{s_2}\times[ (m-2)!/2]^{2^{\min\{r+h-3,2\}}}\right)\qquad\qquad\qquad\qquad\\ \qquad\qquad+\left(3\times 2^{s_{1,2}}\times[(m-3)!/2]^{2^{\min(r+h-3,2)}}\right). \end{split} \end{equation} Since $\mathcal{C}_2$ is a subcode of $\textnormal{ERM}(r,m,h)$, the minimum Lee and squared Euclidean distances of the code are lower bounded by $2^{m-r}$ and $2^{m-r+2}\sin^2(\frac{\pi}{2^h})$, respectively. \begin{table}[h!]
\centering \caption{Code-rate comparison with codebook in \cite{Schmid2007} with maximum PMEPR $8$ Over $\mathbb{Z}_{2^h}$} \begin{tabular}{ |c|c|c|c|c|c|c| } \hline $m$ & $h$ & $r$ &Proposed&\cite{Schmid2007}& $d_L$ & $d_E^2$ \\ \hline \hline $5$&$1$&$2$&$0.4741$&$0.4558$&$8$&$32.00$\\ & & $3$&$0.6007$&$0.5991$&$4$&$16.00$\\ & & $4$&$0.6982$&$0.6981$&$2$&$8.00$\\ &$2$&$1$&$0.6596$&$0.6432$&$16$&$32.00$\\ & &$2$&$1.006$&$1.005$&$8$&$16.00$\\ & & $3$&$1.1981$&$1.1981$&$4$&$8.00$\\ \hline $6$&$1$&$2$&$0.3198$&$0.3060$&$16$&$64.00$\\ & &$3$&$0.4249$&$0.4245$&$8$&$32.00$\\ & & $4$&$0.5366$&$0.5366$&$4$&$16.00$\\ &$2$&$1$&$0.4286$&$0.4154$&$32$&$64.00$\\ & & $2$&$0.6746$&$0.6745$&$16$&$32.00$\\ & &$3$&$0.8491$&$0.8491$&$8$&$16.00$\\ \hline \end{tabular} \end{table} From (\ref{pmepr81}) and (\ref{pmepr82}), it is clear that our proposed construction can provide a larger number of sequences than the construction given in \cite{Schmid2007}. In TABLE VII, we have compared the code rate of sequences with maximum PMEPR $8$ obtained from our proposed construction with that of the construction given in \cite{Schmid2007}. \end{corollary} \section{Conclusions} In this paper, we proposed a direct and generalized construction of polyphase CS by using higher-order GBFs and the concept of isolated vertices. The proposed construction provides tighter PMEPR upper bounds for codewords and a higher code rate, while maintaining the same minimum code distances, as compared to Schmidt's construction. We have shown that our proposed construction gives rise to sequences with a maximum PMEPR of $4$ in \textit{Corollary} \ref{corr_1} and of $8$ in both \textit{Corollary} \ref{corr_1} and \textit{Corollary} \ref{corr_2}. In addition, we have obtained sequences with a maximum PMEPR of $6$ in \textit{Corollary} \ref{corr_2}. The constructions given by Davis and Jedwab \cite{Davis1999}, Paterson \cite{pater2000}, and Schmidt \cite{Schmid2007} appear as special cases of our proposed construction. \begin{appendices} \section{Proof of \textnormal{\textit{Theorem} \ref{Theorem1}}} Let $\mathbf{x}=(x_{j_0},x_{j_1},\hdots, x_{j_{k-1}})$ and $\mathbf{d}=(d_0,d_1,\hdots, d_{k-1})$. Then $\mathbf{d}\cdot\mathbf{x}=\displaystyle{\sum_{\alpha=0}^{k-1}} d_{\alpha}x_{j_{\alpha}}$. Define \begin{equation} S=\left \{f+\frac{q}{2}\left(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}\right): \mathbf{d}\in \{0,1\}^k, d\in\{0,1\}\right\}, \end{equation} where $x_\textbf{c}$ is an end vertex of the path which is contained in $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$. Now, the sum of the AACFs of the sequences from the set $S$ can be written as \begin{equation}\label{A=L_1+L_2} \displaystyle \sum_{\textbf{d}d}A\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c})\right)(\tau)=\mathcal{L}_1+\mathcal{L}_2, \end{equation} where \begin{equation}\label{L_1} \mathcal{L}_1=\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}}A\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x=c}}\right)(\tau), \end{equation} and \begin{equation}\label{L_2} \begin{split} \mathcal{L}_2 =\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}_1\neq \textbf{c}_2}C \left( \left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_{\textbf{c}_1})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_1} \right.,\\ \left. \left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_{\textbf{c}_2})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_2} \right) (\tau).
\end{split} \end{equation} We first focus on the term $\mathcal{L}_1$, which can be written as \begin{equation}\label{L_1=T+T_i} \mathcal{L}_1=T+\sum_{i=1}^pT_i, \end{equation} where \begin{equation}\label{T} T=\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}\in S_M}A\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x=c}}\right)(\tau), \end{equation} \begin{equation}\label{T_i} T_i=\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}\in S_{N_i}}A\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x=c}}\right)(\tau), \end{equation} and $S_M$ is the set of all those $\textbf{c}$ for which $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ is a path over $m-k$ vertices. To find $\mathcal{L}_1$, we first start with $T$. Since $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ is a path over $m-k$ vertices for all $\textbf{c}\in S_M$, we have \cite{pater2000} \begin{equation} \begin{split} \displaystyle \sum_{d}A\left(\left(f\!+\!\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}\!+\!dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x=c}}\right)(\tau)\!\!=\!\! \begin{cases} 2^{m-k+1}, & \tau=0,\\ 0, \!\!\!\!\!&\!\!\!\!\! \textnormal{otherwise.} \end{cases} \end{split} \end{equation} Therefore, \begin{equation}\label{Tderi} \begin{split} T&=\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}\in S_M}A\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x=c}}\right)(\tau)\\ &=\begin{cases} 2^{m+1}M, & \tau=0,\\ 0, & \textnormal{otherwise.} \end{cases} \end{split} \end{equation} To find $\mathcal{L}_1$, it remains to find $T_i$ $(i=1,2,\hdots,p)$, where \begin{equation}\nonumber T_i=\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}\in S_{N_i}}A\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x=c}}\right)(\tau). \end{equation} We can express each $T_i$ as \begin{equation}\label{T_ider} \begin{split} &T_i\\&=\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}\in S_{N_i}}A\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}=\textbf{c}}\right)(\tau)\\ &=\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}\in S_{N_i}}\sum_{\beta\in\{0,1\}} A\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}\beta}\right)(\tau)\\ &~~~~+\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}\in S_{N_i}}\sum_{\beta\in\{0,1\}} C\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}\beta},\right. \\&\qquad\qquad\qquad\qquad \left. \left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}(1-\beta)}\right)(\tau). \end{split} \end{equation} Since for all $\textbf{c}\in S_{N_i}$, $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ consists of a path over $m-k-1$ vertices and one isolated vertex labeled $l_i$, $G(f\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}\beta})$ is a path over $m-k-1$ vertices.
Therefore, \begin{equation} \begin{split} &\sum_dA\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}\beta}\right)(\tau)\\&= \begin{cases} 2^{m-k}, & \tau=0,\\ 0, & \textnormal{otherwise.} \end{cases} \end{split} \end{equation} Hence, the first auto-correlation term in (\ref{T_ider}) can be expressed as \begin{equation}\label{T_iauto} \begin{split} &\displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}\in S_{N_i}}\sum_{\beta\in\{0,1\}} A\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}\beta}\right)(\tau)\\&= \begin{cases} 2^{m+1}N_i, & \tau=0,\\ 0, & \textnormal{otherwise.} \end{cases} \end{split} \end{equation} Since for all $\textbf{c}\in S_{N_i}$, $G(f\big\arrowvert_{\textbf{x}=\textbf{c}})$ consists of a path and one isolated vertex $x_{l_i}$, the only terms involving $x_{l_i}$ in $f$ are products of $x_{l_i}$ with the restricted (deleted) variables. Thus the part of $f$ involving $x_{l_i}$ can be expressed as follows: \begin{equation}\label{newdef} \begin{split} \sum_{r=1}^k\sum_{0\leq i_1<i_2<\cdots<i_r<k}\!\!\!\!\!\!\!\!\!\!\!\!\!\! \varrho^{l_i}_{i_1,i_2,\hdots, i_r}x_{j_{i_1}}x_{j_{i_2}}\cdots x_{j_{i_r}}x_{l_i} =L^{l_i}_{\mathbf{x}}x_{l_i}, \end{split} \end{equation} where \begin{equation}\nonumber \begin{split} L^{l_i}_{\mathbf{x}}&=\displaystyle\sum_{r=1}^k\sum_{0\leq i_1<i_2<\cdots<i_r<k}\!\!\!\!\!\!\!\!\!\!\!\!\!\! \varrho^{l_i}_{i_1,i_2,\hdots, i_r}x_{j_{i_1}}x_{j_{i_2}}\cdots x_{j_{i_r}}. \end{split} \end{equation} To simplify the cross-correlation term in (\ref{T_ider}), we have the following equality by \textit{Lemma} \ref{lemmaa} and (\ref{newdef}): \begin{equation}\nonumber \begin{split} &\sum_dC\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}\beta},\right. \\&\qquad\qquad\qquad\qquad \left. \left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}(1-\beta)}\right)(\tau)\\ &=\begin{cases} \omega_q^{(2\beta-1)g_{l_i}}\omega_q^{(2\beta-1)L^{l_i}_{\textbf{c}}}2^{m-k}, & \tau=(2\beta-1)2^{l_i},\\ 0, & \textnormal{otherwise}, \end{cases} \end{split} \end{equation} where $\beta\in \{0,1\}$. Therefore, the cross-correlation term of (\ref{T_ider}) is simplified as \begin{equation}\label{T_icorr} \begin{split} & \displaystyle \sum_{\textbf{d}d}\displaystyle \sum_{\textbf{c}\in S_{N_i}}\sum_{\beta\in\{0,1\}} C\left(\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}\beta},\right. \\&\qquad\qquad\qquad\qquad \left.
\left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_\textbf{c}) \right)\big\arrowvert_{\textbf{x}x_{l_i}=\textbf{c}(1-\beta)}\right)(\tau)\\ &=\begin{cases} \omega_q^{g_{l_i}}2^{m}\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{L^{l_i}_{\textbf{c}}}, & \tau=2^{l_i},\\ \omega_q^{-g_{l_i}}2^{m}\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{-L^{l_i}_{\textbf{c}}}, & \tau=-2^{l_i},\\ 0, & \textnormal{otherwise.} \end{cases} \end{split} \end{equation} From (\ref{T_ider}), (\ref{T_iauto}) and (\ref{T_icorr}), we have \begin{equation}\label{T_iderivation} \begin{split} T_i=\begin{cases} 2^{m+1}N_i,& \tau=0,\\ \omega_q^{g_{l_i}}2^{m}\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{L^{l_i}_{\textbf{c}}}, & \tau=2^{l_i},\\ \omega_q^{-g_{l_i}}2^{m}\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{-L^{l_i}_{\textbf{c}}}, & \tau=-2^{l_i},\\ 0, & \textnormal{otherwise.} \end{cases} \end{split} \end{equation} From (\ref{L_1=T+T_i}), (\ref{Tderi}) and (\ref{T_iderivation}), we have \begin{equation}\label{L_1derivation} \begin{split} \mathcal{L}_1&=T+\sum_{i=1}^pT_i\\ &= \begin{cases} 2^{m+1}\displaystyle\sum_{i=1}^pN_i+2^{m+1}M, & \tau=0,\\ \omega_q^{g_{l_i}}2^{m}\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{L^{l_i}_{\textbf{c}}}, & \tau\!=\! 2^{l_i},i\!\!=\!\!1,2,\hdots,p,\\ \omega_q^{-g_{l_i}}2^{m}\displaystyle\sum_{\textbf{c}\in S_{N_i}}\omega_q^{-L^{l_i}_{\textbf{c}}}, & \tau\!=\!-2^{l_i},i\!\!=\!\!1,2,\hdots,p,\\ 0, & \textnormal{otherwise.} \end{cases} \end{split} \end{equation} To find $\mathcal{L}_2$, we start with \begin{equation}\label{L_2derived} \begin{split} &\displaystyle \sum_{\textbf{d}}\displaystyle C \left( \left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_{\textbf{c}_1})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_1} ,\right.\\&\qquad\qquad\qquad\qquad\left. \left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_{\textbf{c}_2})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_2} \right) (\tau)\qquad\\ &=\displaystyle\sum_{\textbf{d}}(-1)^{\textbf{d}\cdot(\textbf{c}_1+\textbf{c}_2)}C \left( \left(f+\frac{q}{2}(dx_{\textbf{c}_1})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_1} ,\right.\\& \qquad\qquad\qquad\qquad\qquad\qquad\left. \left(f+\frac{q}{2}(dx_{\textbf{c}_2})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_2} \right) (\tau)\\ &= C \left( \left(f+\frac{q}{2}(dx_{\textbf{c}_1})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_1} ,\right.\\&\qquad\qquad\qquad\left. \left(f+\frac{q}{2}(dx_{\textbf{c}_2})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_2} \right) (\tau) \displaystyle\sum_{\textbf{d}}(-1)^{\textbf{d}\cdot(\textbf{c}_1+\textbf{c}_2)}\\ &=0 \quad \forall \tau. \end{split} \end{equation} Therefore, from (\ref{L_2}) and (\ref{L_2derived}), we have \begin{equation}\label{L_2derivation} \begin{split} \mathcal{L}_2& =\displaystyle \sum_{\mathbf{d},d}\displaystyle \sum_{\textbf{c}_1\neq \textbf{c}_2}C \left( \left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_{\textbf{c}_1})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_1} ,\right. \\& \qquad\qquad\qquad\qquad\left. \left(f+\frac{q}{2}(\mathbf{d}\cdot\mathbf{x}+dx_{\textbf{c}_2})\right)\big\arrowvert_{\textbf{x}=\textbf{c}_2} \right) (\tau)\\ &=0 \quad \forall \tau. \end{split} \end{equation} By substituting (\ref{L_1derivation}) and (\ref{L_2derivation}) in (\ref{A=L_1+L_2}), we complete the proof. \end{appendices} \bibliographystyle{IEEEtran}
\section{Introduction} The ability to learn from one or few examples is an important property of human learning that helps us to function effectively in the real world. Children learn new concepts effortlessly by building upon prior knowledge \cite{biederman1987concepts30000,carey1978fastMapping}. In contrast, our most successful deep learning-based approaches to recognition \cite{krizhevsky2012imagenet,he2016deep,hu2018senet} treat each learning problem as a tabula rasa, and as such are extremely data-inefficient compared to humans. This limits their scalability to open-ended learning of the long tail of categories in the real world, and particularly their applicability to numerous real-world problems where categories to recognise are rare (e.g., endangered species), continually emerging (man-made devices), or expensive to annotate (medical images). These observations have motivated a resurgence of interest in few-shot learning in visual recognition \cite{vinyals2016matching,finn2017model,snell2017prototypical,qiao2017few} and beyond \cite{aurelie2017oneshotwv,duan2017one}. In the few-shot learning scenario, contemporary deep networks overfit -- even when exploiting fine-tuning \cite{yosinski2014howTransferable}, data augmentation \cite{krizhevsky2012imagenet}, or regularisation \cite{srivastava2014dropout} techniques. The most effective few-shot methods rely on purpose-built `meta-learning' techniques, where transferrable task-agnostic knowledge is extracted from historical tasks and leveraged to benefit sparse-data learning of specific new target tasks. This task-agnostic knowledge has taken several forms: Fast adaptation methods enable rapid adaptation using sparse data and without overfitting; examples include good initial conditions \cite{finn2017model} and learned optimisers \cite{ravi2016optimization}. Weight synthesis approaches learn a meta-network that inputs the training set and synthesizes weights for a recogniser \cite{bertinetto2016feedForwardOneShot,mishra2018simple}. Deep metric learning approaches provide a robust way to represent \cite{koch2015siamese} and compare \cite{vinyals2016matching,snell2017prototypical} instances, allowing new categories to be recognised with nearest-neighbour style strategies. Existing methods each have different drawbacks, including complexity of the inference mechanism \cite{lake2015ppi}, architectural complexity \cite{lake2015ppi,munkhdalai2017meta}, the need to fine-tune on the target problem \cite{finn2017model,ravi2016optimization}, or reliance on a simple linear comparison function \cite{koch2015siamese,vinyals2016matching,snell2017prototypical}. We build on the `deep metric learning' line of work due to its appealing architectural simplicity and instantaneous training for new categories. These methods perform few-shot recognition by using auxiliary training tasks to learn a deep image embedding such that the embedded data becomes linearly separable \cite{koch2015siamese,vinyals2016matching,snell2017prototypical}. Thus the decision is non-linear in image space, but linear in the embedding space. For learning the target task, few-shot training data is simply memorised. For testing the target task at runtime, query images are matched to training examples by applying the deep embedding and comparison function. Within this paradigm, the recently proposed RelationNet \cite{yang2018learning} achieved state-of-the-art performance by learning a \emph{non-linear comparison function}.
Learning the embedding and non-linear relation module jointly alleviates the reliance on the embedding's ability to generate linearly separable features. We build on this idea of jointly learning an embedding and non-linear comparison function, but take it further with the following insights. Different layers of deep networks represent different types of features at different levels of abstraction \cite{zeiler2014understandingCNN} -- from simple textures to complex parts. A general purpose comparison network should be able to make use of any and all of these cues in making its decisions. Therefore we work with embedding networks composed of a sequence of modules, and pair each embedding module with its own non-linear comparison module, resulting in a column of non-linear relation modules, where prior studies used a single linear \cite{snell2017prototypical} or non-linear \cite{yang2018learning} comparison. To provide the inductive bias that each layer of representation should be potentially discriminative for matching, and to enable better gradient propagation \cite{huang2017densely} to each relation module, we deeply supervise \cite{lee2015deepSupNet} all the relation modules. Finally, since the hierarchy of added relation modules increases the parameter total, we develop a learned-noise stochastic regularizer to reduce overfitting and improve generalisation. Overall our approach can be seen as jointly learning an embedding and a comparison function as the task-agnostic meta-knowledge \cite{koch2015siamese,vinyals2016matching,snell2017prototypical,yang2018learning}, but extending this successful idea to make full use of deep networks by making comparisons with the full feature hierarchy extracted by the embedding network. The resulting framework maintains the architectural simplicity and efficiency of other methods in this line, while providing state of the art performance on both the established \textit{mini}ImageNet{} benchmark and the more challenging \textit{tiered}ImageNet{} few-shot learning benchmark. \section{Related Work} Contemporary approaches to deep-network few-shot learning have exploited the learning-to-learn paradigm. Auxiliary tasks are used to meta-learn some task-agnostic knowledge, before exploiting this to learn the few-sample target problem more effectively. The learning-to-learn idea has a long history \cite{thrun1996lll,fei2006one,lake2015ppi}, but contemporary approaches typically cluster into three categories: fast adaptation, weight synthesis, and metric-learning approaches. \keypoint{Fast Adaptation} These approaches aim to meta-learn an optimisation process that enables base models to be fine-tuned quickly and robustly, so that a base model can be updated for sparse-data target problems without extensive overfitting. Effective ideas include simply meta-learning an effective initial condition \cite{finn2017model,DBLP:journals/corr/abs-1803-02999}, and learning a recurrent neural network optimizer to replace the standard SGD learning approach \cite{ravi2016optimization}. Recent extensions also include learning per-parameter learning rates \cite{li2017meta}, and accelerating fine-tuning by solving some layers in closed form \cite{bertinetto2018closedFormMeta}. Nevertheless, these methods suffer from needing to be fine-tuned for the target problem, often generating costly higher-order gradients during meta-learning, and failing \cite{finn2017model} to scale to deeper network architectures as shown in \cite{mishra2018simple}.
They also often suffer from a fixed parametric architecture: for example, once MAML \cite{finn2017model} is trained for 5-way auxiliary classification problems, it is restricted to 5-way target problems and does not straightforwardly generalise to a different cardinality of classification. \keypoint{Classifier Synthesis} Another line of work focuses on synthesising a classifier based on the provided few-shot training data \cite{gidaris2018dynamic}. An early method in this line learned a transferrable `LearnNet' that generated convolutional weights for the base recognition network given a one-shot training example \cite{bertinetto2016feedForwardOneShot}. However, this was limited to binary classification. Conditional Neural Processes \cite{gidaris2018dynamic} exploited a similar idea, but in a Bayesian framework. SNAIL obtained excellent results by embedding the training set with temporal convolutions and attention \cite{mishra2018simple}. Recently, Qiao \etal proposed a method to predict classification parameters given neuron activations \cite{qiao2017few}. In this case the global parameter prediction network is the task-agnostic knowledge that is transferred from auxiliary categories. Compared to the fast adaptation approaches, these methods generally synthesize their classifier in a single pass, making them faster to train on the target problem. {However, learning to synthesize a full classifier does entail some complexity. This process can overfit and generalize poorly to novel target problems.} \keypoint{Deep Metric Learning} These approaches aim to learn a deep embedding that extracts features that robustly describe instances, allowing them to be classified directly with nearest-neighbour type strategies in the embedding space. The deep embedding forms the task-agnostic knowledge that is transferred from auxiliary to target tasks. Early work simply used Siamese networks \cite{koch2015siamese} to embed images, such that images of the same class are placed near each other. Matching networks \cite{vinyals2016matching} defined a differentiable nearest-neighbour loss based on cosine similarity between the support set and query embedding. \cut{However the computation cost for each gradient update during meta-training increases with the size of the support set.} Prototypical Networks \cite{snell2017prototypical} provide a simpler but more effective variant of this idea where the support set instances for one class are embedded as a single prototype. Their analysis showed that this leads to a linear classifier in the embedding space. The most related method to ours is RelationNet \cite{yang2018learning}, which extended this line of work to use a separate non-linear comparison module instead of relying entirely on the embedding networks to make the data linearly separable \cite{koch2015siamese,snell2017prototypical,vinyals2016matching}. This division of labour between a deep embedding and a deep relation module improved performance in practice \cite{yang2018learning}. Our approach builds on this line of work in general and RelationNet in particular. RelationNet relied on the embedding networks to produce a \emph{single} embedding for the relation module to compare. We argue that a general purpose comparison function should use any or all of the full feature hierarchy \cite{zeiler2014understandingCNN} to make matching decisions -- for example matching based on colors, textures, or parts, which may be represented at different layers in an embedding network.
To this end we modularise the embedding networks, and pair every embedding module with its own relation module. \keypoint{Use of Feature Hierarchies} The general strategy of simultaneously exploiting multiple layers of a feature hierarchy has been exploited in conventional many-shot classification networks \cite{huang2017densely,srivastava2015highwayNet}, instance recognition \cite{chang2018mlfn}, and semantic segmentation networks \cite{hariharan2015hypercolumn}. To our knowledge, however, it has not been exploited in the context of deep metric learning, where the conventional pipeline is to extract a complete feature \cite{ge2018deepMetric,hu2014deepMetric}. Importantly, in contrast to prior approaches' single `shortcut' connection of deeper features to a classifier \cite{hariharan2015hypercolumn,chang2018mlfn}, we uniquely learn a hierarchy of relation modules: one non-linear comparison function for each block of the embedding. Our approach is also reminiscent of classic techniques such as spatial pyramids \cite{lazebnik2006pyramid} (since each module in the hierarchy operates at different spatial resolutions) and multi-kernel learning \cite{vedaldi2009mklObjDet} (since we learn multiple relation modules for each feature in the hierarchy). \keypoint{Noise and Regularisation} For best performance, we would like to fully exploit a state of the art embedding module architecture (we use SENet \cite{hu2018senet}), and also benefit from the array of comparison modules mentioned above. However, the parameters introduced by such a rich architecture and the multiple comparison modules mentioned above introduce additional overfitting risk. We therefore develop a novel regularisation technique by adding learned Gaussian noise at each network module. Rather than generating deterministic features at a module output, we generate means and variances \cut{(analogous to the output of a density network, or generator of a VAE)} which are sampled in the forward pass, with backpropagation relying on the reparameterisation trick. Unlike density networks \cite{bishop1994mdn}, where such distributions are only generated at the output layer, or VAEs \cite{kingma2014variationalAutoEncoder}, where they are generated only once by the generator, we generate multiple such stochastic features, one at each embedding module's output. This turns out to be an effective strategy for avoiding overfitting. \cut{\keypoint{Embedding and Metric based Approaches} The former approaches based on deep learning with great complexity are always ill-suited to one-shot learning problem. Embedding and metric based approaches takes query and sample images from the target problem as embedding, then produce a set of projection function as metric, to realize feed forward classification. As so far, the best performing methods for few-shot learning have been mainly metric learning methods. Siamese neural network \cite{koch2015siamese} employs a convolutional network to rank similarity between inputs, where items in the same class are close while items in different classes are far away, according to some distance metric. Once the model is tuned, we can then capitalize on powerful discriminative features to generalize the prediction to entirely new classes from unknown distributions. Prototypical Networks \cite{snell2017prototypical} learn a metric space in which classification can be performed by computing distances to prototype representations of each class, which is represented as the mean of its examples in a representation space learned by a neural network.
Matching network \cite{vinyals2016matching} refines the above idea by defining a differentiable nearest neighbor loss involving the cosine similarities of embeddings produced by a convolutional network. This approach has the limitation that the computation cost for each gradient update increases corresponding to the support set size, and fail to fine-grained problems. Inspired by relation reasoning \cite{santoro2017simple} on object detection, which is another research target, relation network \cite{yang2018learning} is proposed as a two-branch network comparing query images and few-shot labeled support images with a learnable rather than fixed metric. This model is simple (can be easily trained end-to-end from scratch without RNNs) and faster (without fine-tuning).} \cut{\keypoint{RNN memory based approaches} Thus there is another kind of approaches based on Recurrent Neural Network (RNN), which iteratively accumulate information over an example from a given problem by hidden activations and external memories, to learn a meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. MANN \cite{santoro2016meta} has an architecture with augmented memory capacities. Obviating the drawback of former models, MANN assimilates new data quickly, to encode and retrieve new information with an external memory focusing on memory content. Meta Net \cite{munkhdalai2017meta} explores a more sophisticated weight update strategy that leverage the use of loss gradients as meta information and support continual learning on few shot classification. These algorithms have high requirements for reliable, and potential long-term, historical information of relevance without forgetting in the memory block. Instead of training the complex RNN with the issue ensuring the adequacy of the memory, our learning to learn DCN{} involve simple and fast feed forward CNNs.} \section{Methodology} \subsection{Problem Definition} We consider a $K$-shot $C$-way classification problem for few shot learning. There are some labelled source tasks with sufficient data, denoted meta-train $\mathcal{D}_{\text{m-train}}$, and we ultimately want to solve a new set of target tasks denoted meta-test $\mathcal{D}_{\text{m-test}}$ (Fig.~\ref{fig:meta}), for which the label space is disjoint. Within meta-train and meta-test, we denote each task as being composed of a support set of training examples, and a query set of testing examples. The meta-test tasks are assumed to be few-shot, so $\mathcal{D}_{\text{m-test}}$ contains a support set with $C$ categories and $K$ examples each. We want to learn a model on meta-train that can generalise out of the box, without fine-tuning, to learning the new categories in meta-test. \keypoint{Episodic Training} We adopt an episodic training paradigm for few-shot meta-learning. During meta-training, an episode is formed as follows: (i) Randomly select $C$ classes from $\mathcal{D}_{\text{m-train}}$, (ii) For each class, sample $K$ images, which serve as \emph{support set} $ {\mathcal{D}}_{\text{m-train}}^{\text{S}} = \left\{ (x_{i}, y_{i})\right\}_{i=1}^m $, where $m=K*C$, (iii) For the same $C$ classes, sample another set serving as the \emph{query set} $ {\mathcal{D}}_{\text{m-train}}^{\text{Q}} = \left\{ (\tilde{x}_{j}, \tilde{y}_{j})\right\}_{j=1}^n$, where $n=K'*C$ and $\mathcal{D}_{\text{m-train}}^{\text{S}} \cap \mathcal{D}_{\text{m-train}}^{\text{Q}} = \emptyset$. 
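For concreteness, the episode construction above can be sketched as follows. This is a minimal illustration with our own names (\texttt{dataset} maps each class label to its list of images), not the authors' code.
\begin{verbatim}
import random

def sample_episode(dataset, C=5, K=1, K_query=5):
    """C-way K-shot episode with disjoint support and query sets."""
    classes = random.sample(sorted(dataset), C)
    support, query = [], []
    for label, cls in enumerate(classes):
        imgs = random.sample(dataset[cls], K + K_query)
        support += [(img, label) for img in imgs[:K]]  # m = K*C items
        query += [(img, label) for img in imgs[K:]]    # n = K'*C items
    return support, query
\end{verbatim}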
This support/query distinction mimics the support/testing split that will be encountered at meta-test time. Our few-shot DCN{} will be trained for instance comparison using episodes constructed in this manner. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{split.pdf} \caption{\small Few-shot learning: Problem Setup.} \label{fig:meta} \end{figure} \subsection{Model} \begin{figure*}[t] \centering \includegraphics[width=0.92\textwidth]{dcn.pdf} \caption{\small Deep Comparison Network\xspace{} architecture. There are 4 embedding modules for each embedding branch, and a set of 4 corresponding relation modules. } \label{fig:network} \end{figure*} \keypoint{Overview} \modelname (\modelnameshort)\xspace is composed of two module types: \emph{embedding} and \emph{relation} modules, as shown in Fig.~\ref{fig:network}. The detailed architecture will be given in Section~\ref{sec:arch}. A pair of images $x_{i}$ and $x_{j}$, from the support and query set respectively, is fed to the embedding modules. The $v$th-level embedding modules produce feature maps $f_{\theta}^{v}(x_{i})$ and $f_{\theta}^{v}(x_{j})$, which are concatenated $[f_{\theta}^{v}(x_{i}), f_{\theta}^{v}(x_{j})]$, and then fed into the corresponding, i.e., $v$th-level, relation module. For the pair of $x_i$ and $x_j$, at level $v$, the relation module outputs a similarity feature map $g_{\phi}^{v}$. Each relation module also inputs the similarity feature map of the previous relation module in the hierarchy, \begin{equation} \begin{aligned} g_{\phi}^{v}= g([f_{\theta}^{v}(x_{i}),f_{\theta}^{v}(x_{j})], g_{\phi}^{v-1}). \end{aligned} \label{relation_score} \end{equation} The first relation module is special as it has no predecessor to input, and we cannot use zero-padding because $0$ has a specific meaning in our context; thus we set $g_{\phi}^{1}=g([f_{\theta}^{1}(x_{i}),f_{\theta}^{1}(x_{j})])$. Simultaneously, after an average pooling and fully connected layer denoted $q$, each relation module outputs a real-valued scalar in the range $[0,1]$, representing the $v$th-level similarity/relation score $r_{i,j}^v$ of the two images, \begin{equation}\begin{aligned} r_{i,j}^v = q(g_{\phi}^{v}). \end{aligned}\label{eq:modulescore}\end{equation} \keypoint{K-Shot} For $K$-shot with $K>1$, the embedding module outputs the average pooling of features, along the sample axis, of all samples from the same class to produce \emph{one} feature map. Thus, the number of outputs for the $v$th-level relation module is $C$, regardless of the value of $K$. \keypoint{Objective Function} We train the Siamese embedding networks as a conventional multi-class classifier for the classes in $\mathcal{D}_{\text{m-train}}$ using cross entropy (CE) loss. After training, the embedding module parameter $\theta$ is fixed. Binary cross entropy (BCE) loss is then used to train the column of relation modules, with one loss applied to each of the four modules (Fig.~\ref{fig:network}). Since we have multiple relation modules, we can assign different weights $[w_1,w_2,\dots,w_V]$ to the different modules, \begin{equation} \phi \leftarrow \underset{\phi}{\operatorname{argmin}}~ \sum_{v=1}^V {w_{v}\operatorname{BCE}(r_{i,j}^v,\mathbf{1}(y_{i}=y_{j}))}.
\label{bceloss} \end{equation} \keypoint{Testing Strategy} To evaluate our learned model on $C$-way-$K$-shot learning, we calculate the final relation score $r_{c}$ of query images to different classes, using the same weights $[w_1,w_2,\dots,w_V]$ as in the weighted-sum loss, as shown in Eq.~\ref{eq:score}. The class with the highest relation score $r_{c}$ is the final predicted classification. \begin{equation} r_{c}=\sum_{v=1}^V w_{v}\cdot{r_{i,j}^v}. \label{eq:score} \end{equation} \subsection{Network Architecture} \label{sec:arch} The Deep Comparison Network\xspace architecture (Fig.~\ref{fig:network}) uses $4$ embedding modules, each paired with a relation module. \keypoint{Embedding Subnetwork} As shown in Fig.~\ref{fig:network}, first we use a $7\times 7$ convolution followed by a $3\times 3$ max-pooling for size reduction. Then, we have $4$ embedding modules, each composed of a number of SENet~\cite{hu2018senet} blocks. Finally, an avg-pooling and a fully-connected layer are used to produce $C'$ logit values, corresponding to the $C'$ classes in $\mathcal{D}_{\text{m-train}}$. More specifically, the embedding modules $[1,2,3,4]$ have $[3,4,6,3]$ SENet blocks respectively, as per \cite{hu2018senet}. Empirically, we found that SENet blocks achieved the best performance compared to conventional convolutional blocks, WRN blocks \cite{zagoruyko2016wide}, and ResNet blocks \cite{he2016deep}. We use exactly the same configuration of SENet block as proposed in \cite{hu2018senet}, e.g., reduction ratio $r=16$ as suggested. \cut{ \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{dcn.pdf} \caption{\small Embedding Module Architecture} \label{fig:embedding} \end{figure}} \begin{table*}[t] \begin{center} \resizebox{0.8\textwidth}{!}{ \begin{tabular}{c|c|c|c|c} \toprule Output size & \textbf{Embedding} & \textbf{Embedding}+ \textit{noise} & Output size &\textbf{Relation}\\ \midrule $112\times112$ & \multicolumn{2}{c|}{conv, $7\times7$, 64, stride 2, padding 3} \\ \cline{1-3} $56\times56$ & \multicolumn{2}{c|}{Maxpooling $3\times3$, stride 2, padding 1} \\ \hline \multirow{4}{*}{$56\times56$} & \multirow{4}{*}{${\begin{bmatrix}conv, 3\times3,64 \\ conv, 3\times3,64\\fc,[4,64] \end{bmatrix}}\times3$} & ${\begin{bmatrix}conv, 3\times3,64 \\ conv, 3\times3,64\\fc,[4,64] \end{bmatrix}}\times2$ & \multirow{4}{*}{$28\times28$} & \multirow{4}{*}{${\begin{bmatrix}conv, 3\times3,128 \\ conv, 3\times3,128\\fc,[8,128] \end{bmatrix}}\times2$}\\ & & ${\begin{bmatrix}conv, 3\times3,64 \\ conv, 3\times3,65\\fc,[4,65] \end{bmatrix}}\times1$ &&\\ \hline \multirow{4}{*}{$28\times28$} & \multirow{4}{*}{${\begin{bmatrix}conv, 3\times3,128 \\ conv, 3\times3,128\\fc,[8,128] \end{bmatrix}}\times4$} & ${\begin{bmatrix}conv, 3\times3,128 \\ conv, 3\times3,128\\fc,[8,128] \end{bmatrix}}\times3$ & \multirow{4}{*}{$14\times14$} & ${\begin{bmatrix}conv, 3\times3,384 \\ conv, 3\times3,256\\fc,[16,256] \end{bmatrix}}\times1$ \\ & & ${\begin{bmatrix}conv, 3\times3,128 \\ conv, 3\times3,129\\fc,[8,129] \end{bmatrix}}\times1$ & &${\begin{bmatrix}conv, 3\times3,256 \\ conv,3\times3,256\\fc,[16,256] \end{bmatrix}}\times1$\\ \hline \multirow{4}{*}{$14\times14$} & \multirow{4}{*}{${\begin{bmatrix}conv, 3\times3,256 \\ conv, 3\times3,256\\fc,[16,256] \end{bmatrix}}\times6$} & ${\begin{bmatrix}conv, 3\times3,256 \\ conv, 3\times3,256\\fc,[16,256] \end{bmatrix}}\times5$ &\multirow{4}{*}{$7\times7$} &${\begin{bmatrix}conv, 3\times3,768 \\ conv, 3\times3,512\\fc,[32,512] \end{bmatrix}}\times1$\\ & &
${\begin{bmatrix}conv, 3\times3,256 \\ conv, 3\times3,257\\fc,[16,257] \end{bmatrix}}\times1$ & & ${\begin{bmatrix}conv, 3\times3,512 \\ conv, 3\times3,512\\fc,[32,512] \end{bmatrix}}\times1$ \\ \hline \multirow{4}{*}{$7\times7$} & \multirow{4}{*}{${\begin{bmatrix}conv, 3\times3,512 \\ conv, 3\times3,512 \\fc,[32,512] \end{bmatrix}}{~}\times3$} & ${\begin{bmatrix}conv, 3\times3,512 \\ conv, 3\times3,512 \\fc,[32,512] \end{bmatrix}}\times2$ & \multirow{4}{*}{$7\times7$} & ${\begin{bmatrix}conv, 3\times3,1536 \\ conv, 3\times3,512\\fc,[32,512] \end{bmatrix}}\times1$ \\ & & ${\begin{bmatrix}conv, 3\times3,512 \\ conv, 3\times3,513 \\fc,[32,513] \end{bmatrix}}\times1$ & & ${\begin{bmatrix}conv, 3\times3,512 \\ conv, 3\times3,512\\fc,[32,512] \end{bmatrix}}\times1$\\ \hline $1\times1$ & \multicolumn{2}{c|}{Global average pooling, fc} \\ \bottomrule \end{tabular}% } \end{center} \caption{\small Parameters of each embedding (conventional and noise-generating) and relation module. Relation modules concatenate the final feature maps of both corresponding embedding modules, and the previous relation module. The output size of each embedding module matches the input size of the corresponding relation module. The brackets of `\emph{fc}' indicate the dimension of FC layers in an SE block \cite{hu2018senet}.} \label{tab:detail} \end{table*} \keypoint{Parameterized Gaussian Noise for Stochastic Feature Regularisation} Conventionally, an embedding module outputs deterministic features. As a regularisation strategy, we treat each feature output as a random variable drawn from a parameterized Gaussian distribution, for which the embedding module outputs the mean and variance. This design is illustrated in Fig.~\ref{fig:noisev1}. To realise this idea, each embedding module's output is split into two parts: the mean feature $f_{\theta,{\mu}}$ sized $[b,c,h,w]$ ([batch\_size, channel, height, width]), and standard deviation $f_{\theta,{\sigma}}$ sized $[b,1,h,w]$. Note that we assume that every channel shares the same standard deviation (std). This means that, in addition to the penultimate-to-output layer (now a penultimate-to-mean layer), we have a new penultimate-to-std layer (with its own parameters). The motivation behind sharing the std across channels is to reduce the number of parameters in that newly introduced layer. We also control the amount of noise added by constraining the standard deviation to the range $[0,1]$ by applying a sigmoid activation. To generate the final output, we draw one (or more) random sample from the Gaussian distribution. However, the conventional sampling process is not differentiable, thus we use the reparameterization trick, \begin{equation} f_{\theta,{\mu}} + \varepsilon \odot f_{\theta,{\sigma}}, \label{noise} \end{equation} \noindent where $\varepsilon$ consists of $b \times 1 \times h \times w$ random samples drawn from a standard Gaussian and reshaped into $[b,1,h,w]$; $\odot$ denotes element-wise product; and we perform broadcasting across the channels using the shared std. \begin{figure*}[h] \centering \includegraphics[width=0.95\textwidth]{noise_embedding.pdf} \caption{\small Learned noise regularizer: Each embedding module defines a Gaussian distribution from which its output feature is sampled.} \label{fig:noisev1} \end{figure*}
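To make the regulariser concrete, the following is a minimal sketch of a noise-generating output layer implementing Eq.~\ref{noise}; the module and argument names are ours, and returning the mean alone at test time is our assumption rather than a stated detail.
\begin{verbatim}
import torch
import torch.nn as nn

class NoisyOutput(nn.Module):
    """Emit a mean map [b,c,h,w] and one shared-std map [b,1,h,w]."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.mean = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.std = nn.Conv2d(in_ch, 1, 3, padding=1)

    def forward(self, x):
        mu = self.mean(x)
        sigma = torch.sigmoid(self.std(x))   # constrain std to [0,1]
        if not self.training:
            return mu                        # our assumption at test time
        eps = torch.randn_like(sigma)        # standard Gaussian noise
        return mu + eps * sigma              # std broadcast over channels
\end{verbatim}
Similarly, the pairing of embedding and relation modules, the feeding of $g_{\phi}^{v-1}$ into the next relation module (Eq.~\ref{relation_score}) and the weighted score combination (Eq.~\ref{eq:score}) can be sketched as below, continuing the snippet above. This is a simplified stand-in (plain convolutions instead of SENet blocks, toy channel sizes), not the exact architecture of Tab.~\ref{tab:detail}.
\begin{verbatim}
class RelationModule(nn.Module):
    """Halves spatial size so g^v matches the next embedding stage."""
    def __init__(self, feat_ch, prev_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * feat_ch + prev_ch, feat_ch, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1),
            nn.ReLU())
        self.score = nn.Linear(feat_ch, 1)

    def forward(self, fi, fj, g_prev=None):
        z = [fi, fj] if g_prev is None else [fi, fj, g_prev]
        g = self.net(torch.cat(z, dim=1))                  # map g^v
        r = torch.sigmoid(self.score(g.mean(dim=(2, 3))))  # r^v in [0,1]
        return g, r

def dcn_score(embeds, relations, xi, xj, w=(0.3, 0.4, 0.5, 1.0)):
    g, r_c = None, 0.0
    for emb, rel, w_v in zip(embeds, relations, w):
        xi, xj = emb(xi), emb(xj)     # shared-weight Siamese branches
        g, r = rel(xi, xj, g)
        r_c = r_c + w_v * r           # weighted sum of module scores
    return r_c

chs, c_in = [6, 12, 24, 48], 3        # toy channel sizes, ours
embeds = nn.ModuleList(nn.Sequential(
    nn.Conv2d(cp, c, 3, stride=2, padding=1), nn.ReLU())
    for cp, c in zip([c_in] + chs[:-1], chs))
relations = nn.ModuleList(RelationModule(c, 0 if v == 0 else chs[v - 1])
                          for v, c in enumerate(chs))
xi, xj = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
print(dcn_score(embeds, relations, xi, xj).shape)          # [2, 1]
\end{verbatim}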
\keypoint{Relation Subnetwork} As illustrated in Fig.~\ref{fig:network}, the relation column consists of $4$ serial modules, each of which has $2$ SENet blocks. They each finish with a pooling and a fully-connected layer to produce the relation score. The SENet block architecture is the same as the one used in the embedding modules. {The detailed relation module architectures are shown in Tab.~\ref{tab:detail}}. In Eq.~\ref{bceloss}, we have the weighting terms for each sub-module's loss, and we fix them to be $[0.3,0.4,0.5,1.0]$. This increasing pattern reflects the expectation that later layers are generally more discriminative, and the aggregation effect of the column structure passing feature maps between relation modules (Eq.~\ref{relation_score}). \cut{ \begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{relation_module.png} \caption{\small Relation Module Architecture} \label{fig:relation} \end{figure}} \section{Experiments} Our approach is evaluated on few-shot classification using two datasets: \textit{mini}Imagenet and \textit{tiered}Imagenet. \keypoint{Baselines} We compare against several state of the art baselines for few-shot learning including Matching Nets \cite{vinyals2016matching}, Meta Nets \cite{munkhdalai2017meta}, Meta LSTM \cite{ravi2016optimization}, MAML \cite{finn2017model}, Prototypical Nets \cite{snell2017prototypical}, Graph Neural Nets \cite{garcia2017few}, Relation Net \cite{yang2018learning}, Meta-SGD \cite{li2017meta}, SNAIL \cite{mishra2018simple}, DynamicFSL \cite{gidaris2018dynamic}, AdaResNet \cite{munkhdalai2018rapid}, Meta-SSL \cite{ren2018meta}, PPA \cite{qiao2017few} and TPN \cite{liu2018transductive}. \keypoint{Data Augmentation} We follow the standard data augmentation \cite{szegedy2015going} with random-size cropping to 224*224 pixels and random horizontal flipping. Input images are normalized through mean channel subtraction. \keypoint{Pre-train and Retrain} We first pre-train the supervised feature embedding branch using the training set and then fix the parameters of the embedding sub-network, before meta-training the relation sub-network. We then use the validation set to estimate the right number of early-stopping episodes for the relation training. Finally, we use all 80 train+validation classes (as per common practice \cite{qiao2017few}) to retrain both the embedding and relation sub-networks. \subsection{\textit{mini}Imagenet} \keypoint{Dataset} \textit{mini}ImageNet{} consists of 100 ImageNet{} classes, each with 600 images (60,000 colour images in total) \cite{vinyals2016matching}. Following the split in \cite{ravi2016optimization}, the dataset is divided into a 64-class training set, a 16-class validation set and a 20-class testing set. \keypoint{Settings} We evaluate both \textit{5-way-1-shot} and \textit{5-way-5-shot}. In both settings, each episode contains 5 query images for each of the $C$ sampled classes. Thus, there are 5*5+1*5=30 images per training episode/mini-batch for 5-way-1-shot experiments, and 5*5+5*5=50 images for 5-way-5-shot experiments. In terms of 5-way-5-shot learning, recall that we calculate the class-wise average feature across the support set. Thus we always get 5*5*5=125 feature pairs as input for the relation modules (25 query images, each compared with the 5 class prototypes). For embedding and relation module training, optimization uses SGD with momentum 0.9. The initial learning rate is 0.1, decreased by a factor of 5 every 60 epochs, and we train for 200 epochs. All models are trained from scratch, using the robust ReLU weight initialisation \cite{he2015delving}. For testing, we resize shorter image edges to 256, and evaluate on 224*224 central pixels cropped from each image.
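This optimisation schedule maps directly onto a standard recipe; the sketch below is purely illustrative (the model and the training loop body are stand-ins).
\begin{verbatim}
import torch

model = torch.nn.Linear(8, 2)    # stand-in for the DCN parameters
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=60, gamma=1 / 5)
for epoch in range(200):
    # ... one epoch of episodic training would run here ...
    sched.step()
\end{verbatim}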
\keypoint{Results} Following the settings of \cite{snell2017prototypical}, when evaluating testing performance, we batch 15 query images per class in a testing episode and the accuracy is calculated by averaging over 600 randomly generated testing tasks (for both 1-shot and 5-shot scenarios). From Tab.~\ref{tab:mini}, we can see that DCN{} achieves the new state-of-the-art performance on the 5-way-1-shot and 5-shot tasks with comfortable margins. Specifically, the accuracy of testing on 5-way \textit{mini}ImageNet{} reaches 62.88\% and 75.84\% for 1-shot and 5-shot respectively. This improves by 3.3\% and 2.1\% respectively on the prior state-of-the-art method PPA \cite{qiao2017few}. Note that Tab.~\ref{tab:mini} divides competitors into those that use relatively shallow image embeddings (top) and those that use state-of-the-art deep architectures (bottom). \keypoint{Cross-way Testing Results} Standard procedure in few-shot evaluation is to train models for the desired number of categories to discriminate at testing time. However, unlike alternatives such as MAML \cite{finn2017model}, our method is not required to match label cardinality between training and testing. We therefore evaluate our 5-way trained model on 20-way testing in Tab.~\ref{tab:ablation2}. We can see that our model clearly outperforms the alternatives despite DCN{} being trained for 5-way, and the others specifically for 20-way. This demonstrates another important aspect of DCN{}'s flexibility and general applicability. \setlength{\tabcolsep}{4.8pt} \begin{table}[t] \centering \footnotesize \begin{tabular}{@{} lcc @{}} \toprule \multirow{2}{*}{\bf Model} &\multicolumn{2}{c}{\multirow{2}{*}{\bf \textit{mini}Imagenet 5-way Acc.}}\\ & \multicolumn{2}{c}{} \\ & 1-shot & 5-shot \\ \midrule \textbf{\textsc{Matching} \textsc{Nets}} \cite{vinyals2016matching} &43.56 $\pm$ 0.84\% &55.31 $\pm$ 0.73\% \\ \textbf{\textsc{Meta} \textsc{LSTM}} \cite{ravi2016optimization} &43.44 $\pm$ 0.77\% & 60.60 $\pm$ 0.71\% \\ \textbf{\textsc{MAML}} \cite{finn2017model}& 48.70 $\pm$ 1.84\% & 63.11 $\pm$ 0.92\% \\ \textbf{\textsc{Meta} \textsc{Nets}} \cite{munkhdalai2017meta} &49.21 $\pm$ 0.96\% & - \\ \textbf{\textsc{Prototypical} \textsc{Nets}} \cite{snell2017prototypical} &49.42 $\pm$ 0.78\% &68.20 $\pm$ 0.66\% \\ \textbf{\textsc{GNN}} \cite{garcia2017few} & 50.33 $\pm$ 0.36\% & 66.41 $\pm$ 0.63\% \\ \textbf{\textsc{Meta SSL}} \cite{ren2018meta} & 50.41 $\pm$ 0.31\% & 64.39 $\pm$ 0.24\% \\ \textbf{\textsc{Relation} \textsc{Net}} \cite{yang2018learning} & 50.44 $\pm$ 0.82\% &65.32 $\pm$ 0.70\% \\ \textbf{\textsc{Meta SGD}} \cite{li2017meta} & 50.47 $\pm$ 1.87\% & 64.03 $\pm$ 0.94\% \\ \textbf{\textsc{TPN}} \cite{liu2018transductive} & 52.78 $\pm$ 0.27\% & 66.59 $\pm$ 0.28\% \\ \midrule \textbf{\textsc{SNAIL}} \cite{mishra2018simple} &55.71 $\pm$ 0.99\% & 68.88 $\pm$ 0.92\% \\ \textbf{\textsc{Dynamic FSL}} \cite{gidaris2018dynamic} & 56.20 $\pm$ 0.86\% & 73.00 $\pm$ 0.64\% \\ \textbf{\textsc{adaResNet}} \cite{munkhdalai2018rapid} & 57.10 $\pm$ 0.70\% & 70.04 $\pm$ 0.63\% \\ \textbf{\textsc{PPA}} \cite{qiao2017few} & 59.60 $\pm$ 0.41\% & 73.74 $\pm$ 0.19\% \\ \midrule \textbf{\textsc{Deep Comparison Network\xspace{}}} & \textbf{62.88 $\pm$ 0.83\%} & \textbf{75.84 $\pm$ 0.65\%} \\ \bottomrule \end{tabular}% \caption{\small Few-shot classification results on \textit{mini}ImageNet{}. All accuracies are averaged over 600 test episodes and are reported with 95\% confidence intervals.
For each task, the best-performing method is bold, along with any others whose confidence intervals overlap. `-': not reported. } \label{tab:mini} \end{table} \setlength{\tabcolsep}{4.8pt} \begin{table}[t] \centering \footnotesize \begin{tabular}{@{} lcc @{}} \toprule \multirow{2}{*}{\bf Model} &\multicolumn{2}{c}{\multirow{2}{*}{\bf \textit{mini}Imagenet 20-way Acc.}}\\ & \multicolumn{2}{c}{} \\ & 1-shot & 5-shot \\ \midrule \textbf{\textsc{Matching Nets}}, (from \cite{li2017meta}) & 17.31 $\pm$ 0.22\% & 22.69 $\pm$ 0.86\% \\ \textbf{\textsc{Meta LSTM}}, (from \cite{li2017meta}) & 16.70 $\pm$ 0.23\% & 26.06 $\pm$ 0.25\% \\ \textbf{\textsc{MAML}}, (from \cite{li2017meta}) & 16.49 $\pm$ 0.58\% & 19.29 $\pm$ 0.29\% \\ \textbf{\textsc{Meta SGD}} \cite{li2017meta} & 17.56 $\pm$ 0.64\% & 28.92 $\pm$ 0.35\% \\ \midrule \textbf{\textsc{Deep Comparison Network\xspace{}}} & \textbf{32.07 $\pm$ 0.29\%} & \textbf{47.31 $\pm$ 0.25\%} \\ \bottomrule \end{tabular}% \caption{\small 20-way classification accuracy on \textit{mini}ImageNet{}. Our DCN{} is trained for 5-way and transferred to 20-way. Matching Nets, Meta LSTM, MAML, and Meta SGD results are from \cite{li2017meta}.} \label{tab:ablation2} \end{table} \subsection{\textit{tiered}Imagenet} \keypoint{Dataset} \textit{tiered}ImageNet{} is a newer and larger few-shot recognition benchmark than \textit{mini}ImageNet{}. It contains 608 classes (779,165 images), and the training/validation/testing categories are organized so as to ensure a larger semantic gap than those in \textit{mini}ImageNet{}, thus providing a more rigorous test of generalisation. This is achieved by dividing according to 34 higher-level nodes in the ImageNet hierarchy \cite{ren2018meta}. These 34 broad categories are grouped into 20 for training (351 classes), 6 for validation (97 classes) and 8 for testing (160 classes). \keypoint{Settings} Similar to the setting of \textit{mini}ImageNet{}, we use 5 query images per training episode. In terms of training the embedding modules, due to the larger data size, we use a larger batch size of 512, an initial learning rate of 0.3 and 100 training epochs. Other settings remain the same. \keypoint{Results} Following the former experiments, we batch 15 query images per class in each testing episode for both 1-shot and 5-shot scenarios, and the accuracy is calculated by averaging over 600 randomly generated testing tasks. From Tab.~\ref{tab:tiered}, DCN{} achieves the new state-of-the-art performance on the 5-way-1-shot and 5-shot tasks with comfortable margins. Specifically, the testing accuracy on \textit{tiered}ImageNet{} reaches 68.83\% and 79.62\% for 5-way-1-shot and 5-way-5-shot respectively. This is a clear improvement of 8.92\% and 6.32\% over the prior state-of-the-art TPN \cite{liu2018transductive}. We note also that state-of-the-art competitors Meta-SSL \cite{ren2018meta} and TPN \cite{liu2018transductive} are semi-supervised methods that use more information than ours, and have additional requirements such as access to the test set for transduction.
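For reference, the evaluation protocol used throughout (per-episode accuracy averaged over 600 random test episodes, reported with a 95\% confidence interval) can be summarised as follows; \texttt{run\_episode} is a stand-in returning the accuracy of one sampled episode.
\begin{verbatim}
import numpy as np

def evaluate(run_episode, n_episodes=600):
    acc = np.array([run_episode() for _ in range(n_episodes)])
    ci95 = 1.96 * acc.std(ddof=1) / np.sqrt(n_episodes)
    return acc.mean(), ci95
\end{verbatim}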
\setlength{\tabcolsep}{4.8pt} \begin{table}[t] \centering \footnotesize \begin{tabular}{@{} lcc @{}} \toprule \multirow{2}{*}{\bf Model} &\multicolumn{2}{c}{\multirow{2}{*}{\bf \textit{tiered}ImageNet{} 5-way Acc.}}\\ & \multicolumn{2}{c}{} \\ & 1-shot & 5-shot \\ \midrule \textbf{\textsc{Reptile}}, (from \cite{liu2018transductive}) &48.97\% &66.47\% \\ \textbf{\textsc{MAML}}, (from \cite{liu2018transductive}) & 51.67\% & 70.30\% \\ \textbf{\textsc{Meta SSL}}$^\dagger$ \cite{ren2018meta} & 52.39 $\pm$ 0.44\% & 70.25 $\pm$ 0.31\% \\ \textbf{\textsc{Prototypical Net}}, (from \cite{liu2018transductive}) &53.31\% &72.69\% \\ \textbf{\textsc{Relation} \textsc{Net}}, (from \cite{liu2018transductive}) & 54.48\% &71.31\% \\ \textbf{\textsc{TPN}}$^\dagger$ \cite{liu2018transductive} & 59.91\% & 73.30\% \\ \midrule \textbf{\textsc{Deep Comparison Network\xspace{}}} & \textbf{68.83 $\pm$ 0.94\%} & \textbf{79.62 $\pm$ 0.77\%} \\ \bottomrule \end{tabular}% \caption{\small \small Few-shot classification results on \textit{tiered}Imagenet. All accuracies are averaged over 600 test episodes and reported with 95\% confidence intervals. For each task, the best-performing method is bold. $^\dagger$ Indicates methods that make use of additional unlabeled data for semi-supervised learning or transductive inference. } \label{tab:tiered} \end{table} \cut{\subsection{CUB-200} \keypoint{Dataset} Caltech-UCSD Birds-200-2011 (CUB-200) is a fine-grained dataset of bird categories, which are hard to differentiate even for human beings. The CUB200 dataset \cite{wah2011caltech} contains 11,788 color images of 200 bird species. Following \cite{zhou2018deep}, the dataset is split as 140 classes for meta-training, 20 classes for meta-validation, and the remaining 40 classes for meta-test. \keypoint{Training} Similar to the former experiment settings, we use 5 query images in each training episode. We adopt the same settings as \textit{mini}ImageNet{}. \keypoint{Results} Following the former experiments, we batch 15 query images per class in each testing episode for both 1-shot and 5-shot scenarios, and the accuracy is calculated by averaging over 600 randomly generated testing tasks. From Tab.~\ref{tab:cub}, DCN{} achieves the new state-of-the-art performance on the 5-way-1-shot and 5-shot tasks with comfortable margins. Specifically speaking, the accuracy of testing on CUB-200 reaches 63.94\% and 74.28\% for 1-shot and 5-shot respectively. \doublecheck{add Relation Net+ CUB here?}} \subsection{Further Analysis} \subsubsection{Ablation Study} Our experiments demonstrate that our approach outperforms prior state-of-the-art by a large margin. To investigate the contribution of the different components of our method, we conduct a series of ablation studies reported in Tab.~\ref{tab:ablation1}. The conclusions are as follows: \textbf{Gaussian Noise:} Comparing DCN{} and DCN{}-No Noise, we can see that this brings just over 2\% improvement. \textbf{Retraining:} The impact of re-training on the combined training and validation set is visible by comparing the entries with DCN{}-No Retrain. Retraining provides a similar 2\% margin, and this is complementary to the noise. \textbf{Deep Supervision:} From the DCN{}-No Deep Sup. result, we can see that deep supervision is important to gain full benefit from the column of relation modules. \textbf{Architecture:} From the results we can see that our DCN{} benefits from deeper embedding architectures. 
It improves over the simple convolutional blocks used by early studies \cite{finn2017model,snell2017prototypical,yang2018learning} when equipped with ResNet \cite{he2016deep}, Wide ResNet \cite{zagoruyko2016wide} or SENet \cite{hu2018senet} blocks. Nevertheless when fixing a common SimpleConv embedding for comparing all models, DCN{} outperforms the alternatives MAML \cite{finn2017model}, RelationNet \cite{yang2018learning} and Prototypical Nets \cite{snell2017prototypical}, as well as other prior methods with simple embeddings (upper block, Tab.~\ref{tab:mini}). In contrast, when fixing a common SENet embedding, the related and recently state-of-the-art method RelationNet \cite{yang2018learning} improves, but is still surpassed by DCN{}. It is also important to note that improvements from deeper embeddings are not automatic. Competitors PrototypicalNet \cite{snell2017prototypical} (evaluated by us) and MAML \cite{finn2017model} (evaluated by \cite{mishra2018simple}) failed to benefit from deeper embeddings, actually overfitting and becoming worse. \textbf{Multiple Relation Modules:} Finally, we also compare separately the testing accuracy with each DCN{} relation module output score $r_v$ in isolation (DCN{}-$r_v$). We can see that each individual module performs competitively, but their combination clearly leads to the best overall performance, supporting our argument that multiple levels of the feature hierarchy should be used to make general purpose matching decisions. \setlength{\tabcolsep}{4.8pt} \begin{table}[t] \centering \footnotesize \begin{tabular}{@{} lc @{}} \toprule \multirow{2}{*}{\bf Model} &\multicolumn{1}{c}{\multirow{2}{*}{\bf \textit{mini}Imagenet Acc.}}\\ & \multicolumn{1}{c}{} \\ & 5-way 1-shot \\ \midrule \textbf{\textsc{DCN{}}} Our full model & 62.88 $\pm$ 0.83\% \\ \midrule \textbf{\textsc{DCN{}}}-No noise & 60.57 $\pm$ 0.86\% \\ \textbf{\textsc{DCN{}}}-No retrain & 60.79 $\pm$ 0.88\% \\ \textbf{\textsc{DCN{}}}-No retrain, No noise & 58.04 $\pm$ 0.82\% \\ \textbf{\textsc{DCN{}}}-No deep sup. & 58.02 $\pm$ 0.80\% \\ \midrule \textbf{\textsc{DCN{}}} + SimpleConv & 53.48 $\pm$ 0.78\% \\ \textbf{\textsc{DCN{}}} + ResNet & 60.24 $\pm$ 0.82\% \\ \textbf{\textsc{DCN{}}} + WRN & 57.28 $\pm$ 0.81\% \\ \textbf{\textsc{DCN{}}} + SENet & 62.88 $\pm$ 0.83\% \\ \textbf{\textsc{Relation Net}} \cite{yang2018learning} + SimpleConv & 50.44 $\pm$ 0.82\% \\ \textbf{\textsc{Relation Net}} \cite{yang2018learning} + SENet & 57.39 $\pm$ 0.86\% \\ \textbf{\textsc{Prototypical}} \cite{snell2017prototypical} + SimpleConv &49.42 $\pm$ 0.78\% \\ \textbf{\textsc{Prototypical}} \cite{snell2017prototypical} + SENet & 47.61 $\pm$ 0.82\% \\ \textbf{\textsc{MAML}} \cite{finn2017model} + SimpleConv & 48.70 $\pm$ 1.84\% \\ \textbf{\textsc{MAML}} \cite{finn2017model} + Deep \cite{mishra2018simple} & 30.10\% \\ \midrule \textbf{\textsc{DCN{}}}-$r_{1}$ & 52.25 $\pm$ 0.80\% \\ \textbf{\textsc{DCN{}}}-$r_{2}$ & 58.07 $\pm$ 0.80\% \\ \textbf{\textsc{DCN{}}}-$r_{3}$ & 60.69 $\pm$ 0.81\% \\ \textbf{\textsc{DCN{}}}-$r_{4}$ & 58.31 $\pm$ 0.79\% \\ \bottomrule \end{tabular}% \caption{\small Ablation study using 5-way-1-shot classification on \textit{mini}ImageNet{}.
} \label{tab:ablation1} \end{table} \begin{figure*}[t] \centering \includegraphics[width=0.24\textwidth]{mini_score_scatter_RM1.pdf} \includegraphics[width=0.24\textwidth]{mini_score_scatter_RM2.pdf} \includegraphics[width=0.24\textwidth]{mini_score_scatter_RM3.pdf} \includegraphics[width=0.24\textwidth]{mini_score_scatter_RM4.pdf} \vspace{-0.1cm} \caption{\small Illustration of query-support score distribution and the link to ImageNet{} hierarchy. Colors indicate query images of a $(query,support1,support2)$ class triple matching the specified ImageNet{} distance relationship $[D(q,s1),D(q,s2)]$. \cut{{\color{blue} $\hrectangleblack$} $[20,10]$, {\color{green} $\hrectangleblack$} $[10,0]$, {\color{red} $\hrectangleblack$} $[0,0]$}} \label{fig:my_label} \end{figure*} \begin{figure}[tb] \centering \includegraphics[width=0.75\columnwidth]{accuracy14_scatter.pdf} \vspace{-0.1cm} \caption{\small Category-wise accuracy of RM1 vs RM4.} \label{fig:accScatter} \end{figure} \cut{\setlength{\tabcolsep}{4.8pt} \begin{table}[t] \centering \footnotesize \begin{tabular}{@{} lcc @{}} \toprule \multirow{2}{*}{\bf Model} &\multicolumn{1}{c}{\multirow{2}{*}{\bf \textit{mini}Imagenet 5-way Acc.}}\\ & \multicolumn{1}{c}{} \\ & 1-shot \\ \midrule \textbf{\textsc{DCN{}}} Our full model & 62.88 $\pm$ 0.83\% \\ \midrule \textbf{\textsc{DCN{}}}-no\_noise & 60.57 $ \pm $ 0.86\% \\ \textbf{\textsc{DCN{}}}-no\_valid & 60.79 $ \pm $ 0.88\% \\ \textbf{\textsc{DCN{}}}-no\_noise-no\_valid & 58.04$\pm$ 0.82\% \\ \midrule \textbf{\textsc{DCN{}}}\_conv\_without valid & 53.97$\pm$ 0.77\% \\ \textbf{\textsc{DCN{}}}\_resnet & 60.24$\pm$ 0.82\% \\ \textbf{\textsc{DCN{}}}\_WRN & 57.28$\pm$ 0.81\% \\ \midrule \textbf{\textsc{DCN{}}}\_$r_{1}$ & 52.25 $\pm$ 0.80\% \\ \textbf{\textsc{DCN{}}}\_$r_{2}$ & 58.07 $\pm$ 0.80\% \\ \textbf{\textsc{DCN{}}}\_$r_{3}$ & 60.69 $\pm$ 0.81\% \\ \textbf{\textsc{DCN{}}}\_$r_{4}$ & 58.31 $\pm$ 0.79\% \\ \textbf{\textsc{DCN{}}}\_$r_{4}$(trained with $r_{4}$ only) & 58.02 $\pm$ 0.79\% \\ \midrule \textbf{\textsc{Relation Net}}\_SENet-34-16 & 57.39 $\pm$ 0.86\% \\ \textbf{\textsc{Relation Net}} & 50.44 $\pm$ 0.82\% \\ \textbf{\textsc{MAML}}\_deep, evaluated in \cite{mishra2018simple} & 30.10\% & \\ \textbf{\textsc{Prototypical}}\_SENet-34-16 & 47.61 $\pm$ 0.82\% \\ \bottomrule \end{tabular}% \caption{\small \small Ablation study of different settings of few-shot classification accuracies on \textit{mini}Imagenet. } \label{tab:ablation1} \end{table}} \subsubsection{Relation Module Analysis} \vspace{-0.1cm} A key contribution of our model is to perform metric learning at multiple abstraction levels simultaneously via a series of paired relation and embedding modules. We now analyse the differences between relation modules to provide some insight into their complementarity. \vspace{-0.1cm} \keypoint{Score-Distance Correlation} We first checked how the relation module (RM) scores relate to distances in the ImageNet hierarchy. Using \textit{mini}ImageNet{} data, we searched for $(support1,support2,query)$ category tuples where the distances $D(query,support1)$ and $D(query,support2)$ matched a certain number of links, and then plotted instances of the query categories from these tuples against the relative relation module scores $RM(q,s1)$ and $RM(q,s2)$. Fig.~\ref{fig:my_label} presents scatter plots for the four relation modules where points are images and colors indicate category tuples with specified distance from the two support classes.
We can see that: (1) The scores generally match ImageNet{} distances: The most/least similar categories (red/magenta) are usually closer to the top right/bottom left of the plot, while query categories closer to one support class are in the opposite corners (blue/yellow-green). (2) Generally higher-numbered relation modules are more discriminative, separating classes with larger differences in relation score. \keypoint{Score Correlation} We next investigated if relation module predictions are diverse or redundant. We analysed the correlation in their predictions by randomly picking 10,000 image pairs from \textit{mini}ImageNet{} and computing the Spearman rank-order correlation coefficient \cite{Spearman1904Proof} between each pair of relation modules' scores. The results in Tab.~\ref{tab:score-corelation} show that: (1) Many correlations are relatively low (down to 0.34), indicating that they are making diverse, non-redundant predictions; and (2) Adjacent RMs have higher correlation than non-adjacent RMs, indicating that prediction diversity is related to RM position in the feature hierarchy. \keypoint{Prediction Success by Module} We know that RM predictions do not necessarily agree. But to find out if they are complementary, we made a scatter plot of the per-class accuracy of RM-1 vs RM-4 in Fig.~\ref{fig:accScatter}. We can see that many categories lie on the diagonal, indicating that RM-1 and RM-4 get them right equally often. However there are some categories \emph{below} the diagonal, indicating that RM-1 gets them right more often than RM-4. Examples include both stereotyped and fine-grained categories such as `hourglass' and `African hunting dog'. These below-diagonal elements confirm the value of using deeper features in metric learning. \vspace{-0.2cm} \setlength{\tabcolsep}{4.8pt} \begin{table}[tb] \centering \footnotesize \begin{tabular}{@{} ccccc @{}} \toprule {\bf Module} & {\bf RM1} & {\bf RM2} & {\bf RM3} & {\bf RM4} \\ \midrule \textbf{RM1} & - & - & - & - \\ \textbf{RM2} & 0.75 & - & - & - \\ \textbf{RM3} & 0.55 & 0.73 & - & - \\ \textbf{RM4} & 0.34 & 0.45 & 0.61 & - \\ \bottomrule \end{tabular} \caption{\small Spearman rank-order correlation coefficient between different relation modules.} \label{tab:score-corelation} \end{table} \vspace{-0.2cm} \section{Conclusion} We proposed Deep Comparison Network\xspace{}s, a new general purpose matching framework for few-shot learning. This architecture performs effective few-shot learning by learning non-linear comparisons simultaneously at multiple levels of feature extraction, while resisting overfitting. The resulting method achieves state of the art results on \textit{mini}ImageNet{} and the more ambitious \textit{tiered}ImageNet{}, while retaining architectural simplicity, and fast training and testing processes. {\small \bibliographystyle{ieee}
\section{Introduction} \label{sec:intro} Deep convolutional neural networks (CNNs) with 2D convolutions and small kernels~\cite{vgg} have achieved state-of-the-art results for several speech recognition tasks~\cite{ibm2015,ibm2016,microsoft2016,vdcnn,vdcnn_adapt}. The accuracy of those models grows with their complexity, leading to redundant latent representations. Several approaches have been proposed in the literature to reduce this redundancy~\cite{pruning1,thinet,condensenet,shufflenet,efficientnet}, and therefore to improve their efficiency. Octave convolutional layers~\cite{OctConv} address the problem of spatial redundancy in feature maps by learning feature representations at high and low resolutions. The low resolution processing path increases the size of the receptive field in the original input space, which is a plausible explanation of the improved performance for image classification. We extend the octave convolution concept to multi-scale octave convolutional layers, which include lower resolution feature maps with a higher compression rate (reduction by more than one octave), and the use of more than two feature map tensor groups in order to learn representations at multiple scales. Multi-scale processing has been previously proposed for a variety of speech recognition tasks~\cite{wavelets1,wavelets2,toth,blnet,multi_span}. In deep CNN acoustic models, some of the feature maps may need to represent information which varies at a lower rate, such as the characteristics of the speaker or background noise, compared to the information necessary for phonetic discrimination. Spatial average pooling in a low resolution group of feature maps can be interpreted as a form of low-pass filtering, providing smoothed representations of the observed data, potentially leading to improved performance. We investigate the use of multi-scale octave convolutional layers for robust speech recognition, and attempt to shed more light on the explainability of the models by evaluating the robustness of the learned representations using an affine transformation loss to measure the similarity between clean and noisy encodings. \section{Multi-scale octave convolutions} \label{sec:method} An octave convolutional layer~\cite{OctConv} factorizes the output feature maps of a convolutional layer into two groups. The resolution of the low-frequency feature maps is reduced by an octave -- height and width dimensions are divided by $2$. In this work, we explore spatial reduction by up to 3 octaves -- dividing by $2^t$, where $t=1,2,3$ -- and for up to 4 groups. We refer to such a layer as a multi-octave convolutional (MultiOctConv) layer, and an example with three groups and reductions of one and two octaves is depicted in Fig.~\ref{fig:octconv}. \begin{figure}[t] \centering \includegraphics[scale=0.38]{octconv_alpha.png} \caption{Multi-octave convolution scheme for 3 resolution groups. Red and green arrows show the connections for the initial and final MultiOctConv layers, respectively. $N$ corresponds to the total number of groups in the MultiOctConv layer ($N=3$ in this example). $\alpha_{n}$ is a fraction of channels corresponding to group $n$. $h$ and $w$ are spatial dimensions.} \label{fig:octconv} \end{figure} In a vanilla CNN the convolutions have the same spatial resolution throughout the network. An octave convolutional (OctConv) layer is divided into high- and low-frequency feature maps, and a multi-octave convolutional (MultiOctConv) layer has feature maps reduced by multiple octaves.
Let the input feature tensor be $X \in \mathbb{R}^{c_{in} \times h \times w}$, where $c_{in}$ denotes the number of input channels and $h$ and $w$ correspond to the spatial dimensions. In a MultiOctConv layer working at 3 resolutions, $X$ is factorized along the channel dimension into $X = \{X^1, X^2, X^3\}$. The first group tensor, $X^1$, is a representation at the same spatial scale as $X$. The spatial dimensions of the second and third group tensors, $X^2$ and $X^3$, are reduced by one and two octaves, respectively. The dimensions of the input tensors $X^1$, $X^2$ and $X^3$ are given in Fig.~\ref{fig:octconv}. The fraction of the channels for each group is denoted by $\alpha_{n} \in [0,1]$, where $\sum_{n=1}^{N} \alpha_{n} = 1$ for $N$ resolution groups in the MultiOctConv layer. For simplicity, we use the same $\alpha_{n}$ for input and output representations within the same scale group. Similarly, the output tensors are factorized into $Y = \{Y^1, Y^2, Y^3\}$. Their dimensions are analogous to those of the input tensors and are given in Fig.~\ref{fig:octconv}. To compute $Y^1$, $Y^2$ and $Y^3$ we operate directly on the factorized input tensors $X^1$, $X^2$ and $X^3$. The inter-frequency information update is implemented as a sum of feature maps from different resolution groups. To be able to sum those representations for a desired output scale, the spatial dimensions of the input tensors must be the same. For this reason, two operations are employed: spatial average pooling \texttt{pool($X, p$)} and bilinear interpolation \texttt{upsample($X, u$)}, where $p$ is the kernel size and stride of the 2D pooling layer and $u$ is the upsampling factor. The output MultiOctConv representations are therefore computed as \begin{align*} Y^1 &= f(X^1; W^{1\rightarrow1}) + \texttt{upsample}(f(X^2;W^{2\rightarrow1}), 2) \\ &\quad + \texttt{upsample}(f(X^3;W^{3\rightarrow1}), 4) \\ Y^2 &= f(X^2; W^{2\rightarrow2}) + \texttt{upsample}(f(X^3;W^{3\rightarrow2}), 2) \\ &\quad + f(\texttt{pool}(X^1, 2); W^{1\rightarrow2}) \\ Y^3 &= f(X^3; W^{3\rightarrow3}) + f(\texttt{pool}(X^1, 4); W^{1\rightarrow3}) \\ &\quad + f(\texttt{pool}(X^2, 2); W^{2\rightarrow3}) \end{align*} \noindent where $f(.)$ is the convolution function and $W^{n_{in}\rightarrow{n_{out}}}\in\mathbb{R}^{c_{in} \times k \times k \times c_{out}}$ is the convolution filter for a $k \times k$ kernel. We call the information update ``intra-frequency'' when $n_{in} = n_{out}$, and ``inter-frequency'' when $n_{in} \neq n_{out}$. It is important to note that the convolution $f(.)$ operates on the tensors compressed with average pooling and on the tensors before upsampling, making the design more efficient. The number of parameters in the MultiOctConv layer is the same as in a vanilla convolutional layer.
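To make the computation above concrete, the following minimal sketch (our own, assuming PyTorch; the class name, channel rounding and padding are illustrative choices, not part of the model description) implements a three-group MultiOctConv layer with the pool-before-convolution and convolution-before-upsampling ordering described above.
\begin{verbatim}
import torch
import torch.nn.functional as F
from torch import nn

class MultiOctConv3(nn.Module):
    # Sketch of a 3-group multi-octave convolution. Group n
    # (n = 0, 1, 2) holds a fraction alpha[n] of the channels at a
    # spatial resolution reduced by 2**n.
    def __init__(self, c_in, c_out, alpha=(0.8, 0.1, 0.1), k=3):
        super().__init__()
        cin = [int(a * c_in) for a in alpha]    # channel rounding is
        cout = [int(a * c_out) for a in alpha]  # glossed over here
        # One k x k convolution per (input -> output) group path.
        self.conv = nn.ModuleList(
            [nn.ModuleList([nn.Conv2d(cin[i], cout[o], k, padding=k // 2)
                            for o in range(3)])
             for i in range(3)])

    def forward(self, x1, x2, x3):
        def up(t, u):
            return F.interpolate(t, scale_factor=u, mode='bilinear',
                                 align_corners=False)
        pool = F.avg_pool2d   # kernel size = stride = p
        # Each path convolves at the coarser of its two resolutions
        # (pool before conv, conv before upsample), which is what
        # makes the layer cheaper than a vanilla convolution.
        y1 = (self.conv[0][0](x1)
              + up(self.conv[1][0](x2), 2)
              + up(self.conv[2][0](x3), 4))
        y2 = (self.conv[0][1](pool(x1, 2))
              + self.conv[1][1](x2)
              + up(self.conv[2][1](x3), 2))
        y3 = (self.conv[0][2](pool(x1, 4))
              + self.conv[1][2](pool(x2, 2))
              + self.conv[2][2](x3))
        return y1, y2, y3
\end{verbatim}
Since the $\alpha_n$ fractions partition the channels, $\sum_{i,o}(\alpha_i c_{in})(\alpha_o c_{out})k^2 = c_{in} c_{out} k^2$, which is the parameter-count equality stated above (up to channel rounding).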
\subsubsection*{Robustness of learned representations} \label{sec:robustness} \begin{figure}[t] \centering \includegraphics[scale=0.4]{projection.png} \caption{Proposed method to measure the robustness of learned representations.} \label{fig:robust} \end{figure} To evaluate the robustness of the learned representations, we compare the projections of clean and noisy Aurora-4 samples. The similarity between them is measured using the mean squared error (MSE) loss of an affine projection $y$ mapping $N$ clean samples to their noisy counterparts (Eq.~\ref{eq:mse}); the affine map takes into account permutations of hidden representations and ensures invariance of the metric to affine transformations of the encodings. The number of units in layer $y$ and the dimensionality $D$ of $\mathbf{x}_{h}$ is 1024. \begin{equation} \theta^* = \argmin_{\theta}{\frac{1}{ND} \sum_{i=1}^N \big\lVert y({\mathbf{x}_{h,clean}^{(i)}, \theta}) - {\mathbf{x}_{h,noisy}^{(i)}} \big\rVert ^2} \label{eq:mse} \end{equation} We use the Aurora-4 test sets and compare clean encodings $\mathbf{x}_{h,clean}$ with noisy encodings $\mathbf{x}_{h,noisy}$, obtained as the activations from the last convolutional layer in a forward pass through a trained model. Hidden representations were obtained for both the CNN and octave CNN (OctCNN) models in order to compare representations across models. For intra-model comparison, we also evaluate the loss with the encodings from the high- and low-resolution groups (paths $Y^{1\rightarrow1}$ and $Y^{2\rightarrow1}$). This analysis aims to evaluate whether the low-resolution encodings of noisy samples are indeed more similar to the clean ones than the high-resolution encodings, which would suggest more robust representations. We optimize the parameters of $y$ with back-propagation for a fixed number of 3 epochs and report the validation loss for the Aurora-4 test sets.
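A minimal sketch of this probe (our own, assuming PyTorch and encodings already flattened to shape $(N, D)$; the function name and the optimizer are illustrative choices) fits the affine map of Eq.~\ref{eq:mse} and returns the final loss.
\begin{verbatim}
import torch
from torch import nn

def affine_similarity(clean, noisy, epochs=3, lr=1e-3):
    # Fit y(x) = Wx + b mapping clean encodings to noisy ones and
    # return the final MSE of Eq. (eq:mse). clean, noisy: (N, D).
    N, D = clean.shape
    y = nn.Linear(D, D)          # single affine projection, D = 1024
    opt = torch.optim.Adam(y.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        # mean over all N * D entries = (1/ND) sum_i ||.||^2
        loss = ((y(clean) - noisy) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()
\end{verbatim}
A low value for a given pair of encodings means that they agree up to an affine transformation, which is how the high- and low-resolution branches are compared below.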
\section{Experimental setup} \label{sec:setup} \noindent \textbf{Aurora-4}~\cite{aurora}: We evaluate our models on the simulated multi-condition Aurora-4 dataset, consisting of $\sim$15h of audio for training and $\sim$9h for testing. The test set is divided into 4 subsets: A, B, C, and D. Subset A contains clean-condition recordings, subset B has 6 noise types added to the recordings (car, babble, restaurant, street, airport, train), subset C is recorded with a mismatched microphone, and subset D is recorded with a mismatched microphone and with noise added. In our experiments, we use multi-condition GMM-HMM forced alignments as targets for CNN training. The number of CD states for Aurora-4 is 3422. \noindent \textbf{AMI}~\cite{ami}: AMI contains $\sim$100h of meeting recordings, captured by an independent headset microphone (IHM), a single distant microphone (SDM), and multiple distant microphones (MDM), where the microphones are combined using the BeamformIt~\cite{beamformit} toolkit. We train our models on the MDM data and evaluate them on all 3 types of recordings to analyze the effect of mismatched training/testing conditions. We use the suggested train/dev/eval data split~\cite{pawel_ami}, and we evaluate the models on both dev and eval sets. The number of CD states for AMI is 3984. \noindent \textbf{Features}: In our experiments, we use 40-dimensional mel-scaled filterbank (FBANK) features with \{-5,..,5\} context for splicing, resulting in a $40\times11$ input feature map. \noindent \textbf{Models}: Our baseline CNN model~\cite{my_asru_2017} consists of 15 convolutional layers and one fully-connected layer. We use $3\times3$ kernels throughout the network. We start with 64 output channels in the first layer and double them after 3 and 9 layers. We use batch normalization in every convolutional layer, followed by ReLU (unless a reverse order is noted). The initial learning rate is 0.001. We use early stopping for training. \iffalse \begingroup \renewcommand{\arraystretch}{0.5} \begin{table}[] \centering \begin{tabular}{l|r|r|r|r} layer (L) & $c_{in}$ & $d_{in}$ & $c_{out}$ & $d_{out}$ \\ \hline Conv1 & 1 & 40x11 & 64 & 40x11 \\ Conv2 & 64 & 40x11 & 64 & 40x11 \\ Conv3 & 64 & 40x11 & 64 & 20x11 \\ Conv4 & 64 & 20x11 & 128 & 20x11 \\ Conv5 & 128 & 20x11 & 128 & 20x11 \\ Conv6 & 128 & 20x11 & 128 & 10x11 \\ Conv7 & 128 & 10x11 & 128 & 10x11 \\ Conv8 & 128 & 10x11 & 128 & 10x11 \\ Conv9 & 128 & 10x11 & 128 & 5x6 \\ Conv10 & 128 & 5x6 & 256 & 5x6 \\ Conv11 & 256 & 5x6 & 256 & 5x6 \\ Conv12 & 256 & 5x6 & 256 & 3x3 \\ Conv13 & 256 & 3x3 & 256 & 3x3 \\ Conv14 & 256 & 3x3 & 256 & 3x3 \\ Conv15 & 256 & 3x3 & 256 & 2x2 \\ FC & 256 & 2x2 & 3422 & 1x1 \end{tabular} \caption{Baseline CNN network structure.} \label{tab:vdcnn} \end{table} \endgroup \fi \section{Results} \label{sec:results} We present our results in terms of accuracy and robustness on Aurora-4 and AMI, as well as in terms of the computational cost, calculated as the number of multiply-accumulate operations (MACCs) performed for a single input feature map. The cost reduction when using octave convolutions stems from the reduced dimensions $c_{in}$, $c_{out}$, $h$, and $w$ compared to a vanilla convolutional layer.
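As a rough illustration of this cost model (our own sketch; the exact bookkeeping behind the \#MACCs numbers reported below may differ), a $k \times k$ convolution producing an $h \times w$ output map costs $c_{in}\, c_{out}\, k^2\, h\, w$ MACCs, and a MultiOctConv layer sums this over its paths, each evaluated at the coarser of its two resolutions as in Sec.~\ref{sec:method}.
\begin{verbatim}
def conv_maccs(c_in, c_out, k, h, w):
    # One MACC per kernel element, input channel, output channel
    # and output position.
    return c_in * c_out * k * k * h * w

def multioctconv_maccs(c_in, c_out, alphas, octaves, k, h, w):
    # alphas: channel fraction per group; octaves: reduction t per
    # group (spatial dimensions divided by 2**t).
    total = 0
    for a_i, t_i in zip(alphas, octaves):
        for a_o, t_o in zip(alphas, octaves):
            t = max(t_i, t_o)  # the path runs at the coarser scale
            total += conv_maccs(int(a_i * c_in), int(a_o * c_out),
                                k, h // 2**t, w // 2**t)
    return total
\end{verbatim}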
\smallskip \noindent \textbf{Aurora-4:} Results for Aurora-4 are presented in Table~\ref{tab:aurora_wer}. We replace the vanilla convolutional layers of our baseline model (CNN) with OctConv and MultiOctConv layers. We first evaluate which layers can be replaced, and find that all but the first layer, which operates directly on the input representation, should be replaced for the best performance. This approach (L2-L15) is also the least costly. Reducing the ratio of low-resolution representations to 0.125 improves the WER for the mismatched microphone scenario C, but not for all test conditions. Applying batch normalization after ReLU is beneficial for test sets C and D. For OctCNN models, the WER for test set D dropped by $\sim0.4\%$ with a compression by one octave, and by another $\sim0.4\%$ with a reversed batch normalization and ReLU order. \begin{table*}[ht] \centering \small \begin{tabular}{llc|ccc|c|rrrr|r} Model & OctConv & $\alpha$ (low $\rightarrow$ high) & $2^1$ & $2^2$ & $2^3$ & \#MACCs (M) & A & B & C & D & Avg. \\ \hline CNN & - & [0, 1] & - & - & - & 174.7 & 2.19 & 4.68 & 4.22 & 14.53 & 8.69 \\ \hline OctCNN & L1-L3 & [0.2, 0.8] & $\checkmark$ & - & - & 167.6 & 2.19 & 4.74 & 4.32 & 14.83 & 8.85 \\ OctCNN & L1-L15 & [0.2, 0.8] & $\checkmark$ & - & - & 126.9 & 2.22 & 4.61 & 4.30 & 14.40 & 8.61 \\ OctCNN & L2-L15 & [0.2, 0.8] & $\checkmark$ & - & - & 126.2 & 2.02 & 4.65 & 4.35 & 14.16 & 8.52 \\ OctCNN $^\dagger$ & L2-L15 & [0.2, 0.8] & $\checkmark$ & - & - & 126.2 & 2.22 & 4.82 & 4.22 & 13.72 & 8.41 \\ OctCNN & L2-L15 & [0.125, 0.875] & $\checkmark$ & - & - & 143.1 & 2.11 & 4.56 & 4.07 & 14.55 & 8.63 \\ \hline MultiOctCNN & L2-L15 & [0.1, 0.1, 0.8] & $\checkmark$ & $\checkmark$ & - & 120.6 & \textbf{1.98} & 4.51 & 4.11 & 14.00 & 8.37 \\ MultiOctCNN & L2-L15 & [0.1, 0.1, 0.8] & $\checkmark$ & - & $\checkmark$ & 119.5 & 2.02 & 4.59 & \textbf{3.92} & 13.82 & \textbf{8.31} \\ MultiOctCNN $^\dagger$ & L2-L15 & [0.1, 0.1, 0.8] & $\checkmark$ & - & $\checkmark$ & 119.5 & 2.28 & 4.81 & 4.04 & 13.76 & 8.41 \\ MultiOctCNN & L2-L15 & [0.1, 0.1, 0.1, 0.7] & $\checkmark$ & $\checkmark$ & $\checkmark$ & \textbf{94.3} & 2.30 & 4.88 & 4.18 & 14.06 & 8.58 \\ MultiOctCNN & L2-L15 & [0.2, 0.8] & - & $\checkmark$ & - & 115.7 & 2.15 & 4.77 & 4.07 & 13.77 & 8.39 \\ MultiOctCNN & L2-L15 & [0.125, 0.875] & - & $\checkmark$ & - & 136.3 & 2.09 & 4.56 & 4.22 & 14.32 & 8.54 \\ MultiOctCNN & L2-L15 & [0.2, 0.8] & - & - & $\checkmark$ & 113.5 & 2.09 & 4.54 & 3.94 & 14.03 & 8.39 \\ MultiOctCNN & L2-L15 & [0.125, 0.875] & - & - & $\checkmark$ & 134.9 & 2.02 & \textbf{4.50} & 4.17 & 13.87 & 8.32 \\ MultiOctCNN $^\dagger$ & L2-L15 & [0.125, 0.875] & - & - & $\checkmark$ & 134.9 & 2.32 & 4.73 & 4.24 & \textbf{13.57} & \textbf{8.31} \end{tabular} \caption{WERs [\%] for Aurora-4 test sets A, B, C and D for octave and multi-octave CNNs. The ``OctConv'' column indicates where a Conv layer was replaced with an OctConv or MultiOctConv. $2^1$, $2^2$ and $2^3$ correspond to the factor of spatial dimension reduction. Models with batch normalization after ReLU are denoted by $^\dagger$.} \label{tab:aurora_wer} \end{table*} The biggest differences between the MultiOctCNN models can be observed for test set D. The models with the lowest WERs are the ones with a spatial reduction by 2 or 3 octaves, and with 2 or 3 groups. This indicates that multi-scale octave convolutions are an effective as well as an efficient design for processing speech with background noise and channel mismatch. For MultiOctCNNs, batch normalization after ReLU also gives a performance boost for test set D, with a drop to $13.57\%$. \begin{figure}[t] \centering \includegraphics[scale=0.17]{mse_loss_no_margin.png} \caption{MSE affine transformation loss measuring the similarity of ``clean'' and ``noisy'' encodings ($\mathbf{x}_{h,clean}$ and $\mathbf{x}_{h,noisy}$). ``all'' corresponds to the output of the last convolutional layer (Conv15); ``high'' and ``low'' correspond to its $Y^{1\rightarrow1}$ and $Y^{2\rightarrow1}$ branches, respectively.} \label{fig:mse} \end{figure} To further evaluate the robustness of the latent representations, we measured the MSE between the (projected) representations as described above (Fig.~\ref{fig:mse}). The loss for the activations at the output of Conv15 (``all'') is similar for the CNN and OctCNN models for test sets B and C, but lower for test set D for the OctCNN, indicating that the learned representations are more robust, contributing to lower WERs.
As expected, within-model comparison of the loss shows that the representations at low resolution are more similar to the clean encodings from test set A than the ones at high resolution. We believe that this effect improves the robustness of the latent representations and results in a decreased WER. \begin{table*}[h!t] \centering \small \begin{tabular}{llc|ccc|c|rrrrrr} &&&&&& & \multicolumn{2}{c}{IHM} & \multicolumn{2}{c}{SDM} & \multicolumn{2}{c}{MDM} \\ Model & OctConv & $\alpha$ (low $\rightarrow$ high) & $2^1$ & $2^2$ & $2^3$ & \#MACCs (M) & dev & eval & dev & eval & dev & eval \\ \hline CNN & - & [0, 1] & - & - & - & 175.2 & 33.4 & 38.3 & 49.1 & 54.0 & 43.9 & 48.0 \\ \hline OctCNN & L1-L3 & [0.2, 0.8] & $\checkmark$ & - & - & 168.2 & 33.0 & 38.1 & 49.0 & 54.1 & 43.8 & 47.9 \\ OctCNN & L2-L15 & [0.2, 0.8] & $\checkmark$ & - & - & 126.7 & 33.0 & 37.7 & 48.9 & 54.0 & 43.7 & 47.7 \\ OctCNN & L1-L15 & [0.2, 0.8] & $\checkmark$ & - & - & 127.5 & \textbf{32.2} & \textbf{37.2} & 48.3 & 53.5 & 43.1 & 47.3 \\ OctCNN & L1-L15 & [0.125, 0.875] & $\checkmark$ & - & - & 144.1 & 32.5 & 37.4 & \textbf{48.2} & \textbf{53.3} & \textbf{42.9} & \textbf{47.2} \\ OctCNN$^\dagger$ & L1-L15 & [0.125, 0.875] & $\checkmark$ & - & - & 144.1 & 33.2 & 38.3 & 48.8 & 54.3 & 43.7 & 48.0 \\ \hline MultiOctCNN & L1-L15 & [0.1, 0.1, 0.8] & $\checkmark$ & $\checkmark$ & - & 121.6 & 32.8 & 38.1 & 48.9 & 53.9 & 43.7 & 47.9 \\ MultiOctCNN & L1-L15 & [0.1, 0.1, 0.8] & $\checkmark$ & - & $\checkmark$ & 120.4 & 33.3 & 38.5 & 49.2 & 54.5 & 44.1 & 48.4 \\ MultiOctCNN & L1-L15 & [0.1, 0.1, 0.1, 0.7] & $\checkmark$ & $\checkmark$ & $\checkmark$ & \textbf{95.2} & 33.7 & 38.7 & 49.5 & 54.6 & 44.1 & 48.4 \\ MultiOctCNN & L1-L15 & [0.125, 0.875] & - & $\checkmark$ & - & 136.9 & 33.6 & 38.6 & 49.7 & 54.6 & 44.3 & 48.4 \\ MultiOctCNN & L1-L15 & [0.125, 0.875] & - & - & $\checkmark$ & 135.4 & 32.9 & 38.1 & 49.1 & 54.3 & 43.8 & 48.0 \\ \end{tabular} \caption{WERs [\%] for models trained on AMI MDM and evaluated on IHM, SDM and MDM conditions. The ``OctConv'' column indicates where a Conv layer was replaced with an OctConv or MultiOctConv. $2^1$, $2^2$ and $2^3$ correspond to the factor of spatial dimension reduction. Models with batch normalization after ReLU are denoted by $^\dagger$.} \label{tab:ami_wer} \end{table*} \smallskip \noindent \textbf{AMI:} Results for AMI are presented in Table~\ref{tab:ami_wer}. In contrast to the Aurora-4 findings, better performance was achieved with a fully octave model (L1-L15). This is an interesting finding, and we believe that multi-scale processing of the input feature space is beneficial for AMI MDM because of the reverberation in the data. The reverberated input time$\times$frequency representation can be viewed as spatially redundant, so an OctConv layer applied directly to the input representation is effective. Unfortunately, the only MultiOctConv model superior to the baseline CNN is the one with 3 groups and spatial reductions by 1 and 2 octaves. This result suggests that, for AMI MDM, the spatial redundancy remaining in this architecture does not degrade the performance. However, in terms of computational cost, we can reduce the \#MACCs by a factor of 1.8 with only a small WER increase for a model with 4 resolution groups. \section{Conclusions} \label{sec:conclusions} We have presented multi-scale octave CNN models for robust and efficient speech recognition.
We build on Chen et al.~\cite{OctConv}, applying the method to robust ASR and extending it to multiple resolution groups with a spatial reduction of more than one octave. Our experiments confirm that multi-scale processing of the hidden representations is not only more computationally efficient, but also improves recognition accuracy. Similarity measures between clean and noisy encodings indicate that multi-scale processing in a deep CNN acoustic model improves the robustness of the learned representations, especially in the additive noise and mismatched microphone scenarios. A gain from octave convolutions was also observed for AMI MDM data with significant reverberation, when they were applied to the input feature space. However, the model performance for AMI MDM was not improved with multi-octave convolutions. More careful tuning of the $\alpha$ hyperparameter could improve the results: it controls the ratio of multi-scale feature maps in the model, enabling both the learning of fine-grained representations that preserve the details necessary for phonetic discrimination, and of smoothed, more invariant representations that improve the robustness of the model. It would also be possible to set $\alpha$ layer by layer, enabling the fractions of channels at different resolutions to vary with the depth of the representation. We proposed an MSE loss on a single affine projection layer to measure the affine relationship between clean and noisy hidden representations. With this approach, we evaluated the robustness of the encodings and improved the explainability of our models. A more thorough analysis of the learned representations is an interesting future direction. We confirmed that the noisy lower-resolution representations are more similar to their clean counterparts than the high-resolution ones, and thus are more robust. However, we did not investigate the reason for the increased similarity, leaving it to future work to ascertain whether the lower-resolution group captures speaker or noise characteristics, or more invariant phonetic representations. \bibliographystyle{IEEEbib}
\section{Introduction} In high energy collisions of leptons, hadrons and nuclei we observe the production of many particles, mainly hadrons. The ultimate goal in multiparticle production studies is the explanation of the hadronic phenomena within QCD, the basic theory of the strong interactions, along with the other interactions of the standard model and possibly beyond. A basic problem in the QCD study of multiparticle production is the matching of parton and hadron dynamics, relevant in the respective weak and strong coupling regimes of QCD. A class of inclusive observables in hard collisions can be computed perturbatively in terms of the running coupling constant $\alpha_s$, thanks to the celebrated asymptotic freedom \cite{gwp}. In practice, it is often required to include some additional non-perturbative input from other sources (\eg Parton Distribution Functions). Hadronic phenomena can be systematically analysed within lattice gauge theory in sufficiently simple problems. The description of genuine multi-hadron production requires, in addition to the hard QCD part (shower calculus), some phenomenological approaches: specific hadronization models or such simple ideas as ``parton hadron duality''. At present, a systematic approach to multi-hadron production based entirely on QCD remains a dream. Is it just a problem of complexity, or do we need a fundamentally new insight? In this talk we concentrate on the four topics mentioned in the abstract, which represent different variants of the interplay of hard and soft interactions, and we briefly touch upon other topics. I am sorry for the incomplete coverage of the many interesting presentations at this conference. \section{Hard processes with jets and heavy quarks} Results are reported from the TEVATRON, HERA, RHIC and LEP accelerators. Of central importance is the comparison with fixed order perturbation theory to test the universality of the coupling constant $\alpha_s(Q^2)$ in all processes. Here the goal is to improve the accuracy of the calculations, in particular by extending them beyond Next-to-Leading-Order (NLO) accuracy, and to compute within the QCD framework observables of higher complexity. An important goal is also the discovery of new physics, either through deviations of experimental results from the precision calculations or through a better understanding of background processes. {\it Top quark production (D. Bauer)} has been studied at the TEVATRON $p\bar p$ collider, where in the new RUN II the cms energy is increased from 1.8 to 1.96 TeV. Top quarks at these energies are produced primarily in pairs and they decay through $t\to Wb$. The various decay channels of the $W$ have been analysed by both experiments and consistent results have been obtained in eight channels, all compatible with the theoretical computations beyond NLO of $\sigma_{top}=6.77\pm0.42$ pb at $m_t=175$ GeV \cite{kv}. A new value for the top quark mass has been presented by D0 \cite{d0top} from a reanalysis of their earlier RUN I data: $m_t=180.1\pm3.6\ ({\rm stat.})\pm 3.9\ ({\rm syst.})$ GeV, with considerably reduced errors; this result increases the world average by 4 GeV to $m_t=178\pm4.3$ GeV. The best RUN II CDF result obtained so far is $m_t=177.8^{+4.5}_{-5.0}\pm6.2 $ GeV. No single top production has been observed, and $\sigma_t<8.5$ pb at 95\% CL (RUN II, CDF). {\it Single top production (S.D. Ellis)}.
The observation of this process would determine the CKM matrix element $V_{tb}$ and be important in other searches (Higgs, new particles, including extra scalar bosons or gauge bosons, new quarks). The expected rates are still below the presently achieved sensitivity. Therefore some strategies to obtain an improved signal/background ratio are proposed, especially the use of the ``signed rapidity'' variable, which takes into account the fact that processes with $W$'s are not separately C or P invariant. \begin{figure}[tb] \unitlength1cm \begin{center} \mbox{\epsfig{file=j1jet.eps,width=7cm}} \end{center} \caption{\label{wjet} Distribution of the jet E$_{T}$ for the jet of lowest energy in the $W +\geq n$ jet sample, for $n$=1 (top) to $n$=4 (bottom), in comparison with LO QCD calculations. } \label{fig:jets} \end{figure} {\it Jet Production at the TEVATRON (L. Sawyer)} is now measured at the higher cms energy by CDF and D0. Single inclusive jets are studied up to transverse momenta $p_T\sim 550$ GeV, that is 150 GeV higher than in RUN I, and di-jet masses up to $M_{jj}\sim 1400$ GeV; the azimuthal decorrelation of two-jet events, an effect of ${\cal O}(\alpha_s^3)$, is also measured. These results are found to be in overall consistency with the NLO QCD predictions \cite{eks,jetrad} in the extended regime of energy and transverse momentum, within the systematic errors, which are dominated by the energy scale uncertainty. Deviations appear only in extreme kinematic regions of the azimuthal decorrelation. \\ {\it Production of W,Z,$\gamma$ + jets (A. Cruz)} allows further tests of pQCD at large momentum transfers using different techniques. The inclusive W production cross section rises from $2.38\pm 0.24$ nb at 1.8 TeV to $2.64\pm 0.18$ nb at 1.96 TeV and is well described by the NNLO QCD results of 2.5 and 2.73 nb, respectively. For the production of a W together with $n$ jets, the distributions of the jet transverse energies have been measured. There is good agreement with a calculation (ALPGEN \cite{alpgen}) based on leading-order QCD matrix elements combined with parton shower development (HERWIG \cite{herwig}), see Fig.~\ref{fig:jets}. Here the experimental uncertainties from the jet energy scale are comparable in size to the theoretical scale uncertainties. {\it Central di-photon production} has been measured up to masses of about 30 GeV. The NLO QCD calculations (DIPHOX \cite{diphox}), involving $q\bar q$ annihilation and the quark loop diagram for $gg\to \gamma\gamma$, describe the data well in absolute normalization. The {\it associated production of a photon with heavy flavours (c,b)} has been measured as well, for photon energies up to 60 GeV. It agrees with a LO calculation (Pythia \cite{pythia}). In summary, for the TEVATRON no serious disagreements of the new data with QCD expectations are met, at different levels of accuracy and for observables of different levels of complexity; at the same time, we do not obtain any signal of new physics. The other problems discussed here concern the role of the ``underlying event'' ({\it R.D. Field}, see below) and the influence of the selection of jets by a particular algorithm (``cone'' vs. ``$K_T$'') ({\it Andrieu}). {\it Monte Carlo generators (L. L\"onnblad)} are being developed to generate parton final states beyond LO, for present accelerators but also for the LHC.
For problems like the production of multi-jet or W+jet events a matching of the NLO matrix element and the parton shower has been achieved; double scattering effects are also being considered. {\it Fragmentation functions (S. Kretzer)} $D_{q,g}^{h^\pm}(z,Q^2)$ are compared in $pp$ and $e^+e^-$ collisions as a test of the collinear factorization approach of QCD at NLO, together with universality. New data obtained at RHIC on $pp\to \pi X$ are well predicted using the previous results \cite{kkp,kretzer} from $e^+e^-$ collisions on $D_{q,g}^{h^\pm}$ at the factorization scale and subsequent DGLAP evolution. {\it Jet production at HERA (C. Glasman)} has now been measured down to $x\simeq 10^{-4}$ for momentum transfers $Q^2$ larger than a few GeV$^2$. The $Q^2$ dependence of two- and three-jet cross sections in NC DIS up to $Q^2 \sim 5000$ GeV$^2$, obtained by ZEUS, is found in good agreement with NLO (${\cal O}(\alpha_s^2)$ and ${\cal O}(\alpha_s^3$)) predictions. The ratio of both results provides an accurate test of the theory and in particular a precise determination of the coupling constant $\alpha_s(M_Z)=0.1179\pm0.0013$ (stat.)$^{+0.0028}_{-0.0046}$ (exp.), which by itself compares well with the current world average $\alpha_s(M_Z)=0.1182\pm 0.0027$ \cite{bethke}, but there is still a large theoretical error of ($+0.0061, -0.0047$). Of special interest, also in view of further applications to small $x$ physics, is a test of the validity of the DGLAP approximation to the $Q^2$ evolution of structure functions. If jets are produced ``forward'' (in the direction of the incoming proton) and have transverse energies $(E_T^{jet})^2$ of order $Q^2$, then the kinematic configuration is not in favour of intermediate gluon emission with strong $k_T$ ordering, as is typical for DGLAP evolution, and deviations from DGLAP-based predictions are expected \cite{mueller}. Indeed, if jets are selected by the H1 collaboration with $0.5< (E_T^{jet})^2/Q^2<5$ and $x<0.004$, then the ``direct'' NLO QCD predictions (DISENT \cite{disent}), \ie without photon structure, are too low by about a factor of 3 for very small $x<0.001$. Considerable improvement, although not full agreement, is obtained if a photon structure is included in the DGLAP calculation. If one of the two scales $E_T$ or $Q^2$ is large compared to the other, the DGLAP factorization approach works best. While work continues to develop the approximations beyond DGLAP, there is no serious conflict with pQCD at the fundamental level. {\it Spin physics}. The spin program at RHIC ({\it E. Sichtermann}) aims at measurements of hard and soft processes with polarised protons. The single transverse spin asymmetry $A_N$ of forward $\pi^0$ production has been observed at FNAL at $\sqrt{s}=20$ GeV, and the first results from RHIC show that it persists at $\sqrt{s}=200$ GeV. New results are also presented from the HERMES experiment at DESY ({\it I. Gregor}) on the study of transverse single-spin asymmetries in semi-inclusive pion production in DIS. This allows the determination of the proton's ``transversity'' distribution, which represents the degree to which the quarks are polarised along the spin of a proton polarised transversely to the virtual photon. \section{Soft particle production in hard collisions} The classical applications of pQCD concern observables for hard processes where the hadrons are either summed over or collected into jets, so that fixed order perturbation theory can be applied, as in the previous section.
A further development, surprisingly successful, is the application of pQCD to observables calculated directly from the individual momenta of hadrons in the final state; in general this involves a resummation of the perturbation series. One has to include some assumptions on the transition from partons to hadrons and eventually on some non-perturbative aspects of colour confinement. There are final state parton observables, like event shapes, which are infrared and collinear safe, i.e. their values do not change if a collinear or soft gluon is added. They are less sensitive to soft hadronization effects. More sensitive are multiplicity observables, such as particle flows inside or between jets, which are not infrared safe. These observables are a testing ground for ``parton hadron duality'' ideas (review by {\it Yu. Dokshitzer}). \subsection{Infrared and collinear safe observables} {\it Event shapes (Yu. Dokshitzer)}. These observables describe global properties of hadronic final states; in $e^+e^-$ annihilation, for example, one defines ``thrust'', ``jet mass'', ``broadening'' and others. The analysis of these observables in perturbation theory (an asymptotic expansion) leads to a description which combines perturbative and non-perturbative aspects, the latter represented by a power correction $\propto 1/Q$ \cite{dwm}. This term involves an integral over gluon emissions at small transverse momenta $k_T$, where the coupling constant $\alpha_s(k_T^2)$ is ill defined. It is assumed that this integral is finite and, as the coupling itself, universal for the different observables. One therefore introduces the parameter $ \alpha_0=\frac{1}{\mu_I}\int_0^{\mu_I} dk_T \alpha_s(k_T^2) $ at the matching scale $\mu_I$ to describe the influence from the soft region \cite{alpha0}. This result, strictly obtained for partons, is then applied to the experimental hadronic observables assuming a duality between both descriptions. The calculation has been extended to the differential distributions of shape observables, where the non-perturbative effects can shift or squeeze the perturbative spectra by an amount of order $1/Q$. By now, the fits to event shapes in $e^+e^-$ with two parameters provide a competitive determination of $\alpha_s$ with an error of $\sim 8\%$, and the universality of the non-perturbative parameter $\alpha_0\sim 0.5$ at $\mu_I=2$ GeV is confirmed within $\sim 15\%$. {\it Angularities (G. Sterman)}: A new class of event shapes allows specific tests of the non-perturbative corrections. From the angles $\theta_i$ of the particles to the thrust axis and the energies $E_i$ one constructs the quantity \cite{bks} \begin{equation} \tau_a=\frac{1}{Q} \sum_i E_i(\sin \theta_i)^a(1-|\cos\theta_i|)^{1-a} \end{equation} which interpolates between thrust ($a=0$) and broadening ($a=1$). Again, one can separate a contribution from soft gluon emission, which is then represented by a non-perturbative ``shape function'' $f_{a,NP}$. It represents corrections of all higher orders in $\Lambda/Q$, which should be more appropriate near the collinear limit. This is a generalization of the correction $\alpha_0/Q$ above, which corresponds to a shift of the distribution. Once determined at a particular energy, one obtains predictions for other energies.
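As a consistency check of the two limits just quoted (our own one-line evaluation, for massless particles with $Q=\sum_i E_i$), \begin{align*} a=0:\quad \tau_0 &= \frac{1}{Q}\sum_i E_i\,(1-|\cos\theta_i|) = 1-\frac{1}{Q}\sum_i |p_{i,\parallel}| = 1-T,\\ a=1:\quad \tau_1 &= \frac{1}{Q}\sum_i E_i\sin\theta_i = \frac{1}{Q}\sum_i |p_{i,\perp}|, \end{align*} i.e. one minus the thrust and the total broadening, respectively, where $p_{i,\parallel}$ and $p_{i,\perp}$ denote the momentum components along and transverse to the thrust axis.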
{\it Interjet radiation: non-global logs}. In DIS and hadron-hadron collisions one is led to consider gluon radiation in a part of the phase space excluding a region around the beam direction, so energy flows or event shapes are non-global. Such observables receive contributions from multiple soft emissions, which lead to ``non-global logs'', single-logarithmically enhanced contributions \cite{ds}. The difficulty is that the number of jets is not fixed. This problem can be tamed by constructing a correlation of the energy flow with an event shape which fixes the number of jets \cite{bks,dm}. {\it Automated resummation (G. Zanderighi)}: In order to facilitate the calculation of new shape observables, in particular for $p\bar p$ and DIS, a program (CAESAR \cite{caesar}) has been developed which, for a class of observables, yields resummed results at NLL order by combining analytical and numerical methods. The limitations concern certain properties in the infrared and collinear limit of the emission (``recursively IRC-safe'') and the dependence on transverse momentum (``continuously global''). Old results have been reproduced and new observables derived, for example the distribution of the global transverse thrust $ T_\perp = \frac{1}{E_T}\, \max_{\vec{n}_T} \sum_i|\vec{p}_{t,i}\cdot\vec{n}_T| $, constructed from the transverse momenta with respect to the beam axis. Such calculations open up the possibility to considerably extend the kinematic range of these QCD studies towards the higher energies of the TEVATRON and the LHC. \subsection{Multiplicities, particle flows} These observables are not infrared safe: the emission of a soft gluon increases the multiplicity by one. Finite perturbative results for the parton cascade can be obtained by introducing a cut-off $k_T\geq Q_0$. For $Q_0\gg \Lambda$ the partons represent jets and $Q_0$ can be viewed as the jet resolution in the sense of the ``$k_T$-algorithm''; one can also take a small $Q_0 \gtrsim \Lambda$ and compare the resulting cascade directly with the hadronic final state in the sense of a duality picture (``Local Parton Hadron Duality'' \cite{dkmtbook}); then $Q_0$ is a non-perturbative parameter. {\it Multiplicities in quark and gluon jets at LEP (K. Hamacher) and TEVATRON (A. Pronko)}: At LEP the multiplicity of gluon jets is determined from 3-jet events after subtraction of 2-jet events at a reduced scale (DELPHI) or from 3-jet events using a boost algorithm (OPAL \cite{opalmult}). The quark jet multiplicity is found directly from the total $e^+e^-$ multiplicity. Theoretical results are obtained from resummed perturbation theory. One approach is based on coupled evolution equations of quark and gluon jets in the Modified Leading Logarithmic Approximation (MLLA) \cite{dkmtbook}, which fully includes the $\sqrt{\alpha_s}$-corrections up to NLL order. These calculations reproduce the multiplicity rise with energy. The ratio $r=N_g/N_q$ receives large corrections beyond MLLA and is reduced from the asymptotic value $r=C_A/C_F=9/4$ to $r=1.7$ in 3NLLO \cite{dremin} and to $r=1.5$, observed at LEP, in the numerical solution \cite{lo}, where the only parameters $\Lambda,Q_0$ are fitted to the total $e^+e^-$ multiplicity. Another calculation is based on the colour dipole model, which treats the evolution of dipoles in NLL approximation and includes recoil effects \cite{dipole}. It describes the data well using an additional non-perturbative parameter. The CDF Collaboration has separated quark and gluon jets by analysing di-jet and $\gamma$ + jet events with known jet compositions. The multiplicity data are found to be generally well consistent with $e^+e^-$ results. The multiplicities reach the higher energies $Q\sim 300$ GeV, where they also follow the 3NLLO expectations.
{\it Particles with low momenta in jets (A. Pronko)}, measured by CDF, show the so-called hump-backed plateau in the variable $\xi=\log(1/x)$ \cite{dkmtbook}, with a suppression of the low-energy, large-$\xi$ particles because of soft gluon coherence. The ratio $r(\xi)$ of these spectra for gluon over quark jets approaches $r(\xi)\sim 1.8\pm 0.2$ for $\xi>3$, again in good agreement with OPAL results. Although larger than the full-jet result ($r\sim 1.5$), it is still below $r(\xi)=C_A/C_F$, expected in \cite{klo} for this limit from the dominance of the primary gluon emission. This discrepancy is likely due to the difficulty of obtaining ``pure'' gluon jets; rather, soft particles are emitted from all participating jets of the event. This problem is avoided in {\it Soft particle emission in 3-jet events in $e^+e^-$ (Hamacher)}. The particle multiplicity $N_3$ in a cone perpendicular to the production plane is studied as a function of the inter-jet angles $\Theta_{ij}$. The gluon radiation into this cone, coherently emitted from the $q\bar qg$ ``antenna'' and normalized by the corresponding multiplicity $N_2$ in 2-jet events, is given by the simple expression \cite{klo} \begin{equation} \frac{N_3}{N_2} =\frac{C_A}{C_F}r_t = \frac{1}{4} \frac{C_A}{C_F}\left[(1\!-\!\cos\Theta_{qg})+ (1\!-\!\cos\Theta_{\bar q g})-\frac{1}{N_C^2}(1\!-\!\cos\Theta_{q\bar q})\right] \label{r32} \end{equation} The first two leading terms represent the dipoles along the $qg$ and $\bar q g$ directions, in close analogy to QED electric dipoles, except for the colour factors. The formula interpolates, for aligned partons, between a colour triplet antenna ($q$ against $qg$) and a colour octet antenna ($q\bar q$ against $g$) with an intensity higher by $C_A/C_F$.
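To make the two limiting cases explicit (our own evaluation of Eq.~(\ref{r32}) with $N_C=3$, $C_A/C_F=9/4$): for a gluon collinear with the quark, $\Theta_{qg}=0$, $\Theta_{\bar qg}=\Theta_{q\bar q}=\pi$, one finds \begin{equation*} r_t=\frac{1}{4}\left[0+2-\frac{2}{N_C^2}\right]=\frac{4}{9}, \qquad \frac{N_3}{N_2}=\frac{C_A}{C_F}\cdot\frac{4}{9}=1, \end{equation*} i.e. the radiation of a colour triplet antenna, identical to that of a 2-jet event; for a gluon recoiling against a collinear $q\bar q$ pair, $\Theta_{qg}=\Theta_{\bar qg}=\pi$, $\Theta_{q\bar q}=0$, one finds $r_t=1$ and $N_3/N_2=C_A/C_F$, the enhanced radiation of a colour octet antenna.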
\begin{figure}[bt] \unitlength1cm \begin{center} \mbox{\epsfig{file=cone_line.eps,width=5cm}} \end{center} \caption{\label{f:cone} Multiplicity ratio $N_3/N_2$ in cones of $30^{\circ}$ opening angle as a function of $r_t$ in Eq.{\protect (\ref{r32})}: a) for different inter-jet angles $\theta_3$; b) averaged over $\theta_3$; the dashed line is the expectation Eq.{\protect (\ref{r32})} with slope $C_A/C_F$, the full line is a fit (DELPHI \protect\cite{delphiperp}). } \end{figure} DELPHI has measured the ratio $N_3/N_2$ against the variable $r_t$ (see Fig.~\ref{f:cone}), which shows the scaling behaviour in the angles implicit in (\ref{r32}) and the linear dependence with slope $2.211\pm0.014\ (stat.) \pm 0.053\ (syst.)$, well consistent with the expected slope $C_A/C_F$ \cite{delphiperp}. The data are sensitive to the small term $\propto 1/N_C^2$, which corresponds to a negative interference not accessible through purely probabilistic jet algorithms. It is remarkable that the perturbative calculations describe the production of particles with such low momenta (a few hundred MeV above threshold) at low multiplicity. Similar transverse effects are expected for $\gamma p$ collisions ($q\bar q$ antenna in direct, $gg$ in resolved processes) and $p\bar p$ collisions (low transverse radiation in direct $\gamma$ or $W$ production, high radiation in gluon jet production). {\it Underlying event in $p\bar p$ collisions (R.D. Field)}: Soft particle production at central rapidity, perpendicular to the direction of the high-$p_T$ trigger jet, increases rapidly with the jet transverse momentum from low $p_T$ (like minimum bias) up to about 7 GeV and saturates beyond. The same is observed for the transverse momentum sum in back-to-back jet events. If one triggers in addition on a particle in the transverse direction, one observes the ``birth'' of a third jet in the same direction and possibly even of a fourth jet in the opposite direction. This conclusion is derived from the good agreement with the PYTHIA MC, which includes multiple parton collisions, whereas HERWIG, without this addition, agrees less well. It will be interesting in future studies to clarify the role of multi-parton scattering and also to investigate the possible reduction of the transverse particle production in direct $\gamma$ and $W$ production processes, as expected for the perturbative gluon radiation mechanism emphasized in the previous paragraph for $e^+e^-$ collisions. \section{Small $x$ structure functions and diffraction} There is an old expectation for the parton density at small $x$ to ``saturate'' \cite{glr}, i.e. to become so high that a limiting behaviour related to the finite proton (or nuclear) size is reached. Ultimately, one expects a transition into a strong coupling regime not accessible to a perturbative treatment. With the new data from HERA and RHIC this debate enters a new round. The ``Pomeron'', which describes diffractive processes with vacuum quantum number exchange, is treated as a composite object with a partonic sub-structure. \subsection{Deep Inelastic Scattering and parton saturation} In the standard perturbative treatment of DIS (DGLAP 1972-1977) the photon interacts with the proton through the exchange of a single parton ladder, which leads to a linear evolution equation for the parton densities in $Q^2$. With decreasing Bjorken $x$ there is the possibility of multiple interactions of photon and proton through the exchange of two or more parton ladders, as described by GLR \cite{glr} (1983). This happens for a sufficiently large parton overlap probability $W(x,Q^2)$, given by the product of the parton-parton cross section at scale $Q^2$, $\hat\sigma\sim \frac{\alpha_s(Q^2)}{Q^2}$, and the transverse density of partons in the proton, $\frac{F(x,Q^2)}{\pi R^2}$, for proton radius $R$ and parton density $F(x,Q^2)\sim xG(x,Q^2)$. While for $W\ll 1$ the DGLAP approximation is appropriate, for $W\sim \alpha_s$ the non-linear recombination processes set in and, ultimately, at $W=1$ saturation is reached, i.e. a full overlap of partons in the proton, which is beyond perturbation theory. This limit defines the characteristic saturation scale $Q_s(x)$ from \begin{equation} \frac{xG(x,Q^2_s)}{\pi R^2}\sim \frac{Q_s(x)^2}{\alpha_s(Q_s^2)} \label{qsat} \end{equation} For small $\alpha_s$ this corresponds to a state of high gluon density. The theoretical analysis starts from the dipole picture \cite{disdipole}, formulated in space-time, from which the $\gamma^*p$ total cross section can be computed as \begin{equation} \sigma^{\gamma^*p}_{T,L}(x,Q^2)= \int d^2r\int dz \hat\sigma_{\rm dipole} (\vec r,x)|\psi^\gamma_{T,L} (\vec r,z,Q^2)|^2 \label{gbwmodel} \end{equation} where $\psi^\gamma$ is the wave function for the virtual photon splitting into a $q\bar q$ dipole, $z$ the fraction of the photon's longitudinal momentum carried by the quark and $r$ the transverse size of the dipole. Different approaches are used for the dipole cross section. The scattering process can be studied in the proton rest frame, and one considers higher order corrections to the $q\bar q$ wave function. A non-linear evolution equation for the dipole-proton scattering amplitude has been given by Balitsky and by Kovchegov \cite{bk}.
A complementary approach treats the gluons at small $x$ in an effective field theory, in a frame with a low momentum photon and an energetic proton where the partonic motions in the proton are largely frozen. The state of these high density gluons is also called the ``Colour Glass Condensate'' (CGC) \cite{cgc}. This new kinematic regime of perturbative QCD at high density, at the border to the non-perturbative confinement regime, is under intense investigation; it is also important for heavy ion collisions, where the gluonic state at high density appears initially. The status and applications of the theory of saturation and the CGC are reviewed by {\it J. Bartels} and {\it E.G. Ferreiro}. \subsection{Evidence for saturation at HERA?} According to the above outline one may reach the saturation region in DIS either by decreasing $x$ at fixed $Q^2$, i.e. by increasing the parton density, or by decreasing $Q^2$ at fixed $x$, i.e. by increasing the parton transverse ``size''. In the first case one observes that the structure function $F_2$ can be well fitted by NLO QCD in the DGLAP approach for $Q^2\gtrsim 2$ GeV$^2$, where for $x<0.01$ one finds an $x$-independent slope $\lambda(Q^2)=-(\partial \ln F_2/\partial\ln x)_{Q^2}$ which depends linearly on $Q^2$. For smaller $Q^2$ the applicability of the perturbative calculations may be questioned, so from this point of view there is no direct evidence for saturation in the perturbative regime. On the other hand, a new regime appears in the second case, when $Q^2$ is decreased at fixed $x$. In this case it is observed that the $\lambda$ slope saturates for $Q^2\lesssim 1$ GeV$^2$ ({\it talk by E. Elsen}). This region of low $Q^2$ is included in the models of saturation which combine perturbative and non-perturbative aspects. Some essential features of this approach are contained already in the model by Golec-Biernat and W\"usthoff \cite{gbw}. Here the DIS cross section is obtained in the dipole picture (\ref{gbwmodel}) with a simple ansatz for the dipole cross section, $\sigma_{\rm dipole}(r^2 Q_s^2(x))$, which depends only on this particular combination of $r$ and $x$, with $Q_s^2(x)\sim x^{-\lambda}$ the saturation scale. The cross section is calculated perturbatively for small distances ($\sigma_{\rm dipole}\sim r^2$), whereas at large distances a simple Gaussian form has been adopted with saturation built in ($\sigma_{\rm dipole}\to \sigma_0$). In this way the full $Q^2$ range becomes accessible in the model. An important prediction of the model is geometrical scaling \cite{geomscal}, which states that the cross section $\sigma^{\gamma^*p}_{tot}(x,Q^2)=f(\tau)$ depends only on the quantity $\tau=Q^2/Q^2_s(x)$. This scaling property is well satisfied.\footnote{An alternative scaling law has been derived within the framework of a generalized vector-dominance model \protect\cite{schild}.}
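For concreteness, the explicit form adopted in Ref.~\cite{gbw} is \begin{equation*} \hat\sigma_{\rm dipole}(\vec r,x)=\sigma_0\left[1-\exp\left(-\frac{r^2 Q_s^2(x)}{4}\right)\right], \qquad Q_s^2(x)=Q_0^2\left(\frac{x_0}{x}\right)^{\lambda}, \end{equation*} with $Q_0=1$ GeV: it reduces to the perturbative behaviour $\hat\sigma_{\rm dipole}\simeq \sigma_0\, r^2 Q_s^2(x)/4\propto r^2$ for $rQ_s\ll 1$, saturates at $\sigma_0$ for $rQ_s\gg 1$, and depends on $r$ and $x$ only through the combination $r^2 Q_s^2(x)$, so that geometrical scaling is built in.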
Another successful prediction of the model is the near constancy of the ratio of the diffractive to the full cross section, $F_2^{diff}/F_2^{tot}$, under variation of the total hadronic mass $W$. Recent studies have removed some shortcomings of the model, especially the phenomenological parametrizations for large dipole sizes $r$. The large $Q^2$ behaviour can be recovered by a smooth matching to the DGLAP evolution \cite{bgk}. The solutions of the BK equation determine the behaviour for large $r$, and together with the DGLAP behaviour at small $r$ a good description of the HERA data is obtained \cite{gllm, iim}: the geometrical scaling in the saturation domain, the transition between the hard and soft photon regions for the slope $\lambda$, and the $x$-dependence of the saturation scale $Q_s^2$. These successes in the description of data involving the scale $Q_s$ can be taken as indirect evidence for the onset of saturation. \subsection{Hard diffraction in DIS} Events with a large rapidity gap adjacent to the proton have been observed in NC events with high $Q^2$ at HERA, which are considered as inelastic diffraction of the photon ({\it K. Borras; C. Kiesling}). Similar events with the charged current have now been reported as well. Assuming Regge factorization for small momentum transfer $t$ between the incoming and outgoing protons, this process can be described by Pomeron exchange, where the virtual photon scatters off the Pomeron which is emitted by the incoming proton with momentum fraction $x_{I\negmedspace P}=M_X^2/s$, where $M_X$ denotes the hadronic mass of the $\gamma^* I\negmedspace P$ system and $\sqrt{s}$ the cms energy of the $\gamma^* p$ system. In analogy to the standard parton model for $ep$ DIS one can introduce Parton Distribution Functions for the Pomeron \cite{ischlein} to describe $e I\negmedspace P$ DIS. In this ``Diffractive DIS'' it is possible to derive QCD factorization of the photoabsorption cross section $\gamma^*p\to pX$ at fixed $x_{I\negmedspace P}$ and momentum transfer $t$ \cite{collins} \begin{equation} \frac{d^2\sigma^{DDIS}}{dx_{I\negmedspace P} dt}\ =\ \sum_q \int_x^{x_{I\negmedspace P}} d\xi f_q^{D}(x_{I\negmedspace P},t;\xi,Q^2) \sigma^{\gamma^*q}(x,Q^2,\xi) \label{Pfact} \end{equation} which further simplifies according to Regge factorization $f_q^{D}(x_{I\negmedspace P},t; x,Q^2)=f_{I\negmedspace P/p}(x_{I\negmedspace P},t)f_{q/I\negmedspace P}(\beta=x/x_{I\negmedspace P},Q^2)$. These PDF's are then studied as functions of $\beta=x_{q/I\negmedspace P}$ or $\beta=x_{g/I\negmedspace P}$, where Bjorken $x=\beta x_{I\negmedspace P}$. Both factorization properties are found to be satisfied by the data, but it is necessary to include Reggeon exchange in addition to Pomeron exchange. The very precise data from the full HERA-I analysis show the rise of the reduced cross section $\hat\sigma^{\gamma^*I\negmedspace P}(\beta,Q^2)$ (suitably normalized to correspond to $F_2^{ep}$) with $Q^2$ for $\beta\lesssim 0.7$ and the decrease for higher $\beta$. The change in slope occurs at a much higher value than in the case of $\gamma^*p$ scattering. The striking pattern of scaling violation is explained in a NLO DGLAP fit (parameters $\Lambda_{\overline{MS}}$, initial PDF's at $Q_0=3$ GeV) by the large gluon fraction in the Pomeron, $f_g=75\pm15\%$. QCD factorization is also verified by the observation of diffractive di-jet and charm jet production in agreement with NLO QCD computations using the Pomeron PDF's so obtained. Theoretical models for the diffractive structure functions ({\it J. Bartels}) have been developed within the dipole picture with saturation, such as the models by GBW \cite{gbw}, already mentioned, and by BEKW \cite{bekw}, which take the higher order QCD processes ($\gamma^*\to q\bar q g$) into account and provide a good description of $F_2^{DDIS}(\beta,Q^2)$. The structure of simple Feynman diagrams for DDIS has been discussed by {\it S. Brodsky}.
Explicit calculations in Feynman gauge for $\gamma^*q\to s\bar s q$ show that the rescattering of the struck $s$ quark, involving nearly on-shell intermediate states, leads to an imaginary amplitude and an effective Pomeron exchange in the production of the colour singlet $s\bar s$ state. This process survives in the Bjorken limit. The same QCD final state interaction can also produce single spin asymmetries in semi-inclusive DIS \cite{brodsky}. \subsection{Diffraction in $p\bar p$ collisions} Results from the TEVATRON on multi-gap events and hard diffractive processes with jets, $W,Z,J/\psi,B$ have been discussed; exclusive double Pomeron $\chi_c,\ \gamma\gamma$ and di-jet production are of interest as benchmarks for exclusive Higgs production at the LHC; the analysis of RUN II data is in progress (reports by {\it K. Borras, M. Convey, K. Goulianos}). The factorization of Pomeron processes has been established in DDIS, but the arguments cannot be taken over to hadronic processes. In fact, the observed cross sections for diffractive di-jet production at the TEVATRON are suppressed by an order of magnitude with respect to the expectation from the Pomeron PDF's from HERA assuming factorization \cite{cdffacbd}. \begin{figure*} \begin{center} \epsfig{file=fig_nlo_01.eps,width=7cm} \end{center} \caption{\label{f:diff} NLO cross sections for diffractive dijet photoproduction as functions of the jet energy fraction $x_\gamma^{\rm jets}$, compared to preliminary H1 data. The predictions including absorption (R=0.34 \protect\cite{kkmr}) agree with the data, those without absorption (R=1) do not (from Ref. \protect\cite{kk}). \vspace{-0.5cm} } \end{figure*} Such a suppression has been expected from the reinteraction of spectator partons, which yields a reduced gap survival probability, see for example \cite{bjgap}. This idea is supported by the observation by CDF that the occurrence of an additional gap is unsuppressed; by an appropriate combination of multi-gap cross sections in soft diffraction the survival probability has been determined as $S=0.23\pm 0.07$ at $\sqrt{s}=1800$ GeV. Taking this effect into account, the factorization properties and the agreement with extrapolations from HERA can be reestablished. A general systematics of multi-gap events of various kinds with characteristic factorization properties, also in hard processes, has been proposed ({\it K. Goulianos}) in the framework of a parton model which includes empirical rules like ``$1/M^2$ scaling'' and ``Pomeron flux renormalization''. The theoretical approach by KKMR \cite{kkmr} is based on hard QCD scattering processes, but includes initial state interactions through multiple Pomeron exchanges, which are derived in a 2-channel eikonal model. This approach explains quantitatively the phenomena of factorization breakdown and the rates of multiple gap events. An interesting prediction concerns the breakdown of factorization in di-jet photoproduction at HERA \cite{kkmr}, with an additional suppression $S=0.34$. This effect has recently been verified in a NLO QCD calculation \cite{kk} in comparison with H1 data \cite{H1diff} (see Fig. \ref{f:diff}). \section{Heavy ion collisions and evidence for the Quark Gluon Plasma} Here the transition from a parton to a hadron ensemble is studied in a very high particle multiplicity environment, which suggests a thermodynamic treatment. In lattice QCD one expects, with increasing temperature $T$ or energy density $\epsilon$, a phase transition from confined to deconfined matter, i.e. from a hadron gas to a Quark Gluon Plasma.
The ratio $\epsilon/T^4$, a measure of the number of degrees of freedom, shows a characteristic rise with $T$ near $T_0\sim 170$ MeV, $\epsilon_0\sim 0.7$ GeV/fm$^3$ (depending also on the number of flavours), over a range of about $\Delta T \sim 80$ MeV and then, above $\epsilon \sim 2$ GeV/fm$^3$, becomes nearly $T$-independent; in this region there is still a large ($\sim 30$\%) deviation from the asymptotic Stefan-Boltzmann limit for the ideal quark-gluon gas \cite{karsch}. Another prediction concerns the dependence of the critical temperature $T_c$ on the baryochemical potential $\mu$, which can be tested through the hadron composition of the final state. Previous research at the SPS has identified various signatures expected for the transition to a QGP, such as strangeness excess, $J/\psi$ suppression and universal chemical freeze out; the energy density is found at $\epsilon\sim 2-4$ GeV/fm$^3$, just above the critical value. Now, with RHIC, a new regime with a much higher initial density of $\sim 15$ GeV/fm$^3$ is reached, far above the critical density. This leads to new signatures: a strong asymmetric flow of particles reflecting the initial spatial anisotropy, and the strong absorption of high $p_T$ jets in the nucleus (``jet quenching''). It is particularly impressive that, with the new RHIC data, more specific QCD tests are now becoming feasible. A general outline of the RHIC results and their interpretation is given by {\it R. Seto}. \subsection{Onset of deconfinement in the SPS energy range} Results on $AA$ collisions from an energy scan over the lower SPS energies 20-80 AGeV have been presented by {\it P. Seyboth}. After the observation of signatures for QGP formation at the top SPS energy, the aim was to search for an energy threshold of such signatures. A remarkable effect is seen in the energy dependence of the freeze out temperature, as determined from the slope of the transverse mass of kaons: at low energies there is a strong rise of the temperature, followed by a saturation over the SPS energy range and a continued rise at RHIC energies. This is the typical behaviour expected for a mixed phase of QGP and hadron gas at constant temperature. Such a behaviour at energy densities $\epsilon\gtrsim 2$ GeV/fm$^3$ in the SPS range matches the values expected from lattice calculations. A threshold effect is also seen for strangeness production, especially in the $K^+/\pi^+$ ratio, which is a well established signature for QGP formation. Another test of lattice QCD calculations concerns the phase diagram in the variables $T$ vs. baryochemical potential $\mu_B$. The analysis of the particle species abundances in statistical models yields values for $T$ and $\mu_B$ at freeze out which converge for low $\mu_B$ (high energies) towards the QCD expectation \cite{fodor}. \subsection{Space-time evolution of the collision process at RHIC} {\it Initial stage (E.G. Ferreiro, K. Tuchin)}. The initial conditions can be introduced through the gluon density in the nucleus at the saturation scale (see Eq. (\ref{qsat})) as $xG_A(x,Q_s^2)\sim\frac{\pi R_A^2Q_s^2(x,A)}{\alpha_s(Q_s^2)}$ (``Colour Glass Condensate''), where the $A$ dependence is inferred from $G_A\sim A$, $\pi R_A^2\sim A^{\frac{2}{3}}$ and $Q_s^2\sim A^{\frac{1}{3}}$.
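Spelled out (a one-line rearrangement of Eq.~(\ref{qsat})), up to the slowly varying $\alpha_s$ factor, \begin{equation*} Q_s^2(x,A)\sim \alpha_s(Q_s^2)\,\frac{xG_A(x,Q_s^2)}{\pi R_A^2} \sim \frac{A}{A^{2/3}}=A^{\frac{1}{3}}, \end{equation*} so the saturation scale grows with the mass number and saturation sets in earlier for heavy nuclei.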
Assuming proportionality of the particle multiplicity and the initial gluon rapidity density, $dN/dy \sim xG(x,Q_s^2)$, one finds for an $AA$ collision with $N_{part}$ participating nucleons a very slow increase of the central hadron multiplicity, $(1/N_{part}) dN/dy\sim 1/\alpha_s(Q^2_s)\sim \ln N_{part}$, with energy and centrality \cite{kn}, a very successful prediction; other predictions from parton saturation follow for the transverse momentum distributions (review \cite{levin}). {\it Early interactions: jet production and jet quenching (Miller, Vitev)}. These new phenomena are related to the hard parton-parton scattering in nuclear collisions and the subsequent absorption of one parton in the dense medium. The absorption is mainly due to induced gluon radiative energy loss in multiple scattering inside the nucleus and is proportional to the plasma density \cite{quench}. The most striking evidence for jet quenching comes from the study of azimuthal angle correlations of particles associated with a high $p_T$ trigger particle \cite{starquench}: whereas in both $pp$ and $dAu$ scattering one observes a jet in the direction opposite to the trigger jet, this away-side jet is fully suppressed in central $AuAu$ collisions. This is naturally explained by the hard collision taking place near the edge of the nucleus, where one scattered parton leaves the nucleus undisturbed whereas the second parton has to move through the bulk of the nucleus. This measurement demonstrates the big difference between the cold nuclear matter traversed in $dA$ collisions and the matter created in a central $AuAu$ collision. Theoretical calculations lead to an estimate of the energy density of $\epsilon \sim 15$ GeV/fm$^3$, corresponding to about 100 times nuclear density. Additional studies confirm the absorption strength as a function of the nuclear thickness in non-central collisions, as well as the reappearance of the lost energy of the primary parton in the soft particles in the jet direction. While these observations already provide a strong argument in favour of the presence of a QGP, there are further crucial tests. QCD predicts the absorption of a gluon to be stronger than that of a quark by the factor $C_A/C_F$. Furthermore, heavy quarks are less absorbed, as the small angle radiation is cut off below $\Theta_c=m_Q/E$ (``dead cone effect'') \cite{dokHQ}. This effect is being searched for in nuclear D meson production ({\it Z. Xu}). {\it Hydrodynamic evolution, flow phenomena (Y. Hama, T. Hirano, H. Long, J. Velkowska and S. Voloshin)}. The hydrodynamic description of the particle production process includes the initial condition (energy density, initial flow), the equation of state (EoS) for the QGP with a phase transition built in (parameter: latent heat) and the freeze out mechanism for hadron production. The observations related to hydrodynamics are:\\ 1. Violation of $m_T$-scaling in $AA$ collisions; this scaling denotes a universal slope $\beta$ in $dN/dp_T^2\sim e^{-\beta m_T}$ for particles of different mass and works well in $pp$ scattering. In hydrodynamic flow the particles acquire similar velocities and therefore protons obtain higher momenta than pions.\\ 2.
An important consequence of hydrodynamics is the appearance of an asymmetric, especially elliptic, flow: in the non-central collision of two nuclei there is an almond-shaped overlap region which generates an asymmetric pressure gradient with maximum in the impact direction; this results in a corresponding asymmetry in energy and particle flow $\frac{dN}{d\phi}\sim 1+2v_2 \cos2\phi+ \ldots$ The elliptic flow $v_2$ increases for particles of higher $p_T$, with a delay for heavier particles.\\ 3. The Equation of State (EoS) can be investigated, especially the properties of the phase transition (latent heat $\sim 800$ MeV/fm$^3$), which provides another QCD test.\\ 4. An apparent problem for the hydrodynamic description arises with the Bose-Einstein correlations between identical particles, which depend on the evolution of the space-time volume containing the matter. Whereas {\it T. Hirano} notes a failure of this description, it has been pointed out by {\it T. Csorg\"o} in the discussion that the disagreement can be avoided by a proper choice of the initial transverse flow to explain the final ``Hubble flow'' (Buda-Lund model \cite{buda-l}). \begin{figure*} \centering \epsfig{file=Ks_Lam_fig1_color_2.eps,width=0.49\textwidth} \epsfig{file=Ks_Lam_fig4_color_2.eps,width=0.49\textwidth} \caption{\label{fig:val} Transverse momentum dependence of elliptic flow $v_2$ measured by STAR \protect\cite{starval} a) before and b) after rescaling by the number of valence quarks $n$. \vspace{-0.5cm} } \end{figure*} {\it Coalescence, quark recombination (R. Hwa, J. Velkovska, S. Voloshin)}. Another striking new phenomenon observed at RHIC is the grouping of spectra according to the number of constituent quarks. This is observed in the $p_T$ dependence of the elliptic flow parameter $v_2$ ($p_T<6$ GeV), where $\pi,K$ and $p,\Lambda,\Xi$ fall into separate bands but show a uniform dependence if rescaled according to the number $n_V$ of valence quarks \cite{starval,phenixval} \begin{equation} v_2/n_V=f(p_T/n_V). \label{valence} \end{equation} As an example the $K,\Lambda$ spectra are shown in Fig. \ref{fig:val}. Another observation concerns the ratios $R_{CP}$ of particle spectra for central and peripheral collisions, where mesons ($\phi,K^0, K^\pm$) and baryons ($\Omega,\Xi,\Lambda +$ antiparticles) fall into separate bands in the region $2\lesssim p_T\lesssim 6$ GeV. This confirms the idea of parton coalescence \cite{mv}. There are several other ``anomalies'' in nuclear production, for example the large ratio $p/\pi^+$ for large $p_T>2$ GeV, which can be explained within a recombination mechanism for thermal and shower partons ({\it R. Hwa}). The behaviour (\ref{valence}) suggests that before hadron formation there is a flow of constituent quarks which then recombine into the observed hadrons. This implies a strong change of the $q/g$ composition of the plasma during the evolution, as illustrated in Fig. \ref{evol}: initially the primary collision produces mainly gluons at high temperature; during the expansion $q\bar q$ pairs are produced, but in approaching the critical temperature the gluons are absorbed by the quarks (``constituent quarks''), which then by coalescence form the final state hadrons. The strong ordering according to valence content is against expectations from a hadronic resonance gas, which would group particles according to mass. These observations are therefore another strong argument for the presence of a QGP (although at the end without gluons). 
Looked at in reverse order, the evolution depicted in Fig. \ref{evol} is quite natural: under increasing pressure the hadrons dissolve at first into constituent quarks (as in the additive quark model), but under further compression gluons are easily freed from the constituent quarks and yield a genuine QGP. \begin{figure}[t!] \begin{center} \vspace{-3.1cm} \includegraphics[width=10cm]{p-001078-Model.EPS} \end{center} \vspace{-7.5cm} \caption{ Evolution of the quark gluon plasma from a gluon dominated phase of high temperature and density towards a quark dominated phase near the critical temperature $T_c$, and finally the transition from constituent quarks to hadrons. } \label{evol} \end{figure} {\it Strongly interacting QGP}. There is another interesting message in the $p_T$-dependence of the elliptic flow parameter $v_2$. Using a calculation for a QGP within transport theory \cite{gmtransport}, it is found that the gluon-gluon cross sections of a few mb expected in pQCD would give negligible flow effects; only cross sections of $\gtrsim 40$ mb would yield the observed asymmetric flow. Therefore, the data suggest a strongly interacting QGP, and this can be related to the large deviation from the Stefan-Boltzmann limit of the ideal gas found in the lattice QCD calculations \cite{shuryak}. \section{Other presentations} Finally, we list a few other topics which have been discussed.\\ {\it 1. Hadronic Phenomena}. Particle correlations are often not accessible to a QCD description, especially the Bose-Einstein correlations, which turn out to be particularly important in the discussion of Heavy Ion Collisions. There are also discussions of critical phenomena such as percolation and clustering in a string or hadron model framework.\\ {\it 2. Hadron Spectroscopy}. This is another active field with surprising results on unexpected hadronic particles, including the still controversial ``pentaquarks'', especially $\Theta^+(1540)$, not observed at the TEVATRON, and the new charmonium state with decay $X(3872)\to J/\psi \pi^+\pi^-$ reported here by CDF, which stimulates discussions of nonperturbative aspects of QCD.\\ {\it 3. Astroparticle Physics}. In this field we witness a flourishing activity where Multiparticle Production plays an important role, although not as the basic goal of the activity but rather as a tool. Primary cosmic particles of superhigh energies ($10^8$ TeV) are studied. The interpretation of the particle yields, in particular the determination of the primary particle energies, requires a detailed understanding of the propagation and showering of the cosmic rays (elementary particles or nuclei) in various media (air, earth, ice), including the saturation phenomena and the production and decay of heavy quarks ($c,b$ quarks). Questions of the role of particles in theories beyond the standard model are being discussed, as well as the origin of such high energy cosmic rays. \section{Summary} The number of multiparticle production phenomena which can be explained within QCD is steadily increasing.\\ {\it 1. Hard processes:} Calculations for larger kinematical ranges, for observables of higher complexity and with increasing accuracy agree with the data; no definitive failures have been reported this time.\\ {\it 2. Soft particle production:} It follows perturbative QCD expectations surprisingly well, which supports a parton hadron duality picture of hadronization with soft colour confinement.\\ {\it 3. 
Small $x$ and diffraction:} For the high density regime at small $Q^2$ there is indirect evidence for saturation (``Color Glass Condensate''), inferred from the success of saturation models. Intrinsic Pomeron structure is a useful concept in diffractive scattering; the systematics of factorization and its breaking are becoming better understood.\\ {\it 4. Heavy Ions and QGP:} There is an indication of a phase transition with a mixed phase over the SPS energy range. The higher initial pressure and longer evolution time available at RHIC have provided clear evidence for jet quenching and strong elliptic flow, with an initial energy density about 100 times higher than in nuclear matter and an order of magnitude above the critical density. These phenomena are adequately described in terms of a strongly interacting QGP; these observables can be used as new diagnostic tools which allow detailed tests of (perturbative and non-perturbative) QCD predictions: parton type dependence of absorption, Equation of State with latent heat and strong deviation from the ideal gas limit, the phase diagram $T_c$ vs. $\mu_B$, and the initial CGC state. Hadrons are formed by coalescence of constituent quarks, which dominate the QGP in its final stage. \section*{Acknowledgement} I would like to thank Bill Gary and his crew for their engagement in organizing this lively meeting, which managed to bring together and to mix up the different multiparticle communities to the benefit of all of us. I am also grateful for the helpful discussions with participants of the meeting, especially J. Bartels, V. Khoze, C. Kiesling, N. Schmitz and P. Seyboth.
\section{Introduction} With a branching ratio of $32$\% the $\eta\rightarrow 3\ensuremath{\pi^0\,}$ decay is a major decay mode of the $\eta$, despite the fact that it is a G--parity forbidden transition. Neglecting a small electromagnetic contribution (Sutherland's theorem \cite{Sutherland}), this decay is due almost exclusively to the isospin breaking part of QCD: \begin{equation} {\cal L}_{\not\, I} = -\frac{1}{2}\left(m_{u}-m_{d}\right) \left(\bar{u}u-\bar{d}d\right) \end{equation} \noindent and provides a nice way to determine the up-down quark mass difference.\\ Many theoretical studies \cite{BijGa02}, \cite{Hol02} address the charged and neutral decay modes of \ensuremath{\eta\rightarrow3\pi}, and in particular the Dalitz plot parameters of these decays.\\ The Dalitz plot distribution of the $\eta\rightarrow 3\ensuremath{\pi^0\,}$ decay is conventionally described in terms of one kinematical variable: \begin{equation} z = \frac{2}{3} \sum_{i=1}^{3} \left (\frac{3E_{i} - m_{\eta}}{m_{\eta} - 3m_{\ensuremath{\pi^0\,}}} \right )^{2} = \frac{\rho^{2}}{\rho_{MAX}^{2}} \label{eq:zeta} \end{equation} \noindent where $E_{i}$ denotes the energy of the $i$-th pion in the $\eta$ rest frame and $\rho$ is the distance from a point on the Dalitz plot to its center; $\rho_{MAX}$ is the maximum value of $\rho$. For decays into three identical particles, it is possible to use a symmetrical Dalitz plot where the event density is described by a single quadratic slope parameter $\alpha$, which quantifies the deviation from pure phase space: \begin{equation} \vert A_{\eta\to 3\ensuremath{\pi^0\,}}\left(z\right) \vert^{2} \sim 1 + 2\alpha z. \end{equation} \noindent At lowest order, Chiral Perturbation Theory predicts a zero value for $\alpha$, i.e., a uniform event density in the Dalitz plot. A nonzero value is instead expected from dispersive calculations \cite{BaKaWy96}, where the effect of $\pi-\pi$ rescattering is included. Recently, a theoretical calculation \cite{Borasoy}, obtained in the chiral unitary approach based on the Bethe--Salpeter equation, provided the value $\alpha = -0.031$. There are three previous experimental determinations of $\alpha$: the GAMS 2000 group \cite{GAMS} quoted $\alpha = -0.022 \pm 0.023$ based on $5 \times 10^{4}$ events; the Crystal Barrel Collaboration obtained $\alpha = -0.052 \pm 0.020$ from a sample of $10 \times 10^{4}$ events; and finally the Crystal Ball \cite{CrystalBall} result, $\alpha = -0.031 \pm 0.004$, is based on $10^{6}$ events. The latter was the first statistically significant measurement of $\alpha$ and agrees very well with the most recent theoretical result. In this paper we report on a new precise measurement of the Dalitz plot parameter for the $\eta\rightarrow 3\ensuremath{\pi^0\,}$ decay.\\ \section{DA$\Phi$NE and KLOE} \noindent The DA$\Phi$NE e$^+$e$^-$ collider operates at a total energy W = 1020 MeV, the mass of the $\phi$(1020)--meson. Approximately $3\times10^6$ $\phi$--mesons are produced for each pb$^{-1}$ of collected luminosity. Since 2001, KLOE has integrated a total luminosity of about 2.5 fb$^{-1}$. Results presented in this paper are based on data collected in $2001-2002$ only and correspond to about 420~pb$^{-1}$. The KLOE detector consists of a large cylindrical drift chamber, DC, surrounded by a lead/scintillating-fiber electromagnetic calorimeter, EMC. The drift chamber \cite{dc} is 4~m in diameter and 3.3~m long. The momentum resolution is $\sigma(p_{T})/p_{T} \sim 0.4\%$. 
Two-track vertices are reconstructed with a spatial resolution of $\sim$ 3 mm. The calorimeter \cite{emc}, composed of a barrel and two endcaps, covers 98\% of the solid angle. Energy and time resolution are $\sigma(E)/E = 5.7\%/\sqrt{E[{\rm GeV}]}$ and $\sigma(t) = 57 \,{\rm ps}/ \sqrt{E[{\rm GeV}]} \oplus 100 \, {\rm ps}$. A superconducting coil around the detector provides a 0.52~T magnetic field. The KLOE trigger \cite{trg} uses calorimeter and drift chamber information. For the present analysis only the electromagnetic calorimeter (EMC) signals have been used. Two local energy deposits above threshold, $E_{\rm th}>50$ MeV for the barrel and $E_{\rm th}>150$ MeV for the endcaps, are required. \section{ Dalitz plot of $\eta\rightarrow\ensuremath{\pi^0\,}\piz\ensuremath{\pi^0\,}$ decay} At KLOE the $\eta$ meson is produced in the process $\phi\rightarrow\eta\gamma$, where the recoil photon ($E_{\gamma} = 363 \,\textrm{MeV}$) is monochromatic and easily selected. Thus, to select the final state, we require seven prompt clusters in the event. After applying a kinematic fit, which imposes energy-momentum conservation, we look for the best pairing of the photons into $\ensuremath{\pi^0\,}$'s by constructing a pseudo--$\chi^{2}$ variable for each of the $15$ possible pairings:\\ \begin{equation} \chi^{2}_{j} = \sum_{i=1}^{3} \left( \frac{m_{j,\pi^{0}_{i}} - M_{\pi^{0}}} {\sigma_{m_{\pi^{0}}}}\right)^{2} \qquad j = 1,2,\ldots,15. \end{equation} \noindent where \begin{itemize} \item[-] $m_{j,\pi^{0}_{i}}$ is the invariant mass of the $i^{th}$ $\pi^{0}$ for the $j^{th}$ combination; \item[-] $M_{\pi^{0}}$ is the $\pi^{0}$ mass, ($M_{\pi^{0}}$ = 134.98 MeV/c$^{2}$ \cite{PDG}); \item[-] $\sigma_{m_{\pi^{0}}}$ is the resolution on $m_{\pi^{0}}$. \end{itemize} The chosen pairing is the one that minimizes the $\chi^{2}$. A second kinematic fit constraining the $\ensuremath{\pi^0\,}$ mass has been performed, improving the resolution on $z$ by a factor of two. The Monte Carlo (MC) $z$ distributions at generation (pure phase space, see fig.\ref{fig:DalitzPlot}) and after reconstruction (see fig.\ref{fig:DalitzPlot1}) show that resolution effects are not negligible for this analysis. Three samples with different pairing efficiency $\varepsilon$ and purity $P$: \begin{align*} &\textrm{Low Purity} & &P = 75.4 \% & &\varepsilon = 30.3 \%\\ &\textrm{Medium Purity} & &P = 92.0 \% & &\varepsilon = 13.6 \%\\ &\textrm{High Purity} & &P = 97.6 \% & &\varepsilon = 4.3 \%\\ \end{align*} have been analyzed by cutting on the difference between the two lowest values of $\chi^{2}$. \begin{figure}[!h] \begin{center} \epsfig{figure=fig/daliz_gen_NEW.eps,width=.8\linewidth} \caption{MC $z$ distribution according to pure phase space.} \label{fig:DalitzPlot} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \epsfig{figure=fig/daliz_rec_c2.eps,width=.8\linewidth} \caption{MC $z$ distribution after selection and reconstruction.} \label{fig:DalitzPlot1} \end{center} \end{figure} \section{Measurement of the slope parameter $\alpha$} In order to estimate $\alpha$, an unbinned likelihood function is built by convolving the event density with the resolution function and correcting for the probability of wrong photon pairing into $\ensuremath{\pi^0\,}$'s. 
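The following is a minimal sketch of such an unbinned fit (illustrative only, not the analysis code): the signal density $1+2\alpha z$ is normalized on $z\in[0,1]$, and the wrong-pairing component is modelled here, purely for illustration, as flat in $z$; the convolution with the resolution function is omitted.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def nll(alpha, z, purity):
    # Signal pdf (1 + 2*alpha*z) normalized on [0, 1]:
    # the integral of (1 + 2*alpha*z) dz over [0, 1] is (1 + alpha).
    sig = (1.0 + 2.0 * alpha * z) / (1.0 + alpha)
    bkg = np.ones_like(z)        # toy model: wrong pairings flat in z
    return -np.sum(np.log(purity * sig + (1.0 - purity) * bkg))

# z_data: measured z values; purity from the chi^2 cut (e.g. 0.976)
z_data = np.random.default_rng(0).uniform(0.0, 1.0, 10_000)  # placeholder
fit = minimize_scalar(nll, bounds=(-0.2, 0.2), args=(z_data, 0.976),
                      method="bounded")
print("alpha_hat =", fit.x)
\end{verbatim}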
Using a sample with high purity and fitting in the range $(0-1)$, we found the preliminary result \cite{tiziana}: \begin{equation} \alpha = -0.014 \pm 0.004\,(stat) \;\pm 0.005\,(syst), \end{equation} where the systematic uncertainty has been evaluated by varying the analysis cuts and the fit range, and by measuring the maximum observed variation of $\alpha$ among the samples with different purity.\\ We observed a significant dependence of $\alpha$ on the fitting range. Moreover, the low purity sample shows two different slopes (see fig.\ref{fig:linearita}) in the Data--MC ratio of the $z$ distribution. \begin{figure}[htb] \begin{center} \epsfig{figure=fig/divold_1.eps,width=.8\linewidth} \end{center} \caption{Data--Monte Carlo ratio of the $z$ distribution. The MC distribution is pure phase space.} \label{fig:linearita} \end{figure} With a dedicated simulation we have realized that this is essentially due to a different value of the invariant mass of the three-pion system $(\ensuremath{\pi^0\,}\piz\ensuremath{\pi^0\,})$ in the Monte Carlo generator, $M_{\eta} = 547.30$ MeV/c$^{2}$, with respect to the one recently measured by our experiment \cite{Biagio}: \begin{equation} M_{\eta} = 547.822 \pm 0.005_{stat} \pm 0.069_{syst} \,\textrm{MeV/c$^{2}$}. \label{eq:Biagio} \end{equation} As a consequence, the accessible phase space in data is larger than the one in the Monte Carlo simulation. A new evaluation of $\alpha$ has been obtained after a kinematic fit with an additional constraint on the $\eta$ mass, see Table \ref{tab:resfit_3}. \begin{table}[!htb] \begin{center} \begin{tabular}{||c||c||c||c||} \hhline{|t:=:t:=:t:=:t:=:t|} \multicolumn{1}{||c||}{\small{\bf Range}} & \multicolumn{1}{c||}{\small{\bf Low Pur. }} & \multicolumn{1}{c||}{\small{\bf Med. Pur.}}& \multicolumn{1}{c||}{\small{\bf High Pur.}}\\ & $\alpha (\cdot 10^{-3})$ & $\alpha (\cdot 10^{-3})$ & $\alpha (\cdot 10^{-3})$ \\ \hhline{|:=::=::=::=:|} (0,1) &$-30\pm 2$ &$-31\pm 3$ &$-26\pm 4$\\ (0,0.8) &$-26\pm 2$ &$-28\pm 3$ &$-22\pm 5$\\ (0,0.7) &$-26\pm 3$ &$-27\pm 4$ &$-23\pm 5$\\ (0,0.6) &$-30\pm 4$ &$-31\pm 4$ &$-20\pm 6$\\ \hhline{|b:=:b:=:b:=:b:=:b|} \end{tabular} \end{center} \caption{Fitted results for the slope parameter $\alpha$ for the kinematic fit with the $\eta$ mass constrained at $M_{\eta} = 547.822$ MeV/c$^{2}$.} \label{tab:resfit_3} \end{table}\\ \noindent In this way, good stability with respect to the fit range is observed, and the linearity of the Data--MC ratio of the $z$ distribution has been recovered (see fig.\ref{fig:linearitaII}). \begin{figure}[htb] \begin{center} \epsfig{figure=fig/linearita_kinfit.eps,width=.8\linewidth} \end{center} \caption{Data--Monte Carlo ratio of the $z$ distribution after the kinematic fit with the $\eta$ mass constraint.} \label{fig:linearitaII} \end{figure} In order to give the final result, the phase space region where the $z$ distribution is flat is used as the fit region ($z \in [0,0.7]$). The systematic uncertainties on $\alpha$ are summarized in Table \ref{tab:syst}. We have first taken into account the correction for the Data--MC discrepancy in the photon energy resolution (RES) and the effect of the fit range. We have then determined the $\alpha$ variation with respect to the other purity samples and the error due to the combinatorial background evaluation. We have also tested the $\alpha$--dependence on the chosen value of $M_{\eta}$ used in the fit constraint. 
\begin{table}[htb] \begin{center} \begin{tabular}{||l||c||c||c||} \hhline{|t:=:t:=:t:=:t:=:t|} \multicolumn{1}{||c||}{\small{\bf }} & \multicolumn{1}{c||}{\small{\bf Low Pur. }} & \multicolumn{1}{c||}{\small{\bf Med. Pur.}}& \multicolumn{1}{c||}{\small{\bf High Pur.}}\\ & $\sigma^{syst}_{\alpha} (\cdot 10^{-3})$ & $\sigma^{syst}_{\alpha} (\cdot 10^{-3})$ & $\sigma^{syst}_{\alpha} (\cdot 10^{-3})$ \\ \hhline{|:=::=::=::=:|} RES &$-9$ &$-4$ &$-3$\\ Range &$-4$ &$-4$ &$-3 +3$\\ Purity &$-1 +3$&$+4$ &$-4 $\\ BKG &$0.$ &$0.$ &$-1 +1$ \\ $M_{\eta}$ &$-1$ &$-2$ &$-5$ \\ \hhline{|t:=:t:=:t:=:t:=:t|} {\bf Total } &$-10 \,+3$ &$-6 \,+4$ &$-8 \,+3$ \\ \hhline{|b:=:b:=:b:=:b:=:b|} \end{tabular} \end{center} \caption{Summary table of systematic uncertainties. The total systematic uncertainty is obtained by adding each contribution in quadrature.} \label{tab:syst} \end{table} \section{Results} \noindent We quote as the preliminary result for the slope parameter $\alpha$ the one obtained with the Medium Purity sample (about 650,000 $\eta\rightarrow3\pi^{0}$ decays). The result, including the statistical uncertainty from the fit and the evaluated systematic error, is: \begin{equation} \alpha = -0.027 \pm 0.004\,(stat) \;^{+0.004}_{-0.006}\,(syst) \end{equation} with $\chi^{2}/ndf = 13.72/17$. The result is compatible within errors with the Crystal Ball result \cite{CrystalBall}, based on $10^{6}$ events: \begin{equation} \alpha = -0.031 \pm 0.004. \end{equation} Our measurement of $\alpha$ also agrees with the calculation from the chiral unitary approach \cite{Borasoy}.
\section{Introduction} \label{sec:intro} The Solar System gas and ice giants host ring systems, although the origins of these rings remain mysterious (e.g., \citealt{DePater17,DePater18,Hedman21}). Whereas Saturn's massive rings are rich in water ice (\citealt{Cuzzi1998,poulet_et_al2003,Nicholson2005}), the less massive rings of Uranus and Neptune have a higher content of rocky particles (\citealt{Tiscareno2013}) and Jupiter's tenuous rings are composed of micron-sized dust particles (\citealt{DePater17}). Massive ring systems like Saturn's may form from collisions (e.g., \citealt{Pollack75}), or tidal disruptions of primordial satellites (\citealt{Canup10}) or passing objects (e.g., \citealt{Dones91}). Despite the prevalence of rings around the Solar System giants, exoplanet characterization efforts have not yet yielded conclusive observational evidence of circumplanetary rings (\citealt{Heising15,Aizawa18}). The presence of ring systems around giant exoplanets, however, may explain planets with large measured radii and unusually low bulk densities. HIP~41378 is a nearby, bright ($V = 8.93$), late F-type star hosting five transiting planets (\citealt{vanderburg_et_al2016,berardo_et_al2019}). With $\rm T_{eq}$=294 K, the outermost planet, HIP\,41378\,$\rm f$, is an intriguing target for atmospheric characterization because it is significantly colder than the giant exoplanets typically probed by ground-based and space-based observations. It therefore provides a bridge between the highly-irradiated hot Jupiters studied via transmission spectroscopy; the young, wide-orbit, and massive giant planets or substellar objects studied via direct imaging; and the colder, mature gas giants in the Solar System. With a measured mass of 12$\pm$3\,$M_{\earth}$ and a radius of 9.2$\pm$0.1\,$R_{\earth}$ \citep{santerne_et_al2019}, HIP\,41378\,$\rm f$ stands out as one of the lowest bulk density planets discovered to date ($0.09\pm0.02 \, {\rm g} \, {\rm cm}^{-3}$). Rings have been shown theoretically to inflate an exoplanet's radius inferred through transits, thereby decreasing the measured bulk density \citep{piro+vissapragada2020, akinsanmi_et_al2020}. As discussed in \citet{akinsanmi_et_al2020}, the ring-induced transit depth enhancement is expected to be chromatic: deeper transits occur at wavelengths where the ring is optically thick. Based on detailed modeling of the observed \textit{K2} photometry, the most likely ring scenario for HIP\,41378\,$\rm f$ is a Uranian-like bulk density of 1.23\,${\rm g} \, {\rm cm}^{-3}$ and a ring extending from 1.05 to 2.59 times the planetary radius inclined at an angle of $\sim$25$^\circ$ from the sky plane \citep{akinsanmi_et_al2020}. While HIP\,41378\,$\rm f$ is too close to its host star to support icy rings, the planet could instead be orbited by rings composed of small, porous rocky particles \citep{piro+vissapragada2020, akinsanmi_et_al2020}. In this ringed model, the planet radius would be $\rm R_p = 3.7 \pm 0.3 \, \rm R_\oplus$ (compared to $\rm R_p = 9.21 \pm 0.01 \, \rm R_\oplus$ for the ringless case). Alternatively, if HIP\,41378\,$\rm f$ does not possess rings, then it may be a member of the rare class of ``super-puff" exoplanets, which have been inferred to possess gas mass fractions far greater than the more common mini-Neptunes with similar masses ($>$10\% vs.~a few \%, respectively; \citealt{Lopez14}). 
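As a quick arithmetic check of the quoted bulk density (a sketch with rounded constants, not taken from the original analysis):
\begin{verbatim}
import numpy as np
M_EARTH_G, R_EARTH_CM = 5.972e27, 6.371e8          # CGS constants
mass, radius = 12.0 * M_EARTH_G, 9.2 * R_EARTH_CM  # quoted values
rho = mass / (4.0 / 3.0 * np.pi * radius**3)
print(f"bulk density = {rho:.3f} g/cm^3")
# ~0.085 g/cm^3, consistent with the quoted 0.09 +/- 0.02 g/cm^3
\end{verbatim}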
Super-puffs have been hypothesized to form in a less opaque region of the protoplanetary disk \citep{lee+chiang2016}, followed by inward migration to account for their large gas mass fractions. However, despite their low densities and correspondingly large atmospheric scale heights, all transmission spectra of super-puffs observed to date have been flat (e.g., \citealt{libby-roberts_et_al2020,chachan_et_al2020}), suggesting that flat spectra are a general property of the population. High-altitude photochemical hazes have been considered to explain both the flat spectra and the large radii (\citealt{gao+zhang2020,Ohno2021}). Here we present the near-infrared transmission spectrum of HIP\,41378\,$\rm f$ obtained with the \emph{Hubble Space Telescope} Wide-Field Camera 3 (HST/WFC3), which we use to constrain the planet's atmospheric composition and explore the presence of circumplanetary rings. In Section \ref{sec:obs_dr}, we describe our HST observations and data reduction procedure. Section \ref{sec:lc_fitting} details our methods for fitting the transit light curves. In Section \ref{sec:results}, we present the near-infrared transmission spectrum of HIP\,41378\,$\rm f$. In Section \ref{sec:discussion}, we interpret our results using atmospheric models, explore the possibility of rings, and compare HIP\,41378\,$\rm f$ to other planets with similar masses and radii. Section \ref{sec:conclusions} summarizes our results. \section{Observations \& Data Reduction} \label{sec:obs_dr} \subsection{Observations} \label{sec:observations} We observed a single primary transit of HIP\,41378\,$\rm f$ with HST/WFC3 using the G141 grism, which provides spectroscopy between 1.125-1.643 $\mu$m at a spectral resolving power of R$\sim$130 around $\lambda$=1.4 $\mu$m. Taken as part of GO 16267 (PI: Dressing) on UT 2021 May 19-21, the observations were scheduled over three consecutive HST visits of six orbits each to accommodate the target's long transit duration (18.998 hours; \citealt{vanderburg_et_al2016}). To ensure that the target remained centered in the instrument's field of view, we took an image of the target using the F126N filter with an exposure time of 7.317 seconds at the beginning of the first and third visits as well as at the beginning of the last orbit of the second visit. We then obtained time series spectroscopy with the G141 grism, and used round-trip spatial scanning for all visits with a scan rate of 0.419 arcsec~s$^{-1}$ to permit taking longer exposures without saturating the detector \citep{McCullough12}. Due to South Atlantic Anomaly (SAA) passages during the transit, we varied the sampling sequence for affected orbits. The first three were impacted by SAA passages, making roughly half, a third, and a quarter of the respective orbits unusable. The fourth through tenth orbits were less affected, so we were able to use almost all exposures taken during these orbits. For the remaining orbits (orbits 11-18), we again faced interruptions due to the SAA. Ultimately, we used the SPARS10 sampling sequence with 7-9 non-destructive reads per exposure (NSAMP = 7-9). As a result of these NSAMP changes, the total integration times ranged from 37.010 to 51.703 seconds and scans were taken across approximately 126-178 pixel rows in the cross-dispersion direction. With this instrument setup, we read out a 512$\times$512 pixel subarray for each science exposure and obtained a total of 274 science exposures over the 18 orbits observed. 
\vspace{-0.2cm} \subsection{Data Reduction} \label{sec:data_reduction} We reduced the observations for this program using the methods outlined in \citet{Alam20}, which we briefly summarize here. We started our analysis using the bias-corrected, flat-fielded {\tt ima} images from the CALWF3 pipeline. The flux for each exposure was extracted by taking the difference between successive reads and then performing a background subtraction to suppress contamination from nearby stars. For the background subtraction, we subtracted the median flux from a box 32 pixels away from the spectrum. To correct for cosmic ray events, we followed the procedure of \citet{Nikolov14}. Next, we extracted stellar spectra by summing the flux within a rectangular aperture. To determine the size of the aperture (accounting for the different-sized spatial scans; see Section \ref{sec:observations}), we fit for the aperture width along the dispersion and cross-dispersion axes. We scanned each row of the raw {\tt ima} images and fit a top-hat function to the data to determine the center of the PSF (i.e., the center of the 2D spectrum along the cross-dispersion axis). To determine the center of the spectrum along the dispersion direction, we scanned each column and fit a top-hat distribution to the data. We then used these fitted x and y center points as initial guesses for calculating the centroid positions on each image using the flux-weighted first moments in x and y pixel position. To determine the wavelength solution, we cross-correlated each stellar spectrum to a grid of model spectra from the WFC3 Exposure Time Calculator (ETC) with temperatures ranging from 4060-9230~K. To determine shifts along the dispersion axis, we used the closest matching model of 6200~K. \vspace{-0.2cm} \section{Light Curve Fits} \label{sec:lc_fitting} \begin{figure*} \centering \includegraphics[scale=0.95]{HIP41378_wlc_v2.pdf} \caption{Top: Example HIP\,41378\,$\rm f$ stellar spectrum for the HST/WFC3 G141 grism. The vertical bands denote the 0.018 $\mu$m wavelength channels adopted for the spectroscopic light curves. Middle: Raw (top) and detrended (bottom) white light curve, excluding the first orbit and the first exposure of each subsequent orbit (points). The raw light curve has been shifted vertically by an arbitrary constant for clarity. Overplotted is the best-fitting analytic light curve model (line). Epochs most affected by South Atlantic Anomaly passages are denoted by the gray shading. Bottom: RMS residuals of the transit fit in ppm (left; points) and the distribution of residuals (right).} \label{fig:wlc_lm} \end{figure*} To extract the 1.1-1.7 $\mu$m transmission spectrum, we fit the transit light curves using the fitting routine detailed in \citet{Kirk17,Kirk18,Kirk19,Kirk21} and \citet{Alam21}. Briefly, we modeled the analytic transit light curves \citep{Mandel02} using {\tt Batman} \citep{batman} and implemented a Gaussian process (GP) with the {\tt george} code \citep{Ambikasaran14} to model noise in the data. We fixed the non-linear limb darkening coefficients to the theoretical values from 3D stellar models \citep{Magic15}. To fit the white light curve, we fixed the system parameters to those in \citealt{santerne_et_al2019} ($\rm P$=542.07975 days, $\rm a/R_\star$=231.417, $i$=89.971$^{\circ}$, $e$=0). We fit for the time of mid-transit $\rm T_{0}$, the scaled planetary radius $\rm R_p/R_\star$, and the GP hyperparameters. 
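To make the structure of this fit concrete, the following is a condensed, self-contained sketch of the setup (the three GP inputs are described in detail next). It uses simulated data in place of the extracted light curve, and the limb-darkening coefficients, noise levels, and GP input series are placeholders rather than the values adopted in this analysis:
\begin{verbatim}
import numpy as np
import batman, george, emcee
from george import kernels

# Fixed orbit from Santerne et al. (2019); t0 and rp are fitted.
p = batman.TransitParams()
p.per, p.a, p.inc, p.ecc, p.w = 542.07975, 231.417, 89.971, 0.0, 90.0
p.t0, p.rp = 2459355.1, 0.0663
p.u = [0.4, 0.2, 0.1, -0.05]       # placeholder non-linear LD coeffs
p.limb_dark = "nonlinear"          # fixed to 3D-model values in the text

t = np.linspace(2459354.55, 2459355.65, 400)   # toy time grid (BJD)
tmodel = batman.TransitModel(p, t)

rng = np.random.default_rng(1)
flux = tmodel.light_curve(p) + 2e-4 * rng.standard_normal(t.size)
ferr = np.full(t.size, 2e-4)

def standardize(x):
    return (x - x.mean()) / x.std()

# Three standardized GP inputs: HST orbital phase, wavelength shift, time.
hst_phase = standardize(t % 0.0665)                    # ~96-min HST orbit
wave_shift = standardize(rng.standard_normal(t.size))  # placeholder series
X = np.column_stack([hst_phase, wave_shift, standardize(t)])

kernel = 1e-7 * kernels.ExpSquaredKernel([1.0, 1.0, 1.0], ndim=3)
gp = george.GP(kernel, white_noise=np.log(1e-8), fit_white_noise=True)
gp.compute(X, ferr)

def log_prob(theta):
    # Joint fit: transit parameters plus GP hyperparameters
    # (white noise, amplitude, three length scales).
    p.t0, p.rp = theta[0], theta[1]
    gp.set_parameter_vector(theta[2:])
    resid = flux - tmodel.light_curve(p)
    return gp.log_likelihood(resid, quiet=True)  # flat priors omitted

start = np.concatenate([[2459355.1, 0.0663], gp.get_parameter_vector()])
ndim = start.size
walkers = start + 1e-5 * rng.standard_normal((64, ndim))
sampler = emcee.EnsembleSampler(64, ndim, log_prob)
sampler.run_mcmc(walkers, 2000, progress=False)
\end{verbatim}
The pattern of updating the transit parameters and GP hyperparameters inside a single log-probability function mirrors the joint fit described here.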
For the white light curve, we used three squared-exponential GP kernels with the orbital phase of HST, the wavelength shift in the stellar spectra, and time as the three input vectors -- after standardizing each (subtracting the mean and dividing by the standard deviation to put each on a common scale). We fit for the natural logarithm of the inverse length-scale for each kernel, in addition to the amplitude of the GP and a white noise term. The GP was therefore described by five free hyperparameters. We placed truncated uniform-in-log-space priors on the GP hyperparameters. The amplitude was bounded between 0.01 and $100\times$ the out-of-transit variance and the length scales were bounded between the minimum spacing and $3\times$ the maximum spacing of the standardized input vectors. The white noise term was bounded between 0.1 and 1000\,ppm. To sample the parameter space, we ran a Markov Chain Monte Carlo (MCMC) using {\tt emcee} after clipping $>$4-$\sigma$ outliers from a running median computed for each light curve, which clipped 0--2 points per light curve. We optimized the GP hyperparameters to the out-of-transit data to find the starting locations for the hyperparameters. The starting value for $\rm R_{p}/R_{\star}$ was taken to be 0.0663 \citep{santerne_et_al2019} and the starting $\rm T_{0}$ value of BJD 2459354.6 was determined by visual inspection of the light curve. We initialized the chains around these values and ran the MCMC with 210 walkers for a 2000 step burn-in, followed by a 6000 step chain for our posterior and parameter estimates. The number of samples was $72\times$ the autocorrelation length, greater than the $50\times$ autocorrelation length which in general indicates convergence in \texttt{emcee}\footnote{\url{https://dfm.io/posts/autocorr/}}. The best-fit white light curve is shown in Figure \ref{fig:wlc_lm}, with $\rm R_p/R_\star$ = 0.068602 $^{+0.002684} _{-0.003370}$ and $\rm T_{0}$ = BJD 2459355.101374 $^{+0.001919} _{-0.001888}$. For the binned light curve fits, we used a common mode correction. This correction involved removing the best-fitting white light systematics model and the residuals to the white light fit from each binned light curve prior to fitting (e.g., \citealt{Gibson14,Alam20}). As a result, we were able to use a simpler systematics model for the binned light curves and therefore only used two GP kernels (HST phase and wavelength shift). We also held $\rm T_0$ fixed to the best-fit value from the white light curve fit, resulting in five free parameters per binned light curve ($\rm R_p/R_\star$ and the four GP hyperparameters). We then proceeded with the MCMC as for the white light curve fit. The measured $\rm R_{p}/R_{\star}$ values for each spectroscopic light curve\footnote{The spectroscopic light curves are shown \href{https://doi.org/10.6084/m9.figshare.17373572}{here} in the online supplementary material.} are presented in Table \ref{tab:tr_spec}. \section{Results} \label{sec:results} The near-infrared (1.1-1.7 $\mu$m) transmission spectrum of HIP\,41378\,$\rm f$ is shown in Figure \ref{fig:hip41378_trspec}. 
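For scale in what follows, a rough estimate of the atmospheric scale height implied by the measured mass and radius (a sketch with rounded constants; the result shifts noticeably within the quoted $\pm$3\,$M_{\earth}$ mass uncertainty):
\begin{verbatim}
G, K_B, M_U = 6.674e-11, 1.381e-23, 1.661e-27   # SI constants
M, R = 12.0 * 5.972e24, 9.2 * 6.371e6           # ringless mass and radius
g = G * M / R**2                                # ~1.4 m s^-2
H = K_B * 294.0 / (2.3 * M_U * g)               # T_eq = 294 K, mu = 2.3
print(f"g = {g:.2f} m/s^2, H = {H/1e3:.0f} km")
# H ~ 760 km; ~600 km for a mass at the upper end of the 1-sigma range,
# matching the ~1200 km quoted below for two scale heights
\end{verbatim}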
The $\rm R_p/R_\star$ values measured from our WFC3 transmission spectrum (Table~\ref{tab:tr_spec}) vary between 0.067 and 0.070, consistent within 2-$\sigma$ with the optical transit depths measured from the two \textit{K2} observations\footnote{We note that the WFC3 white light curve has error bars $\sim$30 times larger than the \textit{K2} observations, and the lower limit of the WFC3 white light curve is consistent within 1-$\sigma$ with the \textit{K2} $\rm R_{p}/R_{\star}$ value derived in \citet{santerne_et_al2019}.} (0.0672$\pm$0.0013, \citealt{vanderburg_et_al2016}; $0.06602^{+0.00017}_{-0.00016}$, \citealt{berardo_et_al2019}; 0.0663$\pm$0.0001, \citealt{santerne_et_al2019}). Within the precision of our observations, we find that the spectrum does not display any large absorption features from gaseous molecules, with maximum deviations spanning $\sim$2 scale heights (i.e., $\sim1200$\,km, assuming a H/He-dominated atmospheric composition with a mean molecular weight of 2.3~amu). \begin{figure*} \centering \includegraphics[width=\textwidth, trim={2.1cm 2.0cm 1.9cm 2.5cm}]{HIP41378_G141_TrSpec_v5.pdf} \caption{Broadband transmission spectrum for HIP\,41378\,$\rm f$ (black points), compared to 1D radiative-convective forward models (colored lines) for a clear atmosphere with metallicities 1$\times$, 30$\times$, and 300$\times$ solar, example hazy and ringed models, as well as a flat line (gray atmosphere). Top: The WFC3 transmission spectrum and the measured optical transit depth from \textit{K2} (gray diamond; \citealt{santerne_et_al2019}). Middle: Zoom-in to the WFC3 data presented in this work. Bottom: Zoom-out to the full spectrum, including mid-infrared wavelengths accessible with JWST.} \label{fig:hip41378_trspec} \end{figure*} \vspace{-0.2cm} \input{HIP41378_trspec.tex} \subsection{Comparisons to 1D Forward Models} We compare our observed transmission spectrum to model spectra generated using a 1D radiative-convective-thermochemical equilibrium model \citep{saumon2008} assuming clear, solar-composition atmospheres with metallicities 1$\times$, 30$\times$, and 300$\times$ solar. Considering HIP\,41378\,$\rm f$'s low bulk density and featureless transmission spectrum, we also compare to models incorporating high-altitude hazes and circumplanetary rings. The hazy models were computed using the Community Aerosol and Radiation Model for Atmospheres \citep[CARMA;][]{Gao18} by adding a downward flux of haze particles to the 1$\times$ solar model atmosphere and tracking particle coagulation, sedimentation, and mixing (as in, e.g., \citealt{Adams19}). We assumed spherical haze particles with compositions of soots and tholins for haze column production fluxes of $10^{-10}$, $10^{-11}$, $10^{-12}$, $10^{-13}$, and $10^{-14}$ $\rm g~\rm cm^{-2}~\rm s^{-1}$ (\citealt{lavvas2017,kawashima2019}). We considered one soot model and one tholin model at each of the five haze production rates for a total of 10 hazy models. We also computed transmission spectra with circumplanetary rings following the post-processing method described in a companion paper (Ohno \& Fortney, submitted). In short, the method computes the spectrum by summing the transmittance of the ring-free planetary disk and the circumplanetary ring outside of the planetary disk. Using Equation 3 of \citet{schlichting+chang2011} and a system age of 3.1$\pm$0.6~Gyr \citep{santerne_et_al2019}, we estimate a minimum ring particle size of $\sim$10~cm. 
Since the particle size is much larger than the relevant wavelength, we first assume a gray ring opacity. The gray rings model grid comprises 80 models assuming a solar composition atmosphere and an opaque ring with a morphology consistent with \citet{akinsanmi_et_al2020}. We varied the ring inclination from 21$^{\circ}$ to 28$^{\circ}$ from the sky plane (in increments of 1$^{\circ}$) and varied the inner ring radius from 1.02 to 1.11$\rm R_{0}$ (the ring-free transit radius of 0.35$\rm R_{J}$ from the clear 1$\times$ solar model) in steps of 0.01$\rm R_{0}$. The outer ring radius was fixed to 2.55$\rm R_{0}$, the Roche radius beyond which ring particles would coagulate into a satellite. We also tested model grids for ring opacities computed using Mie theory and assuming a power-law size distribution for ring particles. The refractive indices are taken for astronomical silicates \citep{Draine03}. We assumed a ring mass surface density of $100~{\rm g~{cm}^{-2}}$ with the largest particle size of $10\,{\rm m}$, similar to Saturnian rings. Since tiny particles might survive in optically thick rings \citep{schlichting+chang2011}, we set the smallest particle size to be $0.1~{\rm {\mu}m}$. We also tested the smallest sizes of $0.01$ and $1\,{\rm {\mu}m}$, but the results were almost unchanged. For all of the ringed models, the intrinsic atmospheric features are much smaller than those in the ring-free scenario because the surface gravity is about 6 times higher, significantly reducing the true atmospheric scale height and thus the spectral features. We fit all of the models described above to the observed WFC3 transmission spectrum (excluding the \textit{K2} point) by computing the mean model prediction of each spectroscopic channel and performing a least-squares fit of the band-averaged model to the spectrum. In our fits, we preserved the shape of the model by allowing the vertical offset in $\rm R_{p}/R_{\star}$ between the spectrum and model to vary while holding all other parameters fixed. The number of degrees of freedom for each model is $n$ $-$ $m$, where $n$ is the number of data points and $m$ is the number of fitted parameters. Since $n$ = 30 for the HST spectrum and $m$ = 1, the number of degrees of freedom is the same for each model. From the fits, we quantified our model selection by computing the $\chi^{2}$ statistic. Figure \ref{fig:hip41378_trspec} shows the clear atmosphere models, the best-fitting hazy and ringed models, and a flat model. We rule out clear, low metallicity atmospheres ($\chi^{2}$=5.80 and 8.44 for the 1$\times$ and 30$\times$ solar cases, respectively). We cannot, however, distinguish between the high-metallicity (300$\times$ solar; $\chi^{2}$=1.84), hazy (soots, prod=$10^{-13}$\,$\rm g~\rm cm^{-2}~\rm s^{-1}$; $\chi^{2}$=0.97), gray-opacity ringed ($\rm R_{in}$=$1.08\rm R_{0}$, $i_{\rm ring}$=28$^{\circ}$; $\chi^{2}$=1.04), and non-gray ringed ($\rm R_{in}$=$1.07\rm R_{0}$, $i_{\rm ring}$=28$^{\circ}$, $\rm a_{max}$=10\,$\mu$m; $\chi^{2}$=1.04) cases with the current observations. A flat spectrum (gray atmosphere) also matches the data well ($\chi^{2}$=1.05). \vspace{-0.2cm} \section{Discussion} \label{sec:discussion} Based on the observed WFC3 transmission spectrum, we contextualize HIP\,41378\,$\rm f$ by comparing to other planets with similar masses and radii (Section~\ref{ssec:context}) and constraining the composition of putative ring particles (Section \ref{sec:rings_in_lc}). 
We then explore how future JWST transit observations could break the degeneracy between high-altitude hazes and circumplanetary rings (Section~\ref{ssec:mir}). We also compare the observed WFC3 transit midpoint to previous predictions and calculate the times of upcoming transits for HIP\,41378\,$\rm f$ (Section~\ref{ssec:ttvs}). \subsection{Placing HIP\,41378\,f in Context} \label{ssec:context} HIP\,41378\,$\rm f$ ($\rm R_p$ = 9.2$\pm$0.1\,$\rm R_\oplus$) is approximately the same size as Saturn ($\rm R_p$ = 9.449\,$\rm R_\oplus$) but has a much lower mass (12$\pm$3\,$\rm M_\oplus$ versus $95.16\,\rm M_\oplus$) and density ($0.09\pm0.02 \, {\rm g}\, {\rm cm}^{-3}$ versus $0.687 \,{\rm g}\, {\rm cm}^{-3}$). Although HIP\,41378\,$\rm f$ is also less dense than other exoplanets with similar radii or masses, it is not the only known low-density Saturn-sized planet. There are currently five planets with radii of $7 \rm R_\oplus < \rm R_p < 10 \rm R_\oplus$ and densities lower than $0.15 \, {\rm g}\,{\rm cm}^{-3}$: Kepler-177\,c \citep{vissapragada_et_al2020}, Kepler-51\,b, c, d \citep{masuda2014,libby-roberts_et_al2020}, and Kepler-79\,d \citep{jontof-hutter_et_al2014, chachan_et_al2020}. Kepler-51\,b, Kepler-51\,d, and Kepler-79\,d have previously been observed in transmission using WFC3 and displayed (within the precision of those observations) flat, featureless transmission spectra consistent with high-altitude aerosols \citep{libby-roberts_et_al2020, chachan_et_al2020}. HIP\,41378\,$\rm f$ displays a similarly flat spectrum (see Figure~\ref{fig:hip41378_trspec}), suggesting that flat spectra may be a hallmark of temperate, ultra-low density planets. Several theories have been proposed to explain the extremely low density planets discovered to date. The large radii of the more highly irradiated planets could be attributed to ohmic dissipation \citep{pu+valencia2017} or obliquity tides \citep{millholland2019}, but these mechanisms are not expected to be significant heating sources for wide-orbit, cooler planets like HIP\,41378\,$\rm f$. A large ($>$10\%) gas mass fraction can naturally lead to an inflated radius, though acquiring and maintaining such an envelope may require formation near the water ice line and inward migration \citep{lee+chiang2016} as well as a low rate of atmospheric loss. High-altitude aerosols can reduce the gas mass fraction needed to produce the observed radii and explain the flat transmission spectra \citep[e.g.,][]{gao+zhang2020,Ohno2021}. At the low equilibrium temperature of HIP\,41378\,$\rm f$ ($\rm T_{eq}$=294~K), methane is the dominant carbon carrier in a solar metallicity atmosphere and therefore organic hazes are likely. Sulfur hazes produced from H$_{2}$S photochemistry are also possible (\citealt{Zahnle2016,gao_et_al2017}). Alternatively, the planets themselves could have higher densities but are surrounded by extended ring systems at an orientation that inflates their transit depths \citep{piro+vissapragada2020, akinsanmi_et_al2020}. All of the observed low density exoplanets, including HIP\,41378\,$\rm f$, are close enough to their host stars that any ring systems would be warmer than the water ice sublimation temperature ($T_{\rm sub} \approx 170$\,K) and must therefore be composed of rocky particles rather than icy particles \citep{gaudi_et_al2003, piro+vissapragada2020} with densities of $2-5 \, {\rm g}\,{\rm cm}^{-3}$, depending on the specific particle composition and porosity. 
An emerging trend in the haziness of cooler planets (see \citealt{Crossfield17,libby-roberts_et_al2020,Yu21,Dymont+21}) hints that planets with \mbox{$\rm T_{eq}$ $<$ 300\,K} may have clear atmospheres, as possibly shown by \mbox{K2-18\,b} ($\rm T_{eq}$=282\,K; \citealt{Benneke19,Tsiaras19}) and LHS~1140\,b ($\rm T_{eq}$=229\,K; \citealt{Edwards20}). Following \citet{Dymont+21}, we compute the 1.4 $\mu$m H$_{2}$O feature amplitude ($\rm A_{H}$) and add HIP\,41378\,$\rm f$ to the sample of cooler ($\rm T_{eq}$ $<$ 1000\,K) planets with measured WFC3 transmission spectra (Figure \ref{fig:Teq_Ah}). We do not find a potentially linear \citep{Crossfield17,libby-roberts_et_al2020} or quadratic \citep{Yu21} trend in $\rm A_{H}$ with planetary equilibrium temperature, as previously suggested in the literature. This larger sample reiterates that cloudiness/haziness in exoplanet atmospheres is governed by complex chemical and physical processes that are controlled by multiple parameters. Despite their comparable irradiation levels, for example, the transmission spectrum of \mbox{K2-18\,b} displays an atmospheric signature of H$_{2}$O whereas the spectrum of HIP\,41378\,$\rm f$ is essentially featureless. The different emergent spectra for these two planets with similar equilibrium temperatures may be due to their distinct bulk properties. Alternatively, the observed atmospheric signal for K2-18\,b may be caused by stellar surface inhomogeneities \citep{Barclay21}. \begin{figure} \centering \includegraphics[trim={2cm 1cm 1cm 0},scale=0.50]{HIP41378f_Ah_Teq_final.pdf} \caption{Equilibrium temperature versus 1.4 $\mu$m H$_{2}$O feature amplitude ($\rm A_{H}$) for the cooler ($\rm T_{eq}<$~1000\,K) planet sample from \citealt{Dymont+21} (points) and HIP\,41378\,$\rm f$ (orange star). There is no clear trend in $\rm A_{H}$ with planetary temperature.} \label{fig:Teq_Ah} \end{figure} \subsection{Modeling Ring Compositions} \label{sec:rings_in_lc} We use the WFC3 white light curve to constrain the composition of putative ring particles for HIP\,41378\,$\rm f$. We infer the ring properties following the framework of \citet{akinsanmi_et_al2020}, which found that the observed transit depth of HIP\,41378\,$\rm f$ could be explained by a ring system extending from $1.05$-$2.59\rm R_p$ and inclined by $\sim$25$^\circ$. In this scenario, the underlying planet would have a higher density ($1.2 \pm 0.4 \, {\rm g}\,{\rm cm}^{-3}$) and a smaller radius ($3.7^{+0.3}_{-0.2}\,R_\oplus$), while the ring particles would possess a density of $\rho_r = 1.08\pm0.30 \, {\rm g}\,{\rm cm}^{-3}$ -- lower than expected for rocky materials but comparable to the densities of porous materials comprising some asteroids \citep{carry2012}. Despite the lower signal-to-noise ratio and sparser time sampling of the WFC3 observations, we performed a joint fit to the HST white light curve and \textit{K2} (Campaigns 5 and 18) light curves, allowing for different underlying planetary radii in the different bandpasses. Our ringed model fit (constrained mostly by the \textit{K2} data) provides a ring density estimate of 1.07$\pm$0.27\,g cm$^{-3}$, consistent with results in \citet{akinsanmi_et_al2020}. The ringed model fit\footnote{The ringed model light curve fit is available \href{https://doi.org/10.6084/m9.figshare.17374061}{here} in the online supplementary material.} suggests an underlying planetary radius of $3.7^{+0.3}_{-0.2}\,R_\oplus$ for the \textit{K2} data and $3.9^{+1.2}_{-0.4}\,R_\oplus$ for HST. 
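As a toy cross-check of this ringed geometry (a sketch ignoring the small planet--ring overlap and any partial ring transmission), the projected area of an opaque ring with the above extent and tilt around the smaller planet reproduces the apparent ringless radius:
\begin{verbatim}
import numpy as np
rp = 3.7                                  # planet radius (R_earth)
r_in, r_out = 1.05 * rp, 2.59 * rp        # ring extent (R_earth)
tilt = np.deg2rad(25.0)                   # inclination from the sky plane
area = np.pi * rp**2 + np.pi * (r_out**2 - r_in**2) * np.cos(tilt)
print(f"effective radius = {np.sqrt(area / np.pi):.1f} R_earth")
# ~9.1 R_earth, close to the 9.2 R_earth of the ringless fit
\end{verbatim}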
\subsection{Distinguishing Between Rings, Hazes, and High Atmospheric Metallicity} \label{ssec:mir} The featureless near-infrared transmission spectrum of HIP\,41378\,$\rm f$ (Figure \ref{fig:hip41378_trspec}) might suggest the presence of circumplanetary rings -- although high-altitude hazes, a combination of rings and hazes, or a high mean molecular weight atmosphere could also explain the lack of spectral features. Rings composed of large ($>$10 $\mu$m) particles would result in a fairly flat spectrum with weak spectral features in the limit that the ring signal dominates over any signal from the planet's atmosphere (\citealt{Ohno2021}; Ohno \& Fortney, submitted). In contrast, hazes can flatten spectra fairly easily, as has been observed for other planets (e.g., \citealt{Kreidberg14,libby-roberts_et_al2020}). An enticing prospect for breaking the degeneracy between rings, metallicity, and aerosols is to obtain transmission spectra at near- and mid-infrared wavelengths. As shown by \citet{Ohno2021}, a super-puff with a hazy atmosphere would be expected to have a strongly sloped transmission spectrum in which the transit depth is much larger at bluer wavelengths than at redder wavelengths. This effect occurs because of the anticipated small size ($<$1 $\mu$m) of lofted dust particles. Conversely, planetary rings are likely composed of significantly larger particles, leading to less variation in transit depth with wavelength. Using {\tt PandExo} \citep{batalha_et_al2017}, we simulated JWST observations\footnote{The simulated JWST observations from {\tt PandExo} are included \href{https://doi.org/10.6084/m9.figshare.17960111}{here} in the online supplementary material.} of a single transit with MIRI LRS ($\sim$5-12\,$\mu$m), NIRSpec Prism ($\sim$0.5-5.5\,$\mu$m), NIRISS SOSS order 1 ($\sim$0.6-3\,$\mu$m), and NIRCam f322 ($\sim$2.4-4\,$\mu$m). At a resolution $R = 100$, we find that we can measure the transit depth for the high-metallicity clear, hazy, and non-gray rings scenarios to precisions of 85-100\,ppm with MIRI, 325-420\,ppm with NIRSpec Prism, 230-310\,ppm with NIRISS, and 220-280\,ppm for NIRCam f322. Observations with MIRI, NIRSpec Prism, NIRISS SOSS, and NIRCam f322 would be able to distinguish between the hazy and clear high-metallicity cases at 3.6-$\sigma$, 5.1-$\sigma$, 4.8-$\sigma$, and 3.4-$\sigma$, between hazy and ringed cases at 5.4-$\sigma$, 6.6-$\sigma$, 6.0-$\sigma$, and 5.1-$\sigma$, or between ringed and clear high-metallicity cases at 1.6-$\sigma$, 1.5-$\sigma$, 1.2-$\sigma$, and 1.7-$\sigma$, respectively. As shown in Figure~\ref{fig:hip41378_trspec}, the models differ in both transit depth and the slope across the near- and mid-infrared. Given the intrinsic challenge of measuring absolute transit depths, the broader wavelength coverage of MIRI and NIRISS SOSS is advantageous because of the increased ability to measure trends in transit depth with wavelength. The ability of JWST to observe continuously for the full transit also provides the opportunity to reveal subtle ring-induced deviations near ingress and egress \citep{akinsanmi_et_al2020}. \vspace{-0.44cm} \subsection{Future Transits} \label{ssec:ttvs} \begin{figure} \centering \includegraphics[trim={0 0.2cm 0 0},scale=0.56]{HIP41378f_TTVs.pdf} \caption{Transit times versus transit epoch for HIP\,41378\,$\rm f$ for observed transits from previous \textit{K2} and NGTS analyses (\citealt{vanderburg_et_al2016,becker_et_al2019, bryant_et_al2021}) compared to the current HST analysis (open black circles). 
We compare the transit times predicted by \citet{bryant_et_al2021} (green circles) with our new transit predictions based on the HST transit mid-point (open blue circles). The dashed black line marks the linear ephemeris calculated using the period and transit mid-point from \citet{santerne_et_al2019}, along with the median TTV signal (orange line) and 1-$\sigma$ uncertainty (shaded region).} \label{fig:ttvs} \end{figure} To update the prediction of future transits, we reproduced the TTV analysis described in \citet{bryant_et_al2021}\footnote{\citet{bryant_et_al2021} predicted a transit center of BJD 2459355.087 with a 68\% confidence range of BJD 2459355.064 - 2459355.118 (1.3 hours long) and a 95\% confidence range of BJD 2459355.020 - 2459355.205 (4.4 hours long).}, based on the \citet{lithwick_et_al2012} formalism applied to the four epochs observed so far (two \textit{K2} epochs, one NGTS, and the HST transit presented in this manuscript). We assumed the timing variations of HIP\,41378\,$\rm f$ are dominated by the 2:3 resonance with HIP\,41378\,$\rm e$. As in the aforementioned study, the interaction with the very low-mass HIP\,41378\,$\rm d$ is expected to be negligible. We used {\tt emcee} to explore the posterior distribution with 40 walkers of 200,000 steps after a burn-in of 100,000 iterations. Priors were defined following the results of \citet{santerne_et_al2019}. We predict that the next transits of HIP\,41378\,$\rm f$ should occur at $\rm T_{C}$ = BJD 2459897.046 $\pm$ 0.008 (mid-transit on 2022-11-13 at 13:06:28.30 UT) and $\rm T_{C}$ = BJD 2460438.95 $\pm$ 0.02 (mid-transit on 2024-05-08 at 10:47:02.33 UT). As displayed in Figure~\ref{fig:ttvs}, the measured transit time is 21 minutes later than the value predicted by \citet{bryant_et_al2021} but is fully compatible with that prediction within its 68.3\% confidence range. We did not detect a transit of HIP\,41378\,$\rm e$ in our HST data, which is unsurprising given the short duration of our observations compared to the length of the transit window for the planet. Given the long period of HIP\,41378\,$\rm f$, there are only a few opportunities to observe its transit during JWST's lifetime. No JWST observations are currently planned for future transits of this target, although these rare events present a unique opportunity to characterize the atmospheric properties of a cool, low mass giant planet. \section{Conclusions} \label{sec:conclusions} Using HST/WFC3, we observed a transit of the low-mass, long-period temperate giant planet HIP\,41378\,$\rm f$ to measure its near-infrared transmission spectrum. Based on these measurements, our key results about the atmospheric properties of this planet and opportunities for future observations can be summarized as follows: \begin{itemize} \item The transmission spectrum is featureless between $1.1-1.7\,\mu{\rm m}$, with no evidence of gaseous molecular features. Based on comparisons to 1D radiative-convective forward models, we rule out clear low metallicity atmospheres, but cannot distinguish between high metallicities, high-altitude hazes, and circumplanetary rings with the current observations. \item In the context of other cooler, low density exoplanets, HIP\,41378\,$\rm f$'s featureless spectrum suggests that flat spectra are possibly a population property of ultra-low density planets. This planet also complicates the picture of cloudiness versus temperature. 
\item Future JWST observations (e.g., MIRI, NIRSpec, NIRISS, NIRCam) can distinguish at $>$1-$\sigma$ confidence between the super-puff scenario in which HIP\,41378\,$\rm f$ is a low-density planet shrouded in a high-altitude aerosol layer, the ringed scenario in which the planet itself is much smaller than expected from the observed optical and near-infrared transit depths, and a clear high mean molecular weight atmosphere scenario. \item We predict the next transits of HIP\,41378\,$\rm f$ to occur at BJD = 2459897.046 $\pm$ 0.008 and BJD = 2460438.95 $\pm$ 0.02. These upcoming transits provide a rare opportunity to observe the atmospheric properties of a low mass, temperate gas giant planet with JWST, thereby expanding our efforts for comparative exoplanetology. \end{itemize} With the current HST observations, it is also possible to place constraints on the potential presence of exomoons. A $1.5 \, \rm R_\oplus$ moon would produce a 115~ppm transit (comparable to the precision we achieve in each spectrophotometric channel). Although a lunar transit would cause a noticeable deviation in the light curve, the moon would have to be precisely aligned and at a favorable orbital phase. A moon detection is therefore unlikely, but we will discuss the limits from this serendipitous search in a follow-up paper. \begin{acknowledgments} We appreciate the painstaking work of the HST technical staff, including Patricia Royle and Nikolay Nikolov, in scheduling this long sequence of observations. This paper makes use of observations from the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with HST GO program 16267 (PI: Dressing) and the analysis was supported by grant HST-GO-16267. M.K.A. is grateful to Johanna Teske and Anjali Piette for useful discussions. C.D.D. gratefully acknowledges additional support from the David \& Lucile Packard Foundation (grant number 2019-69648) and helpful conversations with Christina Hedges. K.O. was supported by JSPS Overseas Research Fellowship. N.S., S.B., and B.A. acknowledge the support by FCT (Funda\c{c}\~ao para a Ci\^encia e a Tecnologia) through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionaliza\c{c}\~ao by these grants: UID/FIS/04434/2019; UIDB/04434/2020; UIDP/04434/2020; PTDC/FIS-AST/32113/2017 \& POCI-01-0145-FEDER-032113; PTDC/FISAST /28953/2017 \& POCI-01-0145-FEDER-028953. V.A. acknowledges the support from FCT through Investigador contract nr. IF/00650/2015/CP1273/CT0001. J.L-B. acknowledges financial support received from ``la Caixa" Foundation (ID 100010434) and from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 847648, with fellowship code LCF/BQ/PI20/11760023. This research has also been partly funded by the Spanish State Research Agency (AEI) Projects No.ESP2017-87676-C5-1-R and No. MDM-2017-0737 Unidad de Excelencia ``Mar\'ia de Maeztu"- Centro de Astrobiolog\'ia (INTA-CSIC). \end{acknowledgments} \vspace{5mm} \facilities{HST(WFC3)}
\section{Introduction} Autonomous trucks are expected to fundamentally transform the freight transportation industry. Morgan Stanley estimates the potential savings from automation at \$168 billion annually for the US alone \cite{Greene2013-AutonomousFreightVehicles}. Additionally, autonomous transportation may improve on-road safety and reduce emissions and traffic congestion \cite{ShortMurray2016-IdentifyingAutonomousVehicle,SlowikSharpe2018-AutomationLongHaul}. SAE International~\shortcite{SAEInternational2018-TaxonomyDefinitionsTerms} defines different levels of driving automation, ranging from L0 to L5, corresponding to no driving automation up to full driving automation. The current focus is on L4 technology (high automation), which aims at delivering automated trucks that can drive without any need for human intervention in specific domains, e.g., on highways. The trucking industry is actively involved in making L4 vehicles a reality. Daimler Trucks, one of the leading heavy-duty truck manufacturers in North America, is working with both Torc Robotics and Waymo to develop autonomous trucks \cite{HDT2021-DaimlersRedundantChassis}. In 2020, truck and engine maker Navistar announced a strategic partnership with technology company TuSimple to go into production by 2024 \cite{TransportTopics2020-NavistarTusimplePartner}. Truck manufacturers Volvo and Paccar have both announced partnerships with Aurora \cite{TechCrunch2021-AuroraVolvoPartner}. Other companies developing self-driving vehicles include Embark, Gatik, Kodiak, and Plus \cite{FleetOwner2021-TusimpleAutonomousTruck,Forbes2021-PlusPartnersIveco,FreightWaves2021-GatikIsuzuPartner}. A study by Viscelli \shortcite{Viscelli-Driverless?AutonomousTrucks} describes different scenarios for the adoption of autonomous trucks by the industry. The most likely scenario, according to some of the major players, is the \emph{transfer hub business model} ~\cite{Viscelli-Driverless?AutonomousTrucks,RolandBerger2018-ShiftingGearAutomation,ShahandashtEtAl2019-AutonomousVehiclesFreight}. An Autonomous Transfer Hub Network (ATHN) makes use of autonomous truck ports, or \emph{transfer hubs}, to hand off trailers between human-driven trucks and driverless autonomous trucks. Autonomous trucks then carry out the transportation between the hubs, while conventional trucks serve the first and last miles (see Figure~\ref{fig:autonomous_example}). Orders are split into a first-mile leg, an autonomous leg, and a last-mile leg, each of which is served by a different vehicle. A human-driven truck picks up the cargo at the customer location, and drops it off at the nearest transfer hub. A driverless autonomous truck moves the trailer to the transfer hub closest to the destination, and another human-driven truck performs the last leg. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{athn_example.pdf} \caption{Example of an Autonomous Transfer Hub Network.} \label{fig:autonomous_example} \end{figure} ATHNs apply automation where it counts: Monotonous highway driving is automated, while more complex local driving and customer contact are left to humans. Global consultancy firm Roland Berger \shortcite{RolandBerger2018-ShiftingGearAutomation} estimates operational cost savings between 22\% and 40\% in the transfer hub model, based on cost estimates for three example trips. A recent white paper published by Ryder System, Inc.
\shortcite{RyderSAM2021-ImpactAutonomousTrucking} studies whether these savings can be attained for actual operations and realistic orders. It models ATHN operations as a scheduling problem and uses a Constraint Programming (CP) model \cite{DalmeijerVanHentenryck2021-OptimizingFreightOperations} to minimize empty miles and produce savings from 27\% to 40\% on a case study in the Southeast of the US. This paper reconsiders the optimization of ATHNs through both a solution quality and a solver performance lens. In addition to the CP model, it considers a model based on the Vehicle Routing Problem with Full Truckloads (VRPFL) \cite{ArunapuramEtAl2003-VehicleRoutingScheduling}. This model is tackled by two different approaches: a Column Generation (CG) approach and a Network Flow (NF) approach. The main technical contributions can be summarized as follows: (1) the NF model provides lower bounds as strong as those of CG on the Ryder case study; (2) solutions to the NF model can be transformed into upper bounds that are within 1\% of optimality; and (3) the resulting NF-based approach is highly scalable and provides orders of magnitude of improvement over CP and CG. From a case study standpoint, the main contribution of the paper is to provide the first lower bounds for the ATHN optimization and to demonstrate that real instances can be solved to near-optimality. The remainder of this paper is organized as follows. Section~\ref{sec:problem} formally defines the problem of optimizing ATHN operations, and Section~\ref{sec:models} introduces the three solution methods. Section~\ref{sec:casestudy} compares the methods on the case study, and Section~\ref{sec:conclusion} provides the conclusions. \section{Problem Statement} \label{sec:problem} The ATHN problem aims to find a plan for a set of vehicles to fulfill a list of customer tasks at minimum cost. Let $L$ be a set of locations, including hub locations $L_H \subset L$, with driving time $\tau_{ij} \ge 0$ and driving distance $c_{ij} \geq 0$ between locations $i, j \in L$ ($\tau_{ij}, c_{ij} > 0$ if $i \neq j$). Customers request freight to be picked up at a given location at a given time, to be transported to a dropoff location. This information is used to generate a set of tasks $T$, where each order generates a first-mile task (regular truck from pickup to hub), an autonomous task (transportation between the hubs), and a last-mile task (regular truck from hub to dropoff). Every task $t\in T$ corresponds to a single leg, and is defined by an origin $o(t) \in L$, a destination $d(t) \in L$, and a pickup time $p(t)$. The duration of a task equals $\tau_{o(t)d(t)} + 2S$, where $S \ge 0$ is the fixed time for loading or unloading a trailer. The pickup time of the first-mile task is equal to the customer requested pickup time, and pickup times for subsequent tasks are set to the time that the freight is planned to arrive at $o(t)$. The set of available trucks is partitioned into autonomous trucks $K$, and regular trucks $K_h$ at every hub $h \in L_H$. A feasible plan is created by assigning all tasks to the vehicles. It is assumed that an appointment flexibility of $\Delta \geq 0$ minutes is permitted, which means that task $t\in T$ may start anywhere in the interval $[p(t)-\Delta, p(t)+\Delta]$. Trucks are assumed to provide dedicated service, i.e., they transport one order at a time, and may be scheduled with any start- and end-point over the planning horizon.
Every task must be assigned to the right kind of truck, and tasks performed by the same vehicle must not overlap in time. If the dropoff location $i$ of the previous task does not match the pickup location $j$ of the next task, then an empty relocation is necessary with time $\tau_{ij}$ and cost $c_{ij}$. An optimal plan performs all the tasks at minimum total driving distance, or equivalently, at minimum total relocation distance. The problem described above can be decomposed and solved independently for the autonomous network and for the operations at each of the hubs. The main challenge is in optimizing the autonomous network since, in practice, the first- and last-mile problems are not very constrained. That is the focus of the paper. \section{Models and Methodology} \label{sec:models} The problem of optimizing ATHN operations is modeled as a scheduling problem and as a VRPFL. Three different solution methods are proposed: The scheduling problem is solved with CP, and the VRPFL is addressed with a CG-based heuristic and with an NF model. For simplicity the methods are presented for the autonomous part of the network only. This is without loss of generality, because the ATHN problem is decomposable. \subsection{Scheduling Modeling} \label{sec:CP} The ATHN optimization can be modeled as a scheduling problem and solved using CP as in the Ryder \shortcite{RyderSAM2021-ImpactAutonomousTrucking} case study. Figure~\ref{fig:formulation} reproduces the CP model for this problem using OPL syntax \cite{VanHentenryck1999-OplOptimizationProgramming}. \newsavebox{\modelbox} \begin{lrbox}{\modelbox} \begin{varwidth}{0.46\textwidth} \vspace{-0.25cm} \begin{lstlisting} range Trucks = ...;|\label{cp:range_start}| range Tasks = ...; range Sites = ...; range Horizon = ...; range Types = Sites union { shipType };|\label{cp:range_end}| int or[Tasks] = ...;|\label{cp:param_start}| int de[Tasks] = ...; int pickupTime[Tasks] = ...;|\label{cp:pickup_time}| int loadTime = ...;|\label{cp:load_time}| int flexibility = ...;|\label{cp:flex}| int travelTime[Types,Types] = ...; int travelCost[Types,Types] = ...;|\label{cp:param_end}| dvar interval task[t in Tasks] in Horizon size travelTime[or[t],de[t]] + 2*loadTime; dvar interval ttask[k in Trucks,t in Tasks] optional in Horizon size travelTime[or[t],de[t]] + 2*loadTime; dvar interval load[Trucks,Tasks] optional in Horizon size loadTime; dvar interval ship[k in Trucks,t in Tasks] optional in Horizon size travelTime[or[t],de[t]]; dvar interval unload[Trucks,Tasks] optional in Horizon size loadTime; dvar sequence truckSeq[k in Trucks] in append(all(t in Tasks)load[k,t],all(t in Tasks)ship[k,t],all(t in Tasks)unload[k,t]) types append(all(t in Tasks)or[t],all(t in Tasks)shipType,all(t in Tasks)de[t]); dvar int emptyMilesCost[Trucks,Tasks]; dvar int truckEmptyMilesCost[Trucks]; minimize sum(k in Trucks) truckEmptyMilesCost[k];|\label{cp:obj}| constraints { forall(t in Tasks)|\label{cp:flex_constr_start}| startOf(task[t]) >= pickupTime[t] - flexibility; startOf(task[t]) <= pickupTime[t] + flexibility;|\label{cp:flex_constr_end}| forall(k in Trucks,t in Tasks) span(ttask[k,t],[load[k,t],ship[k,t],unload[k,t]]);|\label{cp:span_constr}| startOf(ship[k,t]) == endOf(load[k,t]);|\label{cp:ship_constr}| startOf(unload[k,t]) == endOf(ship[k,t]);|\label{cp:unload_constr}| forall(t in Tasks) alternative(task[t],all(k in Trucks) ttask[k,t]);|\label{cp:alt_constr}| forall(k in Trucks,t in Tasks) emptyMilesCost[k,t] =
travelCost[de[t],typeOfNext(truckSeq[k],ttask[k,t],de[t],de[t])];|\label{cp:empty_miles_single}| forall(k in Trucks) truckEmptyMilesCost[k] = sum(t in Tasks) emptyMilesCost[k,t];|\label{cp:empty_miles_total}| forall(k in Trucks) noOverlap(truckSeq[k],travelTime);|\label{cp:no_overlap_constr}| }|\vspace{-0.25cm}| \end{lstlisting} \end{varwidth} \end{lrbox} \begin{figure}[!t] \makebox[\textwidth][l]{% \fbox{\begin{minipage}{0.47\textwidth} \usebox{\modelbox} \end{minipage}} } \caption{CP Model for the ATHN Problem.} \label{fig:formulation} \end{figure} The main decision variables are the interval variables {\tt task[t]} that specify the start and end times of task {\tt t} when processed by the autonomous network, and the optional interval variables {\tt ttask[k,t]} that are present if task {\tt t} is transported by truck {\tt k}. These optional variables consist of three subtasks that are captured by the interval variables {\tt load[k,t]} for loading, {\tt ship[k,t]} for transportation, and {\tt unload[k,t]} for unloading. The other key decision variables are the sequence variables {\tt truckSeq[k]} for every truck: these variables represent the sequence of tasks performed by every truck. They contain the loading, shipping, and unloading interval variables associated with the trucks, and their types. The type of a loading interval variable is the origin of the task, the type of an unloading interval variable is the destination of the task, and the type of the shipping interval variable is the specific type {\tt shipType} that is used to represent the fact that there is no transition cost and transition time between the loading and shipping subtasks, and between the shipping and unloading subtasks. The model also contains two auxiliary decision variables to capture the empty mile cost between a task and its successor, and the empty mile cost of the truck sequence. The objective function (line \ref{cp:obj}) minimizes the total empty mile cost. The constraints in lines \ref{cp:flex_constr_start}--\ref{cp:flex_constr_end} specify the potential start times of the tasks, and are defined in terms of the pickup times and the flexibility parameter. The {\sc span} constraints (line \ref{cp:span_constr}) link the task variables and their subtasks, while the constraints in lines \ref{cp:ship_constr}--\ref{cp:unload_constr} link the subtasks together. The {\sc alternative} constraints on line \ref{cp:alt_constr} specify that each task is processed by a single truck. The empty mile cost between a task and its subsequent task (if it exists) is computed by the constraints in line \ref{cp:empty_miles_single} that use the {\sc typeOfNext} expression on the sequence variables. The total empty mile cost for a truck is computed in line \ref{cp:empty_miles_total}. The {\sc noOverlap} constraints in line \ref{cp:no_overlap_constr} impose the disjunctive constraints between the tasks and the transition times. The CP model is solved with the CPLEX CP Optimizer version 12.8. \subsection{Vehicle Routing Modeling} \label{sec:VRPFLcg} The ATHN optimization can be modeled as a variant of the Vehicle Routing Problem with Full Truckloads \cite{ArunapuramEtAl2003-VehicleRoutingScheduling}. The VRPFL is formulated on the directed \emph{task graph} $G=(V,A)$. The set $V = T \cup \{src, snk\}$ contains a vertex for every task, together with a source and a sink node. Operations of a single truck are modeled by a route from $src$ to $snk$, where arcs model the transition from one task to the next.
Arcs are defined between the tasks, going out of the source, and going into the sink. If arc $a \in A$ connects task $t$ to task $t'$, it is associated with time $\tau_{tt'} = \tau_{o(t)d(t)} + 2S + \tau_{d(t)o(t')}$ and cost $c_{tt'} = c_{o(t)d(t)} + c_{d(t)o(t')}$, i.e., performing task $t$ and relocating to the start location of task $t'$. Arcs connecting to $src$ take no time and have no cost, and arcs into $snk$ do not require relocation. Every vertex $t \in V\backslash\{src,snk\}$ has a time window $[p(t)-\Delta, p(t)+\Delta]$. The ATHN problem amounts to finding a minimum-cost set of at most $\lvert K \rvert$ feasible routes on $G$ that cover all vertices. A route is feasible if it starts at the source, ends at the sink, and satisfies the time constraints. \paragraph{Preprocessing} A computational challenge in solving Vehicle Routing Problems (VRPs) is dealing with cycles in the underlying graph: either cycles are not addressed, which results in a weak lower bound, or cycles are avoided, which takes computational effort \cite{CostaEtAl2019-ExactBranchPrice}. For long-distance trucking, it turns out that almost all cycles can be removed in preprocessing by only keeping arcs between tasks $t$ and $t'$ if it is possible to perform the tasks in that order, i.e., $p(t)-\Delta + \tau_{tt'} \le p(t')+\Delta$. The arcs imply an ordering on $p(t)$ if all tasks take sufficiently long. In particular, for $\Delta \le S$ it is guaranteed that $\tau_{tt'} > 2S \ge 2\Delta$ such that every arc follows the ordering $p(t) < p(t')$, and the graph is acyclic. \paragraph{Column Generation} VRP variants are often formulated with a set-partitioning formulation and solved with column generation (e.g., see \cite{CostaEtAl2019-ExactBranchPrice}). Let $R$ be the set of all feasible routes, and let binary variable $x_r$ be one if and only if route $r \in R$ is selected. The cost of a route $c_r$ is the sum over the arc costs. Figure~\ref{fig:cgmodel} states the set-partitioning formulation. Objective~\eqref{eq:cg:obj} minimizes the total distance and Constraints~\eqref{eq:cg:cover} ensure that all tasks are performed. Constraint~\eqref{eq:cg:vehicles} limits the number of routes to the available fleet, and Equations~\eqref{eq:cg:x} define the variables. \begin{figure}[!t] \centering \begin{align} \min \quad &\sum_{r \in R} c_r x_r\label{eq:cg:obj}\\ \text{s.t.} \quad &\sum_{r \in R \vert t \in r} x_{r} = 1 \qquad \forall t \in T \label{eq:cg:cover}\\ &\sum_{r \in R} x_{r} \leq \lvert K \rvert\label{eq:cg:vehicles}\\ &x_{r} \in \{0,1\} \qquad \forall r \in R \label{eq:cg:x} \end{align} \caption{Set-Partitioning Form. for the ATHN Problem.} \label{fig:cgmodel} \end{figure} A CG-based restricted master heuristic \cite{JoncourEtAl2010-ColumnGenerationBased} is used to solve the problem. First, CG is used to solve the Linear Programming (LP) relaxation and to obtain a lower bound on the objective value. CG is a technique to solve LPs by only generating the variables (columns) as they are needed, which makes it well suited to dealing with the large number of $x$-variables \cite{DesaulniersEtAl2005-ColumnGeneration}. The problem is split into a master problem and a subproblem that are solved iteratively until convergence. The master problem is an LP that is solved with Gurobi 9.1.2, and the subproblem is typically solved with a labeling algorithm. This paper uses the labeling algorithm provided by the cspy Python package \cite{Sanchez2020-CspyPythonPackage}. Motivated by the (almost) acyclic nature of the task graphs, this paper does not address cycles.
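For concreteness, the arc-preprocessing step described above can be sketched in a few lines of Python (the task and travel-time data structures here are illustrative, not the code used in the experiments):

\begin{lstlisting}
from itertools import permutations

def build_arcs(tasks, tau, S, delta):
    """Preprocessed arc set of the task graph.

    tasks: dict t -> {'o': origin, 'd': destination, 'p': pickup time}
    tau:   dict (i, j) -> driving time between locations i and j
    Returns dict (t, t') -> tau_{tt'} (task duration plus empty relocation).
    """
    arcs = {}
    for t, u in permutations(tasks, 2):
        # tau_{tt'} = tau_{o(t)d(t)} + 2S + tau_{d(t)o(t')}
        tau_tu = (tau[(tasks[t]['o'], tasks[t]['d'])] + 2 * S
                  + tau[(tasks[t]['d'], tasks[u]['o'])])
        # keep the arc only if t can feasibly precede t':
        # p(t) - delta + tau_{tt'} <= p(t') + delta
        if tasks[t]['p'] - delta + tau_tu <= tasks[u]['p'] + delta:
            arcs[(t, u)] = tau_tu
    return arcs
\end{lstlisting}

For $\Delta \le S$ every surviving arc satisfies $p(t) < p(t')$, so the resulting graph is acyclic, as argued above.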
The CG procedure results in a valid lower bound and a set $\bar{R}$ of routes that are relevant to the LP relaxation. An upper bound is generated by solving the set-partitioning formulation with integer variables for route set $\bar{R}$. To make it easier to construct solutions, tasks are allowed to be performed multiple times, and duplicates are removed afterwards. The result is a feasible solution to the ATHN problem. The CG method is a heuristic, but may be extended to an exact method by embedding CG in a branch-and-bound framework \cite{BarnhartEtAl1998-BranchPriceColumn}. \paragraph{Network Flow} \label{sec:mcnf} The VRPFL can also be modeled with the vehicle-flow formulation that is presented in Figure~\ref{fig:nfmodel} (similar to VRP1 in Toth and Vigo~\shortcite{TothVigo2014-VehicleRoutingProblems}). Rather than using route variables, the vehicle-flow formulation uses binary flow variables $y_a$ that indicate whether arc $a \in A$ is part of any route. For brevity, let $\delta^+(t)$ and $\delta^-(t)$ denote the set of arcs going out of and coming into $t \in V$, respectively. Objective~\eqref{eq:nf:obj} minimizes the total distance. Constraints~\eqref{eq:nf:flow-cover} and~\eqref{eq:nf:flow+cover} require that each task has an inflow and outflow of one, which ensures the task is performed, and Constraint~\eqref{eq:nf:maxvehicles} limits the maximum number of vehicles. The subtour elimination constraints prevent cyclic flows and the time constraints ensure that the time windows are satisfied. Together the constraints ensure a disjoint set of feasible routes, just like the set-partitioning model in Figure~\ref{fig:cgmodel}. Toth and Vigo~\shortcite{TothVigo2014-VehicleRoutingProblems} present different implementations for Constraints~\eqref{eq:nf:subtour}-\eqref{eq:nf:time}, but the details are not important to this paper. \begin{figure}[!t] \centering \begin{align} \min \quad &\sum_{a \in A} {c}_{a} y_{a} \label{eq:nf:obj}\\ \text{s.t.} \quad &\sum_{a \in \delta^{-} (t)}y_{a} = 1 \quad \forall t \in V\backslash\{src,snk\} \label{eq:nf:flow-cover}\\ &\sum_{a \in \delta^{+} (t)}y_{a} = 1 \quad \forall t \in V\backslash\{src,snk\} \label{eq:nf:flow+cover}\\ &\sum_{a \in \delta^{+} (src)}y_{a} \le \lvert K \rvert \label{eq:nf:maxvehicles}\\ & \textrm{\emph{(subtour elimination constraints)}} \label{eq:nf:subtour}\\ & \textrm{\emph{(time constraints)}} \label{eq:nf:time}\\ &y_a \in \{0,1\} \quad \forall a \in A \label{eq:nf:vars} \end{align} \caption{Vehicle-Flow Form. for the ATHN Problem.} \label{fig:nfmodel} \end{figure} Given the near-acyclicity of task graphs in ATHNs, Constraints~\eqref{eq:nf:subtour}-\eqref{eq:nf:time} are relaxed. The remaining Problem~\eqref{eq:nf:obj}-\eqref{eq:nf:maxvehicles}, \eqref{eq:nf:vars} will be referred to as the \emph{NF model}. Relaxing these constraints is motivated by the structure of the task graph: Only a few cycles are expected {\em after preprocessing the arcs}, which makes the subtour elimination constraints almost redundant. Furthermore, the remaining arcs only connect tasks that can be performed sequentially in time, making the time constraints less important. Another important observation is that the NF model has the integrality property and can therefore be solved in polynomial time with LP \cite{AhujaEtAl1993-NetworkFlowsTheory}. Solving the NF model \eqref{eq:nf:obj}-\eqref{eq:nf:maxvehicles}, \eqref{eq:nf:vars} with linear programming immediately provides a lower bound.
This follows from the fact that the NF model is a relaxation of the vehicle-flow formulation~\eqref{eq:nf:obj}-\eqref{eq:nf:vars}. It is important to remark that {\em the lower bound still depends on the time flexibility parameter $\Delta$}, even if the time constraints are relaxed. This stems from the arc preprocessing step, which removes more arcs when the flexibility shrinks. The NF model for a flexibility $\delta$ is denoted by NF$_\delta$. Upper bounds are generated according to the following procedure. Solve NF$_\delta$ for a selection of different time flexibilities $\delta \le \Delta$, including $\delta=0$. For each $\delta$, take the vehicle routes that are produced by NF$_\delta$ and try to apply them to NF$_\Delta$. First, check if the subtour elimination constraints are satisfied. Next, if this is the case, try to satisfy the time constraints of NF$_\Delta$ by following each of the routes and setting the earliest possible starting time for every vertex. If this is also successful, the resulting solution is a feasible solution to the ATHN problem with flexibility $\Delta$. The above procedure is opportunistic, but surprisingly it is \emph{guaranteed} to produce a feasible solution for $\delta=0$ if one exists, and this solution is trivially feasible for larger flexibilities. \begin{proposition} For time flexibility $\Delta = 0$ the NF model produces an optimal solution. \end{proposition} \begin{proof} Arc preprocessing for $\Delta=0$ ensures that all arcs $(t,t')$ satisfy $p(t) + \tau_{tt'} \le p(t')$. This immediately implies that the graph is acyclic and thus that the subtour elimination constraints are automatically satisfied. Following any route, the condition above ensures that starting task $t$ at time $p(t)$ satisfies the time constraints. As the minimum-cost solution to NF$_0$ also satisfies the relaxed constraints, the solution is optimal. \end{proof} \noindent The fact that the upper bound procedure is guaranteed to work is specifically due to autonomous vehicles. The main technical difference is that autonomous trucks are completely interchangeable, while human-driven trucks need to be distinguished to ensure that drivers return to their specific starting point \cite{ArunapuramEtAl2003-VehicleRoutingScheduling} or that they do not exceed the maximum driving time \cite{GronaltEtAl2003-NewSavingsBased}. Gronalt \emph{et al.}~\shortcite{GronaltEtAl2003-NewSavingsBased} consider a network flow relaxation with aggregated drivers, which results in a lower bound, but the outcomes cannot be transformed into upper bounds. Human factors do not apply to autonomous trucks, which enables the simple upper bound strategy in this paper. \section{Case Study} \label{sec:casestudy} The three models are applied to the realistic order data introduced in the Ryder~\shortcite{RyderSAM2021-ImpactAutonomousTrucking} white paper. Ryder prepared this representative dataset for its dedicated transportation business in the Southeast of the US, reducing the scope to orders that were strong candidates for automation. \paragraph{Data Description} \label{sec:data} The dataset consists of long-haul trips (431 miles average) that start in the first week of October 2019, and stay completely within the following states: AL, FL, GA, MS, NC, SC, and TN. The case study focuses on scheduling the 494 most \emph{challenging orders} that currently consist of a single delivery followed by an empty return trip. These orders make up 24\% of the dataset, and account for 53\% of the empty mileage.
Two sets of hubs are considered: a small network with 17 transfer hubs in the areas where Ryder trucks most frequently access the highway system, and a large network that includes 13 additional hubs in locations with fewer highway access points. Figure~\ref{fig:designs} visualizes the networks. The exact hub locations are masked, but the figures are accurate within a 50 mile range. Orders that are better served with a conventional truck are filtered out, and the remaining orders are split into separate tasks. See the Ryder \shortcite{RyderSAM2021-ImpactAutonomousTrucking} white paper for additional details. \begin{figure}[!t] \centering \includegraphics[width=0.20\textwidth, trim=28cm 3cm 29cm 15cm, clip]{obfuscated_design_small.png} \hspace{0.5cm} \includegraphics[width=0.20\textwidth, trim=28cm 3cm 29cm 15cm, clip]{obfuscated_design_large.png} \caption{Small and Large Networks for the Southeast.} \label{fig:designs} \end{figure} \paragraph{Base Cases} \label{sec:base} Two base cases are considered in the case study: the \emph{N-17 base case} on the small network and the \emph{N-30 base case} on the large network. Both cases assume a time flexibility of $\Delta=60$ minutes, a loading/unloading time of $S=30$ minutes, the availability of $\lvert K \rvert = 50$ autonomous trucks, and that operating autonomous trucks is 25\% cheaper per mile than conventional trucks. After filtering, the N-17 and N-30 base cases respectively serve 437 and 468 orders on the ATHN. This paper focuses on optimizing the autonomous part of the system. When comparing costs to the current system, it is assumed that the first/last-mile tasks can be served with at most 25\% empty miles. The associated models are significant in size: The CP model (Figure~\ref{fig:formulation}) has about 100,000 variables and 100,000 constraints. The NF model~\eqref{eq:nf:obj}-\eqref{eq:nf:maxvehicles}, \eqref{eq:nf:vars} is similar in size but only contains continuous variables. The set-partitioning formulation (Figure~\ref{fig:cgmodel}) has around 500 constraints and generates about 3,000 variables as part of the CG procedure. \paragraph{Base Case Results} \label{sec:baseresults} The CG and NF methods are used to generate the lower bounds that are presented in Table~\ref{tab:basecaseLB}. The NF model is indeed very effective in exploiting the problem structure: Compared to CG, the solution time is reduced from hours to seconds while the lower bounds remain exactly the same. This result is possible because the NF model only ignores constraints that are unlikely to be violated and therefore hardly affect the bound. Calculating the NF bound only requires solving an LP, which explains the tremendous speedup. The bounds are not guaranteed to always be the same: If autonomous trucks are 40\% cheaper than regular trucks and time flexibility is increased to $\Delta=120$ minutes, then CG generates a lower bound that is 0.07\% better than the bound produced by NF. Hence the NF model presents an excellent trade-off in terms of quality and time. The Ryder~\shortcite{RyderSAM2021-ImpactAutonomousTrucking} white paper only considers the CP method, which inherently does not produce bounds. Now that lower bounds are available, they will be used to assess solution quality and complement earlier work.
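To make the NF bound computation concrete, the following minimal sketch (in Python, using the {\tt networkx} package; the node-splitting encoding is one possible implementation, not necessarily the one used for the experiments) solves the NF model~\eqref{eq:nf:obj}-\eqref{eq:nf:maxvehicles}, \eqref{eq:nf:vars} as a min-cost flow problem:

\begin{lstlisting}
import networkx as nx

def solve_nf(tasks, arc_costs, num_trucks):
    """NF relaxation as a min-cost flow; node splitting forces exactly
    one unit of flow through every task, and integrality is automatic.

    arc_costs: dict (t, t') -> relocation cost c_{tt'} over the
    preprocessed arcs (costs assumed integral, e.g. miles).
    """
    G = nx.DiGraph()
    G.add_node("src", demand=-num_trucks)  # supplies |K| vehicle units
    G.add_node("snk", demand=num_trucks)
    # free bypass arc models unused vehicles (the <= |K| inequality)
    G.add_edge("src", "snk", weight=0, capacity=num_trucks)
    for t in tasks:
        G.add_node(("in", t), demand=1)    # inflow of exactly one
        G.add_node(("out", t), demand=-1)  # outflow of exactly one
        G.add_edge("src", ("in", t), weight=0, capacity=1)   # start route at t
        G.add_edge(("out", t), "snk", weight=0, capacity=1)  # end route at t
    for (t, u), cost in arc_costs.items():
        G.add_edge(("out", t), ("in", u), weight=int(cost), capacity=1)
    flow = nx.min_cost_flow(G)             # integral optimum of the relaxation
    return nx.cost_of_flow(G, flow), flow
\end{lstlisting}

The returned flow decomposes into at most $\lvert K\rvert$ routes, possibly plus detached cycles; the repair procedure described above turns it into a feasible plan.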
\begin{table}[!t] \begin{adjustbox}{width=1\linewidth,center} \begin{tabular}{lcccccc} \toprule & \multicolumn{2}{c}{CP} & \multicolumn{2}{c}{CG} & \multicolumn{2}{c}{NF} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} & LB (mi) & Time & \multicolumn{1}{c}{LB (mi)} & \multicolumn{1}{c}{Time} & \multicolumn{1}{c}{LB (mi)} & \multicolumn{1}{c}{Time} \\ \midrule N-17 & n.a. & n.a. & 122,061 & 17.7 h & 122,061 & 8 s \\ N-30 & n.a. & n.a. & 134,382 & 24.1 h & 134,382 & 9 s \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Base Case Lower-Bound Comparison.} \label{tab:basecaseLB} \vspace{\baselineskip} \begin{adjustbox}{width=1\linewidth,center} \begin{tabular}{lcccccc} \toprule & \multicolumn{2}{c}{CP} & \multicolumn{2}{c}{CG} & \multicolumn{2}{c}{NF} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} & UB (mi) & Time & \multicolumn{1}{c}{UB (mi)} & \multicolumn{1}{c}{Time} & \multicolumn{1}{c}{UB (mi)} & \multicolumn{1}{c}{Time} \\ \midrule N-17 & 135,834 (11.3\%) & 1.0 h & - & 1.0 h & 122,658 (0.5\%) & 20 s \\ N-30 & 150,573 (12.0\%) & 1.0 h & - & 1.0 h & 135,486 (0.8\%) & 20 s \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Upper-Bound Comparison (gaps in parentheses).} \label{tab:basecaseUB} \end{table} Table~\ref{tab:basecaseUB} compares the three methods on their ability to produce high-quality solutions. For consistency with earlier work, each method is given one additional hour after computing the lower bound to generate a feasible solution. The CG method makes use of the route set $\bar{R}$ that was obtained while calculating the lower bound. The NF method uses the procedure described in Section~\ref{sec:mcnf} to opportunistically generate solutions from NF$_\delta$ for $\delta\in \{0, 30, 60\}$. Additionally, the table includes optimality gaps that compare the upper bounds to the lower bounds presented earlier. \begin{figure}[!t] \includegraphics[width=0.9\linewidth,trim={0.5cm 1.2cm 5cm 0},clip]{small_ub_gantt.pdf} \caption{NF Solution for the N-17 Base Case (one vehicle per row, blue for performing tasks and red for relocation).} \label{fig:small_ub_gantt} \end{figure} The NF method again outperforms the other methods. The solutions found by the CP method already correspond to significant savings of more than 27\% for ATHN compared to the current system. However, the optimality gaps of over 11\% show that more savings may be possible. Surprisingly, the CG method fails to find a feasible solution within one hour. Intermediate solutions are found with 55 and 56 vehicles for N-17 and N-30 respectively, but these plans are already more expensive than those found with CP. It seems that the route set $\bar{R}$ is not necessarily a good starting point for finding a feasible solution. The NF method is the clear winner: Not only are the solutions significantly better than those obtained from the CP method, they are provably within only 1\% of optimality. Figure~\ref{fig:small_ub_gantt} visualizes the NF solution for the N-17 base case: The tasks are spread out over the week, which explains why the time constraints are not very restrictive and relaxing them still leads to good results.
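For completeness, here is a sketch of the opportunistic repair step of Section~\ref{sec:mcnf} that produces these upper bounds (continuing the illustrative data format of the previous sketches; {\tt arc\_time} is the $\tau_{tt'}$ map returned by the preprocessing sketch):

\begin{lstlisting}
def extract_routes(flow, tasks):
    """Read vehicle routes off an integral min-cost flow solution."""
    routes = []
    for t in tasks:
        if flow["src"].get(("in", t), 0) == 1:  # a route starts at task t
            route, cur = [t], t
            while True:
                succ = next(v for v, f in flow[("out", cur)].items() if f == 1)
                if succ == "snk":
                    break
                cur = succ[1]                   # succ == ("in", next task)
                route.append(cur)
            routes.append(route)
    return routes  # tasks missing from all routes lie on detached cycles

def earliest_start_feasible(route, tasks, arc_time, delta):
    """Set the earliest possible start time along the route; the route is
    feasible iff no start time exceeds its window p(t) + delta."""
    start = tasks[route[0]]['p'] - delta
    for t, u in zip(route, route[1:]):
        start = max(start + arc_time[(t, u)], tasks[u]['p'] - delta)
        if start > tasks[u]['p'] + delta:
            return False
    return True
\end{lstlisting}

A full repair also verifies that every task appears in some route (i.e., that there are no detached cycles), which is the subtour check in the procedure above.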
\paragraph{Impact of Time Flexibility} \label{sec:delta} \begin{figure}[!t] \centering \includegraphics[scale=0.42]{time_flexibility.png} \caption{Impact of Time Flexibility.} \label{fig:twdelta} \end{figure} Deviating from an agreed time window flexibility must be negotiated with the customer, but if there are significant benefits in terms of efficiency, this may be worth the effort. To determine the impact of appointment flexibility, the two models that produced feasible solutions (CP and NF) are compared on the N-17 base case for different values of $\Delta$ ranging from 30 to 120 minutes. The time limit is set to four hours per setting to account for the fact that instances with more flexibility are more difficult to solve. Figure~\ref{fig:twdelta} summarizes the upper bounds (UB-CP, UB-NF) and the lower bounds (LB-NF) that are generated for different flexibility parameters. If UB-CP deteriorates when $\Delta$ increases, the previous better solution is reported. The upper bounds for the NF method are calculated with the opportunistic procedure for $\delta \in \{0,30,60,90,120\}$. Similar to the base case, Figure~\ref{fig:twdelta} shows that the NF method maintains strong performance when time flexibility is increased. The largest optimality gap of 1.2\% is obtained for $\Delta=90$. The NF method was unable to improve from $\Delta=60$ to $\Delta=90$, but is able to benefit from the additional flexibility provided by $\Delta=120$. The CP method is able to benefit significantly from the extension of $\Delta=60$ to $\Delta=90$, but does not improve after that. A possible explanation is that additional flexibility increases the search space, which makes it more difficult for the CP method to find a good solution. The performance of the NF method is consistent when the experiment is repeated for different values of cost reduction per mile compared to conventional trucks. Additional instances are generated for 30\%, 35\%, and 40\% cost reduction and for different time flexibilities. On all instances the optimality gap found by the NF method is less than 1.6\%. These results further support that the NF method is able to find high-quality solutions. \paragraph{Impact on ATHN Cost Savings} The Ryder \shortcite{RyderSAM2021-ImpactAutonomousTrucking} white paper reported cost savings in the range of 27\%-40\% for the challenging orders when ATHN is compared to current operations. The lower value stems from applying the CP model to the N-17 base case. The solution found by the NF model improves this number to 32.2\%. The higher value follows from the scenario that considers the N-30 base case with autonomous trucks that are 40\% cheaper per mile. Applying the NF model to this instance results in a solution with 44.0\% cost savings compared to current operations. It follows that better optimization methods improve the range of potential cost savings from 27\%-40\% to 32\%-44\%, which strengthens the business case for ATHNs and autonomous trucking. \section{Conclusion} \label{sec:conclusion} Autonomous trucks are expected to fundamentally transform the freight transportation industry. Recent studies have shown that Autonomous Transfer Hub Networks (ATHNs), which combine autonomous trucks on the middle miles with human-driven trucks on the first and last miles, can produce significant savings. This paper presented three different methods to optimize ATHN operations: a Constraint-Programming (CP) method, a Column-Generation (CG) method, and a Network Flow (NF) method.
The methods were compared on a realistic case study with data provided by Ryder System, Inc. The paper complemented earlier work by calculating lower bounds on performance. This showed that the ATHN CP solution, which already corresponds to large savings compared to the current system, could potentially be improved. It was demonstrated on the case study that the NF model effectively exploits the problem structure and outperforms the other methods: It produces both lower bounds that are comparable to the CG method and upper bounds that improve upon the CP method in a matter of seconds. Further analysis showed that NF is able to benefit from additional pickup flexibility, and consistently outperforms the other methods. The NF method improved the range of potential savings of ATHN from 27\%-40\% to 32\%-44\%, further strengthening the business case for autonomous trucking. \section*{Acknowledgments} This research was partly funded through a gift from Ryder and partly supported by the NSF AI Institute for Advances in Optimization (Award 2112533). Special thanks to the Ryder team for their invaluable support, expertise, and insights. \bibliographystyle{named}
\section{Introduction} In this paper I consider the problem of counting the number of independent sets and kernels of regular graphs. For the purposes of this paper, a {\em graph} will be defined as a collection of {\em vertices} and a collection of {\em edges} that connect pairs of vertices. {\em Simple} graphs are graphs with no more than one edge between any two vertices, and with no edges that connect a vertex to itself. The {\em degree} of a vertex is the number of edges connected to that vertex. A {\em $k$-regular} graph is a simple graph in which each vertex has degree $k$. An {\em independent set} is a set of vertices in a graph, no two of which are connected by an edge. A {\em kernel}, also called a ``maximal independent set,'' is an independent set such that adding any other vertex to the set forces the set to contain a pair of vertices connected by an edge. Independent sets are closely related to ``hard sphere'' models that physicists use to model liquids and gases. In a hard sphere model, particles never overlap. Independent sets could therefore be seen as the legal positions of particles on a lattice, with no two particles being adjacent. In physics, the number of legal configurations of a hard sphere model is known as the ``partition function,'' and the logarithm of that number is known as the {\em entropy} of the model. In a recent paper \cite{freakout}, Chandrasekaran et al. proved that the average number of independent sets for 3-regular graphs of size $n$ will approach $w^n$ as $n$ grows large, where $w \approx 1.54563$. This value was computed using the Bethe approximation from statistical physics \cite{Jed}. They also made a very surprising prediction about the fluctuations around this result. Ordinarily, the fluctuations between random samples of similar systems will grow as $\sqrt n$, where $n$ is the size of the system. Instead, Chandrasekaran et al. conjectured that the standard deviation of the logarithm of the number of independent sets of random regular graphs will not be $O(\sqrt n)$, as one might expect, but instead be $O(1)$! This implies that the Bethe approximation will always provide an amazingly accurate estimate for the entropy of independent sets for every randomly chosen 3-regular graph. Chandrasekaran et al. proved that their surprising conjecture is true if the Shortest Cycle Cover Conjecture (SCCC) of Alon and Tarsi \cite{Alon} is true, but they offered no direct numerical evidence. This is most likely due to the difficulty of actually counting independent sets for large graphs. Counting independent sets, even for 3-regular graphs, is a \#P-hard problem \cite{sharp}, which means that the time it takes to count them exactly is expected to grow exponentially with the size of the graph. Well suited to counting solutions to combinatorial problems, however, is the binary decision diagram (BDD), first introduced by Bryant \cite{Bryant} and recently explicated by Knuth \cite{Knuth}. In this paper, I use BDDs to gather numerical evidence that convincingly confirms the conjecture of \cite{freakout}. Chandrasekaran et al. did not make any predictions about kernels. We can still use the BDDs to gather evidence about kernels, however, and the evidence shows that kernels behave very similarly to independent sets. More precisely, I make the novel conjectures that the average number of kernels of 3-regular graphs grows as $y^n$, with $y \approx 1.299$, and that the fluctuations in the logarithm of that number are only $O(1)$ as $n$ grows large.
\section{Independent Sets and Kernels} In this section I will give a more detailed explanation of independent sets. Recall that an independent set is defined as a set of vertices in a graph, no two of which are connected by an edge. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figure1.eps} \caption {A regular 6-graph.} \label{g6} \end{figure} Figure \ref{g6} shows a $3$-regular graph of six vertices (or ``6-graph''). The independent sets of this graph would be \{$\emptyset$, \{1\}, \{2\}, \{3\}, \{4\}, \{5\}, \{6\}, \{1, 3\}, \{1, 5\}, \{2, 4\}, \{2, 5\}, \{3, 6\}, \{4, 6\}\}, because those are the thirteen possible sets of vertices such that no two of them will be connected. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{figure2.eps} \caption {A different regular 6-graph.} \label{g6_2} \end{figure} Change the figure to another 6-graph of degree 3, and the independent sets change as well, as in figure \ref{g6_2}. This graph's independent sets are \{$\emptyset$, \{1\}, \{2\}, \{3\}, \{4\}, \{5\}, \{6\}, \{1, 3\}, \{1, 5\}, \{2, 4\}, \{2, 6\}, \{3, 5\}, \{4, 6\}, \{1, 3, 5\}, \{2, 4, 6\}\}. There are fifteen configurations here; this shows that there can be variance in the number of independent sets even in graphs of identical size and degree. Recall now that a kernel is a ``maximal independent set'' or an independent set to which one cannot add a vertex without also adding an edge. Although there were 13 independent sets for the graph in figure \ref{g6}, there are only 6 kernels: they are \{\{1, 3\}, \{1, 5\}, \{2, 4\}, \{2, 5\}, \{3, 6\}, \{4, 6\}\}. The graph in figure \ref{g6_2} contains 15 independent sets, but there are only {\em two} kernels: \{\{1, 3, 5\}, \{2, 4, 6\}\}. This shows that more independent sets does not necessarily translate into more kernels, and also shows that there is perhaps more variance in the number of kernels than in the number of independent sets. However, this paper will show that as the number of vertices grows, the number of kernels and independent sets will actually behave very similarly. Kernels and independent sets can also be thought of as binary functions of ones and zeroes. This is done by assigning each vertex of a graph a value of either 0 or 1. When checking if a possible configuration of 0's and 1's is an independent set or kernel, one considers the vertices included in the set to have value 1, and those that are excluded to have value 0. The binary function for an independent set (or kernel) has a value 1 if the configuration corresponds to an independent set (or kernel), and 0 otherwise. \section{Binary Decision Diagrams} Binary decision diagrams (BDDs) provide compact representations of binary functions \cite{Bryant}\cite{Knuth}; in our case the binary functions represent independent sets and kernels. Because BDDs are the source of all of the numerical evidence in this paper, it is essential that the paper contain an adequate explanation of them. Although graphs and BDDs look similar to each other, they serve quite different purposes. A BDD is composed of {\em nodes} and {\em links} between those nodes, only now the links ``flow'' in a particular direction and the relationship between the links and the nodes is more complicated than in a graph. Each node has a value, denoted {\tt V}, a {\tt LO} branch, which ``points'' to another node, and a {\tt HI} branch, which also points to another node. 
Each node's ({\tt LO, HI}) combination must be unique for it to be a true binary decision diagram, and at each node, ${\tt LO} \ne {\tt HI}$. At each node, {\tt V} describes the variable on which the decision depends. For example, in a graph of size $n$ as described above, it is often convenient to number the vertices of the graph $1, 2, 3,...,n$. So if we were to use a BDD to describe the binary function corresponding to independent sets, a node with ${\tt V}=x$ would depend on the vertex numbered $x$ in the graph. The {\tt LO} and {\tt HI} branches of this node would point to other nodes; the idea is that if the vertex numbered $x$ had a value of 1, one should take the {\tt HI} branch to the next node, and if it was equal to 0, one should take the {\tt LO} branch. These nodes will eventually point to two ``sinks,'' {\tt True} and {\tt False}. The sink one reaches by going down the tree will determine whether the path you have taken corresponds to the binary function having a value of 1 ({\tt True}) or 0 ({\tt False}). \begin{figure} \centering \includegraphics[width=.5\textwidth]{majoritybdd.eps} \caption {A BDD for the majority function.} \label{BDD1} \end{figure} This idea is best explained with an example. Let us suppose we have three binary variables, $x_1$, $x_2$, and $x_3$, and let us suppose our binary function is the ``majority function'' which has value 1 if and only if two or more of the three variables have value 1. The BDD for this problem would look like figure \ref{BDD1}. As we look at this BDD, we consider first the top node. In BDDs, a solid line denotes the {\tt HI} path and a dotted line the {\tt LO} path. Let us assume that $x_1=1$ and therefore we take the {\tt HI} path, to the leftmost 2 node. Here, we see that if we take the {\tt HI} path again, we go directly to the {\tt True} sink, without even considering $x_3$. This is because once we know that both $x_1$ and $x_2$ equal 1, we already know that the majority function equals 1--it doesn't matter what $x_3$ is. In fact, it would be incorrect to add the redundant extra node: we stated earlier that no node in a BDD can have ${\tt LO}={\tt HI}$, and both of the extra node's branches would point to {\tt True}. Let us go back to the leftmost 2 node. If we choose the {\tt LO} path, then that means $x_1=1$ and $x_2=0$. That means that for the majority function to equal 1, $x_3$ must equal 1. That is why $x_3$'s {\tt HI} branch points to {\tt True} and its {\tt LO} branch points to {\tt False}: this final variable decides the value of the majority function. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{messbdd.eps} \caption {A BDD representing the independent sets of a 3-regular 6-graph.} \label{BDD2} \end{figure} Now we consider a more complex BDD. Figure \ref{BDD2} is the BDD for the independent sets of the first graph we looked at. Extra {\tt False} sinks have been added for a clearer picture. This BDD is complicated, but one can still recognize some patterns. For example, one can only reach {\tt False} sinks by taking a {\tt HI} branch--this makes sense, since removing a vertex from an independent set always yields another independent set. The main use of BDDs such as this in this paper, however, is not to be read by humans, but to be read by computer. We exploit the fact that there exist algorithms to systematically construct the BDD for the independent sets and kernels of a graph \cite{Knuth}.
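To make the data structure concrete, here is a minimal Python sketch of the majority-function BDD of figure \ref{BDD1} and of the top-down evaluation just described (the node numbering is my own illustrative choice; nodes 0 and 1 are the {\tt False} and {\tt True} sinks):

\begin{verbatim}
FALSE, TRUE = 0, 1
# Each node k maps to (V, LO, HI); sinks carry V = n + 1 = 4 by convention.
majority_bdd = {
    0: (4, None, None),   # False sink
    1: (4, None, None),   # True sink
    2: (3, FALSE, TRUE),  # tests x3
    3: (2, FALSE, 2),     # x1 = 0: need both x2 and x3
    4: (2, 2, TRUE),      # x1 = 1: x2 = 1 gives True, else test x3
    5: (1, 3, 4),         # root, tests x1
}

def evaluate(bdd, root, x):
    """Follow LO/HI branches according to the assignment x[1..n]."""
    k = root
    while k not in (FALSE, TRUE):
        v, lo, hi = bdd[k]
        k = hi if x[v] else lo
    return k == TRUE

assert evaluate(majority_bdd, 5, {1: 1, 2: 1, 3: 0})      # majority holds
assert not evaluate(majority_bdd, 5, {1: 1, 2: 0, 3: 0})  # majority fails
\end{verbatim}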
Moreover, given a BDD, it is straightforward to {\em exactly} count the number of solutions of the binary function it represents, which in our case corresponds to the number of independent sets (or kernels). The counting algorithm \cite{Knuth} works as follows, where $s$ is the total number of nodes in the BDD, counting the {\tt True} and {\tt False} nodes as one node each, and $v_k$, $l_k$, and $h_k$ are {\tt V}, {\tt LO}, and {\tt HI} for the $k$th node. \begin{itemize} \item Step 1: [Loop over $k$]. Set $c_0 \leftarrow 0$, $c_1 \leftarrow 1$, and do Step 2 for $k=2,3,...,s-1$. Then return the answer $2^{v_{s-1}-1}c_{s-1}$. \item Step 2: [Compute $c_k$]. Set $l \leftarrow l_k$, $h \leftarrow h_k$, $c_k \leftarrow 2^{v_l-v_k-1}c_l+2^{v_h-v_k-1}c_h$. \end{itemize} Using this algorithm, it is possible to quickly and efficiently count solutions to independent sets of reasonably small graphs. This is what I did for graphs of degree 3 and sizes 6 to 40 (even numbers only, because it is impossible to have a graph of odd degree and odd size). My data convincingly confirm the conjecture of Chandrasekaran et al. for independent sets, and lead to similar conjectures for kernels. \section{Numerical evidence}\label{results} To create the data presented in this section, I generated 1000 random 3-regular simple graphs of each even size between 6 and 40. It is easy to do this using an algorithm that randomly adds edges between vertices that still have fewer than three edges, and that have not previously been connected. For each graph I then created a BDD by automatically generating, for that graph, appropriate input for D. E. Knuth's BDD creation program \cite{bdd14} written in his ``BDD language.'' The above counting algorithm was used to exactly count the number of solutions of each BDD. For both independent sets and kernels, the BDDs were created using a boolean function that was a large {\tt AND} function of a collection of local functions. For the independent set case, each local function required that the variables corresponding to the two vertices on an edge were not both 1. For the kernel case, the local functions required for each vertex variable that if it was 1, all its neighbors' vertex variables were 0, and if it was 0, at least one of its neighbors' vertex variables was 1. This last condition corresponds to the requirement that one cannot add a vertex to a maximal independent set and have it remain an independent set. Knuth's program records the number of memory accesses it makes as it creates a BDD. Memory accesses in modern computers dominate the running time, so they serve as a good proxy for computational complexity. I found that the average number of memory accesses to create a BDD for the independent sets of a 3-regular graph of size $n$ grew roughly as $400 \times 1.28^n$. Because of the exponential growth in the complexity, BDDs, like any other algorithm for exact counting of independent sets and kernels, are limited to relatively small $n$. \subsection{Independent sets} Chandrasekaran et al. prove that as $n$ grows large, the average number of independent sets of a 3-regular $n$-graph will approach $w^n$, where $w=z^{-3/2}(2-z)^{-1/2}$ and $z$ is a root of the equation $z^3+z-1=0$, giving $w \approx 1.545634155$.
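The counting algorithm above is short enough to transcribe directly; the following Python sketch applies it to the majority-function BDD from the previous example (same illustrative node numbering, with the root stored last and sinks carrying $v = n+1$):

\begin{verbatim}
def count_solutions(V, LO, HI):
    """Knuth's counting algorithm as stated above; nodes 0 and 1 are the
    False and True sinks (with V = n + 1), and node s-1 is the root."""
    s = len(V)
    c = [0] * s
    c[0], c[1] = 0, 1                        # Step 1
    for k in range(2, s):                    # Step 2 for k = 2, ..., s-1
        l, h = LO[k], HI[k]
        c[k] = 2**(V[l] - V[k] - 1) * c[l] + 2**(V[h] - V[k] - 1) * c[h]
    return 2**(V[s - 1] - 1) * c[s - 1]

# Majority-function BDD: V, LO, HI per node, root last.
V  = [4, 4, 3, 2, 2, 1]
LO = [None, None, 0, 0, 2, 3]
HI = [None, None, 1, 2, 1, 4]
assert count_solutions(V, LO, HI) == 4  # solutions: 110, 101, 011, 111
\end{verbatim}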
\begin{figure} \centering \begin{tabular}{c|c|c|c} $n$ & $w^n$ & mean & $w_{est} = e^{\frac{\ln mean}{n}}$\\ \hline 6 & 13.635 & 13.464 & 1.5423952668\\ \hline 8 & 32.573 & 31.815 & 1.54109350802 \\ \hline 10 & 77.815 & 75.777 & 1.54153624619\\ \hline 12 & 185.9005 & 181.494 & 1.54254741637\\ \hline 14 & 444.1134 & 434.487 & 1.54321669622\\ \hline 16 & 1060.980 & 1041.904 & 1.54388245415 \\ \hline 18 & 2534.665 & 2485.237 & 1.54394400334\\ \hline 20 & 6055.279 & 5930.353 & 1.5440239311\\ \hline 22 & 14465.97 & 14191.04 & 1.54428663307\\ \hline 24 & 34558.98 & 33960.44 & 1.54450939167\\ \hline 26 & 82560.89 & 81049.27 & 1.54453602897\\ \hline 28 & 197236.7 & 193795.5 & 1.54466285137\\ \hline 30 & 471195.6 & 462317.9 & 1.54465451307\\ \hline 32 & 1125679 & 1106305 & 1.54479583718\\ \hline 34 & 2689230 & 2639377 & 1.54478373281\\ \hline 36 & 6424531 & 6313624 & 1.54488668558\\ \hline 38 & 15348108 & 15109601 & 1.54499725062\\ \hline 40 & 36666398 & 36075768 & 1.54500677979\\ \hline \end{tabular} \caption{Numerical results for the number of independent sets in random regular graphs, compared with the Bethe approximation estimate of \cite{freakout}, which says that the mean should approach $w^n$, with $w\approx 1.54563$, for large $n$.} \label{table1} \end{figure} Numerically, we can estimate $w$ for any $n$ as $w_{est} = \exp \left( \frac{\ln mean}{n} \right)$, where $mean$ is the numerically determined mean number of independent sets over the sampled graphs. Figure \ref{table1} presents a table that shows that even using graphs of size $n=40$ or less, we could numerically estimate $w$ accurately to three significant figures if we did not know its exact value. Note that the estimate of $w$ seems to be approaching its ultimate exact value from below. Chandrasekaran et al.'s conjecture about fluctuations was precisely stated as follows in their Theorem 11 \cite{freakout}: ``{\sl Let $G$ be chosen uniformly at random among all 3-regular graphs with $n$ vertices. Assuming SCCC is true, there exists a function $f:(0,1) \rightarrow \mathbb{R}^+$, so that $|\ln Z-\ln Z_B| \le f(\epsilon)$ with probability $1-\epsilon$, where $\frac{1}{n}\ln Z_B \approx \ln 1.545$.}'' Here $Z$ is the number of independent sets, $Z_B$ is the Bethe approximation to that number, and ``SCCC'' is the ``Shortest Cycle Cover Conjecture,'' due to Alon and Tarsi \cite{Alon} \cite{freakout}, which states ``{\sl Given a bridgeless graph $G$ with $m$ edges, all of its edges can be covered by a collection of cycles with the sum of their lengths being at most $7m/5=1.4m$.}'' Chandrasekaran et al.'s Theorem 11 means that for any probability $1-\epsilon$, the fluctuations in the logarithm of the number of independent sets of 3-regular graphs will not be more than $f(\epsilon)$. Since $f(\epsilon)$ does not depend on the size of the graph, that means that the fluctuations are $O(1)$. The data that follows in this section is meant to test this claim (which depends on the unproven SCCC), by numerically finding the function $f(\epsilon)$.
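For reference, the numerical estimates of $f(\epsilon)$ plotted below can be computed from the sampled counts in a few lines (a minimal sketch; the data format is illustrative):

\begin{verbatim}
import numpy as np

def f_estimate(log_counts, log_bethe, eps):
    """Empirical f(eps): the (1 - eps)-quantile of |ln Z - ln Z_B| over the
    sampled graphs, so |ln Z - ln Z_B| <= f(eps) with probability 1 - eps."""
    diffs = np.abs(np.asarray(log_counts) - log_bethe)
    return np.quantile(diffs, 1.0 - eps)

# e.g. for 3-regular 6-graphs, ln Z_B = ln 13.635; the sampled counts are
# all 13 or 15, giving the two levels visible in figure 4:
print(f_estimate(np.log([13, 15, 13, 13]), np.log(13.635), eps=0.0))
\end{verbatim}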
\begin{figure} \centering \includegraphics[width=0.6\textwidth]{a.1} \caption {Estimate of $f(\epsilon)$ for independent sets of 3-regular 6-graphs.} \label{reg6} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{a.2} \caption {Estimate of $f(\epsilon)$ for independent sets of 3-regular 8-graphs.} \label{reg8} \end{figure} The approach I take is to plot a numerical estimate of $f(\epsilon)$, by computing the difference $|\ln Z-\ln Z_B|$ for each graph, and then finding the probability $1-\epsilon$ that the difference has a particular value for each $n$. Consider for example the plot shown in figure \ref{reg6}, which is for the $n=6$ case. This graph only contains two values on the vertical axis: one at approximately 0.0954, the other at approximately 0.0476. This is because, as was mentioned before, there can only be 13 or 15 independent sets of a regular 6-graph of degree 3, and those are the differences one finds (in the logarithm of the number) with respect to the Bethe approximation of approximately 13.635. As the number of variables becomes larger, however, the number of ``levels'' in the graph will also increase. For example, figure \ref{reg8} shows the estimate of $f(\epsilon)$ for regular 8-graphs. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{a.3} \caption {Estimates of $f(\epsilon)$ for independent sets of 3-regular $n$-graphs, with $n=10,12,14,16$.} \label{reg10-16} \end{figure} The value of $f(0)$, which is the largest difference found between the true logarithm of the number of independent sets and the Bethe approximation, increased from approximately 0.0954 to approximately 0.2646. This trend, however, will not continue, substantiating the prediction of Chandrasekaran et al. The largest difference $f(0)$ actually drops between 8 and 10, and will stabilize as the sizes get larger. So will the estimate for $f(\epsilon)$, for general $\epsilon$, which stops looking like a series of step functions and starts taking on a smoother shape. Figure \ref{reg10-16} is actually four plots, for 3-regular graphs of sizes 10, 12, 14, and 16, superposed. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{a.4} \caption {Estimates of $f(\epsilon)$ for independent sets of 3-regular $n$-graphs, with $n$ taking all even values between $18$ and $40$, inclusive.} \label{reg18-40} \end{figure} Figure \ref{reg18-40} shows the data for the remaining 12 plots, all superposed. Their sizes are the even numbers between 18 and 40, inclusive. It is difficult to believe that the figure contains twelve different sets of data. Not only do the fluctuations not grow beyond some upper limit (which is all that is necessary for Chandrasekaran et al.'s conjecture to be true); they hardly change at all! This means we can confidently estimate the expected size of the fluctuations in the entropy from our numerical data. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{figure3.eps} \caption {A 6-graph with average degree 3 that is not 3-regular.} \label{r6} \end{figure} To emphasize how unusual the behavior shown in figure \ref{reg18-40} is, I will present a similar set of data for random graphs that are selected to have the same size and the same average degree of 3, but are not necessarily regular. An example of a non-regular random 6-graph with average degree 3 is shown in figure \ref{r6}.
This graph's independent sets are \{$\emptyset$, \{1\}, \{2\}, \{3\}, \{4\}, \{5\}, \{6\}, \{1, 3\}, \{1, 4\}, \{2, 3\}, \{2, 6\}, \{3, 4\}, \{4, 6\}, \{1, 3, 4\}\}, so it has 14 independent sets. This graph was not previously possible, of course, because vertices 3 and 4 have only two edges each, and vertex 5 has five. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{a.7} \caption {Estimates of $f(\epsilon)$ for independent sets of $n$-graphs with average degree 3, with $n$ taking all even values between $10$ and $36$, inclusive. The fluctuations consistently grow as $n$ increases.} \label{rand10-36} \end{figure} For the class of $n$-graphs with average degree 3, the average number of independent sets is somewhat larger than it is for random 3-regular graphs. I find that the average number of independent sets grows as $x^n$, where $x \approx 1.594$. For the fluctuations, one can estimate a function $f(\epsilon)$ from the differences $|\ln Z - n \ln x|$, just as before. Figure \ref{rand10-36} shows the estimated $f(\epsilon)$, obtained in the same way as the regular graph data, for the random graphs with average degree 3, and for $n$ between 10 and 36. Here, the fluctuations are clearly increasing with $n$, as one would expect. \subsection{Kernels} \begin{figure} \centering \begin{tabular}{c|c|c} $n$ & mean & $y_{est} = e^{\frac{\ln mean}{n}}$ \\ \hline 8 & 7.941 & 1.29564015538 \\ \hline 10 & 14.437 & 1.30601358862 \\ \hline 12 & 23.420 & 1.30056553464 \\ \hline 14 & 39.822 & 1.30105155128 \\ \hline 16 & 66.855 & 1.30038175746 \\ \hline 18 & 112.229 & 1.29985445627 \\ \hline 20 & 189.283 & 1.29973729397 \\ \hline 22 & 321.368 & 1.30003386341 \\ \hline 24 & 540.124 & 1.29973224901 \\ \hline 26 & 904.901 & 1.29931791247 \\ \hline 28 & 1516.237 & 1.29896911345 \\ \hline 30 & 2581.067 & 1.29935147154 \\ \hline 32 & 4333.530 & 1.29912609539 \\ \hline 34 & 7308.847 & 1.29910009294 \\ \hline 36 & 12285.019 & 1.29895400448 \\ \hline 38 & 20694.544 & 1.29889831749 \\ \hline 40 & 34996.192 & 1.29897481351 \\ \hline \end{tabular} \caption{Table showing the mean number of kernels, averaged over 1000 3-regular $n$-graphs for each value of $n$, and the numerical estimate for $y$, where the average is given by $y^n$.} \label{table3} \end{figure} Next we look at the data for kernels. First, I find that the average number of kernels for 3-regular $n$-graphs is approximately equal to $y^n$, with $y \approx 1.299$. This value of $y$ can be read off from the table presented in figure \ref{table3}. Notice that $y$ actually seems to reach its ultimate value more quickly than $w$ did. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{a.10} \caption {Estimates of $f(\epsilon)$ for kernels of 3-regular $n$-graphs, with $n=6,8,10,12,14,16$.} \label{kern8-16} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{a.11} \caption {Estimates of $f(\epsilon)$ for kernels of 3-regular $n$-graphs, with $n$ taking all even values from 18 to 40, inclusive.} \label{kern18-40} \end{figure} For the plots of the fluctuations, one can estimate a function $f(\epsilon)$ analogous to the one for independent sets using the differences $|\ln Z - n \ln y|$, with $y=1.299$. With that in mind, figure \ref{kern8-16} shows the estimated function $f(\epsilon)$ for kernels of 3-regular $n$-graphs, with $n$ ranging over even numbers from 6 to 16 inclusive, while figure \ref{kern18-40} shows $f(\epsilon)$ for $n$ ranging from 18 to 40.
The estimated function $f(\epsilon)$ which measures fluctuations for kernels looks similar to that for independent sets, albeit with larger fluctuations. This is not so surprising if one recalls that regular 6-graphs have either 2 or 6 kernels, while they have 13 or 15 independent sets. Comparing figure \ref{kern18-40} for kernels with figure \ref{reg18-40} for independent sets, we see that the fluctuations in the entropy are nearly four times as large for kernels as for independent sets. It is also clear, however, that numerically, kernels and independent sets share the essential property that their fluctuations do not grow as the graph size increases. I thus conjecture that the fluctuations in the logarithm of the number of kernels in 3-regular $n$-graphs will only be $O(1)$ as $n$ grows large. The strong similarity between the numerical results for kernels and independent sets suggests that a Bethe approximation \cite{Jed} could give a highly accurate result for the number of kernels. However, performing such a calculation turns out to be considerably more intricate for the case of kernels than it was for independent sets, because the binary function representing configurations that are kernels is the {\tt AND} of local functions of vertex variables that involve a vertex and all its neighbors (e.g., four variables in the case of 3-regular graphs), while for independent sets the local functions involve only simple pairs of vertices. I hope to report on the results of such a calculation in the near future. \section*{Acknowledgements} I thank Jonathan Yedidia for his encouragement and advice. \newpage
\section{Introduction} \label{section:intro} The extension of the Standard Model \cite{GSW} for one generation of fermions advocated for in \cite{Machet1} is endowed with two Higgs doublets, a ``chiral'' doublet \begin{equation} K = \left(\begin{array}{c}{\mathfrak p}^1-i{\mathfrak p}^2 \cr -({\mathfrak s}^0+{\mathfrak p}^3) \end{array}\right), \quad <{\mathfrak s}^0>=\frac{v}{\sqrt{2}}, \label{eq:K} \end{equation} and a ``weak'' doublet \begin{equation} H = \left(\begin{array}{c}{\mathfrak s}^1-i{\mathfrak s}^2 \cr -({\mathfrak p}^0+{\mathfrak s}^3) \end{array}\right), \quad <{\mathfrak s}^3>=\frac{\sigma}{\sqrt{2}}, \label{eq:H} \end{equation} both isomorphic to the Higgs doublet of the Standard Model \cite{GSW}. It constitutes the ``smallest maximal extension'' of the Glashow-Salam-Weinberg model. It is maximal in the sense that it incorporates all possible $J=0$ scalars (and pseudoscalars) that are expected for a given number of generations, and it is the smallest extension because it does not invoke {\em a priori} any physics ``beyond the Standard Model'' nor any new type of particle. ${\mathfrak s}^0$ and ${\mathfrak s}^3$ have non-vanishing vacuum expectation values (VEV's) as written in (\ref{eq:K}) and (\ref{eq:H}). There, the symbols ``$\mathfrak s$'' and ``$\mathfrak p$'' stand respectively for ``scalar'' and ``pseudoscalar'', such that $H$ and $K$ are parity transforms of each other. Their components, which we generically call $h^0, h^1, h^2, h^3$, transform respectively under $SU(2)_L$ and $SU(2)_R$ according to \begin{equation} \begin{array}{rcl} T^i_L\,.\,h^j&=&-\frac{1}{2}\left(i\,\epsilon_{ijk}h^k + \delta_{ij}\,h^0\right),\cr T^i_L\,.\,h^0 &=& -\frac{1}{2}\, h^i, \end{array} \label{eq:ruleL} \end{equation} and \begin{equation} \begin{array}{rcl} T^i_R\,.\,h^j&=&-\frac{1}{2}\left(i\,\epsilon_{ijk}h^k - \delta_{ij}\,h^0\right),\cr T^i_R\,.\,h^0 &=& +\frac{1}{2}\, h^i.\end{array} \label{eq:ruleR} \end{equation} The main steps of this work are the following. In section \ref{section:wmass} we give the general formula for the mass of the $\vec W$ gauge bosons in terms of the two VEV's $<{\mathfrak s}^0>$ and $<{\mathfrak s}^3>$. In section \ref{section:yukawa} we introduce Yukawa couplings of quarks to both Higgs doublets $K$ and $H$. It could have looked more natural to first introduce the scalar potential, but it turns out that the latter gets strongly constrained by the former. After giving their general expression, from which we extract the $u$ and $d$ quark masses in terms of $<{\mathfrak s}^0>$ and $<{\mathfrak s}^3>$, we investigate in section \ref{section:lowen} their low energy limit by using the one-to-one correspondence demonstrated in \cite{Machet1} between $K$, $H$ and 4-sets of bilinear quark operators. In this limit, renormalizability is not a concern and the Yukawa couplings can be rewritten in a very simple form in which, in particular, symmetries clearly show up. Using the Partially Conserved Axial Current hypothesis (PCAC) \cite{Dashen} \cite{Lee} \cite{dAFFR} and the Gell-Mann-Oakes-Renner (GMOR) \cite{GMOR} \cite{dAFFR} relation makes it possible to account for the mass of the pions and to determine the values of all but one of the Yukawa parameters. The last one is obtained by identifying the Goldstones of the spontaneously broken weak $SU(2)_L$ symmetry. A last constraint results from considering the $\pi^0-\eta$ system and requesting that it be devoid of any tachyonic state. 
This determines the quantity $(m_u-m_d)<\bar u u -\bar d d>$ (the quantity $(m_u+m_d)<\bar u u + \bar d d>$ is determined by the GMOR relation). We comment at length on fermion masses, and on the important role of both Higgs doublets in their generation. After gathering the values of the parameters in section \ref{section:paramsummary}, section \ref{section:pot} is devoted to the scalar potential. $V(K,H)$ is chosen to be invariant under the chiral group $U(2)_L \times U(2)_R$, which clearly identifies the Goldstones of chiral symmetry breaking. It depends on only two parameters, one quadratic and one quartic coupling. At low energy, it receives corrections from the bosonised (low energy) form of the Yukawa couplings, which yields an effective potential $V_{eff}(K,H)$. A last constraint comes from minimizing $V_{eff}$ at the known VEV's of the two Higgs bosons, which reproduce the pion and $W$ masses. It determines the value of the quartic coupling and the masses of the two Higgs bosons. In section \ref{section:couplings}, we determine their couplings to quarks, gauge bosons and leptons. Section \ref{section:symmetries} provides some additional considerations concerning symmetries, Goldstone and pseudo-Goldstone bosons. Several symmetries are at work and some fields play dual roles. We focus in particular on the custodial $SU(2)$ symmetry and on the respective roles of $<\bar u u + \bar d d>$ and $<\bar u u-\bar d d>$. Section \ref{section:moregen} gives some remarks concerning more generations. Section \ref{section:conclusion} is a brief conclusion. \section{Kinetic terms for the Higgs doublets and gauge boson masses} \label{section:wmass} The masses of the gauge bosons arise from the kinetic terms \begin{equation} \Big( (D_\mu K)^\dagger D^\mu K + (D_\mu H)^\dagger D^\mu H \Big) \label{eq:kinetic} \end{equation} for the two Higgs doublets $K$ and $H$. $D_\mu$ is the covariant derivative with respect to the group $SU(2)_L$ of weak interactions. Owing to the transformation laws (\ref{eq:ruleL}), the VEV's of ${\mathfrak s}^0$ and ${\mathfrak s}^3$ generate a mass $m_W$ for the $\vec W$ gauge bosons \begin{equation} m_W^2 =\frac{g^2}{2}\big(<{\mathfrak s}^0>^2 + <{\mathfrak s}^3>^2\big) = g^2\, \frac{v^2 + \sigma^2}{4}, \label{eq:mw} \end{equation} in which $g$ is the $SU(2)_L$ coupling constant \begin{equation} g \approx 0.61\,. \label{eq:g} \end{equation} \section{Yukawa couplings} \label{section:yukawa} We choose to first introduce the Yukawa couplings because their low-energy limit (see section \ref{section:lowen}) will in particular constrain the effective scalar potential. \subsection{General expression} \label{subsec:genyuk} Quarks must be coupled to the two Higgs doublets $K$ and $H$. Introducing the couplings $\rho_u$ and $\rho_d$ to $K$ and $\lambda_u$ and $\lambda_d$ to $H$, the Yukawa Lagrangian reads \footnote{$\tau^2$ is the Pauli matrix $\left(\begin{array}{rr} 0 & -i \cr i & 0 \end{array}\right)$. 
The doublets $\tilde K \equiv i\tau^2 K^\ast$ and $\tilde H\equiv i\tau^2 H^\ast$ are isomorphic to $K$ and $H$.} \begin{eqnarray} {\cal L}_{Yukawa}= &+&\rho_d \left(\begin{array}{cc}\overline{u_L}\ \overline{d_L}\end{array}\right) K \, d_R - \rho_u \left(\begin{array}{cc}\overline{u_L}\ \overline{d_L}\end{array}\right) (i\tau^2 K^\ast) \, u_R\cr &+&\lambda_d \left(\begin{array}{cc}\overline{u_L}\ \overline{d_L}\end{array}\right) H \, d_R +\lambda_u \left(\begin{array}{cc}\overline{u_L}\ \overline{d_L}\end{array}\right) (i\tau^2 H^\ast) \, u_R\cr &+& h.c., \label{eq:genyuk1} \end{eqnarray} which gives, explicitly, \begin{eqnarray} \hskip -.6cm{\cal L}_{Yukawa} = &-&\left[\delta_1\frac{v}{\sqrt{2}\mu^3}(\bar u u+\bar d d) +\kappa_{12}\frac{\sigma}{\sqrt{2}\nu^3}(\bar u u-\bar d d)\right]{\mathfrak s}^0 -\left[\delta_{12}\frac{v}{\sqrt{2}\mu^3}(\bar u u+\bar d d) +\delta_2\frac{\sigma}{\sqrt{2}\nu^3}(\bar u u-\bar d d)\right]{\mathfrak s}^3\cr & \cr &+&\left[ \delta_1\frac{v}{\sqrt{2}\mu^3} \Big(\bar u\gamma_5 d\, {\mathfrak p}^- +\bar d\gamma_5 u\, {\mathfrak p}^+ +(\bar u \gamma_5 u -\bar d \gamma_5 d)\,{\mathfrak p}^3\Big) +\kappa_{12}\frac{\sigma}{\sqrt{2}\nu^3} \Big(\bar d u\, {\mathfrak p}^+-\bar u d\, {\mathfrak p}^- +(\bar u \gamma_5 u + \bar d\gamma_5 d)\,{\mathfrak p}^3\Big) \right] \cr & \cr &-&\left[ \delta_{12}\frac{v}{\sqrt{2}\mu^3}\Big(\bar d\gamma_5 u\, {\mathfrak s}^+ -\bar u\gamma_5 d\,{\mathfrak s}^- -(\bar u\gamma_5 u - \bar d\gamma_5 d)\,{\mathfrak p}^0\Big) + \delta_2\frac{\sigma}{\sqrt{2}\nu^3}\Big(\bar d u\, {\mathfrak s}^+ + \bar u d\, {\mathfrak s}^--(\bar u\gamma_5 u + \bar d\gamma_5 d)\,{\mathfrak p}^0\Big) \right].\cr && \label{eq:genyuk2} \end{eqnarray} In (\ref{eq:genyuk1}) and (\ref{eq:genyuk2}) the signs have been set such that for positive $<{\mathfrak s}^0>$ and $<{\mathfrak s}^3>$, the fermion masses are positive for positive $\rho_{u,d}$ and $\lambda_{u,d}$ (given that a fermion mass term is of the form $-m\bar\psi \psi$). We introduced in (\ref{eq:genyuk2}) the parameters with dimension $[mass]^2$ \begin{eqnarray} \delta_1 &=& \displaystyle\frac{\rho_u + \rho_d}{2}\,\frac{\sqrt{2}\mu^3}{v} ,\cr &&\cr \kappa_{12} &=& \displaystyle\frac{\rho_u - \rho_d}{2}\,\frac{\sqrt{2}\nu^3}{\sigma} ,\cr &&\cr \delta_{12} &=& \displaystyle\frac{\lambda_u+\lambda_d}{2}\,\frac{\sqrt{2}\mu^3}{v} ,\cr &&\cr \delta_2 &=& \displaystyle\frac{\lambda_u-\lambda_d}{2}\,\frac{\sqrt{2}\nu^3}{\sigma}. \label{eq:params} \end{eqnarray} \subsection{Fermion masses} \label{subsec:fermass} We define the two quantum Higgs fields $\varsigma$ and $\xi$ by shifting the scalar fields ${\mathfrak s}^0$ and ${\mathfrak s}^3$ occurring respectively in the Higgs doublets $K$ and $H$ (see (\ref{eq:K}),(\ref{eq:H})) according to \begin{equation} {\mathfrak s}^0 = <{\mathfrak s}^0> + \varsigma, \quad {\mathfrak s}^3 = <{\mathfrak s}^3> + \xi. \label{eq:defhxi} \end{equation} The two VEV's (given in (\ref{eq:K}) and (\ref{eq:H})) contribute to the fermion masses according to \begin{equation} m_u=\rho_u<{\mathfrak s}^0> +\lambda_u<{\mathfrak s}^3> =\frac{v\rho_u + \sigma\lambda_u}{\sqrt{2}},\quad m_d=\rho_d<{\mathfrak s}^0> +\lambda_d<{\mathfrak s}^3> =\frac{v\rho_d + \sigma\lambda_d}{\sqrt{2}}. \label{eq:mumd} \end{equation} Additional remarks concerning fermion masses are written in subsection \ref{subsec:fmasslowen}. 
\section{The low energy limit} \label{section:lowen} At low energy we use the one-to-one correspondence between $K,H$ and \vbox{ \begin{eqnarray} {\mathfrak K} =\frac{1}{\sqrt{2}}\frac{v}{\mu^3}\left(\begin{array}{c} \phi^1-i\phi^2 \cr -(\phi^0+\phi^3)\end{array}\right) &=&\frac{v\sqrt{2}}{\mu^3}\left(\begin{array}{c} \bar d \gamma_5 u \cr -\frac12(\bar u u + \bar d d) -\frac12(\bar u\gamma_5 u - \bar d \gamma_5 d) \end{array}\right) \equiv\left(\begin{array}{c} {\mathfrak k}^1-i{\mathfrak k}^2\cr-({\mathfrak k}^0+{\mathfrak k}^3)\end{array}\right),\cr && \cr <\bar u u + \bar d d> &=& \mu^3,\cr && \cr {\mathfrak H} =\frac{1}{\sqrt{2}}\frac{\sigma}{\nu^3}\left(\begin{array}{c} \xi^1-i\xi^2\cr -(\xi^0+\xi^3)\end{array}\right) &=&\frac{\sigma\sqrt{2}}{\nu^3}\left(\begin{array}{c} \bar d u \cr -\frac12(\bar u\gamma_5 u + \bar d\gamma_5 d) -\frac12(\bar u u - \bar d d) \end{array}\right) \equiv\left(\begin{array}{c} {\mathfrak h}^1-i{\mathfrak h}^2\cr-({\mathfrak h}^0+{\mathfrak h}^3)\end{array}\right),\cr && \cr <\bar u u - \bar d d> &=& \nu^3,\cr && \label{eq:compdoub} \end{eqnarray} } that has been established in \cite{Machet1} and identify accordingly \begin{equation} (\mathfrak{s}^0, \mathfrak{p}^1, \mathfrak{p}^2, \mathfrak{p}^3) \simeq (\mathfrak{k}^0, \mathfrak{k}^1, \mathfrak{k}^2, \mathfrak{k}^3),\quad (\mathfrak{p}^0, \mathfrak{s}^1, \mathfrak{s}^2, \mathfrak{s}^3) \simeq (\mathfrak{h}^0, \mathfrak{h}^1, \mathfrak{h}^2, \mathfrak{h}^3). \label{eq:ident} \end{equation} \subsection{Rewriting Yukawa couplings} \label{subsec:newyuk} The first consequence of this correspondence is that, defining \begin{equation} m_{12}^2 = \kappa_{12}+ \delta_{12}, \label{eq:defm12} \end{equation} and expressing the bilinear quark operators in (\ref{eq:genyuk2}) in terms of the components $({\mathfrak s}^0, \vec {\mathfrak p}),({\mathfrak p}^0,\vec {\mathfrak s})$ of $K$ and $H$, the Yukawa couplings (\ref{eq:genyuk2}) can be rewritten as \begin{equation} {\cal L}_{Yukawa}^{eff}=-\delta_1\, K^\dagger K - \frac12m_{12}^2\, (K^\dagger H + H^\dagger K) -\delta_2\, H^\dagger H, \label{eq:lowenyuk1} \end{equation} or, equivalently, since renormalizability is not an issue at low energy, as a sum of 4-fermion interactions \begin{equation} {\cal L}_{Yukawa}^{eff}=-\delta_1\, \mathfrak{K}^\dagger \mathfrak{K} - \frac12m_{12}^2\, (\mathfrak{K}^\dagger \mathfrak{H} + \mathfrak{H}^\dagger \mathfrak{K}) -\delta_2\, \mathfrak{H}^\dagger \mathfrak{H}. \label{eq:lowenyuk2} \end{equation} This bosonised form of the Yukawa couplings, only valid at low energy, will later be added to the scalar potential $V(K,H)$ to define the low energy effective potential $V_{eff}(K,H)$ (see subsection \ref{subsec:effpot}). \subsection{PCAC and the Gell-Mann-Oakes-Renner relation} \label{subsec:pimass} Kinetic terms together with Yukawa couplings include in particular \begin{equation} (\partial_\mu K)^\dagger \partial^\mu K -\delta_1 K^\dagger K - \frac12m_{12}^2\, ({K}^\dagger {H} + {H}^\dagger {K}) + (\partial_\mu H)^\dagger \partial^\mu H -\delta_2\, {H}^\dagger {H} +\ldots \label{eq:kinyuk} \end{equation} and we now raise the issue of whether, at low energy, the charged components of $K$ can be identified with the charged pions. As we shall see in subsection \ref{subsec:pieta} below, both $\delta_2$ and $m_{12}^2$ have to vanish: the first to ensure that the breaking of the weak $SU(2)_L$ is accompanied by three true Goldstone bosons, and the second to ensure that the ${\mathfrak p}^0-{\mathfrak p}^3$ system does not exhibit a tachyonic state. 
Eq.~(\ref{eq:kinyuk}) then reduces to standard kinetic terms for unmixed doublets. Furthermore, the scalar potential will be chosen in such a way that the three pseudoscalar bosons inside $K$ are Goldstone bosons in the absence of Yukawa couplings. So, due to the mass term proportional to $\delta_1$, the three ``pions'' inside $K$ get a mass $m$ on the simple condition that $\delta_1= \frac12 m^2$. Owing to the Partially Conserved Axial Current (PCAC) hypothesis \cite{Dashen}\cite{Lee} \begin{equation} i(m_u+m_d)\,\bar u \gamma_5 d= \sqrt{2} f_\pi m_\pi^2\, \pi^+, \label{eq:PCAC} \end{equation} which identifies the interpolating pion field with a bilinear quark operator, and to the corresponding Gell-Mann-Oakes-Renner relation \cite{GMOR} \begin{equation} (m_u+m_d)<\bar u u + \bar d d> =2f_\pi^2 m_\pi^2, \label{eq:GMOR} \end{equation} ${\mathfrak p}^+ \equiv {\mathfrak p}^1+i{\mathfrak p}^2 = \frac{v\sqrt{2}}{\mu^3}\bar d\gamma_5 u$ as it is defined in (\ref{eq:compdoub}) and (\ref{eq:ident}) can be identified at low energy with \begin{equation} {\mathfrak p}^\pm \simeq -\frac{iv}{f_\pi} \pi^\pm. \label{eq:ppi1} \end{equation} So, the kinetic terms $ (\partial_\mu K)^\dagger \partial^\mu K$, which contain in particular $ \partial_\mu {\mathfrak p}^+ \partial ^\mu {\mathfrak p}^- \equiv \left( \partial_\mu {\mathfrak p}^1 \partial ^\mu {\mathfrak p}^1 + \partial_\mu {\mathfrak p}^2 \partial ^\mu {\mathfrak p}^2\right)$, will be normalized in the standard way if \begin{equation} v=f_\pi, \label{eq:v} \end{equation} such that \begin{equation} {\mathfrak p}^\pm \simeq -i\pi^\pm. \label{eq:ppi} \end{equation} Then, the term proportional to $\delta_1$ in (\ref{eq:kinyuk}) is a suitable pion mass term if \begin{equation} \delta_1 = m_\pi^2. \label{eq:delta1} \end{equation} Going back to the definition of $\delta_1$ in (\ref{eq:params}) and using (\ref{eq:v}) and (\ref{eq:GMOR}), (\ref{eq:delta1}) corresponds to \begin{equation} \rho_u + \rho_d = \frac{m_u+m_d}{\sqrt{2}f_\pi}. \label{eq:rou+rod} \end{equation} Since $f_\pi \ll m_W$, (\ref{eq:v}) plugged into (\ref{eq:mw}) entails \begin{equation} \sigma \approx \frac{2 m_W}{g}, \label{eq:sigma} \end{equation} which shows that the $\vec W$'s get their mass essentially from the VEV of ${\mathfrak s}^3$. The ratio of the VEV's of the two Higgs doublets comes out accordingly as \begin{equation} \tan\beta = \frac{<{\mathfrak s}^3>}{<{\mathfrak s}^0>}= \frac{\sigma/\sqrt{2}}{v/\sqrt{2}} \approx \frac{2m_W}{gf_\pi} \approx 2856. \label{eq:rapvev} \end{equation} They correspond respectively to the weak ($m_W$) and chiral ($f_\pi$) scales. Both scales can now coexist, unlike in the genuine Glashow-Salam-Weinberg model where the parity-transformed $H$ of the Higgs doublet $K$ is missing. Eqs.~(\ref{eq:mumd}) and (\ref{eq:sigma}) then determine $\lambda_u$ and $\lambda_d$ \begin{equation} (\lambda_u+\lambda_d) = \frac{g}{2\sqrt{2}m_W}(m_u+m_d),\quad (\lambda_u-\lambda_d) = \frac{g}{\sqrt{2}m_W}\left((m_u-m_d)-\frac{f_\pi}{\sqrt{2}}(\rho_u-\rho_d)\right), \label{eq:lambda1} \end{equation} that is \begin{equation} \lambda_u=g\,\frac{3 m_u-m_d-2\sqrt{2}f_\pi(\rho_u-\rho_d)}{4\sqrt{2}\,m_W},\quad \lambda_d=g\,\frac{3 m_d-m_u+2\sqrt{2}f_\pi(\rho_u-\rho_d)}{4\sqrt{2}\,m_W}, \label{eq:lambda2} \end{equation} in terms of $\rho_u - \rho_d$ which is, at this point, still undetermined. 
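As a quick numerical illustration of (\ref{eq:rapvev}), here is a sketch with assumed inputs $m_W \approx 80.4\,GeV$ and $f_\pi \approx 92.4\,MeV$, together with the value of $g$ quoted in (\ref{eq:g}):
\begin{verbatim}
# illustrative inputs in GeV; these precise values are assumptions
g, m_W, f_pi = 0.61, 80.4, 0.0924

tan_beta = 2 * m_W / (g * f_pi)   # eq. (rapvev)
print(round(tan_beta))            # ~ 2853, of the order of the 2856 quoted
\end{verbatim}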
\subsection{Goldstones and pseudo-Goldstones} \label{subsec:pieta} \subsubsection{The charged Goldstones of the broken $\boldsymbol{SU(2)_L}$} \label{subsub:xigolds} Since $<{\mathfrak s}^3>$ provides most of the mass of the $\vec W$'s, the charged Goldstone bosons of the broken $SU(2)_L$ weak symmetry are, to a very good approximation, the excitations of ${\mathfrak s}^3$ by the generators $T^+_L$ and $T^-_L$, that is $\mathfrak{s}^+$ and $\mathfrak{s}^-$ $\in H$. However, the $SU(2)_L$ invariant Yukawa couplings that need to be introduced to provide fermions with ``soft'' masses also give, at low energy, among other couplings, a ``soft'' mass to $\mathfrak{s}^+$ and $\mathfrak{s}^-$ through the term proportional to $\delta_2$. The situation for $\mathfrak{s}^+$ and $\mathfrak{s}^-$ is different from that of the pions, which can become pseudo-Goldstone bosons and stay as physical particles. The spontaneously broken $SU(2)_L$ symmetry requires true Goldstones, which can only go along with \begin{equation} \delta_2=0, \label{eq:delta2} \end{equation} which is accordingly a side-effect of weak symmetry breaking. Looking at (\ref{eq:params}), one could think that $\nu^3 \equiv <\bar u u-\bar d d>=0$ could be a solution to $\delta_2=0$. However, we shall see later in subsection \ref{subsec:condensates} that $<\bar u u>$ must be different from $<\bar d d>$ as a trigger of both weak and custodial symmetry breaking. So, (\ref{eq:delta2}) entails \begin{equation} \lambda_u = \lambda_d = \frac{g}{4\sqrt{2}\,m_W}(m_u+m_d). \label{eq:lambda} \end{equation} By (\ref{eq:lambda1}), (\ref{eq:lambda}) determines \begin{equation} \rho_u-\rho_d=\frac{\sqrt{2}(m_u-m_d)}{f_\pi}, \label{eq:rou-rod} \end{equation} and, combined with (\ref{eq:rou+rod}), \begin{equation} \rho_u=\frac{3m_u-m_d}{2\sqrt{2}f_\pi},\quad \rho_d=\frac{3m_d-m_u}{2\sqrt{2}f_\pi}. \label{eq:rourod} \end{equation} \subsubsection{The $\boldsymbol{{\mathfrak p}^3-{\mathfrak p}^0}$ system} \label{subsub:pieta} The $(\mathfrak{p}^3,\mathfrak{p}^0)$ or $(\mathfrak{k}^3,\mathfrak{h}^0)$ or, equivalently, $(\pi^0,\eta)$ system gets endowed by the Yukawa couplings with a mass matrix \begin{equation} \frac12\left(\begin{array}{cc} 2\delta_1 & m_{12}^2\cr m_{12}^2 & 2\delta_2 \end{array}\right). \end{equation} However, since $\delta_2$ has been fixed to zero in subsection \ref{subsub:xigolds}, this system now exhibits a tachyonic state unless \begin{equation} m_{12}^2=0 \Leftrightarrow (\rho_u-\rho_d)\frac{\nu^3}{\sigma}=-(\lambda_u+\lambda_d)\frac{\mu^3}{v} \Leftrightarrow \frac{m_u-m_d}{m_u+m_d}=-\frac12\frac{\mu^3}{\nu^3} \equiv -\frac12 \frac{<\bar u u + \bar d d>}{<\bar u u-\bar d d>}, \label{eq:m12vanish} \end{equation} in which we have used (\ref{eq:defm12}), (\ref{eq:params}), (\ref{eq:lambda}), (\ref{eq:rourod}) and the definitions of $\mu^3$ and $\nu^3$ that were introduced in (\ref{eq:compdoub}). Eq.~(\ref{eq:m12vanish}) is equivalent to \footnote{$<\bar d d>$ vanishes for $m_d=3 m_u$. We shall see in subsection \ref{subsec:htoq} that this is also the condition for the $u$ quark to couple to the ``standard'' Higgs boson $\xi$ like in the Glashow-Salam-Weinberg model.} \begin{equation} \frac{<\bar d d>}{<\bar u u>}= \frac{3m_u-m_d}{m_u-3m_d}. \label{eq:rapdduu} \end{equation} When this is realized, ${\mathfrak p}^0$ is a true Goldstone and ${\mathfrak p}^3$ keeps its squared mass $m_\pi^2$. They do not mix. 
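The tachyon condition can be checked in one line (a sketch using sympy; the matrix is the one displayed above with $\delta_2=0$):
\begin{verbatim}
import sympy as sp

d1, m12sq = sp.symbols('delta1 m12sq', positive=True)
M = sp.Matrix([[2*d1, m12sq], [m12sq, 0]]) / 2   # delta_2 = 0
print(sp.simplify(M.det()))   # -m12sq**2/4: the determinant is negative
                              # unless m12sq = 0, so one eigenvalue is
                              # negative (a tachyonic state)
\end{verbatim}
With $m_{12}^2=0$ the matrix is diagonal: ${\mathfrak p}^0$ is massless and ${\mathfrak p}^3$ keeps $m_\pi^2$, as stated above.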
This fits the picture of ${\mathfrak p}^0$ being the third Goldstone boson of the broken $SU(2)_L$ symmetry, and ${\mathfrak p}^3$ being the neutral member of the triplet of pseudo-Goldstone bosons of the broken chiral symmetry $SU(2)_L \times SU(2)_R$ down to the diagonal $SU(2)$. Other considerations concerning symmetries will be given in section \ref{section:symmetries}. \subsubsection{No scalar-pseudoscalar coupling} \label{subsub:noscps} Yukawa couplings are seen in (\ref{eq:genyuk2}) to potentially generate couplings between charged scalars, for example ${\mathfrak s}^-=\frac{\sigma}{\sqrt{2}\nu^3}\bar d u$, and pseudoscalars, for example ${\mathfrak p}^+$. The second important effect of the condition $m_{12}^2 \equiv \delta_{12}+\kappa_{12}=0$ obtained in subsection \ref{subsub:pieta} is to cancel these transitions. \subsubsection{The unitary gauge. Leptonic decays of pions} \label{subsub:pilep} In the unitary gauge the crossed couplings between the $\vec W$ gauge bosons and the (derivatives of the) $SU(2)_L$ Goldstone bosons ${\mathfrak p}^0, {\mathfrak s}^+, {\mathfrak s}^-$ are canceled, which leaves untouched the similar couplings between $\vec W$ and the three pions. Their proportionality to $v = f_\pi$ yields in particular leptonic decays of pions in agreement with the standard PCAC calculation. \subsection{Fermion masses versus the low energy effective Lagrangian} \label{subsec:fmasslowen} Fermions receive their masses from the VEV's of the two Higgs doublets $K$ and $H$. From (\ref{eq:mumd}) and the values of the parameters that have been determined (see also section \ref{section:paramsummary} below), it appears that $<{\mathfrak s}^3> \in H$ contributes to the $u$ and $d$ masses by the same amount $\frac{\sigma\lambda_u}{\sqrt{2}}=\frac{\sigma\lambda_d}{\sqrt{2}} = \frac{m_u+m_d}{4}$. Then, $<{\mathfrak s}^0> \in K$ contributes to the $u$ mass by $\frac{v\rho_u}{\sqrt{2}}= \frac{3m_u-m_d}{4}$ and to the $d$ mass by $\frac{v\rho_d}{\sqrt{2}}= \frac{3m_d-m_u}{4}$. The second point is that quark masses cannot be reliably calculated from the low energy effective expression (\ref{eq:lowenyuk1}) of the Yukawa couplings and its set of parameters determined by low energy considerations. When plugged into (\ref{eq:lowenyuk1}), the conditions $\delta_2=0$ and $m_{12}^2\equiv \delta_{12}+\kappa_{12}=0$ demonstrated respectively in (\ref{eq:delta2}) and in (\ref{eq:m12vanish}) entail that quark masses come from the Higgs doublet $K$ alone, by $-\delta_1 K^\dagger K$. Going back to quark fields and writing it for example as the product $-\delta_1 K^\dagger {\mathfrak K}$ of scalar fields $K$ times their equivalents in terms of bilinear quark operators $\mathfrak K$, which respects renormalizability, ${\cal L}_{Yukawa}^{eff}$ does, through quark-antiquark condensation, generate quark masses. 
They however come out as $-\delta_1\frac{v^2}{2\mu^3}(\bar u u+\bar d d) =-\frac{m_u+m_d}{4} (\bar u u+\bar d d)$, which is different from the masses obtained from the original Lagrangian (\ref{eq:genyuk2}) \begin{equation} -\delta_1 \frac{v}{\sqrt{2}\mu^3}(\bar u u+\bar d d)<{\mathfrak s}^0> +\delta_{12}\left(\frac{\sigma}{\sqrt{2}\nu^3}(\bar u u-\bar d d) <{\mathfrak s}^0>-\frac{v}{\sqrt{2}\mu^3}(\bar u u+\bar d d)<{\mathfrak s}^3> \right); \label{eq:fmass} \end{equation} using the expression for $\delta_{12}$ deduced from (\ref{eq:params}) and (\ref{eq:lambda1}), the genuine Lagrangian (\ref{eq:fmass}) yields the mass terms \begin{equation} \begin{split} -\delta_1 \frac{v^2}{2\mu^3}(\bar u u+\bar d d) +\delta_{12}\frac{v\sigma}{2}\left(\frac{\bar u u-\bar d d}{\nu^3} -\frac{\bar u u+\bar d d}{\mu^3} \right) & \stackrel{(\ref{eq:sumparams})}{=} -\frac{f_\pi^2m_\pi^2}{2}\frac{\bar u u+\bar d d}{\mu^3} +\frac{f_\pi^2m_\pi^2}{2}\left(\frac{\bar u u-\bar d d}{\nu^3}-\frac{\bar u u+\bar d d}{\mu^3}\right)\cr &\hskip -4.5cm \stackrel{(\ref{eq:sumparams})}{=} -\underbrace{\frac14(m_u+m_d)(\bar u u+\bar d d)}_{\text{from}\ \delta_1} -\underbrace{\frac12(m_u-m_d)(\bar u u-\bar d d)}_{\text{from}\ \kappa_{12}} -\underbrace{\frac14(m_u+m_d)(\bar u u+\bar d d)}_{\text{from}\ \delta_{12}}. \end{split} \label{eq:massterms} \end{equation} In (\ref{eq:massterms}), unlike in ${\cal L}_{Yukawa}^{eff}$, the terms proportional to $\delta_{12}$ do not vanish because the bilinear fermion operators do not reduce to their low energy VEV's $<\bar u u-\bar d d>=\nu^3, <\bar u u+\bar d d>=\mu^3$. Furthermore, even if $m_u$ is set equal to $m_d$, the part proportional to $\delta_{12}$, which describes $H-K$ interplay, contributes to quark masses as much as the one proportional to $\delta_1$, which comes from $K$ alone. Therefore, neither the effective Lagrangian ${\cal L}^{eff}_{Yukawa}$ nor the ``low energy truncation'' of the model, which includes only one Higgs doublet, $K$, can correctly account for fermion masses (nor, of course, for the masses of the gauge bosons, a problem which led to ``technicolor'' models \cite{Susskind}). We shall accordingly use ${\cal L}_{Yukawa}^{eff}$ only to deal with the low energy physics of scalars and pseudoscalars, in particular to build the effective scalar potential $V_{eff}$ in subsection \ref{subsec:effpot}. \section{Summary of the parameters} \label{section:paramsummary} By low energy considerations, we have determined the following parameters, introduced in particular in (\ref{eq:genyuk1}) and (\ref{eq:params}): \vbox{ \begin{eqnarray} &&\rho_u=\frac{3m_u-m_d}{2\sqrt{2}f_\pi},\quad \rho_d=\frac{3m_d-m_u}{2\sqrt{2}f_\pi},\quad \lambda_u=\lambda_d= \frac{g(m_u+m_d)}{4\sqrt{2}m_W},\cr && \cr && \delta_1= m_\pi^2,\quad \delta_{12}=-\kappa_{12}=\frac{gf_\pi m_\pi^2}{2m_W},\quad \delta_2=0,\cr && \cr && (m_u+m_d)<\bar u u + \bar d d> \stackrel{(\ref{eq:GMOR})}{=} 2f_\pi^2m_\pi^2,\quad (m_u-m_d)<\bar u u-\bar d d> \stackrel{(\ref{eq:m12vanish})}{=} - f_\pi^2 m_\pi^2,\cr && \cr && v\equiv \sqrt{2}<{\mathfrak s}^0>=f_\pi, \quad \sigma\equiv \sqrt{2}<{\mathfrak s}^3>=\frac{2m_W}{g}. \label{eq:sumparams} \end{eqnarray} } These should be plugged into the renormalizable form (\ref{eq:genyuk2}) of the Yukawa Lagrangian. Note that, unlike its low energy avatar (\ref{eq:lowenyuk1}), it depends on $\kappa_{12}$ and $\delta_{12}=-\kappa_{12}$, and not on $m_{12}^2=0$. 
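To give an idea of the orders of magnitude, the parameters (\ref{eq:sumparams}) can be evaluated numerically (a sketch; the current-quark masses $m_u \approx 2.2\,MeV$, $m_d \approx 4.7\,MeV$ and the other inputs are assumed, illustrative values):
\begin{verbatim}
import math

# assumed, illustrative inputs (GeV)
m_u, m_d, f_pi, m_pi, m_W, g = 0.0022, 0.0047, 0.0924, 0.135, 80.4, 0.61

rho_u = (3*m_u - m_d) / (2*math.sqrt(2)*f_pi)      # ~ 7.3e-3
rho_d = (3*m_d - m_u) / (2*math.sqrt(2)*f_pi)      # ~ 4.6e-2
lam_ud = g * (m_u + m_d) / (4*math.sqrt(2)*m_W)    # ~ 9.3e-6
delta_12 = g * f_pi * m_pi**2 / (2*m_W)            # ~ 6.4e-6 GeV^2
print(rho_u, rho_d, lam_ud, delta_12)
\end{verbatim}
All Yukawa couplings come out tiny: the $\rho$'s because the quark masses are small compared to $f_\pi$, and the $\lambda$'s because they are small compared to $m_W$.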
\section{The scalar potential} \label{section:pot} \subsection{A $\boldsymbol{U(2)_L \times U(2)_R}$ invariant potential} \label{subsec:pot} We shall consider a quartic $U(2)_L \times U(2)_R$ invariant potential \begin{equation} V(K,H) = -\frac{m_H^2}{2}\, (K^\dagger K + H^\dagger H) +\frac{\lambda_H}{4}\,\Big((K^\dagger K)^2 + (H^\dagger H)^2\Big), \label{eq:pot} \end{equation} which thus decomposes into two independent potentials, one for $K$ and one for $H$. This is possible because (see \cite{Machet1}) $K$ and $H$ are stable under both $SU(2)_L$ and $SU(2)_R$ and transform into each other under $U(1)_L$ and $U(1)_R$ (with the appropriate signs). This last symmetry dictates in particular the equality of the couplings (quadratic and quartic) for the two doublets. $SU(2)_L$ breaking by $v\not=0$ and $\sigma \not=0$ generates three Goldstone bosons in each Higgs multiplet: $\vec{\mathfrak p}\in K$, the pseudoscalar singlet ${\mathfrak p}^0$ and the two charged scalars ${\mathfrak s}^\pm \in H$. This also fits the scheme according to which $v\not=0$ and $\sigma\not=0$ spontaneously break the chiral $U(2)_L \times U(2)_R$ down to $U(1) \times U(1)_{em}$ (see \cite{Machet1}); there, too, six Goldstones are generated. The pseudoscalar triplet $\vec{\mathfrak p}\in K$ gets a small mass from the $SU(2)_L$ invariant Yukawa couplings, while the pseudoscalar singlet ${\mathfrak p}^0$ and the two charged scalars ${\mathfrak s}^\pm \in H$ must be protected from this since they are also the three Goldstones to be eaten by the $\vec W$ gauge bosons (see section \ref{section:lowen}). ${\mathfrak p}^0$ plays a double role in that it is also the Goldstone of the breaking of $U(1)_L \times U(1)_R$ down to the diagonal $U(1)$, which, at the level of the algebra, is related to parity breaking. $v\not=0$ is associated with $<\bar u u + \bar d d> \not=0$, responsible for the breaking of $SU(2)_L \times SU(2)_R$ down to $SU(2)$ with the three pions as (pseudo)-Goldstone bosons, while $\sigma \not=0$ is associated with $<\bar u u-\bar d d>\not=0$, which is also responsible for the breaking of the custodial $SU(2)$ into $U(1)_{em}$ and for the $\vec W$ mass. Our choice for the potential amounts to requesting that, in the absence of Yukawa couplings, all fields are Goldstones but for the two Higgs bosons. In the most general potential for two Higgs doublets the following terms have accordingly been discarded: $\bullet$\quad $(m^2 K^\dagger H +h.c.)$, with $m \in {\mathbb C}$, would mediate in particular transitions between scalars and pseudoscalars that should not occur classically; $\bullet$\quad $\lambda_4(K^\dagger K)(K^\dagger H) + h.c.$, $\lambda_5(H^\dagger H)(K^\dagger H) + h.c.$ with $\lambda_4, \lambda_5 \in {\mathbb C}$ would also mediate unwanted classical transitions between scalars and pseudoscalars; $\bullet$\quad $\lambda_3(K^\dagger H)^2 + h.c.$ with $\lambda_3 \in {\mathbb C}$ would in particular contribute to the mass of the neutral pion and not to that of the charged pions. 
Such a classical $\pi^+-\pi^0$ mass difference, which is neither electromagnetic nor due to $m_u \not= m_d$, is unwelcome; $\bullet$\quad $\lambda_1(K^\dagger K)(H^\dagger H)$, $\lambda_2(K^\dagger H)(H^\dagger K)$, with $\lambda_1, \lambda_2 \in {\mathbb R}$, would also spoil the Goldstone nature of the pions and of the $\eta$, the first because of terms proportional to $<{\mathfrak s}^3>^2 \vec\pi^2$ and $<{\mathfrak s}^0>^2 \eta^2$, the second because of terms proportional to $<{\mathfrak s}^0>^2 \eta^2, <{\mathfrak s}^3>^2 {\pi^0}^2$ and $<{\mathfrak s}^0><{\mathfrak s}^3> \pi^0 \eta$. \subsection{The low energy effective potential} \label{subsec:effpot} At low energy, the renormalizable $V(K,H)$ is supplemented by $(-1)\times$ the bosonised form of the Yukawa Lagrangian (\ref{eq:lowenyuk1}). This yields the effective potential \begin{eqnarray} V_{eff}(K,H) &=& V(K,H) +\delta_1\, K^\dagger K + \frac12 m_{12}^2\, (K^\dagger H + H^\dagger K) +\delta_2\, H^\dagger H\cr && \hskip -2cm = -\frac{m_H^2}{2}\, (K^\dagger K + H^\dagger H) +\frac{\lambda_H}{4}\,\Big((K^\dagger K)^2 + (H^\dagger H)^2\Big) +\delta_1\, K^\dagger K + \frac12 m_{12}^2\, (K^\dagger H + H^\dagger K) +\delta_2\, H^\dagger H.\cr && \label{eq:effpot1} \end{eqnarray} It simplifies further since we have shown that $\delta_2=0$ and $m_{12}^2=0$ (see (\ref{eq:delta2}) and (\ref{eq:lambda}) in section \ref{section:lowen}), and $V_{eff}$ accordingly reduces to \begin{equation} V_{eff}(K,H) = -\frac{m_H^2-2 m_\pi^2}{2}\, K^\dagger K -\frac{m_H^2}{2}\, H^\dagger H +\frac{\lambda_H}{4}\,\Big((K^\dagger K)^2 + (H^\dagger H)^2\Big). \label{eq:effpot2} \end{equation} Lastly, to suitably reproduce the $\vec\pi$ and $\vec W$ masses, we know that it should have a minimum at the values of $v$ and $\sigma$ given by (\ref{eq:v}) and (\ref{eq:sigma}). The two equations $\frac{\partial V_{eff}}{\partial {\mathfrak s}^0}\Big|_{<{\mathfrak s}^0>=\frac{f_\pi}{\sqrt{2}}}=0$ and $\frac{\partial V_{eff}}{\partial {\mathfrak s}^3}\Big|_{<{\mathfrak s}^3>=\frac{\sqrt{2}m_W}{g}}=0$ yield respectively $m_H^2= \lambda_H <{\mathfrak s}^0>^2 + 2 m_\pi^2$ and $m_H^2 = \lambda_H <{\mathfrak s}^3>^2$ such that \begin{equation} \lambda_H=\frac{2 m_\pi^2}{<{\mathfrak s}^3>^2-<{\mathfrak s}^0>^2} \approx \frac{2 m_\pi^2}{<{\mathfrak s}^3>^2}\left(1+\frac{<{\mathfrak s}^0>^2}{<{\mathfrak s}^3>^2}\right) = \frac{g^2 m_\pi^2}{m_W^2}\left(1+\frac{g^2 f_\pi^2}{4 m_W^2}\right), \label{eq:lambdaeff} \end{equation} which puts it definitely in the perturbative regime. It is because of the presence of $m_\pi^2$ that $\lambda_H$ is different from zero. $m_\pi\not=0$ accordingly keeps the theory away from instability. \subsection{The masses of the two Higgs bosons $\boldsymbol{\varsigma}$ and $\boldsymbol{\xi}$} \label{subsec:hmass} Since the effective scalar potential is now fully determined, one can calculate the masses of the two Higgs bosons $\varsigma$ and $\xi$ defined in (\ref{eq:defhxi}), which do not mix. 
One gets \begin{eqnarray} m_\xi &=& <{\mathfrak s}^3> \sqrt{\lambda_H} \approx \sqrt{2}\, m_\pi,\cr m_\varsigma &=& <{\mathfrak s}^0> \sqrt{\lambda_H} =m_\xi \frac{<{\mathfrak s}^0>}{<{\mathfrak s}^3>} \approx m_\pi\frac{ gf_\pi}{\sqrt{2}\,m_W} \approx 68\,\mathrm{keV}. \label{eq:mhiggs} \end{eqnarray} In particular, their ratio is that of the two VEV's \begin{equation} \frac{m_\xi}{m_\varsigma} = \frac{<{\mathfrak s}^3>}{<{\mathfrak s}^0>}=\frac{2 m_W/g}{f_\pi} \end{equation} which is also the ratio of the two scales involved in this 1-generation standard model, the weak scale $\simeq m_W$ and the chiral scale $\simeq f_\pi$. The masses are small and justify {\em a posteriori} our low energy treatment of the scalar effective potential. The composition of the two Higgs doublets is accordingly as follows. Inside the ``chiral'' doublet $K$ one finds 3 pions and the very light scalar Higgs boson $\varsigma$. As was shown in \cite{Machet1}, they correspond respectively to a triplet and a singlet of the custodial $SU(2)$ symmetry. Inside the ``weak'' doublet $H$, one finds the three Goldstones of the broken $SU(2)_L$ weak symmetry, the neutral pseudoscalar $SU(2)$ singlet and two charged scalars inside the $SU(2)$ triplet. The third component of this triplet is the second scalar Higgs boson $\xi$ with mass $\approx m_\pi$. Note that the four particles $(\vec\pi, \xi)$, with masses of order $m_\pi$, do not lie together inside the same $SU(2)_L$ doublet, nor do the three $SU(2)_L$ Goldstones and the very light Higgs boson $\varsigma$. \subsubsection{The roles of $\boldsymbol{m_W}$ and $\boldsymbol{m_\pi}$} \label{subsub:wpi} In our rebuilding of the Standard Model with only one generation, we find that the masses of the two Higgs bosons are both proportional to $m_\pi$ and small. But they are not small in the same way. If $m_\pi$ is replaced by the mass of some heavier bound state $m \leq \sqrt{2}m_W/g \equiv <{\mathfrak s}^3>\ \approx 168\,GeV$, $m_\varsigma$ will stay very small, $m_\varsigma \leq f_\pi \approx 93\,MeV$, while $m_\xi$ will grow like the mass of the bound state. So, in the case of more generations, the presence of very light Higgs boson(s) with a mass lower than $100\,MeV$ looks like a robust feature, a damping effect of the weak scale $m_W$, but larger masses can be expected for some others. It would not be a surprise if, for 3 generations and up to some coefficient, the mass of one of the Higgs bosons were set by that of a bound state involving the top quark. In the present case, the masses of the two Higgs bosons vanish at the limit $m_\pi \to 0$, that is, by the GMOR relation (\ref{eq:GMOR}), either when $<\bar u u+ \bar d d>\to 0$ or when $(m_u+m_d)\to 0$. Since we have also determined (see (\ref{eq:sumparams})) that $(m_u-m_d)<\bar u u-\bar d d>$ vanishes with $m_\pi$, this limit corresponds either to $<\bar u u>=0 =<\bar d d>$ or to $m_u=0=m_d$. \section{Couplings of the Higgs bosons} \label{section:couplings} \subsection{Couplings of Higgs bosons to quarks} \label{subsec:htoq} As for the calculation of fermion masses (see subsection~\ref{subsec:fmasslowen}), the bosonised forms (\ref{eq:lowenyuk1}) and (\ref{eq:lowenyuk2}) of the Yukawa couplings, which are only valid at low energy, are inappropriate to evaluate the couplings of fermions, in particular those to the Higgs bosons. 
Indeed, plugging into (\ref{eq:lowenyuk1}) or (\ref{eq:lowenyuk2}) the relations $m_{12}^2 \stackrel{(\ref{eq:defm12})} {\equiv} \delta_{12}+\kappa_{12}=0$ and $\delta_2=0$ that we have obtained for the crossed couplings (see (\ref{eq:sumparams})) from low energy considerations would erroneously leave, as the only couplings of quarks to Higgs bosons, the ones present in $-\delta_1 K^\dagger K$, in which, in particular, no coupling exists between the ``quasi-standard'' Higgs boson $\xi$, which belongs to $H$, and quarks. In order to properly determine these couplings, the original form (\ref{eq:genyuk2}) of the Yukawa couplings must instead be used. Plugging therefore the definition (\ref{eq:defhxi}) into (\ref{eq:genyuk2}) yields the following couplings of the Higgs bosons $\varsigma$ and $\xi$ to quarks \begin{equation} \begin{array}{lll} & -\varsigma\,(\rho_u \bar u u + \rho_d \bar d d)-\xi\,(\lambda_u \bar u u + \lambda_d \bar d d) & \cr =& -\varsigma\,\left(\delta_1\frac{v}{\sqrt{2}\mu^3}(\bar u u+\bar d d) +\kappa_{12}\frac{\sigma}{\sqrt{2}\nu^3}(\bar u u-\bar d d)\right) -\xi\,\left(\delta_{12}\frac{v}{\sqrt{2}\mu^3}(\bar u u+\bar d d) +\delta_2\frac{\sigma}{\sqrt{2}\nu^3}(\bar u u-\bar d d)\right)& \end{array} \label{eq:htoq1} \end{equation} which exhibits, of course, the same structure as (\ref{eq:fmass}) and which, using the values (\ref{eq:sumparams}) of the parameters, $\delta_{12}=-\kappa_{12}$ and $\delta_2=0$, yields \begin{equation} {\cal L}_{Higgs-quarks}= -\varsigma\left(\frac{3m_u-m_d}{2\sqrt{2}f_\pi}\,\bar u u + \frac{3m_d-m_u}{2\sqrt{2}f_\pi}\,\bar d d\right) -\xi\, \frac{g(m_u+m_d)}{4\sqrt{2} m_W}\, (\bar u u + \bar d d). \label{eq:htoq2} \end{equation} The $\varsigma$ Higgs boson is more strongly coupled to quarks than $\xi$. Its coupling is still ``perturbative'' since $m_u, m_d \ll f_\pi$. It however suggests that, for heavier quarks, some Higgs boson(s) could couple strongly to hadronic matter. As far as $\xi$ is concerned, it looks at first sight ``quasi-standard'' because its coupling is proportional to $g m_{quark}/m_W$. It is however not quite so, because in the standard case we would have obtained $-\frac{g}{\sqrt{2}m_W}(m_u \bar u u+ m_d \bar d d)\,\xi$. The difference is that, though $u$ and $d$ have different masses, they now get coupled to $\xi$ with equal strength: unlike in the genuine Glashow-Salam-Weinberg model, the heavier quark is no more strongly coupled than the lighter one. Taking $m_d=\gamma m_u, \gamma > 1$, the coupling $-\frac{g(1+\gamma)}{4\sqrt{2}m_W}m_u$ of $\xi$ to $u$ quarks can be very close to the standard one (it becomes identical for $\gamma=3$, the value at which $<\bar d d>$ vanishes, see the footnote in subsection \ref{subsub:pieta}), while the coupling $-\frac{g(1/\gamma + 1)}{4\sqrt{2}m_W}m_d$ of $\xi$ to the heavier $d$ is smaller than the standard one by the factor $\frac{(1+\gamma)}{4\gamma}$. \subsection{Couplings of Higgs bosons to gauge bosons} \label{subsec:htow} They arise from the kinetic terms (\ref{eq:kinetic}). Using (\ref{eq:v}) and (\ref{eq:sigma}), one gets \begin{equation} {\cal L}_{HiggsWW}=\frac{g m_W}{2} W_\mu^2\, \xi + \frac{g^2 f_\pi}{4\sqrt{2}} W_\mu^2\, \varsigma. \label{eq:htow} \end{equation} $\xi$ accordingly couples in a ``standard'' way $\simeq gm_W$ to two $W$'s, while the coupling of $\varsigma$, ${\cal O}(g^2 f_\pi)$, is much smaller, by a factor ${\cal O} (10^{-3})$. 
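Before moving on to leptons, the mass formulas (\ref{eq:lambdaeff}) and (\ref{eq:mhiggs}) are easily cross-checked numerically (a sketch with the same assumed inputs as before; the small difference with the quoted $68\,keV$ reflects the choice of $m_\pi$):
\begin{verbatim}
import math

g, m_W, f_pi, m_pi = 0.61, 80.4, 0.0924, 0.135        # GeV, assumed values
s0 = f_pi / math.sqrt(2)                              # <s^0>
s3 = math.sqrt(2) * m_W / g                           # <s^3>
lam_H = 2 * m_pi**2 / (s3**2 - s0**2)                 # eq. (lambdaeff), ~ 1e-6
m_xi = s3 * math.sqrt(lam_H)                          # ~ 0.191 GeV ~ sqrt(2) m_pi
m_varsigma = s0 * math.sqrt(lam_H)                    # ~ 6.7e-5 GeV, about 67 keV
print(lam_H, m_xi, m_varsigma)
\end{verbatim}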
\subsection{Couplings of Higgs bosons to leptons} \label{subsec:htol} Yukawa couplings to leptons require introducing four parameters, $\rho_e$ and $\rho_\nu$ for ${\mathfrak s}^0$ and the quantum Higgs $\varsigma$, $\lambda_e$ and $\lambda_\nu$ for ${\mathfrak s}^3$ and the quantum Higgs $\xi$ \vbox{ \begin{eqnarray} \hskip -1cm{\cal L}_{Yuk-lept}&=&\Big((\rho_\nu \bar\nu \nu +\rho_e \bar e e)\,{\mathfrak s}^0 -(\lambda_\nu \bar\nu \nu + \lambda_e \bar e e)\,{\mathfrak s}^3 \Big)\cr &+& \left(\frac{\rho_\nu + \rho_e}{2}\Big(\bar \nu\gamma_5 e\, {\mathfrak p}^- + \bar e\gamma_5 \nu\, {\mathfrak p}^+ +(\bar \nu \gamma_5 \nu -\bar e\gamma_5 e)\,{\mathfrak p}^3\Big) \right) +\frac{\rho_\nu-\rho_e}{2}\left(\Big(\bar e \nu\, {\mathfrak p}^+ -\bar\nu e\,{\mathfrak p}^- +(\bar \nu \gamma_5 \nu + \bar e\gamma_5 e)\,{\mathfrak p}^3 \Big) \right)\cr &-&\left(\frac{\lambda_\nu+\lambda_e}{2}\Big(\bar e\gamma_5\nu\, {\mathfrak s}^+ -\bar\nu\gamma_5 e\, {\mathfrak s}^- -(\bar\nu\gamma_5\nu - \bar e\gamma_5 e)\,{\mathfrak p}^0\Big)\right) +\frac{\lambda_\nu-\lambda_e}{2}\left(\Big(\bar e\nu\,{\mathfrak s}^+ + \bar\nu e\,{\mathfrak s}^- -(\bar\nu\gamma_5\nu + \bar e\gamma_5 e)\,{\mathfrak p}^0 \Big) \right).\cr && \label{eq:lepyuk} \end{eqnarray} } Using again (\ref{eq:v}) and (\ref{eq:sigma}) provides the lepton masses \begin{equation} m_e= \rho_e\frac{f_\pi}{\sqrt{2}} +\lambda_e\frac{\sqrt{2}m_W}{g},\quad m_\nu=\rho_\nu\frac{f_\pi}{\sqrt{2}}+\lambda_\nu \frac{\sqrt{2}m_W}{g}. \label{eq:memnu} \end{equation} \subsubsection{The low energy limit} Let us use again the one-to-one correspondence between the components of the Higgs multiplets and bilinear quark operators (\ref{eq:compdoub}). Using PCAC (\ref{eq:PCAC}) and the Gell-Mann-Oakes-Renner relation (\ref{eq:GMOR}), we could relate the charged pion fields $\pi^\pm$ and the charged pseudoscalar components ${\mathfrak p}^\pm$ of the Higgs doublet $K$ by (\ref{eq:ppi}). The Yukawa couplings (\ref{eq:lepyuk}) are then seen to trigger, among other processes, leptonic decays of charged pions. These come in addition to the ``standard'' ones obtained from the $W_\mu \partial^\mu \pi$ crossed couplings that originate from the kinetic terms (\ref{eq:kinetic}) at low energy (see subsection \ref{subsub:pilep}) and which agree with the usual PCAC calculations. This means that, in a first approximation (and it is not the goal of this work to go beyond), we should take \begin{equation} \rho_\nu \approx 0 \approx \rho_e. \label{eq:rolep} \end{equation} In case observed leptonic pion decays turn out to differ from PCAC estimates, the issue could be raised of whether (\ref{eq:rolep}) should be revisited. In relation with (\ref{eq:memnu}), the choice (\ref{eq:rolep}) leads to a standard coupling of the Higgs boson $\xi$ to leptons, proportional to $gm_{lepton}/m_W$, while those of $\varsigma$ vanish (or are extremely close to vanishing). \section{Symmetries again} \label{section:symmetries} \subsection{The roles of $\boldsymbol{<\bar u u + \bar d d>}$ and $\boldsymbol{<\bar u u - \bar d d>}$} \label{subsec:condensates} $<\bar u u + \bar d d>\not=0$ is the signal for what is commonly called ``chiral symmetry breaking'', the breaking of $SU(2)_L \times SU(2)_R$ down to the diagonal $SU(2)$. $<\bar u u-\bar d d> \not=0$ breaks $SU(2)_L$, and the custodial $SU(2)$ down to $U(1)_{em}$. Let us show that $<\bar u u>$ cannot be equal to $<\bar d d>$. Indeed, for $\nu^3=0$ one gets from (\ref{eq:params}) $\delta_2=0=\kappa_{12}$. 
Then \begin{equation} m_{12}^2 = \delta_{12} = \frac{g f_\pi m_\pi^2}{2m_W}, \end{equation} in which we used the definition of $\delta_{12}$ in (\ref{eq:params}), the GMOR relation (\ref{eq:GMOR}) and (\ref{eq:lambda1}). Performing the minimization of the effective potential $V_{eff}(K,H)$ while still supposing that $V(K,H)$ is $U(2)_L \times U(2)_R$ invariant gives the two equations \begin{equation} m_{H}^2 = \lambda_{H} <{\mathfrak s}^0>^2 +2\delta_1 +\delta_{12}\frac{<{\mathfrak s}^3>}{<{\mathfrak s}^0>},\quad m_{H}^2 = \lambda_{H}<{\mathfrak s}^3>^2 +\delta_{12} \frac{<{\mathfrak s}^0>}{<{\mathfrak s}^3>}, \label{eq:minpopt} \end{equation} which yield, since $<{\mathfrak s}^3> \gg <{\mathfrak s}^0>$ (see (\ref{eq:v}) and (\ref{eq:sigma})) \begin{equation} \lambda_H \approx \frac{2\delta_1}{<{\mathfrak s}^3>^2} + \frac{\delta_{12}}{<{\mathfrak s}^0><{\mathfrak s}^3>} =\frac32\frac{g^2 m_\pi^2}{m_W^2}. \end{equation} The mass matrix of the ${\mathfrak s}^0-{\mathfrak s}^3$ system then becomes (we use (\ref{eq:minpopt})) \begin{equation} \left(\begin{array}{cc} \frac{\partial^2 V_{eff}}{(\partial {\mathfrak s}^0)^2} \equiv 2\lambda_H <{\mathfrak s}^0>^2-\delta_{12}\frac{<{\mathfrak s}^3>}{<{\mathfrak s}^0>} & \frac12\frac{\partial^2 V_{eff}}{\partial {\mathfrak s}^0 \partial {\mathfrak s}^3} =0\cr \frac12\frac{\partial^2 V_{eff}}{\partial {\mathfrak s}^0 \partial {\mathfrak s}^3} =0 & \frac{\partial^2 V_{eff}}{(\partial {\mathfrak s}^3)^2} \equiv 2\lambda_H <{\mathfrak s}^3>^2-\delta_{12}\frac{<{\mathfrak s}^0>}{<{\mathfrak s}^3>} \end{array}\right) \approx \left(\begin{array}{cc} - m_\pi^2 & 0 \cr 0 & 6 m_\pi^2 \end{array}\right). \end{equation} It exhibits, because of the term $ -\delta_{12}\frac{<{\mathfrak s}^3>}{<{\mathfrak s}^0>}$ in $\frac{\partial^2 V_{eff}}{(\partial {\mathfrak s}^0)^2}$, which comes from the low energy expression of the Yukawa couplings, a tachyonic Higgs boson $\varsigma$ with $m_\varsigma^2 \approx - m_\pi^2$. The theory with $<\bar u u> = <\bar d d>$ is thus unstable. Since we have everywhere supposed that the minimum of the effective potential fits the $\vec W$ and $\vec \pi$ masses, we conclude that chiral and weak symmetry breakings as they are observed are only possible for $<\bar u u> \not= <\bar d d>$. Unlike for the pions, whose masses are related to $<\bar u u + \bar d d>$ by the GMOR relation (\ref{eq:GMOR}), there is no such relation between $m_W$ and $<\bar u u-\bar d d>$ (see the last line of (\ref{eq:sumparams})). Moreover, even when $<\bar u u> = <\bar d d>$ (that is, $\nu^3=0$), $<{\mathfrak s}^3>$ can be equal to $\sigma/\sqrt{2}$ because, in its expression (\ref{eq:compdoub}), $\nu^3$ cancels between the numerator and the denominator. This is why it looks opportune to rather speak of $<\bar u u> \not= <\bar d d>$ as the {\em catalyst} of weak (and custodial) symmetry breaking. \subsection{The custodial $\boldsymbol{SU(2)}$} \label{subsec:custodial} While $(\bar u u + \bar d d)$ gets annihilated by all generators of $SU(2)$, $(\bar u u - \bar d d)$ does not; it only gets annihilated by $T^3 = Q$ (see \cite{Machet1}). So, $<\bar u u> \not= <\bar d d>$ spontaneously breaks the custodial $SU(2)$ down to $U(1)_{em}$. In this breaking one expects two Goldstones. They are the excitations by $T^+$ and $T^-$ of the ${\mathfrak s}^3$ vacuum, that is the two scalars ${\mathfrak s}^+$ and ${\mathfrak s}^-$ eaten by $W^\pm$, and which coincide with the two charged Goldstones of the spontaneously broken weak $SU(2)_L$. 
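Returning to the $\nu^3=0$ instability established above, the algebra can be checked symbolically (a sketch using sympy, taking the expressions of this subsection as given):
\begin{verbatim}
import sympy as sp

g, mW, fpi, mpi = sp.symbols('g m_W f_pi m_pi', positive=True)
v0, s3 = fpi/sp.sqrt(2), sp.sqrt(2)*mW/g        # <s^0>, <s^3>
d1, d12 = mpi**2, g*fpi*mpi**2/(2*mW)           # delta_1, delta_12

# minimization conditions (eq. minpopt), solved for m_H^2 and lambda_H
mH2, lam = sp.symbols('mH2 lam', positive=True)
sol = sp.solve([mH2 - (lam*v0**2 + 2*d1 + d12*s3/v0),
                mH2 - (lam*s3**2 + d12*v0/s3)], [mH2, lam], dict=True)[0]

# diagonal entries of the s^0-s^3 mass matrix
m00 = sp.simplify(2*sol[lam]*v0**2 - d12*s3/v0)
m33 = sp.simplify(2*sol[lam]*s3**2 - d12*v0/s3)
print(sp.limit(m00, fpi, 0))   # -> -m_pi**2  (the tachyonic direction)
print(sp.limit(m33, fpi, 0))   # ->  6*m_pi**2
\end{verbatim}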
The electroweak Lagrangian is invariant under the custodial $SU(2)$ as soon as the $\vec W$'s form an $SU(2)$ vector. But, in the broken phase, the $W^3$ can only eat ${\mathfrak p}^0$, which is an $SU(2)$ singlet. This is how the generation of the $\vec W$ mass breaks the custodial symmetry. \subsection{Goldstones and pseudo-Goldstones} \label{subsec:golds} Three true Goldstones are eaten by the $\vec W$'s to get massive: they are ${\mathfrak p}^0$, ${\mathfrak s}^+$ and ${\mathfrak s}^-$, belonging to the doublet $H$. ${\mathfrak p}^0$ is also the Goldstone of the $U(1)_L \times U(1)_R$ spontaneous breaking down to the diagonal $U(1)$. The three $\vec{\mathfrak p}$ (the three pions) are the pseudo-Goldstones of the breaking of $SU(2)_L \times SU(2)_R$ down to $SU(2)$. The only non-Goldstones are the two Higgs bosons $\xi$ and $\varsigma$, in the sense that, though their masses also vanish with $m_\pi$, they do not seem connected with the breaking of any continuous symmetry. The first could only be excited by acting either on ${\mathfrak p}^0$ with $T^3_L$ or $T^3_R$, or on ${\mathfrak p}^3$ with ${\mathbb I}_L$ or ${\mathbb I}_R$. However, in a first approximation, neither ${\mathfrak p}^0$ nor ${\mathfrak p}^3$, each being a pseudoscalar, has a non-vanishing VEV. Likewise, $\varsigma$ could only be excited either by acting on ${\mathfrak p}^3$ with $T^3_L$ or $T^3_R$, or on ${\mathfrak p}^0$ with ${\mathbb I}_L$ or ${\mathbb I}_R$. The same argumentation thus rejects both as Goldstone bosons, unless some additional spontaneously broken continuous symmetry is at work, which remains to be uncovered. \section{A few hints for more generations} \label{section:moregen} Before concluding, it is worth pointing out a few features concerning the case of a larger number $N$ of generations (some information can also be found in \cite{Machet}). A more detailed study is postponed to \cite{Machet3}. There are features of this work which belong only to the case of one generation, for example the fact that the $\eta$ pseudoscalar meson (pseudoscalar singlet) becomes the longitudinal neutral $W^3$. In the case of more generations, it may happen that this role is still held by the singlet $\propto \bar u\gamma_5 u + \bar d\gamma_5 d + \bar c\gamma_5 c + \bar s\gamma_5 s + \ldots$ (but it is then no longer the $\eta$), or by another neutral combination. Though this can only be known through a precise study, it is likely that the $\eta$ can then live its life again as a physical pseudoscalar meson. Other features are certainly, by contrast, robust, like the fact that there is a very light Higgs boson with mass $\leq f_\pi \approx 93\,MeV$. Likewise, from the expression (\ref{eq:lambdaeff}) for the quartic Higgs coupling $\lambda_H$, it seems reasonable to believe that, even if the mass of the pion gets replaced by the mass of a much heavier bound state, $\lambda_H$ will stay smaller than $1$ and thus ``perturbative''. It can only become equal to $1$ if $m_\pi$ is replaced by $\sqrt{2}m_W/g \approx 168\,GeV$, such that one should be careful only when the ``top'' generation is concerned, for which ``non-perturbative'' phenomena could appear. The logic of the present work and of \cite{Machet1} is that all (pseudo)scalar doublets isomorphic to the one of the Standard Model of Glashow, Salam and Weinberg \cite{GSW} should be incorporated. 
This would remain an empty or meaningless statement without noticing that the standard Higgs doublet has transformations under the chiral group, (\ref{eq:ruleL}) and (\ref{eq:ruleR}), that are identical to those of bilinear quark operators. For one generation, this doubled the number of possible doublets, with parity distinguishing the two of them. In the case of $N$ generations, it was shown in \cite{Machet} that there exist $2N^2$ such doublets, divided, by parity again, into two sets. Their $8N^2$ real components can be put in one-to-one correspondence with the same number of scalar and pseudoscalar $J=0$ mesons that occur for $2N$ flavors of quarks. The same logic as the one followed here accordingly requires that the Standard Model be then endowed with $2N^2$ complex Higgs doublets. Among these, one expects in particular as many Higgs fields as there exist quark-antiquark $<\bar q_i q_i>$ condensates, that is, $2N$. Owing to the large number of parameters involved, it looks of course too optimistic to think that one can easily calculate all masses and couplings as we did here. This path nevertheless remains, in our opinion, the most natural one to follow, the underlying guess being that the mystery of the Higgs boson(s) simply lies inside that of scalar (and possibly pseudoscalar) $J=0$ mesons. \section{Conclusion and prospects} \label{section:conclusion} As we have re-built it, the Standard Model for one generation of fermions is complete in the sense that all masses and couplings of all fields present in the Lagrangian and of all $J=0$ pseudoscalar mesons are determined. Pions are accounted for with the correct decays and, of the four expected scalar mesons, two (the charged ones) become the longitudinal charged $W^\pm$ while the last two are the Higgs bosons $\varsigma$ and $\xi$. Both have small masses and are perturbatively coupled and self-coupled. While $\xi$ is expected to be close to standard, $\varsigma$ is extremely light and has peculiar properties that deserve a specific investigation concerning the role that it can hold in nature \cite{Machet4}. As far as we can see, this minimal extension of the Standard Model is different from what other authors have been considering; it is different as a 2-Higgs doublet model \cite{BFLRSS} \cite{HHG} \cite{DiazSanchez}, and it is different in that, for a larger number of generations $N>1$, it cannot stay a 2-Higgs doublet model and should be endowed with $2N^2$ Higgs doublets. A key ingredient to account simultaneously for the different scales at play, weak and chiral, is parity doubling. It could only be uncovered through the one-to-one correspondence demonstrated in \cite{Machet1} between the Higgs fields and bilinear quark operators, and through detailed symmetry considerations. The breaking of parity is reflected here in the mass splitting of the two Higgs bosons, their ratio being precisely that of the two scales at play. At this stage, no physics ``beyond the Standard Model'' seems needed \footnote{The only hint in favor of it may be the vanishing of the masses of the two Higgs bosons at the chiral limit, which makes them appear ``like pseudo-Goldstone bosons'' (see subsection \ref{subsec:golds}).} but, since the one generation case can only be considered as a ``toy Standard Model'', this is one among the features that should be carefully scrutinized for more generations of fermions \cite{Machet3}. 
\medskip {\em \underline{Acknowledgments:} it is a great pleasure to thank O.~Babelon, M.~Capdequi-Peyran\`ere, S.~Davidson, M.~Knecht, J.~Lavalle, G.~Moultaka, P. Slavich and M.I.~Vysotsky for conversations, advice, and helping me to correct mistakes.} \newpage
\section*{Abstract} The leaves of angiosperms contain highly complex venation networks consisting of recursively nested, hierarchically organized loops. We describe a new phenotypic trait of reticulate vascular networks based on the topology of the nested loops. This phenotypic trait encodes information orthogonal to widely used geometric phenotypic traits, and thus constitutes a new dimension in the leaf venation phenotypic space. We apply our metric to a database of 186 leaves and leaflets representing 137 species, predominantly from the Burseraceae family, revealing diverse topological network traits even within this single family. We show that topological information significantly improves identification of leaves from fragments by calculating a ``leaf venation fingerprint'' from topology and geometry. Further, we present a phenomenological model suggesting that the topological traits can be explained by noise effects unique to each specimen during leaf development, which leave their imprint on the final network. This work opens the path to new quantitative identification techniques for leaves which go beyond simple geometric traits such as vein density, and it is directly applicable to other planar or sub-planar networks such as blood vessels in the brain. \section*{Author Summary} Planar reticular networks are ubiquitous in nature and engineering, formed for instance by the arterial vasculature in the mammalian neocortex, urban street grids or the vascular network of plant leaves. We use a topological metric to characterize the way loops are nested in such networks and analyze a large database of 186 leaves and leaflets, revealing for the first time that the nesting of the networks' cycles constitutes a distinct phenotypic trait orthogonal to previously used geometric features. Furthermore, we demonstrate that the information contained in the leaf topology can significantly improve specimen identification from fragments, and provide an empirical growth model that can explain much of the observed data. Our work can improve understanding of the functional significance of the various leaf vein architectures and their correlation with the environment. It can pave the way for similar analyses in diverse areas of research involving reticulate networks. \section*{Introduction} The angiosperm leaf vein network fulfills the combined requirements of efficient liquid transport within the leaf and high robustness against load fluctuations and damage, while at the same time providing structural reinforcement \cite{Katifori2010,Niklas1999,Sack2012,Sack2008}. Modern leaf vein networks evolved gradually from simple dendritic branching patterns by the introduction of anastomoses \cite{Sack2013, Roth-Nebelsick2001}, leading to leaf vascular networks that are highly reticulate, exhibiting nested, hierarchically organized vein loops. The reticulate leaf vascular system is an example of evolutionary adaptation under various constraints \cite{Noblin2008,Jensen2013a,Jensen2013,Katifori2010,McCulloh2003}. Despite some common trends, the diversity of vein morphology in dicotyledonous plants is striking (see for instance Fig.~1~a-f). Current models of vascular development in the model species \emph{Arabidopsis thaliana} predict several overlapping phases in which the leaf primordium at first mainly grows by cell division, then later by cell expansion \cite{Sack2012,Kang2004}. 
Lower order (major) veins are thought to be formed during the first phases, whereas minor veins are formed primarily during the latter, leaving an imprint in the higher order vascular system of the leaf. The morphology, anatomy, and correlations with climate of the lower order \revc{vascular} architecture have been extensively studied \cite{Wright2004,Peppe2011}, and primary and secondary vein traits can be easily quantified \cite{Sack2012}. Certain leaf traits such as vein density are closely linked to photosynthetic efficiency \cite{Brodribb2007, Boyce2009, Brodribb2010}. \revc{Links to climatic conditions and vegetation type have been proposed as well} \cite{Peppe2011, Sack2013, Wright2005, Uhl1999}. \revc{The hydraulic resistance of the whole plant is strongly affected by the leaf hydraulic resistance. The smallest veins, by virtue of their combined length and small hydraulic diameter, are responsible for the bulk of this resistance. At the same time, the smallest veins, and in particular the small free-ending veinlets, are perhaps the most crucial for water delivery \cite{Fiorin2015}.} However, the architecture of \revc{higher order vein reticulation} has been largely ignored in the literature. Other than an extensive descriptive nomenclature \cite{Ellis2009} and mainly qualitative measures \cite{Green2014}, to this day there is no quantitative work that goes beyond obvious geometric characteristics, like minor vein density, areole size, angle distribution, vascular segment length and width distribution \cite{Blonder2011,Sack2012,Bohn2002}. These characteristics by themselves are not sufficient to describe the full architecture, in particular the organization of the loops. Loops typically show a large degree of hierarchical nesting, i.e.\ larger loops composed of larger-diameter veins contain many smaller loops with smaller vein diameter (see Fig.~1~e). \reva{Although topological studies of spatial network architectures such as street networks are quite common \cite{Barthelemy2011}, a detailed} quantitative characterization of \reva{the} topological properties related to reticulation has been elusive in the past, and only recently have researchers started to seriously attack the question \cite{Katifori2012,Mileyko2012,Bohn2002}. We use ideas inspired by computational topology \cite{Zomorodian2005} to define a metric suitable to quantify the architecture of higher order venation of leaves. We apply our topological metric to a dataset of 186 leaves and leaflets, demonstrating that our characterization constitutes a new phenotypic trait in plant leaves and carries information \reva{complementary} to previously used quantities. \reva{We then show that this information can be useful in the task of identifying leaves from fragments, significantly improving identification accuracy.} We finally present a growth model that reproduces most of the observed variation in the topological traits. Our results suggest that topological and geometric venation traits are \reva{approximately} independent, and that the higher order venation topology is mainly controlled by a small set of parameters regulating noise during vein morphogenesis.
The topological venation traits we use can be employed in much broader contexts than leaves, being applicable to any (sub-)planar, anastomosing network such as blood vessels in the brain, liver or retina, foraging networks built by slime molds, lowland river networks, urban street networks or force chains in granular media, thereby possibly opening up an entire new line of research. \subsection*{Topological phenotypes} Our topological metric quantifies the hierarchical nesting of loops within the network as well as the topological lengths of tapered veins. The analysis follows an existing hierarchical decomposition algorithm \cite{Katifori2012, Mileyko2012, Bohn2005}, constructing from a weighted network a binary tree graph termed the \emph{nesting tree} which contains information about the nesting of loops. The algorithm is schematically shown in Fig.~1~g and discussed in the supplement. We stress that the method depends not on exact measurements of vein diameters but only on their relative order. Similarly, transformations which slightly alter node positions do not affect the outcome (see Fig.~1~h). Once the binary nesting tree (see Fig.~1~g) has been obtained, its structure can be quantified. Here, for each node $j$ in the nesting tree, we calculate the nesting ratio $q_j = \frac{s_j}{r_j}$ \cite{VanPelt1992}, where $r_j \geq s_j$ are the numbers of leaf nodes in the right and left subtrees of node $j$. We then define the \emph{nesting number} as a weighted average $i = \sum_j w_j q_j$, where $\sum_j w_j = 1$. We employ an unweighted nesting number $i_u$, with $w_j = 1$, and a degree-weighted nesting number $i_w$, with $w_j \propto d_j - 1 = r_j + s_j - 1$, where $d_j$ is called the \emph{subtree degree}. A high value of $i_{u,w}$ qualitatively represents graphs that are highly nested such as those in the top row of Fig.~1~i. The presence and extent of tapered veins is quantified as follows. Starting from some edge $e$, we find the next edge by taking the maximum width edge amongst all with smaller width than $e$. We count how many steps can be taken until no more edges with smaller width are adjacent, resulting in a topological length $L_e$ assigned to each edge in the network. The mean topological length $L_\mathrm{top} = \frac{1}{N_E} \sum_e L_e$, where $N_E$ is the number of edges, characterizes tapered veins in a network. Fig.~1~i shows a qualitative representation of various example network topologies using mean topological length and nesting number. Instead of using just the nesting \reva{number}, we additionally calculate pairwise topological distances between networks as the two-sample Kolmogorov-Smirnov statistic $D_{KS}$ between the cumulative distributions of nesting ratios in order to quantify the statistical similarity between nested loop topologies. \reva{Other methods to quantify the degree of topological dissimilarity between binary trees representing biological systems have been proposed on the basis of a ``tree edit distance'' \cite{Ferraro2000}. Despite its promise, this distance suffers from being dominated by differences in the size of the compared trees. In its local form \cite{Ferraro2004}, it suffers from the opposite problem, quantifying only the similarity between the $n$ most similar subtrees.
In contrast, our method is designed to capture statistical similarities between nesting trees, making it more suitable for dissimilarly sized, noisy networks.} \section*{Results} We show that the topological characteristics described above provide a new dimension in the phenotypic space of leaf venation morphology. For this, we analyze a dataset consisting of 186 leaves and leaflets from various species primarily belonging to the Burseraceae family (see \nameref{S1_Text} and \nameref{S1_Table}). Although most of the species are therefore closely related, their venation patterns show considerable diversity (see Fig.~1~a-f), rendering them a good test set for our metrics. The leaves were chemically cleared and stained to make their higher order venation network apparent \cite{Vasco2014}, then scanned at high resolution ($6400\,\mathrm{dpi}$) and vectorized in-house (see \nameref{S1_Text}). Scanning whole leaves and digitizing at high resolution is computationally expensive but necessary for this work to accurately represent the statistics of the high order veins \cite{Sack2014}. Publicly available databases of scanned specimens \cite{Das2014} contain mostly low resolution images. \subsection*{Analysis of full leaf networks} From the vectorized data, we obtained for each leaf five local geometric quantities: vein density $\sigma$ (total length of all veins/leaf area), mean distance between veins $a$, mean areole area $A$, areole density $\rho_A$, and average vein diameter weighted by length of venation between junctions $d$. The (un)weighted nesting number $i_{(u)\, w}$ was calculated from all subtrees of the nesting tree with degree $d \leq 256$ in order to remove leaf size effects for the full networks; the mean topological length was calculated from the whole network. Together, these metrics form a ``leaf venation fingerprint'' encompassing \emph{local} features of the network that can be estimated from leaf segments alone if necessary. \revc{Fig.~1~a shows the complete dataset plotted in the space of unweighted nesting number and mean topological length. We plot the most abundant genera \emph{Protium} (98 specimens in the dataset), \emph{Bursera} (21 specimens), and \emph{Parkia} (8 specimens) as different symbols. Although the dataset does not allow for firm conclusions at this taxonomic level, both \emph{Protium} and \emph{Parkia} appear to show a modest trend towards clustering around characteristic nesting numbers.} We then employed Principal Component Analysis (see Fig.~2~b) and found that together, the first two principal components explain 81\% (=52\% + 29\%) of the total variance in the dataset. Component 1 can be interpreted as containing mostly metrics derived from geometry, whereas Component 2 contains mostly metrics from topology. Topological lengths contribute roughly equally to either. \reva{Even though small correlations between them exist, this} reveals local geometrical and topological leaf traits as \reva{approximately orthogonal traits} for the description of the phenotype of leaf venation (see \nameref{S1_Text}, also for further analysis of the data in terms of latent factors). Pairs of leaves (see Fig.~2~a and Fig.~2~e,f) which are close according to the topological distance defined by the $D_\mathrm{KS}$ metric applied to the nesting ratio statistics can possess similar ``by eye'' venation traits. In the samples in Fig.~2~e,f, cycle nestedness and vein thickness are traits that appear correlated.
However, the topology of leaf venation constitutes a new phenotypic trait that provides information orthogonal to geometric traits. \subsection*{Analysis of leaf fragments} Topological information significantly helps in identifying leaf samples to species, especially when only a segment of the leaf is available. We fragmented all leaf samples in silico into equally sized segments of ca.\ $1.2\times1.2\,$cm and calculated all venation traits for the individual pieces (see \nameref{S2_Table}). Here, we thresholded the nesting ratios at subtree degree $d \leq 128$. We employed Linear Discriminant Analysis (LDA) \cite{Barber2012} to classify the fragments based on specimen membership (see also \nameref{S1_Text}). We then calculated the probability of correctly identifying a segment as belonging to one of the 186 leaves and leaflets (the accuracy, see Fig.~2~c). Using only geometrical degrees of freedom, we found a 10-fold cross-validated accuracy of \revc{$0.35$ (95\% CI: $[0.31, 0.39]$)}. Adding topology improves the accuracy to \revc{$0.54$ (95\% CI: $[0.48, 0.60]$)}. Additionally, for each pair of individual leaves in the dataset, the same procedure was applied to obtain a mean pairwise accuracy score (the probability of correctly identifying a fragment as belonging to one of two leaves). Again, using topological traits significantly improved the summary result (see Fig.~2~d and \nameref{S1_Text}). The same classification was applied towards identification of segments to species, as opposed to samples, with quantitatively similar results (see \nameref{S1_Text}). \revc{It must be noted that there can be considerable variance among leaf traits, even when comparing among specimens from a single plant --- in particular between sun and shade leaves \cite{Roth-Nebelsick2001, Scoffoni2015} --- that should be taken into account if the information is available.} \subsection*{Comparison with venation growth model} In order to explain the nesting ratio and topological length distributions measured in our dataset, we examine a developmental model for the formation of higher-order venation in which the interplay between strictly hierarchical loop genesis and random noise is the major factor affecting nestedness. \reva{Empirically, during the expansion growth phase of the leaf lamina, high order vein loops grow and are subdivided by the appearance of new veins, subsequent vein orders appearing discretely one after the other \cite{Nelson1997,Kang2004}. Our model intends to capture this phenomenological fact (see Fig.~3~a for an illustration). The} model is compatible with models of vein morphogenesis that invoke either auxin canalization \cite{Feugier2006} or mechanical instabilities \cite{Laguna2008}, or a combination. \revc{It is similar in spirit to that described in the supporting information of \cite{Laguna2008} or \cite{Perna2011} but adds fine-grained control of stochasticity.} We stipulate that each leaf is subject to a species-dependent characteristic amount of noise during development, resulting in unique characteristic statistics of minor venation patterns. The model as a whole is controlled by four dimensionless parameters (see Methods section). In Fig.~3~b,c we show the distributions of normalized areole size, mean topological lengths and nesting ratios for the same two leaves as in Fig.~2~e,f. The real distributions can be explained well by tuning two of the parameters. Thus, noise during growth of cycles can explain the observed local hierarchical nesting characteristics.
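\medskip \noindent To make the topological metrics used in this section concrete, the following is a minimal Python sketch (ours, for illustration only; it is not the published analysis pipeline of \cite{linkHD}). It assumes that the binary nesting tree is encoded as nested pairs with \texttt{None} marking a leaf node; all function names are hypothetical. \begin{verbatim}
from scipy.stats import ks_2samp

def leaf_count(tree):
    # a None entry marks a leaf node of the binary nesting tree
    if tree is None:
        return 1
    left, right = tree
    return leaf_count(left) + leaf_count(right)

def nesting_ratios(tree, acc=None):
    # collect (q_j, d_j) for every internal node j, where
    # q_j = s_j / r_j with r_j >= s_j and d_j = r_j + s_j
    if acc is None:
        acc = []
    if tree is not None:
        left, right = tree
        a, b = leaf_count(left), leaf_count(right)
        r, s = max(a, b), min(a, b)
        acc.append((s / r, r + s))
        nesting_ratios(left, acc)
        nesting_ratios(right, acc)
    return acc

def nesting_numbers(tree):
    # unweighted (w_j = const) and degree-weighted (w_j ~ d_j - 1)
    qd = nesting_ratios(tree)
    i_u = sum(q for q, d in qd) / len(qd)
    i_w = sum(q * (d - 1) for q, d in qd) / sum(d - 1 for q, d in qd)
    return i_u, i_w

def topological_distance(tree_a, tree_b):
    # two-sample Kolmogorov-Smirnov statistic D_KS between the
    # nesting-ratio samples of two networks
    qa = [q for q, d in nesting_ratios(tree_a)]
    qb = [q for q, d in nesting_ratios(tree_b)]
    return ks_2samp(qa, qb).statistic
\end{verbatim} For example, \texttt{nesting\_numbers(((None, None), (None, (None, None))))} returns $i_u$ and $i_w$ for a five-leaf tree. The degree thresholding used in the text ($d \leq 256$ for whole leaves, $d \leq 128$ for fragments) amounts to discarding pairs with $d_j$ above the cutoff before averaging.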
\reva{It should be noted that different mechanisms may underlie the organization of low order veins. Indeed, both models \cite{Fujita2006} and empirical observations \cite{Dengler2001} have found strong links between low order vein structure and leaf shape that may be connected to the overall growth pattern and developmental constraints of the lamina \cite{Couturier2009}.} \section*{Discussion} The leaf vasculature is a complex reticulate network, and properly chosen and defined topological metrics can quantify and highlight aspects of the architecture that have been ignored until now. The topological metrics presented in this work provide a new, independent dimension in the phenotypic space of leaf venation, allowing for more precise characterization of leaf features and improved identification accuracy, including identification of fragments. The extensive nomenclature for characterization of the vascular morphology \cite{Ellis2009} offers a discrete set of attributes that is mathematically insufficient to properly quantify a continuum of leaf venation phenotypes. However, this descriptive terminology can be incorporated as additional topological dimensions in the phenotypic space and, alongside the metrics presented in this work, can provide a tool to quantify inter- and intra-species diversity. In addition, we show that the local hierarchy of nested loops in the leaf venation network can be explained by very simple stochastic processes during development, pointing toward a universal mechanism governing (minor) vein morphogenesis. The topological measures we employ have possible applications that range far beyond the leaf data set explored here, being usable on any loopy complex weighted network which possesses an embedding on a surface. Examples of systems that could benefit from an analysis along the lines of this work include the blood vessels in the retina, liver or brain, anastomosing foraging networks built by slime molds and fungi, lowland river networks, human-made street networks, force chain networks in granular materials, and many more, thereby possibly opening up an entire new line of research. \section*{Materials and Methods} \subsection*{Vectorization} The extraction of the networks from the original high-resolution scans (6400 dpi) can be divided into two main steps: segmentation of the image to create a suitable binary representation, and skeletonization of the shapes. To segment the image we use a combination of Gaussian blurring to reduce noise, local histogram equalization and recombination with the original image to increase contrast, and Otsu thresholding~\cite{otsu} to find the optimal threshold for the creation of the binary image. For the skeletonization we use a vectorization technique known from optical character recognition~\cite{vectorization1,zou-yan}. The approach relies on the extraction and approximation of the foreground feature's contours using the Teh-Chin dominant point detection algorithm~\cite{dominant_points} and subsequent triangulation of the contours via constrained Delaunay triangulation~\cite{CDT}. In this way the foreground is partitioned into triangles which can be used to create a skeleton of the shape. Each triangle contributes a ``center'' point to the skeleton, determined by looking for local maxima in the Euclidean distance map~\cite{edm} of the binary image; together these center points approximate the skeleton. By looking at edges shared between two triangles, neighborhood relations can be established and an adjacency matrix can be created.
This adjacency matrix defines a graph composed of nodes (the former triangle centers) and edges (the connections between two adjacent triangles). In addition to the topology of the graph, the original geometry of the network, including coordinates of the nodes and lengths and radii of edges, is preserved and stored in the graph. The processing is done using algorithms implemented in \texttt{python}. The framework uniting all the aforementioned functionality is freely available at \cite{linkGithub}. \revb{ \subsection*{Hierarchical decomposition} A complete and detailed description of the hierarchical decomposition algorithm used to extract the nesting tree from leaf network graphs can be found in the supplement \nameref{S1_Text}. The software package used to calculate nesting numbers, topological lengths, and geometric metrics is freely available at \cite{linkHD}. } \subsection*{Modeling cycle nesting} The model starts from a single rectangular loop of veins (Fig.~3~a). The loops grow and subdivide when they reach a threshold size $A_0$ by introduction of a new vein. Not all loops subdivide at exactly the same size: the probability of subdivision as a function of areole area is a sigmoidal of width $\sigma_A$ (Fig.~3). All veins start with a fixed small width and grow linearly with time. The relative growth rate of vein lengths and widths is controlled by the nondimensional parameter $\alpha$. The areole subdivision is only approximately symmetric: the new vein is randomly positioned close to the midline of the areole, and the extent of the asymmetry is controlled by a parameter $\rho \in [0,1]$ (see \nameref{S1_Text}). After the growing leaf has reached a certain size, the simulation is terminated and random Gaussian noise with zero mean and standard deviation proportional to the parameter $f_n$ is added to the vein diameters. The model is controlled by the four dimensionless parameters $\rho$, $\beta=\sigma_A/A_0$, $\alpha$ and $f_n$. \section*{Supporting Information} \subsection*{S1 Text} \label{S1_Text} {\bf Detailed description of methods and further analysis.} Includes a description of the geometric and topological metrics used, including a more explicit statement of the hierarchical decomposition algorithm, an explanation of the leaf clearing, staining and vectorization process, and more details on the cycle growth model. Further data analysis includes comparison of our data set with earlier work, \revb{validation of the method}, and detailed results of PCA and Factor Analysis. \subsection*{S1 Table} \label{S1_Table} {\bf Leaf fingerprint database.} The complete fingerprint data extracted from the full leaf networks. \subsection*{S2 Table} \label{S2_Table} {\bf Leaf fragment fingerprint database.} The complete fingerprint data extracted from the $1.2 \times 1.2$ cm leaf fragments.
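\medskip \noindent As a complement to the model description above, here is a minimal, simplified Python sketch of its cycle-nesting mechanism (ours, purely illustrative): only the parameters $\rho$ and $\beta$ are retained, vein widths and hence $\alpha$ and $f_n$ are omitted, and the sigmoidal subdivision probability of width $\sigma_A$ is realized by drawing a Gaussian area threshold centered at $A_0$. The subdivision history is returned as a binary nesting tree whose nesting-ratio statistics can then be compared with those measured on real leaves. \begin{verbatim}
import random

def grow(area, A0=1.0, beta=0.05, rho=0.3, min_area=1e-3):
    # Recursively subdivide an areole; returns the subdivision
    # history as a binary nesting tree (None = no further split).
    if area < min_area:
        return None
    # a Gaussian threshold around A0 yields a sigmoidal subdivision
    # probability of width sigma_A = beta * A0 as a function of area
    if area < random.gauss(A0, beta * A0):
        return None
    # the new vein sits near the midline of the areole; rho in [0,1]
    # controls the maximal asymmetry of the split
    u = random.uniform(-rho, rho)
    return (grow(area * (1 + u) / 2, A0, beta, rho, min_area),
            grow(area * (1 - u) / 2, A0, beta, rho, min_area))

nesting_tree = grow(area=100.0)   # roughly 100 areoles of size ~ A0
\end{verbatim} Feeding \texttt{nesting\_tree} into the nesting-ratio sketch given earlier can be used to explore qualitatively how $\rho$ and $\beta$ shape the nesting-ratio distribution.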
\section{Introduction and main results} The introduction of the sandpile model by Bak, Tang and Wiesenfeld (BTW) gave rise to the notion of self-organized criticality, which was subsequently applied to several other models such as forest-fire models and the Bak-Sneppen model for evolution. In turn, these models serve as a paradigm for a variety of natural phenomena in which, empirically, power laws of avalanche characteristics and/or correlations are found, such as the Gutenberg-Richter law for earthquakes. See \cite{turcotte} for an extended overview. After the work of Dhar \cite{dhar}, the BTW model was later renamed `abelian sandpile model' (ASM), referring to the abelian group structure of addition operators. This abelianness has since served as the main tool of analysis for this model. A less known variant of the BTW model has been introduced by Zhang \cite{zhang}, where instead of discrete sand grains, continuous height variables are used. This lattice model is described informally as follows. Consider a finite subset $\Lambda \subset \mathbb{Z}^d$. Initially, every lattice site $i \in \Lambda$ is given an {\em energy} $0\leq E_i <E_c$, where $E_c$ is the so-called {\em critical threshold}, often chosen to be equal to 1. Then, at each discrete time step, one adds a random amount of energy, uniformly distributed on some interval $[a,b] \subset [0, E_c]$, at a randomly chosen lattice site. If the resulting energy at this site is still below the critical value, then we have arrived at the new configuration. If not, an {\em avalanche} is started, in which all unstable sites (that is, sites with energy at least $E_c$) `topple' in parallel, i.e., give a fraction $1/2d$ of their energy to each neighbor in $\Lambda$. As usual in sandpile models, upon toppling of boundary sites, energy is lost. As in the BTW model, the stabilization of an unstable configuration is performed instantaneously, i.e., one only looks at the final stable result of the random addition. In his original paper, Zhang observes, based on results of numerical simulation (see also \cite{janosi}), that for large lattices, the energy variables in the stationary state tend to concentrate around discrete values of energy; he calls this the emergence of energy `quasi-units'. Therefore, he argues that in the thermodynamic limit, the stationary dynamics should behave as in the discrete ASM. However, Zhang's model is not abelian (the next configuration depends on the order of topplings in each avalanche; see below), and thus represents a challenge from the analytical point of view. This fact is not mentioned in \cite{janosi, zhang} (see however \cite{pastor}); presumably, the usual parallel order of topplings was chosen in the simulations. After its introduction, a model of Zhang's type (the toppling rule is the same as Zhang's, but the addition is a deterministic amount larger than the critical energy) has been studied further in the language of dynamical systems theory in \cite{cessac}. The stationary distributions found for this model concentrate on fractal sets. Furthermore, in these studies, the emergence of self-organized criticality is linked to the behavior of the smallest Lyapunov exponents for large system sizes. From the dynamical systems point of view, Zhang's model is a non-trivial example of an iterated function system, or of a coupled map lattice with strong coupling.
In this paper we rigorously study Zhang's model in dimension $d=1$ with probabilistic techniques, investigating uniqueness and deriving certain properties of the stationary distribution. Without loss of generality, we take $E_c=1$ throughout the paper. In Section \ref{modeldefsection} we rigorously define the model for $d=1$. We show that in the particular case of $d=1$ and stabilizing after every addition, the topplings are in fact abelian, so that the model can be defined without specifying the order of topplings. In that section, we also include a number of general properties of stationary distributions. For instance, we prove that if the number of sites is finite, then every stationary distribution is absolutely continuous with respect to Lebesgue measure on $(0,1)$, in contrast with the fractal distributions for the model defined in \cite{cessac} (where the additions are deterministic). We then study several specific cases of Zhang's model. For each case, we prove by coupling that the stationary distribution is unique. In Section \ref{onesitesection}, we explicitly compute the stationary distribution for the model on one site, with $a=0$, by reducing it to the solution of a delay equation \cite{delay}. Our main result is in Section \ref{halftoteensection}, for the model with $a \geq 1/2$. We show that in the infinite volume limit, every one-site marginal of the stationary distribution concentrates on a non-random value, which is the expectation of the addition distribution (Theorem \ref{quasiunits}). This supports Zhang's conjecture that in the infinite volume limit, his model tends to behave like the abelian sandpile. Section \ref{halftoteensection} contains a number of technical results necessary for proving Theorem \ref{quasiunits}, but which are also of independent interest. For instance, we construct a coupling of the so-called reduction of Zhang's model to the abelian sandpile model, and we prove that any initial distribution converges exponentially fast to the stationary distribution. In Section \ref{nultoteensection}, we treat the model for $[a,b]=[0,1]$. We present simulations that indicate the emergence of quasi-units also for this case. However, since in this case there is less correspondence with the abelian sandpile model, we cannot fully prove this. We can prove that the stationary distribution is unique, and we show that if every one-site marginal of the stationary distribution tends to the same value in the infinite volume limit, and if in addition there is a certain amount of asymptotic independence, then this value is $\sqrt{1/2}$. This value is consistent with our own simulations. \section{Model definition} \label{modeldefsection} We define Zhang's model in dimension one as a discrete-time Markov process with state space $\Omega_N:= [0, 1)^{ \{1, 2, \ldots, N\} }\subset [0,\infty)^{ \{1, 2, \ldots, N\} }:=\Xi_N$, endowed with the usual sigma-algebra. We write $\eta, \xi \in \Omega_N$ for configurations of Zhang's model, and $\eta_j$ for the $j$th coordinate of $\eta$. We interpret $\eta_j$ as the amount of energy at site $j$. By ${\mathbb P}_{\eta}$, we denote the probability measure on (the usual sigma-algebra on) the path space $\Omega_N^\mathbb{N}$ for the process started in $\eta$. Likewise, we use ${\mathbb P}_{\nu}$ when the process is started from a probability measure $\nu$ on $\Omega_N$, that is, with initial configuration chosen according to $\nu$. The configuration at time $t$ is denoted by $\eta(t)$ and its $j$th component by $\eta_j(t)$.
We next describe the evolution of the process. Let $0 \leq a<b\leq 1$. At time 0 the process starts in some configuration $\eta \in \Omega_N$. For every $t=1,2,\ldots$, the configuration $\eta(t)$ is obtained from $\eta(t-1)$ as follows. At time $t$, a random amount of energy $U_{t}$, uniformly distributed on $[a,b]$, is added to a uniformly chosen site $X_t\in \{ 1,\ldots, N\}$, hence $P(X_{t} = j)=1/N$ for all $j=1,\ldots, N$. We assume that $U_t$ and $X_t$ are independent of each other and of the past of the process. If, after the addition, the energies at all sites are still smaller than 1, then the resulting configuration is in $\Omega_N$ and this is the new configuration of the process. If however after the addition the energy of site $X_t$ is at least 1 (such a site is called {\em unstable}), then this site will {\em topple}, i.e., transfer half of its energy to its left neighbor and the other half to its right neighbor. In the case of a toppling of a boundary site, this means that half of the energy disappears. The resulting configuration after one toppling may still not be in $\Omega_N$, because a toppling may give rise to other unstable sites. Toppling continues until all sites have energy smaller than 1 (i.e., until all sites are {\em stable}). This final result of the addition is the new configuration of the process in $\Omega_N$. The entire sequence of topplings after one addition is called an {\em avalanche}. We call the above model the $(N, [a,b])$-model. We use the symbol $\mathcal{T}_x(\xi)$ for the result of the toppling of site $x$ in configuration $\xi \in \Xi_N$. We write $\ensuremath{\mathcal{A}}_{u,x}(\eta)$ for the result of adding an amount $u$ at site $x$ of $\eta$, and stabilizing through topplings. It is not a priori clear that the process described above is well defined: it is not clear that every order in which we perform the various topplings leads to the same final configuration $\eta(t)$. In fact, unlike in the abelian sandpile, topplings according to Zhang's toppling rule are {\em not} abelian in general. To give an example of non-abelian behavior, let $N=2$ and $\xi = (1.2, 1.6)$. Then $\mathcal{T}_1(\mathcal{T}_2(\xi)) = \mathcal{T}_1((2,0)) = (0,1)$, whereas $\mathcal{T}_2(\mathcal{T}_1(\xi)) = \mathcal{T}_2((0, 2.2)) = (1.1,0)$. Despite this non-abelianness of certain topplings, we will now show that in the process defined above, we only encounter avalanches that consist of topplings with the abelian property. When restricted to a certain subset of $\Xi_N$, topplings are abelian, and it turns out that this subset is all we use. (In particular, the example that we just gave cannot occur in our process.) \begin{proposition} The $(N, [a,b])$-model is well defined. \end{proposition} \begin{proof} We will prove in two steps that all topplings actually encountered in the process are abelian. To this end, we first show that in Zhang's model in dimension one, we can never create two adjacent unstable sites by making one addition to a stable configuration and toppling unstable sites in any order. Let $\tilde{\Omega}_N \subset \Xi_N$ be the set of all (possibly unstable) configurations such that between every pair of unstable sites there is at least one empty site, and such that the energy of any unstable site is smaller than 2. It is clear that by making an addition to a stable configuration, we arrive in $\tilde{\Omega}_N$.
We show that, for every configuration $\tilde{\eta} \in \tilde{\Omega}_N$, the resulting configuration after toppling of one of the unstable sites is still in $\tilde{\Omega}_N$. Introducing some notation, we call a site $j$ of a configuration $\eta$ $$ \begin{array}{lll} \textrm{empty} & \textrm{if} & \eta_j = 0,\\ \textrm{nonempty} & \textrm{if} & \eta_j \in (0,1),\\ \textrm{unstable} & \textrm{if} & \eta_j \geq 1.\\ \end{array} $$ An unstable site $i$ of $\tilde{\eta}$ can have either two empty neighbors (first case), two nonempty neighbors (second case) or one nonempty and one empty neighbor (third case). In the first case, toppling of site $i$ cannot create a new unstable site, since $\frac 12 \tilde{\eta}_i < 1$, but $i$ itself becomes empty. Thus, if there were unstable sites to the left and to the right of $i$, after the toppling there still is an empty site between them. In the second and third case, the nonempty neighbor(s) of $i$ can become unstable. Suppose the left neighbor $i-1$ becomes unstable. Directly to its right, at $i$, an empty site is created. To its left, there was either no unstable site, or first an empty site and then somewhere an unstable site. The empty site cannot have been site $i-1$ itself, because to have become unstable it must have been nonempty. For the right neighbor the same argument applies. Therefore, the new configuration is still in $\tilde{\Omega}_N$. So far, we showed that in the process of stabilization after an addition to a stable configuration, only configurations in $\tilde{\Omega}_N$ are created. In the second step of the proof we show that, if $\eta \in \tilde{\Omega}_N$ and $i$ and $j$ are unstable sites in $\eta$, then \begin{equation} \mathcal{T}_i(\mathcal{T}_j(\eta)) = \mathcal{T}_j(\mathcal{T}_i(\eta)). \label{topplingsseq} \end{equation} To prove this, we fix an arbitrary site $x$ and consider the possible positions of $x$ relative to $i$ and $j$. If $x$ is not a neighbor of either $i$ or $j$, then toppling of $i$ or $j$ does not change $\eta_x$, so that (\ref{topplingsseq}) is obvious at $x$. If $x$ is equal to $i$ or $j$, or neighbor to only one of them, then only one of the topplings changes $\eta_x$, so that again (\ref{topplingsseq}) is obvious. Finally, if $x$ is a neighbor of both $i$ and $j$, then, since $\eta \in \tilde{\Omega}_N$, $x$ must be empty before the topplings at $i$ and $j$. We then have $$ \mathcal{T}_j(\eta)_x = \frac 12 \eta_j, $$ so that $$ \mathcal{T}_i(\mathcal{T}_j(\eta))_x = \frac 12 \eta_j + \frac 12 \eta_i = \mathcal{T}_j(\mathcal{T}_i(\eta))_x. $$ Therefore, also in this last case (\ref{topplingsseq}) is true. Having established that the topplings of two unstable sites commute, it follows that the final stable result after an addition is independent of the order in which we topple, and hence $\ensuremath{\mathcal{A}}_{u,x}(\eta(t))$ is well-defined; see \cite{meester}, Section 2.3 for a proof of this latter fact. \end{proof} \begin{remark} \label{wavedef} {\em It will be convenient to order the topplings in so-called {\it waves} \cite{priezzhev}. Suppose the addition of energy at time $t$ takes place at site $k$ and makes this site unstable. In the first wave, we topple site $k$ and then all other sites that become unstable, {\it but we do not topple site $k$ again}. After this wave only site $k$ can possibly be unstable. If site $k$ is unstable after this first wave, the second wave starts with toppling site $k$ (for the second time) and then all other sites that become unstable, leaving site $k$ alone, until we reach a configuration in which all sites are stable.
This is the state of the process at time $t$. It is easy to see that in each wave, every site can topple at most once.} \end{remark} \section{Preliminaries and technicalities} In this section, we discuss a number of technical results which are needed in the sequel, and which are also interesting in their own right. The section is subdivided into three subsections, dealing with connections to the abelian sandpile, avalanches, and nonsingularity of the marginals of stationary distributions, respectively. \subsection{Comparison with the abelian sandpile model} \label{abeliansection} We start by giving some background on the abelian sandpile model in one dimension. In the abelian sandpile model on a finite set $\Lambda \subset \mathbb{Z}$, the amount of energy added is a nonrandom quantity: each time step one grain of sand is added to a random site. When a site is unstable, i.e., it contains at least two grains, it topples by transferring one grain of sand to each of its two neighbors (at the boundary grains are lost). The abelian addition operator is as follows: add a particle at site $x$ and stabilize by toppling unstable sites, in any order. We denote this operator by $a_x:\{0,1\}^{\Lambda}\to \{0,1\}^{\Lambda}$. For the toppling of site $x$ in the abelian sandpile model, we use the symbol $T_x$. Abelian sandpiles have some convenient properties \cite{dhar}: topplings on different sites commute, addition operators commute, and the stationary measure on finitely many sites is the uniform measure on the set of so-called {\em recurrent} configurations. Recurrent (or {\em allowed}) configurations are characterized by the fact that they do not contain a forbidden subconfiguration (FSC). A FSC is defined as a restriction of $\eta$ to a subset $W$ of $\Lambda$, such that $\eta_x$ is less than the number of neighbors of $x$ in $W$, for all $x \in W$. In \cite{priezzhev}, a proof can be found that a FSC cannot be created by an addition or by a toppling. In the one-dimensional case on $N$ sites, the abelian sandpile model behaves as follows. Sites are either empty, containing no grains, or full, containing one grain. When an empty site receives a grain, it becomes full, and when a full site receives a grain, it becomes unstable. In the latter case, the configuration changes in the following manner. Suppose the addition site was $x$. We denote by $i$ the distance to the first empty site to the left of $x$. If there is no empty site to the left, then $i-1$ is the distance to the boundary. $j$ is defined similarly, but now to the right. After stabilization, the sites in $\{x-i,\ldots,x+j\} \cap \{1,\ldots,N\}$ are full, except for a new empty site at $x-i+j$. Only sites in $\{x-i,\ldots,x+j\} \cap \{1,\ldots,N\}$ have toppled. The number of topplings of each site is equal to the minimum of its distances to the endsites of the avalanche. For example, boundary sites can never topple more than once in an avalanche. These results follow straightforwardly from working out the avalanche. The recurrent configurations are those with at most one empty site; a FSC in the one-dimensional case is a subset of $\Lambda$ of more than one site, with empty sites at its boundary. Here is an example of how a non-recurrent state on 11 sites relaxes through topplings. An addition was made to the 7th site; underlined sites are the sites that topple. The topplings are ordered into waves (see Remark \ref{wavedef}).
In the example, the second wave starts on the 5th configuration: \begin{eqnarray*} 110111\underline{2}1101 &\to& 11011\underline{2}0\underline{2}101 \to 1101\underline{2}020\underline{2}01 \to 110\underline{2}0121011 \to 111011\underline{2}1011 \\ & \to & 11101\underline{2}0\underline{2}011 \to 1110\underline{2}020111 \to 111101\underline{2}0111 \to 11110\underline{2}01111 \\ &\to& 11111011111. \end{eqnarray*} To compare Zhang's model to the abelian sandpile, we label the different states of a site $j\in\{1,\ldots,N\}$ in $\eta \in \tilde{\Omega}_N$ as follows: \begin{equation} \begin{array}{lll} \textrm{empty} (0) & \textrm{if} & \eta_j = 0,\\ \textrm{full} (1) & \textrm{if} & \eta_j \in [\frac 12,1),\\ \textrm{unstable} (2) & \textrm{if} & \eta_j \geq 1,\\ \textrm{anomalous} (a) & \textrm{if} & \eta_j \in (0,\frac 12).\\ \end{array} \label{reduction} \end{equation} \begin{definition} The {\em reduction} of a configuration $\eta\in \tilde{\Omega}_N$ is the configuration denoted by $\ensuremath{\mathcal{R}}(\eta)\in \{0,1,2,a\}^{\{1,\ldots,N\}}$ corresponding to $\eta$ by (\ref{reduction}). \label{reductiondef} \end{definition} For general $0 \leq a < b \leq 1$, we have the following result. \begin{proposition} For any starting configuration $\eta\in\Omega_N$, there exists a random variable $T\geq 0$ with $P(T < \infty)=1$ such that for all $t \geq T$, $\eta(t)$ contains at most one empty or anomalous site. Moreover, for $t\geq T$, given that $\eta (t)$ contains an empty site $x_t$, the distribution of this site is uniform on $\{ 1,\ldots,N\}$. \label{compare} \end{proposition} To prove this proposition, we first introduce FSC's for Zhang's model. We define a FSC in Zhang's model in one dimension as the restriction of $\eta$ to a subset $W$ of $\{1,\ldots, N\}$, in such a way that $2\eta_j$ is less than the number of neighbors of $j$ in $W$, for all $j \in W$. From here on, we will denote the number of neighbors of $j$ in $W$ by $\mbox{deg}_W(j)$. To distinguish between the two models, we will from now on call the above a {\em Zhang-FSC}, and the definition given in Section \ref{abeliansection} an {\em abelian-FSC}. From the definition, it follows that a Zhang-FSC in a stable configuration is a restriction to a subset of more than one site, with the boundary sites either empty or anomalous. Note that according to this definition, a stable configuration without Zhang-FSC's can be equivalently described as a configuration with at most one empty or anomalous site. \begin{lemma} A Zhang-FSC cannot be created by an addition in Zhang's model. \label{fsc} \end{lemma} \begin{proof} The proof is similar to the proof of the corresponding fact for abelian-FSC's, which can be found for instance in \cite{meester}, Section 5. We suppose that $\eta(t)$ does not contain a Zhang-FSC, and that an addition was made at site $x$. If the addition caused no toppling, then it cannot create a Zhang-FSC, because no site decreased its energy. Suppose therefore that the addition caused a toppling in $x$. Then for each neighbor $y$ of $x$ $$ \mathcal{T}_x(\eta)_y \geq \eta_y + \frac 12, $$ so that $2\mathcal{T}_x(\eta)_y \geq 2\eta_y + 1$. Also $\mathcal{T}_x(\eta)_x = 0$, and all other sites are unchanged by the toppling. We will now derive a contradiction. Suppose the toppling created a Zhang-FSC, on a subset which we call $W$. It is clear that $x$ must then be in $W$, because it is the only site that decreased its energy by the toppling. For all $j \in W$, we should have that $2\mathcal{T}_x(\eta)_j < \mbox{deg}_W(j)$.
This means that for all neighbors $y$ of $x$ in $W$, we have $2\eta_y < \mbox{deg}_W(y) - 1$, and for all other $j \in W$ we have $2\eta_j < \mbox{deg}_W(j)$. From these inequalities it follows that $W\setminus \{x\}$ was already a Zhang-FSC before the toppling, which is not possible, because we supposed that $\eta(t)$ contained no Zhang-FSC. By the same argument, further topplings cannot create a Zhang-FSC either, and the proof is complete. \end{proof} \begin{remark} {\em We have not defined Zhang's model in dimension $d>1$, because in that case the resulting configuration of stabilization through topplings is not independent of the order of topplings. But since the proof above only discusses the result of one toppling, Lemma \ref{fsc} remains valid for any choice of order of topplings. The proof is extended simply by replacing the factor 2 by $2d$.} \end{remark} \medskip\noindent {\it Proof of Proposition \ref{compare}.} If $\eta$ already contains at most one non-full, i.e., empty or anomalous site, then it contains no Zhang-FSC's, and the proposition follows. Suppose therefore that at some time $t$, $\eta(t)$ contains $M(t)$ non-full sites, with $1<M(t) \leq N$. We denote the positions of the non-full sites of $\eta(t)$ by $Y_i(t)$, $i = 1, \ldots, M(t)$, and we will show that $M(t)$ is nonincreasing in $t$, and decreases to 1 in finite time. Note that for all $1 \leq i<j\leq M(t)$, the restriction of $\eta(t)$ to $\{Y_i(t),Y_i(t)+1,\ldots,Y_j(t)\}$ is a Zhang-FSC. At time $t+1$, we have the following two possibilities. Either the addition causes no avalanche, in which case $M(t+1) \leq M(t)$, or it causes an avalanche. We will call the set of sites that change in an avalanche (that is, all sites that topple at least once, together with their neighbors) the {\em range} of the avalanche. We first show that if the range at time $t+1$ contains a site $y \in \{Y_i(t),\ldots,Y_{i+1}(t)\}$ for some $i$, then $M(t+1) < M(t)$. Suppose there is such a site. Then, since $\{Y_i(t)+1,\ldots,Y_{i+1}(t)-1\}$ contains only full sites, all sites in this subset will topple, and after stabilization of this subset, it will not contain a Zhang-FSC. In other words, in this subset at most one non-full site is created. But since $Y_i(t)$ and $Y_{i+1}(t)$ received energy from a toppling neighbor, they are no longer empty or anomalous. Therefore, $M(t+1) < M(t)$. If there is no such site, then the range is contained in $\{1,\ldots,Y_1(t)\}$ or in $\{Y_{M(t)}(t),\ldots,N\}$. With the same reasoning as above, we can conclude that in these cases, $Y_1(t+1) < Y_1(t)$, resp.\ $Y_{M(t)}(t+1) > Y_{M(t)}(t)$. Thus, $M(t)$ strictly decreases at every time step where an avalanche contains topplings between two non-full sites. As long as there are at least two non-full sites, such an avalanche must occur eventually: we cannot make infinitely many additions without causing topplings, and we cannot cause infinitely many avalanches at $x < Y_1(t)$ or $x > Y_{M(t)}(t)$ without decreasing $M(t)$, since after each such avalanche, these non-full sites `move' closer to the boundary. \qed \medskip \noindent In the case that $a \geq 1/2$, we can further specify some characteristics of the model. We prove that for any initial configuration, after at most $N(N-1)$ time steps there is at most one empty site, and all other sites are full, i.e., there are no anomalous sites. We will call such configurations {\em regular}. \begin{proposition} Suppose $a \geq \frac12$.
Then \begin{enumerate} \item for any initial configuration $\eta$, for all $t \geq N(N-1)$, $\eta(t)$ is regular, \item for every stationary distribution $\mu$, and for all $i \in \{1,\ldots,N\}$, $$ \mu(\eta_i =0) = \frac{1}{N+1}. $$ \end{enumerate} \label{compare2} \end{proposition} In words, this proposition states that if $a \geq 1/2$, then every stationary distribution concentrates on regular configurations. Moreover, the stationary probability that a certain site $i$ is empty does not depend on $i$. Note that as a consequence, the stationary probability that all sites are full is also $\frac 1{N+1}$. To prove this proposition, we need the following lemma. In words, it states that if $a \geq 1/2$ and $\eta$ contains no anomalous sites, then the reduction of Zhang's model (according to Definition \ref{reductiondef}) behaves just as the abelian sandpile model. \begin{lemma} For all $u \in [\frac12,1)$, for all $\eta\in\Omega_N$ which do not contain anomalous sites, and for all $x\in \{ 1,\ldots,N\}$, \begin{equation} \ensuremath{\mathcal{R}} (\ensuremath{\mathcal{A}}_{u,x}(\eta))= a_x (\ensuremath{\mathcal{R}} (\eta)), \end{equation} where $a_x$ is the addition operator of the abelian sandpile model. In both avalanches, corresponding sites topple the same number of times. \label{comparelemma} \end{lemma} \begin{proof} Under the conditions of the lemma, site $x$ can be either full or empty. If $x$ is empty, then upon the addition of $u\geq \frac12$ it becomes full. No topplings follow, so that in that case we directly have $\ensuremath{\mathcal{R}} (\ensuremath{\mathcal{A}}_{u,x}(\eta))= a_x (\ensuremath{\mathcal{R}} (\eta))$. If $\eta$ is such that site $x$ is full, then upon addition it becomes unstable. We denote by $\tilde{\eta}$ the configuration after the addition, but before any topplings. To check that in that case $\ensuremath{\mathcal{R}} (\ensuremath{\mathcal{A}}_{u,x}(\eta))= a_x (\ensuremath{\mathcal{R}} (\eta))$, we only need to prove $\ensuremath{\mathcal{R}}(\mathcal{T}_x(\tilde{\eta})) = T_x(\ensuremath{\mathcal{R}} (\tilde{\eta}))$, with $\ensuremath{\mathcal{R}}(\tilde{\eta})_x = 2$, since we already know that in both models, the final configuration after one addition is independent of the order of topplings. In $\mathcal{T}_x(\tilde{\eta})$, site $x$ will be empty. This corresponds to the abelian toppling, because site $x$ contained two grains after the addition, and by toppling it gave one to each neighbor. In $\mathcal{T}_x(\tilde{\eta})$, the energy of the neighbors of $x$ is their energy in $\eta$, plus at least $\frac 12$. Thus the neighbors of site $x$ will in $\mathcal{T}_x(\tilde{\eta})$ be full if they were empty, or unstable if they were full. Both correspond to the abelian toppling, where the neighbors of $x$ received one grain. \end{proof} \noindent \textit{Proof of Proposition \ref{compare2}}. To prove part (1), we note that any amount of energy that a site can receive during the process, i.e., either an addition or half the content of an unstable neighbor, is at least $1/2$. Thus, anomalous sites cannot be created in the process. Anomalous sites can however disappear, either by receiving an addition, or, as we have seen in the proof of Proposition \ref{compare}, when they are in the range of an avalanche. When we make an addition of at least $1/2$ to a configuration with more than one non-full site, then either the number of non-full sites strictly decreases, or one of the outer non-full sites moves at least one step closer to the boundary.
We note that $\eta$ contains at most $N$ non-full sites, and the distance to the boundary is at most $N-1$. When finally there is only one non-full site, in the next time step it must either become full or be in the range of an avalanche. Thus, there is a random time $T'\leq N(N-1)$ such that $\eta(T')$ is regular for the first time, and as anomalous sites cannot be created, by Proposition \ref{compare}, $\eta(t)$ is regular for all $t \geq T'$. For $t \geq T'$, $\eta(t)$ satisfies the condition of Lemma \ref{comparelemma}. This means that the stationary distribution of the reduction of Zhang's model must coincide with that of the abelian sandpile model. As we mentioned in Section \ref{abeliansection}, this is the uniform measure on all configurations with at most one empty site. This proves part (2). \qed \subsection{Avalanches in Zhang's model} We next describe in full detail the effect of an avalanche, started by an addition to a configuration $\eta(t)$ in Zhang's model. Let $\mathcal{C}(t+1)$ be the range of this avalanche. Recall that we defined the range of an avalanche as the set of sites that change their energy at least once in the course of the avalanche (that is, all sites that topple at least once, together with their neighbors). We denote by $\mathcal{T}(t+1)$ the collection of sites that {\em topple} at least once in the avalanche. Finally, $\mathcal{C}'(t+1) \subset \mathcal{C}(t+1)$ denotes the collection of anomalous sites that change, but do not topple in the avalanche. During the avalanche, the energies of sites in the range, as well as $U_{t+1}$, get redistributed through topplings in a rather complicated manner. By decomposing the avalanche into waves (see Remark \ref{wavedef}), we prove the following properties of this redistribution. \begin{proposition}\label{belprop} Suppose an avalanche is started by an addition at site $x$ to configuration $\eta(t)$. For all sites $j$ in $\mathcal{C}(t+1)$, there exist $F_{ij} = F_{ij}(\eta(t),x,U_{t+1})$ such that we can write \begin{equation} \eta_j(t+1) = \sum_{i\in\mathcal{T}(t+1)} F_{ij}\eta_i(t) + F_{xj} U_{t+1} + \eta_j(t){\large \bf{1}}_{j \in \mathcal{C}'(t+1)}, \label{efjes} \end{equation} with \begin{enumerate} \item \begin{equation} \label{efjes2} F_{xj}+ \sum_{i\in\mathcal{T}(t+1)} F_{ij} = \ensuremath{\mathcal{R}}(\eta(t+1))_j; \end{equation} \item for all $j \in \mathcal{C}(t+1)$ such that $\eta_j(t+1) \neq 0$, $$ F_{xj} \geq 2^{-\lceil 3N/2\rceil}; $$ \item for all $j \in \mathcal{C}(t+1)$ such that $\eta_j(t+1) \neq 0$, $j \geq x$, we have $$ F_{x,j+1} \leq F_{xj}; $$ and similarly, $F_{x,j-1} \leq F_{xj}$ for $j\leq x$. \end{enumerate} \label{factortjes} \end{proposition} In words, we can write the new energy of each site in the range of the avalanche at time $t+1$ as a linear combination of energies at time $t$ and the addition $U_{t+1}$, in such a way that the prefactors sum up to 1 or 0. Furthermore, every nonempty site in the range receives a positive fraction of at least $2^{-\lceil 3N/2\rceil}$ of the addition. These received fractions are such that larger fractions are found closer to the addition site. We will need this last property in the proof of Theorem \ref{quasiunits}. \medskip\noindent {\it Proof of Proposition \ref{factortjes}.} We start with part (1). First, we decompose the avalanche started at site $x$ into waves. We index the waves with $k = 1, \ldots, K$, and write out explicitly the configuration after wave $k$ in terms of the configuration after wave $k-1$.
The energy of site $i$ after wave $k$ is denoted by $\tilde{\eta}_{i,k}$; we use the tilde to emphasize that these energies are not really encountered in the process. We define $\tilde{\eta}_{i,0} = \eta_i(t)+U_{t+1} {\large \bf{1}}_{i=x}$; note that $\tilde{\eta}_{i,K} = \eta_i(t+1)$. In each wave, all participating sites topple only once. We call the outermost sites that toppled in wave $k$, the {\em endsites} of this wave, and we denote them by $M_k$ and $M'_k$, with $M_k > M'_k$. For the first wave, this is either a boundary site, or the site next to the empty or anomalous site that stops the wave. Thus, $M_1$ and $M'_1$ depend on $\eta(t)$, $x$ and $U_{t+1}$. All further waves are stopped by the empty sites that were created when the endsites of the previous wave toppled, so that for each $k$, $M_{k+1} = M_k -1$ and $M'_{k+1} = M'_k +1$. In every wave but the last, site $x$ becomes again unstable. Only in the last wave, $x$ is an endsite, so that at most one of its neighbors topples. In wave $k$, first site $x$ topples, transferring half its energy, that is, $\frac 12 \tilde{\eta}_{x,k-1}$, to each neighbor. Then, if $x$ is not an endsite, both its neighbors topple, transferring half of their current energy, that is, $\frac 12 \tilde{\eta}_{x\pm 1,k-1}+ \frac 14 \tilde{\eta}_{x,k-1}$, to their respective neighbors. Site $x$ is then again unstable, but it does not topple again in this wave. Thus, the topplings propagate away from $x$ in both directions, until the endsites are reached. Every toppling site in its turn transfers half its current energy, including the energy received from its toppling neighbor, to both its neighbors. Writing out all topplings leads to the following expression, for all sites $i \geq x$. A similar expression gives the updated energies for the sites with $i < x$. Note that for every $k>1$, $\tilde{\eta}_{M_k+1,k-1} = 0$. Only when $k=1$, it can be the case that site $M_1+1$ was anomalous, so that $\tilde{\eta}_{M_1+1,0} >0$. \begin{eqnarray} \label{avalanche} \tilde{\eta}_{x,k} & = & \left(\frac 12 \tilde{\eta}_{x+1,k-1}+ \frac 14 \tilde{\eta}_{x,k-1}\right){\large \bf{1}}_{M_k>x}+\left(\frac 12 \tilde{\eta}_{x-1,k-1}+ \frac 14 \tilde{\eta}_{x,k-1}\right){\large \bf{1}}_{M'_k<x}, \nonumber\\ \tilde{\eta}_{i,k} & = & \sum_{n=0}^{i+1} \frac{1}{2^{i+2-n}} \tilde{\eta}_{n,k-1}, \mbox{for } i=x+1,\ldots, M_k-1,\nonumber\\ \tilde{\eta}_{M_k,k} & = & 0,\\ \tilde{\eta}_{M_k+1,k} & = & \left\{ \begin{array}{lll} \tilde{\eta}_{M_k-1,k}+\tilde{\eta}_{M_k+1,k-1} & \mbox{if}& M_k \geq x+2,\nonumber \\ \frac 12 \tilde{\eta}_{x+1,k-1}+\frac 14 \tilde{\eta}_{x,k-1}+\tilde{\eta}_{x+2,k-1} & & M_k = x+1,\\ \frac 12 \tilde{\eta}_{x,k-1}+ \tilde{\eta}_{x+1,k-1} & & M_k = x.\\ \end{array} \right. \end{eqnarray} \medskip\noindent We write for all $j \in \mathcal{C}(t+1)$, with $f_{ij}(k)$ implicitly defined by the coefficients in (\ref{avalanche}), $$ \tilde{\eta}_{j,k} = \sum_{i\in\mathcal{T}(t+1)} f_{ij}(k)\tilde{\eta}_{i,k-1}. $$ Since we made an addition to a stable configuration, we only encounter configurations in $\tilde{\Omega}_N$. From a case by case analysis of (\ref{avalanche}), we claim that for all $j \in \mathcal{C}(t+1)$ we have \begin{equation} \label{ronald} \ensuremath{\mathcal{R}}(\tilde{\eta}_{j,k}) = \sum_{i\in\mathcal{T}(t+1)} f_{ij}(k)\ensuremath{\mathcal{R}}(\tilde{\eta}_{i,k-1}); \end{equation} the reader can verify this for all cases. 
To prove the proposition, we start with $j\in \mathcal{C}'(t+1)$, for which we have $$ \eta_j(t+1) = \tilde{\eta}_{j,1} = \sum_{i\in\mathcal{T}(t+1)} f_{ij}(1)\tilde{\eta}_{i,0} + \eta_j(t), $$ which is \eqref{efjes} with $F_{ij} = f_{ij}(1)$. We also have, according to (\ref{ronald}), \begin{eqnarray*} \ensuremath{\mathcal{R}}(\eta_j(t+1)) & = & \sum_{i\in\mathcal{T}(t+1)} f_{ij}(1)\ensuremath{\mathcal{R}}(\tilde{\eta}_{i,0})\\ &=& f_{xj}(1) + \sum_{i\in\mathcal{T}(t+1)} f_{ij}(1), \end{eqnarray*} since a site in $\mathcal{C}'(t+1)$ becomes full in the avalanche. This proves (\ref{efjes2}) for such sites. For all other sites in $\mathcal{C}(t+1)$, we use induction in $k$. For wave $k-1$, we make the induction hypothesis that \begin{equation} \tilde{\eta}_{j,k-1} = \sum_{m\in\mathcal{T}(t+1)} F_{mj}(k-1)\eta_m(t) + F_{xj}(k-1)U_{t+1}, \label{hyp1} \end{equation} with \begin{equation} \sum_{m\in\mathcal{T}(t+1)}F_{mj}(k-1)+F_{xj}(k-1) = \ensuremath{\mathcal{R}}(\tilde{\eta}_{j,k-1}). \label{hyp2} \end{equation} For $k$ we then obtain \begin{eqnarray*} \tilde{\eta}_{j,k} &=& \sum_{i\in\mathcal{T}(t+1)} f_{ij}(k)\tilde{\eta}_{i,k-1}\\ &=& \sum_{m\in\mathcal{T}(t+1)} \sum_{i\in\mathcal{T}(t+1)}F_{mi}(k-1) f_{ij}(k)\eta_m(t) + \sum_{i\in\mathcal{T}(t+1)}f_{ij}(k) F_{xi}(k-1)U_{t+1}. \end{eqnarray*} We also have \begin{eqnarray*} \ensuremath{\mathcal{R}}(\tilde{\eta}_{j,k}) &=& \sum_{i\in\mathcal{T}(t+1)}f_{ij}(k) \ensuremath{\mathcal{R}}(\tilde{\eta}_{i,k-1})\\ &=& \sum_{i\in\mathcal{T}(t+1)}f_{ij}(k)\left[ \sum_{m\in\mathcal{T}(t+1)}F_{mi}(k-1)+F_{xi}(k-1)\right]. \end{eqnarray*} Hence, if we define $$ F_{mj}(k) = \sum_{i\in\mathcal{T}(t+1)}f_{ij}(k)F_{mi}(k-1), $$ then (\ref{hyp1}) and (\ref{hyp2}) are also true for wave $k$. For $k-1=0$, the hypothesis is also true, with $F_{mi}(0) = {\large \bf{1}}_{m=i}$. If we define $F_{ij}:= F_{ij}(K)$, then the first part of the proposition follows. To prove part (2) of the proposition, we derive a lower bound for $F_{xj}$. The number $K$ of waves in an avalanche is equal to the minimum of the distances of the addition site to the endsites, leading to the upper bound $K \leq \lceil N/2 \rceil$. After the first wave, (\ref{avalanche}) gives for all nonempty $j \neq x$, $F_{xj}(1) \geq (\frac 12)^{N+1}$. At the start of the next wave, the fraction of $U_{t+1}$ present at $x$ is equal to $F_{xx}(1) = \frac 12$. Hence, after the second wave, even if we ignore all fractions of $U_{t+1}$ on sites other than $x$, we still have, again by (\ref{avalanche}), $F_{xj}(2) > \frac 12(\frac 12)^{N+1}$. So if before each wave we always ignore all fractions of $U_{t+1}$ on sites other than $x$, and if we assume the maximum number of waves, then we arrive at a lower bound for nonempty sites $j$: $$ F_{xj} \geq (\frac 12)^{\lceil N/2\rceil-1} (\frac 12)^{N+1} \geq 2^{-\lceil 3N/2\rceil}. $$ To prove part (3) of the proposition (we only discuss the case $j\geq x$, since by symmetry, the case $j \leq x$ is similar), we show that for every $k \in \{1,\dots,K\}$, \begin{equation} F_{xx}(k)>F_{x,x+1}(k)> \dots >F_{x,M_k-1}(k) = F_{x,M_k+1}(k), \label{eis1} \end{equation} and \begin{equation} F_{x,M_k+1}(k) \geq F_{x,M_{k-1}+1}(k-1). \label{eis2} \end{equation} This is sufficient to prove part (3), since for every $k$, site $M_k+1$ does not change anymore in the waves $k+1, \ldots, K$.
After the first wave, we have from (\ref{avalanche}) that $$ \frac 12 F_{xx}(1)>F_{x,x+1}(1)> \cdots >F_{x,M_1-1}(1) = F_{x,M_1+1}(1), $$ so that (\ref{eis1}) and (\ref{eis2}) are satisfied after the first wave. For all other waves except the last, we apply induction. Assume that after wave $k-1$, for $k<K$, \begin{equation} \frac 12 F_{xx}(k-1)>F_{x,x+1}(k-1)> \dots >F_{x,M_{k-1}-1}(k-1) = F_{x,M_{k-1}+1}(k-1). \label{hyp3} \end{equation} We have seen that this hypothesis is true after the first wave. We rewrite (\ref{avalanche}), for every $k<K$ (so that $M_k>x+1$ and $F_{x,x+1}(k-1) = F_{x,x-1}(k-1)$), as follows: \begin{eqnarray} F_{xx}(k) &= & F_{x,x+1}(k-1) + \frac 12 F_{xx}(k-1), \nonumber\\ F_{x,x+1}(k) &= & \frac 12 F_{x,x+2}(k-1) + \frac 14 F_{xx}(k), \nonumber\\ F_{x,x+i}(k) &= & \frac 12 F_{x,x+i+1}(k-1) + \frac 12 F_{x,x+i-1}(k) \hspace{1cm} \mbox{for $i = 2, \ldots, M_k-x-1$},\nonumber\\ F_{xM_k}(k) &= & 0, \nonumber\\ F_{x,M_k+1}(k) &= & F_{x,M_k-1}(k). \label{plakjesexpressie} \end{eqnarray} From (\ref{plakjesexpressie}) and (\ref{hyp3}), we find the following inequalities, each one following from the previous one: $$ F_{xx}(k) = F_{x,x+1}(k-1) + \frac 12 F_{xx}(k-1) < \frac 12 F_{xx}(k-1) + \frac 12 F_{xx}(k-1) = F_{xx}(k-1), $$ $$ F_{x,x+1}(k) = \frac 12 F_{x,x+2}(k-1) + \frac 14 F_{xx}(k) < \frac 12 F_{x,x+1}(k-1) + \frac 14 F_{xx}(k-1) = \frac 12 F_{xx}(k), $$ $$ F_{x,x+2}(k) = \frac 12 F_{x,x+3}(k-1) + \frac 12 F_{x,x+1}(k) < \frac 12 F_{x,x+2}(k-1) + \frac 14 F_{xx}(k) = F_{x,x+1}(k). $$ For all $i = 2, \ldots, M_k-x-1$, if $F_{x,x+i}(k) < F_{x,x+i-1}(k)$ then \begin{equation} F_{x,x+i+1}(k) = \frac 12 F_{x,x+i+2}(k-1) + \frac 12 F_{x,x+i}(k) < \frac 12 F_{x,x+i+1}(k-1) + \frac 12 F_{x,x+i-1}(k) = F_{x,x+i}(k). \label{horinductie} \end{equation} Since $F_{x,x+i}(k) < F_{x,x+i-1}(k)$ is true for $i=2$, (\ref{hyp3}) follows for wave $k$, and (\ref{eis1}) is proven for every $k<K$. Moreover, we have $$ F_{x,M_k+1}(k) = F_{x,M_k-1}(k) = \frac 12 F_{x,M_k}(k-1) + \frac 12 F_{x,M_k-2}(k). $$ With the above derived $F_{x,M_k-1}(k) < F_{x,M_k-2}(k)$, it follows that $F_{x,M_k}(k-1) < F_{x,M_k-2}(k)$, so that $$ F_{x,M_k-1}(k) > F_{x,M_k}(k-1) = F_{x,M_{k-1}+1}(k-1), $$ which is (\ref{eis2}). Finally we discuss the last wave. In case $M_K = x$ we have \begin{eqnarray*} F_{xx}(K) & =& 0, \\ F_{x,x+1}(K) & = & \frac 12 F_{xx}(K-1) = F_{x,x+2}(K-1). \\ \end{eqnarray*} In case $M_K = x+1$ we have \begin{eqnarray*} F_{xx}(K) & = & \frac 12 F_{x,x+1}(K-1) + \frac 14 F_{xx}(K-1), \\ F_{x,x+1}(K) & = & 0, \\ F_{x,x+2}(K) & = & F_{xx}(K) = \frac 12 F_{x,x+1}(K-1) + \frac 14 F_{xx}(K-1) >F_{x,x+1}(K-1).\\ \end{eqnarray*} For all $M_K>x+1$ we have \begin{eqnarray*} F_{xx}(K) & = &\frac 12 F_{x,x+1}(K-1) + \frac 14 F_{xx}(K-1),\\ F_{x,x+1}(K) & = &\frac 12 F_{x,x+2}(K-1) + \frac 14 F_{xx}(K-1) < F_{xx}(K).\\ \end{eqnarray*} We can now use (\ref{horinductie}) as above for $i = 1, \ldots, M_K-x-1$, so that (\ref{eis1}) and (\ref{eis2}) follow. \qed \subsection{Absolute continuity of one-site marginals of stationary distributions} Consider a one-site marginal $\nu_j$ of any stationary distribution $\nu$ of Zhang's sandpile model. It is easy to see that $\nu_j$ will have an atom at 0, because after each avalanche there remains at least one empty site. It is intuitively clear that there can be no other atoms: since all additions are uniformly distributed, it seems impossible to create further atoms.
Here we prove the stronger statement that the one-site marginals of any stationary distribution are absolutely continuous with respect to Lebesgue measure on $(0,1)$. \begin{theorem}\label{abscon} Let $\nu$ be a stationary distribution for Zhang's model on $N$ sites. Restricted to $(0,1)$, every one-site marginal of $\nu$ is absolutely continuous with respect to Lebesgue measure. \end{theorem} \begin{proof} Let $A \subset (0,1)$ be such that $\lambda(A) = 0$, where $\lambda$ denotes Lebesgue measure. We pick a starting configuration $\eta$ according to $\nu$. We define a stopping time $\tau$ as the first time $t$ such that all non-zero energies $\eta_i(t)$ contain a nonzero contribution of at least one of the added amounts $U_1, U_2,\ldots, U_t$. We then write, for an arbitrary site $j$ with nonzero energy, \begin{equation} \label{hallo} {\mathbb P}_{\nu}(\eta_j(t)\in A) \leq {\mathbb P}_{\nu} (\eta_j(t) \in A, \tau <t) + {\mathbb P}_{\nu}(t \leq \tau). \end{equation} The second term at the right hand side tends to 0 as $t \to \infty$ by part 2 of Proposition \ref{belprop}. We claim that the first term at the right hand side is equal to zero. To this end, we first observe that $\eta_j(t)$ is built up of fractions of the initial energies $\eta_i(0)$ and of the additions $U_1, U_2,\ldots, U_t$. These fractions are random variables themselves, and we can bound this term by \begin{equation} \label{anne} {\mathbb P}_{\nu} \left(\sum_{i=1}^N Z_i \eta_i(0) + \sum_{s=1}^t Y_s U_s \in A, \sum_{s=1}^t Y_s >0\right), \end{equation} where $Z_i$ represents the (random) fraction of $\eta_i(0)$ in $\eta_j(t)$, and $Y_s$ represents the (random) fraction of $U_s$ in $\eta_j(t)$. We clearly have that the $U_s$ are all independent of each other and of $\eta_i(0)$ for all $i$. However, the $U_s$ are not necessarily independent of the $Z_i$ and the $Y_s$, since the numerical value of the $U_s$ affects the relevant fractions. Also, we know from the analysis in the previous subsection that the $Z_i$ and $Y_s$ can only take values in a countable set. Summing over all elements in this set, we rewrite (\ref{anne}) as $$ \sum_{z_i, y_s; \sum_s y_s > 0}{\mathbb P}_{\nu}\left(\sum_{i=1}^N z_i\eta_i(0) + \sum_{s=1}^t y_s U_s \in A, Z_i=z_i, Y_s=y_s\right) $$ which is at most $$ \sum_{z_i, y_s; \sum_s y_s > 0}{\mathbb P}_{\nu} \left(\sum_{i=1}^N z_i\eta_i(0) + \sum_{s=1}^t y_s U_s \in A\right), $$ which, by the independence of the $U_s$ and the $\eta_i(0)$, is equal to $$ \sum_{z_i, y_s; \sum_s y_s > 0} \int {\mathbb P}_{\nu} \left(\sum_{i=1}^N z_i x_i + \sum_{s=1}^t y_sU_s\in A\right) d \nu(x_1,\ldots, x_N). $$ Since $\sum_{s=1}^t y_s >0$, since the $U_s$ are independent uniform random variables, and since by assumption $\lambda (A)=0$, the probabilities inside the integral are zero. Since the left hand side of (\ref{hallo}) is equal to $\nu_j(A)$ for all $t$, we now take the limit $t \to \infty$ on both sides, and we conclude that $\nu_j(A)=0$. \end{proof} \begin{remark} {\rm The same proof shows that for every stationary measure $\nu$, and for every $i_1,\ldots,i_k\in \{ 1,\ldots,N\}$, conditional on the sites $i_1,\ldots,i_k$ being nonempty, the joint distribution of $\eta_{i_1},\ldots,\eta_{i_k}$ under $\nu$ is absolutely continuous with respect to Lebesgue measure on $(0,1)^k$.} \end{remark} \section{The $(1,[a,b])$-model} \label{onesitesection} In this section we consider the simplest version of Zhang's model: the $(1,[a,b])$-model. In words: there is only one site, and we add amounts of energy that are uniformly distributed on the interval $[a,b]$, with $0 \leq a < b\leq 1$.
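\medskip\noindent This model is easy to simulate directly. The following minimal sketch (in Python; the seed, the parameters and all identifiers are our own illustrative choices) implements the dynamics, which reduce to $\eta \mapsto (\eta+U)\,{\large \bf{1}}_{\eta+U<1}$: since the single site has no neighbours, a toppling loses all energy. The empirical distribution function of the recorded trace can be compared with the explicit stationary distribution derived below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def trace_one_site(a, b, steps):
    # (1,[a,b])-model: add U ~ Unif[a,b]; if the energy reaches 1,
    # the site topples and, having no neighbours, loses everything.
    eta, out = 0.0, np.empty(steps)
    for t in range(steps):
        eta += rng.uniform(a, b)
        if eta >= 1.0:
            eta = 0.0
        out[t] = eta
    return out

tr = trace_one_site(0.0, 0.5, 200_000)
print("empirical atom at 0:", np.mean(tr == 0.0))
print("empirical F(1/4)   :", np.mean(tr <= 0.25))
\end{verbatim}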
\subsection{Uniqueness of the stationary distribution} Before turning to the particular case $a=0$, we prove uniqueness of the stationary distribution for all $[a,b] \subseteq [0,1]$. \begin{theorem} (a) The $(1,[a,b])$ model has a unique stationary distribution $\rho=\rho^{ab}$. For every initial distribution ${\mathbb P}_\eta$ on $\Omega_1$, we have time-average total variation convergence to $\rho$, i.e., $$ \lim_{t \to \infty} \sup_{A\subset \Omega_1} \left|\frac 1t \sum_{s=0}^t {\mathbb P}_{\eta} \left(\eta(s) \in A\right) - \rho(A)\right| = 0. $$ (b) In addition, if there exists no integer $m > 1$ such that $[a,b]\subseteq [\frac 1m, \frac1{m-1}]$, (hence in particular if $a=0$), then we have convergence in total variation to $\rho$ for every initial distribution ${\mathbb P}_\eta$ on $\Omega_1$, i.e., $$ \lim_{t \to \infty} \sup_{A\subset \Omega_1} \left|{\mathbb P}_{\eta} (\eta(t) \in A) - \rho(A)\right| = 0. $$ \label{rho} \end{theorem} \begin{proof} We prove this theorem by constructing a coupling. The two processes to be coupled have initial configurations $\eta^1$ and $\eta^2$, with $\eta^1$,$\eta^2 \in \Omega_1$. We denote by $\eta^1(t)$, $\eta^2(t)$ two independent copies of the process starting from $\eta^1$ and $\eta^2$ respectively. The corresponding independent additions at each time step are denoted by $U^1_t$ and $U^2_t$, respectively. Let $T_1 = \min\{t: \eta^1(t) = 0\}$ and $T_2 = \min\{t: \eta^2(t)=0\}$. Suppose (without loss of generality) that $T_2 \geq T_1$. We define a shift-coupling (\cite{thorisson}, Chapter 5) as follows: $$ \begin{array}{ll} \hat{\eta}^1(t) & = ~ \eta^1(t) \hspace{2.6cm} \mbox{for all} ~t,\\ \hat{\eta}^2(t) & = \left\{ \begin{array}{ll} \eta^2(t) & \mbox{for} ~ t < T_2,\\ \eta^1(t - (T_2-T_1)) & \mbox{for} ~t \geq T_2.\\ \end{array} \right.\\ \end{array} $$ Defining $T = \min\{t: \eta^1(t) = \eta^2(t)=0\}$, we also define the exact coupling $$ \begin{array}{ll} \hat{\eta}^1(t) & = ~ \eta^1(t) \hspace{0.7cm} \mbox{for all} ~t,\\ \hat{\eta}^3(t) & = \left\{ \begin{array}{ll} \eta^2(t) & \mbox{for} ~ t < T,\\ \eta^1(t) & \mbox{for} ~t \geq T.\\ \end{array} \right.\\ \end{array} $$ Since the process is Markov, both couplings have the correct distribution. We write $\tilde{{\mathbb P}} = {\mathbb P}_{\eta_1}\times{\mathbb P}_{\eta_2}$. Since $\tilde{{\mathbb P}}(T_2<\infty) = \tilde{{\mathbb P}}(T_1<\infty) = 1$, the shift-coupling is always successful, and (a) follows. To investigate whether $\eta^1(t) = \eta^2(t)=0$ occurs infinitely often $\tilde{{\mathbb P}}$-a.s., we define $\mathcal{N} = \{n: (n-1)a <1 \wedge nb>1\}$; this is the set of possible numbers of time steps between successive events $\eta^1(t)=0$. In words, an $n \in \mathcal{N}$ is such that, starting from $\eta^1 = 0$, it is possible that in $n-1$ steps we do not yet reach energy 1, but in $n$ steps we do. To give an example, if $a \geq 1/2$, then $\mathcal{N} = \{2\}$. If the gcd of $\mathcal{N}$ is 1 (this is in particular the case if $a=0$), then the processes $\{t:\eta^1(t) = 0\}$ and $\{t:\eta^2(t) = 0\}$ are independent aperiodic renewal processes, and it follows that $\eta^1(t) = \eta^2(t) = 0$ happens infinitely often $\tilde{{\mathbb P}}$-a.s. As we have seen, for $a>0$, the gcd of $\mathcal{N}$ need not be 1. In fact, we can see from the definition of $\mathcal{N}$ that this is the case if (and only if) there is an integer $m > 1$ such that $[a,b]\subseteq [\frac 1m, \frac1{m-1}]$. Then $\mathcal{N} = \{m\}$. 
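\medskip\noindent The set $\mathcal{N}$ is easy to enumerate explicitly; the following small fragment (Python; all names in it are ours, chosen for illustration only) computes $\mathcal{N}$ and its gcd for a few choices of $[a,b]$. The pair $(0.4, 0.45)$, for which $[a,b]\subset[\frac 13,\frac 12]$, gives gcd $3$ and hence a periodic renewal process.
\begin{verbatim}
from math import gcd
from functools import reduce

def renewal_counts(a, b, n_max=10_000):
    # N = {n : (n-1)a < 1 and nb > 1}: the possible numbers of
    # additions between successive visits of the energy to 0.
    return [n for n in range(1, n_max) if (n - 1) * a < 1 and n * b > 1]

for a, b in [(0.0, 1.0), (0.5, 1.0), (0.4, 0.45)]:
    ns = renewal_counts(a, b)
    print((a, b), "N starts with", ns[:4], " gcd:", reduce(gcd, ns))
\end{verbatim}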
For such values of $a$ and $b$, the processes $\{t:\eta^1(t) = 0\}$ and $\{t:\eta^2(t) = 0\}$ are periodic with period $m$, so that we do not have a successful exact coupling. \end{proof} \subsection{The stationary distribution of the $(1,[0,b])$-model} We write $\rho^b$ for the stationary measure $\rho^{0b}$ of the $(1,[0,b])$-model and $F^{b}$ for the distribution function of the amount of energy at stationarity, that is, $$ F^b(h) = \rho^b(\eta: 0 \leq \eta \leq h). $$ We prove the following explicit expression for $F^b(h)$. \begin{theorem} \label{onesite} (a) The distribution function of the energy in the $(1, [0,b])$-model at stationarity is given by \begin{equation} \label{Fthm} F^b(h) = \left \{ \begin{array}{ll} 0 & \mbox{for} ~h<0, \\ F^{b}(0) > 0 & \mbox{for} ~h=0, \\ F^b(0)\sum_{\kappa = 0}^{m_h} \frac{(-1)^{\kappa}}{b^{\kappa} \kappa!}(h- \kappa b)^{\kappa} e^{\frac{h- \kappa b}{b}} & \mbox{for} ~0 < h \leq 1, \\ 1 & \mbox{for} ~h>1, \end{array} \right. \end{equation} where $m_h ={\lceil\frac hb \rceil-1}$ and where \[ F^b(0) = \frac{1}{\sum_{\kappa = 0}^{m_1} \frac{(-1)^{\kappa}}{b^{\kappa} \kappa!}(1- \kappa b)^{\kappa} e^{\frac{1- \kappa b}{b}}}, \qquad m_1 = \left\lceil \frac 1b \right\rceil -1, \] follows from the identity $F^b(1) =1$. \medskip\noindent (b) For $h \in [0,1]$ we have $$ \lim_{b \rightarrow 0} F^b(h) = h. $$ \end{theorem} \medskip\noindent We remark that although in (a) we have a more or less explicit expression for $F^b(h)$, the convergence in (b) is not proved analytically, but rather probabilistically. \medskip\noindent {\it Proof of Theorem \ref{onesite}, part (a).} Observe that the process for one site and $a=0$ is defined as \begin{equation} \eta(t+1) = \left(\eta(t) + U_{t+1}\right)~ {\large \bf{1}}_{\eta(t) + U_{t+1} <1}. \label{tijdstap} \end{equation} We define $F_t^b(h)= {\mathbb P}(\eta(t) \leq h)$, and derive an expression for $F_{t+1}^b(h)$ in terms of $F_t^b(h)$. In the stationary situation, these two functions are equal. We deduce from (\ref{tijdstap}) that for $0 \leq h \leq 1$, \begin{equation} F_{t+1}^b(h) = {\mathbb P}(\eta(t)+U_{t+1} \leq h) + {\mathbb P}(\eta(t) + U_{t+1} \geq 1). \label{C} \end{equation} We compute for $0 \leq h \leq b,$ \begin{eqnarray} \nonumber {\mathbb P}(\eta(t)+ U_{t+1} \leq h) & = & {\mathbb P}(\eta(t)\leq h-U_{t+1})\\ \nonumber & = & \int_0^h \frac 1b {\mathbb P}(\eta(t)\leq h-u) \, du\\ & = &\label{C2} \int_0^h \frac{1}{b} F_t^b(h-u) \, du, \end{eqnarray} and likewise for $b \leq h \leq 1$ we find \begin{equation} {\mathbb P}(\eta(t) + U_{t+1} \leq h) = \int_{0}^{b} \frac{1}{b} F_t^b(h-u) \, du. \label{C3} \end{equation} Finally, \begin{eqnarray} \nonumber {\mathbb P}(\eta(t)+U_{t+1} \geq 1) & = & \int_0^b \frac{1}{b} (F_t^b(1)-F_t^b(1-u)) \, du \\ & = & \label{C4} \int_{0}^{b} \frac{1-F^b_t(1-u)}{b} \, du = F_{t+1}^b(0). \end{eqnarray} Putting (\ref{C}), (\ref{C2}), (\ref{C3}) and (\ref{C4}) together leads to the conclusion that the stationary distribution $F^b(h)$ satisfies \begin{equation} \label{vwdeF} F^b(h)= \left\{ \begin{array}{ll} \int_{0}^{h} \frac{F^b(h-u)}{b} \, du + F^b(0) & \mbox{if $0 \leq h \leq b$,} \\ \int_{0}^{b} \frac{F^b(h-u)}{b} \, du + F^b(0) & \mbox{if $b \leq h \leq 1$}. \end{array} \right. \end{equation} Furthermore, since $F^b(h)$ is a distribution function, $F^b(h)=0$ for $h <0$ and $F^b(1) =1$. We can rewrite equation (\ref{vwdeF}) as a differential delay equation. We write $f^b(h) =\frac{d F^b(h)}{dh}$ for the density corresponding to $F^b$ on $0 < h <1$; this density exists according to Theorem \ref{abscon}.
We first consider the case $0<h \leq b$, in which case $m_h=0$. We differentiate (\ref{vwdeF}) twice, to get $$ \frac{d f^{b}(h)}{dh} = \frac{1}{b} f^b(h), $$ which leads to the conclusion that $F^b(h) = F^b(0)e^{\frac{h}{b}}$, consistent with (\ref{Fthm}) for $0 \leq h \leq b$. Now we consider the case $b \leq h \leq 1$. We differentiate (\ref{vwdeF}) on both sides to get \begin{equation} \label{vwdeF2} f^b(h) = \frac 1b(F^b(h)-F^b(h-b)). \end{equation} At this point, we can conclude that the solution is unique and could in principle be found using the method of steps. However, since we already have the candidate solution given in Theorem \ref{onesite}, we only need to check that it indeed satisfies equation (\ref{vwdeF}). We check that for the derivative $f^b$ of $F^b$ as defined in (\ref{Fthm}), for $b \leq h \leq 1$ (with the convention that terms containing $\frac 1{(-1)!}$ vanish), \begin{eqnarray*} f^b(h) & = & F^b(0)\sum_{\kappa = 0}^{m_h} (-\frac 1b)^{\kappa} \frac{1}{\kappa!}\left( \kappa(h- \kappa b)^{\kappa-1} e^{\frac{h- \kappa b}{b}}+(h- \kappa b)^\kappa \frac {1}{b} e^{\frac{h- \kappa b}{b}}\right)\\ & = & - \frac{F^b(0)}{b}\sum_{\kappa = 0}^{m_h} (-\frac 1b)^{\kappa-1} \frac{1}{(\kappa-1)!}(h- \kappa b)^{\kappa-1} e^{\frac{h- \kappa b}{b}}\\ & & + \frac{F^b(0)}{b}\sum_{\kappa = 0}^{m_h} (-\frac 1b)^{\kappa} \frac{1}{\kappa!} (h- \kappa b)^{\kappa} e^{\frac{h- \kappa b}{b}}, \end{eqnarray*} whereas $$ \frac {F^b(h)}{b} =\frac {F^b(0)}{b}\sum_{\kappa = 0}^{m_h} (-\frac 1b)^{\kappa} \frac{1}{\kappa!} (h- \kappa b)^{\kappa} e^{\frac{h- \kappa b}{b}} $$ and \begin{eqnarray*} -\frac{F^b(h-b)}{b} & = & -\frac{F^b(0)}{b} \sum_{\kappa = 0}^{m_h-1} (-\frac 1b)^{\kappa} \frac{1}{\kappa!} (h- (\kappa+1) b)^{\kappa} e^{\frac{h- (\kappa+1) b}{b}} \\ & = & -\frac{F^b(0)}{b}\sum_{\kappa = 0}^{m_h} (-\frac 1b)^{\kappa-1} \frac{1}{(\kappa-1)!}(h- \kappa b)^{\kappa-1} e^{\frac{h- \kappa b}{b}}, \end{eqnarray*} which leads to (\ref{vwdeF2}) as required. \qed \medskip\noindent We remark that the probability density function $f^b(h)$ has a jump discontinuity at $h=b$. Figures \ref{biseenhalf} and \ref{biseentiende} show two examples of $f^b(h)$. \begin{figure}[ht] \centerline{\includegraphics[width=7cm]{aiseenhalf3}} \caption{$f^b(h)$ for $b = \frac 12$. Note the discontinuity at $h = \frac 12$.} \label{biseenhalf} \end{figure} \begin{figure}[ht] \centerline{\includegraphics[width=7cm]{aiseentiende2}} \caption{$f^b(h)$ for $b = \frac 1{10}$. This figure illustrates that for small $b$, $f^b(h)$ tends to the uniform density.} \label{biseentiende} \end{figure} \medskip \noindent {\it Proof of Theorem \ref{onesite}, part (b).} Without loss of generality, we start at time $0$ with zero energy. To avoid confusion and to express the dependence of the model on the parameter $b$, we will write $\eta^b(t)$ for the random state of the $(1,[0,b])$-model at time $t$, with $\eta^b(0)=0$. In this proof, it is helpful to view the process as an alternating renewal process. To this end, we define the following random times, with $i = 1,2,\ldots$: $$ T_i^b(0) := \min\{n>T_{i-1}^b(0): \eta^b(n)=0\}, $$ with $T_1^b(0)=0$. We also define $$ T_i^b(h) := \max\{n \in (T_{i}^b(0),T^b_{i+1}(0)):\eta^b(s) \leq h, \forall s \in \{T_{i}^b(0)+1,\ldots, n\}\}. $$ We fix $h$. The process alternates between states where $\eta^b(t)\leq h$ and $\eta^b(t) >h$, such that renewal events occur at times $T_i^b(0)$. All intervals between successive $T_i^b(0)$ are i.i.d., as are all intervals of the form $T_i^b(h)-T_i^b(0)$.
Thus, the requirements of an alternating renewal process are met, and we can conclude that \begin{equation} F^b(h) = \lim_{t\to \infty} {\mathbb P}_{\eta^b}(\eta^b(t)\leq h) = \frac {{\mathbb E}(T_1^b(h))+1}{{\mathbb E}(T_2^b(0))}, \label{altrenewal} \end{equation} which is valid for all $b\leq 1$. To compute the expectations in (\ref{altrenewal}) in the limit $b \to 0$, we use another process: let $U_1, U_2, \ldots$ be independent uniform random variables on the interval $[0,1]$ and write (as before) $S_n^1=U_1+\cdots + U_n$. We define $$ N(s)= \max \{n \in \mathbb{N}: S_n^1 \leq s \}. $$ The process $\{N(s):s\geq0\}$ is a renewal process and by the elementary renewal theorem, \begin{equation} \label{ert} \lim_{s \rightarrow \infty} \frac{{\mathbb E}(N(s))}{s} = \frac{1}{{\mathbb E}(U_1)}=2. \end{equation} Observe that $T_1^b(h)$ and $N(\frac{h}{b})$ have the same distribution; this is just a rescaling. Likewise, $T_2^b(0)$ has the same distribution as $N(\frac 1b)+1$. Hence, \begin{equation} \label{Fb} F^b(h) = \frac{\mathbb{E}\left( N \left(\frac{h}{b} \right) \right)+1}{\mathbb{E}\left( N \left(\frac{1}{b} \right) \right)+1}, \end{equation} and by (\ref{ert}) we find that $$ \lim_{b \rightarrow 0} \frac{\mathbb{E}\left( N \left(\frac{h}{b} \right) \right)+1}{ \ \frac{2h}{b}} =1. $$ Combining this with (\ref{Fb}) we conclude that \begin{eqnarray*} \lim_{b \rightarrow 0} F^b(h) & = & \lim_{b \rightarrow 0} h \cdot \frac{\left(\mathbb{E}\left( N \left(\frac{h}{b} \right) \right) +1 \right) \cdot \frac{b}{2h}} {\left(\mathbb{E} \left( N \left(\frac{1}{b} \right) \right) +1 \right) \cdot \frac b2} = h.\\ \end{eqnarray*} \qed \section{The $(N,[a,b])$-model with $N \geq 2$ and $a \geq \frac 12$} \label{halftoteensection} \subsection{Uniqueness of stationary distribution} In the course of the process of Zhang's model, the energies of all sites can be randomly augmented through additions, and randomly redistributed among other sites through avalanches. Thus at time $t$, every site contains a linear combination of all additions up to time $t$ and of the energies at time 0. In a series of lemmas we derive very detailed properties of these combinations in the case $a \geq 1/2$. These properties are crucial to prove the following result. \begin{theorem}\label{uniquethm} The $(N,[a,b])$-model with $a \geq \frac 12$ has a unique stationary distribution $\mu = \mu^{ab}$. For every initial distribution $\nu$ on $\Omega_N$, ${\mathbb P}_\nu$ converges exponentially fast in total variation to $\mu$. \label{mu} \end{theorem} We have demonstrated for the case $a \geq \frac 12$ that after a finite (random) time, we only encounter regular configurations (Proposition \ref{compare2}). By Lemma \ref{comparelemma}, if $\eta(t-1)$ is regular, then the knowledge of $\ensuremath{\mathcal{R}}(\eta(t-1))$ and $X_t$ suffices to know the number of topplings of each site at time $t$. Thus, also the factors $F_{ij}$ in Proposition \ref{factortjes} are functions of $\ensuremath{\mathcal{R}}(\eta(t-1))$ and $X_t$ only. Using this observation, we prove the following. \begin{lemma}\label{bombom} Let $a\geq \frac12$. Suppose at some (possibly random) time $\tau$ we have a configuration $\xi(\tau)$ with no anomalous sites.
Then for all $j = 1, \ldots, N$ and for $t \geq \tau$, we can write \begin{equation}\label{karamba} \xi_j(t) = \sum_{\theta=\tau+1}^{t}A_{\theta j}(t)U_{\theta} + \sum_{m=1}^N B_{mj}(t)\xi_m(\tau) \end{equation} in such a way that the coefficients in \eqref{karamba} satisfy $$ A_{\theta j}(t) = A_{\theta j}(\ensuremath{\mathcal{R}}(\xi(\tau)),X_{\tau+1},\ldots,X_{t}) $$ and $$ B_{mj}(t) = B_{mj}(\ensuremath{\mathcal{R}}(\xi(\tau)),X_{\tau+1},\ldots,X_{t}), $$ and such that for every $j$ and every $t \geq \tau$, $$ \sum_{\theta=\tau+1}^{t}A_{\theta j}(t)+\sum_{m=1}^N B_{mj}(t) = \ensuremath{\mathcal{R}}(\xi(t))_j = {\large \bf{1}}_{\xi_j(t) \neq 0}. $$ \label{plakjesalgemeen} \end{lemma} \begin{remark} {\em Notice that, in the special case that $\tau$ is a stopping time, $A_{\theta j}(t)$ is independent of the amounts added after time $\tau$, i.e., $A_{\theta j}(t)$ and $\{ U_\theta, \theta \geq \tau+1\}$ are independent. We will make use of this observation in Section \ref{mainresultssection}.} \label{plakjesspeciaal} \end{remark} \begin{proof} We use induction. We start at $t=\tau$, where we choose $B_{mj}(\tau)=\ensuremath{\mathcal{R}}(\xi(\tau))_j~ {\large \bf{1}}_{m=j}$. We then have $\sum_{m=1}^N B_{mj}(\tau) = \ensuremath{\mathcal{R}}(\xi(\tau))_j$, so that at $t=\tau$ the statement in the lemma is true. We next show that if the statement in the lemma is true at time $t > \tau$, then it is also true at time $t+1$. At time $t$ we have for every $j = 1, \ldots, N$, $$ \xi_j(t) = \sum_{\theta=\tau+1}^{t}A_{\theta j}(t)U_{\theta} + \sum_{m=1}^N B_{mj}(t)\xi_m(\tau), $$ with $$ \sum_{\theta=\tau+1}^{t}A_{\theta j}(t)+\sum_{m=1}^N B_{mj}(t) = \ensuremath{\mathcal{R}}(\xi(t))_j, $$ where all $A_{\theta j}(t)$ and $B_{mj}(t)$ are determined by $\ensuremath{\mathcal{R}}(\xi(\tau)),X_{\tau+1},\ldots,X_{t}$, so that $\ensuremath{\mathcal{R}}(\xi(t))$ is also determined by $\ensuremath{\mathcal{R}}(\xi(\tau)),X_{\tau+1},\ldots,X_{t}$. We first discuss the case where we added to a full site, so that an avalanche is started. In that case, the knowledge of $\ensuremath{\mathcal{R}}(\xi(\tau)),X_{\tau+1},\ldots,X_{t+1}$ determines the sets $\mathcal{C}(t+1)$, $\mathcal{T}(t+1)$ and the factors $F_{ij}$ from Proposition \ref{factortjes}. (This last fact uses that $a \geq 1/2$.) We write, denoting $X_{t+1} = x$, \begin{eqnarray*} \xi_j(t+1) & = & \sum_{i\in\mathcal{T}(t+1)} F_{ij}\xi_i(t) + F_{xj} U_{t+1}\\ & = & \sum_{i\in\mathcal{T}(t+1)} F_{ij}\left[\sum_{\theta=\tau+1}^{t}A_{\theta i}(t)U_{\theta} + \sum_{m=1}^N B_{mi}(t)\xi_m(\tau)\right] + F_{xj} U_{t+1}. \end{eqnarray*} Thus we can identify \begin{equation} A_{\theta j}(t+1) = \sum_{i\in\mathcal{T}(t+1)} F_{ij} A_{\theta i}(t), \label{aatjes} \end{equation} $$ B_{mj}(t+1) = \sum_{i\in\mathcal{T}(t+1)} F_{ij} B_{mi}(t), $$ and $$ A_{t+1,j}(t+1) = F_{xj}, $$ so that indeed all $A_{\theta j}(t+1)$ and $B_{mj}(t+1)$ are functions of $\ensuremath{\mathcal{R}}(\xi(\tau)),X_{\tau+1},\ldots,X_{t+1}$ only. Furthermore, \begin{eqnarray*} \sum_{\theta=\tau+1}^{t+1}A_{\theta j}(t+1) + \sum_{m=1}^N B_{mj}(t+1) & = & \sum_{i\in\mathcal{T}(t+1)} F_{ij} \left[\sum_{\theta=\tau+1}^{t}A_{\theta i}(t) + \sum_{m=1}^N B_{mi}(t)\right] + F_{xj}\\ & = & \sum_{i\in\mathcal{T}(t+1)} F_{ij} + F_{xj} = \ensuremath{\mathcal{R}}(\xi(t+1))_j, \end{eqnarray*} where we used that all sites that toppled must have been full, and therefore had reduced value 1.
If no avalanche was started, then the only site that changed is the addition site $x$, and it must have been empty at time $t$. Therefore, we have for all $\theta<t+1$, $A_{\theta x}(t+1) = A_{\theta x}(t) = 0$, for all $m$, $B_{mx}(t+1) = B_{mx}(t) = 0$ and $A_{t+1,x}(t+1) = 1$, so that the above conclusion is the same. \end{proof} \medskip \noindent For every $\theta$, we have $\sum_{i=1}^{N} A_{\theta i}(t) \leq 1$, because the addition $U_\theta$ gets redistributed by avalanches, but some part disappears through topplings of boundary sites. One might expect that, as an addition gets redistributed over and over again, each time losing some part at the boundary, the entire addition eventually disappears, and similarly for the initial energies $\xi_m(\tau)$. Indeed, we have the following results about the behaviour of $A_{\theta i}(t)$ for fixed $\theta$, and about the behaviour of $B_{mi}(t)$ for fixed $m$. \begin{lemma} For every $\theta$, and for $t>\theta$, \begin{enumerate} \item $\max_{1\leq i\leq N} A_{\theta i}(t)$ and $\max_{1\leq i\leq N} B_{mi}(t)$ are both non-increasing in $t$. \item For all $\theta, m$ and $i$, $\lim_{t \to \infty} A_{\theta i}(t) = 0$, and $\lim_{t \to \infty} B_{mi}(t) = 0$. \end{enumerate} \label{uitsmeren} \end{lemma} \begin{proof} We can assume that $t>\theta$. The proofs for $A_{\theta j}(t)$ and for $B_{mj}(t)$ proceed along the same lines, so we will only discuss $A_{\theta j}(t)$. We will show that for every $j$, $A_{\theta j}(t+1) \leq \max_i A_{\theta i}(t)$, by considering one fixed $j$. If the energy of site $j$ did not change in an avalanche at time $t+1$, then $$ A_{\theta j}(t+1) = A_{\theta j}(t) \leq \max_i A_{\theta i}(t). $$ If site $j$ became empty in the avalanche, then $$ A_{\theta j}(t+1) = 0 < \max_i A_{\theta i}(t). $$ For the third possibility - the energy of site $j$ changed to a nonzero value in an avalanche at time $t+1$ - we use \eqref{aatjes}, and estimate $$ A_{\theta j}(t+1) = \sum_{i\in\mathcal{T}(t+1)} F_{ij} A_{\theta i}(t) \leq \max_i A_{\theta i}(t) \sum_{i\in\mathcal{T}(t+1)} F_{ij}. $$ By Proposition \ref{factortjes} parts (1) and (2), $\sum_{i\in\mathcal{T}(t+1)} F_{ij} \leq 1-2^{-\lceil3N/2\rceil}$, so that in this third case, $$ A_{\theta j}(t+1) \leq (1-2^{-\lceil3N/2\rceil})\max_i A_{\theta i}(t) < \max_i A_{\theta i}(t). $$ Thus, it follows that $\max_i A_{\theta i}(t+1)$ can never be larger than $\max_i A_{\theta i}(t)$. This proves part (1). It also follows that when between $t$ and $t+s$ all sites have changed at least once, we are sure that $\max_i A_{\theta i}(t+s) \leq (1-2^{-\lceil3N/2\rceil})\max_i A_{\theta i}(t)$. We next derive an upper bound for the time during which one of the sites can remain unchanged. Suppose that at some finite time $t$ at which $\eta(t)$ is regular (see Proposition \ref{compare2}), we try to never change some sites again. If all sites are full, then this is impossible: the next avalanche will change all sites. If there is an empty site $x$, then the next addition changes either the sites $1,\ldots, x$, or the sites $x,\ldots, N$. In the first case, after the avalanche we have a new empty site $x'<x$. If we keep trying not to change the sites $x,\ldots, N$, we have to keep making additions that move the empty site closer to the boundary (site 1). It will therefore reach the boundary in at most $N-1$ time steps.
Then we have no choice but to change all sites: we can either add to the empty site and obtain the full configuration, so that with the next addition all sites will change, or add to any other site, which immediately changes all sites. This argument shows that the largest possible number of time steps between changing all sites is $N+1$. We therefore have \begin{equation}\label{impoa} \max_i A_{\theta i}(t) < (1-2^{-\lceil3N/2\rceil})^{\lfloor \frac{t-\theta}{N+1}\rfloor}, \end{equation} so that $$ \lim_{t \to \infty}\max_i A_{\theta i}(t) \leq \lim_{t \to \infty} (1-2^{-\lceil3N/2\rceil})^{\lfloor \frac{t-\theta}{N+1}\rfloor} = 0. $$ \end{proof} \medskip \noindent With the above results, we can now prove uniqueness of the stationary distribution. \medskip\noindent {\it Proof of Theorem \ref{mu}.} By compactness, there is at least one stationary measure $\mu$. To prove the theorem, we will show that there is a coupling $(\hat{\eta}^1(t),\hat{\eta}^2(t))_0^{\infty}$ with probability law $\hat{{\mathbb P}}_{(\eta^1, \eta^2)}$ for two realizations of the $(N,[a,b])$-model with $a \geq\frac 12$, such that for all $\epsilon>0$, and for all starting configurations $\eta^1$ and $\eta^2$, for $t \to \infty$ we have \begin{equation} \hat{{\mathbb P}}_{(\eta^1, \eta^2)}\left(\max_j|\hat{\eta}_j^1(t) - \hat{\eta}_j^2(t)| > \epsilon\right) = O(e^{-\alpha_N t}), \label{halftoteencoupling} \end{equation} with $\alpha_N > 0$. From (\ref{halftoteencoupling}), it follows that the Wasserstein distance (\cite{dudley}, Chapter 11.8) between the time-$t$ distributions of the processes started from any two measures $\mu_1$ and $\mu_2$ on $\Omega_N$ vanishes exponentially fast as $t \to \infty$. If we choose $\eta^1$ distributed according to the stationary measure $\mu$, it follows that the process started from every other measure on $\Omega_N$ converges exponentially fast to $\mu$. In particular, it follows that $\mu$ is unique. As in the proof of Theorem \ref{rho}, the two processes to be coupled have initial configurations $\eta^1$ and $\eta^2$, with $\eta^1$,$\eta^2 \in \Omega_N$. The independent additions at each time step are denoted by $U^1_t$ and $U^2_t$, the addition sites by $X^1_t$ and $X^2_t$. We define the coupling as follows: $$ \begin{array}{ll} \hat{\eta}^1(t) & = \eta^1(t) \hspace{2.8cm} \mbox{for all $t$}\\ \hat{\eta}^2(t) & = \left\{ \begin{array}{ll} \eta^2(t) & \mbox{for} ~ t \leq T,\\ \ensuremath{\mathcal{A}}_{U^1_t,X^1_t}(\hat{\eta}^2(t-1)) & \mbox{for} ~t > T,\\ \end{array} \right.\\ \end{array} $$ where $T = \min\{t>T':\ensuremath{\mathcal{R}}(\eta^1(t))=\ensuremath{\mathcal{R}}(\eta^2(t))\}$, and $T'$ is the first time that both $\eta^1(t)$ and $\eta^2(t)$ are regular. In Proposition \ref{compare2} it was proven that $T' \leq N(N-1)$, uniformly in $\eta$. In words, this coupling is such that from the first time on where the reductions of $\hat{\eta}^1(t)$ and $\hat{\eta}^2(t)$ are the same, we make additions to both copies in the same manner, i.e., we add the same amounts at the same location to both copies. Then, by Lemma \ref{comparelemma}, in both copies the same avalanches will occur. We will then use Lemma \ref{uitsmeren} to show that, from time $T$ on, the difference between $\hat{\eta}^1(t)$ and $\hat{\eta}^2(t)$ vanishes exponentially fast. First we show that $\hat{{\mathbb P}}_{(\eta^1, \eta^2)}(T>t)$ is exponentially decreasing in $t$. There are $N+1$ possible reduced regular configurations. Once $\hat{\eta}^1(t)$ is regular, the addition site $X^1_{t+1}$ uniquely determines the new reduced regular configuration $\ensuremath{\mathcal{R}}(\eta^1(t+1))$.
This new reduced configuration cannot be the same as $\ensuremath{\mathcal{R}}(\eta^1(t))$. Thus, there are $N$ equally likely possibilities for $\ensuremath{\mathcal{R}}(\eta^1(t+1))$, and likewise for $\ensuremath{\mathcal{R}}(\eta^2(t+1))$. If $\ensuremath{\mathcal{R}}(\eta^1(t)) \neq \ensuremath{\mathcal{R}}(\eta^2(t))$, then one of the possibilities for $\ensuremath{\mathcal{R}}(\eta^1(t+1))$ is the same as $\ensuremath{\mathcal{R}}(\eta^2(t))$, so that there are $N-1$ possible reduced configurations that can be reached both from $\eta^1(t)$ and $\eta^2(t)$. The probability that $\ensuremath{\mathcal{R}}(\eta^1(t+1))$ is one of these is $\frac{N-1}N$, and the probability that $\ensuremath{\mathcal{R}}(\eta^2(t+1))$ is the same is $\frac 1N$. Therefore, $T-T'$ is geometrically distributed, with success parameter $p_N = \frac{N-1}{N^2}$. We now use Lemma \ref{plakjesalgemeen} with $\tau = T$. For $t>T$, we have in this case that $A^1_{\theta j}(t) = A^2_{\theta j}(t)$ and $B^1_{mj}(t) = B^2_{mj}(t)$, because from time $T$ on, in both processes the same avalanches occur. Also, for $t>T$, we have chosen $U^1_t = U^2_t$. Therefore, for $t>T$, $$ \hat{\eta}_j^1(t) - \hat{\eta}_j^2(t) = \sum_{m=1}^N B^1_{mj}(t)\left(\hat{\eta}^1_m(T) - \hat{\eta}^2_m(T)\right). $$ From \eqref{impoa} in the proof of Lemma \ref{uitsmeren} we know that $$B^1_{mj}(t) \leq (1- 2^{-\lceil 3N/2\rceil})^{\lfloor \frac{t-T}{N+1}\rfloor},$$ so that $$ \sum_{m=1}^N B^1_{mj}(t)\hat{\eta}^1_m(T) \leq N (1- 2^{-\lceil 3N/2\rceil})^{\lfloor \frac{t-T}{N+1}\rfloor}, $$ so that for $t>T$, we arrive at $$ \max_j|\hat{\eta}_j^1(t) - \hat{\eta}_j^2(t)| \leq 2N (1- 2^{-\lceil 3N/2\rceil})^{\frac{t-T}{N+1}-1}. $$ We now split $\hat{{\mathbb P}}_{(\eta^1, \eta^2)}(\max_j|\hat{\eta}_j^1(t) - \hat{\eta}_j^2(t)|>\epsilon)$ into two terms, by conditioning on $t < 2T$ and $t \geq 2T$ respectively. Both terms decrease exponentially in $t$: the first term because the probability of $t<2T$ is exponentially decreasing in $t$, and the second term because for $t \geq 2T$, $\max_j|\hat{\eta}_j^1(t) - \hat{\eta}_j^2(t)|$ itself is exponentially decreasing in $t$. \qed \medskip\noindent A comparison of the two terms ${\mathbb P}(t<2T)$ and $\max_j|\hat{\eta}_j^1(t) - \hat{\eta}_j^2(t)|$ yields that for $N$ large, the second term dominates. We find that for large $N$, $\alpha_N$ behaves as $\alpha_N = -\frac{\ln(1- 2^{-\lceil 3N/2\rceil})}{2(N+1)}$. We see that as $N$ increases, our bound on the speed of convergence decreases exponentially fast to zero. Furthermore, we remark that from the above proof it also follows that all stationary temporal correlations decay exponentially fast, i.e., there exists $\beta_N > 0$ such that for all square integrable functions $f(\eta)$ with $\int f^2d\mu=1$ and $\int fd\mu =0$, $$ {\mathbb E}_\mu\left[ f(\eta(0))f(\eta(t))\right] \leq e^{-\beta_N t}. $$ \subsection{Emergence of quasi-units in the infinite volume limit} \label{mainresultssection} In Proposition \ref{compare}, we already noticed a close similarity between the stationary distribution of Zhang's model with $a \geq 1/2$ and the abelian sandpile model. We found that the stationary distribution of the reduced Zhang's model, in which we label full sites as 1 and empty sites as 0, is equal to that of the abelian sandpile model (Proposition \ref{compare2}). In this section, we find that in the limit $N \to \infty$, the similarity is even stronger.
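\medskip\noindent Before making this precise, we offer a purely numerical illustration (a sketch, not part of any proof; the parameters, the seed and all identifiers in the following Python fragment are our own arbitrary choices). It simulates the $(N,[\frac 12,1])$-model, relaxing wave by wave as in the proof of Proposition \ref{factortjes}, and records the energy of the middle site; as $N$ grows, the nonzero energies are seen to concentrate around ${\mathbb E}(U) = \frac 34$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def add_and_relax(eta, a, b):
    # One time step: uniform addition site, Unif[a,b] amount, then
    # waves from the addition site until it is stable again.
    n = len(eta)
    x = rng.integers(n)
    eta[x] += rng.uniform(a, b)
    while eta[x] >= 1.0:
        e = eta[x]; eta[x] = 0.0          # site x topples once
        if x > 0: eta[x - 1] += e / 2
        if x < n - 1: eta[x + 1] += e / 2
        for d in (1, -1):                 # propagate outward
            j = x + d
            while 0 <= j < n and eta[j] >= 1.0:
                e = eta[j]; eta[j] = 0.0
                if j > 0: eta[j - 1] += e / 2
                if j < n - 1: eta[j + 1] += e / 2
                j += d

a, b = 0.5, 1.0
for N in (5, 20, 40):
    eta, vals = np.zeros(N), []
    for t in range(20_000):
        add_and_relax(eta, a, b)
        if t >= 5_000 and eta[N // 2] > 0:     # crude burn-in
            vals.append(eta[N // 2])
    print(N, "mean:", round(float(np.mean(vals)), 3),
             "sd:", round(float(np.std(vals)), 3))
\end{verbatim}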
We find emergence of Zhang's quasi-units in the following sense: as $N \to \infty$, all one-site marginals of the stationary distribution concentrate on a single, nonrandom value. We believe that the same is true for $a < 1/2$ (see Section \ref{expectedsection} for a related result), but our proof is not applicable in this case, since it heavily depends on Proposition \ref{compare2}. To state and prove our result, we introduce the notation $\mu_N$ for the stationary distribution for the model on $N$ sites, with expectation and variance ${\mathbb E}^N$ and ${\mathrm{Var}}^N$, respectively. \begin{theorem}\label{quasiunits} In the $(N,[a,b])$ model with $a\geq \frac 12$, for the unique stationary measure $\mu_N$ we have \begin{equation}\label{diraccon} \lim_{N\to\infty} \mu_N = \delta_{{\mathbb E}(U)} \end{equation} where $\delta_{{\mathbb E}(U)}$ denotes the Dirac measure concentrating on the (infinite-volume) constant configuration $\eta_i={\mathbb E} (U)$ for all $i\in\mathbb{N}$, and where the limit is in the sense of weak convergence of probability measures. \end{theorem} We will prove this theorem by showing that for $\eta$ distributed according to $\mu_N$, in the limit $N \to \infty$, for every sequence $1 \leq j_N \leq N$, \begin{enumerate}\label{cocoproof} \item $\lim_{N \to \infty} {\mathbb E}^N \eta_{j_N} = {\mathbb E} U$, \item $\lim_{N \to \infty} {\mbox {\em Var}}^N (\eta_{j_N}) = 0$. \end{enumerate} \medskip\noindent The proof of the first item is not difficult. However, the proof of the second item is complicated, and is split up into several lemmas. \medskip\noindent {\it Proof of Theorem \ref{quasiunits}, part (1).} We choose as initial configuration $\eta \equiv \mathbf{0}$, the configuration with all $N$ sites empty, so that according to Lemma \ref{plakjesalgemeen}, we can write \begin{equation} \eta_{j_N}(t) = \sum_{\theta=1}^t A_{\theta j_N}(t)U_{\theta}. \label{handig} \end{equation} Denoting expectation for this process by ${\mathbb E}^N_\mathbf{0}$, we find, using Remark \ref{plakjesspeciaal}, that $$ {\mathbb E}^N_\mathbf{0} \eta_{j_N}(t) = {\mathbb E} U ~{\mathbb E}^N_\mathbf{0} \ensuremath{\mathcal{R}}(\eta(t))_{j_N}. $$ First, we take the limit $t \to \infty$. By Theorem \ref{mu}, ${\mathbb E}^N_\mathbf{0}\eta_{j_N}(t)$ converges to ${\mathbb E}^N \eta_{j_N}$. From Proposition \ref{compare2}, it likewise follows that $\lim_{t \to \infty}{\mathbb E}^N_\mathbf{0} \ensuremath{\mathcal{R}}(\eta(t))_{j_N} = \frac N{N+1}$. Inserting these and subsequently taking the limit $ N\to \infty$ proves the first part. \qed {\medskip}{\noindent} For the proof of the second, more complicated part, we need a number of lemmas. First, we rewrite ${\mathrm{Var}}^N (\eta_{j_N})$ in the following manner. \begin{lemma} $$ {\mbox{\em Var}}^N(\eta_{j_N}) = {\mbox{\em Var}}(U) \lim_{t \to \infty} {\mathbb E}^N_\mathbf{0}\left[\sum_{\theta=1}^t(A_{\theta j_N}(t))^2\right] + ({\mathbb E} U)^2 \frac N{(N+1)^2}. $$ \label{variantieherschrijven} \end{lemma} \begin{proof} We start from expression (\ref{handig}), and use that the corresponding variance ${\mathrm{Var}}_\mathbf{0}^N$ converges to the stationary ${\mathrm{Var}}^N$ as $t \to \infty$ by Theorem \ref{uniquethm}.
We rewrite, for fixed $N$ and $j_N = j$, $$ {\mathrm{Var}}^N_\mathbf{0}(\eta_j(t)) = {\mathbb E}^N_\mathbf{0}\left[(\eta_j(t))^2\right] - \left[{\mathbb E}^N_\mathbf{0} \eta_j(t)\right]^2 = {\mathbb E}^N_\mathbf{0}\left[\left(\sum_{\theta=1}^t A_{\theta j}(t)U_{\theta}\right)^2\right] - \left[{\mathbb E}^N_\mathbf{0} \sum_{\theta=1}^t A_{\theta j}(t)U_{\theta}\right]^2 $$ $$ = {\mathbb E}^N_\mathbf{0}\left[\sum_{\theta=1}^t (A_{\theta j}(t))^2 U_{\theta}^2 + \sum_{\theta \neq \theta'} A_{\theta j}(t)U_{\theta}A_{\theta' j}(t)U_{\theta'}\right] - \left[{\mathbb E}^N_\mathbf{0} \sum_{\theta=1}^t A_{\theta j}(t)U_{\theta}\right]^2 $$ $$ = {\mathbb E}(U^2) {\mathbb E}^N_\mathbf{0}\left[\sum_{\theta=1}^t (A_{\theta j}(t))^2\right] + ({\mathbb E} U)^2{\mathbb E}^N_\mathbf{0}\left[\sum_{\theta \neq \theta'} A_{\theta j}(t)A_{\theta' j}(t)\right] - ({\mathbb E} U)^2\left[{\mathbb E}^N_\mathbf{0} \sum_{\theta=1}^t A_{\theta j}(t)\right]^2 $$ $$ = \left({\mathbb E}(U^2) - ({\mathbb E} U)^2\right){\mathbb E}^N_\mathbf{0}\left[\sum_{\theta=1}^t (A_{\theta j}(t))^2\right] + ({\mathbb E} U)^2\left[{\mathbb E}^N_\mathbf{0}\left(\sum_{\theta=1}^t A_{\theta j}(t)\right)^2 - \left({\mathbb E}^N_\mathbf{0}\sum_{\theta=1}^t A_{\theta j}(t)\right)^2\right] $$ $$ = {\mathrm{Var}}(U)~{\mathbb E}^N_\mathbf{0}\left[\sum_{\theta=1}^t (A_{\theta j}(t))^2\right] + ({\mathbb E} U)^2 {\mathrm{Var}}^N_\mathbf{0} (\ensuremath{\mathcal{R}}(\eta(t))_j), $$ where in the third equality we used the independence of the $A$-coefficients and the added amounts $U_\theta$. We now insert $j=j_N$, take the limit $t \to \infty$, and insert $\lim_{t \to \infty} {\mathrm{Var}}^N_\mathbf{0} (\ensuremath{\mathcal{R}}(\eta(t))_{j_N}) = {\mathrm{Var}}^N (\ensuremath{\mathcal{R}}(\eta)_{j_N}) = \frac N{(N+1)^2}$. \end{proof} At this point, in order to prove Theorem \ref{quasiunits}, it suffices to show that \begin{equation} \lim_{N\to \infty}\lim_{t \to \infty} {\mathbb E}^N_\mathbf{0}\left[\sum_{\theta=1}^t (A_{\theta j_N}(t))^2\right]=0. \label{alleenmaardit} \end{equation} The next lemmas are needed to obtain an estimate for this expectation. We will adopt the strategy of showing that the factors $A_{\theta j}(t)$ are typically small, so that the energy of a typical site consists of many tiny fractions of additions. To make this precise, we start by considering one fixed $\theta > N(N-1)$, a time $t>\theta$, and we fix $\epsilon>0$. \begin{definition} We say that the event $G_t(\alpha)$ occurs, if $\max_j A_{\theta j}(t) \leq \alpha$. We say that the event $H_t(\epsilon)$ occurs, if $\max_j A_{\theta j}(t) \geq \epsilon$, and if in addition there is a lattice interval of size at most $M = \lceil \frac 1\epsilon \rceil +1$, containing $X_\theta$, such that for all sites $j$ outside this interval, $A_{\theta j}(t) \leq \epsilon$. (We call the mentioned interval the $\theta$-heavy interval.) \label{zetjes} \end{definition} \medskip\noindent Note that since we have $\sum_j A_{\theta j}(t) \leq 1$ for every $\theta$, the number of sites where $A_{\theta j}(t) \geq \epsilon$ cannot exceed $\lceil \frac 1\epsilon \rceil$. In Lemma \ref{uitsmeren}, we proved that $\max_j A_{\theta j}(t)$ is nonincreasing in $t$, for $t \geq \theta$. Therefore, $G_t(\alpha)$ is increasing in $t$ as well. This is not true for $H_t(\epsilon)$, because after an avalanche, the sites where $A_{\theta j}(t)>\epsilon$ might not form an appropriate interval around $X_{\theta}$.
In view of what we want to prove, the events $G_t(\epsilon)$ and $H_t(\epsilon)$ are good events, because they imply that (if we think of $N$ as being much larger than $M$) $A_{\theta j}(t) \leq \epsilon$ `with large probability'. In the case that $G_t(\epsilon)$ occurs, $A_{\theta i}(t) \leq \epsilon$ for all $i$, and in the case that $H_t(\epsilon)$ occurs, there can be sites that contain a large $A_{\theta i}(t)$, but these sites are in the $\theta$-heavy interval containing $X_\theta$. This latter random variable is uniformly distributed on $\{1,\ldots,N\}$, so that there is a large probability that a particular $j$ does not happen to be among them. If we only know that $G_t(\alpha)$ occurs for some $\alpha>\epsilon$, then we cannot draw such a conclusion. However, we will see that this is rarely the case. \begin{lemma} For every $N$, for every $\theta>N(N-1)$, for every $\epsilon>0$, for every $K$ and $j$, \begin{enumerate} \item there exists a constant $c= c(\epsilon)$, such that for $\theta \leq t \leq \theta+K$ $$ {\mathbb P}^N_\mathbf{0}(A_{\theta j}(t)>\epsilon) \leq \frac {cK}N; $$ \item for every $N$ large enough, there exist constants $w=w(\epsilon)$ and $0<\gamma = \gamma(N,\epsilon)<1$, such that for $t > \theta$ $$ {\mathbb P}^N_\mathbf{0}(A_{\theta j}(t) > \epsilon) \leq (1-\gamma)^{t-\theta-3w}. $$ \end{enumerate} \label{eentheta} \end{lemma} {\medskip}{\noindent} In the proof of Lemma \ref{eentheta}, we need the following technical lemma. \begin{lemma} Consider a collection of real numbers $y_i \geq 0$, indexed by $\mathbb{N}$, with $\sum_i y_i \leq 1$ and such that for some $x\in\mathbb{N}$, $\max_{i\neq x} y_i \leq \alpha$. Then, for $j \geq x+ \lceil \frac 1{\alpha}\rceil$, we have $$ \sum_{i=1}^{j-x+2} \frac 1{2^i} y_{j-i+2} ~\leq ~ f(\alpha) := \left(1- \frac 1{2^{\lceil \frac 1{\alpha} \rceil}}\right)\alpha. $$ \label{alpha} \end{lemma} \begin{proof} We write \begin{eqnarray} \nonumber \sum_{i=1}^{j-x+2} \frac 1{2^i} y_{j-i+2} & = & \sum_{i=1}^{\lceil \frac 1{\alpha} \rceil} \frac 1{2^i} y_{j-i+2} + \sum_{i= \lceil \frac 1{\alpha} \rceil +1}^{j-x+2} \frac 1{2^i} y_{j-i+2}\\ \nonumber & \leq & \sum_{i=1}^{\lceil \frac 1{\alpha} \rceil} \frac 1{2^i} y_{j-i+2} + \frac 1{2^{\lceil\frac 1{\alpha}\rceil +1}} \sum_{i= \lceil \frac 1{\alpha} \rceil +1}^{j-x+2} y_{j-i+2}.\\ \end{eqnarray} Note that index $x$ is in the second sum. For $i = 1, \ldots, \lceil\frac 1{\alpha}\rceil$, write $$ y_{j-i+2} = {\alpha} - z_i, $$ with $0 \leq z_i \leq {\alpha}$. Then, since ${\alpha}\lceil\frac 1{\alpha}\rceil \geq 1$ and $\sum_i y_i \leq 1$, we have $\sum_{i= \lceil\frac 1{\alpha}\rceil +1}^{j-x+2} y_{j-i+2} \leq \sum_{i=1}^{\lceil\frac 1{\alpha}\rceil} z_i$, so that $$ \sum_{i=1}^{\lceil \frac 1{\alpha} \rceil} \frac 1{2^i} y_{j-i+2} + \frac 1{2^{\lceil\frac 1{\alpha}\rceil +1}} \sum_{i= \lceil \frac 1{\alpha} \rceil +1}^{j-x+2} y_{j-i+2} \leq \sum_{i=1}^{\lceil \frac 1{\alpha} \rceil} \frac 1{2^i} ({\alpha} - z_i) + \frac 1{2^{\lceil\frac 1{\alpha}\rceil +1}} \sum_{i= 1} ^{\lceil \frac 1{\alpha}\rceil} z_i $$ $$ = \sum_{i=1}^{\lceil\frac 1{\alpha}\rceil} \frac 1{2^i}{\alpha} - \sum_{i=1}^{\lceil\frac 1{\alpha}\rceil} \left(\frac 1{2^i} - \frac 1{2^{\lceil\frac 1{\alpha}\rceil +1}}\right) z_i \leq \sum_{i=1}^{\lceil\frac 1{\alpha}\rceil} \frac 1{2^i}{\alpha}, $$ where in the last step we used $z_i \geq 0$. 
Thus $$ \sum_{i=1}^{j-x+2} \frac 1{2^i} y_{j-i+2} \leq \sum_{i=1}^{\lceil\frac 1{\alpha}\rceil} \frac 1{2^i}{\alpha} = \left(1- \frac 1{2^{\lceil \frac 1{\alpha} \rceil}}\right){\alpha}. $$ \end{proof} {\medskip}{\noindent} {\it Proof of Lemma \ref{eentheta}, part (1).} We first discuss the case $t=\theta$. We show that $H_\theta(\epsilon)$ occurs, for arbitrary $\epsilon$. At $t=\theta$, the addition is made at $X_\theta$. From Proposition \ref{factortjes} part (3), it follows, for every $\epsilon$, that if after an avalanche there are sites $j$ with $A_{\theta j}(\theta)>\epsilon$ (we will call such sites `$\theta$-heavy' sites), then these sites form a set of adjacent sites including $X_\theta$, except for a possible empty site among them. Since we have $\sum_{j=1}^N A_{\theta j}(\theta) \leq 1$, there can be at most $\lceil \frac 1{\epsilon} \rceil$ $\theta$-heavy sites. If the addition was made to an empty site, then $A_{\theta X_\theta}(\theta) = 1$. Thus, the $\theta$-heavy interval has length at most $1+\lceil \frac 1{\epsilon} \rceil$, and we conclude that $H_\theta(\epsilon)$ occurs. To estimate the probability that $A_{\theta j}(\theta)>\epsilon$, or in other words, the probability that a given site $j$ is in the $\theta$-heavy interval, we use that $X_\theta$ is uniformly distributed on $\{1,\ldots,N\}$. Site $j$ can be in the $\theta$-heavy interval only if the distance between $X_\theta$ and $j$ is at most $1+\lceil \frac 1{\epsilon} \rceil$, so that ${\mathbb P}^N_\mathbf{0}(A_{\theta j}(\theta)>\epsilon) \leq 2\frac{1+\lceil \frac 1{\epsilon} \rceil}N =: \frac {c_2(\epsilon)}N$. We next discuss $\theta<t\leq \theta+K$. We introduce the following constants. We choose a number $w$ such that $f^w(1) \leq \epsilon$, with $f$ as in Lemma \ref{alpha}, and where $f^w$ denotes $f$ composed with itself $w$ times. Note that this is possible because $\lim_{k \to \infty}f^k(1)=0$. We choose a combination of $\tilde{\epsilon}_0$ and $d$ such that $\tilde{\epsilon}_w \leq \epsilon$, with $\tilde{\epsilon}_{k+1}$ defined as $\tilde{\epsilon}_k + \frac 1{2^{d+1}}(f^k(1) - \tilde{\epsilon}_k)$. Finally, we define $\tilde{M}_k = \lceil \frac 1{\tilde{\epsilon}_0} \rceil +1 +k(1+d)$. For fixed $\theta$ and a time $t > \theta$, we define three types of avalanches at time $t$: `good', `neutral' and `bad'. \begin{definition} For a fixed $\theta$, the avalanche at time $t$ is \begin{itemize} \item a {\em good} avalanche if the following conditions are satisfied: \begin{enumerate} \item $X_t$ and $X_\theta$ are on the same side of the empty site (if present) at $t-1$, \item $X_\theta$ is at distance at least $\tilde{M}_w$ from the boundary, \item $X_t$ is at distance at least $w$ from the boundary, and from the empty site (if present) at $t-1$, \item $X_t$ is at distance at least $\tilde{M}_w+ \lceil \frac 1\epsilon \rceil$ from $X_\theta$, \item $X_\theta$ is at distance at least $\tilde{M}_w$ from the empty site (if present) at $t-1$, \end{enumerate} \item a {\em neutral} avalanche if condition (5) is satisfied, but (1) is not, \item a {\em bad} avalanche in all other cases. \label{goodavalanche} \end{itemize} \end{definition} Having defined the three kinds of avalanches, we now claim the following: \begin{itemize} \item If $H_{t-1}(\epsilon)$ occurs, then after a neutral avalanche, $H_t(\epsilon)$ occurs. \item If $H_{t-1}(\epsilon)$ occurs, then after a good avalanche, $G_t(\epsilon)$ occurs.
\end{itemize} The first claim (about the neutral avalanche) holds because if condition (5) is satisfied, but (1) is not, then not only is $X_\theta$ not in the range of the avalanche, but the distance of $X_\theta$ to the empty site is large enough to guarantee that the entire $\theta$-heavy interval is not in the range. Thus, no site of the $\theta$-heavy interval topples in the avalanche. It then automatically follows that $H_t(\epsilon)$ occurs. To show the second claim (about the good avalanche), more work is required. We break the avalanche up into waves. Using a similar notation as in the proof of Proposition \ref{factortjes}, we will denote by $\tilde{A}_{\theta j}(k)$ the fraction of $U_\theta$ at site $j$ after wave $k$. We also define another event: we say that $\tilde{H}_k(\tilde{M}_k,\tilde{\alpha}_k,\tilde{\epsilon}_k)$ occurs if $\max_j \tilde{A}_{\theta j}(k)\leq \tilde{\alpha}_k$, and all sites where $\tilde{A}_{\theta j}(k) > \tilde{\epsilon}_k$ are in an interval of length at most $\tilde{M}_k$ containing $X_\theta$ (we will call this the $(k$-$\theta)$-heavy interval), with the exception of site $X_t$ when it is unstable, in which case we require that $\tilde{A}_{\theta X_t}(k) \leq 2\tilde{\epsilon}_k$. We define a `good' wave as a wave in which all sites of the $(k$-$\theta)$-heavy interval topple, and in which the starting site $X_t$ is at distance at least $\lceil\frac 1{\tilde{\alpha}_k}\rceil$ from the $(k$-$\theta)$-heavy interval. It should be clear now that Definition \ref{goodavalanche} has been designed precisely so that a good avalanche is an avalanche that starts with at least $w$ good waves. We will now show by induction on the number of waves that after an avalanche that starts with $w$ good waves, $G_t(\epsilon)$ occurs. For $k=0$, we choose $\tilde{\alpha}_0 = 1$, so that at $k=0$, $\tilde{H}_0(\tilde{M}_0,\tilde{\alpha}_0,\tilde{\epsilon}_0)$ occurs. We will choose $\tilde{\alpha}_{k+1} = f(\tilde{\alpha}_k)$ and $\tilde{\epsilon}_{k+1} = \tilde{\epsilon}_k + \frac 1{2^{d+1}} (\tilde{\alpha}_k-\tilde{\epsilon}_k)$, so that once $\tilde{H}_w(\tilde{M}_w,\tilde{\alpha}_w,\tilde{\epsilon}_w)$ has occurred, we are sure that after the avalanche $G_t(\epsilon)$ occurs, because both $\tilde{\alpha}_w$ and $\tilde{\epsilon}_w$ are at most $\epsilon$. Now all we need to show is that, if $\tilde{H}_k(\tilde{M}_k,\tilde{\alpha}_k,\tilde{\epsilon}_k)$ occurs, then after a good wave $\tilde{H}_{k+1}(\tilde{M}_{k+1},\tilde{\alpha}_{k+1},\tilde{\epsilon}_{k+1})$ occurs. From (\ref{avalanche}), we see that, for all $j>X_t$ that topple (and do not become empty), \begin{equation} \tilde{A}_{\theta j}(k+1) = \frac 12 \tilde{A}_{\theta, j+1}(k) + \frac 14 \tilde{A}_{\theta j}(k) + \frac 18 \tilde{A}_{\theta, j-1}(k)+ \cdots + \frac 1{2^{j-X_t+2}} \tilde{A}_{\theta, X_t}(k), \label{jisnietx} \end{equation} and similarly for $j<X_t$. For $j=X_t$, we have \begin{equation} \tilde{A}_{\theta j}(k+1) = (\frac 12 \tilde{A}_{\theta, j+1}(k)+ \frac 14 \tilde{A}_{\theta j}(k)){\large \bf{1}}_{\tilde{A}_{\theta, j+1}(k)\neq 0}+(\frac 12 \tilde{A}_{\theta, j-1}(k)+ \frac 14 \tilde{A}_{\theta j}(k)){\large \bf{1}}_{\tilde{A}_{\theta, j-1}(k)\neq 0}. \label{jisx} \end{equation} First we use that in a good wave, all sites in the $\theta$-heavy interval topple, and $X_t$ is not in this interval. We denote by $m$ the leftmost site of the $\theta$-heavy interval, so that the rightmost site is $m+\tilde{M}_k$. Suppose, without loss of generality, that $X_t<m$.
We substitute $\tilde{A}_{\theta j}(k) \leq \tilde{\alpha}_k$ for all $j$ in the $\theta$-heavy interval, $\tilde{A}_{\theta X_t}(k) \leq 2\tilde{\epsilon}_k$, and $\tilde{A}_{\theta j}(k) \leq \tilde{\epsilon}_k$ otherwise into \eqref{jisnietx}, to derive for all $j$ that topple: \begin{equation} \begin{array}{ll} j < m-1,\ j \neq X_t & \tilde{A}_{\theta j}(k+1) \leq \tilde{\epsilon}_k,\\ j = m-1, \ldots, m+\tilde{M}_k & \tilde{A}_{\theta j}(k+1) < \tilde{\alpha}_k,\\ j = m+\tilde{M}_k+d' & \tilde{A}_{\theta j}(k+1) < \tilde{\epsilon}_k + \frac 1{2^{d'+1}} (\tilde{\alpha}_k-\tilde{\epsilon}_k), \hspace{1cm} d'=1,2,\ldots\\ \end{array} \label{goodwave} \end{equation} Additionally, by (\ref{jisx}), we have $\tilde{A}_{\theta, X_t}(k+1) \leq 2\tilde{\epsilon}_k$. The factor 2 is only there as long as site $X_t$ is unstable. From \eqref{goodwave} we have that $\tilde{\alpha}_{k+1} < \tilde{\alpha}_k$, but moreover, in a good wave, the variables $\tilde{A}_{\theta j}(k)$ satisfy the conditions of Lemma \ref{alpha}, so that in fact $\tilde{\alpha}_{k+1} \leq f(\tilde{\alpha}_k)$. If we insert our choice of $d$ for $d'$, then we can see from \eqref{goodwave} that indeed after the good wave $\tilde{H}_{k+1}(\tilde{M}_{k+1},\tilde{\alpha}_{k+1},\tilde{\epsilon}_{k+1})$ occurs. Now we are ready to evaluate ${\mathbb P}^N_\mathbf{0}(A_{\theta j}(t)>\epsilon)$, for $t \in \{\theta+1, \ldots, \theta+K\}$. As is clear by now, there are three possibilities: $G_t(\epsilon)$ occurs, so that $\max_jA_{\theta j}(t) \leq \epsilon$, or $H_t(\epsilon)$ occurs, in which case $A_{\theta j}(t)$ can only be larger than $\epsilon$ if $j$ is in the $\theta$-heavy interval. We derived in the case $t=\theta$ that the probability of the latter is bounded above by $\frac {c_2}N$. Finally, it is possible that neither occurs, in which case we do not have an estimate for the probability that $A_{\theta j}(t) > \epsilon$. But in this last case, we must have had at least one bad avalanche between $\theta+1$ and $t$. We will now show that the probability of this event is bounded above by $\frac{K c_1}N$, where $c_1$ depends only on $\epsilon$. As stated in Definition \ref{goodavalanche}, a bad avalanche can occur at time $t$ only if at least one of the conditions (2) through (5) is not satisfied. Thus, we can bound the total probability of a bad avalanche at time $t$ by summing the probabilities that the various conditions are not satisfied. We discuss the conditions one by one. \begin{itemize} \item The probability that condition (2) is not satisfied, is bounded above by $\frac{2\tilde{M}_w}N$, since $X_\theta$ is distributed uniformly on $\{1,\ldots,N\}$. \item The probability that condition (3) is not satisfied, is bounded above by $\frac{4w}N$, since $X_t$ is distributed uniformly on $\{1,\ldots,N\}$, and independent of the position of the empty site at $t-1$, if present. \item The probability that condition (4) is not satisfied, is bounded above by $\frac{2(\tilde{M}_w+\lceil\frac 1\epsilon \rceil)}N$, since $X_t$ and $X_\theta$ are independent. \item The probability that condition (5) is not satisfied is bounded above by $\frac{2\tilde{M}_w}N$, since the position of the empty site at $t-1$ is uniform on $\{1,\ldots,N\}$.
\end{itemize} Thus, the total probability of a bad avalanche at time $t$ is bounded by $\frac{2\tilde{M}_w}N + \frac{4w}N + \frac{2(\tilde{M}_w+\lceil\frac 1\epsilon \rceil)}N + \frac{2\tilde{M}_w}N \equiv \frac{c_1}N$, so that the probability of at least one bad avalanche between $\theta+1$ and $t$ is bounded by $\frac{K c_1}N$. We conclude that for $t \in \{\theta+1, \ldots, \theta+K\}$, ${\mathbb P}^N_\mathbf{0}(A_{\theta j}(t)>\epsilon) \leq \frac{Kc_1 + c_2}N \leq \frac {cK}N$, for some $c>0$. \qed \medskip\noindent {\it Proof of Lemma \ref{eentheta}, part (2).} From Lemma \ref{alpha}, it follows that if $G_t(\alpha)$ occurs, and after $s$ time steps all $\theta$-heavy sites have toppled at least once, in avalanches that all start at least a distance $\lceil \frac 1\alpha \rceil$ from all current $\theta$-heavy sites, then $G_{t+s}(f(\alpha))$ occurs. We will exploit this fact as follows. Suppose that $G_t(\alpha)$ occurs and that in addition, at time $t$ the distribution of the empty site - if present - is uniform on $\{1,\ldots,N\}$. We claim that this implies that $G_{t+2}(f(\alpha))$ occurs with a probability that is bounded below, uniformly in $N$ and $\epsilon$. To see this, observe that if there is no empty site, then all $N$ sites topple in one time step. If there is an empty site, this (meaning all sites topple) also happens in two steps if in the first step, we add to one side of the empty site, and in the second step to the other side. Denote by $e_1$ the position of the empty site before the first addition (at $X_{t+1}$), and $e_2$ before the second addition (at $X_{t+2}$). If $X_{t+1} < e_1$, then $e_2<e_1$. Therefore, all sites topple if $X_{t+1} < e_1$ and $X_{t+2} > e_1$. With the distribution of $e_1$ uniform on $\{1,\ldots,N\}$, the probability that this happens is bounded below by some constant $\gamma'$ independent of $N$ and $\epsilon$. However, we have the extra demand that both additions should start at least a distance $\lceil \frac 1\alpha \rceil \leq \lceil \frac 1\epsilon \rceil$ from all current $\theta$-heavy sites, of which there are at most $\lceil \frac 1\epsilon \rceil$. Thus, both additions should avoid at most $\lceil \frac 1\epsilon \rceil^2$ sites. The probability that this happens is therefore somewhat smaller than $\gamma'$, but it is easy to see that the difference decreases with $N$. We can then conclude that there is an $N'$ large enough so that this probability is at least $\gamma > 0$ for all $N \geq N'$, with $0<\gamma<1$ independent of $N$ and $\epsilon$. In view of this, the probability that $G_{\theta+2}(f(1))$ occurs, for $N$ large enough, is at least $\gamma$. We wish to iterate this argument $w$ times. However, the lower bound $\gamma$ is only valid when the distribution of the empty site, if present, is uniform on $\{1,\ldots,N\}$. We do not have this for $\eta(\theta+2)$ since we have information about what happened in the time interval $(\theta,\theta+2)$. However, after one more addition, the position of the empty site in $\eta(\theta+3)$, if present, is again uniform on $\{1,\ldots,N\}$. Since $f^w(1) \leq \epsilon$, iterating this argument gives $$ {\mathbb P}^N_\mathbf{0}(\max_j A_{\theta j}(t) > \epsilon) \leq (1-\gamma)^{t-\theta-3w}. $$ \qed \medskip\noindent {\it Proof of Theorem \ref{quasiunits}, part (2).} By Lemma \ref{variantieherschrijven}, it suffices to prove \eqref{alleenmaardit}.
We estimate, using that $\sum_{\theta}A_{\theta j_N}(t) \leq 1$, $$ {\mathbb E}^N_\mathbf{0}\left[\sum_{\theta=1}^t (A_{\theta j_N}(t))^2\right] \leq {\mathbb E}^N_\mathbf{0}\left[\max_{1\leq\theta\leq t} A_{\theta j_N}(t) \sum_{\theta=1}^t A_{\theta j_N}(t)\right] \leq {\mathbb E}^N_\mathbf{0}\left[\max_{1\leq\theta\leq t} A_{\theta j_N}(t)\right] $$ $$ \leq {\mathbb E}^N_\mathbf{0}\left[\max_{t-K \leq \theta \leq t}A_{\theta j_N}(t)\right] + {\mathbb E}^N_\mathbf{0}\left[\max_{N(N+1) < \theta < t-K}A_{\theta j_N}(t)\right] + {\mathbb E}^N_\mathbf{0}\left[\max_{1 \leq \theta \leq N(N+1)}A_{\theta j_N}(t)\right]. $$ For the first two terms we then estimate, using that $\max_\theta A_{\theta j_N}(t) \leq 1$, $$ {\mathbb E}^N_\mathbf{0}[\max_\theta A_{\theta j_N}(t)] \leq \epsilon + {\mathbb P}^N_\mathbf{0}(\max_\theta A_{\theta j_N}(t) > \epsilon) \leq \epsilon + \sum_\theta {\mathbb P}^N_\mathbf{0}(A_{\theta j_N}(t)>\epsilon). $$ We finally use Lemma \ref{eentheta}, and choose $K = K_N$ increasing with $N$. For $\theta \in [t-K_N,t]$, we straightforwardly obtain $\sum_{\theta=t-K_N}^t {\mathbb P}^N_\mathbf{0}\left(A_{\theta j_N}(t)>\epsilon\right) = O(\frac {K_N^2}N)$, uniformly in $t$, as $N \to \infty$. For $\theta < t-K_N$ we calculate $$ \sum_{\theta < t-K_N} {\mathbb P}^N_\mathbf{0}\left(A_{\theta j_N}(t)>\epsilon\right) \leq \sum_{t-\theta>K_N}(1-\gamma)^{t-\theta-3w} = O((1-\gamma)^{K_N}), \hspace{2.8cm} N \to \infty, $$ so that $$ {\mathbb E}^N_\mathbf{0}\left[\sum_{\theta=1}^t(A_{\theta j_N}(t))^2\right] \leq 2\epsilon + O(\frac {K_N^2}N) + O((1-\gamma)^{K_N}) + {\mathbb E}^N_\mathbf{0}\left[\max_{1 \leq \theta \leq N(N+1)}A_{\theta j_N}(t)\right], \hspace{1cm} N\to\infty. $$ In the limit $t \to \infty$, by Lemma \ref{uitsmeren} part (2), the last term vanishes. We now choose $K_N = N^{1/3}$ to obtain $$ \limsup_{N\to\infty}\lim_{t\to\infty}{\mathbb E}^N_\mathbf{0}\left[\sum_{\theta=1}^t (A_{\theta j_N}(t))^2\right] \leq 2\epsilon. $$ Since $\epsilon>0$ is arbitrary, we finally conclude that $$ \lim_{N\to\infty}\lim_{t\to\infty}{\mathbb E}^N_\mathbf{0} \left[\sum_{\theta=1}^t (A_{\theta j_N}(t))^2\right] = 0. $$ \qed \section{The $(N,[0,1])$-model} \label{nultoteensection} \subsection{Uniqueness of the stationary distribution} \begin{theorem} The $(N,[0,1])$ model has a unique stationary distribution $\upsilon_N$. For every initial distribution $\nu$ on $\Omega_N$, ${\mathbb P}_\nu$ converges in total variation to $\upsilon_N$. \label{nu} \end{theorem} \begin{proof} We prove this theorem again by constructing a successful coupling. For clarity, we first treat the case $N=2$, and then generalize to $N > 2$. The coupling is best described in words. Using the same notation as in previous couplings, we call the two independent copies of the process $\eta^1(t)$ and $\eta^2(t)$, and call the coupled processes $\hat{\eta}^1(t)$ and $\hat{\eta}^2(t)$. Initially, we choose $\hat{\eta}^1(t) = \eta^1(t)$ and $\hat{\eta}^2(t) = \eta^2(t)$. It is easy, but tedious, to show that the event that $\eta^1_1(t)=\eta^2_1(t)=0$ while $\ensuremath{\mathcal{R}}(\eta^1_2(t))=\ensuremath{\mathcal{R}}(\eta^2_2(t)) = 1$ occurs infinitely often. At the first such time $T_1$, we choose the next addition as follows. Call $\Delta(t) = \eta^1_2(t) - \eta^2_2(t)$. We choose $\hat{X}^2_{T_1+1} = X^1_{T_1+1}$, and $\hat{U}^2_{T_1+1} = (U^1_{T_1+1}+\Delta(T_1))\mbox{mod}~1$. Observe that the distribution of $\hat{U}^2_{T_1+1}$ is uniform on $[0,1]$.
This addition is such that with positive probability the full sites are chosen for the addition, and the difference $\Delta(T_1)$ is canceled. More precisely, this occurs if $X^1_{T_1+1} = 2$, which has probability 1/2, and $(U^1_{T_1+1}+\Delta(T_1))\mbox{mod}~1 = U^1_{T_1+1}+\Delta(T_1)$, which has probability at least 1/2, since $\eta^1_2(T_1)$ and $\eta^2_2(T_1)$ are both full, so that $|\Delta(T_1)| \leq 1/2$. If this occurs, then we achieve success, i.e., $\hat{\eta}^1(T_1+1)=\hat{\eta}^2(T_1+1)$, and from that time on we can let the two coupled processes evolve together. If $\hat{\eta}^1(T_1+1) \neq \hat{\eta}^2(T_1+1)$, then we evolve the two coupled processes independently, and repeat the above procedure at the next instant that $\hat{\eta}^1_1(t)=\hat{\eta}^2_1(t)=0$. Since at every such instant the probability of success is positive, only a finite number of attempts is needed. Therefore, the coupling constructed above is successful, and this proves the claim for $N=2$. We now describe the coupling in the case $N>2$. We will again evolve two processes independently, until a time at which $\eta^1_1(t) = \eta^2_1(t) = 0$, while all other sites are full. At this time we will attempt to cancel the differences on the other $N-1$ sites one by one. We define $\Delta_j(t) = \eta^1_j(t)-\eta^2_j(t)$, and as before we would be successful if we could cancel all these differences. However, now that $N>2$, we do not want an avalanche to occur during this equalizing procedure, because we need $\eta^1_1(t) = \eta^2_1(t) = 0$ during the entire procedure. Therefore, we specify $T_1$ further: $T_1$ is the first time at which not only $\eta^1_1(t) = \eta^2_1(t) = 0$ and all other sites are full, but also $\eta^1_j(t) < 1-\epsilon$ and $\eta^2_j(t) < 1- \epsilon$, for all $j = 2, \ldots, N$, with $\epsilon=\frac 1{2^{N+1}}$. At such a time, a positive amount can be added to each site without starting an avalanche. We will first show that such a time occurs infinitely often, which also settles the case $N=2$. By Proposition \ref{compare}, after a finite time $\eta^1(t)$ and $\eta^2(t)$ contain at most one non-full site. It now suffices to show that for any $\xi(t) \in \Omega_N$ with at most one non-full site, the event that $\xi_1(t+4)=0$ while $\xi_j(t+4)\leq 1-\frac 1{2^{N+1}}$ for every $2 \leq j \leq N$ occurs with positive probability. One explicit possibility is as follows. The first addition should cause an avalanche; this ensures that $\xi(t+1)$ contains one empty site. This occurs if the addition site is a full site and the addition is at least $1/2$, which has probability at least $\frac 12 (1-\frac 1N)$. The second addition should change the empty site into a full one. For this to occur, the addition should be at least 1/2, and the empty site should be chosen. This has probability $\frac 1{2N}$. The third addition should be at least 1/2 and made to site $1$, so that an avalanche is started that will result in $\xi_N(t+3)=0$. This has again probability $\frac 1{2N}$. Finally, the last addition should be an amount in $[\frac 12, \frac 34]$, made to site $N-1$. Then by (\ref{avalanche}), every site but site $N$ will topple once, and after this avalanche, site 1 will be empty, while every other site contains at most $1-\frac 1{2^{N+1}}$. This last addition has probability $\frac 1{4N}$. Now we show that at time $T_1$ defined as above, there is a positive probability of success. To choose all full sites one by one, we require, first, for all $j = 2, \ldots, N$ that $X^1_{T_1+j-1} = j$.
This has probability $(\frac 1N)^{N-1}$. Second, we need $(U^1_{T_1+j-1}+\Delta_j(T_1))\mbox{mod}~1 = U^1_{T_1+j-1}+\Delta_j(T_1)$ for all $j = 2, \ldots, N$. This event is independent of the previous event and has probability at least $(\frac 12)^{N-1}$. If this second condition is met, then, third, we need to avoid avalanches, so for all $j = 2, \ldots, N$, $\eta^1_j(T_1+j-1)+U^1_{T_1+j-1} = \eta^2_j(T_1+j-1)+\hat{U}^2_{T_1+j-1} <1$. It is not hard to see that this has positive conditional probability, given the previous events. We conclude that the probability of success at time $T_1+N-1$ is positive, so that we only need a finite number of such attempts. Therefore, the coupling is successful, and we are done. \end{proof} \subsection{Simulations} \begin{figure}[ht] \centerline{\includegraphics[width=7cm]{zhangsims}} \caption{Simulation results for the $(N,[0,1])$-model. The histograms represent observed energies during 100,000 (a,b) and 200,000 iterations (c-f). The system size was 3 sites (a,b), 30 sites (c,d) and 100 sites (e,f). (a), (c) and (e) are boundary sites; (b), (d) and (f) are central sites.} \label{zhangsims} \end{figure} We performed Monte Carlo simulations of the $(N,[0,1])$-model, for various values of $N$. Figure \ref{zhangsims} shows histograms of the energies that a site assumes during all the iterations. We started from the empty configuration, but omitted the first 10$\%$ of the observations to avoid recording transient behavior. Further increasing this percentage, or the number of iterations, had no visible influence on the results. The presented results show that, as the number of sites of the model increases, the energy becomes more and more concentrated around a value close to 0.7. In the next section, we present an argument that this value should be $\sqrt{1/2}$. We further observe that it seems to make a difference where the site is located: at the boundary, the variance seems to be larger than in the middle. \subsection{The expected stationary energy per site as $N \to \infty$} \label{expectedsection} From the simulations it appears that for large values of $N$, the energy per site concentrates at a value close to 0.7, for every site. Below we argue, under some assumptions that are consistent with our simulations, that this value should be $\sqrt{1/2}$. First, we assume that every site has the same expected stationary energy. Moreover, we assume that pairs of sites are asymptotically independent, i.e., $\eta_x$ becomes independent of $\eta_y$ as $|x-y| \to \infty$. (If the stationary measure is indeed such that the energy of every site is a.s.\ equal to a constant, then this second assumption is clearly true.) With ${\mathbb E}_{\upsilon_N}$ denoting expectation with respect to the stationary distribution $\upsilon_N$, we say that $(\upsilon_N)_N$ is {\em asymptotically independent} if for any $1 \leq x_N, y_N \leq N$ with $|x_N-y_N| \to \infty$, and for any $A,B$ subsets of $\mathbb{R}$ with positive Lebesgue measure, we have \begin{equation} \lim_{N \to \infty} \left( {\mathbb E}_{\upsilon_N}({\bf 1}_{\eta_{x_N}\in B}|\eta_{y_N}\in A)- {\mathbb E}_{\upsilon_N}({\bf 1}_{\eta_{x_N}\in B})\right) = 0. \label{expected2} \end{equation} \begin{theorem} Suppose that in the $(N,[0,1])$ model, for any sequence $j_N \in \{1,\ldots,N\}$, \begin{equation} \lim_{N \to \infty} {\mathbb E}_{\upsilon_N} (\eta_{j_N}) = \rho, \label{expected1} \end{equation} for some constant $\rho$. Suppose in addition that $(\upsilon_N)_N$ is asymptotically independent.
Then we have $\rho = \sqrt{\frac 12}$. \end{theorem} \begin{proof} The proof is based on a conservation argument. If we pick a configuration according to $\upsilon_N$ and we make an addition $U$, we denote the random amount that leaves the system by $E_{out,N}$. By stationarity, the expectation of $U$ must be the same as the expectation of $E_{out,N}$. The amount of energy that leaves the system in case of an avalanche depends on whether or not one of the sites is empty (or behaves as empty). Remember (Proposition \ref{compare}) that when we pick a configuration according to the stationary distribution, there can be at most one empty or anomalous site. If there is one empty site, then the avalanche reaches one boundary. If there are only full sites, then the avalanche reaches both boundaries, and in case of one anomalous site, both can happen. However, configurations with no empty site have vanishing probability as $N \to \infty$: we claim that the stationary probability for a configuration to have no empty site is bounded above by $p_N$, with $\lim_{N\to\infty}p_N = 0$. To see this, we divide the support of the stationary distribution into two sets: $\mathcal{E}$, the set of configurations with one empty site, and $\mathcal{N}$, the set of configurations with no empty site. The only way to reach $\mathcal{N}$ from $\mathcal{E}$ is to make an addition precisely at the empty site. As $X$ is uniformly distributed on $\{1,\ldots,N\}$, this has probability $\frac 1N$, irrespective of the details of the configuration. The only way to reach $\mathcal{E}$ from $\mathcal{N}$ is to cause an avalanche; this certainly happens if an addition of at least $1/2$ is made to a full site. Again, since $X$ is uniformly distributed on $\{1,\ldots,N\}$, and since there is at most one non-full site, this has probability at least $\frac 12 \frac {N-1}{N}$. By stationarity, the probability flows between $\mathcal{E}$ and $\mathcal{N}$ balance, so that $\upsilon_N(\mathcal{N}) \cdot \frac 12 \frac{N-1}{N} \leq \upsilon_N(\mathcal{E}) \cdot \frac 1N \leq \frac 1N$, and we can take $p_N = \frac 2{N-1}$. Now let $X$ be the (random) addition site at a given time, and denote by $A_x$ the event that $X=x$ and that this addition causes the start of an avalanche. Since $E_{out,N}=0$ when no avalanche is started, we can write \begin{equation} {\mathbb E}_{\upsilon_N}(E_{out,N}) = \sum_{x=1}^N {\mathbb E}_{\upsilon_N}(E_{out,N}|A_x){\mathbb P}_{\upsilon_N}(A_x). \label{uit} \end{equation} We calculate ${\mathbb P}_{\upsilon_N}(A_x)$ as follows, writing $U$ for the value of the addition: \begin{eqnarray} \label{ggg} {\mathbb P}_{\upsilon_N}(A_x) & = & \frac{1}{N}{\mathbb P}_{\upsilon_N}(\eta_x+U \geq 1) = \frac{1}{N}{\mathbb P}_{\upsilon_N}(U \geq 1-\eta_x) \nonumber \\ & = & \frac{1}{N}\int{\mathbb P}_{\upsilon_N}(U \geq 1-\eta_x)d\upsilon_N(\eta) =\frac{1}{N} \int \eta_x d\upsilon_N(\eta) \nonumber \\ & = & \frac{1}{N}{\mathbb E}_{\upsilon_N}(\eta_x). \label{probA} \end{eqnarray} Let $L_N = \lceil \log N \rceil$. Even if the avalanche reaches both boundary sites, the amount of energy that leaves the system can never exceed 2, which implies that \begin{equation} \label{uu} \left|\sum_{x=1}^N {\mathbb E}_{\upsilon_N}(E_{out,N}|A_x) - \sum_{x=2L_N}^{N-2L_N} {\mathbb E}_{\upsilon_N}(E_{out,N}|A_x)\right| \leq 8L_N. \end{equation} It follows from (\ref{uit}), (\ref{ggg}) and (\ref{uu}) that \begin{equation} \label{voorlopig} {\mathbb E}_{\upsilon_N}(E_{out,N}) = \frac{1}{N}\sum_{x=2L_N}^{N-2L_N} {\mathbb E}_{\upsilon_N}(E_{out,N}|A_x){\mathbb E}_{\upsilon_N}(\eta_x) + O(L_N/N).
\end{equation} If the avalanche, started at site $x$, reaches the boundary at site 1, then the amount of energy that leaves the system is given by $\frac 12 \eta_1 + \frac 14 \eta_2 + \cdots + (\frac 12)^x (\eta_x +U)$. For all $x \in \{2L_N,\ldots,N-2L_N\}$, this can be written as $$ \frac 12 \eta_1 + \frac 14 \eta_2 + \cdots + (\frac 12)^{L_N}\eta_{L_N} + (\frac 12)^{L_N+1}\eta_{L_N+1} + \cdots + (\frac 12)^x (\eta_x +U), $$ where for the last part of this expression, we have the bound $$ (\frac 12)^{L_N+1}\eta_{L_N+1} + \cdots + (\frac 12)^x (\eta_x +U) \leq (\frac 12)^{L_N}. $$ Since the occurrence of $A_x$ depends only on $\eta_x$ (and on $X$ and $U$), for $2L_N \leq x \leq N-2L_N$, by asymptotic independence there is an $\alpha_N$, with $\lim_{N\to\infty} \alpha_N = 0$, such that for all $1 \leq i \leq L_N$ and $ 2L_N \leq x \leq N-2L_N$, we have $$ |{\mathbb E}_{\upsilon_N}(\eta_i|A_x) - {\mathbb E}_{\upsilon_N}(\eta_i)| \leq \alpha_N, $$ so that $$ \left|{\mathbb E}_{\upsilon_N}\left(\frac 12 \eta_1 + \cdots + (\frac 12)^{L_N}\eta_{L_N}|A_x\right)-{\mathbb E}_{\upsilon_N}\left(\frac 12 \eta_1 + \cdots + (\frac 12)^{L_N}\eta_{L_N}\right)\right| \leq \left(\frac 12 + \frac 14 + \cdots\right)\alpha_N, $$ which is bounded above by $\alpha_N$. By symmetry, we have a similar result in the case that the other boundary is reached. In case both boundaries are reached, we simply use that the amount of energy that leaves the system is bounded above by 2. In view of this, we continue the bound in (\ref{voorlopig}) as follows \begin{eqnarray*} \nonumber {\mathbb E}_{\upsilon_N}(E_{out,N}) & = & \frac 1N \sum_{x=2L_N}^{N-2L_N}\left( {\mathbb E}_{\upsilon_N}(\frac 12 \eta_1 + \frac 14 \eta_2 + \cdots + (\frac 12 )^{L_N}\eta_{L_N}| A_x) +O((\frac{1}{2})^{L_N}) \right){\mathbb E}_{\upsilon_N}(\eta_x ) + \\ & & + O(L_N/N)\\ & = & \frac 1N \sum_{x=2L_N}^{N-2L_N} {\mathbb E}_{\upsilon_N}(\frac 12 \eta_1 + \frac 14 \eta_2 + \cdots + (\frac 12 )^{L_N}\eta_{L_N}){\mathbb E}_{\upsilon_N}(\eta_x ) + \\ & & + O(\frac{L_N}{N}) + O((\frac12 )^{L_N}) + O( \alpha_N ) + O(p_N), \end{eqnarray*} as $N \to \infty$. Letting $N \to \infty$ and inserting \eqref{expected1} now gives $$ \lim_{N\to\infty}{\mathbb E}_{\upsilon_N}(E_{out,N}) = \rho^2. $$ As the expectation of $U$ is $\frac 12$, we conclude that $\rho = \sqrt{\frac 12}$. \end{proof}
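\medskip\noindent The conservation argument above is easy to probe numerically. The following is a minimal Monte Carlo sketch of the $(N,[0,1])$-model, assuming the toppling rule used throughout this paper: an unstable site (energy at least 1) transfers half of its energy to each neighbour and becomes empty, and energy sent past a boundary leaves the system. It is illustrative only (it is not the code used for Fig.~\ref{zhangsims}), and it fixes one arbitrary toppling order; since models of this type are not abelian, the stable configuration reached can in principle depend on this order. For moderate $N$ it reproduces the concentration of the stationary energy near $\sqrt{1/2} \approx 0.707$.

\begin{verbatim}
import random

def relax(eta):
    # Topple unstable sites (energy >= 1) until the configuration is
    # stable: an unstable site sends half of its energy to each
    # neighbour and becomes empty; energy crossing a boundary is lost.
    N = len(eta)
    unstable = [j for j in range(N) if eta[j] >= 1.0]
    while unstable:
        j = unstable.pop()
        if eta[j] < 1.0:
            continue          # became stable in the meantime
        half = eta[j] / 2.0
        eta[j] = 0.0
        for k in (j - 1, j + 1):
            if 0 <= k < N:
                eta[k] += half
                if eta[k] >= 1.0:
                    unstable.append(k)

def simulate(N=30, steps=200000, burn_in=20000, seed=1):
    # At each step, add U ~ uniform[0,1] at a uniformly chosen site and
    # stabilise; return the time-averaged energy of the central site
    # after the burn-in period.
    rng = random.Random(seed)
    eta = [0.0] * N           # start from the empty configuration
    centre, total, count = N // 2, 0.0, 0
    for t in range(steps):
        eta[rng.randrange(N)] += rng.random()
        relax(eta)
        if t >= burn_in:
            total += eta[centre]
            count += 1
    return total / count

print(simulate())             # close to sqrt(1/2) ~ 0.707
\end{verbatim}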
\section{Introduction} \label{sect:intro} Although significantly more luminous than their low-mass counterparts, massive stars ($M_\star>8$\,M$_\odot$) pose a challenging problem for study, particularly in their early formation stages. Massive stars are much less common and have shorter lifetimes than intermediate- and low-mass stars. They are known to form almost exclusively in clusters \citep{de-wit2004}, where source confusion limits the ability to discriminate between individual stars. Because these stars form much more quickly than intermediate- and low-mass stars \citep[e.g.,][]{davies2011,mottram2011b}, they reach the main sequence while still deeply embedded within their natal clump, so that their formation stages take place beneath hundreds of magnitudes of visual extinction. These observational difficulties complicate the identification of the large statistical samples required to investigate the earliest stages of massive star formation; as a result, our understanding of the initial conditions required for massive star formation, and of the processes involved, is much poorer than for lower-mass stars. The motivation for understanding the formation of massive stars is nevertheless strong: these stars are responsible for many of the higher-energy events in the universe, and play a significant role in the evolution of their host galaxies \citep{kennicutt2005}. Throughout their lives they enrich the local chemistry, and inject an enormous amount of radiative and mechanical energy into the interstellar medium (ISM) in the form of UV radiation, stellar winds, jets and outflows, and supernova explosions. These feedback mechanisms play a role in regulating the star formation in their vicinity, either by disrupting molecular clouds before star formation has begun, or by triggering the formation of future generations of stars in the surrounding molecular material. Triggering may occur by sweeping up and compressing the molecular material in surrounding clouds via the collect-and-collapse mechanism (e.g., \citealt{whitworth1994, deharveng2003}) or through radiatively-driven implosion (e.g., \citealt{bertoldi1989, urquhart2007d}). The processes involved in the formation of massive stars, and their subsequent impact on their local environment, are key elements in understanding the role massive stars play in shaping the dynamics and structure of their host galaxies. The Red MSX Source (RMS; \citealt{hoare2005,urquhart2008}; Lumsden et al. 2013, submitted) survey has been tailored to address many of these outstanding questions by identifying a large, well-selected sample of massive young stellar objects (MYSOs) and compact and ultra-compact (UC) H~{\sc ii}\ regions. The RMS survey essentially consists of a suite of follow-up observations and complementary data from other surveys (e.g., 2MASS, UKIDSS, VVV, GLIMPSE, MIPSGAL, ATLASGAL and CORNISH; \citealt{2mass}, \citealt{lawrence2007}, \citealt{vvv}, \citealt{benjamin2003_ori}, \citealt{carey2009}, \citealt{schuller2009} and \citealt{hoare2012}, respectively) of a mid-infrared colour-selected sample of $\sim$5000 MSX sources \citep{lumsden2002}. This initial sample contains a significant number of dusty objects, such as evolved stars, that have similar mid-infrared colours to embedded young stars; however, the follow-up observations have been carefully chosen to identify and remove these contaminating sources from the final sample.
A database has been constructed to hold all of these multi-wavelength data sets and to compile all of the available data on a source-by-source basis to aid in their classification.\footnote{\tt{http://rms.leeds.ac.uk/cgi-bin/public/RMS\_DATABASE.cgi.}} With our programme of follow-up observations and the classification now complete, we have identified approximately 1600 YSOs and H~{\sc ii}\ regions located throughout the Galactic plane ($|{b}| < 5\degr$). This sample of young embedded massive stars is an order of magnitude larger than was previously available. The RMS sample provides a sufficient number of sources to allow statistically significant studies of young massive stars as a function of luminosity and environment, while avoiding many of the biases associated with previous surveys of young massive stars. A complete overview of this project, a detailed description of the classification scheme, and a discussion of the properties of the final embedded catalogue are presented in Lumsden et al. (2013). We have investigated the Galactic distribution of RMS MYSOs in two previous papers, both of which were focused on smaller subsamples \citep[i.e.,][]{urquhart2011a,urquhart2012}. In this paper we build on these previous studies and use the full RMS sample to provide a comprehensive picture of the Galactic distribution of massive star formation. We will compare the distribution of the now complete RMS sample of massive young stars with the expected positions of the spiral arms, and investigate the source and luminosity surface densities as a function of Galactic location. In the next section we summarise the previous molecular-line observations and describe additional observations that have been undertaken in an effort to obtain a complete set of radial velocities for this sample. In Section\,\ref{sect:distances} we review the various methods used to determine distances, which are then used to calculate individual source bolometric luminosities and estimate the survey's completeness. In Section\,\ref{sect:gal_structure} we use these distances and luminosities to investigate the Galactic distribution of massive star formation, compare it with the positions of the spiral arms, and measure the scaleheight of the Galactic disk. Finally, in Section\,\ref{sect:summary_conclusions} we present a summary of our results and highlight our main findings. \section{Radial velocity measurements} \subsection{Previous observations} Molecular-line observations are a crucial part of the RMS follow-up campaign, as they provide radial velocities for the determination of kinematic distances and luminosities. These are used to distinguish nearby low- and intermediate-mass YSOs from the generally more distant MYSOs. They can also be useful in identifying the more evolved stars that contaminate our sample, as these are not generally associated with sufficient cold gas to produce detectable emission in the lower excitation states \citep[i.e.,][]{loup1993}. The \ensuremath{^{13}}CO\ (1-0) and (2-1) transitions were chosen as optimal tracers because \ensuremath{^{13}}CO\ has a lower abundance than its $^{12}$CO counterpart. \ensuremath{^{12}}CO\ is generally optically thick in these regions, and suffers from saturation, self-absorption and multiple emission components along a given line of sight that can produce complex line profiles, while \ensuremath{^{13}}CO\ is generally free from these problems.
Furthermore, \ensuremath{^{13}}CO\ is more abundant than C\ensuremath{^{18}}O, allowing emission to be detected in a modest amount of observing time. We have made \ensuremath{^{13}}CO\ observations towards $\sim$2000 sources; the results of the majority of these were reported in \citet{urquhart_13co_south,urquhart_13co_north} and the remainder will be discussed in the following subsection. Although the $^{13}$CO transition is much less affected by many of the problems associated with $^{12}$CO emission, multiple components are still detected towards $\sim$60\,per\,cent of the observed sources. We have therefore followed up a large number of these with observations of high density tracers such as CS~(2-1), NH$_3$ and water maser observations (\citealt{urquhart2009_h2o,urquhart2011b}). We have complemented these targeted observations with other high density transitions reported in the literature \citep[e.g.,][]{bronfman1996,schlingman2011,wienen2012}. \subsection{Additional observations: $^{13}$CO, CS and NH$_3$} \setlength{\tabcolsep}{6pt} \begin{table*} \begin{center} \caption{ Fitted molecular line parameters.} \label{tbl:addition_obs} \begin{minipage}{\linewidth} \begin{tabular}{lccc....} \hline \hline \multirow{2}{24mm}{Field Name} & RA &Dec. & \multirow{2}{14mm}{Transition} & \multicolumn{1}{c}{r.m.s.} & \multicolumn{1}{c}{$V_{\rm{LSR}}$} & \multicolumn{1}{c}{$T^*_{\rm{R}}$} & \multicolumn{1}{c}{FWHM}\\ & (J2000) & (J2000) & & \multicolumn{1}{c}{(mK)} & \multicolumn{1}{c}{km\,s$^{-1}$} &\multicolumn{1}{c}{(K)} &\multicolumn{1}{c}{(km\,s$^{-1}$)} \\ \hline G305.2017+00.2072 & 13:11:10.29 & $-$62:34:39.0 & NH$_{3}$ (1,1) & 15 & -41.38 & 0.43 & 12.24 \\ G305.2017+00.2072 & 13:11:10.29 & $-$62:34:39.0 & NH$_{3}$ (2,2) & 16 & -41.54 & 0.18 & 5.94 \\ G305.2242+00.2028 & 13:11:22.07 & $-$62:34:48.0 & CS (J=2-1) & 60 & -40.69 & 0.31 & 9.65 \\ G305.2242+00.2028 & 13:11:22.12 & $-$62:34:48.7 & NH$_{3}$ (1,1) & 16 & -41.45 & 0.09 & 10.29 \\ G305.2242+00.2028 & 13:11:22.12 & $-$62:34:48.7 & NH$_{3}$ (2,2) & 14 & -41.81 & 0.08 & 5.84 \\ G305.2535+00.2412 & 13:11:35.80 & $-$62:32:22.9 & NH$_{3}$ (1,1) & 14 & -36.91 & 0.04 & 10.84 \\ G305.2535+00.2412 & 13:11:35.80 & $-$62:32:22.9 & NH$_{3}$ (2,2) & 13 & -38.55 & 0.08 & 3.66 \\ G305.3210+00.0706 & 13:12:17.96 & $-$62:42:16.3 & $^{13}$CO (J=1-0) & 71 & -31.71 & 0.33 & 2.83 \\ G305.3210+00.0706 & 13:12:17.96 & $-$62:42:16.3 & $^{13}$CO (J=1-0) & 71 & -38.80 & 0.74 & 5.19 \\ G305.3210+00.0706 & 13:12:17.96 & $-$62:42:16.3 & $^{13}$CO (J=1-0) & 71 & -45.32 & 0.59 & 2.40 \\ G305.3526+00.1945 & 13:12:29.36 & $-$62:34:41.2 & $^{13}$CO (J=1-0) & 67 & -37.69 & 6.20 & 6.70 \\ G305.3676+00.2095 & 13:12:36.44 & $-$62:33:43.5 & $^{13}$CO (J=1-0) & 65 & -34.64 & 6.58 & 4.82 \\ G305.3719+00.1837 & 13:12:39.69 & $-$62:35:15.0 & NH$_{3}$ (1,1) & 16 & -38.11 & 0.25 & 9.03 \\ G305.3719+00.1837 & 13:12:39.69 & $-$62:35:15.0 & NH$_{3}$ (2,2) & 15 & -38.48 & 0.13 & 5.48 \\ G305.3779+00.2108 & 13:12:41.56 & $-$62:33:35.4 & $^{13}$CO (J=1-0) & 67 & -33.57 & 5.16 & 3.46 \\ G305.3779+00.2108 & 13:12:41.56 & $-$62:33:35.4 & $^{13}$CO (J=1-0) & 67 & -38.53 & 1.37 & 5.59 \\ G305.3779+00.2108 & 13:12:41.56 & $-$62:33:35.4 & $^{13}$CO (J=1-0) & 67 & -45.27 & 0.87 & 0.98 \\ G305.4399+00.2103 & 13:13:13.84 & $-$62:33:19.0 & NH$_{3}$ (1,1) & 15 & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} \\ G305.4399+00.2103 & 13:13:13.84 & $-$62:33:19.0 & NH$_{3}$ (2,2) & 15 & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} & 
\multicolumn{1}{c}{$\cdots$} \\ G305.4748$-$00.0961 & 13:13:45.76 & $-$62:51:27.7 & NH$_{3}$ (1,1) & 10 & -38.36 & 0.25 & 5.17 \\ G305.4748$-$00.0961 & 13:13:45.76 & $-$62:51:27.7 & NH$_{3}$ (2,2) & 11 & -38.48 & 0.09 & 2.35 \\ G305.4840+00.2248 & 13:13:35.99 & $-$62:32:12.8 & CS (J=2-1) & 46 & -44.59 & 0.55 & 1.41 \\ G305.4840+00.2248 & 13:13:36.05 & $-$62:32:13.5 & NH$_{3}$ (1,1) & 10 & -44.77 & 0.07 & 1.92 \\ G305.4840+00.2248 & 13:13:36.05 & $-$62:32:13.5 & NH$_{3}$ (2,2) & 11 & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} & \multicolumn{1}{c}{$\cdots$} \\ G305.5393+00.3394 & 13:13:59.52 & $-$62:25:05.5 & NH$_{3}$ (1,1) & 13 & -35.13 & 0.70 & 8.40 \\ G305.5393+00.3394 & 13:13:59.52 & $-$62:25:05.5 & NH$_{3}$ (2,2) & 13 & -35.17 & 0.05 & 2.94 \\ G305.5516+00.0149 & 13:14:20.94 & $-$62:44:26.4 & $^{13}$CO (J=1-0) & 90 & -31.06 & 1.46 & 1.91 \\ G305.5516+00.0149 & 13:14:20.94 & $-$62:44:26.4 & $^{13}$CO (J=1-0) & 90 & -38.88 & 6.30 & 3.46 \\ G305.5610+00.0124 & 13:14:25.82 & $-$62:44:30.8 & NH$_{3}$ (1,1) & 14 & -39.13 & 0.09 & 3.63 \\ G305.5610+00.0124 & 13:14:25.82 & $-$62:44:30.8 & NH$_{3}$ (2,2) & 15 & -38.42 & 0.06 & 3.11 \\ \hline \end{tabular} \end{minipage} \end{center} Notes: A small portion of the data is provided here, the full table is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/MNRAS/. \end{table*} \setlength{\tabcolsep}{6pt} The observations described in this section were made with the 22-m Mopra radio telescope, which is located near Coonabarabran, New South Wales, Australia.\footnote{The Mopra radio telescope is part of the Australia Telescope National Facility which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.} The Mopra beam size is approximately 2.5\arcmin\ for the ammonia observations and 36\mbox{$^{\prime\prime}$}\ and 40\mbox{$^{\prime\prime}$}\ for the CO and CS observations, respectively. The CS\,(2-1) observations were made with Mopra's older cryogenically-cooled ($\sim$4\,K) Superconductor-Insulator-Superconductor (SIS) junction mixer, with a frequency range between 85-116\,GHz. The receiver backend was a digital autocorrelator capable of providing two simultaneous outputs with an instantaneous bandwidth between 4-256\,MHz. The CS observations were made using a bandwidth of 64\,MHz with a 1024-channel digital autocorrelator. This provided a frequency resolution of 62.5\,kHz and a velocity resolution of $\sim$0.2\,km\,s$^{-1}$\ at the CS transitions rest frequency (97.98\,GHz). The UNSW Mopra spectrometer (MOPS)\footnote{The University of New South Wales Digital Filter Bank used for the observations with the Mopra Telescope was provided with support from the Australian Research Council.} was commissioned in October 2005 and consists of four 2.2\,GHz bands that overlap slightly to provide a total of 8\,GHz continuous bandwidth. Up to four zoom windows can be placed within each 2.2\,GHz band allowing up to 16 spectral windows to be observed simultaneously, each providing a bandwidth of 137\,MHz with 4096 channels. MOPS was used for the NH$_3$ (1,1) \& (2,2) inversion transitions, and the \ensuremath{^{13}}CO\ (1-0) observations described in the following two subsections. \subsubsection{CS and NH$_3$ observations} NH$_3$\ and CS observations were made towards sources previously observed in the $^{13}$CO transition as part of our initial programme of follow-up observations. 
As previously mentioned, multiple velocity components were seen along the lines of sight towards many of these sources, and further observations of higher-density tracers were required to identify the velocity component associated with the IR continuum source. The Mopra telescope was used to follow up a large number of sources using CS (2-1) and the two lower-excitation inversion transitions of NH$_3$. All of these transitions have high critical densities ($\sim$10$^4$--$10^5$\,cm$^{-3}$) and are therefore excellent tracers of the high-density molecular gas associated with high-mass star-forming regions. The CS observations were made towards 127 sources in September 2004 and August 2005 (project reference: M121). The signal-to-noise ratio was improved by tuning both polarisations to the CS frequency. System temperatures were typically $\sim$200\,K. The NH$_3$ observations were made in April and September 2008 towards 499 RMS sources using the 12-mm receiver and MOPS (project reference: M270; \citealt{m270}). The observations covered a frequency range of 16 to 27.5\,GHz. Each of the 16 zoom windows was centred on the rest frequency of a known transition, providing a total velocity coverage of $\sim$2000\,km\,s$^{-1}$\ with a resolution of $\sim$0.4\,km\,s$^{-1}$\ per channel. System temperatures were between 65 and 100\,K. A detailed description of these observations can be found in \citet{urquhart2009_h2o}. Although these observations included a large number of transitions, we only present the NH$_3$ (1,1) and (2,2) transitions here; however, we have made the data for all of the other transitions available in the RMS database. \subsubsection{CO observations} The RMS source classification criteria have evolved over time as the data obtained from the various programmes of follow-up observations have been analysed (we refer the reader to \citealt{lumsden2013} for a detailed description of the classification criteria). This has resulted in a number of sources that were initially rejected being reclassified as either a YSO or an H~{\sc ii}\ region. As these reintroduced sources were not included in our initial sample, no molecular-line observations had previously been made towards them. We made observations of the CO (1-0) transitions towards 192 of the reintroduced RMS sources in March 2012 using the MOPS instrument (project reference: M573; \citealt{m573}). Three zoom windows were used to cover the $^{12}$CO, $^{13}$CO and C$^{18}$O transitions. However, the optically thick $^{12}$CO emission generally shows complex emission profiles, while C$^{18}$O is generally only detected towards a small number of sources. We therefore only present the plots of the $^{13}$CO emission profiles and the parameters of fits to the emission features seen in these spectra. The average system temperature was $\sim$360\,K at 110\,GHz. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/G305.3676+00.2095_13CO_Mopra2012.eps} \includegraphics[width=0.49\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/G305.4840+00.2248_CS_Mopra2005.eps} \includegraphics[width=0.49\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/G305.4748-00.0961_NH3.eps} \caption{\label{fig:example_spectra} Example spectra obtained from the additional Mopra observations. In the upper and middle panels we present emission in the $^{13}$CO (1-0) and CS transitions towards example RMS sources. The lower panel shows example NH$_3$ (1,1) and (2,2) inversion transitions, with the latter offset by $-$0.2\,K to avoid confusion.
The hyperfine structure can be clearly seen in the NH$_3$ (1,1) emission profile. In all of these plots the black and red lines show the observed spectra and the model fits to the data, respectively. All spectra have a velocity resolution of $\sim$0.4\,km\,s$^{-1}$.} \end{center} \end{figure} \subsection{Observational procedures and data reduction} All molecular-line observations were performed in position-switching mode, with typical on-source integration times of $\sim$10, 30 and 40 minutes for the $^{13}$CO, NH$_3$ and CS transitions, respectively. The on-source integration time was split into individual one-minute on- and off-source scans. Reference positions were offset from the source positions by 1 degree in a direction perpendicular to the Galactic plane, and were chosen to avoid contamination from emission at a similar velocity at the reference position. Weather conditions were stable over the short time periods required to complete the observations of each source, with system temperatures varying by no more than approximately 10\,per\,cent. The telescope pointing was checked every 1-2 hours by observing a strong nearby SiO maser (\citealt{indermuehle2013}) and the average pointing accuracy was found to be better than 10\mbox{$^{\prime\prime}$}\ r.m.s. A measurement of an ambient load (assumed to be at 290\,K) was made before and after each $^{13}$CO and CS observation following the method of \citet{kutner1981} to put the measured antenna temperatures onto the standard $T^*_{\rm{A}}$ scale, correcting for atmospheric absorption, ohmic losses and rearward spillover and scattering. This correction is not required for observations below 70\,GHz, as the atmosphere is significantly less variable at these frequencies and absorption from water vapour is less of a problem. The observations of all three transitions were reduced using the ATNF Spectral Analysis Package (ASAP). Individual on-off scans were processed to remove sky emission and visually inspected to remove poor scans. A low-order polynomial baseline was fitted and subtracted, after which the individual scans were averaged together to produce a single spectrum for each source. The CO and CS spectra have been Hanning smoothed to a velocity resolution of $\sim$0.4\,km\,s$^{-1}$\ to obtain final sensitivities of $\sim$70 and 35\,mK\,channel$^{-1}$\,beam$^{-1}$, respectively. The reduced spectra were converted to the telescope-independent main-beam temperature scale ($T_{\rm{mb}}$), assuming main-beam efficiencies ($\eta_{\rm{mb}}$) of 0.65 \citep{urquhart2010a}, 0.60 and 0.55 \citep{ladd2005} for the NH$_3$, CS and CO observations, respectively. In Fig.\,\ref{fig:example_spectra} we present an example of the emission detected from each of the three molecules observed. Gaussian profiles were fitted to the observed emission features in the CO and CS data using the spectral line analysis package \textsc{XS} written by Per Bergman.\footnote{Available from the Onsala Space Observatory at http://www.chalmers.se/rss/oso-en/observations/data-reduction-software.} Where necessary, a higher-order polynomial was fitted to the emission-free parts of the spectrum and subtracted to correct the baseline before the Gaussian profiles were fitted. As can be seen in the upper and middle panels of Fig.\,\ref{fig:example_spectra}, the Gaussian profile provided a good fit to the data in the majority of cases. \setlength{\tabcolsep}{6pt} \begin{table*} \begin{center} \caption{Parameters of the 25 most luminous star-forming complexes.
The positions and velocities of these complexes have been determined from the mean of all associated RMS sources, and as such are only approximate values. The luminosities of each complex (given in Col.\,9) have been determined from the integrated bolometric luminosities of their embedded YSO and H~{\sc ii}\ region populations. The contribution each complex makes to the total embedded massive star population (Col.\,10) is estimated by dividing the complex's luminosity by the estimate of total Galactic MYSO and embedded H~{\sc ii}\ region luminosity ($L_{\rm{Galaxy}}=0.76\times 10^8$\,\ensuremath{\mathrm{L}_\odot}; see Sect.\,4.1 for details).} \label{tbl:complex_parameters} \begin{minipage}{\linewidth} \begin{tabular}{l.........} \hline \hline \multirow{2}{24mm}{Complex Name} & \multicolumn{1}{c}{$\ell$} &\multicolumn{1}{c}{$b$} & \multicolumn{1}{c}{$V_{\rm{LSR}}$} & \multicolumn{1}{c}{RMS Members} & \multicolumn{1}{c}{Distance} & \multicolumn{1}{c}{z} & \multicolumn{1}{c}{$R_{\rm{GC}}$}&\multicolumn{1}{c}{$L_{\rm{bol}}$} &\multicolumn{1}{c}{$L_{\rm{bol}}$/$L_{\rm{Galaxy}}$} \\ & \multicolumn{1}{c}{(\degr)} & \multicolumn{1}{c}{(\degr)} &\multicolumn{1}{c}{(km\,s$^{-1}$)} & \multicolumn{1}{c}{(\#)} & \multicolumn{1}{c}{(kpc)} &\multicolumn{1}{c}{(pc)} &\multicolumn{1}{c}{(kpc)} & \multicolumn{1}{c}{(Log[\ensuremath{\mathrm{L}_\odot}])}& \multicolumn{1}{c}{(\%)} \\ \hline W51\,(A\&B) & 49.362 & -0.330 & 60.47 & 24 & 5.4 & -31.1 & 6.45 & 6.67 & 6.82 \\ NGC3603 & 291.598 & -0.491 & 12.79 & 4 & 7.0 & -59.6 & 8.78 & 6.61 & 5.95 \\ W49A & 43.145 & -0.009 & 11.14 & 10 & 11.1 & -1.7 & 7.61 & 6.56 & 5.23 \\ RCW\,106 (G333) & 333.037 & -0.320 & -51.65 & 33 & 3.6 & -20.1 & 5.54 & 6.28 & 2.74 \\ G338.398+00.164 & 338.404 & 0.120 & -35.74 & 14 & 12.8 & 28.9 & 5.82 & 6.21 & 2.36 \\ GAL331.03$-$00.15 & 330.960 & -0.185 & -91.54 & 2 & 9.6 & -30.9 & 4.64 & 6.04 & 1.57 \\ G305 & 305.506 & 0.085 & -36.32 & 25 & 4.0 & 5.8 & 6.98 & 5.94 & 1.26 \\ G282.0$-$1.2 & 281.881 & -1.605 & -6.10 & 11 & 7.0 & -195.4 & 9.82 & 5.93 & 1.24 \\ W43 & 30.861 & -0.023 & 93.73 & 30 & 5.1 & -1.8 & 4.99 & 5.93 & 1.23 \\ AGAL032.797+00.191 & 32.796 & 0.198 & 14.99 & 2 & 12.9 & 44.5 & 7.37 & 5.89 & 1.12 \\ W47 & 37.601 & -0.223 & 53.48 & 6 & 9.9 & -38.3 & 6.07 & 5.88 & 1.10 \\ RCW42 & 273.982 & -1.191 & 37.52 & 5 & 5.5 & -114.6 & 9.81 & 5.86 & 1.05 \\ AGAL045.121+00.131 & 45.115 & 0.131 & 58.83 & 2 & 4.4 & 10.1 & 6.23 & 5.82 & 0.96 \\ W3 & 133.797 & 1.155 & -43.42 & 9 & 2.0 & 39.3 & 9.95 & 5.80 & 0.92 \\ AGAL319.399$-$00.012 & 319.390 & -0.007 & -13.50 & 4 & 11.6 & -1.5 & 7.57 & 5.69 & 0.70 \\ Far 3$-$kpc arm (south) & 348.854 & -0.015 & 13.30 & 8 & 11.3 & -2.8 & 3.36 & 5.67 & 0.67 \\ Near 3$-$kpc tangent & 336.880 & 0.011 & -123.24 & 10 & 7.7 & 1.6 & 3.34 & 5.56 & 0.53 \\ AGAL020.081$-$00.136 & 20.076 & -0.139 & 41.80 & 2 & 12.6 & -30.5 & 5.44 & 5.55 & 0.51 \\ GAL331.5$-$00.1 & 331.553 & -0.088 & -86.67 & 9 & 4.9 & -7.7 & 4.81 & 5.52 & 0.48 \\ RCW116B & 345.091 & 1.543 & -14.77 & 14 & 2.4 & 64.1 & 6.23 & 5.43 & 0.39 \\ NGC7538 & 111.626 & 0.761 & -55.48 & 9 & 2.6 & 35.2 & 9.79 & 5.43 & 0.38 \\ Cygnus-X & 79.538 & 0.942 & 1.66 & 74 & 1.4 & 23.0 & 8.36 & 5.40 & 0.36 \\ Gum\,50 & 328.567 & -0.533 & -46.57 & 1 & 3.0 & -28.3 & 6.12 & 5.38 & 0.35 \\ G010.960+00.017 & 10.962 & 0.015 & 20.29 & 2 & 8.1 & 2.8 & 5.76 & 5.36 & 0.33 \\ GAL336.40$-$00.23 & 336.470 & -0.232 & -86.11 & 5 & 10.5 & -42.4 & 4.34 & 5.30 & 0.29 \\ \hline \end{tabular} \end{minipage} \end{center} Notes: A small portion of the data is provided here, the full table is 
only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/MNRAS/. \end{table*} \setlength{\tabcolsep}{6pt} The NH$_3$ (1,1) transition has hyperfine structure and consists of 18 components, which need to be fitted simultaneously in order to derive the optical depth and line width (see the lower panel of Fig.\,\ref{fig:example_spectra} for an example). The fitting of these data has been done in the IDL environment using the MPFIT routine.\footnote{MPFIT is part of a suite of routines written by Craig B. Markwardt and made available to the general public (http://www.physics.wisc.edu/$\sim$craigm/idl/idl.html).} The hyperfine structure is generally too weak to be observed in the NH$_3$ (2,2) transition, and so these detections have been fitted with Gaussian profiles, as have the weaker NH$_3$ (1,1) lines where the hyperfine components are not detected. In all cases the NH$_3$ (1,1) and (2,2) line widths have been obtained from fits to the respective main line emission, in order to remove the line broadening caused by optical depth effects. The radial velocities, main-beam temperatures and full-width at half-maximum (FWHM) line widths obtained from these fits are presented for all detected transitions in Table\,\ref{tbl:addition_obs}. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/rms_cluster_density_histogram.eps} \caption{\label{fig:cluster_member_distribution} Number of sources associated with each complex. The names of the complexes with the largest number of RMS associations are given.} \end{center} \end{figure} \subsection{Association of sources with known star-forming regions} \label{sect:complexes} Examination of $Spitzer$ mid-infrared images obtained as part of the GLIMPSE legacy project reveals that many RMS sources are associated with large star-forming complexes. In some cases, a single RMS source is associated with a previously identified complex but, in the majority of cases, we find multiple sources positionally coincident with a given complex. To confirm such associations, we compare their radial velocities and require that these agree to within $\sim$10\,km\,s$^{-1}$. Since many complexes are well known (such as W31, M16, M17, etc.) and have been the focus of detailed studies, reliable distances can often be found in the literature. Care must, however, be exercised when comparing different distance determinations (for example for Westerlund\,2, \citealt{dame2007}, and W51, \citealt{clark2009}); we have only adopted literature distances in cases where the different determinations are broadly consistent. \begin{figure*} \begin{center} \includegraphics[angle=90,width=0.32\textwidth, trim= 30 0 0 10]{PAPER_PLOTS/clusters_lv_north_oct13.eps} \includegraphics[angle=90,width=0.32\textwidth, trim= 30 0 0 10]{PAPER_PLOTS/clusters_lv_south_oct13.eps} \caption{\label{fig:lv_distribution} Left and right panels show the longitude and velocity distribution of the RMS sources in the Northern and Southern Galactic plane, respectively. The complexes are shown as open red circles, the sizes of which give an indication of the source density of each complex, while the blue filled circles show the positions of the rest of the sample. The greyscale image shows the distribution of molecular gas as traced by the integrated $^{12}$CO emission (\citealt{dame2001}) for comparison.
The locations of the spiral and local arms are shown as curved solid lines, coloured to identify the individual arms. The positions of the four main spiral arms and the local arm have been taken from the model of \citet{tayor1993}, as updated by \citet{cordes2004}, while the position of the near 3-kpc arm has been taken from \citet{bronfman2000}.} \end{center} \end{figure*} We have used visual inspection of the mid-infrared images combined with velocity information to identify all of these small groups of sources and, where possible, have associated these groups with a known star-forming complex. This has resulted in $\sim$600 sources being associated with approximately 120 known star-forming regions/complexes and accounts for $\sim$40\,per\,cent of the RMS sample of YSOs and H~{\sc ii}\ regions. Fig.\,\ref{fig:cluster_member_distribution} shows the number of RMS sources associated with each of the 117 complexes identified, and Table\,\ref{tbl:complex_parameters} lists the complex parameters. The vast majority of these complexes are relatively small, consisting of no more than a handful of individual massive star-forming regions. However, given that each one of these regions is probably forming a stellar cluster, these complexes should be viewed as active star-forming regions. If it is typical for a given star-forming complex to have a small number of localised regions of active star formation, then complexes with significantly larger numbers of RMS sources stand out as perhaps being outside the norm. There are only six complexes that have more than 20 members: these are W51, Cygnus-X, W43, G305, RCW\,106 and the Vela molecular ridge, which are among the best-studied star-formation regions in the Galaxy (e.g., \citealt{parsons2012}, \citealt{csengeri2011}, \citealt{motte2003}, \citealt{hindson2010}, \citealt{bains2006} and \citealt{hill2012}, respectively). We note, however, that although Cygnus-X and the Vela molecular ridge are associated with the largest numbers of RMS sources, these are both relatively nearby (1.4 and 0.7\,kpc, respectively) and so their RMS sources are not particularly luminous. Consequently, neither of these star-forming complexes features in our list of the most luminous complexes presented in Table\,\ref{tbl:complex_parameters}. There is therefore a distance bias in the number of sources assigned to each complex, and care needs to be taken when drawing conclusions from the source counts alone. By grouping sources together and identifying their associated star-forming complexes, we have reduced the number of distances we need to find by $\sim$500; however, this still leaves some 1100 distances to be determined. The methods used to determine these distances are discussed in Sect.\,\ref{sect:distances}. \subsection{Longitude-velocity distribution} Fig.\,\ref{fig:lv_distribution} shows the distribution of the complexes and individual embedded sources as a function of Galactic longitude and radial velocity. The left and right panels show the longitudinal distribution of the radial velocity (plotted over the integrated $^{12}$CO (1-0) emission mapped by \citealt{dame2001}) for the northern and southern Galactic plane, respectively. These longitude-velocity ($\ell$-$v$) plots also show the proposed positions of the arms in the four-arm spiral model of \citet{tayor1993} (updated by \citealt{cordes2004}), which itself is based on the earlier model of \citet{georgelin1976}.
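The arm loci plotted in Fig.\,\ref{fig:lv_distribution} are obtained by projecting each arm's Galactocentric positions into ($\ell$, $v_{\rm{LSR}}$) space. As a simple illustration (this is not the code used to produce the figure, which relies on the \citet{tayor1993} and \citet{cordes2004} arm definitions), the sketch below computes the radial velocity of material at Galactocentric radius $R_{\rm{GC}}$ seen at longitude $\ell$, assuming a flat rotation curve with the \citet{reid2009} parameters adopted in Sect.\,\ref{sect:distances} ($\theta_0 = 254$\,km\,s$^{-1}$, $R_0 = 8.4$\,kpc).

\begin{verbatim}
import math

THETA0 = 254.0   # circular rotation speed (km/s)
R0 = 8.4         # Sun-Galactic centre distance (kpc)

def vlsr(ell_deg, r_gc):
    # Radial (LSR) velocity of material on a circular orbit of radius
    # r_gc (kpc) seen at Galactic longitude ell_deg, for a flat
    # rotation curve: v = (omega - omega0) * R0 * sin(l).
    ell = math.radians(ell_deg)
    return (THETA0 / r_gc - THETA0 / R0) * R0 * math.sin(ell)

# Example: the l-v locus of a circular ring at R_GC = 4.5 kpc (a crude
# stand-in for an arm segment), visible only at |l| < asin(R_GC/R0).
l_max = math.degrees(math.asin(4.5 / R0))
for ell in range(5, int(l_max) + 1, 5):
    print("l = %3d deg: v_LSR = %6.1f km/s" % (ell, vlsr(ell, 4.5)))
\end{verbatim}

Note that the near and far intersections of a given line of sight with such a ring share the same radial velocity; this is precisely the origin of the kinematic distance ambiguity discussed in Sect.\,\ref{sect:distances}.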
The figure reveals a strong correlation between the distributions of molecular gas and RMS sources within the inner Galaxy (i.e., $|\ell |<60$\degr). Outside of this region, the correlation is significantly weaker. The positions of the modelled spiral arms appear to be strongly correlated with the distribution of our sample of YSOs and H~{\sc ii}\ regions, particularly in the outer Galaxy, where the correlation with the molecular gas is significantly poorer. Since the spiral arms in the outer Galaxy are the most distant, the poorer correlation between the RMS sources and the molecular gas there is likely to be due to the sensitivity of the CO survey, with the emission from the more distant molecular clouds being diluted in the 8\arcmin\ beam used by the Dame et al. survey (cf. Urquhart et al. 2013b). Previous CO, H~{\sc ii}\ region, and mid-infrared surveys have identified the spiral-arm line-of-sight tangents (e.g., \citealt{grabelsky1987,bronfman1988, alvarez1990,caswell1987,benjamin2008}). In the northern Galactic plane, tangent positions have been reported at $\ell\simeq$ 30\degr\ and 47\degr, which correspond to the Scutum-Centaurus\ and Sagittarius\ spiral arms. Inspection of the distribution of RMS sources in the northern Galactic plane reveals high densities towards the longitudes and velocities of these two tangents, associated with the W43 and W51 star-forming complexes. Similarly, we find high RMS source densities towards the longitudes and velocities of the Sagittarius, Scutum-Centaurus, Norma and near 3-kpc arms ($\sim$283\degr, 308\degr, 328\degr\ and 337\degr; \citealt{bronfman2000}). We find that the G305 and G282.0$-$1.2 star-forming complexes are associated with the Scutum-Centaurus\ and Sagittarius\ tangents, respectively. The source counts are significantly lower and broadly flat outside the spiral-arm tangents (i.e., $60\degr<\ell < 280\degr$), with only two notable exceptions: these are the Cygnus-X region ($\ell \sim$80\degr; \citealt{reipurth2008}) and the Vela molecular ridge ($\ell \sim$268\degr; \citealt{netterfield2009}). Both of these regions are likely to be associated with the local arm and are therefore relatively nearby (the two regions are labelled in the plots presented in Fig.\,\ref{fig:lv_distribution}). Incidentally, these two regions are also associated with the two highest densities of RMS sources; however, as mentioned in the previous section, most of these sources are not very luminous. \section{Distances and luminosities} \label{sect:distances} \subsection{Distance determination} \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/G328.9842-00.4361_sgps.ps} \includegraphics[width=0.49\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/G330.6605+00.5788_sgps.ps} \includegraphics[width=0.49\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/G328.1639+00.5869_sgps.ps} \caption{\label{fig:HISA_examples} Examples of SGPS H~{\sc i}\ spectra towards embedded RMS sources located within the Solar circle for which a reliable distance was not available in the literature. The source velocity ($v_{\rm{s}}$) and the velocity of the tangent point ($v_{\rm{t}}$) are shown by the red and blue vertical lines, respectively. The grey vertical band covers the velocity region 10\,km\,s$^{-1}$\ either side of the tangent velocity and is provided as an easy way to identify sources placed at the tangent position.
In the upper, middle and lower panels we present examples of sources located at the near, far and tangent positions, respectively.} \end{center} \end{figure} Distances are critical to estimating source luminosities, which allow us to discriminate between genuinely massive stars and nearby lower-mass stars and to determine their Galactic distribution. Deriving distances to our sample of MYSOs and H~{\sc ii}\ regions has been an ongoing process, with results having previously been presented for incomplete samples located primarily in the first and fourth quadrants (\citealt{urquhart2011a} and \citealt{urquhart2012}, respectively). Maser parallax measurements are the most reliable method of determining distances, and these are becoming available for an increasing number of star-forming regions (i.e., \citealt{reid2009}). At present, they tend to be localised to the relatively nearby parts of the first and second quadrants of the Galaxy, and only provide distances to a small fraction of our sample (no more than a few per\,cent). Spectrophotometric measurements of H~{\sc ii}-region-exciting stars or associated clusters are the next most reliable method; however, these can produce incorrect distances if the star is incorrectly typed (see \citealt{clark2009} for an example). We have conducted a comprehensive review of the literature and have adopted both parallax and spectrophotometric distances where available, and will continue to update source distances as and when new measurements become available. For the remaining sources we have estimated kinematic distances using the Galactic rotation model derived by \citet[][circular rotation speed $\theta_0= 254$\,km\,s$^{-1}$\ and distance to Galactic centre $R_0 = 8.4$\,kpc]{reid2009} and the radial velocities discussed in the previous section. Kinematic distances have associated uncertainties of order $\pm$1\,kpc (allowing for a $\pm$7\,km\,s$^{-1}$\ error in the velocity due to streaming motions). We also note that kinematic distances have been found to deviate significantly from parallax and spectroscopically derived distances in some cases (e.g., \citealt{moises2011}). Although these distance anomalies can dramatically affect the derived properties of individual sources, they will not impact the results drawn from the whole sample. Of greater concern is that within the Solar circle there are two solutions for each radial velocity (the \textit{kinematic distance ambiguity}, or KDA). These distances are equally spaced on either side of the tangent position, and are generally referred to as the \textit{near} and \textit{far} distances. Since the majority of our sources are located within the Solar circle, this distance ambiguity affects $\sim$80\,per\,cent of our sample.
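To make the geometry of the ambiguity concrete, the sketch below inverts a flat rotation curve with the \citet{reid2009} parameters to recover $R_{\rm{GC}}$ and the near/far heliocentric distances from an observed ($\ell$, $v_{\rm{LSR}}$) pair. It is a simplified stand-in for the full rotation model: small differences from the tabulated distances are expected, since the published values include refinements (for example the adopted solar motion) that are not modelled here.

\begin{verbatim}
import math

THETA0 = 254.0   # circular rotation speed (km/s)
R0 = 8.4         # Sun-Galactic centre distance (kpc)

def kinematic_distances(ell_deg, v_lsr):
    # Return (R_gc, d_near, d_far) in kpc for a source at longitude
    # ell_deg (degrees) with LSR velocity v_lsr (km/s), assuming a
    # flat rotation curve; requires |sin(l)| not too small.
    ell = math.radians(ell_deg)
    # Invert v = R0*sin(l)*(theta0/R - theta0/R0) for R:
    r_gc = THETA0 / (v_lsr / (R0 * math.sin(ell)) + THETA0 / R0)
    # Line-of-sight distances: d = R0*cos(l) +/- sqrt(R^2 - R0^2 sin^2 l)
    disc = r_gc ** 2 - (R0 * math.sin(ell)) ** 2
    if disc < 0:
        # velocity beyond the tangent value: place at the tangent point
        return r_gc, R0 * math.cos(ell), R0 * math.cos(ell)
    root = math.sqrt(disc)
    return r_gc, R0 * math.cos(ell) - root, R0 * math.cos(ell) + root

# Fourth-quadrant example, comparable to G328.5487+00.2717 in
# Table 3 (l = 328.5 deg, v_lsr = -60.5 km/s):
r, dn, df = kinematic_distances(328.5, -60.5)
print("R_GC = %.1f kpc, near = %.1f kpc, far = %.1f kpc" % (r, dn, df))
\end{verbatim}

For sources within the Solar circle ($R_{\rm{GC}} < R_0$) both roots are positive, which is exactly the near/far degeneracy that the following analysis sets out to resolve.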
\setlength{\tabcolsep}{6pt} \begin{table*} \begin{center} \caption{Results of the H~{\sc i}\ self-absorption analysis.} \label{tbl:hisa_results} \begin{minipage}{\linewidth} \begin{tabular}{l.....c...} \hline \hline \multirow{2}{24mm}{MSX Name} & \multicolumn{1}{c}{$\ell$} &\multicolumn{1}{c}{$b$} & \multicolumn{1}{c}{$V_{\rm{LSR}}$} & \multicolumn{1}{c}{Near}& \multicolumn{1}{c}{Far} & \multirow{2}{6mm}{KDS$^a$} & \multicolumn{1}{c}{Distance} & \multicolumn{1}{c}{$R_{\rm{GC}}$}&\multicolumn{1}{c}{$z$} \\ & \multicolumn{1}{c}{(\degr)} & \multicolumn{1}{c}{(\degr)} &\multicolumn{1}{c}{(km\,s$^{-1}$)} & \multicolumn{1}{c}{(kpc)} & \multicolumn{1}{c}{(kpc)} & \multicolumn{1}{c}{} &\multicolumn{1}{c}{(kpc) } &\multicolumn{1}{c}{(kpc)} & \multicolumn{1}{c}{(pc)} \\ \hline G327.9205+00.0921 & 327.9210 & 0.0921 & -49.5 & 3.2 & 11.1 & F & 11.1 & 5.9 & 17.8 \\ G328.5487+00.2717 & 328.5490 & 0.2717 & -60.5 & 3.7 & 10.6 & N & 3.7 & 5.6 & 17.6 \\ G327.8097$-$00.6339 & 327.8100 & -0.6339 & -46.7 & 3.0 & 11.2 & N & 3.0 & 6.1 & -33.6 \\ G328.2275$-$00.2714 & 328.2280 & -0.2714 & -99.5 & 5.8 & 8.5 & F & 7.1 & 4.4 & -33.8 \\ G328.9480+00.5709 & 328.9480 & 0.5709 & -93.9 & 5.3 & 9.0 & F & 7.2 & 4.3 & 71.7 \\ G328.9580+00.5671 & 328.9580 & 0.5671 & -93.5 & 5.3 & 9.1 & F & 7.2 & 4.3 & 71.2 \\ G329.2713+00.1147 & 329.2710 & 0.1147 & -76.9 & 4.5 & 9.9 & N & 4.5 & 5.1 & 9.0 \\ G329.3371+00.1469 & 329.3370 & 0.1469 & -107.1 & 6.1 & 8.3 & F & 7.2 & 4.3 & 18.5 \\ G329.4720+00.2143 & 329.4720 & 0.2143 & -101.5 & 5.7 & 8.7 & F & 7.2 & 4.3 & 27.1 \\ G329.4579+00.1724 & 329.4580 & 0.1724 & -103.2 & 5.8 & 8.6 & F & 7.2 & 4.3 & 21.8 \\ G328.9842$-$00.4361 & 328.9840 & -0.4361 & -81.9 & 4.7 & 9.7 & N & 4.7 & 5.0 & -36.1 \\ G329.6098+00.1139 & 329.6100 & 0.1139 & -63.9 & 3.9 & 10.6 & N? & 3.9 & 5.4 & 7.7 \\ G329.4211$-$00.1631 & 329.4210 & -0.1631 & -76.8 & 4.5 & 10.0 & N & 4.5 & 5.1 & -12.8 \\ G329.8145+00.1411 & 329.8150 & 0.1411 & -85.0 & 4.9 & 9.7 & N & 4.9 & 4.8 & 12.0 \\ G329.3402$-$00.6436 & 329.3400 & -0.6435 & -74.2 & 4.4 & 10.1 & F & 10.1 & 5.2 & -113.2 \\ G330.6605+00.5788 & 330.6600 & 0.5788 & -75.7 & 4.4 & 10.2 & F & 10.2 & 5.0 & 103.2 \\ G330.2923+00.0010 & 330.2920 & 0.0010 & -64.3 & 3.9 & 10.7 & N? & 3.9 & 5.4 & 0.1 \\ G331.0890+00.0163 & 331.0890 & 0.0163 & -95.6 & 5.3 & 9.4 & N & 5.3 & 4.5 & 1.5 \\ G331.0931$-$00.1303 & 331.0930 & -0.1303 & -64.0 & 3.9 & 10.8 & F? & 10.8 & 5.3 & -24.6 \\ G330.9288$-$00.4070 & 330.9290 & -0.4070 & -41.6 & 2.8 & 11.9 & F & 11.9 & 6.1 & -84.2 \\ \hline \end{tabular} \footnotetext[1]{Kinematic Distance Solution. `N' indicates that the source was determined to be at the \textit{near} location, while an `F' reflects the \textit{far} position. Where a distance solution is considered less reliable we append '?' to the given allocation.} \end{minipage} \end{center} Notes: A small portion of the data is provided here, the full table is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/MNRAS/. \end{table*} \setlength{\tabcolsep}{6pt} We reduce the number of ambiguities that need to be resolved by applying two initial cuts. First, we place sources with velocities within 10\,km\,s$^{-1}$\ of the tangent velocity at the tangent distance, since the error in the distance is comparable to the difference between the near/far distance and the tangent distance. 
Second, we place sources at the near distance if a far allocation would result in an unrealistically large displacement from the Galactic mid-plane for a star formation region (i.e., $z > 120$\,pc; 4\,$\times$ the scaleheight for O-type stars; \citealt{reed2000}). These two steps reduce the number of KDAs that need to be resolved by $\sim$100; however, the vast majority still need to be addressed. There are a number of ways to resolve these kinematic distance ambiguities: H~{\sc i}\ absorption against a continuum \citep[e.g.,][]{fish2003, kolpak2003,anderson2009a,roman2009,urquhart2012}, H~{\sc i}\ self-absorption \citep[e.g.,][]{jackson2003,roman2009,green2011b}, and matching sources with infrared dark clouds \citep[e.g.,][]{dunham2011b, ellsworth2013}. These techniques are well documented in the literature and so will not be described here in detail. Many of these studies have sources in common with our sample, and we have compared the positions and velocities (i.e., $\ell bv$ parameter space) of these and adopted the given distance solution where a match is found. In cases where the distance solutions differ we have made an independent assessment of the available data to determine the most likely distance and have favoured the allocation based on the most reliable data (i.e., the highest resolution and/or data with the highest signal-to-noise ratio). In cases where there is no clear distinction we do not resolve the ambiguity (but may do so in the future as more data become available). Since all of the bright H~{\sc ii}\ regions have been included in previous studies using the H~{\sc i}\ absorption against a continuum method, the $\sim$200 sources for which the ambiguity has not been resolved are either radio-quiet or the radio emission is too weak for absorption to be detected in the H~{\sc i}\ surveys available. We have therefore extracted H~{\sc i}\ spectra from the Southern Galactic Plane Survey (SGPS; \citealt{mcclure2005}) and the VLA Galactic Plane Survey (VGPS; \citealt{stil2006}) for all these remaining objects. These have been inspected for absorption coincident with the source velocity, which indicates the near distance; otherwise, the far distance is considered more likely. Additionally, we inspected the GLIMPSE and MIPSGAL images to correlate sources located at the near distance with IRDCs in an effort to confirm the H~{\sc i}\ results. If a correlation was found, the near distance was assigned; if no correlation was found, no distance was assigned. Fig.\,\ref{fig:HISA_examples} shows some examples of these distance allocations and in Table\,\ref{tbl:hisa_results} we present the distance solutions for all sources examined. Using a combination of parallax and spectroscopic distances found in the literature, and KDA solutions drawn both from the literature and derived from our own analysis, we are able to assign distances to $\sim$1650 of the embedded RMS sources. We have been unable to assign a distance to approximately 100 sources; however, this corresponds to only $\sim$7\,per\,cent of the embedded RMS population. \subsection{Bolometric luminosities and survey completeness} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/rms_luminosity_distance_distribution.eps} \caption{\label{fig:rms_luminosity} The luminosity distribution as a function of heliocentric distance. The dark line and light grey shaded region indicate the limiting sensitivity of the MSX 21\,$\umu$m\ band and its associated uncertainty.
Where multiple sources have been identified within the MSX beam, the luminosity has been apportioned, resulting in some sources being located below the sensitivity limit.} \end{center} \end{figure} Bolometric luminosities have been estimated for the majority of sources using the distances discussed in the previous section and the integrated fluxes determined from model fits to the spectral energy distributions (SEDs) presented by \citet{mottram2011b}. For very bright sources and sources located in complicated regions, it is not always possible to obtain a sufficient number of photometric points to adequately constrain the SED; this is primarily due to saturation of the mid- and far-infrared images and difficulties subtracting the diffuse background emission (\citealt{mottram2010}). In these cases the luminosities have been estimated by simply scaling the MSX 21\,$\umu$m flux (see \citealt{mottram2011a} for details). The RMS luminosities have been previously reported by \citet{mottram2011b}, who used a less complete RMS subset to investigate the luminosity functions of MYSOs and H~{\sc ii}\ regions and determine statistical lifetimes of these stages. Rather than reproducing this work, we aim simply to use the bolometric luminosities to estimate the survey's completeness threshold for the full sample of embedded RMS sources and to investigate the distribution of massive stars across the Galaxy.\footnote{The luminosities presented by \citet{mottram2011b} were estimated using kinematic distances derived from the \citet{brand1993} rotation curve and so have been rescaled to the kinematic distances derived from the \citet{reid2009} rotation model.} In Fig.\,\ref{fig:rms_luminosity} we show the distribution of YSO and H~{\sc ii}\ region bolometric luminosities as a function of heliocentric distance. This plot suggests that the sample is complete to young embedded sources with bolometric luminosities over $\sim$$2\times10^4$\,\ensuremath{\mathrm{L}_\odot}\ to a distance of $\sim$18\,kpc. While the bolometric luminosities for individual sources can be found in \citet{lumsden2013}, we present the total bolometric luminosities of star forming complexes in Table\,\ref{tbl:complex_parameters}. These have been estimated by integrating the bolometric luminosities of the embedded massive stellar population associated with each complex. \begin{figure*} \begin{center} \includegraphics[width=.9\textwidth, trim= 50 0 50 0]{PAPER_PLOTS/milky_way_distribution_everything_reid_22may13.eps} \caption{\label{fig:galactic_mass_radius_distribution}Galactic distribution of all MYSOs and H~{\sc ii}\ regions with bolometric luminosities greater than $10^4$\,\ensuremath{\mathrm{L}_\odot}. We show the kinematic positions of the complexes and individual sources as red and blue circles, respectively. The sizes of the markers give an indication of their luminosity, as depicted in the upper right corner. The background image is a sketch of the Galaxy produced by Robert Hurt of the Spitzer Science Center in consultation with Robert Benjamin. The position of the Sun is shown by the small circle above the Galactic centre. The two solid black lines enclose the Galactic centre region that was excluded from the RMS surveys due to problems with source confusion and distance determination.
The smaller of the two black dot-dashed circles represents the locus of tangent points, while the larger circle shows the radius of the Solar circle.} \end{center} \end{figure*} \section{Galactic Structure} \label{sect:gal_structure} With the distances in hand we are in a position to examine the 3-dimensional distribution of this sample of young massive stars. In Fig.\,\ref{fig:galactic_mass_radius_distribution} we present a top-down view of the Milky Way showing the distribution of MYSOs and H~{\sc ii}\ regions with respect to the known large-scale structures of the Galaxy. The background image shown in this figure is an artist's conceptual image of what the Galaxy might look like if viewed from above, and incorporates all that is currently known of the structure of our Galaxy, including the 3.1-3.5\,kpc Galactic bar at an angle of 20\degr\ with respect to the Galactic centre-Sun axis (\citealt{binney1991, blitz1991, dwek1995}), a second non-axisymmetric structure referred to as the ``Long bar'' (\citealt{hammersley2000}) with a Galactic radius of 4.4$\pm$0.5\,kpc at an angle of 44$\pm$10\degr\ (\citealt{benjamin2008}), the Near and Far 3-kpc arms, and the four principal arms (i.e., the Norma, Sagittarius, Perseus and Scutum-Centaurus arms). The location of the spiral arms is based on the \citet{georgelin1976} model but has been modified to take account of recent maser parallax distances and updated directions for the spiral arm tangents (\citealt{dame2001}). The RMS survey excluded the innermost 20\degr\ of Galactic longitude (i.e., 350\degr $< \ell <$ 10\degr): we are therefore not sensitive to any star formation taking place within $\sim$3\,kpc of the Galactic centre or to any features of Galactic structure within this radius. The distribution of the young embedded population with respect to the expected positions of the spiral arms given by the models suggests that they are correlated, but quantifying this correlation and estimating its significance is non-trivial. The correlation between the spiral arms and the RMS complexes appears slightly stronger than for the isolated sources, in that far fewer complexes are found to be located in the inter-arm regions (cf. \citealt{stark2006}). The reason for this is that the more reliable maser parallax and spectrophotometric distances are available for many of the complexes, and, for the others, the higher number of associated RMS sources is likely to provide better estimates of their systemic radial velocities and thus kinematic distances (cf. \citealt{russeil2003}). Individual sources are by comparison more poorly constrained. The source density of the inner parts of the spiral arms ($R_{\rm{GC}} < 10$\,kpc) appears to be roughly uniform, which suggests a similar level of star formation is taking place within them. This is in contrast with the result of near- and mid-infrared source counts presented by \citet{benjamin2008}. These authors found enhancements of the source counts towards the Scutum and Centaurus tangents, but not towards the Norma or Sagittarius tangents. This led them to speculate that the Galaxy has two principal arms (i.e., the Perseus and Scutum-Centaurus arms), with the Norma and Sagittarius arms perhaps being optically visible arms that are not associated with any enhancement in the old stellar disk. The emphasised Perseus and Scutum-Centaurus arms portrayed in the background image used in Fig.\,\ref{fig:galactic_mass_radius_distribution} reflect this \citep{benjamin2008,churchwell2009}.
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/rgc_cdf_diagram.eps} \caption{\label{fig:rgc_cdf_diagram} Cumulative distribution function of the RMS source counts (red) and integrated bolometric luminosities (blue) of all MYSOs and H~{\sc ii}\ regions above $2\times10^4$\ensuremath{\mathrm{L}_\odot}\ as a function of Galactocentric radius. The shaded region shows the Galactocentric distance region excluded from the RMS survey, while the vertical dashed line indicates the distance of the Solar circle.} \end{center} \end{figure} Comparison of the number of northern and southern RMS sources above the completeness limit shows a similar number of sources (i.e., $47\pm4.6$\,per\,cent and $53\pm4.0$\,per\,cent for the northern and southern Galactic plane, respectively). We also find similar proportions of the northern and southern samples inside and outside the Solar circle ($\sim$75\,per\,cent and $\sim$25\,per\,cent, respectively). The proportional distribution of the RMS sources between the northern and southern Galactic plane, and inside and outside the Solar circle, is similar to that reported by \citet{bronfman2000} from a study of $\sim$750 $IRAS$-selected candidate UC\,H~{\sc ii}\ regions (67\,per\,cent and 33\,per\,cent inside and outside the Solar circle). If we look at the integrated bolometric luminosities of the sources inside and outside the Solar circle, we also find the same fractional distribution as the source counts; this is nicely illustrated in the cumulative distribution function of both of these parameters in Fig.\,\ref{fig:rgc_cdf_diagram}. The fraction of the total bolometric luminosity of sources within the Solar circle is slightly lower than the value of 81\,per\,cent reported by \citet{bronfman2000}; however, it is still consistent within the errors. The similarity of the distributions of the total luminosity and source counts both inside and outside of the Solar circle would suggest that the mean source luminosity is also likely to be broadly similar. Furthermore, given that the proportion of molecular gas inside and outside the Solar circle is very similar to the luminosity and source counts, it follows that the average star formation efficiencies are also comparable. This is consistent with the results of \citet{snell2002} that the star formation efficiency for the population of molecular clouds in the outer Galaxy (as estimated from the clouds' $L_{\mathrm{FIR}}/M$ ratio) is similar to that found for the inner Galaxy population. The clouds in the outer Galaxy are just as active in forming massive stars, despite the fact that the clouds are typically 1-2 orders of magnitude less massive than those in the inner Galaxy. The difference in star formation rates between the inner and outer regions, then, is principally a function of the molecular cloud formation rate, not of star formation \textit{within} the molecular clouds. Several factors are thought to contribute to the comparative lack of molecular clouds in the outer Galaxy, including lower metallicity, gas density, and ambient pressure \citep{leroy2008,snell2002,schruba2011}.
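The cumulative fractions quoted above follow directly from ordering the sample by Galactocentric radius. A minimal sketch of the construction of Fig.\,\ref{fig:rgc_cdf_diagram} (in Python; the function name is ours) is:
\begin{verbatim}
import numpy as np

def cdf_by_radius(r_gc, l_bol):
    """Cumulative fractions of source counts and integrated bolometric
    luminosity as functions of Galactocentric radius [kpc]."""
    order = np.argsort(r_gc)
    r = np.asarray(r_gc)[order]
    lum = np.asarray(l_bol)[order]
    count_cdf = np.arange(1, r.size + 1) / r.size
    lum_cdf = np.cumsum(lum) / np.sum(lum)
    return r, count_cdf, lum_cdf
\end{verbatim}
Reading both curves at the Solar circle radius gives the inside/outside fractions discussed above.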
\subsection{Surface density of massive star formation} \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/rgc_surface_density_distribution_histogram.eps} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/rgc_luminosity_surface_density_distribution_histogram.eps} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/rgc_surface_density_distribution_histogram_north.eps} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/rgc_luminosity_surface_density_distribution_histogram_north.eps} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/rgc_surface_density_distribution_histogram_south.eps} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/rgc_luminosity_surface_density_distribution_histogram_south.eps} \caption{\label{fig:rms_rgc_distribution} Source and bolometric luminosity surface density distributions of the population of RMS sources above the sample completeness limit ($2\times 10^4$\,\ensuremath{\mathrm{L}_\odot}). The upper panels show the distribution of the whole sample, while the middle and lower panels show the distributions of the northern and southern samples, respectively. The bin size is 0.5\,kpc and the uncertainties in the left panels are derived from Poisson\ statistics, while those in the right panels are estimated assuming a 40\,per\,cent error in the derived source luminosities. The blue circles and connecting dashed blue line indicate the H$_2$ gas surface density taken from \citet[][see right-hand $y$-axis in upper panel for values]{nakanishi2006}. The light blue vertical lines indicate the errors associated with the H$_2$ gas surface density points.} \end{center} \end{figure*} In the upper panels of Fig.\,\ref{fig:rms_rgc_distribution} we present the massive star formation and bolometric luminosity surface density distributions as a function of Galactic radius. These plots include only sources with luminosities above the completeness threshold. However, sources above this limit dominate the total luminosity, and inclusion of the lower luminosity sources has very little impact on the measured values or the overall structure of the distribution. These distributions have been produced in the standard way by dividing the total number of sources in each annulus by its area. We have also applied a heliocentric radius limit of 17\,kpc to avoid including areas that are effectively outside the region to which the RMS survey is sensitive, and have excluded the area of the wedge towards the inner Galaxy not covered by the RMS survey (i.e., $350\degr < \ell < 10\degr$). The overall distributions are both highly structured, showing numerous peaks, many at similar radii. In the middle and lower panels of this figure we separate these distributions into the northern and southern populations, respectively, and identify the spiral arms that are coincident with the observed peaks in the first and fourth quadrants. \subsubsection{Source counts} Peaks are seen in the RMS source distribution for the whole sample at $\sim$3, 5, 6 and 10\,kpc. The 5 and 6\,kpc peaks are coincident with the segments of the Scutum-Centaurus\ and Sagittarius\ arms located in the northern Galactic plane, but no peaks are found towards the Perseus\ and Norma\ arms located in this part of the Galaxy.
Both of these arms, however, extend into the outer Galaxy and are spread over a larger range of Galactic radii than the other arms (6-10\,kpc and 8-13\,kpc for the Perseus\ and Norma\ arms, respectively), thus smearing their Galactocentric distribution. All four distinct peaks seen in the upper left panel of Fig.\,\ref{fig:rms_rgc_distribution} (i.e., $\sim$3, 5, 6 and 10\,kpc) are seen in the southern Galactic plane distribution; these can be attributed to the Near and Far 3-kpc arms, the Norma, Scutum-Centaurus\ and Sagittarius\ arms, respectively. All of the peaks located within the Solar circle have been previously reported for RMS subsamples located in the first and fourth quadrants (i.e., \citealt{urquhart2011a} and \citealt{urquhart2012}, respectively). The overall distribution, as well as that of the northern and southern Galactic plane samples, is also very similar to those presented by \citet{bronfman2000}. Similar features have also been reported in the Galactocentric distribution of methanol masers \citep{green2011b}, H~{\sc ii}\ regions \citep{anderson2009a,paladini2004}, thermal dust emission \citep{dunham2011b} and molecular gas \citep{rathborne2009}. The correlation of these observed peaks in the Galactocentric distribution, and between the radial velocities and Galactic longitudes of the RMS sources and the spiral arm models (i.e., Fig.\,\ref{fig:lv_distribution}), is consistent with four-armed models of the Galaxy (e.g., \citealt{georgelin1976, nakanishi2006}). \begin{figure*} \begin{center} \includegraphics[width=0.99\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/long_vlsr_z_distribution_histogram.eps} \caption{\label{fig:l_z_distribution} Distance from the Galactic mid-plane ($Z{\rm{o}}$) for the embedded RMS source population is shown as a function of Galactic longitude. The complexes are shown as open circles (larger symbol sizes indicate a greater number of members), while individual sources are shown as filled circles. The symbol colours give the velocity of the source or the systemic velocity of the complex; see colour bar for values. The black dashed curve is the result of a third-order polynomial fitted to the source density and thus traces the mean source latitude; this is provided to emphasise the shape of the Galactic warp.} \end{center} \end{figure*} We include the H$_2$ gas surface density derived from CO observations (\citealt{nakanishi2006}) in Fig.\,\ref{fig:rms_rgc_distribution} for comparison. The molecular gas surface density peaks at $\sim$5\,kpc before decreasing exponentially with a scale length of $\sim$2.5\,kpc (as determined by \citealt{nakanishi2006}). It is interesting to note that the source densities for the parts of the spiral arms inside the Solar circle are very similar (i.e., the peaks are approximately the same height, $\sim$5-6 sources\,kpc$^{-2}$), implying that the star formation rate per unit area is fairly constant. The source distribution is broadly similar to that of the molecular gas, with the peaks in the source distribution identifying localised regions where the SFE and/or SFR have been enhanced. The massive SFR surface density is therefore correlated with the molecular gas surface density. This is consistent with the SFR derived by \citet{misiriotis2006} from an analysis of COBE/DIRBE (1.2, 2.2, 60, 100, 140, 240\,$\umu$m) and COBE/FIRAS (100-1000\,$\umu$m) maps of the Galactic disk.
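For concreteness, the annulus binning used to construct Fig.\,\ref{fig:rms_rgc_distribution} can be sketched as follows (Python; the function name and the caller-supplied coverage correction \texttt{f\_area} are our own illustrative assumptions, not code from the survey pipeline):
\begin{verbatim}
import numpy as np

def surface_density(r_gc, bin_width=0.5, r_max=15.0, f_area=None):
    """Source surface density versus Galactocentric radius: counts per
    annulus divided by the surveyed annulus area. f_area(R), if given,
    is the fraction of each annulus actually covered (approximating
    the 17 kpc heliocentric limit and the excluded inner wedge)."""
    edges = np.arange(0.0, r_max + bin_width, bin_width)
    counts, _ = np.histogram(r_gc, bins=edges)
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)   # kpc^2
    if f_area is not None:
        centres = 0.5 * (edges[:-1] + edges[1:])
        area = area * f_area(centres)
    sigma = counts / area                 # sources kpc^-2
    sigma_err = np.sqrt(counts) / area    # Poisson uncertainties
    return edges, sigma, sigma_err
\end{verbatim}
The luminosity surface density is obtained in the same way, with the counts replaced by the summed bolometric luminosity in each annulus.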
Observations of nearby spiral galaxies (SINGS; \citealt{leroy2008}) found them to have a similar structure to that of the Milky Way, with an approximately flat SFE towards their interiors, which decreases exponentially at large radii as the H~{\sc i}/H$_2$ ratio increases. They infer that the SFRs do not appear to be directly sensitive to environmental conditions within giant molecular clouds (GMCs), but that the formation of GMCs is dependent on local conditions. This would suggest that the spiral structures do not directly influence the SFR but are localised regions where the formation of GMCs is significantly more efficient. While the correlation between the SFR and molecular gas surface densities is good over the whole range of available Galactocentric distances, the peak in the SFR density at $\sim$10\,kpc, associated with the fourth-quadrant section of the Sagittarius\ arm, may be of particular note. The SFR surface density here is perhaps only half that of the segments of the spiral arms located within the Solar circle, but is projected against a much lower gas surface density, which may indicate that this spiral arm has a significantly enhanced SFR per unit gas mass beyond what is observed in the other spiral arms. The final feature to note is the minimum seen in the source density plots at approximately 9\,kpc. This minimum in the source counts changes by one bin between the distributions of the northern and southern samples and so is somewhat washed out in the distribution of the whole sample presented in the upper left panel of Fig.\,\ref{fig:rms_rgc_distribution}. This minimum roughly coincides with a similar feature identified by \citet{lepine2011}, which they associated with a step down in metallicity and attributed to a kinematic barrier at co-rotation through which the gas cannot easily pass. Given the difficulties in assigning accurate distances to objects near the Solar circle, which have LSR velocities near zero, it would be speculative to draw conclusions regarding this feature at this stage, so we simply note the coincidence. \subsubsection{Bolometric luminosity} There are six peaks present in the luminosity distribution, four of which coincide with the peaks seen in the source count distribution (i.e., $\sim$3, 5, 6 and 10\,kpc). Although the SFR surface densities are similar for the segments of the spiral arms located within the Solar circle, we find that the luminosity surface densities increase with increasing distance from the Galactic centre. As already mentioned, the overall SFR is similar for most of the arms within the Solar circle, so this increase in luminosity per unit area is likely to be due to an increase in the mean luminosity of the embedded stars. This would imply that the SFE is somehow enhanced in the outer parts of the spiral arms or that it is suppressed in the inner regions, perhaps by the interaction of the gas with the Galactic bar. We estimate the total luminosity of the RMS embedded MYSO and H~{\sc ii}\ region populations by multiplying the luminosity in each bin by the area of the bin annulus and summing all of these together to obtain a value of $0.76\times10^8$\,\ensuremath{\mathrm{L}_\odot}. In this estimate we include all RMS sources, including those below the completeness limit, in order to obtain a value that can be compared to that determined by \citet{bronfman2000} from their $IRAS$-selected sample.
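This integration is a one-line computation given the binned luminosity surface densities from the sketch above (again Python; the function name is ours):
\begin{verbatim}
import numpy as np

def total_luminosity(edges, sigma_lum):
    """edges: radial bin edges [kpc]; sigma_lum: bolometric luminosity
    surface density per bin [Lsun kpc^-2]. Multiply by the annulus
    areas and sum to obtain the Galaxy-wide total."""
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)
    return float(np.sum(sigma_lum * area))
\end{verbatim}
Applied to the full sample, this integration yields the $0.76\times10^8$\,\ensuremath{\mathrm{L}_\odot}\ quoted above.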
Our value is approximately half the value determined by \citet{bronfman2000}; however, their fluxes were drawn from the $IRAS$ point source catalogue, whose large beam sizes (i.e., $\sim$5\arcmin\ at 100\,$\umu$m) would have likely caused an overestimate of the total luminosity. It is also possible that their sample includes a significant number of more evolved H~{\sc ii}\ regions. The luminosity of the embedded massive star population represents only a few per\,cent of the total Galactic far-infrared luminosity (i.e., $\sim2\times10^9$\,\ensuremath{\mathrm{L}_\odot}; \citealt[][$COBE$ DIRBE]{sodroski1997}; \citealt[][$IRAS$]{bloemen1990}), which would point to a relatively short embedded lifetime. We use the total Galactic luminosity of the embedded massive star population to estimate the fraction contributed by each of the massive complexes discussed in Sect.\,\ref{sect:complexes}. This value is given in the final column of Table\,\ref{tbl:complex_parameters}. The full version of this table is only available in electronic form; however, the portion presented here lists the 25 most luminous complexes in the Galaxy. It is interesting to note that the 10 most luminous complexes contain only 8\,per\,cent of the embedded MYSO and H~{\sc ii}\ region population, but contribute approximately 30\,per\,cent of the total luminosity of the Galactic embedded massive star population. A recent study of the WMAP free-free foreground emission maps \citep{murray2010} found that the 18 most luminous Galactic star forming regions are responsible for the production of over half of the total ionising luminosity of the Galaxy. This would suggest that the formation of early O-type stars is concentrated into a relatively small number of regions in the Galaxy. Furthermore, this suggests that the initial mass function (IMF) is not constant on all scales and supports the cluster-based IMF model (e.g., \citealt{kroupa2003}). Some of these complexes are clearly very luminous and contribute significantly to some of the peaks seen in Fig.\,\ref{fig:rms_rgc_distribution}. For example, the increased luminosity surface density seen in the northern distribution at 6\,kpc is partly due to the presence of the W51 star forming complex in this bin, which is the most active region in the Galaxy ($\sim$7\,per\,cent of the total RMS luminosity; see Table\,\ref{tbl:complex_parameters}). The additional peaks found at $\sim$7, 8 and 9\,kpc can be attributed to the G305, W49 and NGC\,3603 star forming complexes, respectively; three more of the most active regions in the Galaxy. \subsubsection{Sagittarius\ arm} We note that the Sagittarius\ arm is a prominent feature in all of the plots presented in Fig.\,\ref{fig:rms_rgc_distribution}, much more strongly detected and clearly defined in both hemispheres than, e.g., the Perseus arm. The Sagittarius\ arm also shows up in the scale-height distribution presented in the lower panel of Fig.\,\ref{fig:z_rgc_distribution}. The peak seen at approximately 6.5\,kpc is only a feature of the northern Galactic plane sample and, from the middle panels presented in Fig.\,\ref{fig:rms_rgc_distribution}, the bin corresponding to this Galactocentric distance is coincident with the Sagittarius\ arm. This spiral arm therefore has a larger scaleheight than the other segments of the spiral arms located within the Solar circle. This prominence suggests that, as traced by massive star formation, the Sagittarius\ arm is a major feature of the Galaxy, rather than the minor arm implied by the sketch in Fig.\,\ref{fig:galactic_mass_radius_distribution}.
This is contrary to the conclusion of \citet{benjamin2008}, although the tracers are quite different: the GLIMPSE data trace an old stellar population, so the implication may be that the star formation history of the arms is periodic or variable. \subsection{Galactic mid-plane and scaleheight distributions} In Fig.\,\ref{fig:l_z_distribution} we show the latitude distribution of the whole RMS sample with respect to the Galactic mid-plane ($b=0$), as a function of Galactic longitude and radial velocity. This plot reveals that sources occupy a narrow range of distances ($|z|<100$\,pc) within the inner part of the Galactic plane (i.e., 300\degr $<\ell<$ 60\degr) and flare to significantly more positive and negative distances (up to 600\,pc in either direction) in the second and third quadrants, respectively. These deviations from the mid-plane follow the structure of the outer Galaxy, which is known to be significantly warped (\citealt{oort1958}). This is not the first example of infrared sources tracing the Galactic warp, which was previously reported by \citet[][see their Fig.\,3]{wouterloot1990}, but it provides a nice confirmation of their results. The velocities of the sources associated with these large excursions from the mid-plane are reasonably coherent and tend towards the more extreme ends of the velocity distributions. Comparison with spiral-arm velocities (cf. Fig.\,\ref{fig:lv_distribution}) illustrates a correspondence with the Outer arm in the second quadrant and the Perseus arm in the third quadrant. The Perseus arm is also seen in the second quadrant, but lies close to the Galactic mid-plane with perhaps a small excursion to negative distances between $\ell=90$\degr\ and 120\degr. This corresponds to the part of the plane where the largest excursion to positive distances is seen towards the Outer arm. The observed lower source density associated with the third-quadrant part of the Perseus arm compared to the Outer arm is likely due to the larger heliocentric distances in this part of the Galaxy. There are also regions where the velocities are correlated within the inner Galaxy, and these can be matched to the expected spiral-arm tangents: Scutum-Centaurus, Sagittarius, Perseus, Norma, and Scutum-Centaurus at $\ell=$ 20\degr, 30\degr, 340\degr, 330\degr, 300\degr, respectively. We also note that nearly all of the complexes identified are located towards the Galactic mid-plane. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/z_distribution_histogram.eps} \caption{\label{fig:z_distribution} Galactic latitude distribution of the embedded RMS population. The grey histogram shows the distribution of the whole sample while the purple hatched histogram shows the subsample of sources above the completeness limit. The bin size is 20\,pc and the errors are determined from Poisson\ statistics.} \end{center} \end{figure} In Fig.\,\ref{fig:z_distribution} we show the latitude distribution of the whole sample as well as the subsample of sources with luminosities above $2\times10^4$\,\ensuremath{\mathrm{L}_\odot}; scale heights obtained from fits to both samples agree within the errors, with 37.7$\pm$0.8\,pc and 39.5$\pm$1.7\,pc, respectively. This suggests that the star-formation scale height is relatively independent of luminosity over the range of luminosities covered by the RMS survey (i.e., $\sim$10$^2$-10$^6$\,\ensuremath{\mathrm{L}_\odot}). We also find the scale height is similar for the northern and southern Galactic-plane subsamples.
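A minimal sketch of this scale-height estimate (Python; the function name, starting values and use of \texttt{scipy.optimize.curve\_fit} are our own assumptions about an otherwise standard procedure) fits an exponential profile $N(z) = N_0\exp(-|z-z_0|/h)$ to the binned $z$ distribution:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_scale_height(z, bin_width=20.0):
    """Fit N(z) = N0 exp(-|z - z0| / h) to the histogram of mid-plane
    distances z [pc]; returns the scale height h and its 1-sigma
    uncertainty."""
    z = np.asarray(z, dtype=float)
    edges = np.arange(z.min(), z.max() + bin_width, bin_width)
    counts, _ = np.histogram(z, bins=edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    model = lambda zz, n0, z0, h: n0 * np.exp(-np.abs(zz - z0) / h)
    p0 = (counts.max(), float(np.median(z)), 30.0)
    popt, pcov = curve_fit(model, centres, counts, p0=p0)
    return popt[2], float(np.sqrt(pcov[2, 2]))
\end{verbatim}
The returned $h$ and its uncertainty correspond to the scale heights quoted above.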
We note that these scale heights are significantly larger than those derived from the ATLASGAL-CORNISH sample of UC\,H~{\sc ii}\ regions and their star-forming clumps ($\sim$22.4$\pm$1.9 and 28.1$\pm$2.6\,pc, respectively; Urquhart et al. 2013), from methanol masers (27$\pm$1\,pc; \citealt{green2011b}) identified by the MMB survey (\citealt{caswell2010c}), or indeed from the subsample of RMS sources located within the GRS region (i.e., 30.2$\pm$2.8\,pc; \citealt{urquhart2011a}). However, most of these surveys have focused on sources located primarily within the inner Galaxy, and so the larger scale height obtained for the \textit{full} RMS sample is simply a consequence of including the outer Galaxy sources, which have a significantly larger distribution around the mid-plane (e.g., see Fig.\,\ref{fig:l_z_distribution}). \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/z0_rgc_distribution_line_plot.eps} \includegraphics[width=0.45\textwidth, trim= 0 0 0 0]{PAPER_PLOTS/z_rgc_distribution_line_plot.eps} \caption{\label{fig:z_rgc_distribution} The offset of the mean RMS source position from the mid-plane and the scale height as functions of Galactocentric radius are shown in the upper and lower panels, respectively. The northern and southern Galactic plane subsamples are shown in red and blue in the upper panel, while in the lower panel we show only the distribution of the whole sample in purple.} \end{center} \end{figure} In Fig.\,\ref{fig:z_rgc_distribution} we present the offset of the Galactic disk from the mid-plane and measurements of the scale height as a function of Galactocentric radius. The upper panel shows the distribution of the mean southern and northern subsample positions about the mid-plane separately to illustrate the effects of the Galactic warp. The lower panel shows the absolute mean height from the mid-plane for the entire sample: the deflection of the southern plane below the mid-plane is similar to that of the northern plane above it. The samples for each of these plots have been divided into eight bins containing equal numbers of sources. The distance from the mid-plane is determined from the average over each bin, and the error is the standard error on the mean. The scale height and its associated error are determined from an exponential fit to the histogram of the data in each bin. The Galactocentric radius is simply the mean radius of the binned data, and the error is the standard deviation of this value. These plots show that both the northern and southern samples are tightly associated with the Galactic mid-plane for radii $<$7\,kpc, after which they begin to diverge. The Galactic scale height is also seen to increase modestly with increasing Galactocentric distance, from $\sim$20\,pc at a radius of 4\,kpc to $\sim$30\,pc at 8\,kpc. At larger distances this slope increases very rapidly, reaching values of almost 200\,pc at a Galactocentric distance of $\sim$11\,kpc. We find no significant difference between the northern and southern samples. The increase in scale height with increasing Galactocentric radius is very closely matched by the height of the molecular H$_2$ layer, both in magnitude and slope (e.g., \citealt{nakanishi2006,wouterloot1990,bronfman1988, grabelsky1987}).
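The equal-count binning behind Fig.\,\ref{fig:z_rgc_distribution} can be sketched as follows (Python; names ours; the per-bin scale height would come from an exponential fit as in the previous sketch):
\begin{verbatim}
import numpy as np

def binned_midplane_offset(r_gc, z, nbins=8):
    """Split the sample, ordered by Galactocentric radius, into nbins
    bins with equal numbers of sources; return per bin the mean radius,
    the mean mid-plane offset Zo, and its standard error."""
    r, zz = np.asarray(r_gc), np.asarray(z)
    rows = []
    for idx in np.array_split(np.argsort(r), nbins):
        rows.append((r[idx].mean(),
                     zz[idx].mean(),
                     zz[idx].std() / np.sqrt(idx.size)))
    return rows
\end{verbatim}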
Similar increases in scale height have been reported from H~{\sc i}\ studies of the Milky Way (\citealt{malhotra1995}) and nearby edge-on spiral galaxies (\citealt{rupen1991}), although in these cases the H~{\sc i}\ disks are significantly thicker and increase much more quickly (100 to 220\,pc between 3 and 8\,kpc; \citealt{malhotra1995}). \citet{lin1964} predicted that the influence of spiral-arm shocks is significantly weaker in the outer Galaxy beyond the corotation radius ($R_{\rm{GC}}\sim 8$\,kpc). This would lead to less confined, smoother spiral features outside this radius, with supernovae probably playing a more important role than the spiral structure in determining the state of the ISM \citep{dibs2009}. Our RMS sources are young enough that they are still embedded in their natal molecular clouds and have not been dispersed by the Galactic gravitational potential, and are thus a good probe of changes in the molecular ISM. The relatively sharp change in the distribution and the increase in the scaleheight of RMS sources outside the co-rotation radius would therefore support the idea of the ISM being generally more flocculent, dominated by supernovae and less confined in the outer Galaxy. \setlength{\tabcolsep}{6pt} \begin{table} \begin{center} \caption{Changes in scale height over the disk.} \label{tbl:z_rgc} \begin{minipage}{\linewidth} \begin{tabular}{.....} \hline \hline \multicolumn{1}{c}{$R_{\rm{GC}}$} & \multicolumn{1}{c}{$Z{\rm{o}}$}&\multicolumn{1}{c}{$\Delta$$Z{\rm{o}}$} &\multicolumn{1}{c}{Scaleheight} &\multicolumn{1}{c}{$\Delta$Scaleheight} \\ \multicolumn{1}{c}{(kpc)} & \multicolumn{1}{c}{(pc)}&\multicolumn{1}{c}{(pc)} &\multicolumn{1}{c}{(pc)}&\multicolumn{1}{c}{(pc)} \\ \hline 4.15 &-1.14&2.04 &22.32&1.49\\ 5.27 &-7.26&2.63 &23.03&2.00\\ 5.99 &-11.43&2.97 &31.57&3.35\\ 6.55 &-1.01&3.07 &51.01&12.07\\ 7.37 &6.37&3.44 &31.93&3.00\\ 8.40 &3.53&2.85 &31.21&2.99\\ 9.59 &-53.40&7.30 &113.89&23.79\\ 11.29 &55.01&12.71 &176.82&24.82\\ \hline \end{tabular} \end{minipage} \end{center} \end{table} \setlength{\tabcolsep}{6pt} \section{Summary and conclusions} \label{sect:summary_conclusions} The Red MSX Source (RMS) survey is a Galaxy-wide mid-infrared-selected sample of $\sim$1750 MYSOs and UC\,H~{\sc ii}\ regions. In this paper we summarise the results of our previous molecular line follow-up observations, and present additional observations towards $\sim$800 RMS sources. Observations of the $^{13}$CO (1-0) transition were made towards sources not previously observed, and the higher density tracers CS and NH$_3$ were used to identify the correct radial velocity for sources towards which multiple velocity components had been detected. Combining these results with those from our previous follow-up campaign of observations and with archival line surveys, we have obtained radial velocities for all but a handful of sources. Use of the positions and radial velocities of our sources, combined with inspection of mid-infrared images, has allowed us to group sources into $\sim$120 star forming complexes. Approximately one third of the RMS sample is associated with these complexes. We have conducted an in-depth literature search to identify sources/complexes for which a reliable distance has previously been determined.
Where a distance was not available we have used the radial velocities and the Galactic rotation curve derived by \citet{reid2009} to calculate kinematic distances, and have used H~{\sc i}\ self-absorption to resolve any outstanding kinematic distance ambiguities for sources within the Solar circle ($\sim$200 sources). We have obtained distances to $\sim$1650 H~{\sc ii}\ regions and YSOs, which corresponds to more than 90\,per\,cent of the embedded RMS population. These distances are used to estimate bolometric luminosities using model fits to the spectral energy distributions previously derived by \citet{mottram2011a}. We find we are complete above $2\times10^4$\,\ensuremath{\mathrm{L}_\odot}\ to a heliocentric distance of $\sim$18\,kpc. We use this sample to investigate the Galactic distribution of massive stars with respect to the position of the spiral arms, the Galactic long bar, the Galactic warp and the flaring of the molecular disk. Finally, we compare the distribution of massive stars in our Galaxy with other nearby spiral galaxies. Our main findings are as follows: \begin{enumerate} \item We observe a high surface density of MYSOs and H~{\sc ii}\ regions within the Solar circle and a decaying exponential at larger distances from the Galactic centre, consistent with other measurements of the star formation rate determined from infrared and submillimetre studies. Variation in the star formation rate and efficiency within the Solar circle appears to be attributable to specific large-scale features of Galactic structure, such as the spiral arms and the ends of the bar. \item We have compared the distribution of the RMS sources with the expected positions of the spiral arms using longitude-velocity diagrams, in three dimensions and as a function of Galactocentric radius, and have found them to be in good agreement. Our results are therefore consistent with a model of the Galaxy consisting of four major arms. \item The embedded source and H$_2$ gas surface densities have a very similar overall distribution. We find that the source surface densities associated with the segments of the spiral arms located within the Solar circle are very similar, which suggests that the massive star formation rate per unit molecular gas mass is approximately constant for the inner parts of the spiral arms. \item We also find that the luminosity surface density increases with increasing Galactocentric radius for the segments of the spiral arms located within the Solar circle. Given that the source surface densities are approximately the same for the spiral arms in this region, this indicates that the mean source luminosity is increasing with distance from the Galactic centre. This increase in mean source luminosity can be attributed to a very small number of extremely active star forming complexes (e.g., W51, W49, W43) where the star and/or clump formation efficiency is significantly enhanced. \item We find the scaleheight of massive star formation as measured from the whole sample is 39.5$\pm$1.7\,pc, which is larger than has been reported previously ($\sim$30\,pc). Most earlier studies have concentrated on inner Galaxy samples and exclude the significant flaring of the disk observed in the outer Galaxy. We measure the scaleheight as a function of Galactocentric distance and find that it increases only modestly between $\sim$4 and 8\,kpc (i.e., from $\sim$20-30\,pc), but much more rapidly at larger distances.
\item We estimate the total integrated bolometric luminosity of the embedded MYSO and H~{\sc ii}\ region population to be $\sim0.76\times 10^8$\,\ensuremath{\mathrm{L}_\odot}; however, we find that the ten most luminous complexes contribute almost 30\,per\,cent of the total integrated RMS bolometric luminosity while comprising only 8\,per\,cent of the sources. \end{enumerate} \section*{Acknowledgments} We would like to thank the referee for their informative comments and suggestions, which have improved this work. We would also like to extend thanks to Friedrich Wyrowski for reading and commenting on an earlier draft of the manuscript. The Mopra radio telescope is part of the Australia Telescope National Facility which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The University of New South Wales Digital Filter Bank used for the observations with the Mopra Telescope was provided with support from the Australian Research Council. We thank Dr. Mark Reid for providing the Fortran code used to estimate kinematic distances from their Galactic rotation curve. This research has made use of the SIMBAD database operated at CDS, Strasbourg, France and NASA's Astrophysics Data System Bibliographic Services. This work was partially funded by and carried out within the Collaborative Research Centre 956, sub-project A6, funded by the Deutsche Forschungsgemeinschaft (DFG). This paper made use of information from the Red MSX Source survey database at {\tt{http://rms.leeds.ac.uk/cgi-bin/public/RMS\_DATABASE.cgi}} which was constructed with support from the Science and Technology Facilities Council of the UK.
\section{Introduction} Mechanical systems of kinematically coupled structures, or multibody systems, have a variety of engineering applications including robotic manipulators, manufacturing machines with articulated components, bipedal walking and musculoskeletal system modeling \cite{Krishna89}. An interesting aspect of such systems is how they behave when certain degrees of freedom are left unactuated. In dealing with such underactuated mechanical systems, we often explore their fundamental properties. For instance, if the unactuated variables are cyclic (i.e. do not appear in the Lagrangian), then the momentum map associated with these variables will be conserved due to symmetry \cite{MaRa95}. One of the most celebrated examples of symmetry and conservation of momentum is the falling cat problem \cite{Kane69,Montgomery93,Mo06}: a cat always lands on its feet after being released upside down from rest, by deforming its body while keeping a zero angular momentum. There are variants of the falling cat such as Elroy's beanie \cite{MaMoRa90} and springboard divers \cite{Fr79}. Another popular example is the system of a human sitting on a rotating stool holding a wheel \cite{ChJe12}. Sitting on the rotating stool, the person spins the wheel by hand while holding it horizontally. A reaction torque is created that initiates a rotating motion of the stool in the opposite direction. As long as the wheel is rotating, the stool keeps rotating. After some time, if the person applies a braking force halting the wheel spin, then the stool will also stop. All of these phenomena follow from angular momentum conservation, which holds in the absence of external forces. If there is an external force, then in general the momentum is no longer conserved and the consequent motion of the system will differ from that in the case of momentum conservation. We are here mainly interested in the case where the external forces are linear in the velocity, i.e. viscous damping-like forces. This was well studied in \cite{ChJe12} for the case of one-dimensional symmetry, i.e., $S^1$ or $\mathbb R^1$ symmetry. According to \cite{ChJe12}, in the stool-wheel system with viscous damping friction on the rotation axis of the stool, the stool does not keep rotating but meets a bound in its motion even if the person on the stool keeps spinning the wheel. The larger the angular velocity of the wheel, the larger the angle through which the stool rotates, but in the end the stool meets a bound in its motion and cannot keep rotating, as long as the angular velocity of the wheel is bounded. This bound on the motion of the stool is called the damping-induced bound. Another phenomenon, which is perhaps more remarkable, occurs in this stool-wheel system with friction: after some time, if the person on the stool stops the spinning of the wheel, then the stool does not stop. Instead, it asymptotically converges back to its initial position, making the same number of net rotations in the opposite direction. This phenomenon is called damping-induced self-recovery. From this example, one can see that the viscous damping-like force plays the role of a restoring force, but one different from the spring force. To the knowledge of the authors, an example of damping-induced self-recovery and boundedness was first reported by Andy Ruina at a conference \cite{Ru10}, where he showed a video of his experiment and provided an intuitive proof of the phenomena.
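Both phenomena are easy to reproduce numerically. The following minimal sketch (the two-degree-of-freedom model, the parameter values and the explicit Euler integration are ours, for illustration only) simulates a stool with moment of inertia $I_1$ and a wheel with moment of inertia $I_2$ about the same axis, with constant damping $k$ on the stool axis; the damping-added momentum $(I_1+I_2)\dot x + I_2\dot y + kx$ is conserved, which reduces the stool dynamics to a first-order equation:
\begin{verbatim}
# x: stool angle (unactuated, damped); y: wheel angle (actuated).
# Starting at rest gives mu = k*x0 for the damping-added momentum.
I1, I2, k = 2.0, 1.0, 0.5
dt, T = 1e-3, 80.0
x, x0 = 0.0, 0.0

def ydot(t):
    return 5.0 if t < 20.0 else 0.0   # spin the wheel, then halt it

for step in range(int(T / dt)):
    t = step * dt
    # (I1+I2)*xdot + I2*ydot + k*x = k*x0, solved for xdot
    xdot = (k * (x0 - x) - I2 * ydot(t)) / (I1 + I2)
    x += xdot * dt

print(abs(x - x0) < 1e-2)   # stool has recovered: prints True
\end{verbatim}
While the wheel spins, the stool settles at the bounded offset $x_0 - I_2\dot y/k$; once the wheel is halted, $x$ relaxes exponentially back to $x_0$ with time constant $(I_1+I_2)/k$, which is precisely the self-recovery described above.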
The damping-induced self-recovery phenomenon, without damping-induced boundedness, is also explained in \cite{Gregg10} for the case where the damping coefficient is constant, but the proof therein is not complete. Both phenomena of damping-induced self-recovery and damping-induced boundedness were rigorously and completely proved for the first time in \cite{ChJe12}, where the damping coefficient is allowed to be a function of the cyclic variable and may take some negative values. The results in \cite{ChJe12} as well as \cite{Ru10,Gregg10} are for the case of one-dimensional symmetry. This paper generalizes the results in \cite{ChJe12} to the case of higher-dimensional Abelian symmetry. We consider mechanical systems with several unactuated cyclic variables that are subject to external forces linear in the velocity. The systems are allowed to have some actuated variables that are subject to control forces. We find conditions on the linear forces under which the damping-induced phenomena of self-recovery and boundedness occur for the cyclic variables. The analytical approach taken in this paper for the proof of these two phenomena is different from that in \cite{ChJe12}. The total order on $\mathbb R$ was implicitly used in \cite{ChJe12}, but in this paper we employ a Lyapunov-like function to prove the damping-induced self-recovery and boundedness for the multiple cyclic variables. We first characterize a class of damping-like forces that induce new conserved quantities, which we call in this paper the damping-added momenta, and then construct a Lyapunov-like function using the damping-added momenta to prove the existence of the phenomena of damping-induced self-recovery and boundedness. To illustrate the main results, we consider two nontrivial examples of multibody systems that possess multiple cyclic variables: the planar pendulum with gimbal actuators and the three-link planar manipulator on a horizontal plane. \section{Main Results}\label{sec:description} In this paper, the norm $\| \cdot \|$ denotes the Euclidean norm for vectors and the corresponding induced norm for matrices. For an invertible matrix $A = (a_{ij})$, the $(i,j)$-th entry of its inverse matrix is denoted by $a^{ij}$. For a symmetric matrix $A$ we denote its positive semi-definiteness by $A \succeq 0$. For two symmetric matrices $A$ and $B$ of the same size, $A \succeq B$ means $A-B \succeq 0$. \subsection{Equations of Motion and Damping-Added Momenta} Let $Q = Q_1 \times Q_2$ be an $n$-dimensional configuration space, where $Q_1 = \mathbb R^r$ and $Q_2$ is a smooth manifold of dimension $n-r$. If a unit circle $S^1$ appears as a factor of $Q_1$, it is replaced by $\mathbb R$, which does not impose any restrictions on the description of the dynamics. Let $q = ( x, y) \in \mathbb R^r \times Q_2$ denote coordinates for $Q$, where \[ x = (x^\alpha)= (x^1, \ldots, x^r), \quad y = (y^a)= (y^{r+1}, \ldots, y^n) \] and \[ q = (q^i) = (q^1, \ldots, q^r; q^{r +1}, \ldots, q^n) = (x^1, \ldots, x^r; y^{r +1}, \ldots, y^n). \] For notational convenience, the following three groups of indices are used in this paper: \[ \underbrace{ \underbrace{1, \ldots, r}_{ \alpha, \beta, \gamma, \ldots }; \underbrace{r +1, \ldots, n}_{ a, b, c, \ldots}}_{i,j,k,\ldots}.
\] Consider a mechanical system with the Lagrangian \begin{equation*}\label{def:Lag} L(q,\dot q) = \frac{1}{2}m_{\alpha \beta}\dot x^\alpha \dot x^\beta + m_{\alpha a} \dot x^\alpha \dot y^a + \frac{1}{2}m_{ab}\dot y^a \dot y^b - V(q) \end{equation*} which is the kinetic minus potential energy of the system. Here we follow the Einstein summation convention. It is assumed that the mass matrix \[ m = \begin{pmatrix} m_{\alpha \beta} & m_{\alpha b} \\ m_{a \beta} & m_{ab} \end{pmatrix} \] is symmetric and positive definite. We make the following four assumptions on the system: \begin{itemize} \item [A1)] The variables $x^\alpha$'s are cyclic \cite{MaRa95}, i.e. \begin{equation*}\label{e:cyclic variable} \frac{\partial L}{\partial x^\alpha}=0 \end{equation*} for all $\alpha = 1, \ldots, r$. \item [A2)] Controls $u_{a}$'s are given in the directions of $y^{a}$'s. \item [A3)] Each cyclic variable $x^\alpha$ is under a general damping-like force (i.e. linear in the velocity) described as $-k_{\alpha \beta}(x)\dot x^\beta$, where each coefficient $k_{\alpha \beta}(x)$ is a continuously differentiable function of $x$. \item [A4)] The coefficients $k_{\alpha\beta}$ satisfy \begin{equation}\label{sym:cond:k} k_{\alpha \beta} = k_{\beta\alpha} \end{equation} and \begin{equation}\label{int:cond:k} \frac{\partial k_{\alpha\beta}}{\partial x^\gamma} = \frac{\partial k_{\alpha\gamma}}{\partial x^\beta}. \end{equation} \end{itemize} By A1) -- A3), the equations of motion of the system are written as \begin{align} \frac{d}{dt}\frac{\partial L}{\partial \dot x^\alpha} &= - k_{\alpha \beta}\dot x^\beta, \quad \alpha = 1, \ldots, r \label{e:equation of motion:x}\\ \frac{d}{dt}\frac{\partial L}{\partial \dot y^a} - \frac{\partial L}{\partial y^a} &= u_a, \quad a = r+1, \ldots, n. \label{e:equation of motion:y} \end{align} In individual coordinates, \eqref{e:equation of motion:x} and \eqref{e:equation of motion:y} can be expressed as \begin{align} &m_{\alpha \beta}\ddot x^\beta + m_{\alpha b} \ddot y^b + [ij,\alpha]\dot q^i \dot q^j = -k_{\alpha \beta}\dot x^\beta\label{EL:coord:1}\\ &m_{a\beta}\ddot x^\beta + m_{ab} \ddot y^b + [ij,a]\dot q^i \dot q^j + \frac{\partial V}{\partial y^a} = u_a,\label{EL:coord:2} \end{align} where $[ij,k]$ denotes the Christoffel symbol \[ [ij,k]= \frac{1}{2} \left ( \frac{\partial m_{ik}}{\partial q^j} + \frac{\partial m_{jk}}{\partial q^i} - \frac{\partial m_{ij}}{\partial q^k}\right ) \] for $i,j,k = 1, \ldots, n$. If there were no external forces, i.e. $k_{\alpha \beta} =0$ for all $\alpha, \beta$, then the momenta $\frac{\partial L}{\partial \dot x^\alpha}$'s would be the first integrals of the system (\ref{e:equation of motion:x}), but due to the external forces $F_\alpha = -k_{\alpha\beta} \dot x^\beta$'s, $\frac{\partial L}{\partial \dot x^\alpha}$'s are not conserved any more. We will here show that there are new conserved quantities associated with \eqref{e:equation of motion:x} in place of $\frac{\partial L}{\partial \dot x^\alpha}$'s. \begin{lemma}\label{lemma:fund} Assumption A4) entails the following: \begin{enumerate} \item For each $\alpha$, there is a function $h_{\alpha} : \mathbb R^r \rightarrow \mathbb R$ such that \begin{equation}\label{prop:h} \frac{\partial h_{\alpha}}{\partial x^\beta} = k_{\alpha\beta}. \end{equation} \item For each $h_{\alpha}$, there is a function $U : \mathbb R^r \rightarrow \mathbb R$ such that \begin{equation}\label{prop:V} \frac{\partial U}{\partial x^\alpha} = h_{\alpha}. 
\end{equation} Namely, the damping coefficient matrix $(k_{\alpha \beta}(x))$ is the second-order derivative matrix of $U$. \end{enumerate} \begin{proof} \begin{enumerate} \item By \eqref{int:cond:k} and Poincar\'{e}'s lemma, there is a function $h_\alpha$ that satisfies \eqref{prop:h}. \item By \eqref{sym:cond:k} and \eqref{prop:h}, we have ${\partial h_{\alpha}}/{\partial x^\beta} = k_{\alpha\beta} = k_{\beta\alpha} = {\partial h_{\beta}}/{\partial x^\alpha}$. Hence, by Poincar\'{e}'s lemma, there is a function $U : \mathbb R^r \rightarrow \mathbb R$ such that \eqref{prop:V} holds. \end{enumerate} \end{proof} \end{lemma} Using Lemma \ref{lemma:fund}, we now show that the dynamics \eqref{e:equation of motion:x} have $r$ first integrals. \begin{theorem} Let $h_\alpha$'s be functions that satisfy (\ref{prop:h}). Then, the $r$ functions \begin{equation}\label{cons:quantity} \frac{\partial L}{\partial \dot x^\alpha} + h_\alpha \end{equation} or equivalently \begin{equation}\label{first:int} m_{\alpha \beta} \dot x^\beta + m_{\alpha a} \dot y^a + h_\alpha \end{equation} for $\alpha =1, \ldots, r,$ are the first integrals of the dynamics \eqref{e:equation of motion:x}. \begin{proof} Differentiating \eqref{cons:quantity} with respect to $t$ along the trajectory of the system gives \begin{align*} \frac{d}{dt} \left (\frac{\partial L}{\partial \dot x^\alpha} + h_\alpha \right ) = \frac{d}{dt}\frac{\partial L}{\partial \dot x^\alpha} + \frac{\partial h_{\alpha}}{\partial x^\beta} \dot x^\beta = -k_{\alpha\beta} \dot x^\beta + k_{\alpha\beta}\dot x^\beta =0 \end{align*} due to \eqref{e:equation of motion:x} and \eqref{prop:h}. This completes the proof. \end{proof} \end{theorem} The vector-valued map $(q, \dot q) \mapsto (\frac{\partial L}{\partial \dot x^\alpha} + h_\alpha)$ shall be called the damping-added momentum map. \subsection{Damping-Induced Self-Recovery and Damping-Induced Boundedness}\label{sec:self-recovery and boundedness} The first integrals in \eqref{cons:quantity} depend on the initial condition $(x(0), y(0), \dot x(0), \dot y(0))$. Taking an arbitrary initial condition and letting $\mu_\alpha$ be the initial value of the corresponding first integral, $\frac{\partial L}{\partial \dot x^\alpha} + h_\alpha$, we have \begin{equation}\label{eq:cons} m_{\alpha \beta} \dot x^\beta + m_{\alpha a} \dot y^a + h_\alpha (x) = \mu_\alpha \end{equation} for all $t \in \mathbb R$. By Lemma \ref{lemma:fund}, there is a function $U$ on $\mathbb R^r$ such that \eqref{prop:V} holds. By adding $-\mu_\alpha x^\alpha$ to it, we can obtain a function $U_\mu : \mathbb R^r \rightarrow \mathbb R$ such that \begin{equation}\label{dVmu} \frac{\partial U_\mu(x)}{\partial x^\alpha} = h_\alpha(x) - \mu_\alpha. \end{equation} The conservation equations \eqref{eq:cons} for the damping-added momenta can be regarded as first-order differential equations for the cyclic variables $x^\alpha$'s. The damping-induced self-recovery phenomenon is a direct consequence of the asymptotic stability of these first-order dynamics under some conditions on the function $U_\mu$. Let us make the following assumptions on $U_\mu$, which constitute sufficient conditions for the self-recovery phenomenon to occur: \begin{itemize} \item [A5)] The function $U_\mu$ has a unique critical point, denoted $x_e$, and it is a minimum point of $U_\mu$. \item [A6)] There is a number $\delta_1 >0$ such that \[ \inf_{\| x -x_e\|\geq \delta_1} U_\mu(x) > U_\mu(x_e).
\] \item [A7)] There is a number $\delta_2 >0$ such that \[ \inf_{\| x -x_e\|\geq \delta_2} \| dU_\mu(x)\| >0, \] where $dU_\mu = (\frac{\partial U_\mu}{\partial x^1}, \ldots, \frac{\partial U_\mu}{\partial x^r})$. \end{itemize} Assumption A6) guarantees that there does not exist any sequence $\{x_k\}$ in $\mathbb R^r$ with $\lim_{k\rightarrow\infty} \|x_k\| = \infty$ such that $\lim_{k\rightarrow \infty} U_\mu(x_k) = U_\mu(x_e)$. Likewise, assumption A7) guarantees that there does not exist any sequence $\{x_k\}$ in $\mathbb R^r$ with $\lim_{k\rightarrow\infty} \|x_k\| = \infty$ such that $\lim_{k\rightarrow \infty} \|dU_\mu(x_k)\| = 0$. We now state one of the two main theorems of this paper. \begin{theorem}[Damping-Induced Self-Recovery]\label{thm:main:1} Suppose that controls $u_a(t)$'s are chosen such that $\lim_{t\rightarrow \infty} \dot y(t) = 0$ and there are numbers $c_1 >0$, $c_2>0$ and $c_3>0$ such that \begin{align} &c_1I \preceq (m_{\alpha \beta} (y(t))) \preceq c_2I \label{c1:m:c2}\\ &\| (m_{\alpha a} (y(t))) \| \leq c_3\label{m:c3} \end{align} for all $t\geq 0$. Then, $\lim_{t\rightarrow \infty} x(t) = x_e$ and $\lim_{t\rightarrow \infty} \dot x(t) = 0$. In particular, if the initial condition is such that $\dot x(0) =0$ and $\dot y(0)=0$, then $\lim_{t\rightarrow \infty} x(t) = x(0)$. \begin{proof} By \eqref{prop:h}, integration of \eqref{e:equation of motion:x} with respect to $t$ yields \eqref{eq:cons} for some constants $\mu_\alpha$'s, which can be written as \begin{equation}\label{dot:x:alpha:eq} \dot x^\alpha = - m^{\alpha\beta} \frac{\partial U_\mu}{\partial x^\beta} - m^{\alpha \beta} m_{\beta a} \dot y^a \end{equation} by \eqref{dVmu}. By adding a constant, we may assume that $U_\mu (x_e) =0$. For each $s>0$, define an open set $W_s$ in $\mathbb R^r$ by \[ W_s = \{ x\in \mathbb R^r \mid U_\mu (x) < s\}. \] Take any $\epsilon >0$. We will find a number $T>0$ such that $\| x(t) - x_e \| < \epsilon$ for all $t\geq T$. By A5) and A6), there is a sufficiently small $\delta >0$ such that \begin{equation}\label{Uimplyepsilon} x \in W_\delta \Rightarrow \| x - x_e \| < \epsilon. \end{equation} Replacing $\delta$ by a smaller positive number if necessary, we can find, by A5) -- A7), an $\epsilon_1 >0$ such that \begin{equation}\label{UdV} x \notin W_\delta \Rightarrow \| dU_\mu (x) \| \geq \epsilon_1. \end{equation} Since the matrix $(m_{\alpha\beta})$ is symmetric and positive definite, \eqref{c1:m:c2} implies \begin{equation}\label{c2:c1:inverse} \frac{1}{c_2}I \preceq (m^{\alpha \beta} (y(t))) \preceq \frac{1}{c_1}I. \end{equation} Choose a number $0<\ell <1$ such that \begin{equation}\label{ineq:4:d} \frac{1}{c_2} \left (1-\frac{1}{2}\ell \right )^2 - \frac{1}{4c_1}\ell^2 > 0, \end{equation} which is always possible since the left hand side of \eqref{ineq:4:d} is continuous in $\ell$ and is positive at $\ell=0$. Since $\lim_{t\rightarrow \infty} \dot y(t) =0$ and the matrix $(m_{\alpha a}(y(t)))$ is bounded, there is a $T_1 >0$ such that \begin{equation}\label{depsilon1} \| (m_{\alpha a} (y(t)) \dot y^a(t)) \| < \ell \epsilon_1 \end{equation} for all $t \geq T_1$. 
Whenever $x(t) \notin W_\delta$ for some $t \geq T_1$, we have, by \eqref{dot:x:alpha:eq}, \eqref{UdV}, \eqref{c2:c1:inverse} and \eqref{depsilon1}, \begin{align} \frac{dU_\mu(x(t))}{dt} &= \frac{\partial U_\mu(x)}{\partial x^\alpha}\dot x^\alpha \nonumber \\ &= \frac{\partial U_\mu}{\partial x^\alpha} \left ( - m^{\alpha\beta} \frac{\partial U_\mu}{\partial x^\beta} - m^{\alpha \beta} m_{\beta a} \dot y^a\right ) \nonumber \\ &= - m^{\alpha\beta} \left ( \frac{\partial U_\mu}{\partial x^\alpha} + \frac{1}{2}m_{\alpha a} \dot y^a \right )\left ( \frac{\partial U_\mu}{\partial x^\beta} + \frac{1}{2}m_{\beta b} \dot y^b \right ) + \frac{1}{4}m^{\alpha \beta} m_{\alpha a}\dot y^a m_{\beta b} \dot y^b \nonumber \\ &\leq -\frac{1}{c_2} \left \| \left ( \frac{\partial U_\mu}{\partial x^\alpha} + \frac{1}{2}m_{\alpha a} \dot y^a \right ) \right \|^2 + \frac{1}{4c_1} \| ( m_{\alpha a}\dot y^a) \|^2 \nonumber \\ &\leq -\frac{1}{c_2} \left | \left \| dU_\mu\right \| - \frac{1}{2} \| (m_{\alpha a}\dot y^a)\| \right |^2 + \frac{1}{4c_1} \ell^2\epsilon_1^2 \nonumber \\ &\leq -\frac{1}{c_2} \left | \epsilon_1 - \frac{1}{2}\ell\epsilon_1 \right |^2 + \frac{1}{4c_1} \ell^2\epsilon_1^2 \nonumber \\ &= - \left ( \frac{1}{c_2} \left (1-\frac{1}{2}\ell \right )^2 - \frac{1}{4c_1}\ell^2 \right ) \epsilon_1^2, \label{dVmu7} \end{align} where the right-hand side of \eqref{dVmu7} is negative by \eqref{ineq:4:d}. Hence, $U_\mu(x(t))$ decreases at least linearly in time as long as $x(t) \notin W_\delta$. By definition of $W_\delta$ and this uniform rate of decrease, the trajectory must enter $W_\delta$ in finite time and cannot leave it afterwards, i.e.\ there exists $T > T_1$ such that $x(t) \in W_\delta $ for all $t\geq T$. Thus, $\|x(t) - x_e\| < \epsilon$ for all $t \geq T$ by \eqref{Uimplyepsilon}. Therefore, $\lim_{t\rightarrow \infty } x(t) = x_e$. By taking the limit of both sides of \eqref{dot:x:alpha:eq}, we obtain $\lim_{t\rightarrow\infty} \dot x(t) = 0$ since $dU_\mu (x_e) = 0$ by A5). In particular, if $\dot x(0) =0$ and $\dot y(0) = 0$, then $x_e = x(0)$ by A5) and \eqref{dot:x:alpha:eq}, so $\lim_{t\rightarrow \infty} x(t) = x(0)$. \end{proof} \end{theorem} \begin{remark}The result in Theorem \ref{thm:main:1} is global. In order to get a local result, one has only to assume A5) -- A7) in a neighborhood of $x_e$ and to assume that the trajectory $x(t)$ stays in the neighborhood. \end{remark} As was discovered in the case with a single cyclic variable \cite{ChJe12}, the viscous damping force not only induces self-recovery but also imposes a bound on the range of the cyclic variable. Such a boundedness property also holds for multiple cyclic variables as stated in the following theorem. \begin{theorem}[Damping-Induced Boundedness]\label{thm:damping:bound} Suppose that the function $U_\mu$ satisfies \begin{equation}\label{dVmuinfty} \lim_{\|x\| \rightarrow \infty} \| dU_\mu (x) \| = \infty \end{equation} and \begin{equation}\label{U:infty} \lim_{\|x\| \rightarrow \infty } U_\mu(x) = \infty. \end{equation} If controls $u_a(t)$'s are chosen such that $\dot y(t)$ is bounded and there exist $c_1 >0$, $c_2 >0$ and $c_3 >0$ such that (\ref{c1:m:c2}) and (\ref{m:c3}) hold for all $t\geq 0$, then $x(t)$ is bounded. \begin{proof} Since $\dot y(t)$ is bounded by assumption, there is a number $c_4>0$ such that $\| \dot y(t)\| \leq c_4$ for all $t\geq 0$. By \eqref{dVmuinfty}, there is a number $\ell>0$ such that \[ \|x - x_e\| \geq \ell \Rightarrow \| dU_\mu (x) \| \geq 1 + \frac{c_2c_3c_4}{c_1}. \] Suppose that $x(t)$ is not bounded.
Then, since \eqref{U:infty} forces $U_\mu(x(t))$ to take arbitrarily large values along an unbounded trajectory, there are numbers $0<t_1 < t_2$ such that $\| x(t) - x_e\| \geq \ell$ for all $t\in [t_1, t_2]$ and $U_\mu(x(t_1)) < U_\mu(x(t_2))$. Then, for all $t \in [t_1, t_2]$ \begin{align*} \frac{dU_\mu(x(t))}{dt} &= \frac{\partial U_\mu}{\partial x^\alpha} \left ( - m^{\alpha\beta} \frac{\partial U_\mu}{\partial x^\beta} - m^{\alpha \beta} m_{\beta a} \dot y^a\right )\\ &\leq -\frac{1}{c_2} \| dU_\mu (x(t))\|^2 + \frac{c_3c_4}{c_1}\|dU_\mu (x(t))\| \\ &= \left (-\frac{1}{c_2} \| dU_\mu (x(t))\| + \frac{c_3c_4}{c_1} \right ) \|dU_\mu (x(t))\| \\ &\leq \left ( -\frac{1}{c_2} \left(1 + \frac{c_2c_3c_4}{c_1}\right) + \frac{c_3c_4}{c_1} \right ) \|dU_\mu (x(t))\|\\ &\leq -\frac{1}{c_2}. \end{align*} Hence, \begin{align*} 0 < U_\mu(x(t_2)) - U_\mu(x(t_1)) = \int_{t_1}^{t_2} \frac{dU_\mu}{dt}(t) dt \leq -\frac{1}{c_2} (t_2 - t_1) < 0, \end{align*} which is a contradiction. Therefore, $x(t)$ is bounded. \end{proof} \end{theorem} \subsection{Diagonal Damping Force: A Special Case} Suppose that there are continuous functions $k_1, \ldots, k_r : \mathbb R \rightarrow \mathbb R$ such that the damping coefficients $k_{\alpha \beta}$ are given as \begin{align} &k_{11} (x) = k_1(x^1); \quad k_{22}(x) = k_2(x^2); \quad \cdots \quad; \quad k_{rr}(x) = k_r (x^r); \label{k:alphas}\\ &k_{\alpha\beta}(x) = 0 \quad \textup{ for $\alpha \neq \beta$}. \nonumber \end{align} Notice that we do not assume that $k_\alpha(x^\alpha)$'s take only non-negative values though we call them damping coefficients for convenience. A function $h_\alpha$ satisfying \eqref{prop:h} is given by \begin{equation}\label{simple:h} h_\alpha(x^\alpha) = \int_0^{x^\alpha} k_\alpha(s) ds, \end{equation} where there is no summation over the index $\alpha$. Given $\mu = (\mu_1, \ldots, \mu_r) \in \mathbb R^r$, the function $U_\mu$ defined by \begin{equation}\label{simple:Vmu} U_\mu (x) = \sum_{\alpha=1}^r \int_0^{x^\alpha} (h_\alpha(s) - \mu_\alpha) ds \end{equation} satisfies \eqref{dVmu}. \begin{corollary}[Damping-Induced Self-Recovery]\label{cor:self} Suppose that the functions $k_\alpha$'s given in (\ref{k:alphas}) and the functions $h_\alpha$'s defined in \eqref{simple:h} satisfy the following: \begin{itemize} \item [(i)] For each $\alpha$, the equation $h_\alpha (s) - \mu_\alpha = 0$ has a unique root, which is denoted by $x_e^\alpha$. \item [(ii)] For each $\alpha$, $k_\alpha (x^\alpha_e) >0$. \item [(iii)] For each $\alpha$, there is an open interval $I$ containing $x_e^\alpha$ such that \[ \inf_{s \in \mathbb R\backslash I } |h_\alpha(s) - \mu_\alpha | >0. \] \end{itemize} Then, the function $U_\mu$ defined in \eqref{simple:Vmu} satisfies A5) -- A7) such that the conclusions in Theorem \ref{thm:main:1} hold true. \begin{proof} Take any $\alpha$ between $1$ and $r$. Since $h^\prime_\alpha(x_e^\alpha) = k_\alpha(x_e^\alpha) >0$, the function $h_\alpha (s) - \mu_\alpha$ is increasing over an open interval containing $x_e^\alpha$. Thus, by condition (i), we have $h_\alpha (s) - \mu_\alpha >0$ for all $s > x_e^\alpha$ and $h_\alpha (s) - \mu_\alpha <0$ for all $s < x_e^\alpha$. It is now straightforward to show that the function $U_\mu$ defined in \eqref{simple:Vmu} satisfies A5) -- A7).
\end{proof} \end{corollary} \begin{corollary}[Damping-Induced Boundedness] Suppose that the functions $h_\alpha$ defined in \eqref{simple:h} satisfy the three conditions in Corollary \ref{cor:self} and the following condition: \begin{itemize} \item [(iv)] For each $\alpha$, $\lim_{s\rightarrow \infty} h_\alpha(s) = \infty$ and $\lim_{s\rightarrow -\infty} h_\alpha(s) =- \infty$. \end{itemize} Then, the conclusion in Theorem \ref{thm:damping:bound} holds true. \begin{proof} It is easy to show that $U_\mu$ given in \eqref{simple:Vmu} satisfies \eqref{dVmuinfty} and \eqref{U:infty}. \end{proof} \end{corollary} \subsection{Choice of Control for Damping-Induced Self-Recovery} In Theorems \ref{thm:main:1} and \ref{thm:damping:bound} we have assumed that an appropriate control law exists to satisfy some conditions on the trajectory $y(t)$. We now constructively show that such a control law exists. \begin{lemma}[\cite{Spong94}]\label{lemma:spong} The control law \begin{equation}\label{control:ua} u_a = f_a(y,\dot x, \dot y) + g_{ab}(y) \tau^b, \end{equation} where $\tau^b$'s are new control variables and \begin{align*} f_{a} &= \left( [ij,a] - m_{a\alpha}[ij,\beta]m^{\alpha\beta} \right )\dot q^i \dot q^j - m_{a\alpha} m^{\alpha\beta}k_{\beta \gamma} \dot x^\gamma+ \frac{\partial V}{\partial y^a},\\ g_{ab}&= m_{ab} -m_{a\alpha}m_{b\beta} m^{\alpha\beta}, \end{align*} transforms the system in (\ref{EL:coord:1}) and (\ref{EL:coord:2}) to the following system: \begin{align} \dot x^\alpha &= - m^{\alpha\beta}\frac{\partial U_\mu}{\partial x^\beta} - m^{\alpha\beta} m_{\beta b} \dot y^b\label{seqn:1}\\ \ddot y^a &= \tau^a, \label{seqn:2} \end{align} where $U_\mu$ is a function that satisfies \eqref{dVmu}. \begin{proof} Solve \eqref{EL:coord:1} for $\ddot x^\beta$, substitute it into \eqref{EL:coord:2}, and apply the control in \eqref{control:ua} to obtain \eqref{seqn:2}. Since $dh_\alpha/dt = k_{\alpha \beta} \dot x^\beta$, integration of \eqref{e:equation of motion:x} with respect to $t$ yields \eqref{eq:cons} where $\mu_\alpha$ is the value of the first integral. Solving \eqref{eq:cons} for $\dot x^\alpha$ gives \eqref{seqn:1}. \end{proof} \end{lemma} Equation \eqref{seqn:2} implies that we have full control over the motion of the $y^a$ variables. Hence, it is always possible to find a control law to satisfy the assumptions in Theorems \ref{thm:main:1} and \ref{thm:damping:bound}; refer to \cite{ChJe12} for more details on the method of choosing controls $u_a$'s. For example, suppose that $y_{\rm d}(t) = (y^a_{\rm d}(t))$ is a reference trajectory that $y(t)$ must follow. Then, we can choose the following control law for $\tau^a$: \begin{equation}\label{e:error dynamics from PD} \tau^a = {\ddot {y} }_{\rm d}^a(t) - c^a_1 ( \dot y^a - {\dot {y}}_{\rm d}^a (t) ) - c^a_0 ( y^a- {y_{\rm d} ^a} (t)) \end{equation} with $c^a_1 >0$ and $c^a_0 >0$ such that the tracking error $e^a(t):= y^a(t) - y^a_{\rm d}(t)$ obeys the following exponentially stable dynamics: \[ \ddot e^a + c^a_1 \dot e^a + c^a_0 e^a =0. \] Thus, $y^a(t)$ and $\dot y^a(t)$ converge exponentially to the reference trajectory $y^a_{\rm d}(t)$ and $\dot y^a_{\rm d}(t)$, respectively, for each $a = r+1, \ldots, n$. \section{Examples}\label{sec:examples} In this section, we take two examples of mechanical systems with multiple cyclic variables to demonstrate the phenomena of damping-induced self-recovery and boundedness; a minimal numerical sketch of the reduced dynamics is given first, as shown below.
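Before turning to the detailed examples, the following sketch illustrates the mechanism numerically. It is our illustration only, not the simulation code used for the figures below; the toy data (a single cyclic variable $x$ with constant inertia $m_{xx}=2$, inertia coupling $m_{xy}(y)=\cos y$, constant damping coefficient $k=3$, and a constant reference $y_{\rm d}=1$ in the tracking law \eqref{e:error dynamics from PD}) are assumptions made for the example. It integrates the reduced system \eqref{seqn:1}--\eqref{seqn:2} by forward Euler.
\begin{verbatim}
# Minimal sketch (illustration only, assumed toy parameters): forward-Euler
# integration of the reduced dynamics (seqn:1)-(seqn:2) with one cyclic
# variable x and one actuated variable y.
import numpy as np

m_xx, k0 = 2.0, 3.0                  # constant inertia and damping coefficient
m_xy = lambda y: np.cos(y)           # inertia coupling between x and y
h = lambda x: k0 * x                 # h(x) = int_0^x k(s) ds, cf. (simple:h)

dt, steps = 1e-3, 20000              # integrate up to t = 20
x, y, ydot = 0.5, 0.0, 0.0           # start at rest, hence mu = h(x(0))
mu = m_xx * 0.0 + m_xy(y) * ydot + h(x)   # value of the first integral (eq:cons)

y_d, c1, c0 = 1.0, 4.0, 4.0          # constant reference and PD gains
for _ in range(steps):
    tau = -c1 * ydot - c0 * (y - y_d)             # tracking law, y_d constant
    xdot = -(h(x) - mu + m_xy(y) * ydot) / m_xx   # solve (eq:cons) for xdot
    x += dt * xdot
    y += dt * ydot
    ydot += dt * tau
print("x(0) = 0.5,  x(20) =", round(x, 6))        # x recovers its initial value
\end{verbatim}
Since the system starts at rest, $\mu = h(x(0))$, so the unique root of $h(x)-\mu$ is $x_e = x(0) = 0.5$, and the printed terminal value returns to $0.5$ up to integration error, in accordance with Theorem \ref{thm:main:1}.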
\subsection{Planar Pendulum with Gimbal Actuators} \begin{figure}[!htp] \centering \includegraphics[trim = 0mm 0mm 0mm 0mm, clip,scale=0.42]{Fig1.eps} \caption{The planar pendulum with gimbal actuators.} \label{f:planar pendulum} \end{figure} Consider the planar pendulum system in Fig. \ref{f:planar pendulum}. The planar motion of the base block is unactuated and constrained to move freely along the $x$ and $y$ directions only. The pendulum rod is assumed to be actuated by the gimbal-like mechanism which can apply torques along the $\theta_1$ and $\theta_2$ rotational axes. Choose coordinates as $(q^1, q^2; q^3, q^4) = (x,y; \theta_1, \theta_2)$, where $x$ and $y$ are unactuated but subject to viscous damping-like forces and $\theta_1$ and $\theta_2$ are actuated by controls (i.e. $n=4$ and $r=2$). The mass matrix ${M}$ is given by \begin{equation*} {M}=\begin{pmatrix}M_a+M_b+m & 0 & -mrc^{\phantom{1}}_1c^{\phantom{1}}_2 & mrs^{\phantom{1}}_1s^{\phantom{1}}_2\\ 0 & M_a+m & 0 & mrc^{\phantom{1}}_2\\ -mrc^{\phantom{1}}_1c^{\phantom{1}}_2 & 0 & mr^2c_2^2 & 0\\mrs^{\phantom{1}}_1s^{\phantom{1}}_2 & mrc^{\phantom{1}}_2 & 0 & mr^2\end{pmatrix}, \end{equation*}where $c_i$ and $s_i$ denote $\cos{\theta_i}$ and $\sin{\theta_i}$, respectively, and the parameters in ${M}$ are listed in Table \ref{t:planar pendulum}. Since $x$ and $y$ do not show up in ${M}$ and also are not affected by gravity, they are cyclic variables. \begin{table}[!htp] \begin{center}\caption{Parameters for planar pendulum} \label{t:planar pendulum} \begin{tabular}{p{1cm}p{3.7cm}cl} \hline $\phantom{s}$ & Parameter & Value & Unit\\ \hline $M_a$ & Slider mass & 2 & [kg]\\ $M_b$ & Gantry bar mass & 3 & [kg]\\ $m$ & Pendulum ball mass & 3 & [kg]\\ $r$ & Rod length & 0.5 & [m] \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[!htp] \centering \includegraphics[trim = 0mm 0mm 0mm 0mm, clip,scale=0.75]{Fig2.eps} \caption{Time trajectories of the actuated variables, $\theta_1(t)$ and $\theta_2(t)$.} \label{f:planar_pendu_th} \end{figure} To demonstrate the self-recovery phenomenon in this case, we apply the control law in Lemma \ref{lemma:spong} to control $\theta_1$ and $\theta_2$ such that they move from 0 [rad] to $\frac{\pi}{3}$ [rad] and $\frac{\pi}{4}$ [rad], respectively, within 3 seconds; see Fig. \ref{f:planar_pendu_th}. For the unactuated cyclic joints, $x$ and $y$, we simulate two different cases of damping matrix $K = (k_{\alpha\beta})_{1\leq \alpha, \beta \leq 2}$: \[ K_1 = \begin{pmatrix}3 & 0\\0 & 3\end{pmatrix},\quad K_2 = \begin{pmatrix}5 + 2\cos{x} & 4\\4 & 4+ 2\cos y\end{pmatrix}. \] Notice that $K_1$ and $K_2$ are the second-order derivative matrices of $U_1(x,y) = \frac{3}{2}\left(x^2 + y^2\right)$ and $U_2(x,y) = \frac{1}{2}x^2 + 2(x+y)^2 - 2\cos x -2\cos y$, respectively. One can easily verify that both $U_1$ and $U_2$ satisfy assumptions A5) -- A7) and equations (\ref{dVmuinfty}) and (\ref{U:infty}). \begin{figure}[!htp] \centering \subfigure[Time trajectories.] {\label{f:planar_pendu_xy_time} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip,scale=0.7]{Fig3a.eps}} \subfigure[$x-y$ plot.] {\label{f:planar_pendu_xy} \includegraphics[trim = 0mm 0mm 0mm 0mm, clip,scale=0.7]{Fig3b.eps}} \caption{Trajectories of unactuated cyclic variables, $x(t)$ and $y(t)$.} \label{f:planar_pendu_xy_and_time} \end{figure} The self-recovery phenomenon for both cases is verified in Fig. \ref{f:planar_pendu_xy_and_time}. From Fig. \ref{f:planar_pendu_xy_time} and Fig.
\ref{f:planar_pendu_th}, one can see that $x(t)$ and $y(t)$ converge to the origin, which is the initial condition, after $\theta_1 (t)$ and $\theta_2 (t)$ settle down. The planar motion is described in Fig. \ref{f:planar_pendu_xy}, where we can clearly see the pendulum base automatically returns to its initial position for both cases. \subsection{Three-Link Manipulator on a Horizontal Plane} \begin{figure}[!htp] \centering \includegraphics[trim = 0mm 0mm 0mm 0mm, clip,scale=0.42]{Fig4.eps} \caption{A three-link manipulator on a horizontal plane with $\theta_1$ and $\theta_3$ unactuated.} \label{f:threelink} \end{figure} Consider the three-link open chain manipulator moving on a horizontal plane (i.e. no gravity effect) in Fig. \ref{f:threelink}. One can easily see that the first joint angle $\theta_1$ is a cyclic variable, which is always the case for planar kinematic chains whose motion is constrained to a horizontal plane. For this particular system in Fig. \ref{f:threelink}, the third joint angle $\theta_3$ is also cyclic since the center of mass of the third link is located on its axis of rotation. Thus, we anticipate the self-recovery effect will occur in these two joint variables. To be compatible with the notation introduced in Section \ref{sec:description}, we choose the configuration variables as $q^1 = \theta_1$, $q^2 = \theta_3$ and $q^3 = \theta_2$. Then, the mass matrix is written as \begin{equation*} {M}=\begin{pmatrix}\alpha + 2\beta c_2 & I_3 & \delta + \beta c_2\\ I_3 & I_3 & I_3 \\ \delta + \beta c_2 & I_3 & \delta \end{pmatrix}, \end{equation*}where $c_2$ denotes $\cos{\theta_2}$, and parameters $\alpha$, $\beta$ and $\delta$ are given by \begin{equation*} \begin{split} \alpha &= I_1 + I_2 + I_3 + m_1r_1^2 + m_2(\ell_1^2+r_2^2) + m_3(\ell_1^2+\ell_2^2),\\ \beta &= \ell_1(m_2r_2 + m_3\ell_2),\\ \delta &= I_2 + I_3 + m_2r_2^2 + m_3\ell_2^2, \end{split} \end{equation*} where individual parameter values are listed in Table \ref{t:three link}. We use the following damping coefficient matrix: \[ K = \begin{pmatrix}6 & 0\\0 & 3\end{pmatrix} \] which is the second-order derivative matrix of the function $U(\theta_1,\theta_3) = \frac{3}{2}\left(2\theta_1^2 + \theta_3^2\right)$. \begin{table}[!htp] \begin{center}\caption{Parameters for the three-link manipulator} \label{t:three link} \begin{tabular}{p{1cm}p{5.5cm}cl} \hline $\phantom{s}$ & Parameter & Value & Unit\\ \hline $\ell_1$ & Link 1 length & 0.5 & [m] \\ $\ell_2$ & Link 2 length & 0.5 & [m] \\ $r_1$ & Location of link 1 center of mass & 0.1 & [m] \\ $r_2$ & Location of link 2 center of mass & 0.1 & [m] \\ $I_1$ & Link 1 moment of inertia & 2 & [$\textrm{kg}\cdot\textrm{m}^2$]\\ $I_2$ & Link 2 moment of inertia & 2 & [$\textrm{kg}\cdot\textrm{m}^2$]\\ $I_3$ & Link 3 moment of inertia & 2 & [$\textrm{kg}\cdot\textrm{m}^2$]\\ $m_1$ & Link 1 mass & 10 & [kg]\\ $m_2$ & Link 2 mass & 10 & [kg]\\ $m_3$ & Link 3 mass & 10 & [kg]\\ \hline \end{tabular} \end{center} \end{table} Figure \ref{f:threelink_th} plots the time trajectories of the joint angles. The top plot shows the controlled motion of the second joint $\theta_2$ while the second and the third present the time trajectories of the two unactuated joints $\theta_1$ and $\theta_3$, respectively. A tracking controller is designed according to \eqref{e:error dynamics from PD} such that the actuated second joint $\theta_2$ makes sixteen full revolutions (i.e. $32\pi$ [rad]) during the first 8 seconds of time.
The initial values for the joint angles are set to $\theta_2(0)=\frac{\pi}{3}$, and $\theta_1(0)=\theta_3(0)=0$. Figure \ref{f:threelink_th} clearly shows that both of the unactuated cyclic variables $\theta_1$ and $\theta_3$ converge back to their initial positions right after $\theta_2$ is regulated at $32\pi$ [rad]. Note that $\theta_1$ returns to its original position after having made two full turns, demonstrating that the self-recovery effect is global. Besides the self-recovery phenomenon, the trajectories of $\theta_1$ and $\theta_3$ in Fig. \ref{f:threelink_th} also demonstrate the damping-induced boundedness derived in Theorem \ref{thm:damping:bound}. More specifically, we can observe that $\theta_1$ and $\theta_3$ oscillate around $-4\pi$ and $-3\pi$, respectively, while $\theta_2$ is still increasing. \begin{figure}[!htp] \centering \includegraphics[trim = 0mm 0mm 0mm 0mm, clip,scale=0.75]{Fig5.eps} \caption{Time trajectories of joint angles, $\theta_2(t)$ (actuated), $\theta_1(t)$ (unactuated), and $\theta_3(t)$ (unactuated).} \label{f:threelink_th} \end{figure} \section{Conclusion and Future Work}\label{sec:conclusions} The phenomena of damping-induced self-recovery and damping-induced boundedness have been generalized to mechanical systems with several unactuated cyclic variables. The major contribution of this paper comes from characterizing the damping coefficient matrix as the second-order derivative matrix of a function and identifying a class of such functions that guarantee the self-recovery and boundedness of all unactuated cyclic variables. Regular momentum conservation arises as the limit of the damping-induced self-recovery as the damping disappears, in the sense that the recovery phenomenon vanishes in the limit. Non-trivial examples of mechanical systems with multiple cyclic variables are provided to demonstrate the theoretical discoveries. It will be interesting to study the possible occurrence of damping-induced self-recovery and boundedness in the case of non-Abelian symmetry in the unactuated variables. We are currently examining the system of a spacecraft with internal rotors, where the spacecraft experiences viscous damping friction as it rotates.
\section{Introduction} \label{sec:introduction} In this paper we study neural network approximations for the superhedging price process for a contingent claim in discrete time. \\ Superhedging was first introduced in \cite{quenez} and then thoroughly studied in various settings and market models. It is impossible to cover the complete literature here, but we name just a few references. For instance, in continuous time, for general c\`adl\`ag processes we mention \cite{kramkovduality}, for robust superhedging \cite{nutz2015robust}, \cite{touzi2014martingale}, for pathwise superhedging on prediction sets \cite{bartl2020pathwise}, \cite{bartl2019duality}, or for superhedging under proportional transaction costs \cite{campischachermayer}, \cite{cvitanic}, \cite{kabanovlast}, \cite{schachermayersuperreplication}, \cite{soner1995there}. Also in discrete time there are various studies in the literature, like the standard case \cite{follmerschied}, robust superhedging \cite{carassus2019robust}, \cite{obloj2021robust}, superhedging under volatility uncertainty \cite{nutz2012superhedging}, or model-free superhedging \cite{burzoni2017model}. The superhedging price provides an opportunity to secure a claim, but it may be too high, and the corresponding strategy reduces the possibility to profit from the option. In order to solve this problem, quantile hedging was introduced in \cite{foellmer99}, where the authors propose to either fix the initial capital and maximize the probability of superhedging with this capital or fix a probability of superhedging and minimize the required capital. In this way a trader can determine the desired trade-off between costs and risk. \\ In certain situations it is possible to calculate explicitly or recursively the superhedging or quantile hedging price, see e.g.\ \cite{carassus2007class}, but in general incomplete markets it may be complicated to determine superhedging prices or quantile hedging prices. In this article we investigate neural network-based approximations for quantile- and superhedging prices. Neural network-based methods have been recently introduced in financial mathematics, for instance for hedging derivatives, see \cite{buehler2018}, determining stopping times, see \cite{becker2019deep}, or calibration of stochastic volatility models, see \cite{cuchiero2020generative}, and many more. For an overview of applications of machine learning to hedging and option pricing we refer to \cite{ruf2020neural} and the references therein. \\ This paper contributes to the literature on hedging in discrete time market models in several ways. First, we prove that the $\alpha$-quantile hedging price converges to the superhedging price for $\alpha$ tending to $1$. Further, we show that it is feasible to approximate the $\alpha$-quantile hedging price and thus also the superhedging price for $t=0$ by neural networks. We extend our machine learning approach also to approximate the superhedging price process for $t>0$. From the first step we obtain an approximation of the superhedging strategy on the whole interval up to maturity. By using the uniform Doob decomposition, see \cite{follmerschied}, we then only need to approximate the process of consumption $B$ to generate the superhedging price process. We prove that $B$ can be obtained as the essential supremum over a set of neural networks. Finally, we present and discuss numerical results for the proposed neural network methods. \\ The paper is organized as follows.
In Section \ref{sec:prelim}, we present the discrete time market model of \cite{follmerschied} and recall essential definitions and results on superhedging. Section \ref{sec:price0} contains the study of the superhedging price for $t=0$. More specifically, in Section \ref{sec:quantilehedging} we prove in Theorem \ref{thm:convergencesuccessset} that the $\alpha$-quantile hedging price converges to the superhedging price as $\alpha$ tends to $1$. We also present a similar result in Corollary \ref{cor:successratioconvergence} in Section \ref{sec:successratio}, where $\alpha$-quantile hedging is given in terms of success ratios. In Section \ref{sec:NNprice0} we show in Theorem \ref{thm:universalapprox2} that the superhedging price can be approximated by neural networks. This concludes the approximation for $t=0$. Then, we consider the case $t>0$ in Section \ref{sec:superhedgingt}. In Section \ref{sec:uniformDoob}, we explain how the uniform Doob decomposition can be used to approximate the superhedging price process. To that end, we prove an explicit representation of the process of consumption, see Proposition \ref{prop:optimizationB}. Proposition \ref{prop:NNconsumption} and Theorem \ref{thm:NNgeneralt} show that the process of consumption and thus the superhedging price process can be approximated by neural networks. The numerical results are presented in Section \ref{sec:numericalresults}. The section is divided into the cases $t=0$, see Section \ref{sec:case0}, and $t>0$, see Section \ref{sec:caset}. We present details on the algorithm and the implementation. Appendix \ref{sec:appendixNN} contains a version of the universal approximation theorem, derived from \cite{hornik1991}. \section{Preliminaries} \label{sec:prelim} In this section we introduce the discrete time financial market model from \cite{follmerschied} and recall some basic notions on superhedging. \\ Consider a finite time horizon $T>0$. Let $(\Omega,{\mathcal{F}},{\textbf{P}})$ be a probability space endowed with a filtration ${\mathbb{F}}:=({\mathcal{F}}_t)_{t=0,1,\dots,T}$. We assume ${\mathcal{F}}_t=\sigma(Y_0,\dots,Y_t)$ for $t=0,\dots,T$, where $Y=(Y_t)_{t=0,\dots,T}$ is an ${\mathbb{R}}^m$-valued process with $m\in{\mathbb{N}}$, and write ${\mathcal{Y}}_t=(Y_0,\dots,Y_t)$ for $t\geq 0$. Further, we suppose that ${\mathcal{F}}={\mathcal{F}}_T$ and that $Y_0$ is constant ${\textbf{P}}$-a.s. Then ${\mathcal{F}}_0=\{\emptyset,\Omega\}$.\\ In our market model on $(\Omega,{\mathcal{F}},{\mathbb{F}},{\textbf{P}})$ the asset prices are modeled by a non-negative, adapted, stochastic process \[ \bar S=(S^0,S)=(S_t^0,S_t^1,\dots,S_t^d)_{t=0,1,\dots,T}, \] with $d\geq 1$, $d\in{\mathbb{N}}$. In particular, $m\geq d$. Further, we assume that \[ S_t^0>0 \quad{\textbf{P}}\text{-a.s. for all }t=0,1,\dots,T, \] and define $S^0=(S_t^0)_{t=0,1,\dots,T}$ to be the num\'eraire. The discounted price process $\bar X=(X^0,X)=(X_t^0,X_t^1,\dots,X_t^d)_{t=0,1,\dots,T}$ is given by \[ X_t^i:=\frac{S_t^i}{S_t^0},\quad t=0,1,\dots,T,\ i=0,\dots,d. \] A probability measure ${\textbf{P}}^*$ is called an equivalent martingale measure if ${\textbf{P}}^*$ is equivalent to ${\textbf{P}}$ and $X$ is a ${\textbf{P}}^*$-martingale. We denote by ${\mathcal{P}}$ the set of all equivalent martingale measures for $X$ and assume ${\mathcal{P}}\neq \emptyset$. By Theorem 5.16 of \cite{follmerschied} this is equivalent to the market model being arbitrage-free.
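To fix ideas before the formal definitions, the following minimal sketch (our illustration, not taken from the text; the one-period binomial model and all numbers are assumptions) determines the unique equivalent martingale measure of a toy market by imposing the martingale condition ${\mathbb{E}}^*[X_1]=X_0$ on the discounted price.
\begin{verbatim}
# Minimal sketch (illustration only): one-period binomial market with
# assumed data; q_star is the weight of the equivalent martingale measure.
S0, u, d, r = 100.0, 1.2, 0.8, 0.05        # toy up/down factors, riskless rate

# Numeraire S^0_t = (1+r)^t, discounted prices X_t = S_t / S^0_t.
X0 = S0
X1_up, X1_dn = S0 * u / (1 + r), S0 * d / (1 + r)

# E*[X_1] = X_0 forces q_star = ((1+r) - d) / (u - d); q_star lies in (0,1),
# i.e. an equivalent martingale measure exists, iff d < 1 + r < u.
q_star = ((1 + r) - d) / (u - d)
assert 0.0 < q_star < 1.0
print(q_star, q_star * X1_up + (1 - q_star) * X1_dn, X0)  # approx 0.625 100.0 100.0
\end{verbatim}
Here the assertion $0<q^*<1$ is exactly the no-arbitrage condition $d<1+r<u$, in line with the quoted equivalence between ${\mathcal{P}}\neq\emptyset$ and absence of arbitrage.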
\begin{definition} A trading strategy is a predictable ${\mathbb{R}}^{d+1}$-valued process \[ \bar\xi=(\xi^0,\xi)=(\xi_t^0,\xi_t^1,\dots,\xi_t^d)_{t=1,\dots,T}. \] The (discounted) value process $V=(V_t)_{t=0,\dots,T}$ associated with a trading strategy $\bar\xi$ is given by \[ V_0:=\bar\xi_1\cdot \bar X_0\quad\text{and}\quad V_t:=\bar\xi_t\cdot \bar X_t\quad \text{for }t=1,\dots,T. \] A trading strategy $\bar\xi$ is called self-financing if \[ \bar\xi_t\cdot\bar S_t=\bar\xi_{t+1}\cdot\bar S_t\quad\text{for }t=1,\dots,T-1. \] A self-financing trading strategy is called an \emph{admissible strategy} if its value process satisfies $V_T\geq 0$.\\ By ${\mathcal{A}}$ we denote the set of all admissible strategies $\bar\xi$ and by ${\mathcal{V}}$ the associated value processes, i.e., \[ {\mathcal{V}}:=\left\lbrace V=(V_t)_{t=0,1,\dots,T}:\, V_t=\bar\xi_t\cdot\bar X_t\text{ for }t=0,\dots,T,\text{ and } \bar\xi\in{\mathcal{A}}\right\rbrace. \] \end{definition}\noindent By Proposition 5.7 of \cite{follmerschied}, a trading strategy $\bar\xi$ is self-financing if and only if \[ V_t=V_0+\sum_{k=1}^t\xi_k\cdot(X_k-X_{k-1})\quad\text{for all }t=0,\dots,T, \] with $V_0:=\bar\xi_1\cdot \bar X_0$. In particular, given an ${\mathbb{R}}^d$-valued predictable process $\xi$ and $V_0\in{\mathbb{R}}$, the pair $(V_0,\xi)$ uniquely defines a self-financing strategy. \begin{remark} By Theorem 5.14 of \cite{follmerschied}, $V_T\geq 0$ ${\textbf{P}}$-a.s. implies that $V_t\geq 0$ ${\textbf{P}}$-a.s. for all $t=0,\dots,T$, where $V$ denotes the value process of a self-financing strategy. More precisely, Theorem 5.14 of \cite{follmerschied} guarantees that if $\bar\xi$ is a self-financing strategy and its value process $V$ satisfies $V_T\geq 0$, then $V$ is a ${\textbf{P}}^*$-martingale for any ${\textbf{P}}^*\in{\mathcal{P}}$. In particular, in the proof the martingale property of $X$ and Proposition 5.7 of \cite{follmerschied} are used successively in the following way: \[ {\mathbb{E}}^*[V_T\mid{\mathcal{F}}_{T-1}]={\mathbb{E}}^*[V_{T-1}+\xi_T\cdot(X_T-X_{T-1})\mid{\mathcal{F}}_{T-1}]=V_{T-1}+\xi_T\cdot{\mathbb{E}}^*[X_T-X_{T-1}\mid{\mathcal{F}}_{T-1}]=V_{T-1}. \] \end{remark} A discounted European contingent claim is represented by a non-negative, ${\mathcal{F}}_T$-measurable random variable $H$ such that \[ \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]<\infty. \] \begin{definition} \label{def:superhedging} Let $H$ be a European contingent claim. A self-financing trading strategy $\bar\xi$ whose value process $V$ satisfies \[ V_T\geq H\quad{\textbf{P}}\text{-a.s.} \] is called a \emph{superhedging strategy} for $H$. In particular, any superhedging strategy is admissible since $H\geq 0$ by definition. \end{definition}\noindent The upper Snell envelope for a discounted European claim $H$ is defined by \[ U_t^\uparrow(H)=U_t^\uparrow:=\esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_t],\quad\text{for }t=0,1,\dots,T. \] \begin{corollary}[Corollary 7.3, Theorem 7.5, Corollary 7.15, \cite{follmerschied}] \label{cor:resultsfollmerschied} The process \[ \left(\esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_t]\right)_{t=0,1,\dots,T}, \] is the smallest ${\mathcal{P}}$-supermartingale whose terminal value dominates $H$.
Furthermore, there exists an adapted increasing process $B=(B_t)_{t=0,\dots,T}$ with $B_0=0$ and a $d$-dimensional predictable process $\xi=(\xi_t)_{t=1,\dots,T}$ such that \begin{equation} \label{eq:uniformdoob} \esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_t]=\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^t\xi_k\cdot(X_k-X_{k-1})-B_t\quad{\textbf{P}}\text{-a.s. for all }t=0,\dots,T. \end{equation} Moreover, $\esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_t]=\essinf{\mathcal{U}}_t$, where ${\mathcal{U}}_t$ is defined in \eqref{eq:superhedgingprice} below, and \begin{equation} \label{eq:uniformdoobdominates} \esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_t]+\sum_{k=t+1}^T\xi_k\cdot(X_k-X_{k-1})\geq H,\quad\text{for all }t=0,\dots,T. \end{equation} \end{corollary}\noindent The process $B$ in \eqref{eq:uniformdoob} is sometimes called the process of consumption, see \cite{kramkovduality}. Equations \eqref{eq:uniformdoob} and \eqref{eq:uniformdoobdominates} yield \begin{equation} \label{eq:optimization} \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\geq B_t\geq B_{t-1}\geq 0\quad\text{for all }t=1,\dots,T. \end{equation} Set \begin{equation} \label{eq:superhedgingprice} {\mathcal{U}}_t:=\left\{\tilde U_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):\exists \tilde\xi\text{ pred. s.t. }\tilde U_t+\sum_{k=t+1}^T \tilde\xi_k\cdot(X_k-X_{k-1})\geq H\quad{\textbf{P}}\text{-a.s.}\right\}. \end{equation} \begin{corollary}[Corollary 7.18, \cite{follmerschied}] Suppose $H$ is a discounted European claim with \[ \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]<\infty. \] Then \[ \esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_t]=\essinf{\mathcal{U}}_t. \] \end{corollary}\noindent Corollary 7.18 of \cite{follmerschied} and \eqref{eq:uniformdoobdominates} guarantee that $U_t^\uparrow$ is the minimal amount needed at time $t$ to start a superhedging strategy and thus there exists a predictable process $\xi$ such that \[ \esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_t]+\sum_{k=t+1}^T\xi_k\cdot (X_k-X_{k-1})\geq H. \] Further, $U_0^\uparrow$ is called the \emph{superhedging price} at time $t=0$ of $H$ and coincides with the upper bound of the set of arbitrage-free prices. \section{Superhedging price for $t=0$} \label{sec:price0} In this section we approximate the superhedging price for $t=0$ in two steps. In the first part, we introduce the theory of quantile hedging, see \cite{foellmer99}. In Theorem \ref{thm:convergencesuccessset} we prove that the quantile hedging price for $\alpha\in(0,1)$ converges to the superhedging price as $\alpha$ tends to $1$. Analogously, in Corollary \ref{cor:successratioconvergence} we prove that, as $\alpha$ tends to $1$, also the quantile hedging prices formulated in terms of success ratios converge to the superhedging price.\\ In the second part, we prove in Theorem \ref{thm:universalapprox2} that the superhedging price and the associated strategies can be approximated by neural networks. \subsection{Quantile hedging} \label{sec:quantilehedging} \subsubsection{Success sets} \label{sec:successsets} In incomplete markets perfect replication of a contingent claim may not be possible. Superhedging offers an alternative hedging method but it presents two main disadvantages. On the one hand, the superhedging strategy reduces not only the risk but also the possibility to profit.
On the other hand, the superhedging price may turn out to be too high.\\ Quantile hedging was proposed for the first time in \cite{foellmer99} to address these problems. Given a probability of success $\alpha\in (0,1)$, we consider the minimization problem \begin{align} \nonumber \inf {\mathcal{U}}_0^\alpha:=\inf\{u\in{\mathbb{R}}:\ &\exists \xi=(\xi_t)_{t=1,\dots,T}\text{ predictable process with values in }{\mathbb{R}}^d\text{ such that }\\ \label{eq:quantilehedging} &(u,\xi)\text{ is admissible and }{\textbf{P}}\left(u+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})\geq H\right)\geq \alpha\}. \end{align} Here $1-\alpha$ is called the shortfall probability. Quantile hedging may be considered as a dynamic version of the value at risk concept.\\ For an admissible strategy $(u,\xi)$ with associated value process $V$, we call \[ \{V_T\geq H\} \] the \emph{success set}. \begin{remark} \label{rem:admissiblestrategies} Note that in \eqref{eq:quantilehedging} we need to require that $(u,\xi)$ is admissible since this is not automatically implied by the definition of quantile hedging as in the case of superhedging strategies in Definition \ref{def:superhedging}. \end{remark}\noindent Proposition \ref{prop:alternativepropblem} below provides an equivalent formulation of the quantile hedging problem \eqref{eq:quantilehedging}, see also \cite{foellmer99}. \begin{proposition} \label{prop:alternativepropblem} Fix $\alpha\in (0,1)$. Then \[ \inf{\mathcal{U}}_0^\alpha=\inf\left\{\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{A}]:\ A\in{\mathcal{F}}_T, \ {\textbf{P}}(A)\geq \alpha\right\}. \] \end{proposition} \begin{proof} $``\leq"$: Take $A\in{\mathcal{F}}_T$ such that ${\textbf{P}}(A)\geq \alpha$. We prove that \begin{equation} \label{eq:alternativproblemproof1} \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]\in \left\{u \in{\mathbb{R}}:\exists \xi\text{ adm. s.t. }{\textbf{P}}\left(u+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})\geq H\right)\geq \alpha\right\}. \end{equation} By the well-known superhedging duality, see Theorem 7.13 of \cite{follmerschied}, we have that \[ \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]=\inf \left\{u \in{\mathbb{R}}:\exists \xi\text{ pred. s.t. }u+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})\geq H\mathds{1}_A\ {\textbf{P}}\text{-a.s.}\right\}, \] and that there exists a superhedging strategy $\hat\xi$ for $H\mathds{1}_A$ with initial value $\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]$, i.e., \begin{equation} \label{eq:superhedgingonA} \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]+\sum_{k=1}^T\hat\xi_k\cdot(X_k-X_{k-1})\geq H\mathds{1}_A\geq 0\ {\textbf{P}}\text{-a.s.} \end{equation} In particular, by \eqref{eq:superhedgingonA} we get for $\hat\xi$ that \[ {\textbf{P}}\left(\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]+\sum_{k=1}^T\hat\xi_k\cdot(X_k-X_{k-1})\geq H\right)\geq {\textbf{P}}(A) \geq \alpha. \] This implies \eqref{eq:alternativproblemproof1} and hence \[ \inf{\mathcal{U}}_0^\alpha\leq \inf\left\{\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{A}]:\ A\in{\mathcal{F}}_T, \ {\textbf{P}}(A)\geq \alpha\right\}. \] $``\geq"$: Take $\tilde u\in {\mathcal{U}}_0^\alpha$ and denote by $\tilde\xi=(\tilde\xi_k)_{k=1}^T$ the corresponding strategy such that \[ {\textbf{P}}\left(\tilde u+\sum_{k=1}^T\tilde\xi_k\cdot (X_k-X_{k-1})\geq H\right)\geq \alpha.
\] Define the set $\tilde A$ by \[ \tilde A:=\left\{\omega\in\Omega:\tilde u+\sum_{k=1}^T\tilde\xi_k(\omega)\cdot(X_k(\omega)-X_{k-1}(\omega))\geq H(\omega)\right\}. \] Clearly $\tilde A\in{\mathcal{F}}_T$ and ${\textbf{P}}(\tilde A)\geq \alpha$. By construction we have that \[ \left(\tilde u+\sum_{k=1}^T\tilde\xi_k\cdot(X_k-X_{k-1})\right)\mathds{1}_{\tilde A}\geq H\mathds{1}_{\tilde{A}}\quad {\textbf{P}}\text{-a.s.} \] and because $\tilde\xi$ is assumed to be admissible, we have \[ \left(\tilde u+\sum_{k=1}^T\tilde\xi_k\cdot(X_k-X_{k-1})\right)\mathds{1}_{\tilde A^c}\geq 0\quad {\textbf{P}}\text{-a.s.} \] In particular, $\tilde u\in {\mathcal{U}}_0(H\mathds{1}_{\tilde A})$ and by Theorem 7.13 of \cite{follmerschied} we obtain \begin{equation} \label{eq:tildeaconstruction} \tilde u\geq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{\tilde A}]\in \left\{\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{A}]:\ A\in{\mathcal{F}}_T, \ {\textbf{P}}(A)\geq \alpha\right\}. \end{equation} That is, for an arbitrary $\tilde u\in {\mathcal{U}}_0^\alpha$ we have constructed a set $\tilde A$ such that \eqref{eq:tildeaconstruction} holds. Therefore, \[ \inf{\mathcal{U}}_0^\alpha\geq \inf\left\{\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{A}]:\ A\in{\mathcal{F}}_T, \ {\textbf{P}}(A)\geq \alpha\right\}. \] \end{proof}\noindent Corollary 7.15 of \cite{follmerschied} guarantees that there exists a superhedging strategy with initial value $\inf{\mathcal{U}}_0$. In contrast, there might be no explicit solution to the quantile hedging approach \eqref{eq:quantilehedging}. If a solution to the quantile hedging approach exists, then Proposition \ref{prop:alternativepropblem} states that it is given by the solution of the classical hedging formulation for the knockout option $H\mathds{1}_A$ for some suitable $A\in{\mathcal{F}}_T$. However, such a set $A\in{\mathcal{F}}_T$ does not always exist. In particular, quantile hedging does not admit an explicit solution in general. The Neyman-Pearson lemma suggests considering so-called success ratios instead of success sets. We will briefly discuss success ratios below. For further information we refer the interested reader to \cite{foellmer99}.\\ We now show that the superhedging price $\inf{\mathcal{U}}_0$ can be approximated by the quantile hedging price $\inf{\mathcal{U}}_0^\alpha$ for $\alpha$ tending to $1$. \begin{definition} For $\alpha\in (0,1)$ we define \[ {\mathcal{F}}^\alpha:=\left\{A\in{\mathcal{F}}_T:{\textbf{P}}(A)\geq \alpha\right\}. \] \end{definition} \begin{theorem} \label{thm:convergencesuccessset} The $\alpha$-quantile hedging price converges to the superhedging price as $\alpha$ tends to $1$, i.e., \[ \inf{\mathcal{U}}_0^\alpha\xrightarrow{\alpha\uparrow 1}\inf{\mathcal{U}}_0. \] \end{theorem} \begin{proof} We first note that, using Proposition \ref{prop:alternativepropblem} it suffices to prove \[ \inf_{A\in{\mathcal{F}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]\xrightarrow{\alpha\uparrow 1}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]. \] Let $(\alpha_n)_{n\in{\mathbb{N}}}\subset (0,1)$ be an increasing sequence such that $\alpha_n$ converges to $1$ as $n$ tends to infinity.
Note that \begin{equation} \label{eq:setproblemmonotone} \inf_{A\in{\mathcal{F}}^{\alpha_n}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{A}]\leq \inf_{A\in{\mathcal{F}}^{\alpha_{n+1}}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{A}]\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H], \end{equation} because ${\mathcal{F}}^{\alpha_{n+1}}\subset{\mathcal{F}}^{\alpha_n}$. Therefore, the limit of $(\inf_{A\in{\mathcal{F}}^{\alpha_n}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{A}])_{n\in{\mathbb{N}}}$ exists because the sequence is monotone and bounded. Let $\varepsilon>0$ be arbitrary. For each $n\in{\mathbb{N}}$ there exists $A_n\in{\mathcal{F}}^{\alpha_n}$ such that \begin{equation} \label{eq:setproblemapprox} \inf_{A\in{\mathcal{F}}^{\alpha_n}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{A_n}]<\inf_{A\in{\mathcal{F}}^{\alpha_n}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]+\varepsilon. \end{equation} Then, by Lemma\footnote{For random variables $(\xi_n)_{n\in{\mathbb{N}}}\subset L^0(\Omega,{\mathcal{F}},{\textbf{P}};{\mathbb{R}}^d)$ we denote by $\text{conv}\{\xi_1,\xi_2,\dots\}$ the convex hull of $\xi_1,\xi_2,\dots$ which is defined $\omega$-wise. \begin{lemma*}[Lemma 1.70, \cite{follmerschied}] \label{lem:lemma170} Let $(\xi_n)_{n\in{\mathbb{N}}}$ be a sequence in $L^0(\Omega,{\mathcal{F}},{\textbf{P}};{\mathbb{R}}^d)$ such that $\sup_{n\in{\mathbb{N}}}|\xi_n|<\infty$ ${\textbf{P}}$-a.s. Then there exists a sequence of convex combinations \[ \eta_n\in\text{conv}\{\xi_n,\xi_{n+1},\dots\},\quad n\in{\mathbb{N}}, \] which converges ${\textbf{P}}$-almost surely to some $\eta\in L^0(\Omega,{\mathcal{F}},{\textbf{P}};{\mathbb{R}}^d)$. \end{lemma*}} 1.70 of \cite{follmerschied} there exists a sequence $\psi_n\in\text{conv}\{\mathds{1}_{A_n},\mathds{1}_{A_{n+1}},\dots\}$, $n\in{\mathbb{N}}$, which converges ${\textbf{P}}$-a.s. to some $\psi\in L^\infty(\Omega,{\mathcal{F}}_T,{\textbf{P}};[0,1])$. Note that it is not clear if $\psi$ is an indicator function of some ${\mathcal{F}}_T$-measurable set. We will show that $\psi=1$ ${\textbf{P}}$-a.s. For $n\in{\mathbb{N}}$, $\psi_n$ is of the form \begin{equation} \label{eq:psirepresentation} \psi_n=\sum_{k=n}^\infty \lambda_k^n \mathds{1}_{A_k}, \end{equation} for some $(\lambda_k^n)_{k=n}^\infty\geq 0$ such that $\sum_{k=n}^\infty\lambda_k^n=1$. By dominated convergence and \eqref{eq:psirepresentation} we obtain \begin{equation} \label{eq:setproblempsi1} {\mathbb{E}}_{\textbf{P}}[\psi]=\lim_{n\to\infty}{\mathbb{E}}_{\textbf{P}}[\psi_n]=\lim_{n\to\infty}{\mathbb{E}}_{\textbf{P}}\left[\sum_{k=n}^\infty\lambda_k^n\mathds{1}_{A_k}\right]=\lim_{n\to\infty}\left(\sum_{k=n}^\infty\lambda_k^n{\mathbb{E}}_{\textbf{P}}\left[\mathds{1}_{A_k}\right]\right). \end{equation} Because $\sum_{k=n}^\infty\lambda_k^n=1$ and by the definition of the limit inferior, equation \eqref{eq:setproblempsi1} yields \begin{align} \nonumber {\mathbb{E}}_{\textbf{P}}[\psi]&\geq \lim_{n\to\infty}\left(\sum_{k=n}^\infty\lambda_k^n\inf_{l\geq n}{\mathbb{E}}_{\textbf{P}}\left[\mathds{1}_{A_l}\right]\right) =\lim_{n\to\infty}\left(\inf_{l\geq n}{\mathbb{E}}_{\textbf{P}}\left[\mathds{1}_{A_l}\right]\right)\\ \label{eq:setproblempsi2} &=\liminf_{n\to\infty}{\mathbb{E}}_{\textbf{P}}[\mathds{1}_{A_n}]=\liminf_{n\to\infty}{\textbf{P}}(A_n)\geq \liminf_{n\to\infty}\alpha_n =1.
\end{align} Since $0\leq \psi\leq 1$, it follows that $\psi=1$ ${\textbf{P}}$-a.s. By \eqref{eq:setproblemapprox} and with similar arguments as in \eqref{eq:setproblempsi1} and \eqref{eq:setproblempsi2} using the supremum instead of the infimum, we obtain by dominated convergence for any ${\textbf{P}}^*\in{\mathcal{P}}$ that \begin{align} \nonumber \limsup_{n\to\infty}\left(\inf_{A\in{\mathcal{F}}^{\alpha_n}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]+\varepsilon\right)&\geq \limsup_{n\to\infty}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_{A_n}]\\ \label{eq:setproblemliminf} &\geq \lim_{n\to\infty}{\mathbb{E}}^*[H\psi_n]={\mathbb{E}}^*[H\psi]={\mathbb{E}}^*[H]. \end{align} Since the limit on the left hand side in \eqref{eq:setproblemliminf} exists by \eqref{eq:setproblemmonotone} and \eqref{eq:setproblemliminf} holds for all ${\textbf{P}}^*\in{\mathcal{P}}$, we get \begin{equation} \label{eq:setproblemliminf2} \lim_{n\to\infty}\left(\inf_{A\in{\mathcal{F}}^{\alpha_n}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]+\varepsilon\right)\geq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]. \end{equation} Thus, we observe that \eqref{eq:setproblemmonotone} and \eqref{eq:setproblemliminf2} yield \[ \lim_{n\to\infty}\left(\inf_{A\in{\mathcal{F}}^{\alpha_n}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]\right)\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]\leq \lim_{n\to\infty}\left(\inf_{A\in{\mathcal{F}}^{\alpha_n}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]+\varepsilon\right). \] As $\varepsilon>0$ was arbitrary this implies that \[ \lim_{n\to\infty}\left(\inf_{A\in{\mathcal{F}}^{\alpha_n}}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mathds{1}_A]\right)=\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]. \] \end{proof}\noindent \subsubsection{Success ratios} \label{sec:successratio} Let ${\mathcal{R}}:=L^\infty(\Omega,{\mathcal{F}}_T,{\textbf{P}};[0,1])$ be the set of randomized tests. For $\alpha\in(0,1)$ we denote by ${\mathcal{R}}^\alpha$ the set \[ {\mathcal{R}}^\alpha:=\lbrace\varphi\in{\mathcal{R}}:\ {\mathbb{E}}_{\textbf{P}}[\varphi]\geq \alpha\rbrace. \] We now consider the following minimization problem \begin{equation} \label{eq:quantilehedgingtestfunctions} \inf\left\{ \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*\left[H\varphi\right]:\varphi\in{\mathcal{R}}^\alpha\right\}. \end{equation} In a first step, we prove that this problem admits an explicit solution. In a second step, we show that the solution is given by the so-called success ratio, see Definition \ref{def:successratio} below. In particular, \eqref{eq:quantilehedgingtestfunctions} can be formulated in terms of success ratios, see also \cite{foellmer99}. In Propositions \ref{prop:testfunctionsolution} and \ref{prop:successratiosolution} we provide proofs of some results of \cite{foellmer99} for the sake of completeness. \begin{proposition} \label{prop:testfunctionsolution} There exists a randomized test $\tilde\varphi\in{\mathcal{R}}$ such that \[ {\mathbb{E}}_{\textbf{P}}[\tilde\varphi]=\alpha, \] and \begin{equation} \label{eq:minimizationthm} \inf_{\varphi \in{\mathcal{R}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi]=\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\tilde \varphi].
\end{equation} \end{proposition} \begin{proof} Take a sequence $(\varphi_n)_{n\in{\mathbb{N}}}\subset {\mathcal{R}}^\alpha$ such that \begin{equation} \label{eq:successratioapproxlim} \lim_{n\to\infty}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi_n]=\inf_{\varphi\in{\mathcal{R}}^\alpha} \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi]. \end{equation} By Lemma 1.70 of \cite{follmerschied} there exists a sequence of convex combinations $\tilde\varphi_n\in\text{conv}\{\varphi_n,\varphi_{n+1},\dots\}$ converging ${\textbf{P}}$-a.s. to a function $\tilde\varphi\in{\mathcal{R}}$ because $\varphi_n\in [0,1]$ for all $n\in{\mathbb{N}}$. Clearly $\tilde\varphi_n\in{\mathcal{R}}^\alpha$ for each $n\in{\mathbb{N}}$. Hence, dominated convergence yields that \begin{equation} \label{eq:tildeconstraint} {\mathbb{E}}_{\textbf{P}}[\tilde\varphi]=\lim_{n\to\infty}{\mathbb{E}}_{\textbf{P}}[ \tilde\varphi_n]\geq \alpha, \end{equation} and we get that $\tilde\varphi\in{\mathcal{R}}^\alpha$. In the following we use similar arguments as in the proof of Theorem \ref{thm:convergencesuccessset}. In particular, $\tilde{\varphi}_n$ is of the form \begin{equation} \label{eq:testfunctionrepresentation} \tilde\varphi_n=\sum_{k=n}^\infty\lambda_k^n \varphi_k, \end{equation} for some $(\lambda_k^n)_{k=n}^\infty\geq 0$ such that $\sum_{k=n}^\infty\lambda_k^n=1$. By \eqref{eq:testfunctionrepresentation} we obtain for any ${\textbf{P}}^*\in{\mathcal{P}}$ that \begin{equation} \label{eq:limsupconvcomb} \limsup_{n\to\infty} {\mathbb{E}}^*\left[H\varphi_n\right]=\lim_{n\to\infty}\left(\sup_{k\geq n}{\mathbb{E}}^*\left[H\varphi_k\right]\right)\geq \lim_{n\to\infty}\left(\sum_{k=n}^\infty\lambda_k^n{\mathbb{E}}^*\left[H\varphi_k\right]\right)=\lim_{n\to\infty}{\mathbb{E}}^*\left[H\tilde\varphi_n\right]= {\mathbb{E}}^*\left[H\tilde\varphi\right], \end{equation} where we used monotone convergence to interchange summation and expectation, and dominated convergence in the last equality. Moreover, we obtain by \eqref{eq:successratioapproxlim}, \eqref{eq:limsupconvcomb} and dominated convergence that \begin{equation} \label{eq:optimizerinquality} \inf_{\varphi\in{\mathcal{R}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi]=\limsup_{n\to\infty}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi_n]\geq \limsup_{n\to\infty}{\mathbb{E}}^*[H\varphi_n]\geq \lim_{n\to\infty}{\mathbb{E}}^*[H\tilde\varphi_n]={\mathbb{E}}^*[H\tilde\varphi]. \end{equation} Since \eqref{eq:optimizerinquality} holds for all ${\textbf{P}}^*\in{\mathcal{P}}$ we obtain \[ \inf_{\varphi\in{\mathcal{R}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi]\geq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\tilde\varphi]. \] Furthermore, $\tilde\varphi\in{\mathcal{R}}^\alpha$ by \eqref{eq:tildeconstraint} yields \[ \inf_{\varphi\in{\mathcal{R}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi]= \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\tilde\varphi]. \] So $\tilde\varphi$ is the desired minimizer.\\ We now show that ${\mathbb{E}}_{\textbf{P}}[\tilde\varphi]=\alpha$ holds.
If ${\mathbb{E}}_{\textbf{P}}[\tilde\varphi]>\alpha$, then we can find $\varepsilon>0$ such that $\varphi_\varepsilon:=(1-\varepsilon)\tilde\varphi\in{\mathcal{R}}^\alpha$, and \begin{equation} \label{eq:varphitildealpha} \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi_\varepsilon]=(1-\varepsilon)\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\tilde\varphi]<\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\tilde\varphi], \end{equation} which contradicts the minimality property of $\tilde\varphi$ (if $\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\tilde\varphi]=0$, one can instead rescale $\tilde\varphi$ directly to achieve ${\mathbb{E}}_{\textbf{P}}[\tilde\varphi]=\alpha$ without changing the value of \eqref{eq:minimizationthm}). Thus, \[ {\mathbb{E}}_{\textbf{P}}[\tilde\varphi]=\alpha. \] \end{proof}\noindent \begin{definition} \label{def:successratio} For an admissible strategy with value process $V\in{\mathcal{V}}$ we define its success ratio by \begin{equation} \label{eq:successratiodef} \varphi_V:=\mathds{1}_{\{V_T\geq H\}}+\frac{V_T}{H}\mathds{1}_{\{V_T<H\}}. \end{equation} For $\alpha\in (0,1)$ we denote by ${\mathcal{V}}^\alpha$ the set \[ {\mathcal{V}}^\alpha:=\left\{\varphi_V\in{\mathcal{R}}:\ V\in{\mathcal{V}},\ {\mathbb{E}}_{\textbf{P}}\left[\varphi_V\right]\geq \alpha\right\}. \] \end{definition}\noindent \begin{remark} Note that for $V\in{\mathcal{V}}$ we have that $V_T\geq 0$ ${\textbf{P}}$-a.s. In particular, ${\textbf{P}}(\{H=0\}\cap \{V_T<H\})=0$ and hence \eqref{eq:successratiodef} is well-defined. \end{remark}\noindent In the following, we formulate the optimization problem \eqref{eq:quantilehedging} in terms of success ratios and prove that it is equivalent to \eqref{eq:quantilehedgingtestfunctions}, see Proposition \ref{prop:successratiosolution} below.\\ Consider the minimization problem \begin{equation} \label{eq:quantilehedgingsuccessratios} \inf\left\{\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*\left[H\varphi_V\right]:\ V\in{\mathcal{V}}^\alpha\right\}. \end{equation} \begin{proposition} \label{prop:successratiosolution} There exists an admissible strategy with value process $\tilde V$ such that \[ {\mathbb{E}}_{\textbf{P}}\left[\varphi_{\tilde V}\right]=\alpha, \] and \begin{equation} \label{eq:successratiominimizer} \inf_{V\in{\mathcal{V}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*\left[H\varphi_{V}\right]=\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*\left[H\varphi_{\tilde V}\right], \end{equation} where $\varphi_V$ denotes the success ratio associated to a portfolio $V\in{\mathcal{V}}$ as in \eqref{eq:successratiodef}. Moreover, $\varphi_{\tilde V}$ coincides with the solution $\tilde \varphi$ from Proposition \ref{prop:testfunctionsolution}. \end{proposition} \begin{proof} Note that \[ \left\{\varphi_V\in{\mathcal{R}}:\ V\in{\mathcal{V}}^\alpha\right\}\subseteq {\mathcal{R}}^\alpha, \] and thus \begin{equation} \label{eq:successratiossolutionviatest} \inf_{\varphi\in{\mathcal{R}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi]\leq \inf_{V\in{\mathcal{V}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*\left[H\varphi_{V}\right]. \end{equation} By Proposition \ref{prop:testfunctionsolution}, we know that the left hand side of \eqref{eq:successratiossolutionviatest} admits a solution $\tilde\varphi\in{\mathcal{R}}$. We prove that there exists $\tilde V\in{\mathcal{V}}^\alpha$ such that \[ \tilde\varphi=\varphi_{\tilde V}\quad {\textbf{P}}\text{-a.s.} \] Define the modified claim \[ \tilde H:=H\tilde\varphi.
\] By Theorem 7.13 of \cite{follmerschied} there exists a minimal superhedging strategy $\tilde\xi$ with value process $\tilde V$ for $\tilde H$ such that \[ \tilde V_0 = \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*\left[\tilde H\right]. \] First, $\tilde\xi$ is admissible since it superhedges $\tilde H\geq 0$ (cf.\ Remark \ref{rem:admissiblestrategies} and Definition \ref{def:superhedging}), and hence $\tilde V\in{\mathcal{V}}$. Now, we show that $\tilde V\in{\mathcal{V}}^\alpha$. We have \begin{equation} \label{eq:varphitildeinequality1} \varphi_{\tilde V}=\mathds{1}_{\{\tilde V_T\geq H\}}+\frac{\tilde V_T}{H}\mathds{1}_{\{\tilde V_T<H\}}\geq \tilde\varphi\mathds{1}_{\{\tilde V_T\geq H\}} + \frac{H\tilde\varphi}{H}\mathds{1}_{\{\tilde V_T< H\}}=\tilde\varphi, \end{equation} where we used that $\tilde V$ is the value process of the minimal superhedging strategy of $\tilde H=H\tilde\varphi$ and $0\leq \tilde\varphi\leq 1$. Therefore, we get \[ {\mathbb{E}}_{\textbf{P}}[\varphi_{\tilde{V}}]\geq {\mathbb{E}}_{\textbf{P}} [\tilde\varphi]\geq \alpha, \] so $\tilde V\in{\mathcal{V}}^\alpha$ and $\varphi_{\tilde{V}}\in{\mathcal{R}}^\alpha$. It remains to show that $\tilde\varphi=\varphi_{\tilde V}$ ${\textbf{P}}$-a.s. By \eqref{eq:varphitildeinequality1} we obtain $\varphi_{\tilde V}\geq \tilde\varphi$. For the reverse direction we first show that $\varphi_{\tilde V}$ is also a minimizer of the problem \eqref{eq:minimizationthm}, i.e., \[ \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi_{\tilde V}]\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\tilde\varphi]. \] Indeed, since $\tilde V$ is the value process of an admissible strategy, $\tilde V$ is a ${\textbf{P}}^*$-martingale for all ${\textbf{P}}^*\in{\mathcal{P}}$ by Theorem 5.14 of \cite{follmerschied} and thus we get that \begin{equation} \label{eq:varphivminimal} \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi_{\tilde V}]=\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*\left[H\left(\mathds{1}_{\{\tilde V_T\geq H\}}+\frac{\tilde V_T}{H}\mathds{1}_{\{\tilde V_T<H\}}\right)\right]\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[\tilde V_T]= \tilde V_0 = \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\tilde{\varphi}], \end{equation} where we used in the last equality that $\tilde V_0$ is the superhedging price of $\tilde H=H\tilde\varphi$. In particular, $\varphi_{\tilde{V}}\in{\mathcal{R}}^\alpha$ is a minimizer. By the same arguments as in \eqref{eq:varphitildealpha} it follows that \begin{equation} \label{eq:varphivalpha} {\mathbb{E}}_{\textbf{P}}[\varphi_{\tilde{V}}]=\alpha. \end{equation} Thus, we get by \eqref{eq:varphitildealpha} and \eqref{eq:varphivalpha} that \[ {\mathbb{E}}_{\textbf{P}}[\varphi_{\tilde V}]=\alpha={\mathbb{E}}_{\textbf{P}}[\tilde\varphi], \] i.e., ${\mathbb{E}}_{\textbf{P}}[\varphi_{\tilde V}-\tilde\varphi]=0$. Together with \eqref{eq:varphitildeinequality1}, this implies $\varphi_{\tilde V}=\tilde\varphi$ ${\textbf{P}}$-a.s. We have proved that $\tilde V\in{\mathcal{V}}^\alpha$ and \[ \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*\left[H\varphi_{\tilde V}\right]=\inf_{\varphi\in{\mathcal{R}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi]\leq \inf_{V\in{\mathcal{V}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*\left[H\varphi_{V}\right]. \] In particular, $\varphi_{\tilde V}$ solves \eqref{eq:successratiominimizer} and the quantile hedging formulations of \eqref{eq:quantilehedgingtestfunctions} and \eqref{eq:quantilehedgingsuccessratios} are equivalent.
\end{proof}\noindent \begin{corollary} \label{cor:successratioconvergence} The following convergence holds: \[ \inf_{V\in{\mathcal{V}}^\alpha}\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\varphi_V]\xrightarrow{\alpha\uparrow 1} \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H], \] where $\varphi_V$ denotes the success ratio associated to a portfolio $V\in{\mathcal{V}}$ as in \eqref{eq:successratiodef}. \end{corollary} \begin{proof} The proof is similar to the one of Theorem \ref{thm:convergencesuccessset} and is omitted. \end{proof}\noindent \subsection{Neural network approximation for $t=0$} \label{sec:NNprice0} We now study how to approximate the superhedging price at $t=0$ by using neural networks. \\ We recall the following definition, see e.g.\ \cite{buehler2018}. Common choices for $\sigma$ below are $\sigma(x)=\frac{1}{1+e^{-x}}$ and $\sigma(x)=\tanh(x)$. \begin{definition}\label{def:nn} Consider $L, N_0,N_1,\ldots,N_L \in \mathbb{N}$ with $L \geq 2$, $\sigma \colon (\mathbb{R},{\mathcal{B}}({\mathbb{R}})) \to (\mathbb{R},{\mathcal{B}}({\mathbb{R}}))$ measurable and for any $\ell=1,\ldots,L$, let $W_\ell \colon \mathbb{R}^{N_{\ell-1}} \to \mathbb{R}^{N_\ell}$ be an affine function. A function $F \colon \mathbb{R}^{N_0} \to \mathbb{R}^{N_L}$ defined as \[ F=W_L \circ F_{L-1} \circ \cdots \circ F_1 \text{ with } F_\ell = \sigma \circ W_\ell \, \text{ for } \ell=1,\ldots,L-1, \] is called a \emph{(feed forward) neural network}. Here the \textit{activation function} $\sigma$ is applied componentwise. $L$ denotes the number of layers, $N_1,\ldots,N_{L-1}$ denote the dimensions of the hidden layers and $N_0$, $N_L$ the dimension of the input and output layers, respectively. For any $\ell=1,\ldots,L$ the affine function $W_\ell$ is given as $ W_\ell(x) = A^\ell x + b^\ell$ for some $A^\ell \in \mathbb{R}^{N_\ell \times N_{\ell-1}}$ and $b^\ell \in \mathbb{R}^{N_\ell}$. For any $i=1,\ldots,N_\ell$, $j=1,\ldots,N_{\ell-1}$ the number $A^\ell_{i j}$ is interpreted as the weight of the edge connecting node $j$ of layer $\ell-1$ to node $i$ of layer $\ell$. The number of non-zero weights of a network is $ \sum_{\ell=1}^L \|A^\ell\|_0 + \|b^\ell\|_0$, i.e.\ the sum of the number of non-zero entries of the matrices $A^\ell$, $\ell=1,\ldots,L$, and vectors $b^\ell$, $\ell=1,\ldots,L$. \end{definition}\noindent For $k=1,\ldots,T+1$ we denote the set of all possible neural network parameters corresponding to neural networks mapping $\mathbb{R}^{mk} \to \mathbb{R}^d$ by \[ \Theta_k = \cup_{L \geq 2} \cup_{(N_0,\ldots,N_L) \in \{mk \} \times \mathbb{N}^{L-1} \times \{d\}} \left(\times_{\ell = 1}^{L} \mathbb{R}^{N_\ell \times N_{\ell-1}} \times \mathbb{R}^{N_\ell} \right). \] With $F^{\theta_k}$ we denote the neural network with parameters specified by $\theta_k\in\Theta_k$, see Definition \ref{def:nn}. Recall that ${\mathcal{F}}_t=\sigma(Y_0,\dots,Y_t)=\sigma({\mathcal{Y}}_t)$ for $t=0,\dots,T$, where $Y$ is the ${\mathbb{R}}^m$-valued process from Section \ref{sec:prelim}. Then, any ${\mathcal{F}}_t$-measurable random variable $Z$ can be written as $Z=f_t({\mathcal{Y}}_t)$ for some measurable function $f_t$ by the Doob--Dynkin lemma. Using Theorem \ref{thm:universalapproxprob}, $f_t$ can be approximated by a deep neural network in a suitable metric.\\ The approximate superhedging price is then \begin{equation}\label{eq:NNprice} \resizebox{15cm}{!}{ $\inf{\mathcal{U}}_0^\Theta= \inf \left\lbrace u\in \mathbb{R} \,:\, \exists \, \theta_{k,\xi} \in \Theta_k, k=1,\dots,T, \text{ s.t.
} u + \sum_{k=1}^T F^{\theta_{k,\xi}}(\mathcal{Y}_{k-1}) \cdot (X_k - X_{k-1}) \geq H\ {\textbf{P}}\text{-a.s.} \right\rbrace.$ } \end{equation} For $\alpha\in (0,1)$ the approximate $\alpha$-quantile hedging price is then \begin{equation}\label{eq:NNpriceAlpha} \resizebox{15cm}{!}{ $\inf{\mathcal{U}}_0^{\Theta,\alpha}= \inf \left\lbrace u \in \mathbb{R} \,:\, \exists \, \theta_{k,\xi} \in \Theta_k, k=1,\ldots,T \text{ s.t. } \P\left( u + \sum_{k=1}^T F^{\theta_{k,\xi}}(\mathcal{Y}_{k-1}) \cdot (X_k - X_{k-1}) \geq H \right) \geq \alpha \right\rbrace.$} \end{equation} For $C>0$ we also define the truncated approximate superhedging price $\inf{\mathcal{U}}_0^{\Theta,C}$ and the truncated approximate $\alpha$-quantile hedging price $\inf{\mathcal{U}}_0^{\Theta,C,\alpha}$ with \begin{equation} \resizebox{15cm}{!} { ${\mathcal{U}}_0^{\Theta,C}:=\left\lbrace u\in{\mathbb{R}}:\exists\theta_{k,\xi}\in\Theta_k, k=1,\dots,T\text{ s.t. }u+\sum_{k=1}^T \left(\left(F^{\theta_{k,\xi}}\wedge C\right)\vee (-C)\right)({\mathcal{Y}}_{t-1})\cdot(X_k-X_{k-1})\geq H\, {\textbf{P}}\text{-a.s.}\right\rbrace$} \end{equation} and \begin{equation} \resizebox{15cm}{!}{ ${\mathcal{U}}_0^{\Theta,C,\alpha}:=\left\lbrace u\in{\mathbb{R}}:\exists\theta_{k,\xi}\in\Theta_k, k=1,\dots,T\text{ s.t. }{\textbf{P}}\left(u+\sum_{k=1}^T \left(\left(F^{\theta_{k,\xi}}\wedge C\right)\vee (-C)\right)({\mathcal{Y}}_{t-1})\cdot(X_k-X_{k-1})\geq H\right)\geq\alpha\right\rbrace$}, \end{equation} where the maximum and minimum are taken componentwise. \begin{assumption} \label{ass:bdd} Suppose that \[ \resizebox{15cm}{!} { $\inf{\mathcal{U}}_0=\inf{\mathcal{U}}_0^{\text{bdd}}:=\inf\left\lbrace u\in{\mathbb{R}}:\exists\xi\text{ pred. s.t. }\xi_k\in L^\infty\ \forall k\in\{1,\dots,T\},\ u+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})\geq H\ {\textbf{P}}\text{-a.s.}\right\rbrace,$} \] where $\|\cdot\|_\infty$ denotes the $L^\infty$-norm. \end{assumption} The next result shows that $\inf{\mathcal{U}}_0^{\Theta,C,\alpha}$ can be used as an approximation of the superhedging price $\inf{\mathcal{U}}_0$. \begin{theorem} \label{thm:universalapprox2} Assume $\sigma$ is bounded and non-constant. Further, suppose Assumption \ref{ass:bdd} is fulfilled. Then for any $\varepsilon>0$ there exists $\alpha=\alpha(\varepsilon)\in (0,1)$ and $C=C(\varepsilon)\in (0,\infty)$ such that \begin{equation} \label{eq:NNpriceAlphaApproxB} \inf{\mathcal{U}}_0+\varepsilon\geq \inf{\mathcal{U}}_0^{\Theta,C,\alpha}\geq \inf{\mathcal{U}}_0-\varepsilon. \end{equation} \end{theorem} \begin{proof} By Assumption \ref{ass:bdd} we can consider $\inf{\mathcal{U}}_0^{\text{bdd}}$ instead of $\inf{\mathcal{U}}_0$. Set $\tilde u_0=\inf{\mathcal{U}}_0^{\text{bdd}}$. Then for $\varepsilon>0$ there exists a predictable strategy $ \tilde{\xi}$ such that $\sup_{1\leq k\leq T}\| \tilde\xi_k\|_\infty<\infty$ and $\tilde u_0 + \frac{\varepsilon}{2} + \sum_{k=1}^T \tilde{\xi}_k \cdot (X_k - X_{k-1}) \geq H,\ \P\text{-a.s.}$ Define $C=C(\varepsilon)$ by \begin{equation} \label{eq:NNdefC} C:=\sup_{1\leq k\leq T}\| \tilde\xi_k\|_\infty+1. \end{equation} Further, for $\alpha\in (0,1]$ define ${\mathcal{U}}_0^{C,\alpha}$ by \[ {\mathcal{U}}_0^{C,\alpha}:=\left\lbrace u\in{\mathbb{R}}:\exists\xi\text{ pred. s.t. }\sup_{1\leq k\leq T}\|\xi_k\|_\infty\leq C,\ {\textbf{P}}\left(u+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})\geq H\right)\geq \alpha\right\rbrace. 
\] First, we prove that the limit of $\inf{\mathcal{U}}_0^{C,\alpha}$ for $\alpha$ tending to $1$ exists and that \begin{equation} \label{eq:NNobjectiveconv} \inf{\mathcal{U}}_0^{\text{bdd}}\leq \lim_{\alpha\to 1}\inf{\mathcal{U}}_0^{C,\alpha}\leq \inf{\mathcal{U}}_0^{\text{bdd}}+\varepsilon. \end{equation} Let $(\alpha_n)_{n\in{\mathbb{N}}}\subset (0,1)$ be a sequence such that $\alpha_n\uparrow 1$ as $n$ tends to infinity. Then, $\inf{\mathcal{U}}_0^{C,\alpha_n}\leq \inf{\mathcal{U}}_0^{C,\alpha_{n+1}}\leq \inf{\mathcal{U}}_0^{C,1}=:\inf{\mathcal{U}}_0^C$ since \[ {\mathcal{U}}_0^{C,\alpha_n}\supset{\mathcal{U}}_0^{C,\alpha_{n+1}}, \] and therefore $u_n\leq u_{n+1}$, where $u_n:=\inf{\mathcal{U}}_0^{C,\alpha_n}$, for $n\in{\mathbb{N}}$. Thus, the limit $u^C=\lim_{n\to\infty} u_n$ is well-defined and $u^C\leq \inf{\mathcal{U}}_0^C$. Furthermore, for $n\in{\mathbb{N}}$ and $\delta>0$, there exists $\xi^{(n)}$ predictable such that $\sup_{1\leq k\leq T}\|\xi_k^{(n)}\|_\infty \leq C$ and \begin{equation} \label{eq:increasingprobability} {\textbf{P}}\left(u_n+\delta+\sum_{k=1}^T\xi_k^{(n)}\cdot(X_k-X_{k-1})\geq H\right)\geq \alpha_n. \end{equation} For $n\in{\mathbb{N}}$, define $A_n\in{\mathcal{F}}_T$ by \[ A_n:=\left\lbrace u_n+\delta+\sum_{k=1}^T\xi_k^{(n)}\cdot(X_k-X_{k-1})\geq H\right\rbrace. \] Then ${\textbf{P}}(A_n)\geq \alpha_n$ and hence ${\textbf{P}}(A_n)\uparrow 1$ as $n$ tends to infinity. Since $\sup_{1\leq k\leq T}\|\xi_k^{(n)}\|_\infty\leq C$ for all $n\in{\mathbb{N}}$ we get by Theorem 5.14 of \cite{follmerschied} for any ${\textbf{P}}^*\in{\mathcal{P}}$ that \begin{align} \label{eq:proof0conv1} u_{n}+\delta &={\mathbb{E}}^*\left[u_n+\delta+\sum_{k=1}^T\xi_k^{(n)}\cdot(X_k-X_{k-1})\right]\\ \nonumber &\geq {\mathbb{E}}^*\left[H\mathds{1}_{A_n}\right]+{\mathbb{E}}^*\left[\left(u_n+\delta+\sum_{k=1}^T\xi_k^{(n)}\cdot(X_k-X_{k-1})\right)\mathds{1}_{A_n^c}\right]\\ \nonumber &\geq {\mathbb{E}}^*[H\mathds{1}_{A_n}]+{\mathbb{E}}^*\left[\left(u_n+\delta-\sum_{k=1}^T \sum_{i=1}^d |\xi_k^{i,(n)}| |X_k^i-X_{k-1}^i|\right)\mathds{1}_{A_n^c}\right]\\ \label{eq:proof0conv2} &\geq {\mathbb{E}}^*[H\mathds{1}_{A_n}]+{\mathbb{E}}^*\left[\left(u_n+\delta-C\sum_{k=1}^T \sum_{i=1}^d|X_k^i-X_{k-1}^i|\right)\mathds{1}_{A_n^c}\right]. \end{align} Recall that $X=(X^1,\dots,X^d)$ is a $d$-dimensional ${\textbf{P}}^*$-martingale and that $u_1\leq u_n\leq u^C$ for all $n\in{\mathbb{N}}$, and thus for all $n\in{\mathbb{N}}$ \[ \left|u_n+\delta-C\sum_{k=1}^T\sum_{i=1}^d |X_k^i-X_{k-1}^i|\right|\leq \left(\left|u_1+\delta\right|+\left|u^C+\delta\right|+C\sum_{k=1}^T\sum_{i=1}^d |X_k^i-X_{k-1}^i|\right)\in L^1(\Omega,{\mathcal{F}}_T,{\textbf{P}}^*). \] Furthermore, $\mathds{1}_{A_n}$ converges to $1$ in probability as $n$ tends to infinity, since for any $\gamma \in (0,1)$ we have \[ {\textbf{P}}\left(\left|\mathds{1}_{A_n}-1\right|>\gamma\right)={\textbf{P}}\left(\mathds{1}_{A_n^c}>\gamma\right)={\textbf{P}}(A_n^c)\xrightarrow{n\to\infty} 0, \] because of \eqref{eq:increasingprobability}. By dominated convergence we obtain that \[ \lim_{n\to\infty}{\mathbb{E}}^*[H\mathds{1}_{A_{n}}]={\mathbb{E}}^*[H], \] and \[ \lim_{n\to\infty}{\mathbb{E}}^*\left[\left(u_{n}+\delta-C\sum_{k=1}^T\sum_{i=1}^d|X_k^i-X_{k-1}^i|\right)\mathds{1}_{A_{n}^c}\right]=0. \] Note that for dominated convergence, it is sufficient that $\mathds{1}_{A_n}$ converges only in probability.
Taking $n$ to infinity in \eqref{eq:proof0conv1} and \eqref{eq:proof0conv2} yields \begin{equation} \label{eq:limitC} \resizebox{15cm}{!}{ $\lim_{n\to\infty}u_{n}+\delta=u^C+\delta \geq \lim_{n\to\infty}\left({\mathbb{E}}^*[H\mathds{1}_{A_{n}}]+{\mathbb{E}}^*\left[\left(u_{n}+\delta-C\sum_{k=1}^T\sum_{i=1}^d|X_k^i-X_{k-1}^i|\right)\mathds{1}_{A_{n}^c}\right]\right)={\mathbb{E}}^*[H].$} \end{equation} As \eqref{eq:limitC} holds for all ${\textbf{P}}^*\in{\mathcal{P}}$ we get by the superhedging duality that \[ \lim_{n\to\infty}\inf{\mathcal{U}}_0^{C,\alpha_n}+\delta=u^C+\delta \geq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]=\inf{\mathcal{U}}_0=\inf{\mathcal{U}}_0^{\text{bdd}}. \] Because $\delta>0$ was arbitrary, this implies \[ \lim_{\alpha\to 1}\inf{\mathcal{U}}_0^{C,\alpha}\geq \inf{\mathcal{U}}_0=\inf{\mathcal{U}}_0^{\text{bdd}}. \] To conclude the proof of \eqref{eq:NNobjectiveconv}, we note that $(\tilde u_0+\frac{\varepsilon}{2})\in{\mathcal{U}}_0^C$ by definition and that ${\mathcal{U}}_0^C\subset {\mathcal{U}}_0^{\text{bdd}}$. This implies that $\inf {\mathcal{U}}_0^{\text{bdd}}\leq \inf{\mathcal{U}}_0^C$, and \[ \lim_{\alpha\to 1}\inf{\mathcal{U}}_0^{C,\alpha}\leq \inf{\mathcal{U}}_0^C\leq \tilde u_0+\frac{\varepsilon}{2}\leq \inf{\mathcal{U}}_0^{\text{bdd}}+\varepsilon, \] hence \eqref{eq:NNobjectiveconv} follows. We observe that ${\mathcal{U}}_0^{\Theta,C,\alpha}\subset {\mathcal{U}}_0^{C,\alpha}$ for all $\alpha\in (0,1)$. Furthermore, by \eqref{eq:NNobjectiveconv} for $\varepsilon>0$ there exists $\alpha=\alpha(\varepsilon)\in (0,1)$ such that \begin{equation} \label{eq:NNalpha} \inf{\mathcal{U}}_0-\varepsilon=\inf{\mathcal{U}}_0^{\text{bdd}}-\varepsilon\leq \inf{\mathcal{U}}_0^{C,\alpha}\leq \inf{\mathcal{U}}_0^{\Theta,C,\alpha}, \end{equation} which proves the second inequality in \eqref{eq:NNpriceAlphaApproxB}.\\ To prove the first inequality in \eqref{eq:NNpriceAlphaApproxB}, fix this $\alpha$. Consider \[ M_n = \left\lbrace \tilde u_0 + \frac{\varepsilon}{2} + \sum_{k=1}^T \tilde{\xi}_k \cdot (X_k - X_{k-1}) \geq H \right\rbrace \cap \{ \|X_k-X_{k-1}\| \leq n \text{ for } k=1,\ldots,T\}, \] for $n\in{\mathbb{N}}$. Then $M_n \subset M_{n+1}$ and therefore by continuity from below \[ 1= \P\left(\tilde u_0 + \frac{\varepsilon}{2} + \sum_{k=1}^T \tilde{\xi}_k \cdot (X_k - X_{k-1}) \geq H \right) = \P(\cup_{n \in \mathbb{N}} M_n ) = \lim_{n \to \infty} \P(M_n). \] Thus, we may choose $n \in \mathbb{N}$ such that $\P(M_n) \geq \frac{\alpha+1}{2}$. As $ \tilde{\xi}$ is predictable, for each $k=1,\ldots,T$ there exists a measurable function $f_k \colon (\mathbb{R}^{mk},{\mathcal{B}}({\mathbb{R}}^{mk})) \to (\mathbb{R}^d,{\mathcal{B}}({\mathbb{R}}^d))$ such that $\tilde{\xi}_k = f_k(\mathcal{Y}_{k-1})$. By the universal approximation theorem \cite[Theorem~1 and Section~3]{hornik1991}, see also Theorem \ref{thm:universalapproxprob} in the appendix, with measure $\mu$ given by the law of $\mathcal{Y}_{k-1}$ under $\P$, for each $k=1,\ldots,T$ there exists $\theta_{k,\tilde\xi} \in \Theta_k$ such that \begin{equation} \label{eq:NNapproxD} \P(D_k) < \frac{1-\alpha}{2T}, \quad \text{ where } D_k =\left\lbrace \omega \in \Omega \colon \|f_k(\mathcal{Y}_{k-1}(\omega))-F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}(\omega))\|>\left(\frac{\varepsilon}{2nT}\wedge\frac{1}{2}\right)\right\rbrace. \end{equation} Define \[ \tilde F^{\theta_{k,\tilde\xi}}:=\left(F^{\theta_{k,\tilde\xi}}\wedge C\right)\vee (-C),\quad k=1,\dots,T.
\] By the definition of $C$ in \eqref{eq:NNdefC}, we get that \[ \|\tilde\xi_k\|_\infty+\left(\frac{\varepsilon}{2nT}\wedge\frac{1}{2}\right)< C \quad \text{ for all }k=1,\dots,T. \] On $D_k^c$ we have for $i\in \{1,\dots,d\}$ that \[ \left|F_i^{\theta_{k,\tilde\xi}}({\mathcal{Y}}_{k-1})\right|\leq \left\|F^{\theta_{k,\tilde\xi}}({\mathcal{Y}}_{k-1})\right\|\leq \left\|\tilde\xi_k\right\|_\infty+\left(\frac{\varepsilon}{2nT}\wedge \frac{1}{2}\right)<C, \] and hence $\tilde F_i^{\theta_{k,\tilde\xi}}({\mathcal{Y}}_{k-1})=F_i^{\theta_{k,\tilde\xi}}({\mathcal{Y}}_{k-1})$ on $D_k^c$. Conversely, for $\omega\in\Omega$ such that \[ \|f_k(\mathcal{Y}_{k-1}(\omega))-\tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}(\omega))\|\leq\left(\frac{\varepsilon}{2nT}\wedge\frac{1}{2}\right), \] we get for $i\in\{1,\dots,d\}$ that \[ \left|\tilde F_i^{\theta_{k,\tilde\xi}}({\mathcal{Y}}_{k-1}(\omega))\right|\leq \left\|\tilde F^{\theta_{k,\tilde\xi}}({\mathcal{Y}}_{k-1}(\omega))\right\|\leq \left\|\tilde\xi_k\right\|_\infty+\left(\frac{\varepsilon}{2nT}\wedge \frac{1}{2}\right)<C, \] and hence $\tilde F_i^{\theta_{k,\tilde\xi}}({\mathcal{Y}}_{k-1}(\omega))=F_i^{\theta_{k,\tilde\xi}}({\mathcal{Y}}_{k-1}(\omega))$. In particular, \[ \resizebox{15cm}{!} { $\left\lbrace \omega\in\Omega\colon \|f_k(\mathcal{Y}_{k-1}(\omega))-\tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}(\omega))\|\leq\left(\frac{\varepsilon}{2nT}\wedge\frac{1}{2}\right)\right\rbrace=\underbrace{\left\lbrace \omega\in\Omega\colon \|f_k(\mathcal{Y}_{k-1}(\omega))-F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}(\omega))\|\leq\left(\frac{\varepsilon}{2nT}\wedge\frac{1}{2}\right)\right\rbrace}_{=D_k^c},$} \] for all $k=1,\dots,T$. Therefore, we get that $D_k=\tilde D_k$ with \[ \tilde D_k =\left\lbrace \omega \in \Omega \colon \|f_k(\mathcal{Y}_{k-1}(\omega))-\tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}(\omega))\|>\left(\frac{\varepsilon}{2nT}\wedge\frac{1}{2}\right)\right\rbrace, \] and \[ \P(\tilde D_k) < \frac{1-\alpha}{2T}. \] On $M_n \cap \tilde D_1^c \cap \ldots \cap \tilde D_T^c$ we have \[\begin{aligned} \sum_{k=1}^T \tilde{\xi}_k \cdot (X_k - X_{k-1}) & = \sum_{k=1}^T (\tilde{\xi}_k-\tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1})) \cdot (X_k - X_{k-1}) + \sum_{k=1}^T \tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}) \cdot (X_k - X_{k-1}) \\ & \leq \sum_{k=1}^T \|f_k(\mathcal{Y}_{k-1})-\tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1})\| \|X_k - X_{k-1}\| \\ &+ \sum_{k=1}^T \tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}) \cdot (X_k - X_{k-1}) \\ & \leq \frac{\varepsilon}{2} + \sum_{k=1}^T \tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}) \cdot (X_k - X_{k-1}) \end{aligned} \] and therefore \[ M_n \cap \tilde D_1^c \cap \ldots \cap \tilde D_T^c \subset \left\lbrace \tilde u_0 + \varepsilon + \sum_{k=1}^T \tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}) \cdot (X_k - X_{k-1}) \geq H \right\rbrace. \] This inclusion and the Fr\'echet inequalities\footnote{For $C_1,\dots,C_l\in{\mathcal{F}}$ it holds that $P(C_1\cap\dots\cap C_l)\geq \max\{0,{\textbf{P}}(C_1)+\dots+{\textbf{P}}(C_l)-(l-1)\}$.} yield \[\begin{aligned} \P & \left( \tilde u_0 + \varepsilon + \sum_{k=1}^T \tilde F^{\theta_{k,\tilde\xi}}(\mathcal{Y}_{k-1}) \cdot (X_k - X_{k-1}) \geq H\right) \\ \geq & \P(M_n \cap \tilde D_1^c \cap \ldots \cap \tilde D_T^c)\\ \geq &\P(M_n) + \P(\tilde D_1^c)+ \cdots + \P(\tilde D_T^c) - T \geq \frac{\alpha+1}{2} + T\left(1-\frac{1-\alpha}{2T}\right) - T = \alpha. 
\end{aligned}\] This proves the left inequality of \eqref{eq:NNpriceAlphaApproxB}. \end{proof} \begin{remark} Note that in the proof of Theorem \ref{thm:universalapprox2} we compute both the price at $t=0$ and the superhedging strategy for the complete time interval. \end{remark} \begin{remark} Thanks to the universal approximation theorem in \cite{hornik1991}, we could in fact restrict our attention to neural networks with one hidden layer and the result in Theorem~\ref{thm:universalapprox2} remains valid. Thus, for each $k=1,\ldots,T$ we could fix $L=2$, $N_0 = mk$, $N_2=d$ and consider instead the simpler parameter sets \[\begin{aligned} \Theta_k & = \cup_{N_1 \in \mathbb{N}} (\mathbb{R}^{N_1 \times mk} \times \mathbb{R}^{N_1}) \times (\mathbb{R}^{d \times N_1} \times \mathbb{R}^d) \\ \Theta^C_k & = ([-C,C]^{C \times mk} \times [-C,C]^{C}) \times ([-C,C]^{d \times C} \times [-C,C]^d). \end{aligned} \] Note the simpler form of $\Theta^C_k$, which is due to the fact that all one-hidden-layer networks with $N_1 \leq C$ hidden nodes can be written as one-hidden-layer networks with $C$ hidden nodes and appropriate weights set to $0$. \end{remark} \section{Superhedging price for $t>0$} \label{sec:superhedgingt} In this section we establish a method to approximate superhedging prices for $t>0$. Using a version of the uniform Doob decomposition, see Theorem 7.5 of \cite{follmerschied}, the problem reduces to the approximation of the so-called process of consumption. In the first part, we build the theoretical basis for this approach. In the second part we prove that this method can be used to approximate the superhedging price for $t>0$ by neural networks. \subsection{Uniform Doob Decomposition} \label{sec:uniformDoob} We briefly summarize some results on superhedging in discrete time in Corollary \ref{cor:resultsfollmerschied} below. For a more detailed overview we refer to Chapter 7 of \cite{follmerschied}.\\ Recall that $H$ denotes a discounted European claim satisfying \[ \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]<\infty. \] The superhedging price at $t=0$, $\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]$, and the associated strategy $\xi$ can be calculated as in Section \ref{sec:price0} and so we consider them as known. The remaining unknown component is the process of consumption $B$ given by \eqref{eq:uniformdoob}. By Corollary \ref{cor:resultsfollmerschied}, \[ \left(\esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_t]\right)_{t=0,1,\dots,T} \] is the smallest ${\mathcal{P}}$-supermartingale whose terminal value dominates $H$. Consider the stochastic process $\tilde B=(\tilde B_t)_{t=0,\dots,T}$ defined as $\tilde B_0:=0$ and for $t=1,\dots,T$, \begin{equation} \label{eq:optimizationB} \tilde B_t:=\esssup{\mathcal{B}}_t, \end{equation} where \begin{equation} \label{eq:optimizationBset} {\mathcal{B}}_t:=\left\{D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):\tilde B_{t-1}\leq D_t\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\}. \end{equation} \begin{proposition} \label{prop:optimizationB} We have that \[ B_t=\tilde B_t\quad{\textbf{P}}\text{-a.s., for all }t=0,\dots,T, \] where $B$ is given in \eqref{eq:uniformdoob} and $\tilde B$ in \eqref{eq:optimizationB}, respectively. \end{proposition} \begin{proof} The proof follows by induction. For $t=0$ we have $B_0=0=\tilde B_0$ by definition.
For the induction step assume that $B_{t-1}=\tilde B_{t-1}$ ${\textbf{P}}$-a.s. for some $1\leq t\leq T$. First we observe that $B_t\geq \tilde B_{t-1}$ because $B$ is increasing and by the induction hypothesis. In addition, by \eqref{eq:optimization} we obtain \begin{equation} \label{eq:proofBdom} \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\geq B_t. \end{equation} In particular, $B_t\in {\mathcal{B}}_t$ and thus $B_t\leq \tilde B_t$ ${\textbf{P}}$-a.s. Assume that ${\textbf{P}}(B_{t}<\tilde B_{t})>0$. Then define $\tilde V=(\tilde V_s)_{s=0,\dots,T}$ by \[ \tilde V_s:=\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^s\xi_k\cdot (X_k-X_{k-1})-\tilde B_s. \] First, we note that \[ \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot (X_k-X_{k-1})\geq H\geq 0\quad{\textbf{P}}\text{-a.s.} \] and thus by Theorem 5.14 of \cite{follmerschied} the process \[\left(\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^s\xi_k\cdot (X_k-X_{k-1})\right)_{s=0,\dots,T} \] is a ${\textbf{P}}^*$-martingale for all ${\textbf{P}}^*\in{\mathcal{P}}$. Further, by \eqref{eq:proofBdom} and \eqref{eq:uniformdoob} we obtain \[ 0\leq \tilde B_s\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot (X_k-X_{k-1})-H\quad \text{for all }s=0,\dots,T, \] which together with \[ \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot (X_k-X_{k-1})-H\in L^1(\Omega,{\mathcal{F}},{\textbf{P}}^*)\quad \text{for all }{\textbf{P}}^*\in{\mathcal{P}}, \] implies that $\tilde V_s\in L^1(\Omega,{\mathcal{F}}_s,{\textbf{P}}^*)$ for all ${\textbf{P}}^*\in{\mathcal{P}}$ and all $s=0,\dots,T$. In particular, since $\tilde B$ is increasing and non-negative, we can conclude that $\tilde V$ is a $\P^*$-supermartingale for all ${\textbf{P}}^*\in{\mathcal{P}}$. Furthermore, we show that $\tilde V_s\geq 0$ ${\textbf{P}}$-a.s. for all $s=0,\dots,T$. To this end, let ${\textbf{P}}^*\in{\mathcal{P}}$ be arbitrary; then we have by the ${\textbf{P}}^*$-supermartingale property that \begin{align*} \tilde V_s&\geq {\mathbb{E}}^*[\tilde V_T\mid{\mathcal{F}}_s]\\ &={\mathbb{E}}^*\left[\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot (X_k-X_{k-1})-\tilde B_T\mid{\mathcal{F}}_s\right]\\ &={\mathbb{E}}^*[H\mid{\mathcal{F}}_s]\geq 0, \end{align*} where the last equality follows since $\tilde V_T=H$; indeed, for $t=T$ the upper bound in \eqref{eq:optimizationBset} is itself an element of ${\mathcal{B}}_T$, whence $\tilde B_T=\sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot (X_k-X_{k-1})-H$. The terminal value of $\tilde V$ dominates $H$ by construction and since $B_s\leq \tilde B_s$ for all $s=0,\dots,T$, we have \[ \tilde V_s\leq \esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_s]\quad{\textbf{P}}\text{-a.s. for all }s=0,1,\dots,T. \] Then we obtain \[ {\textbf{P}}(\tilde V_{t}<\esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_t])={\textbf{P}}(B_t<\tilde B_t)>0, \] which contradicts the fact that $(\esssup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H\mid{\mathcal{F}}_s])_{s=0,\dots,T}$ is the smallest ${\mathcal{P}}$-supermartingale whose terminal value dominates $H$. Thus $B_t= \tilde B_t$ ${\textbf{P}}$-a.s. This concludes the proof.
\end{proof} \begin{remark} \label{rem:processofConsumptionID} In the definition of \eqref{eq:optimizationB} we can equivalently consider $\esssup \widehat{\mathcal{B}}_t$, where \[ \widehat {\mathcal{B}}_t:=\left\{D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}): 0\leq D_t\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\}, \] for $t=1,\dots,T$. This is due to the fact that, on the one hand ${\mathcal{B}}_t\subset \widehat {\mathcal{B}}_t$ for all $t=1,\dots,T$. On the other hand, for $D_t\in \widehat {\mathcal{B}}_t$ we have that $\tilde D_t:=D_t\vee B_{t-1}\in{\mathcal{B}}_t$ and $D_t\leq \tilde D_t$ ${\textbf{P}}$-a.s. Therefore, $\esssup \widehat {\mathcal{B}}_t= \esssup{\mathcal{B}}_t=B_t$ for all $t=1,\dots,T$. \end{remark} \subsection{Neural network approximation for $t>0$} \label{sec:NNapproxt} We now study a neural network approximation for the superhedging price process for $t>0$. Throughout this section we use the notation of Section \ref{sec:price0}. For $\varepsilon,\tilde\varepsilon\in (0,1)$ we define the set \begin{align*} {\mathcal{B}}_t^{\theta_t^*,\varepsilon,\tilde\varepsilon}:=\Bigg\lbrace &F^{\theta_t}({\mathcal{Y}}_t):\, \theta_{t} \in \Theta_{t+1}\text{ and }\\ & {\textbf{P}}\left(B_{t-1}-\tilde\varepsilon\leq F^{\theta_t}({\mathcal{Y}}_t)\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H+\tilde\varepsilon\right)>1-\varepsilon\Bigg\rbrace, \end{align*} where $B$ is the consumption process for $H$ introduced in \eqref{eq:uniformdoob}. We now construct an approximation of $B$ by neural networks. \begin{proposition} \label{prop:NNconsumption} Assume $\sigma$ is bounded and non-constant. Then for any $\varepsilon,\tilde\varepsilon>0$ there exist neural networks $(F^{\theta_0,\varepsilon,\tilde\varepsilon},\dots,F^{\theta_T,\varepsilon,\tilde\varepsilon})$ such that $F^{\theta_t,\varepsilon,\tilde\varepsilon}({\mathcal{Y}}_t)\in{\mathcal{B}}_t^{\theta_t^*,\varepsilon,\tilde\varepsilon}$ for all $t=0,\dots,T$ and \[ {\textbf{P}}\left(\left|F^{\theta_t,\varepsilon,\tilde\varepsilon}({\mathcal{Y}}_t)-B_t\right|>\tilde\varepsilon\right)<\varepsilon,\quad \text{for all }t=0,\dots,T. \] In particular, there exists a sequence of neural networks $\left(F^{\theta_0^n},\dots,F^{\theta_T^n}\right)_{n\in{\mathbb{N}}}$ with $F^{\theta_t^n}({\mathcal{Y}}_t)\in{\mathcal{B}}_t^{\theta_t^*,\frac{1}{n},\frac{1}{n}}$ for all $n\in{\mathbb{N}}$ and for all $t=0,\dots,T$ such that \[ \left(F^{\theta_0^n}({\mathcal{Y}}_0),\dots,F^{\theta_T^n}({\mathcal{Y}}_T)\right)\xrightarrow{{\textbf{P}}\text{-a.s.}}(B_0,\dots,B_T)\quad \text{for }n\to\infty. \] \end{proposition} \begin{proof} Fix $\varepsilon,\tilde\varepsilon>0$ and $t\in\{1,\dots,T\}$. Note that $B_0=0$ by definition. Let $B$ be given by the representation \eqref{eq:optimizationB}. Observe that the set ${\mathcal{B}}_t$ from \eqref{eq:optimizationBset} is directed upwards. By Theorem A.33 of \cite{follmerschied} there exists an increasing sequence \[ (B_t^k)_{k\in{\mathbb{N}}}\subset {\mathcal{B}}_t, \] such that $B_t^k$ converges ${\textbf{P}}$-almost surely to $\tilde B_t=B_t$ as $k$ tends to infinity. Since almost sure convergence implies convergence in probability, there exists $K=K(\varepsilon,\tilde\varepsilon)\in{\mathbb{N}}$ such that \begin{equation} \label{eq:approxsequenceB} {\textbf{P}}\left(\left|B_t^k-B_t\right|> \frac{\tilde\varepsilon}{2}\right)<\frac{\varepsilon}{2},\quad\text{for all }k\geq K. 
\end{equation} For all $k\geq K$ there exist measurable functions $f_t^k:{\mathbb{R}}^{m(t+1)}\to{\mathbb{R}}$ such that $B_t^k=f_t^k({\mathcal{Y}}_t)$. Fix $k\geq K$. By the universal approximation theorem \cite[Theorem~1 and Section~3]{hornik1991}, see also Theorem \ref{thm:universalapproxprob} in the appendix (with measure $\mu$ given by the law of ${\mathcal{Y}}_t$ under $\P$), there exist $\theta_t=\theta_t^k\in\Theta_{t+1}$ and $F^{\theta_t}=F^{\theta_t^k,\varepsilon,\tilde\varepsilon}$ such that \[ {\textbf{P}}\left(\left|f_t^k({\mathcal{Y}}_t)-F^{\theta_t}({\mathcal{Y}}_t)\right|>\frac{\tilde\varepsilon}{2}\right)<\frac{\varepsilon}{2}. \] By the triangle inequality and by De Morgan's law we obtain that \begin{align*} &\left\lbrace \omega\in\Omega:\left|B_t(\omega)-F^{\theta_t}({\mathcal{Y}}_t(\omega))\right|>\tilde\varepsilon\right\rbrace\subseteq \left\lbrace \omega\in\Omega:\left|B_t(\omega)-B_t^k(\omega)\right|+\left|B_t^k(\omega)-F^{\theta_t}({\mathcal{Y}}_t(\omega))\right|>\tilde\varepsilon\right\rbrace\\ \subseteq&\left\lbrace\omega\in\Omega:\left|B_t(\omega)-B_t^k(\omega)\right|>\frac{\tilde\varepsilon}{2}\right\rbrace \cup\left\lbrace\omega\in\Omega:\left|B_t^k(\omega)-F^{\theta_t}({\mathcal{Y}}_t(\omega))\right|>\frac{\tilde\varepsilon}{2}\right\rbrace. \end{align*} In particular, we obtain by subadditivity that \[ {\textbf{P}}\left(\left|B_t-F^{\theta_t}({\mathcal{Y}}_t)\right|>\tilde\varepsilon\right)\leq {\textbf{P}}\left(\left|B_t-B_t^k\right|>\frac{\tilde\varepsilon}{2}\right)+{\textbf{P}}\left(\left|B_t^k-F^{\theta_t}({\mathcal{Y}}_t)\right|>\frac{\tilde\varepsilon}{2}\right)<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon. \] Next, we show that $F^{\theta_t}({\mathcal{Y}}_t)\in{\mathcal{B}}_t^{\theta_t^*,\varepsilon,\tilde\varepsilon}$. For this purpose, we note that \[ B_{t-1}\leq B_t\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot\left(X_k-X_{k-1}\right)-H\quad{\textbf{P}}\text{-a.s.} \] Therefore, we have that \[ \resizebox{15cm}{!} { ${\textbf{P}}\left(B_{t-1}-\tilde\varepsilon\leq F^{\theta_t}({\mathcal{Y}}_t)\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot\left(X_k-X_{k-1}\right)-H+\tilde\varepsilon\right)\geq {\textbf{P}}\left(\left|B_t-F^{\theta_t}({\mathcal{Y}}_t)\right|\leq\tilde\varepsilon\right)>1-\varepsilon,$} \] which implies that $F^{\theta_t}({\mathcal{Y}}_t)=F^{\theta_t^k,\varepsilon,\tilde\varepsilon}({\mathcal{Y}}_t)\in{\mathcal{B}}_t^{\theta_t^*,\varepsilon,\tilde\varepsilon}$. We set $\varepsilon=\frac{1}{n}=\tilde\varepsilon$ for $n\in{\mathbb{N}}$ and consider the neural network \[ F^{\theta_t^n}:=F^{\theta_t^{K(n)},\frac{1}{n},\frac{1}{n}},\quad t\in\{1,\dots,T\},\ n\in{\mathbb{N}}, \] where $K(n)=K(\frac{1}{n},\frac{1}{n})$ is given by \eqref{eq:approxsequenceB}. Then, $F^{\theta_t^n}({\mathcal{Y}}_t)\in{\mathcal{B}}_t^{\theta_t^*,\frac{1}{n},\frac{1}{n}}$ for all $n\in{\mathbb{N}}$ and for all $t=1,\dots,T$. Further, we have \[ {\textbf{P}}\left(\left|F^{\theta_t^n}({\mathcal{Y}}_t)-B_t\right|>\frac{1}{n}\right)<\frac{1}{n}\quad \text{for all }t=1,\dots,T, \] which implies convergence in probability, i.e., \[ F^{\theta_t^n}({\mathcal{Y}}_t)\xrightarrow{{\textbf{P}}}B_t\quad\text{for }n\to\infty,\text{ for all }t=0,\dots,T. \] By passing to a suitable subsequence, convergence also holds ${\textbf{P}}$-a.s. simultaneously for all $t=0,\dots,T$. \end{proof}\noindent Let $\tilde\varepsilon>0$.
Recursively, we define the set \begin{equation}\begin{aligned} \label{eq:NNapproxTset} \tilde{\mathcal{B}}_t^{\theta_t^*,\tilde\varepsilon}:= \Big\lbrace & F^{\theta_t}({\mathcal{Y}}_t)\mathds{1}_A + B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\mathds{1}_{A^c} :\, \theta_{t} \in \Theta_{t+1}, A \in \mathcal{F}_t, \\ & B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\leq F^{\theta_t}({\mathcal{Y}}_t)\mathds{1}_A + B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\mathds{1}_{A^c} \leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H+\tilde\varepsilon\Big\rbrace \end{aligned} \end{equation} for $t=1,\dots,T$, and the approximated process of consumption by $B_0^{\theta_0^*,\tilde\varepsilon}=0$ and \begin{equation}\begin{aligned} \label{eq:NNapproxTprocess} B_t^{\theta_t^*,\tilde\varepsilon}:= \esssup \tilde{\mathcal{B}}_t^{\theta_t^*,\tilde\varepsilon}\quad \text{ for }t=1,\dots,T. \end{aligned} \end{equation} \begin{theorem} \label{thm:NNgeneralt} Assume $\sigma$ is bounded and non-constant. Then \[ \left|B_t^{\theta_t^*,\tilde\varepsilon}-B_t\right|\leq\tilde\varepsilon \quad{\textbf{P}}\text{-a.s. for all }t=0,\dots,T. \] \end{theorem} \begin{proof} We prove the statement by induction. For $t=0$ we have by definition $B_0^{\theta_0^*,\tilde\varepsilon}=B_0=0$. Assume now that \[ \left|B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}-B_{t-1}\right|\leq\tilde\varepsilon \quad{\textbf{P}}\text{-a.s.} \] for some $t\in \{1,\dots,T\}$. First we note that $B_s^{\theta_s^*,\tilde\varepsilon}\leq B_{s+1}^{\theta_{s+1}^*,\tilde\varepsilon}$ by \eqref{eq:NNapproxTset} and \eqref{eq:NNapproxTprocess}, and because $B_0^{\theta_0^*,\tilde\varepsilon}=0$ it follows that $B_s^{\theta_s^*,\tilde\varepsilon}\geq 0$ for all $s=1,\dots,T$. Let $\theta_t\in \Theta_{t+1}$ and $A\in{\mathcal{F}}_t$ such that \[ 0\leq B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\leq F^{\theta_t}({\mathcal{Y}}_t)\mathds{1}_A + B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\mathds{1}_{A^c} \leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H+\tilde\varepsilon. \] Then, we can easily see that \[ \resizebox{15cm}{!} { $F^{\theta_t}({\mathcal{Y}}_t)\mathds{1}_A + B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\mathds{1}_{A^c}\in \left\lbrace D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):0 \leq D_t\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H+\tilde\varepsilon \ {\textbf{P}}\text{-a.s.}\right\rbrace$}. \] We now prove that \begin{equation} \label{eq:NNdir1B} \resizebox{15cm}{!} {$ B_t+\tilde\varepsilon=\esssup\left\lbrace D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):-\tilde\varepsilon \leq D_t-\tilde\varepsilon\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\rbrace$}.
\end{equation} On the one hand we have \begin{align*} &\left\lbrace \tilde D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}): 0\leq \tilde D_t\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\rbrace+\tilde\varepsilon\\ =&\left\lbrace D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}): 0\leq D_t-\tilde\varepsilon\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\rbrace\\ \subseteq &\left\lbrace D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):-\tilde\varepsilon \leq D_t-\tilde\varepsilon\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\rbrace, \end{align*} which by Remark \ref{rem:processofConsumptionID} implies that \[ \resizebox{15cm}{!} {$ B_t+\tilde\varepsilon\leq \esssup\left\lbrace D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):-\tilde\varepsilon \leq D_t-\tilde\varepsilon\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\rbrace$}. \] On the other hand, let \[ D_t\in \left\lbrace D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):-\tilde\varepsilon \leq D_t-\tilde\varepsilon\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\rbrace, \] and define $\tilde D_t:=D_t\vee \tilde\varepsilon$. Then $D_t\leq \tilde D_t$ ${\textbf{P}}$-a.s. and \[ \tilde D_t\in \left\lbrace \bar D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):0 \leq \bar D_t-\tilde\varepsilon\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\rbrace, \] which implies that \[ \resizebox{15cm}{!} {$ B_t+\tilde\varepsilon\geq \esssup\left\lbrace D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):-\tilde\varepsilon \leq D_t-\tilde\varepsilon\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\rbrace$}, \] and hence \eqref{eq:NNdir1B} follows. Further, we also have that \begin{align*} &\left\lbrace D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):0 \leq D_t\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H+\tilde\varepsilon \ {\textbf{P}}\text{-a.s.}\right\rbrace\\ =&\left\lbrace D_t\in L^0(\Omega,{\mathcal{F}}_t,{\textbf{P}}):-\tilde\varepsilon \leq D_t-\tilde\varepsilon\leq \sup_{{\textbf{P}}^*\in{\mathcal{P}}}{\mathbb{E}}^*[H]+\sum_{k=1}^T\xi_k\cdot(X_k-X_{k-1})-H\ {\textbf{P}}\text{-a.s.}\right\rbrace. \end{align*} Therefore, we obtain by \eqref{eq:NNdir1B} that \[ F^{\theta_t}({\mathcal{Y}}_t)\mathds{1}_A + B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\mathds{1}_{A^c}\leq B_t+\tilde\varepsilon\quad {\textbf{P}}\text{-a.s.,} \] and hence \begin{equation} \label{eq:NNconsumptiondir1} B_t^{\theta_t^*,\tilde\varepsilon}\leq B_t+\tilde\varepsilon\quad{\textbf{P}}\text{-a.s.} \end{equation} For the converse direction let $\varepsilon\in (0,1)$. By the proof of Proposition \ref{prop:NNconsumption} there exists a neural network $F^{\tilde\theta_t}=F^{\tilde\theta_t,\varepsilon,\tilde\varepsilon}$ such that \[ {\textbf{P}}\left(\left|F^{\tilde\theta_t}({\mathcal{Y}}_t)-B_t\right|>\tilde\varepsilon\right)<\varepsilon. 
\] Define the sets $A_1,A_2\in{\mathcal{F}}_t$ by \[ A_1:=\left\lbrace \omega\in\Omega: B_t(\omega)-\tilde\varepsilon \leq F^{\tilde\theta_t}({\mathcal{Y}}_t(\omega))\leq B_t(\omega)+\tilde\varepsilon\right\rbrace, \] and \[ A_2:=\left\lbrace \omega\in\Omega: B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}(\omega)\leq F^{\tilde\theta_t}({\mathcal{Y}}_t(\omega))\right\rbrace. \] Then, ${\textbf{P}}(A_1)> 1-\varepsilon$. Note that by the induction hypothesis \[ B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\leq B_{t-1}+\tilde\varepsilon\leq B_t+\tilde\varepsilon\quad{\textbf{P}}\text{-a.s.} \] For $A:=A_1\cap A_2$ we have by construction \[ F^{\tilde\theta_t}({\mathcal{Y}}_t)\mathds{1}_A+B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\mathds{1}_{A^c}=F^{\tilde\theta_t}({\mathcal{Y}}_t)\mathds{1}_{A_1\cap A_2}+B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\mathds{1}_{A_1^c\cup A_2^c}\in \tilde{\mathcal{B}}_t^{\theta_t^*,\tilde\varepsilon}. \] For $\omega\in A_1\cap A_2^c$ we get that \[ F^{\tilde\theta_t}({\mathcal{Y}}_t(\omega))\mathds{1}_{A_1\cap A_2}(\omega)+B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}(\omega)\mathds{1}_{A_1^c\cup A_2^c}(\omega)=B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}(\omega) \] and \[ B_t(\omega)-\tilde\varepsilon\leq F^{\tilde\theta_t}({\mathcal{Y}}_t(\omega))< B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}(\omega)\leq B_{t}(\omega)+\tilde\varepsilon. \] For $\omega\in A_1\cap A_2$ we have \[ F^{\tilde\theta_t}({\mathcal{Y}}_t(\omega))\mathds{1}_{A_1\cap A_2}(\omega)+B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}(\omega)\mathds{1}_{A_1^c\cup A_2^c}(\omega)=F^{\tilde\theta_t}({\mathcal{Y}}_t(\omega)) \] and \[ \left| F^{\tilde\theta_t}({\mathcal{Y}}_t(\omega))- B_t(\omega)\right|\leq \tilde\varepsilon. \] Thus, using that $A_1=(A_1\cap A_2)\cup (A_1\cap A_2^c)$ and ${\textbf{P}}(A_1)>1-\varepsilon$, we get \begin{equation} \label{eq:NNconsumptionProbsets} {\textbf{P}}\left(\left|\left(F^{\tilde\theta_t}({\mathcal{Y}}_t)\mathds{1}_A+B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\mathds{1}_{A^c}\right)-B_t\right|>\tilde\varepsilon\right)\leq {\textbf{P}}(A_1^c)<\varepsilon. \end{equation} Then, \eqref{eq:NNconsumptionProbsets} implies \begin{equation} \label{eq:NNconsumptionconclusion} {\textbf{P}}\left(B_t^{\theta_t^*,\tilde\varepsilon}<B_t-\tilde\varepsilon\right)\leq {\textbf{P}}\left(F^{\tilde\theta_t}({\mathcal{Y}}_t)\mathds{1}_A+B_{t-1}^{\theta_{t-1}^*,\tilde\varepsilon}\mathds{1}_{A^c}<B_t-\tilde\varepsilon\right)<\varepsilon. \end{equation} Because $\varepsilon\in (0,1)$ was arbitrary, it follows that $B_t\leq B_t^{\theta_t^*,\tilde\varepsilon}+\tilde\varepsilon$ ${\textbf{P}}$-a.s. by \eqref{eq:NNconsumptionconclusion}. By \eqref{eq:NNconsumptiondir1} and \eqref{eq:NNconsumptionconclusion} we conclude that $|B_t^{\theta_t^*,\tilde\varepsilon}-B_t|\leq\tilde\varepsilon$ ${\textbf{P}}$-a.s. for all $t=0,\dots,T$. \end{proof} \section{Numerical results} \label{sec:numericalresults} In this section, we present some numerical applications of the results in Sections \ref{sec:price0} and \ref{sec:superhedgingt}. Combining Theorems \ref{thm:convergencesuccessset} and \ref{thm:universalapprox2}, we obtain a two-step approximation for the superhedging price at $t=0$. Then, we use Theorem \ref{thm:NNgeneralt} to simulate the superhedging process for $t>0$. \subsection{Case $t=0$} \label{sec:case0} \subsubsection{Algorithm and implementation} \label{sec:algorithm} Let $N\in{\mathbb{N}}$ denote a fixed batch size.
For fixed $\lambda>0$ we implement the following iterative procedure: for each iteration step $i$ we generate i.i.d.\ samples $Y(\omega_{0}^{(i)}), \ldots, Y(\omega_{N}^{(i)})$ of $Y$ and consider the empirical loss function \begin{align*} L_\lambda^{(i)}(\theta) = &\left|F^{\theta_u}\left(\mathcal{Y}_0\left(\omega_{0}^{(i)}\right)\right)\right|^2 + \frac{\lambda}{N} \sum_{j=1}^N l\Bigg(H\left(\omega_{j}^{(i)}\right)\\ &-\left[F^{\theta_u}\left(\mathcal{Y}_0\left(\omega_{j}^{(i)}\right)\right) + \sum_{k=1}^T F^{\theta_{k,\xi}}\left(\mathcal{Y}_{k-1}\left(\omega_{j}^{(i)}\right)\right) \cdot \left(X_k\left(\omega_{j}^{(i)}\right) - X_{k-1}\left(\omega_{j}^{(i)}\right)\right) \right]\Bigg), \end{align*} with $\theta=(\theta_u,\theta_{1,\xi},\dots,\theta_{T,\xi})$ and $l:{\mathbb{R}}\to [0,\infty)$ denoting the squared \emph{rectifier} function, i.e., \[ l(x)=\left(\max\left\{x,0\right\}\right)^2. \] We then calculate the gradient of $L_\lambda^{(i)}(\theta)$ at $\theta^{(i)}$ and use it to update the parameters from $\theta^{(i)}$ to $\theta^{(i+1)}$ according to the \emph{Adam} optimizer, see \cite{kingma2014adam}. After sufficiently many iterations, the parameter $\theta^{(i)}$ should be close to a local minimum of the loss function \begin{equation}\label{eq:loss} L_\lambda(\theta) = \left|F^{\theta_u}\left(\mathcal{Y}_0\right)\right|^2 + \lambda \mathbb{E}\left[ l\left(H-\left(F^{\theta_u}\left(\mathcal{Y}_0\right) + \sum_{k=1}^T F^{\theta_{k,\xi}}\left(\mathcal{Y}_{k-1}\right) \cdot \left(X_k - X_{k-1}\right) \right)\right) \right]. \end{equation} Note that ${\mathcal{Y}}_0$ is constant and hence $F^{\theta_u}({\mathcal{Y}}_0)$ is a constant. The first term of $L_\lambda$ is small if $F^{\theta_u}({\mathcal{Y}}_0)$, which represents the superhedging price, is small. On the other hand, the second summand in \eqref{eq:loss} equals $0$ when the portfolio dominates the claim $H$; thus, minimizing the second summand in \eqref{eq:loss} corresponds to maximizing the superhedging probability. The weight $\lambda$ offers the opportunity to balance between a small initial price of the portfolio and a high probability of superhedging. In particular, if $\theta$ is a minimizer of the loss function $L_\lambda$, then $F^{\theta_u}({\mathcal{Y}}_0)$ is close to the minimal price required to superhedge the claim $H$ with a certain probability, i.e., to the quantile hedging price for a certain $\alpha=\alpha(\lambda)$. In view of Theorem \ref{thm:universalapprox2} we thus expect $F^{\theta_u}({\mathcal{Y}}_0)\approx \inf{\mathcal{U}}_0$ for $\lambda$ large enough.\\ Other choices for $l$ in \eqref{eq:loss} are also possible; we also tested a scaled \textit{sigmoid} function, but did not obtain stable results in this case.\\ The algorithm is implemented in Python, using Keras with backend TensorFlow to build and train the neural networks. More precisely, we create a \emph{Sequential} object to build the models and compile it with a customized loss function.\\ We use a Long Short-Term Memory network (LSTM), see \cite{hochreiter1997lstm}, with the following architecture: the network has two LSTM layers of size $30$, which return sequences, and one dense layer of size $1$. Between the layers the \emph{swish} activation function is used. The activation functions within the LSTM layers are set to default, i.e., the activation between cells is \emph{tanh} and the recurrent activation is the \emph{sigmoid} function.
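For concreteness, the following minimal sketch shows one way to implement the empirical loss above in TensorFlow; the function and tensor names (\texttt{u}, \texttt{gains}, \texttt{payoff}) are ours and are not fixed by the procedure.
\begin{verbatim}
# Minimal sketch of the empirical loss L_lambda from eq. (loss).
# Hypothetical inputs:
#   u      -- scalar tensor, the constant F^{theta_u}(Y_0)
#   gains  -- shape (N,), sum_k F^{theta_{k,xi}}(Y_{k-1}) . (X_k - X_{k-1})
#   payoff -- shape (N,), samples of the claim H
import tensorflow as tf

def loss_lambda(u, gains, payoff, lam):
    shortfall = tf.nn.relu(payoff - (u + gains))   # max(x, 0)
    return u ** 2 + lam * tf.reduce_mean(shortfall ** 2)
\end{verbatim}
The squared rectifier keeps the penalty continuously differentiable at the superhedging boundary, which is convenient for gradient-based training.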
The kernel and bias initializers of the first LSTM layer are set to \emph{truncated normal}, i.e., the initial weights are drawn from a standard normal distribution, but we discard and re-draw values that are more than two standard deviations from the mean. This gives $11191$ trainable parameters. The training is performed using the \emph{Adam} optimizer with a learning rate of $0.001$ or $0.0001$. We generate $1024000$ samples, which we split into $70\%$ for the training set and $30\%$ for the test set. The batch size is set to $1024$. We apply the procedure described above in two examples, which we present in the following. \subsubsection{Trinomial model} \label{sec:trinomial} We consider a discrete time financial market model given by an arbitrage-free trinomial model with $X_0=100$ and \[ X_t=X_0\prod_{k=1}^t (1+R_k), \quad t\in \{0,\dots,T\}, \] where $R_t$ is ${\mathcal{F}}_t$-measurable for $t\in \{1,\dots,T\}$ and takes values in $\{d,m,u\}$ with equal probability, where $-1<d<m<u$. Here, we set $d=-0.01$, $m=0$, $u=0.01$ and $T=29$, yielding $3^{29}$ possible paths. In this model, we want to superhedge a European Call option $H=(X_T-K)^+$ with strike price $K=100$. For this choice of parameters the theoretical superhedging price is $2.17$, as can easily be obtained from the results of \cite{carassus2010super}. \\ The network is trained and evaluated for different $\lambda$ to illustrate the impact of $\lambda$ in \eqref{eq:loss} and the relation between $\alpha(\lambda)\in (0,1)$ and the corresponding $\alpha(\lambda)$-quantile hedging price. More precisely, we consider $\lambda\in \{10, 50, 100, 500, 1000, 2000, 4000, 10000\}$. For each $\lambda$ the network is trained over $40$ epochs.\\ In Figure \ref{fig:impactlambda_fin}(a)-(c), we see that $\alpha(\lambda)$ as well as the $\alpha(\lambda)$-quantile hedging price increase in $\lambda$, and that the $\alpha(\lambda)$-quantile hedging price increases in $\alpha(\lambda)$. Figure~\ref{fig:impactlambda_fin}(d) shows the superhedging performance on the test set for all $\lambda$'s, i.e., samples of \begin{equation} \label{eq:superhedgingperformance} F^{\theta_u(\lambda)}\left(\mathcal{Y}_0\right) + \sum_{k=1}^T F^{\theta_{k,\xi}(\lambda)}\left(\mathcal{Y}_{k-1}\right) \cdot \left(X_k - X_{k-1}\right) -H, \end{equation} for each $\lambda$. Table \ref{tab:lambda_fin} summarizes the values for $\lambda$, $\alpha(\lambda)$ and the $\alpha(\lambda)$-quantile hedging price. In particular, for $\lambda=10000$ we obtain a numerical price of $2.15$ and $\alpha(\lambda)=99.24\%$.
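For reference, the market simulation underlying this example can be sketched in a few lines; the sampling routine and all names below are ours, as the text above only prescribes the distribution of the returns.
\begin{verbatim}
# Minimal sketch of the trinomial market model (T=29, d=-0.01, m=0, u=0.01).
import numpy as np

def trinomial_paths(n_paths, T=29, x0=100.0, d=-0.01, m=0.0, u=0.01, seed=0):
    rng = np.random.default_rng(seed)
    R = rng.choice([d, m, u], size=(n_paths, T))      # returns, equal probability
    X = x0 * np.cumprod(1.0 + R, axis=1)              # X_t = X_0 prod_k (1 + R_k)
    return np.hstack([np.full((n_paths, 1), x0), X])  # prepend X_0

X = trinomial_paths(1024)
H = np.maximum(X[:, -1] - 100.0, 0.0)                 # call payoff with K = 100
\end{verbatim}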
\begin{figure}[htbp] \begin{subfigure}[$\alpha(\lambda)$-quantile hedging price depending on $\lambda$]{\includegraphics[width=0.49\textwidth]{price_weight_fin.png}} \label{subfig:price_weight} \end{subfigure} \begin{subfigure}[$\alpha(\lambda)$ depending on $\lambda$]{\includegraphics[width=0.49\textwidth]{prob_weight_fin.png}} \end{subfigure} \begin{subfigure}[$\alpha(\lambda)$-quantile hedging price depending on $\alpha(\lambda)$]{\includegraphics[width=.49\textwidth]{price_prob_fin.png}} \label{subfig:price_prob} \end{subfigure} \begin{subfigure}[Superhedging performance]{\includegraphics[width=.49\textwidth]{hist_different_lambda_fin.png}} \label{subfig:performance} \end{subfigure} \caption{Impact of $\lambda$ on the quantile hedging price and on the superhedging probability.} \label{fig:impactlambda_fin} \end{figure} \begin{table}[hbt] \begin{center} \begin{tabular}{ |l|c|r| } \hline $\lambda$ & $\alpha(\lambda)$ & $\alpha(\lambda)$-quantile hedging price \\ \hline $10$ & $15.23\%$ & $1.61$ \\ \hline $50$ & $55.61\%$ & $1.81$ \\ \hline $100$ & $70.75\%$ & $1.86$ \\ \hline $500$ & $92.16\%$ & $1.96$ \\ \hline $1000$ & $95.42\%$ & $2.00$ \\ \hline $2000$ & $96.88\%$ & $2.04$ \\ \hline $4000$ & $98.48\%$ & $2.09$ \\ \hline $10000$ & $99.24\%$ & $2.15$ \\ \hline \end{tabular} \end{center} \caption{Impact of $\lambda$ on $\alpha(\lambda)$ and on the $\alpha(\lambda)$-quantile hedging price.} \label{tab:lambda_fin} \end{table} \subsubsection{Discretized Black-Scholes model} \label{sec:blackscholes} Here we consider a discrete time financial market given by a discretized Black-Scholes model for the asset price $X$. We consider a Barrier Up and Out Call option $H=\prod_{t=0}^T \mathds{1}_{\{X_t<U\}} (X_T-K)^+$ with strike $K=100$ and upper barrier $U=105$ such that $K<U$ and $X_0<U$. We set $X_0=100$, $\sigma=0.3$ and $\mu=0$. We assume $250$ trading days per year and a time horizon $T$ of $30$ trading days with daily rebalancing; in particular, the time to expiration of the option is $\tau=30/250$. \\ The weight $\lambda$ of the loss function is set to $10^7$ in order to obtain a high superhedging probability. Indeed, we obtain a superhedging probability of $100\%$ on the training set as well as on the test set, with an approximate price of $3.73$. By \cite{carassus2007class}, the theoretical superhedging price $\pi^H$ is given by \[ \pi^H=X_0\left(1-\frac{K}{U}\right)\approx 4.76. \] In the Black-Scholes model the asset price process at time $t>0$ has unbounded support and thus the additional error, which arises from the discretization of the probability space, is non-negligible. Although the Barrier option artificially bounds the support of the model, the numerical price still deviates significantly from the theoretical price. \\ Finally, we consider a European call option $H=(X_T-K)^+$ with strike $K=100$ and parameters $X_0=100$, $\sigma=0.1$ and $\mu=0$. By \cite{carassus2007class} the theoretical superhedging price of $H$ for the discrete-time version of the Black-Scholes model is equal to $X_0=100$. The theoretical price of $H$ in a standard Black-Scholes model in continuous time is $1.38$, and by following the $\delta$-hedging strategy we superhedge $H$ with a probability of $53.69\%$. Here we consider $\lambda=50$ in \eqref{eq:loss} in order to compare the result to the discretized $\delta$-hedging strategy of the Black-Scholes model, and $\lambda = 10000$ in order to obtain a high superhedging probability.
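The discretized dynamics of this subsection can be sampled as follows; this is a minimal sketch with the barrier example's parameters, and the routine and all names are ours.
\begin{verbatim}
# Minimal sketch: discretized Black-Scholes paths with daily steps (dt = 1/250).
import numpy as np

def bs_paths(n_paths, T=30, x0=100.0, sigma=0.3, mu=0.0, dt=1.0/250, seed=0):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_paths, T))
    increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * Z
    X = x0 * np.exp(np.cumsum(increments, axis=1))
    return np.hstack([np.full((n_paths, 1), x0), X])

X = bs_paths(2048)
H = (X < 105.0).all(axis=1) * np.maximum(X[:, -1] - 100.0, 0.0)  # up-and-out call
\end{verbatim}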
For $\lambda=50$, we obtain an approximate price of $1.41$ and a superhedging probability of $54.43\%$. In Figure \ref{fig:bscall}(a) we compare the $\delta$-hedging strategy with the approximated superhedging strategy obtained for $\lambda=50$. Further, in Figure \ref{fig:bscall}(b) we compare the results for $\lambda=50$ and $\lambda=10000$, respectively. For $\lambda=10000$, the superhedging probability on the test set is $99.79\%$ with an approximated price of $2.18$. \begin{figure}[htbp] \begin{subfigure}[$\delta$-hedging strategy compared to approximate strategy for $\lambda=50$]{\includegraphics[width=0.49\textwidth]{delta_vs_50.png}} \label{subfig:delta_vs_50} \end{subfigure} \begin{subfigure}[Approximate strategy for $\lambda=50$ and $\lambda=10000$]{\includegraphics[width=0.49\textwidth]{hist_quant10000_50.png}} \label{subfig:lambda_comparison} \end{subfigure} \caption{Hedging losses for $\lambda=50$, $\lambda=10000$ and for the $\delta$-hedging strategy.} \label{fig:bscall} \end{figure} \subsection{Case $t>0$} \label{sec:caset} In this section we approximate the process of consumption by neural networks as proposed in Section \ref{sec:NNapproxt}. We implement the same iterative procedure as introduced in Section \ref{sec:algorithm}. We define $G^{(i)}$ as the difference between the approximated superhedging portfolio obtained from Section \ref{sec:case0} and the claim $H$, i.e., \[ \resizebox{15cm}{!}{ $G_j^{(i)}(\theta^*):=\left[F^{\theta_u^*}\left(\mathcal{Y}_0\left(\omega_{j}^{(i)}\right)\right) + \sum_{k=1}^T F^{\theta_{k,\xi}^*}\left(\mathcal{Y}_{k-1}\left(\omega_{j}^{(i)}\right)\right) \cdot \left(X_k\left(\omega_{j}^{(i)}\right) - X_{k-1}\left(\omega_{j}^{(i)}\right)\right) -H\left(\omega_{j}^{(i)}\right)\right].$} \] Then, the empirical loss function is given by \[ \tilde L_{t,\beta}^{(i)}(\theta_t)=\frac{1}{N}\sum_{j=1}^N-\left|B_t^{\theta_t}\left(\omega_j^{(i)}\right)\right|^2+\beta \max\left\{\left(B_t^{\theta_t}\left(\omega_j^{(i)}\right)-G_j^{(i)}(\theta^*)\right),0\right\}, \] where $B_t^{\theta_t}$ is given by \[ B_t^{\theta_t}\left(\omega_j^{(i)}\right):=\max\left\{F^{\theta_t}\left({\mathcal{Y}}_t\left(\omega_j^{(i)}\right)\right),B_{t-1}^{\theta_{t-1}}\left(\omega_j^{(i)}\right)\right\}. \] At a local minimum, the two terms of $\tilde L$ guarantee that $F^{\theta_t}$ is as large as possible while remaining less than or equal to $G(\theta^*)$; a minimal code sketch of this objective is given below.\\ Here, we also consider a discretized Black-Scholes model as in Section \ref{sec:blackscholes}, but with a time horizon of only $10$ trading days, and set $X_0=100$, $\sigma=0.1$ and $\mu=0$. For each $t>0$ the neural network consists of two LSTM layers of size $30$ and $20$ respectively, which return sequences, one LSTM layer of size $20$ returning a single value, and one dense layer of size $1$. The remaining parameters are chosen as in Section \ref{sec:algorithm}.\\ As in Section \ref{sec:blackscholes}, we compute an approximated superhedging price and strategy for the complete interval. Setting $\lambda=1024$ yields an approximated price of $1.35$ and a superhedging probability of $98.87\%$ for $t=0$. For $t\geq 1$, we choose $\beta=500$ and then obtain a superhedging probability of $98.78\%$. In Figure \ref{fig:price_process}(a), we show trajectories of the approximated superhedging price process generated by this method. Figure \ref{fig:price_process}(b) illustrates paths given by the $\delta$-hedging strategy of the discretized Black-Scholes model.
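As announced above, the following minimal sketch implements the empirical objective $\tilde L_{t,\beta}$ for one time step; the function and tensor names are ours and not fixed by the procedure.
\begin{verbatim}
# Minimal sketch of the consumption loss ~L_{t,beta} for one time step.
# Hypothetical inputs, all of shape (N,):
#   F_t    -- network output F^{theta_t}(Y_t)
#   B_prev -- B_{t-1}^{theta_{t-1}} from the previous time step
#   G      -- samples of G(theta^*)
import tensorflow as tf

def consumption_loss(F_t, B_prev, G, beta):
    B_t = tf.maximum(F_t, B_prev)          # enforces that B is increasing
    penalty = tf.nn.relu(B_t - G)          # penalizes exceeding G(theta^*)
    return tf.reduce_mean(-B_t ** 2 + beta * penalty)
\end{verbatim}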
Finally, we plot the difference of the approximated superhedging price processes and the corresponding price process obtained by the $\delta$-hedging strategy in Figure \ref{fig:price_process}(c). \begin{figure}[htbp] \begin{subfigure}[Superhedging price process]{\includegraphics[width=0.49\textwidth]{pricepath.png}} \label{subfig:pricepath} \end{subfigure} \begin{subfigure}[$\delta$-hedging price process]{\includegraphics[width=0.49\textwidth]{delta_hedging_process.png}} \label{subfig:delta_process} \end{subfigure} \begin{center} \begin{subfigure}[Difference of the price processes]{\includegraphics[width=.6\textwidth]{price_diff.png}} \label{subfig:price_diff} \end{subfigure} \end{center} \caption{Superhedging price process compared to the $\delta$-hedging price process.} \label{fig:price_process} \end{figure} \subsection{Discussion} In finite market models as in Section \ref{sec:trinomial}, our methodology delivers approximations of the $\alpha$-quantile hedging and superhedging prices with small approximation error. It is also worth noting that the predicted superhedging price and the corresponding superhedging probability on the training set are consistent with the values on the test set.\\ In contrast, in models in which the price process has unbounded support, our numerical results indicate that the additional error caused by the discretization of the probability space cannot be ignored. However, we obtain consistent results for the $\alpha$-quantile hedging price on the training and test sets. Note also that, in Section \ref{sec:blackscholes}, the Barrier option can be superhedged with probability $100\%$ on the training and on the test set. \\ A further possible application of our methodology is given by superhedging in a model-free setting on prediction sets, see \cite{bartl2020pathwise}, \cite{bartl2019duality}, \cite{hou2018robust}, where prediction sets offer the opportunity to include beliefs in price developments or to select relevant price paths.
\section{Introduction} Let $\Mman$ be a compact connected surface and $\Pman$ be either the real line $\RRR$ or the circle $S^1$. For a closed subset $\Cman\subset\Mman$ denote by $\DiffMC$ the group of all $C^{\infty}$ diffeomorphisms of $\Mman$ fixed on $\Cman$. This group acts from the right on the space $C^{\infty}(\Mman,\Pman)$ by the following rule: if $\dif\in\DiffMC$ and $\func\in C^{\infty}(\Mman,\Pman)$, then the result of the action of $\dif$ on $\func$ is the composition map $\func\circ\dif:\Mman\to\Pman$. For $\func\in C^{\infty}(\Mman,\Pman)$ let \begin{align*} \StabfC &= \{\dif \in \DiffMC \mid \func \circ \dif = \func \}, & \OrbfC &= \{\func \circ \dif \mid \dif \in \DiffMC \} \end{align*} be respectively the \textit{stabilizer} and the \textit{orbit} of $\func$ under that action. Endow these spaces with the corresponding $C^{\infty}$ topologies and denote by $\DiffIdMC$ and $\StabIdfC$ the corresponding path components of $\id_{\Mman}$ in $\DiffMC$ and $\StabfC$, and by $\OrbffC$ the path component of $\OrbfC$ containing $\func$. We will omit $\Cman$ from the notation whenever it is empty. \begin{definition}\label{def:MorseMap} A smooth map $\func:\Mman\to\Pman$ will be called \textit{Morse} if \begin{itemize} \item all critical points of $\func$ are non-degenerate and belong to the interior of $\Mman$; \item the restriction of $\func$ to each connected component of $\partial\Mman$ is a constant map. \end{itemize} \end{definition} In a series of papers~\cite{Maksymenko:AGAG:2006, Maksymenko:TrMath:2008, Maksymenko:ProcIM:ENG:2010, Maksymenko:UMZ:ENG:2012} the author studied the homotopy types of $\StabIdf$ and $\Orbff$ for certain classes of smooth mappings $\func:\Mman\to\Pman$. The results concerning Morse maps can be summarized as follows. \begin{theorem}\label{th:fibration_DMX_Of} {\rm \cite{Maksymenko:AGAG:2006, Maksymenko:TrMath:2008, Maksymenko:ProcIM:ENG:2010, Maksymenko:UMZ:ENG:2012}.} Let $\func:\Mman\to\Pman$ be a Morse map with $n$ critical points, and $\Cman$ be a finite (possibly empty) union of regular (that is, containing no critical points) connected components of certain level sets of $\func$. Then the following statements hold. \medskip {\rm (1)} $\OrbffC = \Orb_{\func}(\func,\Cman\cup\partial\Mman)$ and this space {\bfseries has the homotopy type of a (possibly non-compact) $(2n-1)$-dimensional CW-complex}. \medskip {\rm (2)} The map $\sigma:\DiffMC \longrightarrow \OrbfC$ defined by $\sigma(\dif) = \func \circ \dif$ is a Serre fibration with fiber $\StabfC$, i.e.\! it has the homotopy lifting property for CW-complexes. Suppose either $\func$ has a critical point of index $1$ or $\Mman$ is non-orientable. Then $\StabIdf$ is contractible, $\pi_k\Orbf = \pi_k\Mman$ for $k\geq3$, $\pi_2\Orbf=0$, and for $\pi_1\Orbf$ we have the following short exact sequence of the fibration $\sigma$: \begin{equation}\label{equ:pi1Of_exact_sequence} 1 \longrightarrow \pi_1\DiffM \xrightarrow{~~\sigma~~} \pi_1\Orbf \xrightarrow{~~\partial~~} \pi_0\StabPrf\longrightarrow 1, \end{equation} where $\StabPrf=\Stabf \cap \DiffIdM$. If $\func$ is {\bfseries generic}, i.e.\! each critical level set of $\func$ contains exactly one critical point, then $\Orbff$ is {\bfseries homotopy equivalent to a $\pp$-dimensional torus $\Trs{\pp}$ for some $\pp\geq0$}. \medskip {\rm(3)} The restriction $\sigma|_{\DiffIdMC}:\DiffIdMC \longrightarrow \OrbffC$ is also a Serre fibration with fiber $\StabPrfC = \Stabf \cap \DiffIdMC$. Suppose either $\chi(\Mman)<0$ or $\Cman\not=\varnothing$.
Then $\DiffIdMC$ and $\StabIdfC$ are contractible and from the exact sequence of homotopy groups of the fibration $\sigma|_{\DiffIdMC}$ we get that $\pi_k\OrbfC=0$ for $k\geq2$ and that the boundary map \begin{equation}\label{equ:boundaryX_iso} \partial: \pi_1\OrbfC \ \longrightarrow \ \pi_0 \StabPrfC \end{equation} is an isomorphism. \qed \end{theorem} In the present note we obtain an exact description of the homotopy types of $\Orbff$ for all Morse maps $\func:\Mman\to\Pman$ in the case when $\Mman$ is orientable and distinct from $S^2$ and $\Trs{2}$, see Theorem~\ref{th:main_result_Of_TpG} below. Let $\func:\Mman\to\Pman$ be a Morse map. Consider the partition of $\Mman$ into connected components of level-sets $\func^{-1}(c)$, $c\in\Pman$. Then the corresponding factor space has the structure of a finite one-dimensional CW-complex and is called the \textit{Kronrod-Reeb} graph of $\func$, see e.g.~\cite{BolsinovFomenko:1997, Sharko:UMZ:2003}. We will denote this graph by $\fKRGraph$. Its vertices correspond to \textit{critical} (i.e.\! containing critical points) connected components of level-sets of $\func$ and to the connected components of $\partial\Mman$. Recall, \cite[\S3.1]{Maksymenko:AGAG:2006}, that $\Stabf$ naturally acts on $\fKRGraph$ by the following rule. Let $\dif\in\Stabf$. Then $\func\circ\dif=\func$, and so $\dif(\func^{-1}(c))= \func^{-1}(c)$ for all $c\in\Pman$. Hence $\dif$ permutes the connected components of $\func^{-1}(c)$, which are \textit{points} of $\fKRGraph$. In other words, $\dif$ yields a bijection of $\fKRGraph$, which is in fact a homeomorphism. Therefore we get an action homomorphism $\lambda:\Stabf \to \Aut(\fKRGraph)$ into the group of all homeomorphisms of $\fKRGraph$. The image of $\lambda$ is finite. Let \[ \fKRAut := \lambda\bigl(\StabPrf\bigr) \] be the group of all automorphisms of $\fKRGraph$ induced by diffeomorphisms from $\Stabf$ that are isotopic to the identity. Our main result is the following theorem. \begin{theorem}\label{th:main_result_Of_TpG} Suppose $\Mman$ is orientable and distinct from the $2$-sphere $S^2$ and the $2$-torus $\Trs{2}$. Then for each Morse map $\func:\Mman\to\Pman$ there exists a free action of the group $\fKRAut=\lambda(\StabPrf)$ on a $\pp$-dimensional torus $\Trs{\pp}$ for some $\pp\geq0$ such that $\Orbff$ is homotopy equivalent to the factor space $\Trs{\pp}/\fKRAut$. \end{theorem} This result refines statements (1) and (2) of Theorem~\ref{th:fibration_DMX_Of}. A sketch of the proof of Theorem~\ref{th:main_result_Of_TpG} will be given in \S\ref{sect:proof:th:main_result_Of_TpG} and \S\ref{sect:proof:prop:lambda_is_T_hom}. The proof is constructive in the sense that for every Morse map $\func:\Mman\to\Pman$ one can explicitly describe the corresponding free action of $\fKRAut$ on $\Trs{\pp}$ which gives a homotopy equivalence $\Trs{\pp}/\fKRAut \simeq \Orbff$. If $\Mman=S^2$, then by (2) of Theorem~\ref{th:fibration_DMX_Of} $\Orbff$ is not aspherical, and so it is not homotopy equivalent to $\Trs{\pp}/\fKRAut$ for any free $\fKRAut$-action on any torus $\Trs{\pp}$. On the other hand, there are some partial results supporting the conjecture that Theorem~\ref{th:main_result_Of_TpG} might also hold for $\Mman=\Trs{2}$, see~\cite{MaksymenkoFeshchenko:UMZ:ENG:2014}. In fact, Theorem~\ref{th:main_result_Of_TpG} is true for a larger class of maps $\func$ having singularities smoothly equivalent to homogeneous polynomials $\RRR^2\to\RRR$ without multiple factors. But in this case one should extend the notion of the Kronrod-Reeb graph to such maps.
This version of Theorem~\ref{th:main_result_Of_TpG} together with detailed proofs will appear elsewhere. \section{Preliminaries}\label{sect:preliminaries} \subsection{Wreath products}\label{sect:wreath_products} Let $\GGG$ and $\HHH$ be two groups. Then the set $\Maps{\HHH}{\GGG}$ of all maps $\HHH\to\GGG$ (not necessarily homomorphisms) is a group with respect to the point-wise multiplication. Moreover, the group $\HHH$ acts on $\Maps{\HHH}{\GGG}$ from the right by the following rule: if $\alpha:\HHH\to\GGG$, and $\hel\in\HHH$, then the result $\act{\alpha}{\hel}$ of the action of $\hel$ on $\alpha$ is defined by the formula: $\act{\alpha}{\hel}(\selg) = \alpha(\selg \hel)$, for all $\selg\in\HHH$. The semidirect product $\Maps{\HHH}{\GGG} \rtimes \HHH$ associated with this action is called the \emph{wreath product} of $\GGG$ and $\HHH$ and is denoted by $\GGG\wr\HHH$. More generally, let $\KKK$ be another group and $\mu:\KKK\to\HHH$ be a homomorphism. This homomorphism induces a natural right action of $\KKK$ on $\HHH$, and so one can define the corresponding semidirect product $\Maps{\HHH}{\GGG} \rtimes_{\mu} \KKK$ which will be called the \emph{wreath product of $\GGG$ and $\KKK$ over $\mu$} and denoted by $\GGG\wr_{\mu}\KKK$. Evidently, if $\mu = \id_{\HHH}:\HHH\to\HHH$ is the identity isomorphism, then $\GGG\wr_{\mu}\HHH$ is the same as $\GGG\wr\HHH$. In particular, for $m\geq2$ let $\mu:\ZZZ\to\ZZZ_m$ be the natural $\mathrm{mod}~m$ homomorphism. Then for every group $\Stab$ the wreath product $\Stab\wr_{\mu}\ZZZ$ will be denoted by \[\Stab\wrm{m}\ZZZ.\] Thus $\Stab\wrm{m}\ZZZ$ is the set $\Maps{\ZZZ_m}{\Stab}\times\ZZZ$ with the multiplication defined in the following way: if $(\amapgf, \aelg), (\bmapgf, \belg) \in \Maps{\ZZZ_m}{\Stab} \times \ZZZ$, then $(\amapgf, \aelg) (\bmapgf, \belg) =(\cmapgf, \aelg + \belg)$, where $\cmapgf:\ZZZ_m\to\Stab$ is given by $ \cmapgf(\selg) = \amapgf(\selg +\belg \ \mathrm{mod}~m) \bmapgf(\selg)$ for $\selg\in\ZZZ_m$. \subsection{Free actions on tori} For $\pp\geq0$ let $\Trs{\pp} = \RRR^{\pp}/\ZZZ^{\pp}$ be a $\pp$-dimensional torus which can also be regarded as a topological product of $\pp$ circles. We also assume that $\Trs{0}$ is a point. Suppose a finite group $G$ freely acts on $\Trs{\pp}$. Then the factor map $q:\Trs{\pp} \to \Trs{\pp}/ G$ is a covering map. This implies that $\Trs{\pp}/G$ is aspherical and we have the following short exact sequence: \[ 1 \longrightarrow \pi_1\Trs{\pp} \xrightarrow{~~q~~} \pi_1(\Trs{\pp}/G) \xrightarrow{~~\delta~~} G \longrightarrow 1, \] where $\delta$ is the corresponding boundary homomorphism. \begin{definition}\label{defn:Tprop} Let $\lambda:\Stab\to G$ be a group homomorphism. We will say that $\lambda$ \emph{arises from a free action on a torus} if there exists a free action of $G$ on $\Trs{\pp}$ for some $\pp\geq0$ and the following commutative diagram: \[ \xymatrix{ \pi_1(\Trs{\pp}/G) \ar[d]_{\delta} \ar[rr]^-{\alpha}_-{\cong} && \Stab \ar[d]^{\lambda}\\ G \ar[rr]^-{\beta}_-{\cong} && G } \] in which $\alpha$ and $\beta$ are isomorphisms. \end{definition} \begin{lemma}\label{lm:examples_T_prop} {\rm (1)}~A trivial homomorphism $e_n:\ZZZ^{n}\to \{1\}$, $n\geq0$, arises from the free trivial action of the unit group $\{1\}$ on $\Trs{n}$. 
\medskip {\rm (2)}~For each $m\geq 1$ the canonical $(\mathrm{mod}\ m)$-epimorphism $\mu:\ZZZ\to\ZZZ_m$ arises from the free action of $\ZZZ_m$ on $\Trs{1}$ defined as follows: if $x\in \Trs{1} = \RRR/\ZZZ$ and $k \in \ZZZ_m$, then $x\cdot k = x + \tfrac{k}{m} \ \mathrm{mod} \ 1$. \medskip {\rm (3)}~Suppose a homomorphism $\lambda_i:\Stab_i\to G_i$ arises from a free action of $G_i$ on $\Trs{\pp_i}$ for $i=1,\ldots,n$. Denote $\pp = \sum_{i=1}^{n}\pp_i$. Then the product homomorphism \[ \mathop{\times}\limits_{i=1}^{n}\lambda_i : \ \mathop{\times}\limits_{i=1}^{n}\Stab_i \ \longrightarrow \ \mathop{\times}\limits_{i=1}^{n} G_i\] arises from the free action of $\mathop{\times}\limits_{i=1}^{n} G_i$ on $\Trs{\pp} = \Trs{\pp_1}\times\cdots\times\Trs{\pp_n}$ defined as follows: if $x_i\in\Trs{\pp_i}$ and $g_i\in G_i$, then $(x_1,\ldots,x_n) \cdot (g_1,\ldots,g_n) = (x_1 \,g_1, \ldots, x_n\,g_n)$. \medskip {\rm (4)}~Suppose $\lambda:\Stab\to G$ arises from a free action of $G$ on $\Trs{\pp}$ and let $n\geq0$. Then the homomorphism $\bar{\lambda}:\Stab\times\ZZZ^n \to G$ defined by $\bar{\lambda}(s,z) = \lambda(s)$ arises from the free $G$-action on $\Trs{\pp+n}=\Trs{\pp}\times\Trs{n}$ defined as follows: if $x\in\Trs{\pp}$, $y\in\Trs{n}$, and $g\in G$, then $(x,y)\cdot g = (xg, y)$. \medskip {\rm (5)}~Suppose $\lambda:\Stab\to G$ arises from a free action of $G$ on $\Trs{\pp}$ and let $m\geq2$. Define the following homomorphism $\bar{\lambda}: \Stab\wrm{m} \ZZZ \longrightarrow G \wr \ZZZ_m$ by \begin{equation}\label{equ:wr_Tprop} \bar{\lambda}(\xi, k) = \bigl(\lambda\circ\xi, \ k \ \mathrm{mod}\ m\bigr), \end{equation} for $\bigl(\xi:\ZZZ_m\to\Stab, \ k\bigr) \in \Stab\wrm{m}\ZZZ$. Then $\bar{\lambda}$ arises from the free action of $G\wr\ZZZ_m$ on $\Trs{\pp m + 1} = \underbrace{\Trs{\pp}\times \cdots \times \Trs{\pp}}_{m} \times \Trs{1}$ defined as follows: if $x_i\in \Trs{\pp}$, $i=0,\ldots,m-1$, $y\in\Trs{1}$, and $\bigl(\alpha:\ZZZ_m \to G, \ k) \in G\wr\ZZZ_m$, then \[ (x_0,x_1,\ldots,x_{m-1}, y) \cdot (\alpha, k) = \Bigl(x_k\,\alpha(0), x_{k+1}\,\alpha(1),\ldots, x_{k+m-1}\,\alpha(m-1), \ y + \tfrac{k}{m} \ \mathrm{mod} \ 1 \Bigr), \] where all indices are taken modulo $m$. \end{lemma} \begin{proof} Statements (1)--(4) are easy. The proof of (5) is technical and will be published elsewhere. \end{proof} \section{Proof of Theorem~\ref{th:main_result_Of_TpG}}\label{sect:proof:th:main_result_Of_TpG} Suppose $\Mman$ is orientable and distinct from $S^2$ and $\Trs{2}$. Let also $\func:\Mman\to\Pman$ be a Morse map, $\lambda:\Stabf\to\Aut(\fKRGraph)$ be the action homomorphism, $G=\lambda(\StabPrf)$, and $\Cman$ be any (possibly empty) collection of connected components of $\partial\Mman$. Notice that each $\dif\in\StabPrf$ preserves every connected component of $\partial\Mman$ with its orientation, and so it is isotopic in $\StabPrf$ to a diffeomorphism $\dif_1$ fixed on $\partial\Mman$, see e.g.~\cite[Lemma~6.1]{Maksymenko:UMZ:ENG:2012}. In particular, we obtain that $\lambda(\dif)=\lambda(\dif_1)$, whence \[ \fKRAut=\lambda(\StabPrf) = \lambda\bigl(\Stab'(\func,\Cman)\bigr) = \lambda\bigl(\Stab'(\func,\partial\Mman)\bigr). \] Moreover, it is easy to see that for each $\dif\in\Stab'(\func,\Cman)$ its image $\lambda(\dif)$ depends only on the isotopy class $[\dif]$ of $\dif$ in $\Stab'(\func,\Cman)$, and so $\lambda$ induces the following epimorphism \begin{equation}\label{equ:lambdaX} \lambda:\pi_0\Stab'(\func,\Cman) \to \fKRAut.
\end{equation} \begin{proposition}\label{prop:lambda_is_T_hom} There exists a collection $\Cman$ of connected components of $\partial\Mman$ satisfying the following conditions: \begin{itemize} \item[{\rm(a)}] the boundary map $\partial:\pi_1\Orb(\func,\Cman) \to \pi_0\Stab'(\func,\Cman)$ is an isomorphism; \item[{\rm(b)}] the homomorphism~\eqref{equ:lambdaX} arises from a free $\fKRAut$-action on some torus $\Trs{\pp}$. \end{itemize} \end{proposition} It follows from (b) that we have an isomorphism $\isop:\pi_1(\Trs{\pp}/\fKRAut) \to \pi_0\Stab'(\func,\Cman)$, which together with {\rm(1)} of Theorem~{\rm\ref{th:fibration_DMX_Of}} and (a) gives another isomorphism: \[ \partial^{-1} \circ\isop: \ \pi_1(\Trs{\pp}/\fKRAut) \ \xrightarrow{~~\isop~~} \ \pi_0\Stab'(\func,\Cman) \ \xrightarrow{~~\partial^{-1}~~} \ \pi_1\Orb(\func,\Cman) \ \equiv \ \pi_1\Orbff. \] As both $\Trs{\pp}/\fKRAut$ and $\Orbff$ are aspherical, there exists a homotopy equivalence between these spaces such that the corresponding isomorphism of fundamental groups coincides with $\partial^{-1} \circ\isop$. This proves Theorem~\ref{th:main_result_Of_TpG} modulo Proposition~\ref{prop:lambda_is_T_hom}. \section{Proof of Proposition~\ref{prop:lambda_is_T_hom}}\label{sect:proof:prop:lambda_is_T_hom} \subsection*{Case $\Mman=D^2$ or $S^1\times I$} Let $\Cman$ be a connected component of $\partial\Mman$ and $\Stab = \Stab'(\func,\Cman)$. Then by~\eqref{equ:boundaryX_iso} the boundary map $\partial:\pi_1\Orb_{\func}(\func,\Cman) \to \pi_0\Stab$ is an isomorphism. Therefore it remains to show that the homomorphism $\lambda:\pi_0\Stab \to G$ arises from a certain free $\fKRAut$-action on a torus $\Trs{\pp}$ for some $\pp\geq0$. We will use induction on the number $n$ of critical points of $\func$. Notice also that $\Diff(\Mman,\Cman)$ is contractible \cite{Smale:ProcAMS:1959, EarleSchatz:DG:1970, Gramain:ASENS:1973}. Hence $\Diff(\Mman,\Cman) = \DiffId(\Mman,\Cman)$ and so $\Stab = \Stab(\func) \cap\DiffId(\Mman,\Cman) = \Stab(\func) \cap\Diff(\Mman,\Cman) = \Stab(\func,\Cman)$. \begin{figure}[h] \centerline{\includegraphics[height=2cm]{function_n01.eps}} \caption{Cases $n=0,1$} \label{fig:function_n01} \end{figure} 1) Suppose $n=0$. This is possible only when $\Mman=S^1\times I$, see Figure~\ref{fig:function_n01}. We can also assume that $\Cman=S^1\times 0$ and each of the circles $S^1\times s$, $s\in I$, is a connected component of some level set of $\func$. In this case $\fKRGraph$ consists of a single edge, and therefore $\fKRAut = \lambda(\pi_0\Stab) = \{1\}$. It follows from the proof of Case 2 of Lemma~6.3 from~\cite[page 261]{Maksymenko:AGAG:2006} that $\pi_0\Stab = \{1\}$. Then by (1) of Lemma~\ref{lm:examples_T_prop} $\lambda:\pi_0\Stab\to\fKRAut$ arises from the trivial action of $\fKRAut$ on $\Trs{0}$. \medskip 2) Suppose $n=1$. In this case $\Mman=D^2$, $\Cman=\partial\Mman$, the unique critical point $q$ of $\func$ is a local extremum, and $\fKRGraph$ again consists of a single edge, so $\fKRAut = \{1\}$, see Figure~\ref{fig:function_n01}. By the Morse lemma we have that $\func(x,y) = x^2+y^2$ in some local representation of $\func$ at $q$; therefore, without losing generality, one can assume that $\Mman =\{ z\in\CCC \mid |z|\leq1 \}$ is the unit $2$-disk in the complex plane, and the level sets of $\func$ are the origin $0\in\CCC$ and the concentric circles $S_t = \{|z| = t\}$, $t\in(0,1]$. It follows from the proof of Case 3 of Lemma~6.3 from~\cite[page 261]{Maksymenko:AGAG:2006} that $\pi_0\Stab = \{1\}$.
Hence again by (1) of Lemma~\ref{lm:examples_T_prop} $\lambda:\pi_0\Stab\to\fKRAut$ arises from the trivial action of $\fKRAut$ on $\Trs{0}$. \medskip 3) Now let $n>1$. Assume that we have proved our statement for all Morse maps $D^2\to\Pman$ and $S^1\times I\to\Pman$ having less than $n$ critical points, and let $\func:\Mman\to\Pman$ be a Morse map with exactly $n$ critical points. Notice that the Kronrod-Reeb graph $\fKRGraph$ of $\func$ is a finite tree. Let $\vv$ be the vertex of $\fKRGraph$ corresponding to $\Cman$. Then there exists a unique edge $e = (\vv,\uu)$ in $\fKRGraph$ incident to $\vv$. Consider the other vertex $\uu$ of $e$ and let $\Uman$ be the connected component of the level set of $\func$ corresponding to $\uu$, see Figure~\ref{fig:function_n2}. For $\eps>0$ let $J_{\eps}$ be a closed $\eps$-neighbourhood of the point $t = \func(\Uman)$ in $\Pman$ and $\Nman$ be the connected component of $\func^{-1}(J_{\eps})$ containing $\Uman$. We can choose $\eps$ so small that $\Nman \cap\partial\Mman = \varnothing$ and the set $\Nman\setminus\Uman$ contains no critical points of $\func$. \begin{figure}[h] \centerline{\includegraphics[height=4cm]{function_n2.eps}} \caption{} \label{fig:function_n2} \end{figure} Let $\Zman$ be the connected component of $\overline{\Mman\setminus\Nman}$ containing $\Cman$. Then $\Zman$ is a cylinder containing no critical points of $\func$. Let $\tau:\Mman\to\Mman$ be a Dehn twist preserving $\func$ and supported in the interior of $\Zman$, see~\cite[\S6]{Maksymenko:AGAG:2006}, and $\tb\in\pi_0\Stab$ be its isotopy class. Evidently, $\lambda(\tb) = \id_{\fKRGraph}$. \begin{lemma}\label{lm:epimorphism_eta}{\rm\cite{Maksymenko:pi1Repr:2014}} There exists an epimorphism $\eta:\pi_0\Stab \to \ZZZ$ having the following properties. \begin{itemize} \item[\rm(a)] Denote $m = \eta(\tb)$. Then $m\geq 1$. \item[\rm(b)] If $\hb\in\pi_0\Stab$ is such that $\eta(\hb)$ is divisible by $m$, then the class $\hb$ contains a representative fixed on some neighbourhood of $\Nman$. \item[\rm(c)] If $\eta(\hb) = 0$, then $\hb$ contains a representative fixed on some neighbourhood of $\Nman \cup \Zman$. \end{itemize} \end{lemma} For each connected component $\Xman$ of $\overline{\Mman\setminus\Nman}$ put $\hat{\Xman} := \Xman \cap \partial\Nman$. Then the pair $(\Xman,\hXman)$ is diffeomorphic either to $(D^2,\partial D^2)$ or to $(S^1\times I, S^1\times 0)$. As $\func$ is constant on connected components of $\partial\Xman$, the restriction $\func_{\Xman}=\func|_{\Xman}: \Xman \to \Pman$ is Morse in the sense of Definition~\ref{def:MorseMap}. Let $\Gamma_{\Xman}$ be the Kronrod-Reeb graph of $\func_{\Xman}$. Then $\Gamma_{\Xman}$ can be regarded as a subtree of $\fKRGraph$. Denote $\Stab_{\Xman} := \Stab(\func_{\Xman},\hXman)$. Let also $\lambda_{\Xman}:\pi_0\Stab_{\Xman}\to \Aut(\Gamma_{\Xman})$ be the corresponding action homomorphism, and $G_{\Xman} = \lambda_{\Xman}\bigl(\pi_0\Stab_{\Xman}\bigr)$. Since the number of critical points of $\func_{\Xman}$ is less than $n$, we have by the inductive assumption that $\lambda_{\Xman}:\pi_0\Stab_{\Xman}\to G_{\Xman}$ arises from a free action on some torus. Now the situation splits into two cases. 3a) Suppose $m=1$. Then by (b) of Lemma~\ref{lm:epimorphism_eta} every $\hb\in\pi_0\Stab$ has a representative fixed near $\Nman$. Hence if we denote by $\Xman_1,\ldots,\Xman_a$ all the connected components of $\overline{\Mman\setminus\Nman}$ distinct from $\Zman$, then $\dif(\Xman_i) = \Xman_i$ for all $i=1,\ldots,a$ and all $\dif\in\Stab$.
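For orientation, we note the simplest instance of the lemma below (a consistency check rather than a new statement): if $a=1$ and the group $\pi_0\SX{1}$ is trivial, then the lemma yields $\pi_0\Stab\cong\ZZZ$, generated by the class $\tb$ of the Dehn twist $\tau$, while $\fKRAut$ is trivial; hence $\lambda$ coincides with $e_1:\ZZZ\to\{1\}$ and arises from the trivial action of the unit group on $\Trs{1}$ by (1) of Lemma~\ref{lm:examples_T_prop}.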
\begin{lemma}\label{lm:m_1_iso_pi0S} {\rm\cite{Maksymenko:DefFuncI:2014, Maksymenko:pi1Repr:2014}} There is the following commutative diagram: \begin{equation}\label{equ:m_1_iso_pi0S} \xymatrix { \bigl(\mathop{\times}\limits_{i=1}^{a} \pi_0\SX{i}\bigr) \ \times \ \ZZZ \ar[rr]^-{\alpha}_-{\cong} \ar[d]_-{\bar{\lambda}} && \pi_0\Stab \ar[d]^-{\lambda} \\ \mathop{\times}\limits_{i=1}^{a} \GX{i} \ar[rr]^-{\beta}_-{\cong} && \fKRAut } \end{equation} in which $\alpha$ and $\beta$ are isomorphisms, and $\bar{\lambda}$ is defined by \[ \bar{\lambda}(\hb_1,\ldots,\hb_a,k) = \bigl(\lambda_1(\hb_1),\ldots,\lambda_a(\hb_a)\bigr)\] for $\hb_i\in\pi_0\SX{i}$ and $k\in\ZZZ$. Hence by {\rm(3)} and {\rm(4)} of Lemma~{\rm\ref{lm:examples_T_prop}}\, $\lambda$ arises from a free action on a torus. \end{lemma} \begin{proof} Isomorphisms $\alpha$ and $\beta$ are constructed in~\cite{Maksymenko:pi1Repr:2014} and~\cite{Maksymenko:DefFuncI:2014}, respectively. We will just recall their definitions. Let $\hb_i \in \pi_0\SX{i}$, $i=1,\ldots,a$, and $k\in\ZZZ$. Choose a representative $\dif_i\in\SX{i}$ of $\hb_i$ such that $\dif_i$ is fixed on some neighbourhood of $\hXman_i$, and so it extends by the identity to a diffeomorphism of $\Mman$. Let $\hb\in\pi_0\Stab$ be the isotopy class of the composition $\dif_1\circ\cdots\circ\dif_a$. Put $\alpha(\hb_1,\ldots,\hb_a, k) = \hb\tb^{k}$. It is shown in~\cite{Maksymenko:pi1Repr:2014} that $\alpha$ is a group isomorphism. Let $\gamma_i\in\GX{i}$, $i=1,\ldots,a$. Notice that $\GrX{i}$ is a subtree of $\fKRGraph$ and that $\gamma_i$ extends by the identity to an automorphism of all of $\fKRGraph$. Let $\gamma = \gamma_1\circ\cdots\circ\gamma_a$. Then it is easy to show that $\gamma\in\fKRAut$, i.e.\! it is induced by some $\dif\in\Stab$, so we set $\beta(\gamma_1,\ldots,\gamma_a) = \gamma$. It is proven in~\cite{Maksymenko:DefFuncI:2014} that $\beta$ is a group isomorphism. It also easily follows from the definitions that the diagram~\eqref{equ:m_1_iso_pi0S} is commutative. \end{proof} \medskip 3b) Suppose $m\geq2$. Let $\Xman_1,\ldots,\Xman_a$ be all the connected components of $\overline{\Mman\setminus\Nman}$ distinct from $\Zman$ and \textit{invariant} with respect to all $\dif\in\Stab$, and $\Yman$ be the collection of all connected components of $\overline{\Mman\setminus\Nman}$ that are \textit{not invariant} under $\Stab$. \begin{lemma}\label{lm:m_geq_2_iso_pi0S} {\rm\cite{Maksymenko:DefFuncI:2014, Maksymenko:pi1Repr:2014}} There exist a subcollection $\widetilde{\Yman}=\{\Yman_{1},\ldots,\Yman_{b}\} \subset \Yman$ and $\gdif \in \Stab$ having the following properties: \begin{enumerate} \item $\eta(\gdif) = 1$ and $\gdif$ is fixed near each $\Xman_i$, $i=1,\ldots,a$; \item $\gdif^{i}(\widetilde{\Yman}) \cap \gdif^{j}(\widetilde{\Yman}) = \varnothing$ for $i\not=j \in \{0,\ldots,m-1\}$; \item $\Yman = \mathop{\cup}\limits_{i=0}^{m-1} \gdif^{i}(\widetilde{\Yman})$; \item $\gdif^{m}$ preserves every connected component of $\overline{\Mman\setminus\Nman}$; \item $\lambda(\gdif^{m}) = \id_{\fKRGraph}$.
\end{enumerate} Moreover, there is the following commutative diagram: \begin{equation}\label{equ:m_geq_2_iso_pi0S} \xymatrix { \bigl(\mathop{\times}\limits_{i=1}^{a} \pi_0\SX{i}\bigr) \ \times \ \Bigl(\bigl(\mathop{\times}\limits_{j=1}^{b}\pi_0\SY{j}\bigr) \ \wrm{m}\ \ZZZ\Bigr) \ar[rr]^-{\alpha}_-{\cong} \ar[d]_-{\bar{\lambda}} && \pi_0\Stab \ar[d]^-{\lambda} \\ \bigl(\mathop{\times}\limits_{i=1}^{a} \GX{i}\bigr) \ \times \ \Bigl(\bigl(\mathop{\times}\limits_{j=1}^{b}\GY{j}\bigr) \ \wr\ \ZZZ_m\Bigr) \ar[rr]^-{\beta}_-{\cong} && \fKRAut } \end{equation} in which $\alpha$ and $\beta$ are isomorphisms and $\bar{\lambda}$ is defined by \[ \bar{\lambda}(\hb_1,\ldots,\hb_a, \xi, k) = \Bigl(\lambda_1(\hb_1),\ldots, \lambda_a(\hb_a), \ \ \bigl(\mathop{\times}\limits_{j=1}^{b} \lambda_{j}\bigr) \circ \xi, \ \ k \ \mathrm{mod} \ m \Bigr), \] for $\hb_i\in\pi_0\SX{i}$, a map $\xi:\ZZZ_m\to\mathop{\times}\limits_{j=1}^{b}\pi_0\SY{j}$, and $k\in\ZZZ$. Hence by {\rm(3)} and {\rm(5)} of Lemma~{\rm\ref{lm:examples_T_prop}} \ $\lambda$ arises from a free action on a torus. \end{lemma} \begin{proof} Isomorphisms $\alpha$ and $\beta$ are constructed in~\cite{Maksymenko:pi1Repr:2014} and~\cite{Maksymenko:DefFuncI:2014}, respectively. Again we will just recall their definitions. Let $\hb_i \in \pi_0\SX{i}$, $i=1,\ldots,a$, $\xi=(\xi_1,\ldots,\xi_b):\ZZZ_m \to \mathop{\times}\limits_{j=1}^{b}\pi_0\SY{j}$ be any map, and $k\in\ZZZ$. Choose a representative $\dif_i\in\SX{i}$ of $\hb_i$ fixed on some neighbourhood of $\hXman_i$. Let also $\dif_{js}$, $1\leq j\leq b$, $s\in\ZZZ_m$, be a representative of $\xi_j(s) \in \pi_0\SY{j}$ fixed on some neighbourhood of $\hYman_{j}=\Yman_{j}\cap\partial\Nman$, and $\dif:\Mman\to\Mman$ be a map given by the formula: \[ \dif= \begin{cases} \dif_i, & \text{on} \ \Xman_i, \ i=1,\ldots,a \\ \gdif^{s+k}\circ \dif_{js} \circ \gdif^{-s}, & \text{on} \ \gdif^{s}(\Yman_{j}), \ \text{for} \ s\in\ZZZ_m \ \text{and} \ j=1,\ldots,b, \\ \gdif^{k}, & \text{on} \ \Nman \cup \Zman. \end{cases} \] Then $\dif\in\Stab$ and we define $\alpha(\hb_1,\ldots,\hb_a, \xi, k) \in \pi_0\Stab$ to be the isotopy class of $\dif$. It is shown in~\cite{Maksymenko:pi1Repr:2014} that $\alpha$ is a group isomorphism. Let $\gamma_i\in\GX{i}$, $i=1,\ldots,a$, $\nu=(\nu_1,\ldots,\nu_b):\ZZZ_m \to \mathop{\times}\limits_{j=1}^{b}\GY{j}$ be a map, and $k\in\ZZZ_m$. Define $\gamma:\fKRGraph\to\fKRGraph$ by the following formula: \[ \gamma= \begin{cases} \gamma_i, & \text{on} \ \GrX{i}, \ i=1,\ldots,a \\ \lambda(\gdif)^{s+k}\circ \nu_j(s) \circ \lambda(\gdif)^{-s}, & \text{on} \ \lambda(\gdif)^{s}(\GrY{j}), \ \text{for} \ s\in\ZZZ_m \ \text{and} \ j=1,\ldots,b, \\ \lambda(\gdif)^{k}, & \text{on the remaining part of $\fKRGraph$}. \end{cases} \] Then $\gamma\in\fKRAut$ and the correspondence $\beta:(\gamma_1,\ldots,\gamma_a, \nu, k) \longmapsto \gamma$ is a group isomorphism, see~\cite{Maksymenko:DefFuncI:2014}. It also easily follows from the definitions that the diagram~\eqref{equ:m_geq_2_iso_pi0S} is commutative. \end{proof} \subsection*{General case} Suppose $\Mman$ is orientable and distinct from $S^2$, $\Trs{2}$, $\Disk$, and $S^1\times I$. Set $\Cman = \partial\Mman$ and denote $\Stab = \Stab'(\func,\Cman)$. Then $\chi(\Mman)<0$ whence by~\eqref{equ:boundaryX_iso} the boundary map $\partial:\pi_1\Orb_{\func}(\func,\Cman) \to \pi_0 \Stab$ is an isomorphism.
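For orientation, here is a simple consistency check of the lemma below (not a new claim): if each subsurface $\Bman_i$ turns out to be a $2$-disk containing exactly one critical point of $\func$, then by case 2) above each group $\pi_0\Stab_i$ is trivial, so the isomorphism $\alpha$ forces $\pi_0\Stab$ to be trivial as well, and $\lambda$ then arises from the trivial action of the trivial group on $\Trs{0}$ by (1) of Lemma~\ref{lm:examples_T_prop}.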
\begin{lemma}{\rm\cite{Maksymenko:MFAT:2010, Maksymenko:DefFuncI:2014}} If $\chi(\Mman)<0$, then there exist mutually disjoint subsurfaces $\Bman_1,\ldots,\Bman_n$ of $\Mman$ having the following properties: {\rm(a)} $\Bman_i$ is diffeomorphic either to a $2$-disk $\Disk$ or to a cylinder $S^1\times I$, and the restriction $\func_i := \func|_{\Bman_i}: \Bman_i \to \Pman$ is Morse for all $i=1,\ldots,n$. {\rm(b)} Let $\fiKRGraph{i}$ be the Kronrod-Reeb graph of $\func_i$, $\Stab_i = \Stab'(\func_i,\partial\Bman_i)$, $\lambda_i:\pi_0\Stab_i\to \Aut(\fiKRGraph{i})$ be the corresponding action homomorphism, and $\fiKRAut{i} = \lambda_i(\pi_0\Stab_i)$, $i=1,\ldots,n$. Then we have the following commutative diagram: \begin{equation}\label{equ:surf_B_iso} \xymatrix{ \mathop{\times}\limits_{i=1}^{n}\pi_0\Stab_i \ar[rr]^-{\alpha}_-{\cong} \ar[d]^{\mathop{\times}\limits_{i=1}^{n}\lambda_i} && \pi_0\Stab \ar[d]_{\lambda} \\ \mathop{\times}\limits_{i=1}^{n} \fiKRAut{i} \ar[rr]^-{\beta}_-{\cong} && \fKRAut } \end{equation} in which $\alpha$ and $\beta$ are isomorphisms. \end{lemma} \begin{proof} Surfaces $\{\Bman_i\}_{i=1}^{n}$ and the isomorphism $\alpha$ are defined in~\cite[Theorem~1.7]{Maksymenko:MFAT:2010}. The isomorphism $\beta$ and diagram~\eqref{equ:surf_B_iso} are constructed in~\cite{Maksymenko:DefFuncI:2014}. In fact, $\alpha$ and $\beta$ are similar to the corresponding isomorphisms from Lemma~\ref{lm:m_1_iso_pi0S} and therefore we skip their descriptions. \end{proof} Since each $\Bman_i$ is either a $2$-disk $\Disk$ or a cylinder $S^1\times I$, the homomorphism $\lambda_i$ arises from a free action of $\fiKRAut{i}$ on some torus. Then by (3) of Lemma~\ref{lm:examples_T_prop} so does $\lambda:\pi_0\Stab\to G$. This completes the proof of Proposition~\ref{prop:lambda_is_T_hom}.
\section{Introduction}\label{sect:intro} \noindent Schanuel's lemma is a useful tool in homological algebra and category theory. It appears to have come about as a response to a question by Kaplansky, see \cite[p.~166]{LamLectures}, and simplifies the definition of the projective (or injective) homological dimension in module categories, hence in abelian categories. The typical categories that arise in functional analysis are not abelian, but lately the use of exact structures on additive categories of Banach modules and related ones has been suggested and indeed exploited successfully. In~\cite{B11}, B\"uhler develops homological algebra for bounded cohomology in the setting of Quillen's exact categories. In~\cite{AM2}, exact categories of sheaves of operator modules over \textsl{C*}-ringed spaces are studied. Relative cohomology and cohomological dimension for (not necessarily self-adjoint) operator algebras are the topic of~\cite{MR2021}, see also~\cite{Rosb2020}. In view of this, it seems beneficial to establish an injective version of Schanuel's lemma for exact categories and show how it yields the injective dimension theorem. When we equip an additive category $\mathcal A$ with an exact structure we fix a pair $(\mathcal M,\mathcal P)$ consisting of a class of monomorphisms $\mathcal M$ and a class of epimorphisms $\mathcal P$ such that each $\mu\in\mathcal M$ and each $\pi\in\mathcal P$ fit into a kernel-cokernel pair, which we write as \[ \begin{tikzpicture}[auto] \node(E) {$E$}; \node[right=11mm of E](F) {$F$}; \node[right=11mm of F](G) {$G$}; \draw[>->] (E) to node {\footnotesize{$\mu$}} (F); \draw[->>] (F) to node {\footnotesize{$\pi$}} (G); \end{tikzpicture} \] where $E,F$ and $G$ are objects in~$\mathcal A$. We require that $\mathcal M$ and $\mathcal P$ contain all identity morphisms and are closed under composition, and term their elements \textit{admissible monomorphisms\/} and \textit{admissible epimorphisms}, respectively. Furthermore, the push-out of an admissible monomorphism along an arbitrary morphism exists and yields an admissible monomorphism, and, likewise, the pull-back of an admissible epimorphism along an arbitrary morphism exists and yields an admissible epimorphism. If these conditions are fulfilled and $(\mathcal M,\mathcal P)$ is invariant under isomorphisms, $(\mathcal M,\mathcal P)$ is called \textit{an exact structure\/} on~$\mathcal A$ and will typically be denoted by~$\Ex$. The pair $(\mathcal A,\Ex)$ is said to be an \textit{exact category}. Unlike in abelian categories, not every morphism in an exact category has a canonical factorisation into an epimorphism followed by a monomorphism. One therefore has to restrict to \textit{admissible morphisms}, which are those that arise as $\mu\circ\pi$ for some $\mu\in\mathcal M$ and $\pi\in\mathcal P$. (It is easy to check that, once such a factorisation exists, it is unique up to unique isomorphism.) The kernel-cokernel pairs replace the usual short exact sequences in abelian categories, while long exact sequences are built from admissible morphisms. A very readable introduction to exact categories is given in~\cite{B08}. In this note, we provide the details of how Schanuel's lemma works in general exact categories and establish the Injective Dimension Theorem (Theorem~\ref{Theorem: Injective dimension theorem}). \section{Preliminaries}\label{Section: Prelim} \noindent We include here the necessary terminology and initial results, for a fixed exact category $(\mathcal A,\Ex)$, where $\Ex=(\mathcal M,\mathcal P)$.
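Two standard examples from~\cite{B08} may help to fix ideas (we recall them for orientation only): on any additive category~$\mathcal{A}$, the class of all split kernel-cokernel pairs (see Proposition~\ref{Prop: Product addcats} below) constitutes the smallest exact structure on~$\mathcal{A}$, while on an abelian category the class of all short exact sequences constitutes the largest one.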
\begin{definition}\label{Definition: M-injective} An object $I$ in an exact category $(\mathcal{A},\Ex)$ is \textit{$\mathcal{M}$-injective} if, when given $E\xrighttail{\mu}{F}$ and a morphism $f\in\Morunder{E}{I}{\mathcal{A}}$, for objects $E,F\in\mathcal{A}$, there exists a morphism ${g}\in\Morunder{F}{I}{\mathcal{A}}$ making the following diagram commutative \[ \begin{tikzpicture}[auto] \node (E) {$E$}; \node (F) [right=2cm of E]{$F$}; \node (I)[below=1.2cm of E]{$I$}; \draw[>->] (E)--node{\footnotesize{$\mu$}}(F); \draw[->] (E)--node[swap]{\footnotesize{$f$}}(I); \draw[->, dashed] (F)--node{ }(I); \end{tikzpicture} \] The exact category has \textit{enough $\mathcal{M}$-injectives} if, for every $E\in\mathcal{A}$, there exist an $\mathcal{M}$-injective object~$I$ and an admissible monomorphism $E\xrighttail{ }{I}$. \end{definition} We will also make use of the following characterisations of $\mathcal{M}$-injective objects. \begin{prop}\label{Prop: M-injective} Let $E$ be an object in an exact category $(\mathcal{A},\Ex)$. The following are equivalent. \begin{enumerate}[label=\upshape(\roman*)] \item $E$ is $\mathcal{M}$-injective; \item Every admissible monomorphism $E\xrighttail{ }{F}$, for $F\in\mathcal{A}$, has a left inverse; \item There exist an $\mathcal{M}$-injective object $I\in\mathcal{A}$ and a morphism $E\longrightarrow{I}$ with a left inverse (i.e.,~$E$ is a retract of an $\mathcal{M}$-injective object). \end{enumerate} \end{prop} \noindent The arguments are standard. As exact categories are additive, we can form the product of any two objects (and thus, of any finite number of objects). \begin{prop}\label{Prop: Product addcats} Let $E, F, G$ be objects in an additive category~$\mathcal{A}$. The following are equivalent: \begin{enumerate}[label=\upshape(\roman*)] \item $F$ is a product of $E$ and $G$; \item $F$ is a coproduct of $E$ and $G$; \item There exist a kernel-cokernel pair in $\mathcal{A}$, \begin{equation}\label{Diagram: biproduct} \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \node(E) {$E$}; \node[right=11mm of E](F) {$F$}; \node[right=11mm of F](G) {$G$}; \draw[>->] (E) to node {\footnotesize{$\mu$}} (F); \draw[->>] (F) to node {\footnotesize{$\pi$}} (G); \end{tikzpicture} \end{equation} and morphisms $\widetilde{\mu}\in\Morunder{F}{E}{\mathcal{A}}$ and $\widetilde{\pi}\in\Morunder{G}{F}{\mathcal{A}}$ such that $\widetilde{\mu}\circ\mu=\idntyof{E}$ and $\pi\circ\widetilde{\pi}=\idntyof{G}$, and $\mu\circ\widetilde{\mu}+\widetilde{\pi}\circ\pi=\idntyof{F}$; \item There exist a kernel-cokernel pair in $\mathcal{A}$, \begin{equation}\label{Diagram: biproduct admon} \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \node(E) {$E$}; \node[right=11mm of E](F) {$F$}; \node[right=11mm of F](G) {$G$}; \draw[>->] (E) to node {\footnotesize{$\mu$}} (F); \draw[->>] (F) to node {\footnotesize{$\pi$}} (G); \end{tikzpicture} \end{equation} and a morphism $\widetilde{\mu}\in\Morunder{F}{E}{\mathcal{A}}$ such that $\widetilde{\mu}\circ\mu=\idntyof{E}$, the identity morphism on~$E$. \end{enumerate} Moreover, if these equivalent conditions are met, the kernel-cokernel pair in Diagram~\eqref{Diagram: biproduct admon} will belong to every exact structure that can be placed on~$\mathcal{A}$. \begin{proof} Finite products, coproducts and biproducts coincide in an additive category (see, e.g., \cite[Proposition~7.1--Corollary~7.3.]{O2000}), and condition~(iii) is just the definition of $F$ being a biproduct of $E$ and~$G$. 
That condition~(iii) is equivalent to condition~(iv) can be proven in the exact same way as the `Splitting Lemma' in module theory (see, e.g., \cite[Proposition~4.3]{MacLane_Homology}). The final statement of this proposition is a direct consequence of the conditions required for monomorphisms and epimorphisms to be admissible; see \cite[Lemma~2.7]{B08} for details. \end{proof} \end{prop} Kernel-cokernel pairs satisfying condition (iii) of Proposition~\ref{Prop: Product addcats} are said to be \textit{split}. For objects $E$ and $F$ in $\mathcal{A}$ we will denote their (co)product by $E\oplus{F}$. \begin{prop}\label{Prop: Direct sum kercokpair} Suppose $\kercok{E}{F}{G}{\footnotesize{\mu}}{\footnotesize{\pi}}$ is a kernel-cokernel pair in $\Ex$. \begin{enumerate}[label=\upshape(\roman*)] \item For any $A\in\mathcal{A}$, there is a kernel-cokernel pair in $\Ex$, \[ \kercokvar{E\oplus{A}}{F\oplus{A}}{G}{ }{ }{10} \] \item If $F\cong E\oplus{G}$, then $F$ is $\mathcal{M}$-injective if and only if both $E$ and $G$ are $\mathcal{M}$-injective. \end{enumerate} \begin{proof} We first prove (i). For $A\in\mathcal{A}$, there exist split kernel-cokernel pairs \[\begin{tikzpicture}[auto] \node(1) {$E$}; \node[right=9mm of 1] (2) {$E {\oplus} A$}; \node[right=9mm of 2] (3) {$A$}; \node[right=10mm of 3] (and) {\textup{and}}; \node[right=10mm of and](4) {$A$}; \node[right=9mm of 4] (5) {$F {\oplus} A$}; \node[right=9mm of 5] (6) {$F$}; \draw[>->] (1) to node {{\footnotesize{$\iota$}}} (2); \draw[>->] (4) to node {{\footnotesize{$\theta$}}} (5); \draw[->>] (2) to node {{\footnotesize{$\rho$}}} (3); \draw[->>] (5) to node {{\footnotesize{$\tau$}}} (6); \end{tikzpicture} \] and $\pi\circ\tau\in\Morunder{F\oplus{A}}{G}{\mathcal{A}}$ is an admissible epimorphism, as a composition of morphisms in~$\mathcal{P}$. Define $\varphi\in\Morunder{E\oplus{A}}{F\oplus{A}}{\mathcal{A}}$ by \[\varphi=\widetilde{\tau}\circ{\mu}\circ\widetilde{\iota} +\theta\circ\rho\] using the same notation as in Proposition~\ref{Prop: Product addcats}. Then $(\varphi, \pi\circ\tau)$ is the desired kernel-cokernel pair. To show this, it is enough to demonstrate that $\varphi$ is a kernel of $\pi\circ\tau$. First note that \[ (\pi\circ\tau)\circ\varphi = \pi\circ(\tau\circ\widetilde{\tau})\circ\mu\circ\widetilde{\iota} + \pi\circ(\tau\circ\theta)\circ\rho = (\pi\circ\mu)\circ\widetilde{\iota} = 0, \] since $\tau\circ\widetilde{\tau}=\idntyof{F}$, $\tau\circ\theta=0$ and $\pi\circ\mu=0$. Now suppose there exist $B\in\mathcal{A}$ and a morphism $f\in\Morunder{B}{F\oplus{A}}{\mathcal{A}}$ such that $(\pi\circ\tau)\circ{f}=0$. As $\mu$ is a kernel for $\pi$, there exists a unique morphism $g'\in\Morunder{B}{E}{\mathcal{A}}$ such that $\mu\circ{g'}=\tau\circ{f}$. Define $g\in\Morunder{B}{E\oplus{A}}{\mathcal{A}}$ by \[ g=\iota\circ{g'}+\widetilde{\rho}\circ\widetilde{\theta}\circ{f}. \] Then $\varphi\circ{g}=(\widetilde{\tau}\circ\tau)\circ{f} + (\theta\circ\widetilde{\theta})\circ{f}= \idntyof{F\oplus{A}}\circ{f}=f$. To finish the proof of (i), we show that there is no other morphism $h\in\Morunder{B}{E\oplus{A}}{\mathcal{A}}$ such that $\varphi\circ{h}=f$. Suppose we have such a morphism $h$. Then, $\widetilde{\theta}\circ{f} = \widetilde{\theta}\circ\varphi\circ{h} = \rho\circ{h}$, and $\mu\circ{g'} = \tau\circ f = \tau \circ \varphi\circ h = \mu \circ \widetilde{\iota}\circ h,$ and therefore $g'=\widetilde{\iota}\circ h$. Combining these facts gives: \[ h = \idntyof{E \oplus A}\circ h = (\iota \circ \widetilde{\iota} + \widetilde{\rho} \circ \rho)\circ h =\iota \circ g' + \widetilde{\rho}\circ\widetilde{\theta}\circ{f} =g, \] as required.
For assertion~(ii) suppose $F\cong E\oplus G$. Then there exist morphisms $\widetilde{\mu}\in\Morunder{F}{E}{\mathcal{A}}$ and $\widetilde{\pi}\in\Morunder{G}{F}{\mathcal{A}}$ such that $\widetilde{\mu}\circ\mu=\idntyof{E}$ and $\pi\circ\widetilde{\pi}=\idntyof{G}$, and $\mu\circ\widetilde{\mu}+\widetilde{\pi}\circ\pi=\idntyof{F}$. In particular, $E$ and $G$ are retracts of~$F$. By Proposition~\ref{Prop: M-injective}, if $F$ is $\mathcal{M}$-injective so are $E$ and $G$. Finally, suppose $E$ and $G$ are $\mathcal{M}$-injective and there is an admissible monomorphism \[\begin{tikzpicture}[auto] \node(F) {$F$}; \node[right=14mm of F](B) {$B$}; \draw[>->] (F) to node {{\footnotesize{$f$}}} (B); \end{tikzpicture} \] where $B\in\mathcal{A}$. Because $E$ and $G$ are $\mathcal{M}$-injective, there exist $g_E\in\Morunder{B}{E}{\mathcal{A}}$ such that $\widetilde{\mu}=g_E \circ f$ and $g_G\in\Morunder{B}{G}{\mathcal{A}}$ such that $\pi= g_G \circ f$. Let $g= \mu\circ g_E + \widetilde{\pi}\circ g_G$; then $g$ is a left inverse of $f$. Indeed, \[ g \circ f= \mu\circ (g_E \circ f) + \widetilde{\pi}\circ (g_G \circ f) = \mu\circ\widetilde{\mu}+\widetilde{\pi}\circ\pi=\idntyof{F}. \] Hence, by Proposition~\ref{Prop: M-injective}, $F$ is $\mathcal{M}$-injective. \end{proof} \end{prop} \section{Schanuel's Lemma}\label{Section: Schanuel's Lemma} \noindent Fix an exact category $(\mathcal{A},\Ex)$. The following is the injective version of Schanuel's lemma for exact categories. \begin{prop}\label{Prop: Schanuel's Lemma} Suppose $\kercok{E}{I}{F}{{\footnotesize{\mu}}}{{\footnotesize{\pi}}}$ and $\kercok{E}{I'}{F'}{{\footnotesize{\mu'}}}{{\footnotesize{\pi'}}}$ are kernel-cokernel pairs in $\Ex$, and that $I, I'$ are $\mathcal{M}$-injective objects. Then $I\oplus{F'}\cong I'\oplus{F}$ in $\mathcal{A}$. \begin{proof} First, by the axioms of an exact structure, we can form the following push-out, \begin{equation}\label{Diagram: Square} \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \matrix(m)[column sep=4em, row sep=3em]{ \node(E) {$E$}; & \node(I) {$I$}; \\ \node(I') {$I'$}; & \node(C) {$C$}; \\}; \draw[>->] (E) to node {\footnotesize{$\mu$}} (I); \draw[>->] (E) to node [swap]{\footnotesize{$\mu'$}} (I'); \draw[>->] (I) to node {\footnotesize{$h$}} (C); \draw[>->] (I') to node [swap]{\footnotesize{$h'$}} (C); \end{tikzpicture} \end{equation} where every morphism is an admissible monomorphism; indeed, $h$ is the push-out of $\mu'$ along $\mu$, and $h'$ is the push-out of $\mu$ along $\mu'$.
Extending this diagram to include the given cokernels, and adding in some zero morphisms, we get the following commutative diagram: \vspace*{-16mm} \begin{equation}\label{Diagram: Square with 0s} \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \matrix(m)[column sep=4em, row sep=3em]{ \node(E) {$E$}; & \node(I) {$I$}; & \node(F) {$F$}; \\ \node(I') {$I'$}; & \node(C) {$C$}; &{} \\ \node(F') {$F'$}; & {} &{}\\ }; \draw[>->] (E) to node {\footnotesize{$\mu$}} (I); \draw[>->] (E) to node [swap]{\footnotesize{$\mu'$}} (I'); \draw[>->] (I) to node {\footnotesize{$h$}} (C); \draw[>->] (I') to node [swap]{\footnotesize{$h'$}} (C); \draw[->>] (I) to node {\footnotesize{$\pi$}} (F); \draw[->>] (I') to node [swap]{\footnotesize{$\pi'$}} (F'); \draw[->] (I'.south east) to [out=300, in=270, looseness=1.3] node [swap]{\footnotesize{$0$}} (F); \draw[->] (I.north) to [out=120, in=180, looseness=2] node [swap]{\footnotesize{$0$}} (F'); \end{tikzpicture} \end{equation} By the universal property of push-outs, there are a unique morphism $p\in\Mor{C}{F}$ such that $ph'=0$ and $ph=\pi$, and a unique morphism $p'\in\Mor{C}{F'}$ such that $p'h=0$ and $p'h'=\pi'$. Hence, we have the following commutative diagram: \begin{equation}\label{Diagram: Square with more squares} \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \matrix(m)[column sep=4em, row sep=3em]{ \node(E) {$E$}; & \node(I) {$I$}; & \node(F) {$F$}; \\ \node(I') {$I'$}; & \node(C) {$C$}; &\node(F2) {$F$}; \\ \node(F') {$F'$}; & \node(F'2) {$F'$}; &{}\\ }; \draw[>->] (E) to node {\footnotesize{$\mu$}} (I); \draw[>->] (E) to node [swap]{\footnotesize{$\mu'$}} (I'); \draw[>->] (I) to node {\footnotesize{$h$}} (C); \draw[>->] (I') to node [swap]{\footnotesize{$h'$}} (C); \draw[->>] (I) to node {\footnotesize{$\pi$}} (F); \draw[->>] (I') to node [swap]{\footnotesize{$\pi'$}} (F'); \draw[->] (C) to node [swap]{\footnotesize{$p$}} (F2); \draw[->] (C) to node {\footnotesize{$p'$}} (F'2); \draw[->] (F) to node {\footnotesize{$\idntyof{F}$}} (F2); \draw[->] (F') to node [swap]{\footnotesize{$\idntyof{F'}$}} (F'2); \end{tikzpicture} \end{equation} The result will follow if the middle row and middle column are both split kernel-cokernel pairs. As $h, h'\in\mathcal{M}$ and $I,I'$ are $\mathcal{M}$-injective, this will be the case if both $(h',p)$ and $(h,p')$ are kernel-cokernel pairs. We deal with $(h',p)$, the other pair is done in the exact same way. To show that $(h',p)$ is a kernel-cokernel pair, it is enough to verify that $p$ is a cokernel of~$h'$. Suppose there exist an object $G\in\mathcal{A}$ and a morphism $q\in\Mor{C}{G}$ such that $qh'=0$. We are done if we find a unique morphism $\psi\in\Mor{F}{G}$ such that the following diagram is commutative: \begin{equation}\label{Diagram: cokernel} \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \matrix(m)[column sep=4em, row sep=3em]{ \node(I') {$I'$}; & \node(C) {$C$}; &\node(F2) {$F$}; \\ {}; & \node(G) {$G$}; &{}\\ }; \draw[>->] (I') to node [swap]{\footnotesize{$h'$}} (C); \draw[->] (C) to node [swap]{\footnotesize{$p$}} (F2); \draw[->] (C) to node [swap]{\footnotesize{$q$}} (G); \draw[->, dashed] (F2) to node {\footnotesize{$\psi$}} (G); \draw[->, bend left] (I') to node {\footnotesize{$0$}} (F2); \draw[->, bend right] (I') to node [swap]{\footnotesize{$0$}} (G); \end{tikzpicture} \end{equation} We have $(qh)\mu=q(h\mu)=q(h'\mu')=0$ and, because $(\mu,\pi)$ is a kernel-cokernel pair, there exists a unique morphism $t\in\Mor{F}{G}$ such that $t\pi=qh$. 
Therefore, the following diagram is commutative: \begin{equation}\label{Diagram: Square+G} \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \matrix(m)[column sep=4em, row sep=3em]{ \node(E) {$E$}; & \node(I) {$I$}; & {} \\ \node(I') {$I'$}; & \node(C) {$C$}; & {}\\ {}; & { }; & \node(G) {$G$}; \\}; \draw[>->] (E) to node {\footnotesize{$\mu$}} (I); \draw[>->] (E) to node [swap]{\footnotesize{$\mu'$}} (I'); \draw[>->] (I) to node {\footnotesize{$h$}} (C); \draw[>->] (I') to node [swap]{\footnotesize{$h'$}} (C); \draw[->] (C) to node {\footnotesize{$q$}} (G); \draw[->, bend right] (I') to node [swap]{\footnotesize{$0$}} (G); \draw[->, bend left] (I) to node {\footnotesize{$t\pi$}} (G); \end{tikzpicture} \end{equation} By the universal property of push-outs, $q$ is the unique morphism $C\rightarrow{G}$ that makes Diagram~\eqref{Diagram: Square+G} commutative. However, $(tp)h=t(ph)=t\pi$ and $(tp)h'=t(ph')=0$. So, $q=tp$ and setting $\psi=t$ makes Diagram~\eqref{Diagram: cokernel} commutative. Finally, suppose there also exists $t'\in\Mor{F}{G}$ such that $q=t'p$. Recalling from Diagram~\eqref{Diagram: Square with more squares} that $\pi=ph$, we have \[ t'\pi= t'(ph)=(t'p)h=(tp)h=t(ph)=t\pi, \] and, because $\pi$ is an epimorphism, $t'=t$. Thus, uniqueness has been verified. \end{proof} \end{prop} \begin{cor}\label{Corollary: Schanuel's iso} Suppose there is a diagram of morphisms in an exact category $(\mathcal A,\Ex)$ of the form \[ \begin{tikzpicture}[auto] \matrix[column sep={1.1cm}, row sep={1.3cm,between origins}, matrix of math nodes](m) { E{\hphantom{'}} & I{\hphantom{'}} & F{\hphantom{'}} \\ E'& I' & F' \\ }; \draw[>->] (m-1-1) to node{ } (m-1-2); \draw[->>] (m-1-2) to node{ } (m-1-3); \draw[>->] (m-2-1) to node{ } (m-2-2); \draw[->>] (m-2-2) to node{ } (m-2-3); \draw[->] (m-1-1) to node[swap]{$\footnotesize{\cong}$} (m-2-1); \end{tikzpicture} \] such that $I$ and $I'$ are $\mathcal{M}$-injective, the horizontal lines are in $\Ex$ and the vertical arrow is an isomorphism. Then $I\oplus{F'}\cong I'\oplus{F}$ in $\mathcal{A}$. \end{cor} We extend Schanuel's lemma to injective resolutions in Proposition~\ref{Prop:Schanuel's Lemma resolution} below. Recall that a morphism is \textit{admissible\/} if it is the composition $\mu\circ\pi$ for some $\mu\in\mathcal M$ and $\pi\in\mathcal P$. Such factorisation is unique up to unique isomorphism (\cite[Lemma~8.4]{B08}). 
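Before turning to resolutions, it may help to see Proposition~\ref{Prop: Schanuel's Lemma} in a concrete case (the example is ours and not taken from the cited sources). In the category of abelian groups with its usual exact structure, the $\mathcal{M}$-injective objects are precisely the divisible groups. The pairs \[ \mathbb{Z}\rightarrowtail\mathbb{Q}\twoheadrightarrow\mathbb{Q}/\mathbb{Z} \qquad\text{and}\qquad \mathbb{Z}\rightarrowtail\mathbb{Q}\oplus\mathbb{Q}\twoheadrightarrow(\mathbb{Q}\oplus\mathbb{Q})/\Delta\mathbb{Z}, \] the second induced by the diagonal embedding $\Delta$, are two kernel-cokernel pairs with divisible middle terms. Since $(x,y)\mapsto(x-y,\,y+\mathbb{Z})$ defines an isomorphism $(\mathbb{Q}\oplus\mathbb{Q})/\Delta\mathbb{Z}\cong\mathbb{Q}\oplus(\mathbb{Q}/\mathbb{Z})$, both sides of the isomorphism predicted by Proposition~\ref{Prop: Schanuel's Lemma}, \[ \mathbb{Q}\oplus\bigl((\mathbb{Q}\oplus\mathbb{Q})/\Delta\mathbb{Z}\bigr)\;\cong\;(\mathbb{Q}\oplus\mathbb{Q})\oplus(\mathbb{Q}/\mathbb{Z}), \] reduce to $\mathbb{Q}^2\oplus(\mathbb{Q}/\mathbb{Z})$, as expected.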
\begin{definition}\label{Definition: Injective res} For an object $E\in\mathcal{A}$, an \textit{$\mathcal{M}$-injective resolution} of $E$ is a sequence of admissible morphisms of the form: \[\begin{tikzpicture}[auto]\matrix[column sep={0.5cm}, row sep={1.3cm,between origins}]{ \node(E) {$E$}; &{} & \node(0) {$I^0$};&{} & \node(dots) {$\cdots$};&{} & \node(1) {$I^{n-1}$};&{} & \node(m) {$I^n$};&{} & \node(F) {$\cdots$};\\ \node(not1){ };&\node(E2) {$G^0$};&{}&\node(K1) {$G^1$};&{}&\node(K1m) {$G^{n-1}$};&{}&\node(Km) {$G^n$}; &{} &\node(Km1) {$G^{n+1}$};\\ }; \draw[>->] (E) to node {} (0); \draw[->] (0) to node {} (dots); \draw[->] (dots)to node {} (1); \draw[->] (1) to node {} (m); \draw[->] (m) to node {} (F); \draw[->>] (E) to node [swap]{{\footnotesize{$\cong$}}} (E2); \draw[->>] (0) to node { } (K1);\draw[->>] (dots) to node { } (K1m); \draw[->>] (1) to node { } (Km); \draw[>->] (E2) to node { } (0);\draw[>->] (K1) to node { } (dots); \draw[>->] (K1m) to node { } (1); \draw[>->] (Km) to node { } (m); \draw[->>] (m) to node { } (Km1); \draw[>->] (Km1) to node { } (F); \end{tikzpicture} \] such that, for each $n\geq0,$ the object $I^n$ is $\mathcal{M}$-injective, and \[ \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \node(n) {$G^n$}; \node[right=11mm of n](I) {$I^n$}; \node[right=11mm of I](n+1) {$G^{n+1}$}; \draw[>->] (n) to node { } (I); \draw[->>] (I) to node { } (n+1); \end{tikzpicture} \] forms a kernel-cokernel pair in $\Ex$. \end{definition} If $\mathcal{A}$ has enough $\mathcal{M}$-injectives, we can build an injective resolution for every object in~$\mathcal{A}$. \begin{prop}\label{Prop:Schanuel's Lemma resolution} Suppose we have the following $\mathcal{M}$-injective resolutions of $E$, with the factorisation of each admissible morphism included: \[\begin{tikzpicture}[auto]\matrix[column sep={0.5cm}, row sep={1.3cm,between origins}]{ \node(E) {$E$}; &{} & \node(0) {$I^0$};&{} & \node(dots) {$\cdots$};&{} & \node(1) {$I^{n-1}$};&{} & \node(m) {$I^n$};&{} & \node(F) {$\cdots$};\\ \node(not1){ };&\node(E2) {$G^0$};&{}&\node(K1) {$G^1$};&{}&\node(K1m) {$G^{n-1}$};&{}&\node(Km) {$G^n$}; &{}&\node(Km1) {$G^{n+1}$};\\ }; \draw[->] (E) to node {} (0); \draw[->] (0) to node {} (dots); \draw[->] (dots)to node {} (1); \draw[->] (1) to node {} (m); \draw[->] (m) to node {} (F); \draw[->>] (E) to node [swap]{{\footnotesize{$\cong$}}} (E2); \draw[->>] (0) to node { } (K1);\draw[->>] (dots) to node { } (K1m); \draw[->>] (1) to node { } (Km); \draw[>->] (E2) to node { } (0);\draw[>->] (K1) to node { } (dots); \draw[>->] (K1m) to node { } (1); \draw[>->] (Km) to node { } (m); \draw[->>] (m) to node { } (Km1); \draw[>->] (Km1) to node { } (F); \end{tikzpicture} \] and \[\begin{tikzpicture}[auto]\matrix[column sep={0.5cm}, row sep={1.3cm,between origins}]{ \node(E) {$E$}; &{} & \node(0) {$J^0$};&{} & \node(dots) {$\cdots$};&{} & \node(1) {$J^{n-1}$};&{} & \node(m) {$J^n$};&{} & \node(F) {$\cdots$};\\ \node(not1){ };&\node(E2) {$H^0$};&{}&\node(K1) {$H^1$};&{}&\node(K1m) {$H^{n-1}$};&{}&\node(Km) {$H^n$}; &{}&\node(Km1) {$H^{n+1}$};\\ }; \draw[->] (E) to node {} (0); \draw[->] (0) to node {} (dots); \draw[->] (dots)to node {} (1); \draw[->] (1) to node {} (m); \draw[->] (m) to node {} (F); \draw[->>] (E) to node [swap]{{\footnotesize{$\cong$}}} (E2); \draw[->>] (0) to node { } (K1);\draw[->>] (dots) to node { } (K1m); \draw[->>] (1) to node { } (Km); \draw[>->] (E2) to node { } (0);\draw[>->] (K1) to node { } (dots); \draw[>->] (K1m) to node { } (1); \draw[>->] (Km) to node { } 
(m); \draw[->>] (m) to node { } (Km1); \draw[>->] (Km1) to node { } (F); \end{tikzpicture} \] Then, for each $n\geq{1}$, we have isomorphisms \[ I^0{\oplus}J^1{\oplus}I^2{\oplus}\cdots{\oplus}J^{2n-1}{\oplus}G^{2n} \,{}\cong{}\, J^0{\oplus}I^1{\oplus}J^2{\oplus}\cdots{\oplus}I^{2n-1}{\oplus}H^{2n} \] and \[ I^0{\oplus}J^1{\oplus}I^2{\oplus}\cdots{\oplus}J^{2n-1}{\oplus}I^{2n}{\oplus}H^{2n+1} \,{}\cong{}\, J^0{\oplus}I^1{\oplus}J^2{\oplus}\cdots{\oplus}I^{2n-1}{\oplus}J^{2n}{\oplus}G^{2n+1}. \] \begin{proof} We prove this by induction. For $n=1$, first note that Corollary~\ref{Corollary: Schanuel's iso}, applied to the diagram \[ \begin{tikzpicture}[auto] \matrix[column sep={1.1cm}, row sep={1.3cm,between origins}, matrix of math nodes](m) { G^0 & I^0 & G^1 \\ H^0 & J^0 & H^1 \\ }; \draw[>->] (m-1-1) to node{ } (m-1-2); \draw[->>] (m-1-2) to node{ } (m-1-3); \draw[>->] (m-2-1) to node{ } (m-2-2); \draw[->>] (m-2-2) to node{ } (m-2-3); \draw[->] (m-1-1) to node[swap]{{\footnotesize{$\cong$}}} (m-2-1); \end{tikzpicture} \] gives $I^0{\oplus}H^1 \cong J^0{\oplus}G^1$. By Proposition~\ref{Prop: Direct sum kercokpair}, there is a diagram of the form \[ \begin{tikzpicture}[auto] \matrix[column sep={1.1cm}, row sep={1.3cm,between origins}, matrix of math nodes](m) { I^0{\oplus}H^1 & I^0{\oplus}J^1\vphantom{H^2} & \vphantom{I^0{\oplus}J^1}H^2 \\ J^0{\oplus}G^1 & J^0{\oplus}I^1\vphantom{G^2} & \vphantom{J^0{\oplus}I^1}G^2 \\ }; \draw[>->] (m-1-1) to node{ } (m-1-2); \draw[->>] (m-1-2) to node{ } (m-1-3); \draw[>->] (m-2-1) to node{ } (m-2-2); \draw[->>] (m-2-2) to node{ } (m-2-3); \draw[->] (m-1-1) to node[swap]{{\footnotesize{$\cong$}}} (m-2-1); \end{tikzpicture} \] and Corollary~\ref{Corollary: Schanuel's iso} gives $I^0{\oplus}J^1{\oplus}G^2 \cong J^0{\oplus}I^1{\oplus}{H^2}$. To finish the proof for $n=1$, we again apply Proposition~\ref{Prop: Direct sum kercokpair} followed by Corollary~\ref{Corollary: Schanuel's iso}, to get a diagram \[ \begin{tikzpicture}[auto] \matrix[column sep={1.1cm}, row sep={1.3cm,between origins}, matrix of math nodes](m) { I^0{\oplus}J^1{\oplus}G^2\vphantom{G^3} & I^0{\oplus}J^1{\oplus}I^2\vphantom{G^3} & G^3 \\ J^0{\oplus}I^1{\oplus}{H^2}\vphantom{H^3} & J^0{\oplus}I^1{\oplus}{J^2}\vphantom{H^3} & H^3 \\ }; \draw[>->] (m-1-1) to node{ } (m-1-2); \draw[->>] (m-1-2) to node{ } (m-1-3); \draw[>->] (m-2-1) to node{ } (m-2-2); \draw[->>] (m-2-2) to node{ } (m-2-3); \draw[->] (m-1-1) to node[swap]{{\footnotesize{$\cong$}}} (m-2-1); \end{tikzpicture} \] and an isomorphism $I^0{\oplus}J^1{\oplus}I^2{\oplus}{H^3} \cong J^0{\oplus}I^1{\oplus}{J^2}{\oplus}{G^3}.$ \smallskip Assume the result holds for some $n\geq 1$.
By Proposition~\ref{Prop: Direct sum kercokpair}, there is a diagram of the form \[ \begin{tikzpicture}[auto] \matrix[column sep={1.1cm}, row sep={1.3cm,between origins}, matrix of math nodes](m) { I^0{\oplus}\cdots\oplus{I^{2n}}{\oplus}H^{2n+1} & I^0{\oplus}\cdots\oplus{I^{2n}}{\oplus}J^{2n+1} & H^{2(n+1)} \\ J^0{\oplus}\cdots\oplus{J^{2n}}{\oplus}G^{2n+1} & J^0{\oplus}\cdots\oplus{J^{2n}}{\oplus}I^{2n+1}\vphantom{G^{2(n+1)}} & G^{2(n+1)} \\ }; \draw[>->] (m-1-1) to node{ } (m-1-2); \draw[->>] (m-1-2) to node{ } (m-1-3); \draw[>->] (m-2-1) to node{ } (m-2-2); \draw[->>] (m-2-2) to node{ } (m-2-3); \draw[->] (m-1-1) to node[swap]{\footnotesize{$\cong$}} (m-2-1); \end{tikzpicture} \] and Corollary~\ref{Corollary: Schanuel's iso} gives \[ \begin{split} &I^0{\oplus}J^1{\oplus}I^2{\oplus}\cdots{\oplus}J^{2(n+1)-1}{\oplus}G^{2(n+1)}\\ \cong \,& J^0{\oplus}I^1{\oplus}J^2{\oplus}\cdots{\oplus}I^{2(n+1)-1}{\oplus}H^{2(n+1)}. \end{split} \] One final application of Proposition~\ref{Prop: Direct sum kercokpair} yields the following diagram: \[ \begin{tikzpicture}[auto] \matrix[column sep={1.1cm}, row sep={1.5cm,between origins}, matrix of math nodes](m) { {I^0{\oplus}J^1{\oplus}I^2{\oplus}\cdots{\oplus}J^{2n+1}{\oplus}G^{2(n+1)}} & J^0{\oplus}I^1{\oplus}J^2{\oplus}\cdots{\oplus}I^{2n+1}{\oplus}H^{2(n+1)} \\ I^0{\oplus}J^1{\oplus}I^2{\oplus}\cdots{\oplus}J^{2n+1}{\oplus}I^{2(n+1)} & J^0{\oplus}I^1{\oplus}J^2{\oplus}\cdots{\oplus}I^{2n+1}{\oplus}J^{2(n+1)} \\ G^{2(n+1)+1} & H^{2(n+1)+1} \\ }; \draw[>->] (m-1-1) to node{ } (m-2-1); \draw[->>] (m-2-1) to node{ } (m-3-1); \draw[>->] (m-1-2) to node{ } (m-2-2); \draw[->>] (m-2-2) to node{ } (m-3-2); \draw[->] (m-1-1) to node {{\footnotesize{$\cong$}}} (m-1-2); \end{tikzpicture} \] By Corollary~\ref{Corollary: Schanuel's iso}, \[ \begin{split} & I^0{\oplus}J^1{\oplus}I^2{\oplus}\cdots{\oplus}J^{2(n+1)-1}{\oplus}I^{2(n+1)}{\oplus}H^{2(n+1)+1}\\ \cong\,& J^0{\oplus}I^1{\oplus}J^2{\oplus}\cdots{\oplus}I^{2(n+1)-1}{\oplus}J^{2(n+1)}{\oplus}G^{2(n+1)+1} \end{split} \] as required. \end{proof} \end{prop} We can now prove the Injective Dimension Theorem. \goodbreak \begin{theorem}\label{Theorem: Injective dimension theorem} Let $\mathcal{M}$ be the class of admissible monomorphisms in an exact category $(\mathcal{A},\Ex)$. Suppose $\mathcal{A}$ has enough $\mathcal{M}$-injectives. The following are equivalent for $n\geq{1}$ and every $E\in\mathcal{A}$. 
\begin{enumerate}[label=\upshape(\roman*)] \item If there is an exact sequence of admissible morphisms \begin{equation}\label{Diagram: injective dimension theorem diagram 1} \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \matrix(m)[matrix of math nodes, column sep=3em]{ {E}\vphantom{I^0} & {I^0} & {\cdots}\vphantom{I^0} & {I^{n-1}} & {F}\vphantom{I^0} \\}; \draw[>->] (m-1-1) to node { } (m-1-2); \draw[->] (m-1-2) to node { } (m-1-3); \draw[->] (m-1-3) to node { } (m-1-4); \draw[->>] (m-1-4) to node { } (m-1-5); \end{tikzpicture} \end{equation} with each $I^{m}$, $0\leq m\leq n-1$, $\mathcal{M}$-injective, then $F$ must be $\mathcal{M}$-injective; \item There is an exact sequence of admissible morphisms \begin{equation}\label{Diagram: injective dimension theorem diagram 2}\begin{tikzpicture}[auto, baseline=(current bounding box.center)] \matrix(m)[matrix of math nodes, column sep=3em]{ {E}\vphantom{I^0} & {I^0} & {\cdots}\vphantom{I^0} & {I^{n-1}} & {I^n}\vphantom{I^0} \\}; \draw[>->] (m-1-1) to node { } (m-1-2); \draw[->] (m-1-2) to node { } (m-1-3); \draw[->] (m-1-3) to node { } (m-1-4); \draw[->>] (m-1-4) to node { } (m-1-5); \end{tikzpicture} \end{equation} with each $I^{m}$, $0\leq m\leq n$, $\mathcal{M}$-injective. \end{enumerate} \begin{proof} Let $E\in\mathcal{A}$. First we show (i) implies (ii). As $\mathcal{A}$ has enough $\mathcal{M}$-injectives, we can build an $\mathcal{M}$-injective resolution of $E$: \[ \begin{tikzpicture}[auto]\matrix[column sep={0.5cm}, row sep={1.3cm,between origins}]{ \node(E) {$E$}; &{} & \node(0) {$J^0$};&{} & \node(dots) {$\cdots$};&{} & \node(1) {$J^{n-1}$};&{} & \node(m) {$J^n$};&{} & \node(F) {$\cdots$};\\ \node(not1){ };&\node(E2) {$G^0$};&{}&\node(K1) {$G^1$};&{}&\node(K1m) {$G^{n-1}$};&{}&\node(Km) {$G^n$}; &{}&{}\\ }; \draw[->] (E) to node {} (0); \draw[->] (0) to node {} (dots); \draw[->] (dots)to node {} (1); \draw[->] (1) to node {} (m); \draw[->] (m) to node {} (F); \draw[->>] (E) to node [swap]{{\footnotesize{$\cong$}}} (E2); \draw[->>] (0) to node { } (K1);\draw[->>] (dots) to node { } (K1m); \draw[->>] (1) to node { } (Km); \draw[>->] (E2) to node { } (0);\draw[>->] (K1) to node { } (dots); \draw[>->] (K1m) to node { } (1); \draw[>->] (Km) to node { } (m); \end{tikzpicture} \] Relabelling $J^k$ as $I^k$ for all $0\leq k \leq n-1$ and $G^n$ as $I^n$ gives an exact sequence as in Diagram~\eqref{Diagram: injective dimension theorem diagram 2}, and $I^n$ must be $\mathcal{M}$-injective by condition~(i). Now suppose that condition~(ii) holds.
There must exist an $\mathcal{M}$-injective resolution of $E$ of the form \[ \begin{tikzpicture}[auto]\matrix[column sep={0.5cm}, row sep={1.3cm,between origins}]{ \node(E) {$E$}; &{} & \node(0) {$J^0$};&{} & \node(dots) {$\cdots$};&{} & \node(1) {$J^{n-1}$};&{} & \node(m) {$J^n$};&{} & \node(F) {$\cdots$};\\ \node(not1){ };&\node(E2) {$H^0$};&{}&\node(K1) {$H^1$};&{}&\node(K1m) {$H^{n-1}$};&{}&\node(Km) {$J^n$}; &{}&{}\\ }; \draw[->] (E) to node {} (0); \draw[->] (0) to node {} (dots); \draw[->] (dots)to node {} (1); \draw[->] (1) to node {} (m); \draw[->] (m) to node {} (F); \draw[->>] (E) to node [swap]{{\footnotesize{$\cong$}}} (E2); \draw[->>] (0) to node { } (K1);\draw[->>] (dots) to node { } (K1m); \draw[->>] (1) to node { } (Km); \draw[>->] (E2) to node { } (0);\draw[>->] (K1) to node { } (dots); \draw[>->] (K1m) to node { } (1); \draw[>->] (Km) to node [swap]{{\footnotesize{$\cong$}}} (m); \end{tikzpicture} \] and for any exact sequence as in Diagram~\eqref{Diagram: injective dimension theorem diagram 1}, with each $I^m$ $\mathcal{M}$-injective, there exists an $\mathcal{M}$-injective resolution \[ \begin{tikzpicture}[auto]\matrix[column sep={0.5cm}, row sep={1.3cm,between origins}]{ \node(E) {$E$}; &{} & \node(0) {$I^0$};&{} & \node(dots) {$\cdots$};&{} & \node(1) {$I^{n-1}$};&{} & \node(m) {$I^n$};&{} & \node(F) {$\cdots$};\\ \node(not1){ };&\node(E2) {$G^0$};&{}&\node(K1) {$G^1$};&{}&\node(K1m) {$G^{n-1}$};&{}&\node(Km) {$G^n$}; &{}&{}\\ }; \draw[->] (E) to node {} (0); \draw[->] (0) to node {} (dots); \draw[->] (dots)to node {} (1); \draw[->] (1) to node {} (m); \draw[->] (m) to node {} (F); \draw[->>] (E) to node [swap]{{\footnotesize{$\cong$}}} (E2); \draw[->>] (0) to node { } (K1);\draw[->>] (dots) to node { } (K1m); \draw[->>] (1) to node { } (Km); \draw[>->] (E2) to node { } (0);\draw[>->] (K1) to node { } (dots); \draw[>->] (K1m) to node { } (1); \draw[>->] (Km) to node [swap]{ } (m); \end{tikzpicture} \] with $G^n=F$. By Proposition~\ref{Prop:Schanuel's Lemma resolution}, there exists a kernel-cokernel pair \[ \begin{tikzpicture}[auto, baseline=(current bounding box.center)] \node(F) {$F$}; \node[right=11mm of F](I) {$I$}; \node[right=11mm of I](G) {$G$}; \draw[>->] (F) to node {\footnotesize{$\mu$}} (I); \draw[->>] (I) to node {\footnotesize{$\pi$}} (G); \end{tikzpicture} \] and a morphism $\widetilde{\mu}\in\Morunder{I}{F}{\mathcal{A}}$ such that $\widetilde{\mu}\circ\mu=\idntyof{F}$, and $I$ is a finite product of $\mathcal{M}$-injective objects. Then by Proposition~\ref{Prop: Direct sum kercokpair}, $I$ is $\mathcal{M}$-injective, and $\widetilde{\mu}$ is a left inverse for $\mu$; hence, by Proposition~\ref{Prop: M-injective}, $F$ is $\mathcal{M}$-injective. \end{proof} \end{theorem} \begin{definition}\label{Definition: Injective Dimension} Let $\mathcal{M}$ be the class of admissible monomorphisms in an exact category $(\mathcal{A},\Ex)$. We say $E\in\mathcal{A}$ has \textit{finite $\mathcal{M}$-injective dimension} if there exists an exact sequence of admissible morphisms as in Diagram~\eqref{Diagram: injective dimension theorem diagram 2} with all $I^m$ $\mathcal{M}$-injective. If $E$ is of finite $\mathcal{M}$-injective dimension, we write $\Midim{E}=0$ if $E$ is $\mathcal{M}$-injective, and $\Midim{E}=n$ if $E$ is not $\mathcal{M}$-injective and $n$ is the smallest natural number such that there exists an exact sequence of admissible morphisms as in Diagram~\eqref{Diagram: injective dimension theorem diagram 2} where every $I^m$ is $\mathcal{M}$-injective.
If $E$ is not of finite $\mathcal{M}$-injective dimension, we write $\Midim{E}=\infty$. The \textit{global dimension} of the exact category $(\mathcal{A},\Ex)$ is \[ \sup\setst{\Midim{E}}{E\in\mathcal{A}}\in\NN_0\cup\{\infty\}. \] \end{definition} \begin{remark} The $\mathcal{M}$-injective dimension of an object $E$ in an exact category $(\mathcal{A},\Ex)$ can be obtained by examining any of its $\mathcal{M}$-injective resolutions. Indeed, suppose the following is an $\mathcal{M}$-injective resolution of $E$ (with the factorisation of each admissible morphism included): \[\begin{tikzpicture}[auto]\matrix[column sep={0.5cm}, row sep={1.3cm,between origins}]{ \node(E) {$E$}; &{} & \node(0) {$J^0$};&{} & \node(dots) {$\cdots$};&{} & \node(1) {$J^{n-1}$};&{} & \node(m) {$J^n$};&{} & \node(F) {$\cdots$};\\ \node(not1){ };&\node(E2) {$G^0$};&{}&\node(G1) {$G^1$};&{}&\node(G1m) {$G^{n-1}$};&{}&\node(Gm) {$G^n$}; &{}&{}\\ }; \draw[->] (E) to node {} (0); \draw[->] (0) to node {} (dots); \draw[->] (dots)to node {} (1); \draw[->] (1) to node {} (m); \draw[->] (m) to node {} (F); \draw[->>] (E) to node [swap]{{\footnotesize{$\cong$}}} (E2); \draw[->>] (0) to node { } (G1);\draw[->>] (dots) to node { } (G1m); \draw[->>] (1) to node { } (Gm); \draw[>->] (E2) to node { } (0);\draw[>->] (G1) to node { } (dots); \draw[>->] (G1m) to node { } (1); \draw[>->] (Gm) to node { } (m); \end{tikzpicture} \] Then, by Theorem~\ref{Theorem: Injective dimension theorem}, $\Midim{E}\leq n$ if and only if $G^n$ is $\mathcal{M}$-injective. \end{remark} The original version of Schanuel's lemma is formulated for projective resolutions, see, e.g., \cite[Lemma~5.1]{LamLectures} or \cite[Theorem~3.41]{MR0409590}. An analogous version using the epimorphisms in the class $\mathcal P$ can be obtained in any exact category with exact structure $(\mathcal M,\mathcal P)$. \medskip
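In outline, and stated here without proof (the terminology dualises Definition~\ref{Definition: M-injective}): call an object $P$ of $(\mathcal{A},\Ex)$ \textit{$\mathcal{P}$-projective} if morphisms out of $P$ lift along admissible epimorphisms. If $\kercok{F}{P}{E}{{\footnotesize{\mu}}}{{\footnotesize{\pi}}}$ and $\kercok{F'}{P'}{E}{{\footnotesize{\mu'}}}{{\footnotesize{\pi'}}}$ are kernel-cokernel pairs in $\Ex$ with $P$ and $P'$ $\mathcal{P}$-projective, then $F\oplus{P'}\cong F'\oplus{P}$ in $\mathcal{A}$, in accordance with the classical statement \cite[Lemma~5.1]{LamLectures}; the proof is obtained by reversing all arrows in the arguments of Section~\ref{Section: Schanuel's Lemma}.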
1,477,468,750,203
arxiv
\section{Introduction} \begin{figure*}[!t] \centering \includegraphics[width=18.3cm]{conceptart.eps} \caption{Schematic of phase-sensitive detection with (a) a conventional balanced homodyne detector and (b) an optical parametric amplifier (OPA). In both cases, the measured light is squeezed light generated in an OPA. Squeezed light has quantum entanglement between its sidebands, and phase-sensitive detection provides a way to measure the strength of this correlation, namely the squeezing level. The squeezed quadrature amplitude is especially vulnerable to optical loss. Parametric amplification along the measurement axis anti-squeezes the measured quadrature and improves the tolerance to optical loss. $\Omega$ is the center frequency of the OPAs, and $\omega$ is the sideband frequency. BHD, balanced homodyne detector.} \label{concept} \end{figure*} All-optical quantum computation is an ultimate goal of quantum information processing research in the pursuit of computational speed. Generation, detection, and feed-forward control of a multipartite entangled state such as a two-dimensional cluster state \cite{twoDcluster2,twoDcluster} are the essential processes of measurement-based quantum computation \cite{OneWay,MillionModes:Yoshikawa}. The speed of quantum computation is determined by the bandwidths of the generated entangled state, the detection, and the feed-forward control. As for the entangled state, its bandwidth is inherited from the squeezed light used for its generation. Squeezed light has non-classical correlation between its sidebands, and its bandwidth has recently reached 2 THz \cite{PPLNOPA6dB}. However, the bandwidth of detection and feed-forward control would be limited to several GHz as long as electrical circuits are used. The all-optical scheme aims to break this limitation by replacing the electrical circuits with nonlinear optical elements. In 1999, an all-optical implementation of quantum teleportation, which is the simplest case of measurement-based quantum computation, was theoretically proposed \cite{alloptical}. There, optical parametric amplifiers (OPAs) made of nonlinear media play the role of converting the quantum field of light into a loss-tolerant ``classical'' field, which can be directly used for feed-forward control. Meanwhile, in the conventional non-all-optical scheme, the quantum field is converted into an electrical signal by balanced homodyne detection, and the signal is then used for modulation in the feed-forward control of another beam \cite{FurusawaTeleportation}. Balanced homodyne detection is phase-sensitive detection in which signal light and a local oscillator are mixed by an optical beamsplitter and converted into a low-frequency electrical signal by balanced photodiodes. For the measurement of the squeezing level, the electrical signal is conventionally detected by an electrical spectrum analyzer as in Fig. \ref{concept}(a), where one can obtain information from both sidebands simultaneously and hence their non-classical correlation, namely the squeezing level \cite{Yuen1,Yuen2,EPRinSqueezing}. Here, the quadrature amplitude is vulnerable to optical loss, and photodiodes used in homodyne detection are required to have nearly 100\% quantum efficiency. Photodiodes with high quantum efficiency and low electrostatic capacitance have been developed, and homodyne detection with gigahertz-order bandwidth has been achieved \cite{GHzSq:Schnabel,FastHomodyne1,FastHomodyne2}.
However, broadening the bandwidth further might be accompanied by a decrease in quantum efficiency \cite{serikawatheory,serikawadetector}. Since the bandwidth of an OPA can reach several terahertz \cite{BroadbandOPA,FiberBroadbandOPA,WaveguideBroadbandOPA,OPAFurusawa}, the utilization of OPAs for detection is a promising route to speeding up quantum information processing. Theoretical proposals have been made to utilize parametric amplification for the measurement of light \cite{YamamotoQND,NoiseFactorOfAmplifiers,NoiseFactorOfAmplifiers2,SU(2)andSU(11)}. In particular, the measurement of light with non-classical correlations is important for quantum computation. In 2020, a measurement method for Einstein-Podolsky-Rosen-type entanglement using an OPA and a low-quantum-efficiency receiver was proposed \cite{EntangledetectionbyOPA}. Parametrically amplified light is tolerant to optical loss \cite{LossToleranceOfSU11}, so it can be converted into electrical signals with a broadband low-quantum-efficiency detector \cite{EntangledetectionbyOPA} or directly used for all-optical feed-forward control \cite{alloptical,allopticalexperiment}. For the implementation of this technique, an OPA should have large gain for continuous waves, which is challenging because it requires high durability against an intense pump beam \cite{Kashiwazaki}. In 2018, the measurement of the squeezing level with parametric amplification was achieved using a $\chi^{(3)}$ photonic crystal fiber pumped by a pulsed laser \cite{LiftingBandwidth}. Here, the pulsed pump is used to attain a momentarily large parametric gain at its peak. However, the pulsed pump limits the operation time of quantum computing, which could be a problem. To measure time-domain-multiplexed quantum states, continuous operation is an indispensable feature because it can always accept input light, while pulsed operation accepts input light only at the moment of pumping. Moreover, the bandwidth of the measured squeezed light was limited by chromatic dispersion in that experiment, which might be an obstacle to realizing ultra-fast quantum information processing. In addition, $\chi^{(3)}$ nonlinearity requires intense pump light at the center wavelength of the signal, which could cause unwanted nonlinear effects, such as self-phase modulation or cross-phase modulation, and difficulties in separating the pump light. In contrast, in $\chi^{(2)}$ parametric amplification, the wavelength of the pump light is half that of the signal, and the two can be easily separated by dichroic mirrors \cite{4dBFiberedOPA}. In 2019, a highly durable $\chi^{(2)}$ OPA made of a periodically poled LiNbO${}_{3}$ (PPLN) waveguide was developed, and a gain of up to 30 dB for continuous-wave input light was achieved with over one watt of continuous-wave pump light \cite{Kashiwazaki}. In this letter, we demonstrate ultra-broadband detection of continuous-wave squeezed light with a single-mode PPLN waveguide \cite{PPLNOPA6dB}. The waveguide OPA is connected by single-mode polarization-maintaining (PM) fibers to another waveguide OPA used for squeezed-light generation. The fiber-coupled input would be compatible with integrated quantum circuits in the future. Squeezing of 3 dB is observed over a 3-THz range of sideband frequencies with an optical spectrum analyzer. Here, phase locking of the squeezed light is performed with a variable bandpass filter, which allows the phase to be locked at an arbitrary point.
Furthermore, we demonstrate dispersion compensation of the broadband squeezed light, and the phase of the squeezed light is maintained over 1 THz. Our work would contribute to the realization of all-optical quantum computing with an over-THz clock frequency. \section{All-optical phase-sensitive detection of a squeezed vacuum} \begin{figure*}[!t] \centering\includegraphics[width=18.3cm]{fig_setup.eps} \caption{A schematic of the experimental setup. The two OPAs are pumped by using separate frequency doublers and fiber amplifiers. The output of the second OPA is split into two beams. One is used for phase locking, and the other is measured by an optical spectrum analyzer. BPF, band-pass filter; PM, phase modulator; PD, photodetector; VBPF, variable band-pass filter; FPGA, field-programmable gate array.} \label{setup} \end{figure*} Quantum entanglement between the sidebands is the essential feature of squeezed light. Phase-sensitive detection measures the strength of this quantum correlation, namely the squeezing level \cite{Yuen1,Yuen2,EPRinSqueezing}. Conventionally, homodyne detection followed by an electrical intensity detector is used for measuring the squeezing level, as in Fig. \ref{concept}(a). In this scheme, the correlated sidebands of the squeezed light at the frequencies $\Omega\pm\omega$ are down-converted into an electrical signal at $\omega$. Here, we consider generating a squeezed vacuum with an OPA and then measuring it with another OPA and an optical intensity detector, as in Fig. \ref{concept}(b). In this case, the optical intensity at $\Omega\pm\omega$ is directly measured without any down-conversion. Let the gain of the second OPA be $G$. The photon number operator $\hat{n}$ of the light at the output of the second OPA at a frequency $\Omega + \omega$ is written as \cite{LiftingBandwidth}: \begin{eqnarray} \hat{n}(\Omega+\omega) &=& \frac{G}{4}\hat{x}^{\dag}(\omega)\hat{x}(\omega) + \frac{1}{4G}\hat{p}^{\dag}(\omega)\hat{p}(\omega)-\frac{1}{2}, \\ \hat{x}(\omega) & \equiv & \hat{a}(\Omega+\omega)+\hat{a}^{\dag}(\Omega-\omega), \\ \hat{p}(\omega) & \equiv & i\hat{a}^{\dag}(\Omega+\omega)-i\hat{a}(\Omega-\omega), \end{eqnarray} where $\Omega$ is the center frequency of the OPAs, and the annihilation operators $\hat{a}(\Omega\pm\omega)$ correspond to the input of the second OPA, that is, the non-classically correlated sidebands of the squeezed vacuum generated in the first OPA. $\hat{x}^{\dag}(\omega)\hat{x}(\omega)$ and $\hat{p}^{\dag}(\omega)\hat{p}(\omega)$ represent the variances of the quadrature amplitudes of the squeezed vacuum in a rotating frame at the center frequency $\Omega$. The variance varies between the squeezing level $R_{-}$ and the anti-squeezing level $R_{+}$ depending on the phase of the first OPA, $\theta$, as follows \cite{PhaseFluctuationEffect}: \begin{eqnarray} \braket{\hat{x}^{\dag}(\omega)\hat{x}(\omega)} & = & R_{-}\cos^2\theta+R_{+}\sin^2\theta,\label{phasedepend1}\\ \braket{\hat{p}^{\dag}(\omega)\hat{p}(\omega)} & = & R_{-}\sin^2\theta+R_{+}\cos^2\theta.\label{phasedepend2} \end{eqnarray} Here, we consider measuring the optical intensity at the frequency $\Omega+\omega$ at the output of the cascaded OPAs by using an optical spectrum analyzer or a power meter with a color filter.
The maximum and minimum values of the intensity, $I_{max}$ and $I_{min}$, are obtained at $\theta=\pi/2$ and $\theta=\pi$, respectively: \begin{eqnarray} I_{max} & = & \frac{\hbar(\Omega+\omega)}{4}\left\{GR_{+}+\frac{R_{-}}{G}\right\}, \\ I_{min} & = & \frac{\hbar(\Omega+\omega)}{4}\left\{GR_{-}+\frac{R_{+}}{G}\right\}. \end{eqnarray} In particular, when $R_{-}=R_{+}=1$, $I_{max}$ and $I_{min}$ are identical, and their value corresponds to the intensity of the amplified vacuum \begin{equation} I_{0} = \frac{\hbar(\Omega+\omega)}{4}\left(G+\frac{1}{G}\right). \end{equation} Normalized by the intensity of the amplified vacuum $I_0$, the measured squeezing level $R_{-}'$ is calculated as follows: \begin{eqnarray} {R_{-}'} & \equiv & \frac{I_{min}}{I_{0}} = \frac{1}{1+G^2}R_{+}+\frac{G^2}{1+G^2}R_{-}.\label{measuredSQ} \end{eqnarray} Here, ${R_{-}'}$ decreases monotonically and approaches $R_{-}$ as $G$ increases. Comparison with Eqs. \ref{phasedepend1} and \ref{phasedepend2} indicates that the effect of the finite gain is equivalent to a phase deviation of \begin{equation} \theta_{eff} = \arcsin \sqrt{\frac{1}{1+G^2}}. \label{effectivetheta} \end{equation} In conventional homodyne measurements, typical values of the phase deviation range from $0.8^{\circ}$ \cite{OPO:Serikawa} to $4.3^{\circ}$ \cite{AokiPhase}. The larger the anti-squeezing level, the smaller the acceptable phase deviation. To estimate how much gain is needed for all-optical phase-sensitive detection, we consider two example cases. The first case is -3.0 dB squeezing with 3.0 dB anti-squeezing, where 13 dB ($G=20$) of gain is enough to measure the squeezing level correctly within the significant digits. The second case is -3.0 dB squeezing with 15.0 dB anti-squeezing, where 19 dB ($G=80$) of gain is required. In addition, when the gain is finite, the actual squeezing level is obtained from the following equations: \begin{eqnarray} {R_{-}} & = & \frac{G^2}{G^2-1}{R_{-}'}-\frac{1}{G^2-1}{R_{+}'},\label{equation1} \\ {R_{+}} & = & -\frac{1}{G^2-1}{R_{-}'}+\frac{G^2}{G^2-1}{R_{+}'},\label{equation2} \end{eqnarray} where ${R_{+}'}=I_{max}/I_0$. This correction formula might be useful in the measurement of the squeezing level with a low-gain bulk-crystal OPA such as that in \cite{LossToleranceOfSU11}.
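The gain requirements quoted above are easy to verify numerically. The following minimal Python sketch evaluates Eqs. \ref{measuredSQ} and \ref{effectivetheta} for the second example case; the dB conventions follow this section (gain in dB $=10\log_{10}G$, levels in dB relative to the vacuum), and everything else is illustrative:
\begin{verbatim}
import numpy as np

def measured_levels(g, r_minus_db, r_plus_db):
    """Apparent (anti-)squeezing levels in dB after an OPA with gain g."""
    r_minus = 10 ** (r_minus_db / 10)
    r_plus = 10 ** (r_plus_db / 10)
    r_m_meas = (r_plus + g**2 * r_minus) / (1 + g**2)  # Eq. (measuredSQ)
    r_p_meas = (r_minus + g**2 * r_plus) / (1 + g**2)
    return 10 * np.log10(r_m_meas), 10 * np.log10(r_p_meas)

def theta_eff_deg(g):
    """Effective phase deviation equivalent to the finite gain g."""
    return np.degrees(np.arcsin(np.sqrt(1.0 / (1.0 + g**2))))

# -3.0 dB squeezing with 15.0 dB anti-squeezing, for three gains.
for g in (20, 80, 200):
    sq_db, _ = measured_levels(g, -3.0, 15.0)
    print(f"G = {g:3d} ({10 * np.log10(g):4.1f} dB): "
          f"measured squeezing {sq_db:+.2f} dB, "
          f"theta_eff = {theta_eff_deg(g):.2f} deg")
\end{verbatim}
Running it shows the measured level converging from $-2.4$ dB at $G=20$ toward $-3.0$ dB for $G\gtrsim80$, in line with the gain requirements stated above.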
\section{Experimental Setup} Figure \ref{setup} shows a schematic of our experimental setup. The source of continuous-wave laser light at 1545.3 nm (194 THz) is a narrow-linewidth, low-noise seed laser (NKT Photonics, BASIK module). The output of the seed laser is split by a 3-dB fiber coupler. One of the outputs of the coupler is amplified by an Erbium-doped fiber amplifier (Keopsys, CEFA-C-PB-HP), and the amplified beam passes through an optical band-pass filter (Alnair Labs, TFF-15-1-PM-L-100-FS) to reduce noise from the fiber amplifier. It then pumps a frequency doubler (NTT Electronics, WH-0772-000-F-B-C). The frequency-doubled beam pumps a pigtailed PPLN waveguide OPA module, OPA1, which is assembled similarly to that in \cite{4dBFiberedOPA}, and generates a squeezed vacuum. The intensity of the frequency-doubled beam is monitored at the transmittance port of OPA1 by a photodetector PD1 (Newport, 818-SL). The other output of the 3-dB coupler is phase-controlled by a phase modulator (Covega, Mach-10) and then amplified by another Erbium-doped fiber amplifier (Keopsys, CEFA-C-PB-HP). This amplified beam passes through an optical band-pass filter (Alnair Labs, TFF-15-1-PM-L-100-SS-SA) and then pumps another frequency doubler (NTT Electronics, WH-0772-000-F-B-C). The frequency-doubled beam pumps another pigtailed OPA module, OPA2. The intensity of the frequency-doubled beam is monitored at the transmittance port of OPA2 by a photodetector PD2 (Newport, 818-SL). OPA1 and OPA2 are connected with their 1-m polarization-maintaining optical fiber pigtails (Fujikura, SM15-PS-U25D), and a quadrature amplitude of the squeezed vacuum from OPA1 is amplified in OPA2. We call the output of OPA2 at 1.5 $\mu$m the ``amplified squeezed vacuum.'' The output is split by a 10-dB coupler, and its main output is injected into an optical spectrum analyzer (Advantest, Q8384). The other output passes through a variable band-pass filter (santec, OTF-350), and its power and wavelength are measured by a photodetector PD3 (Thorlabs, PDA10CS2) and another optical spectrum analyzer (Anritsu, MS9710C) used as a wavelength monitor. The signal from PD3 is processed by a field-programmable gate array (FPGA) board (Red Pitaya, STEMLAB 125-14). In the FPGA board, the difference between the PD3 signal and a target value is computed and then time-integrated. The time-integrated signal from the FPGA board drives the phase modulator in the optical path. This feedback loop drives the value detected by PD3 toward the target value. The integral control locks the signal not at a peak but on a slope. Since the phase differs with wavelength due to chromatic dispersion, the phase can be locked at an arbitrary point by tuning the variable optical bandpass filter. Phase control of the parametric amplification of squeezed light would be not only indispensable for all-optical quantum computing, but also useful for quantum metrology. For instance, the technique could be applied to a nonlinear interferometer \cite{SU(2)andSU(11),NonlinearInterferometers}, whose phase is currently not locked but scanned, as in \cite{Gaetano}.
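The locking logic described above is simple enough to capture in a few lines. The Python sketch below mimics it with a toy fringe model; the fringe shape, integrator gain, noise level, and drift are illustrative stand-ins, not the actual FPGA parameters:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def detected_power(phase):
    """Toy fringe: power at PD3 versus optical phase, with readout noise."""
    return 0.5 * (1.0 + np.sin(phase)) + 0.005 * rng.normal()

target = 0.5       # lock point on the slope of the fringe (assumption)
gain = 0.2         # integrator gain per time step (assumption)
feedback = 0.0     # phase applied via the phase modulator
integrator = 0.0

for step in range(2000):
    drift = 1e-3 * step                 # slow phase drift to be cancelled
    error = target - detected_power(drift + feedback)
    integrator += gain * error          # time-integration in the FPGA
    feedback = integrator               # integrator drives the modulator

print(f"residual error: {target - detected_power(1e-3 * 1999 + feedback):+.4f}")
\end{verbatim}
Because the error signal is integrated rather than used directly, the loop settles where the error crosses zero on the slope of the fringe, which is exactly the locking behavior described above.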
\section{Result and Discussion} \begin{figure}[!t] \centering\includegraphics[width=8.6cm]{fig_blue.eps} \caption{The result of all-optical phase-sensitive detection obtained with an optical spectrum analyzer in (a) a zero-span mode at 1545.0 nm and (b) over a wavelength range from 1500 nm to 1590 nm. The resolution bandwidth is set to 0.2 nm, and the smoothing-window width is (a) 2 ms and (b) 0.1 nm. The green curve is the optical spectrum of an amplified vacuum. The blue and red curves are those of the phase-locked amplified squeezed vacuum. The orange curve is the phase-scanned signal of the amplified squeezed vacuum at 1545.0 nm. The intensity of the pump beam for OPA1 measured at PD1 is 100 mW, and the gain of the second OPA is 23 dB.} \label{blue} \end{figure} \begin{figure}[!t] \centering\includegraphics[width=8cm]{fig_param_kai.eps} \caption{Optical spectra of phase-locked amplified squeezed vacua with various locking phases and pump powers. The resolution bandwidth is set to 0.2 nm, and the smoothing-window width is 0.1 nm. The green curve is the optical spectrum of an amplified vacuum. The other colored curves are those of the amplified squeezed vacuum at different phase-locking points. The intensity of the pump beam for OPA1 measured at PD1 is (a) 200 mW, (b) 100 mW, and (c) 50 mW.} \label{param} \end{figure} \subsection{Phase-locked all-optical phase-sensitive detection} Figure \ref{blue}(a) shows the optical signal of all-optical phase-sensitive detection at 1545.0 nm. The pump power for OPA1 measured at PD1 is 100 mW, and that for OPA2 measured at PD2 is 300 mW. The green and orange curves are the intensities of the amplified vacuum and of the squeezed vacuum with scanned phase, respectively. The blue and red curves are those with locked phases. Figure \ref{blue}(b) shows the spectra of the phase-locked amplified squeezed vacuum together with the spectrum of the amplified vacuum. Although the phase is locked for each wavelength, the phase of the amplified squeezed vacuum depends on the wavelength due to the chromatic dispersion of the 2-m PM fiber between the OPAs. Here, the wavelengths indicated by the dashed lines are selected by the variable bandpass filter and used as the error signal for phase locking. The intersection of the solid curves and dashed lines, indicated by a dashed circle, lies at the middle of the ripple, which corresponds to the target value set in the FPGA board. The parametric gain of OPA2 with the 300-mW pump is measured in advance to be 23 dB, namely a factor of 200. Substituting this gain into Eq. \ref{effectivetheta}, the effective phase deviation is $0.3^{\circ}$, which is negligible within the significant digits of the squeezing and anti-squeezing levels; a gain of 23 dB is thus enough for all-optical phase-sensitive detection of the squeezed light. Figure \ref{param} shows the spectra for various pump powers and phase-locking points. The phase-locking point is changed by manually rotating a micrometer of the variable bandpass filter. The squeezing and anti-squeezing levels depend on the pump power for OPA1, and those around the center frequency are measured to be -2.2 dB and 14.9 dB at 200-mW pump, -3.2 dB and 9.9 dB at 100-mW pump, and -2.7 dB and 5.6 dB at 50-mW pump, respectively. Thanks to the flat amplification characteristic of the OPA from 1520 nm (197 THz) to 1570 nm (191 THz), squeezing at sideband frequencies up to 3 THz is well observed at 100-mW pump. The slight decrease in the squeezing level at 200-mW pump is attributed to contamination by the large anti-squeezing component, caused by residual phase noise from the light source and imperfect wavelength filtering in the optical spectrum analyzer. \begin{figure}[!t] \centering\includegraphics[width=7cm]{fig_scanfit.eps} \caption{Pump power dependency of the squeezing and anti-squeezing levels obtained by (a) the all-optical phase-sensitive detection and (b) a conventional balanced homodyne measurement. The pump power is measured at PD1. The results of the two measurement methods are in good agreement.} \label{fitting} \end{figure} \subsection{Comparison with balanced homodyne measurement} To compare with the all-optical phase-sensitive detection, we also measure the squeezed vacuum with a homemade balanced homodyne detector with InGaAs photodiodes (Laser Components, IGHQEX0100-1550-10-1.0-SPAR-TH-40), which was also used in \cite{4dBFiberedOPA}. To match the conditions, the phase is not locked but scanned, as in Fig. \ref{blue}(a). In the balanced homodyne measurement, the squeezed vacuum is interfered with a 2.5-mW local oscillator beam in a fiber beamsplitter (Thorlabs, PN1550R5F2) spliced with AR-coated fiber (P1-1550PMAR-2). The electrical signal from the detector is measured with an electrical spectrum analyzer (Keysight, N9010B). The resolution bandwidth, video bandwidth, and analysis frequency are set to 3 MHz, 1 kHz, and 10 MHz, respectively. Figure \ref{fitting} shows the pump power dependency of the squeezing and anti-squeezing levels obtained by the two measurement methods.
The squeezing and anti-squeezing levels are described as \cite{SqPhaseError}: \begin{eqnarray} R_{\pm} &=& L + (1-L) e^{\pm 2\sqrt{ap}}, \end{eqnarray} where $a$ is the nonlinear efficiency of an OPA, $p$ is the intensity of the pump beam for the OPA, and $L$ is the total optical loss. The efficiency $a$ and loss $L$ are fitted to be 19.1 /W and 42.5\% for the balanced homodyne measurement. Considering the excess loss of 14\% in the fiber beamsplitter, the effective loss of 2\% due to circuit noise, and the quantum efficiency of 93\% of the detector including collimating and focusing lenses, the total detection efficiency in the balanced homodyne detection setup is calculated to be 78\%, and the optical loss of the squeezed vacuum in OPA1 is estimated to be 27\%. The efficiency $a$ and loss $L$ are fitted to be 20.1 /W and 48.7\% for the all-optical phase-sensitive detection. The ``quantum efficiency'' of OPA2 as a detector is calculated to be $(1-0.487)/(1-0.27)\approx0.70$; the deviation from unity is attributed to loss in OPA2 and the fiber joint. This loss could be reduced by improving the coupling efficiency between the waveguide and the fiber in the OPA module and also by reducing the propagation loss in the waveguide due to surface roughness \cite{4dBFiberedOPA}. \subsection{Dispersion compensation} \begin{figure}[!t] \centering\includegraphics[width=8.5cm]{fig_dcf.eps} \caption{The result of dispersion compensation of a phase-locked squeezed vacuum measured by the optical spectrum analyzer. The setup and settings are the same as in Figs. \ref{setup} and \ref{param} except for the insertion of a dispersion-compensating fiber. The intensity of the pump beam for OPA1 measured at PD1 is 100 mW.} \label{DCF} \end{figure} In the spectra shown in Fig. \ref{param}, there are ripples due to the chromatic dispersion of the fibers between the OPAs. We also demonstrate the compensation of the dispersion of the squeezed vacuum. In telecommunications research, it is known that the 2nd-order chromatic dispersion $D$ relates to the gain spectrum $g(f)$ of cascaded OPAs as \cite{DispersionAsobe,ShimizuDispersion}: \begin{eqnarray} g(f) &=& g_0 \cos^2 \left(\phi(f)\right) + g_0^{-1} \sin^2 \left(\phi(f)\right), \label{gainspectrum}\\ \phi (f) &=& \pi D c \left(\frac{f_0 - f}{f_0}\right)^2 +\phi_0, \end{eqnarray} where $f$ is the frequency of the light, $f_0$ is the center frequency, namely 194 THz, $c$ is the speed of light, $g_0$ is the gain of the second OPA, and $\phi_0$ is the phase corresponding to the locking point. Equation \ref{gainspectrum} is modified for the measurement of squeezed light as follows: \begin{eqnarray} R(f) &=& R_{+} \cos^2 \left(\phi(f)\right) + R_{-} \sin^2 \left(\phi(f)\right), \end{eqnarray} where $R(f)$ is the measured spectrum normalized by the amplified vacuum level. Using this equation, we estimate the dispersion $D$ for the spectra in Fig. \ref{param} to be 0.033 ps/nm, which is a reasonable value for the dispersion of the 2-m single-mode PM optical fiber \cite{DispersionValue}.
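Both the pump-power fit above and the ripple model are compact enough to sketch numerically. In the Python fragment below, the data points are synthetic and the initial guesses are ours, so it illustrates the fitting procedure rather than reproducing the actual fits:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def r_plus(p, a, L):
    """Anti-squeezing level R+ = L + (1-L)exp(+2*sqrt(a*p)), p in watts."""
    return L + (1.0 - L) * np.exp(2.0 * np.sqrt(a * p))

# Synthetic "measurements" at the pump powers used in the experiment.
pump = np.array([0.05, 0.10, 0.20])                   # W
data = r_plus(pump, 20.1, 0.487)                      # noiseless demo

popt, _ = curve_fit(r_plus, pump, data, p0=[15.0, 0.3],
                    bounds=([0.0, 0.0], [100.0, 1.0]))
print(f"fitted a = {popt[0]:.1f} /W, L = {popt[1]:.3f}")

def ripple(f_thz, d_ps_nm, r_p, r_m, phi0=0.0, f0=194.0):
    """Normalized spectrum R(f) for a 2nd-order dispersion D [ps/nm]."""
    c_nm_ps = 2.998e5                                 # speed of light, nm/ps
    phi = np.pi * d_ps_nm * c_nm_ps * ((f0 - f_thz) / f0) ** 2 + phi0
    return r_p * np.cos(phi) ** 2 + r_m * np.sin(phi) ** 2

f = np.linspace(191.0, 197.0, 7)                      # THz
print(np.round(ripple(f, 0.033, 9.8, 0.48), 2))       # ripples across 6 THz
\end{verbatim}
In this parametrization, $D=0.033$ ps/nm puts the first phase flip ($\phi-\phi_0=\pi$) at roughly 2 THz from the center frequency.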
To eliminate the dispersion, a dispersion-compensating fiber (DCF) patch cable (assembled by Optronscience inc.) is inserted between the optical fiber pigtails of the two OPAs. The DCF patch cable consists of 70-cm DCF (Thorlabs, PMDCF) spliced to 15-cm PM optical fibers (Thorlabs, PM1550-XP) at both ends and SC/PC connectors, and the length of the fibers is designed to counteract the dispersion of the 2-m PM fiber. Figure \ref{DCF} shows the result of all-optical phase-sensitive detection with the DCF. The interval of the ripple is lengthened, corresponding to a residual dispersion of 0.0045 ps/nm. The red curve in Fig. \ref{DCF} shows that the phase of the squeezed light is maintained over 1 THz (from 1537 nm to 1545 nm), which is about twice as wide as without the DCF, as shown in Fig. \ref{param} (from around 1541 nm to 1545 nm). Further reduction of the chromatic dispersion could be achieved with a spatial light modulator as in \cite{ShimizuDispersion} or by integrating the two OPAs into one LiNbO${}_{3}$ integrated optical circuit such as in \cite{fiberOPA20dB0,integratedLN1,integratedLN2}. Additionally, the squeezing and anti-squeezing levels of -1.2 dB and 7.1 dB around the center wavelength are consistent with the insertion loss of the DCF, 2.9 dB. Squeezed light with a locked phase maintained over 1 THz would allow a ``qumode'' \cite{QumodeVanLoock} to be defined in a micron-order wave packet in time-domain-multiplexed quantum computation \cite{PPLNOPA6dB}. \section{Conclusion} We demonstrated all-optical phase-sensitive detection of a quadrature amplitude of squeezed light using a fiber-coupled PPLN waveguide OPA. A squeezing level of 3 dB is observed over a 3-THz range of sideband frequencies. The measured squeezing level is consistent with that of the conventional phase-sensitive detection, namely homodyne measurement. The phase of the broadband squeezed light is locked with a variable optical bandpass filter, which enables the phase to be locked at an arbitrary point. Furthermore, we performed dispersion compensation of the broadband squeezed light, so that the phase of the squeezed light is maintained over 1 THz. Our work would help to realize all-optical quantum computation with an over-THz clock frequency. \section*{Funding Information} This work is funded by Core Research for Evolutional Science and Technology (CREST) (JPMJCR15N5) of the Japan Science and Technology Agency (JST), KAKENHI (18H05207) of the Japan Society for the Promotion of Science (JSPS), APLS of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), and The University of Tokyo Foundation. \section*{Disclosures} The authors declare no conflicts of interest.
1,477,468,750,204
arxiv
\section{Introduction} \label{sec:intro} Evidence for an early start of planet formation, when the disk is still embedded in its envelope, has been accumulating. For example, rings in continuum emission that are ubiquitously observed toward Class II protoplanetary disks \citep[e.g.,][]{Andrews2018} and could be a signpost of forming planets \citep[e.g.,][]{Bryden1999,Zhu2014,Dong2018}, are now also observed in disks as young as only $\sim$0.5 Myr \citep{ALMAPartnership2015,Segura-Cox2020,Sheehan2020}. Evidence for grain growth beyond interstellar medium (ISM) sizes has been inferred from low dust opacity spectral indices in Class 0 sources \citep{Kwon2009,Shirley2011}, dust polarization \citep[e.g.,][]{Kataoka2015,Kataoka2016,Yang2016}, decreasing dust masses derived from (sub-)millimeter observations for more evolved systems \citep[e.g.][]{Tychoniec2020}, and CO isotopologue emission \citep{Harsono2018}. In addition, outflows present in this early phase may provide a way to overcome the radial drift barrier \citep{Tsukamoto2021}. One of the key parameters in planet-formation models is the location of the water snowline, that is, the disk midplane radius at which water molecules freeze out onto the dust grains. At this location, the growth of dust grains, and thus the planet formation efficiency, is expected to be significantly enhanced through triggering of the streaming instability \citep[e.g.,][]{Stevenson1988,Schoonenberg2017,Drazkowska2017}. In addition, since water is the dominant carrier of oxygen, the elemental carbon-to-oxygen (C/O) ratio of the planet-forming material changes across the water snowline \citep{Oberg2011,Eistrup2018}. \citet{Lichtenberg2021} illustrated the importance of the snowline location during disk evolution, since migration of the snowline may explain the isotopic dichotomy of Solar System meteorites \citep[e.g.,][]{Leya2008,Trinquier2009,Kruijer2017}. From a different perspective, theoretical studies have shown that the position of the water snowline depends on the disk viscosity and dust opacity \citep{Davis2005,Lecar2006,Garaud2007,Oka2011}; hence, snowline measurements will provide important information for disk evolution models. Overall, observational constraints on the snowline location are thus crucial to understand planet formation and its outcome, and observations of young disks are particularly important as they represent the earliest stages of planet formation. Unfortunately, water emission is difficult to detect in both young and more evolved disks \citep{Du2017,Notsu2018,Notsu2019,Harsono2020}, and thus determining the exact location of the snowline is challenging. However, observations of protostellar envelopes have shown that \htcop can be used as an indirect chemical tracer of the water snowline \citep{Jorgensen2013,vantHoff2018a,vantHoff2022,Hsieh2019}. This is based on gaseous water being the most abundant destroyer of \hcop in warm dense gas around young stars. \hcop is therefore expected to be abundant only in the region where water is frozen out and gaseous CO is available for its formation \citep{Phillips1992,Bergin1998}. The high optical depth of the main isotopologue, \hcop, impedes snowline measurements in protostellar envelopes \citep{vantHoff2022}, warranting the use of the less abundant isotopologues \htcop or HC$^{18}$O$^+$.
Modeling of \hcop emission from Herbig disks has shown that this optical depth problem is partly mitigated in disks due to their Keplerian velocity pattern, as different velocities trace different radii \citep{Leemker2021}. Here, we present Atacama Large Millimeter/submillimeter Array (ALMA) observations of \hcop and \htcop in the young disk L1527 IRS (also known as IRAS 04368+2557 and hereafter referred to as L1527). This well-studied Class 0/I protostar located in the Taurus molecular cloud (142 pc, \citealt{GAIAcollaboration2021,Krolikowski2021}) is surrounded by a 75--125 au Keplerian disk \citep{Tobin2012,Tobin2013,Aso2017} that is viewed nearly edge-on \citep{Tobin2008,Oya2015} and is embedded in an extended envelope \citep[e.g.,][]{Ohashi1997,Tobin2008}. The observations are described and presented in Sects.~\ref{sec:Observations} and \ref{sec:Results}, respectively. In Sect.~\ref{sec:Modeling} we use the physical structure for L1527 derived by \citet{Tobin2013} to model the \hcop abundance, and the \hcop and \htcop emission, incorporating the \hcop abundance through either a simple parametrization (Sect.~\ref{sec:ParametrizedModel}) or the use of a small chemical network (Sect.~\ref{sec:ChemicalModel}). In Sect.~\ref{sec:SnowlineLocation} we then use the chemical modeling results to constrain the water snowline location in L1527. Finally, we discuss the cosmic ray (CR) ionization rate in Sect.~\ref{sec:CRrate} and summarize the main conclusions in Sect.~\ref{sec:Conclusions}. \section{Observations} \label{sec:Observations} L1527 was observed with ALMA in \hcop on 2014 June 14 (project code 2012.1.00346.S, PI: N. Evans) for a total on-source time of 11 minutes. These observations were carried out using 33 antennas sampling baselines up to 650 m. The correlator setup consisted of four 234 MHz spectral windows, including one targeting the \hcop $J=4-3$ transition at 356.734223 GHz, with 61 kHz ($\sim$0.05 km~s$^{-1}$) spectral resolution. In addition, L1527 was observed in \htcop on 2015 August 11 and 12 and September 2 (project code 2012.1.00193.S, PI: J.J. Tobin) for a total of 43 minutes on source per execution ($\sim$2.2 hours total). The observations were carried out with 42, 44 and 34 antennas for the three respective observing dates and sampled baselines up to 1.6 km. The correlator setup contained two 117 MHz spectral windows, including one targeting the \htcop $J=3-2$ transition at 260.255339 GHz, with 31 kHz ($\sim$0.05 km~s$^{-1}$) spectral resolution, and two 2 GHz spectral windows with 15.6 MHz resolution, aimed at continuum measurements. Calibration, self-calibration and imaging of the \hcop and \htcop datasets were done using versions 4.2.1 and 4.3.1 of the Common Astronomy Software Application (CASA, \citealt{McMullin2007}), respectively, where the \hcop data were calibrated using the ALMA Pipeline. For the \hcop observations, J0510+1800 was used as bandpass, phase and flux calibrator. For the \htcop observations, the bandpass calibrator was J0423--0120, the flux calibrator was J0423--0130, and the phase calibrator was J0510+1800 for the August observations and J0440+2728 for the September observations. Both lines are imaged at a spectral resolution of 0.1 \kms. A uv taper of 500 k$\lambda$ was applied to increase the signal-to-noise ratio of the \htcop image cube.
The restoring beam is 0$\farcs$50 $\times$ 0$\farcs$30 (PA = -3.2$^\circ$) for \hcop and 0$\farcs$47 $\times$ 0$\farcs$28 (44.7$^\circ$) for H$^{13}$CO$^+$, and the images have an rms of 20 mJy beam$^{-1}$ channel$^{-1}$ and 3.9 mJy beam$^{-1}$ channel$^{-1}$, respectively. The maximum recoverable scale is 2$\farcs$7 (380 au) for the \hcop observations and 2$\farcs$0 (280 au) for \htcop, that is, spanning the disk (75--125 au; \citealt{Tobin2012,Tobin2013,Aso2017}) and innermost envelope. \begin{figure} \centering \includegraphics[trim={0cm 5.2cm 4cm 1.4cm},clip]{L1527_overview_C18O.pdf} \caption{Integrated intensity maps (top) and position-velocity (\textit{pv}) diagrams (middle) for the \hcop $J=4-3$ (left) and \htcop $J=3-2$ (right) transitions toward L1527. Central velocity channels ($|\Delta v| \leq 0.5$ \kms) with resolved-out emission are omitted from the integrated intensity maps. The velocity axis of the \textit{pv} diagrams is centered on the systemic velocity of 5.9 \kms, and the C$^{18}$O $J=2-1$ \textit{pv} diagram is overlaid in black contours (3$\sigma$). The color scale is in mJy beam$^{-1}$ km s$^{-1}$ for the integrated intensity maps and in mJy beam$^{-1}$ for the \textit{pv} diagrams. The beam is shown in the bottom left corner of the top panels and the velocity resolution is 0.1 km s$^{-1}$. The bottom panel shows cuts through the \textit{pv} diagrams close to the midplane ($0\farcs2$ and -$0\farcs2$ for, respectively, redshifted and blueshifted C$^{18}$O and \hcop emission, and $\pm 0\farcs5$ for \htcop) to highlight the difference in velocity extent between C$^{18}$O (solid grey), \hcop (blue line) and \htcop (orange line). The flux is expressed in factors of $\sigma$ for each dataset, and the horizontal line marks the $3\sigma$ level. } \label{fig:L1527_observations} \end{figure} \section{Results} \label{sec:Results} Figure~\ref{fig:L1527_observations} (top panels) presents the integrated intensity maps for \hcop $J=4-3$ and \htcop $J=3-2$ toward L1527. Emission from channels near the systemic velocity ($|\Delta v| \leq 0.5$ \kms), where most of the emission is resolved out, is omitted. Both molecules display emission elongated along the north-south direction, that is, along the major axis of the edge-on disk, with the blueshifted emission south of the protostar. The \hcop emission is radially more compact than the \htcop emission, likely because the $J=4-3$ transition traces warmer and denser material than the $J=3-2$ transition. The higher sensitivity of the \htcop observations and more resolved-out emission for the optically thicker \hcop emission possibly play a role as well. For both lines, a central depression is visible, which at first glance may be interpreted as a lack of \hcop and \htcop in the inner region of the disk. However, modeling of \hcop emission by \citet{Hsieh2019} showed that a ring-shaped distribution of \hcop molecules in an embedded disk does not result in a central depression in emission for highly inclined sources. For the edge-on disk L1527, the central depressions are thus due to a combination of optically thick continuum emission in the central beam, resolved-out line emission, and the subtraction of continuum from optically thick line emission. A better picture of the spatial origin of the emission can be obtained from position-velocity (\textit{pv}) diagrams, as shown in Fig.~\ref{fig:L1527_observations} (middle panels).
In principle, in these diagrams, disk emission is located at small angular offsets and high velocities, while envelope emission extends to larger offsets but has lower velocities. The \textit{pv} diagrams show that the \hcop emission peaks at angular offsets of $\sim$1$^{\prime\prime}$ and velocities between $\sim$1--2 km s$^{-1}$, while the \htcop emission peaks at larger offsets ($\sim$1.5--3$^{\prime\prime}$) and lower velocities ($\lesssim$1 km s$^{-1}$). The presence of an infalling envelope is also evident from redshifted emission on the predominantly blueshifted south side of the source and blueshifted emission in the north. These components are strongest for H$^{13}$CO$^+$. Together, this suggests that the \hcop emission is dominated by the disk and innermost envelope and that the \htcop emission originates mostly at larger radii ($\gtrsim$ 140 au). However, if the emission is optically thick, emission observed at small spatial offsets from the source center may in fact originate at much larger radii (see e.g., \citealt{vantHoff2018b}), so the difference between \hcop and \htcop can be partially due to an optical depth effect. An absence of \hcop inside the water snowline in the inner disk would show up in the \textit{pv} diagram as an absence of emission at the highest velocities. Because at these highest velocities only emission from the disk, and not from the envelope, is present (see e.g., Fig.~\ref{fig:VelocityField}), this effect can still be visible even if the emission becomes optically thick in the envelope. As a reference, the 3$\sigma$ contour of C$^{18}$O $J=2-1$ emission at comparable resolution ($0\farcs43 \times 0\farcs28$) is overlaid on the \hcop and \htcop \textit{pv} diagrams. These C$^{18}$O observations were previously presented by \citet{vantHoff2018b}, but to maximize the signal-to-noise ratio, we show here the combined data from the long and short baseline tracks of the observing program, while \citet{vantHoff2018b} only used the long baseline executions. C$^{18}$O is present throughout the entire disk, so an absence of \hcop and \htcop emission at the highest C$^{18}$O velocities signals a depression or absence of these molecules in the inner region of the disk. The highest blue- and redshifted velocities observed for C$^{18}$O are $-3.6$ \kms and $+3.0$ \kms, respectively, with respect to the source velocity. \hcop reaches velocities close to the highest redshifted C$^{18}$O velocity, that is, $-2.8$ and $+2.9$ km s$^{-1}$, while \htcop is confined between $-2.1$ and $2.2$ km s$^{-1}$ at the 3$\sigma$ level of the observations (see Fig.~\ref{fig:L1527_observations}, bottom panel). A more quantitative constraint on the spatial origin of the emission can be set by considering the velocity structure. To calculate the velocity field, we adopt a Keplerian rotating disk with an outer radius of 125 au \citep{Tobin2013} embedded in a rotating infalling envelope following the prescription by \citet{Ulrich1976} and \citet{Cassen1981}. We use a stellar mass of 0.4 $M_\odot$, as this was found to best reproduce ALMA observations of $^{13}$CO and C$^{18}$O \citep{vantHoff2018b}. This is slightly lower than the $\sim$0.45 $M_\odot$ derived by \citet{Aso2017}. The resulting midplane velocity field is displayed in Fig.~\ref{fig:VelocityField}. For this stellar mass and disk radius, emission at velocities $\gtrsim |2.6|$ km s$^{-1}$ offset from the source velocity originates solely in the disk.
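For the disk, these velocity--radius bounds follow directly from the Keplerian relation: in an edge-on geometry, emission at projected velocity $|v|$ can only arise at radii $r \leq GM_\star/v^2$ (the infalling envelope follows a different profile, cf. Fig.~\ref{fig:VelocityField}). A minimal Python sketch, assuming purely Keplerian motion:
\begin{verbatim}
import numpy as np

G_SI = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30      # solar mass, kg
AU = 1.496e11         # astronomical unit, m

def max_radius_au(v_kms, m_star=0.4):
    """Largest radius contributing at projected velocity |v| (edge-on)."""
    return G_SI * m_star * M_SUN / (v_kms * 1e3) ** 2 / AU

for v in (2.2, 2.6, 2.9):
    print(f"|v| = {v} km/s  ->  r <= {max_radius_au(v):.0f} au")
\end{verbatim}
This reproduces the $\sim$73 au and $\sim$42 au bounds used in the following paragraph.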
The highest velocity \hcop emission observed at the current sensitivity is predominantly coming from the disk at radii $\gtrsim$ 42 au. All H$^{13}$CO$^+$ velocity channels contain emission from both disk and envelope. This means that either the observed H$^{13}$CO$^+$ emission originates solely in the envelope, or there is some emission coming from the outer disk (radii $\gtrsim$ 73 au) as well. As illustrated in Fig.~\ref{fig:VelocityField}, these cases are not trivial to distinguish, as the envelope velocity profile results in envelope emission being present at small angular offsets from the protostellar position. Nevertheless, taken together, these results suggest an absence of \hcop emission in the inner $\sim$40 au at the sensitivity of our observations. \begin{figure*} \centering \includegraphics[trim={0cm 12.8cm 0cm 0cm},clip]{AbundanceJump_HCOp_4-3_R60au_2e-11_2e-10.png} \caption{Selected channels of \hcop $J=4-3$ emission from the observations (top two rows) and from an L1527-specific model with an abundance of $2\times10^{-11}$ at radii $\leq$60 au and an abundance of $2\times10^{-10}$ at larger radii (middle two rows). The velocities offset from the source velocity (km s$^{-1}$) are listed in the top right corner of each panel, and channels at velocities $\gtrsim|2.6|$ km s$^{-1}$ contain only emission from the disk. A white contour denotes the 3$\sigma$ level. Residuals after subtracting the model from the observations are shown in the bottom two rows. Black contours are in steps of 3$\sigma$, starting at 3$\sigma$, and orange contours are in steps of -3$\sigma$, starting at -3$\sigma$. The black cross marks the continuum peak and the beam is shown in the bottom left corner of the rightmost panels. } \label{fig:L1527_HCO+model} \end{figure*} \section{Modeling of the HCO$^+$ emission}\label{sec:Modeling} To further interpret these observations, we make synthetic \hcop and \htcop images using the physical structure for L1527 derived by \citet{Tobin2013}, which was also used by \citet{vantHoff2018b} and \citet{vantHoff2020} to model the $^{13}$CO, C$^{18}$O and C$^{17}$O emission. In short, this model contains a 125 au Keplerian disk within a rotating infalling envelope \citep{Ulrich1976,Cassen1981} and is the result of fitting a large grid of 3D radiative transfer models to the thermal dust emission in the (sub-)millimeter, the scattered light $L^{\prime}$ image, and the multi-wavelength SED. In order to fit the multi-wavelength continuum emission, a parameterized (sub-)millimeter dust opacity was adopted with a value of 3.5 cm$^{2}$ g$^{-1}$ at 850 $\mu$m \citep{Andrews2005}, and the best-fit model has a dust opacity spectral index $\beta$ of 0.25. This dust opacity suggests that some grain growth has occurred (see \citealt{Tobin2013} for more discussion). In our model, the dust then becomes optically thick at radii $\lesssim$4 au for different angular offsets along the disk major axis at the frequency of the \hcop $J=4-3$ transition (356.734223 GHz; see Fig.~\ref{fig:VelocityField}). The temperature and density structure of the model is shown in Fig.~\ref{fig:PhysicalStructure}. We employ two approaches to constrain the spatial origin of the \hcop and \htcop emission and the water snowline location. First, we adopt a parametrized abundance structure where the \hcop abundance is vertically constant but can change at different radii (Sect.~\ref{sec:ParametrizedModel}).
This simple type of model will allow us to address whether the non-detection of \hcop and \htcop emission at velocities as high as observed for C$^{18}$O is due to a steep drop in abundance, as expected inside the water snowline. Second, we use a small chemical network for \hcop as presented by \citet{Leemker2021} for a more detailed study of the snowline location (Sect.~\ref{sec:ChemicalModel}). In both cases, image cubes are simulated with the 3D radiative transfer code LIME \citep{Brinch2010}, assuming LTE and using molecular data files from the LAMDA database \citep{Schoier2005,vanderTak2020}. The synthetic image cubes are continuum subtracted and convolved with the observed beam size. \subsection{Parametrized abundance structure} \label{sec:ParametrizedModel} Our goal here is to determine whether the absence of \hcop and \htcop emission in the inner disk is due to a sharp drop in abundance, as expected inside the water snowline. We therefore parametrize the \hcop abundance as a function of radius and focus on the intermediate and high velocity channels that contain emission from the disk and inner envelope. Velocity-channel emission maps of a model that reproduces the \hcop emission at velocities $\geq |2.3|$ \kms reasonably well are presented in Fig.~\ref{fig:L1527_HCO+model}. This model has an \hcop abundance of $2\times10^{-11}$ at radii $\leq$ 60 au, and an abundance of $2\times10^{-10}$ at larger radii. This latter abundance is not high enough to reproduce the observed envelope emission at lower velocities, which is most likely the reason that the redshifted emission at intermediate velocities (2.3--2.5 \kms) is slightly underestimated. However, the important result here is that the \hcop abundance inside 60 au is low, and therefore the non-detection of emission at velocities $\geq |2.9|$ \kms (tracing the inner $\sim$40 au) could be due to the sensitivity of the observations. Abundances higher than $2\times10^{-11}$ produce emission above the observed 3$\sigma$ level at velocities $\geq |2.9|$ \kms, but a further drop in abundance at radii $\lesssim$40 au cannot be assessed. The abundance in the outer disk ($>$ 60 au) is hard to constrain as well, because the abundances in the outer disk and inner envelope are degenerate. A model with an abundance of $2\times10^{-11}$ throughout the entire disk and an envelope abundance of $1\times10^{-9}$ reproduces the observations as well as the model displayed in Fig.~\ref{fig:L1527_HCO+model}. We can break this degeneracy using the \htcop observations. As shown in Fig.~\ref{fig:L1527_H13CO+model}, the \htcop emission at velocities $\geq |1.9|$ km s$^{-1}$ can be reproduced by a model with a constant \htcop abundance of $3\times10^{-12}$ in both disk and envelope. For an elemental $^{12}$C/$^{13}$C ratio of 68 \citep{Milam2005}, an \htcop abundance of $3\times10^{-12}$ suggests an \hcop abundance of $2\times10^{-10}$. Together, these modeling results thus suggest that the \hcop abundance is lower in the disk than in the envelope, with abundances of $2\times10^{-10}$ in the outer disk ($>$ 60 au), $2\times10^{-11}$ at 40--60 au, and $\leq 2\times10^{-11}$ at radii $<$ 40 au.
\begin{figure*} \centering \subfloat{\includegraphics[trim={0.2cm 17.2cm 7.7cm 0cm},clip]{Spectra_HCO+_fiducial-CR.pdf}} \subfloat{\includegraphics[trim={0.2cm 17.2cm 7.7cm 0cm},clip]{Spectra_H13CO+_fiducial-CR.pdf}} \caption{Spectra of the \hcop (left panel) and \htcop (right panel) emission extracted in a 0$\farcs$75 aperture centered on the blueshifted emission peak (left side of each panel) and on the redshifted emission peak (right side of each panel). The observations are displayed in discrete velocity bins of 0.1 \kms with the shaded area depicting the 3$\sigma$ uncertainty and a 10\% flux calibration uncertainty. The smooth lines are for models with a CO abundance of 10$^{-4}$ and a H$_2$O abundance of 10$^{-6}$, but with varying cosmic ray ionization rates of 10$^{-18}$ s$^{-1}$ (thick line; referred to as the fiducial model) and 10$^{-17}$ s$^{-1}$ (thin line; canonical cosmic ray ionization rate). The horizontal black line marks the 3$\sigma$ level. The velocity range containing only emission from the disk is marked by grey bars in the top of the panels, and the maximum radius probed at certain velocities is indicated. The displayed velocity range is different for the two molecules.} \label{fig:Spectra_fiducial-CR} \end{figure*} \citet{Jorgensen2004a} derived an \htcop abundance of $8.5\times10^{-12}$ for the envelope around L1527 from multiple single-dish observations. This is within a factor of three of our derived abundance of $3\times10^{-12}$, and consistent with our result that the abundance increases at larger radii. Our derived \hcop abundances on disk scales are consistent with the modeling results from \citet{Leemker2021} for protoplanetary disks around Herbig stars, which show a similar \hcop abundance gradient in the outer disk (20--100 au) and a stronger decrease across the snowline (4.5 au), with the abundance dropping below $10^{-12}$. The current observations are not sensitive enough to constrain such low abundances. However, since the \hcop abundance also depends on density and ionization, chemical modeling using a physical model specific to L1527 is required to link the abundance structure to the snowline. \begin{figure*} \centering \includegraphics[width=\textwidth,trim={.5cm 9.6cm .3cm 2.3cm},clip]{Model6a.pdf} \caption{Abundance structure for CO (top panels), H$_2$O (middle panels) and \hcop (bottom panels) predicted by the fiducial chemical model with initial CO and H$_2$O abundances of 10$^{-4}$ and 10$^{-6}$, respectively, and a cosmic ray ionization rate of $10^{-18}$ s$^{-1}$ for the physical structure of L1527. From left to right, panels display larger spatial scales. The disk outer radius is 125 au. The dashed line in the two leftmost columns marks the H$_2$O snow surface and the midplane snowline at 3.4 au.} \label{fig:Chemistry_fiducial} \end{figure*} \subsection{Chemical modeling} \label{sec:ChemicalModel} The chemical network presented by \citet{Leemker2021} was developed to study the relationship between the water snowline and \hcop (and \htcop) emission in Herbig disks. It contains the main formation and destruction reactions for \hcop, as well as the freeze-out and thermal desorption of water, as illustrated in Fig.~\ref{fig:ChemicalNetwork}. Reaction rate constants are taken from the UMIST database \citep{McElroy2013}, and are listed in Table 2 of \citet{Leemker2021}. For water, a binding energy of 5775 K is used, which corresponds to amorphous water ice \citep{Fraser2001}.
Freeze-out of CO, the parent gas-phase molecule of \hcop, was not included in the study by \citet{Leemker2021} as it only occurred in the low-density outer region of the Herbig disks. Although there is no sign of CO freeze-out in the disk around L1527 \citep{vantHoff2018b,vantHoff2020}, we have added the freeze-out and thermal desorption of CO to the chemical network for completeness and to display the \hcop abundance structure in the envelope. The exact temperature at which CO desorbs depends on the composition of the ice \citep[e.g.,][]{Collings2003}, with pure CO ice having a lower binding energy (855 K; \citealt{Bisschop2006}), and hence desorbing at lower temperatures, than CO ice on top of water ice (1150 K; \citealt{Garrod2006}). The resulting desorption temperatures differ by $\sim$6 K; for example, 18 K versus 24 K for a density of $10^{7}$ cm$^{-3}$. In either case the CO snowline is located outside the L1527 disk, at $\sim$500 au or $\sim$200 au, respectively, and we adopt the binding energy of 855 K for pure CO ice \citep{Bisschop2006}. Including or excluding the freeze-out of CO does not influence the \hcop emission in the disk and inner-envelope velocity channels that we are interested in here. The freeze-out rates depend on the available surface area of the dust grains. Following \citet{Leemker2021}, we assume a typical grain number density of $10^{-12} \times n$(H$_2$) and a grain size of 0.1 $\mu$m. Even if a fraction of the grains have grown to larger sizes, the smallest grains will dominate the surface area, and \citet{vantHoff2017} showed that adopting a more detailed description of the available surface area did not significantly affect the predicted N$_2$H$^+$ abundance for the protoplanetary disk around TW Hya. For the radiative transfer, we use the dust opacities from \citet{Tobin2013} as described at the beginning of Sect.~\ref{sec:Modeling}. Initially, all abundances are set to zero, except for H$_2$, gas-phase CO and gas-phase H$_2$O, and we run the chemistry for $10^5$ yr. Running the chemistry for $10^6$ yr, as typically done for protoplanetary disk studies, does not affect the \hcop abundance structure in the disk and inner envelope (radii $\lesssim$ 1000 au; see Appendix \ref{ap:Chemistry}). The main free parameters in the model are the initial CO and H$_2$O abundances and the cosmic ray ionization rate, which initiates the ion-neutral chemistry by ionizing H$_2$. The model does not include isotope-specific reactions, and we adopt a $^{12}$C/$^{13}$C ratio of 68 \citep{Milam2005} to generate \htcop image cubes. \citet{vantHoff2018b} did not find evidence for a CO abundance much lower than the canonical value of $10^{-4}$ in the L1527 disk, and \citet{Harsono2020} derived an upper limit for the H$_2$O abundance of $10^{-6}$. A model with these initial abundances reproduces the \hcop and \htcop observations as well as the parametrized model described in Sect.~\ref{sec:ParametrizedModel} does. For \hcop, a cosmic ray ionization rate of $10^{-18}$ s$^{-1}$ (about one order of magnitude below the canonical value) needs to be adopted (see Fig.~\ref{fig:Spectra_fiducial-CR}). The asymmetry between blueshifted and redshifted emission is due to the kinematics of the disk and envelope, with the envelope in front of the disk for redshifted emission and the envelope behind the disk for blueshifted emission (see Appendix~\ref{ap:Chemistry}).
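As an aside, the binding energies and grain parameters adopted above can be turned into approximate snow-surface temperatures by balancing adsorption onto the grains against thermal desorption. The Python sketch below does this with an assumed attempt frequency of $\nu_0 = 10^{12}$ s$^{-1}$ and illustrative densities (neither is a value quoted in this paper), so the numbers are indicative only:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

K_B = 1.381e-16       # Boltzmann constant, erg/K
AMU = 1.661e-24       # atomic mass unit, g

def t_snow(e_bind_k, mass_amu, n_h2):
    """Temperature [K] where thermal desorption balances adsorption,
    for 0.1 um grains with n_grain = 1e-12 * n(H2), as adopted above."""
    nu0 = 1e12                           # attempt frequency, s^-1 (assumed)
    sigma = np.pi * (1e-5) ** 2          # grain cross section, cm^2
    n_gr = 1e-12 * n_h2                  # grains per cm^3

    def balance(t):
        v_th = np.sqrt(8 * K_B * t / (np.pi * mass_amu * AMU))  # cm/s
        return nu0 * np.exp(-e_bind_k / t) - n_gr * sigma * v_th

    return brentq(balance, 5.0, 500.0)

print(f"H2O (5775 K, n = 1e10 cm^-3): ~{t_snow(5775, 18, 1e10):.0f} K")
print(f"CO  ( 855 K, n = 1e7  cm^-3): ~{t_snow(855, 28, 1e7):.0f} K")
\end{verbatim}
This equilibrium estimate yields $\sim$130 K for water and $\sim$17 K for CO, close to the 140 K snowline temperature and the $\sim$18 K CO desorption temperature quoted in this section; the full model treats the kinetics explicitly rather than assuming equilibrium.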
Since the \hcop observations are more sensitive to the disk than the \htcop observations, as discussed in Sections \ref{sec:Observations} and \ref{sec:ParametrizedModel}, we first focus on the model that reproduces the \hcop emission; we discuss the ionization rate in more detail in Sect.~\ref{sec:CRrate}. The abundance structure reproducing the \hcop observations (our fiducial model with X(CO) = 10$^{-4}$, X(H$_2$O) = 10$^{-6}$ and $\zeta_{\rm{CR}} = 10^{-18}$ s$^{-1}$) is presented in Fig.~\ref{fig:Chemistry_fiducial}. For the adopted temperature and density structure, the water snowline is located at 3.4 au, which corresponds to a temperature of 140 K. The snow surface is located high up in the disk surface layers, making most of the disk and the inner envelope devoid of gas-phase water. Water is present in the gas phase at radii $\gtrsim$ 3000 au because the density in the outer envelope becomes too low for water to freeze out on the timescale of the model. A similar water abundance profile was found by \citet{Schmalzl2014}, who used a simplified chemical network that was benchmarked against three full chemical networks to model \textit{Herschel} observations of water in protostellar envelopes. Overall, the \hcop abundance gradually decreases with increasing density and drops steeply across the water snow surface. At the high midplane densities, the \hcop abundance remains low directly outside the water snowline, as shown in earlier work \citep{vantHoff2018a,Hsieh2019}. In the midplane, the \hcop abundance decreases from $10^{-11}$ at 16 au to $3\times10^{-12}$ at 5 au and then drops steeply to abundances $< 10^{-12}$ inside the snowline. These abundances are all at least an order of magnitude lower than the upper limit derived for the high velocity channels probing radii $\leq 40$ au using the parametrized model in Sect.~\ref{sec:ParametrizedModel}. The sensitivity of the observations is thus not high enough to probe the \hcop abundance drop across the snowline, and the absence of emission in the highest velocity channels cannot be linked to the snowline. The high \hcop abundance in the uppermost surface layers of the disk is likely because CO photodissociation is not included in the model. In this region, the rate of the reaction between \hcop and H$_2$O, $R_\mathrm{destruction}$, \begin{equation} R_{\mathrm{destruction}} \propto n(\mathrm{HCO}^+) n(\mathrm{H}_2\mathrm{O}), \end{equation} is low, because the low density results in a low H$_2$O number density, $n$(H$_2$O). As discussed by \citet{Leemker2021}, electron recombination becomes the dominant destruction mechanism of \hcop in this region. At the same time, the \hcop formation rate, $R_\mathrm{formation}$, \begin{equation} R_{\mathrm{formation}} \propto n(\mathrm{CO}) n(\mathrm{H}_3^+), \end{equation} remains high as the H$_3^+$ number density is set by the cosmic ray ionization rate and is therefore independent of density. Including CO photodissociation would remove the parent molecule CO and hence prevent \hcop formation, but knowledge of the UV field is required for a proper implementation. However, this low-density layer does not contribute significantly to the total \hcop emission. Manually removing this layer before the radiative transfer results in flux differences of less than 0.2\% and spectra identical to those displayed in Fig.~\ref{fig:Spectra_fiducial-CR}. The \hcop abundance is barely influenced by CO freeze-out, because this occurs only in a small region of the envelope.
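These two rates also make the role of the ionization rate explicit: if \hcop is the dominant ion and dissociative recombination with electrons is its main sink, charge neutrality combined with the ionization-recombination balance $\zeta_{\rm{CR}}\,n(\mathrm{H}_2) = k_{\rm{rec}}\,n(\mathrm{HCO}^+)\,n(e^-)$ gives $n(\mathrm{HCO}^+) = \sqrt{\zeta_{\rm{CR}}\,n(\mathrm{H}_2)/k_{\rm{rec}}}$, that is, the square-root scaling invoked in Sect.~\ref{sec:CRrate}. A minimal sketch, with a recombination coefficient set to a typical literature value (an assumption, not a number quoted in this paper):
\begin{verbatim}
import numpy as np

def x_hcop(zeta, n_h2, t_gas=30.0):
    """Steady-state HCO+ abundance relative to H2 (illustrative)."""
    k_rec = 2.4e-7 * (t_gas / 300.0) ** -0.69   # cm^3 s^-1, assumed
    return np.sqrt(zeta * n_h2 / k_rec) / n_h2

for zeta in (1e-18, 1e-17):
    print(f"zeta = {zeta:.0e} s^-1 -> x(HCO+) ~ {x_hcop(zeta, 1e8):.1e}")
\end{verbatim}
The two values differ by $\sqrt{10}\approx3.2$, matching the factor $\sim$3 between the two ionization rates discussed in Sect.~\ref{sec:CRrate}.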
The \hcop abundance is barely influenced by CO freeze out, because this occurs only in a small region of the envelope. In the disk and inner envelope ($\lesssim$900 au), the temperature is too high for CO to freeze out, while at radii $\gtrsim$2500 au the density becomes too low for CO to freeze out within $10^5$ yr, resulting in CO being present in the gas phase at the initial abundance throughout the majority of the system. A similar abundance profile was derived by \citet{Jorgensen2005} for a sample of 16 sources, and $^{13}$CO has been observed out to radii of $\sim$10,000 au toward L1527 \citep{Zhou1996}. Running the chemistry for $10^6$ yr results in a midplane region with a decreased \hcop abundance centered around 2500 au due to higher levels of CO freeze out. Simultaneously, the higher levels of H$_2$O freeze out increase the \hcop abundance, resulting overall in only a small region with a lower \hcop abundance at radii $\gtrsim$1000 au after $10^6$ yr. The \hcop abundance at smaller radii is unaffected (see Fig.~\ref{fig:Chemistry_1e6yr}).

\begin{figure*} \centering \subfloat{\includegraphics[trim={0.2cm 17.2cm 7.7cm 0cm},clip]{Spectra_HCO+_Tdisk.pdf}} \subfloat{\includegraphics[trim={0.2cm 17.2cm 7.7cm 0cm},clip]{Spectra_HCO+_colddiskCR.pdf}} \caption{Spectra of the \hcop emission extracted in a 0$\farcs$75 aperture centered on the blueshifted emission peak (left side of each panel) and on the redshifted emission peak (right side of each panel). The observations are binned to 0.5 km s$^{-1}$ and the shaded area depicts the 3$\sigma$ uncertainty and a 10\% flux calibration uncertainty. The thick dark colored lines are for the fiducial model with a snowline at 3.4 au. The other lines represent models where the disk temperature has been multiplied by a constant factor to obtain snowline locations of 1.5 au (dotted line), 1.8 au (dashed line) and 4.1 au (thin solid line). The cosmic ray ionization rate is $10^{-18}$ s$^{-1}$ for the models in the left panel and $10^{-17}$ s$^{-1}$ for the models in the right panel, except for the fiducial model (thick line) which is the same as in the left panel. The horizontal black line marks the 3$\sigma$ level. The velocity range containing only emission from the disk is marked by grey bars at the top of the panels, and the maximum radius probed at certain velocities is indicated.} \label{fig:Spectra_Tdisk} \end{figure*}

The \hcop abundance structure in the disk as presented in Fig.~\ref{fig:Chemistry_fiducial} is very robust with respect to changes in the initial CO and H$_2$O abundance. \citet{Aso2017} suggested a CO abundance of $2\times10^{-5}$ based on ALMA observations of C$^{18}$O, but lowering the CO abundance by two orders of magnitude does not significantly affect the \hcop abundance in the disk (see Fig.~\ref{fig:Chemistry_lowCO}). The only changes are a lower \hcop abundance in the uppermost surface layers of the disk (above the snow surface) and a lower abundance at radii $\gtrsim$900 au. A canonical H$_2$O abundance of $10^{-4}$ increases the vertical height of the \hcop depleted layer above the snow surface and strongly decreases the \hcop abundance at radii $\gtrsim$1500 au (see Fig.~\ref{fig:Chemistry_highH2O}). Lowering the H$_2$O abundance further to $10^{-7}$ slightly increases the \hcop abundance above the snow surface. The \hcop abundance thus remains unaltered throughout the majority of the disk for CO abundances between $10^{-4}$ and $10^{-6}$ and H$_2$O abundances between $10^{-4}$ and $10^{-7}$.
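The density threshold implied above, with CO remaining gaseous where freeze out takes longer than the $10^5$ yr chemical timescale, can be estimated from the grain-collision time, as sketched below. The grain properties follow the assumptions stated earlier in this section; the temperature is an illustrative envelope value.

\begin{verbatim}
import numpy as np

k_B, m_CO = 1.380649e-16, 28.0 * 1.6605e-24   # cgs
SEC_PER_YR = 3.156e7

def t_freeze_yr(n_H2, T=15.0, a_gr=1e-5, x_gr=1e-12):
    # time for a CO molecule to hit a grain: 1 / (n_gr * sigma * v_th)
    v_th = np.sqrt(8 * k_B * T / (np.pi * m_CO))
    return 1.0 / (x_gr * n_H2 * np.pi * a_gr**2 * v_th) / SEC_PER_YR

for n in (1e4, 1e5, 1e6):
    print(f"n(H2) = {n:.0e} cm^-3 -> t_freeze ~ {t_freeze_yr(n):.1e} yr")
\end{verbatim}

Only for $n$(H$_2$) $\gtrsim 10^5$ cm$^{-3}$ does freeze out proceed within $10^5$ yr, consistent with CO staying in the gas phase in the tenuous outer envelope.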
The only parameter that has a strong impact on the \hcop abundance in the disk is the cosmic ray ionization rate. As described in \citet{vantHoff2018a}, the \hcop abundance is proportional to the square root of the cosmic ray ionization rate. Hence, a canonical value of $10^{-17}$ s$^{-1}$ results in a \hcop abundance higher by a factor $\sim$3 compared to a rate of $10^{-18}$ s$^{-1}$ (Fig.~\ref{fig:Chemistry_CR} versus Fig.~\ref{fig:Chemistry_fiducial}), and an \hcop flux that is too high compared to the observations (see Fig.~\ref{fig:Spectra_fiducial-CR}). The predicted \hcop flux for a CR ionization rate of $10^{-17}$ s$^{-1}$ is less than a factor 3 higher than the flux predicted for a rate of $10^{-18}$ s$^{-1}$, signaling that the emission becomes optically thick.

\section{Discussion} \subsection{The water snowline location in L1527}\label{sec:SnowlineLocation}

For the temperature and density structure derived by \citet{Tobin2013}, the water snowline is predicted to be at 3.4 au in the disk around L1527, and the corresponding \hcop abundance structure from a small chemical network calculation can reproduce the observed \hcop emission. Although the current observations are not sensitive to the expected \hcop abundance changes across the snowline, the chemical model shows a decrease in the \hcop abundance over a much larger radial range above the snow surface. This suggests that the snowline location may be constrained indirectly from \hcop emission based on the global temperature structure. To investigate the constraining power of the current observations, we run a set of models with snowline locations separated by $\sim$0.2--0.5 au, generated by multiplying the fiducial temperature structure in the disk by a constant factor. To obtain the maximum sensitivity, we bin the observations to 0.5 km s$^{-1}$. Figure~\ref{fig:Spectra_Tdisk} displays the \hcop spectra for models with a snowline at 1.5, 1.8, 3.4 (fiducial) and 4.1 au. The differences between these models are too small to be distinguished at disk-only velocities, but the current sensitivity is high enough to see a significant difference at a $\pm$2.4 \kms velocity offset. While a snowline at 1.8 au produces \hcop emission at about the 3$\sigma$ uncertainty level (including a 10\% flux calibration uncertainty), a snowline at 1.5 au clearly underproduces the emission. The effect of a warmer disk is not very pronounced at redshifted velocities, but a snowline at 4.1 au slightly overproduces the blueshifted emission. Lower \hcop emission as a result of a colder disk can partially be compensated by a higher cosmic ray ionization rate. For $\zeta_{\rm{CR}} = 10^{-17}$ s$^{-1}$, a snowline at 1.5 au can reproduce the observed \hcop emission, while a snowline at 1.8 au results in a flux that is too high. However, \citet{vantHoff2018b} were not able to reproduce the $^{13}$CO and C$^{18}$O $J=2-1$ emission with the temperature structure corresponding to a 1.5 au snowline (their Intermediate Model), instead requiring a warmer disk. Changing the temperature in the envelope as well has only a small effect on the emission at $\pm$2.4 \kms (Fig.~\ref{fig:Spectra_Tdisk-full}). Taken together, for the physical structure adopted here, these results suggest a snowline radius of 1.8--4.1 au if $\zeta_{\rm{CR}} = 10^{-18}$ s$^{-1}$ and between 1.5 and 1.8 au for $\zeta_{\rm{CR}} = 10^{-17}$ s$^{-1}$.
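The square-root scaling between the \hcop abundance and the ionization rate used in this section follows from a simple steady-state argument, sketched below under the assumptions that every cosmic-ray ionization of H$_2$ ultimately yields an \hcop ion and that dissociative recombination with electrons ($n_{\rm e} \approx n(\mathrm{HCO^+})$) dominates the destruction; the recombination coefficient is an assumed, order-of-magnitude value.

\begin{verbatim}
import numpy as np

def n_hcop(zeta, n_H2, k_rec=3e-7):
    # steady state: zeta * n(H2) = k_rec * n_e * n(HCO+), with n_e ~ n(HCO+)
    return np.sqrt(zeta * n_H2 / k_rec)

ratio = n_hcop(1e-17, 1e8) / n_hcop(1e-18, 1e8)
print(f"abundance ratio for zeta = 1e-17 vs 1e-18 s^-1: {ratio:.2f}")
# -> sqrt(10) ~ 3.2; the flux contrast is smaller once the line saturates
\end{verbatim}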
No other molecular line observations are currently available to locate the water snowline in L1527: H$_2^{18}$O emission has not been detected \citep{Harsono2020}, and while weak methanol emission has been observed, the sensitivity and resolution of those data were insufficient to determine its spatial origin \citep{Sakai2014b}. \citet{Aso2017} presented a model based on fitting sub-millimeter continuum visibilities that is warmer than the model derived by \citet{Tobin2013} and used in this work, placing the snowline at $\sim$8 au. This is at least twice as far out as predicted here based on the \hcop models, but the temperature profile was kept fixed in the fitting procedure of \citet{Aso2017}. \citet{vantHoff2018b} inferred a temperature profile based on optically thick $^{13}$CO and C$^{18}$O emission, which has a temperature of $\sim$35 K at 40 au. The temperature in the inner $\sim$20 au depends strongly on the chosen power-law coefficient and does not provide a strong constraint on the snowline location.

Providing stronger constraints on the snowline location using \hcop emission will require significantly deeper observations, as illustrated in Fig.~\ref{fig:RequiredSensitivity}. As higher velocities trace emission originating out to smaller radii, the total flux decreases at higher velocities, and hence higher sensitivity is required to distinguish two models. Ideally, one would want to compare the flux in a certain velocity channel with the flux of models with the snowline inside and outside the maximum radius probed by that velocity. However, it is immediately clear from Fig.~\ref{fig:RequiredSensitivity} that this is not possible for L1527 with current facilities, as flux differences at velocity offsets $\gtrsim$4.0 km s$^{-1}$ (corresponding to a $\sim$20 au radius) become too small to be observed in 20 hours on source with ALMA. Nonetheless, deeper integrations will allow for better constraints in two ways. First, a higher sensitivity will allow different models to be compared over more velocity channels, and in particular, in channels that only trace disk emission. This will remove any influence from the envelope. For example, 10 hours on source with ALMA will result in 5--10 disk-only channels at 0.1 km s$^{-1}$ (or 2--3 channels at 0.5 km s$^{-1}$) where models with snowlines at 1.8, 3.4 and 4.1 au can be distinguished, as compared to currently one 0.5 km s$^{-1}$ channel containing emission from both disk and envelope. Second, a higher sensitivity will make it possible to distinguish between models with smaller differences in snowline radius. With 10 hours on source, the snowline can be constrained to within a few tenths of an au, although there will be a degeneracy with the cosmic ray ionization rate.

\begin{figure} \centering \includegraphics[trim={0.2cm 16.5cm 7.7cm 0.2cm},clip]{RequiredSensitivity.pdf} \caption{Difference in \hcop $J=4-3$ flux between models with different snowline locations compared to the fiducial model with the snowline at 3.4 au at velocity offsets that trace only the disk. The top horizontal axis shows the maximum radius probed at a certain velocity for an edge-on Keplerian disk around a 0.4 $M_\odot$ star as used for L1527. The dotted and dashed lines mark the 3$\sigma$ limit that can be reached with ALMA in 10 hours on source at a spectral resolution of 0.1 and 0.5 km s$^{-1}$, respectively. The solid line is the 3$\sigma$ limit at 0.5 km s$^{-1}$ for 20 hours on source. The 3$\sigma$ sensitivity of the current \hcop observations is 40 mJy at 0.5 km s$^{-1}$.} \label{fig:RequiredSensitivity} \end{figure}
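The mapping between velocity offset and maximum emitting radius used in Fig.~\ref{fig:RequiredSensitivity} follows from Keplerian rotation seen edge-on, $r_{\rm max} = GM_*/v^2$. A minimal sketch, using the 0.4 $M_\odot$ stellar mass adopted for L1527:

\begin{verbatim}
G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13   # cgs

def r_max_au(v_kms, M_star=0.4 * M_sun):
    # largest radius contributing at line-of-sight velocity offset v
    v = v_kms * 1e5
    return G * M_star / v**2 / AU

for v in (2.4, 3.0, 4.0):
    print(f"|v| = {v:.1f} km/s -> r_max ~ {r_max_au(v):.0f} au")
# |v| = 4.0 km/s gives ~22 au, matching the ~20 au quoted above
\end{verbatim}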
Higher \textit{J} transitions will generally trace warmer and denser material, but in turn, the higher dust opacity at higher frequencies may prevent these transitions from being observed from the inner disk. For the dust opacities adopted here, the continuum $\tau=1$ surface shifts outward by only $\sim$1 au for the $J=8-7$ transition (713.342090 GHz) compared to the $J=4-3$ transition (356.734288 GHz) (Fig.~\ref{fig:VelocityField}), suggesting that the dust opacity is not a strong limiting factor in choosing an \hcop transition. For a proper treatment of higher $J$ transitions, UV heating of the gas has to be taken into account, as this emission may arise from a thin layer in which the gas and dust temperatures are decoupled \citep[e.g.,][]{Visser2012}. Such a modeling approach was adopted by \citet{Leemker2021}, while we here adopt the dust temperature structure for L1527 and assume the gas and dust temperatures are equal, which is appropriate for the $J=4-3$ transition as it emits from regions where the temperatures are coupled \citep[e.g.,][]{Mathews2013}. That being said, in our model the integrated flux at high velocities ($\gtrsim$3 \kms) increases up to a factor $\sim$2 or $\sim$3 for the $J=5-4$ (445.902996 GHz) and $J=7-6$ (624.208673 GHz) transitions, respectively, compared to the $J=4-3$ flux in the fiducial model (not shown). The flux of the $J=8-7$ transition is similar to the $J=7-6$ flux at velocities originating solely in the disk. For the $J=7-6$ transition, the curves in Fig.~\ref{fig:RequiredSensitivity} shift to the right by $\sim$1 \kms, suggesting that it is easier to distinguish between different snowline locations. However, the decrease in atmospheric transmission results in significantly lower sensitivities that make high sensitivity observations very expensive: in 20 hours on source at 0.5 \kms resolution, an rms of 224 mJy, 25 mJy and 20 mJy is reached for $J=5-4$, $J=7-6$, and $J=8-7$, respectively. These sensitivities would just be enough to distinguish between a snowline at 1.8, 3.4 or 4.1 au at velocity offsets $<$ 3 \kms with the $J=7-6$ and $J=8-7$ transitions. As even for the $J=1-0$ (89.188523 GHz) transition the snowline coincides with the dust $\tau=1$ surface, the \hcop $J=4-3$ transition is best suited to constrain the snowline location.

With such long integration times required to derive stronger constraints on the snowline location from \hcop emission, it is worth investigating whether the snowline can be located directly with water observations. As shown in Fig.~\ref{fig:VelocityField}, the snowline is expected to be hidden behind optically thick dust for frequencies above $\sim$90 GHz, so a direct detection of the snowline would not be possible for L1527. However, locating the snowline directly with water observations may still turn out to be a viable route for sources where the water snowline extends beyond the radius at which the dust becomes optically thick.

\subsection{Cosmic ray ionization rate}\label{sec:CRrate}

Another important result concerns the cosmic ray ionization rate in the disk of L1527. The cosmic ray ionization rate is crucial for both the physical and chemical evolution of the disk. From a physical perspective, ionization plays an important role in angular momentum transport through the magneto-rotational instability (MRI; e.g., \citealt{Balbus1991}).
From a chemical point of view, cosmic rays are the drivers of chemistry in the disk midplane, where other ionizing agents cannot penetrate \citep[e.g.,][]{Eistrup2016,Eistrup2018}. For the physical structure of L1527 adopted here, which is able to reproduce multi-wavelength continuum emission \citep{Tobin2013} as well as CO isotopologue emission \citep{vantHoff2018b}, a canonical CR ionization rate of $10^{-17}$ s$^{-1}$ overproduces the \hcop emission, which originates predominantly from radii $\gtrsim$40 au (Fig.~\ref{fig:Spectra_fiducial-CR}). The \htcop emission, which originates predominantly from the inner envelope, does require a CR ionization rate of $10^{-17}$ s$^{-1}$ (Fig.~\ref{fig:Spectra_fiducial-CR}). In order to reproduce the \hcop observations with a CR ionization rate of $10^{-17}$ s$^{-1}$, the disk temperature needs to be lowered such that the snowline is located inside of 1.8 au (instead of 3.4 au; Fig.~\ref{fig:Spectra_Tdisk}). A temperature structure obtained by multiplying our fiducial temperature with a constant factor of 0.6, resulting in a snowline at 1.5 au, is too cold to explain the $^{13}$CO and C$^{18}$O $J=2-1$ emission \citep{vantHoff2018b}, but the CO isotopologue observations are not sensitive enough to confidently say whether the temperature structure associated with a 1.8 au snowline is too cold as well. We have assumed that the temperature changes globally by a constant factor, and this analysis cannot rule out that a model with a slightly flatter temperature profile (i.e., colder in the inner few au) would be able to explain the molecular line observations with a canonical CR ionization rate. Higher sensitivity observations of CO isotopologues or other temperature tracers are required to better constrain the detailed temperature structure.

A snowline location different from 3.4 au could also be obtained by, for example, a different luminosity or a different disk mass. Measurements of the bolometric luminosity based on the spectral energy distribution (SED) range between 1.6 and 2.0 $L_\odot$ \citep{Tobin2008,Green2013,Karska2018}. This is likely an underestimation of the true luminosity, as edge-on sources embedded in an envelope can have internal luminosities up to $\sim$2 times higher than the bolometric luminosity \citep{Whitney2003}. For a 1 $L_\odot$ protostar, \citet{Tobin2008} require an accretion luminosity of 1.6 $L_\odot$ to fit the SED, resulting in a true bolometric luminosity of $\sim$2.6 $L_\odot$. The model used here has a total luminosity of 2.75 $L_\odot$ \citep{Tobin2013}, and assuming that the snowline radius scales as the square root of the luminosity, a luminosity of 0.8 $L_\odot$ would be required for a snowline radius of 1.8 au. This is a factor of two smaller than derived from the SED. A lower disk mass could also shift the snowline inward, but for an accretion rate of $3\times10^{-7} M_\odot$ yr$^{-1}$ (corresponding to an accretion luminosity of 1.6 $L_\odot$), models by \citet{Harsono2015} show less than 1 au difference between disk masses of 0.05 and 0.1 $M_\odot$. The disk mass of the model used here is 0.0075 $M_\odot$. Modeling of high resolution ALMA data, for example, from the FAUST or eDisk large programs, may provide additional constraints on the disk structure.
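As a quick check on the luminosity argument above, the required luminosity for a given snowline radius follows directly from $r_{\rm snow} \propto \sqrt{L}$:

\begin{verbatim}
L_model, r_model = 2.75, 3.4   # L_sun and au for the fiducial model

def L_required(r_target):
    # invert r_snow ~ sqrt(L): L = L_model * (r_target / r_model)^2
    return L_model * (r_target / r_model) ** 2

for r in (1.8, 1.5):
    print(f"snowline at {r} au -> L ~ {L_required(r):.1f} L_sun")
# 1.8 au requires ~0.8 L_sun, a factor of two below the SED estimates
\end{verbatim}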
Chemically, a canonical CR ionization rate may be reconciled with the observations if there is a higher destruction rate of \hcop. In our model, \hcop can be destroyed by H$_2$O and electrons, where the electrons are provided by ionization of H$_2$ by cosmic rays. Since grains are likely negatively charged \citep{Umebayashi1980}, ions may also recombine on dust grains. The recombination rate for this process, $R_{\mathrm{grain}}$, is given by \begin{equation} R_{\mathrm{grain}} = a_g n_\mathrm{H} n(\mathrm{HCO}^+), \end{equation} where $a_g$ is the recombination rate coefficient ($a_g \approx 10^{-17}$ cm$^{3}$ s$^{-1}$; \citealt{Umebayashi1980}), and $n_\mathrm{H}$ and $n(\mathrm{HCO}^+)$ are the hydrogen and \hcop number densities. The recombination rate in the gas phase, $R_{\mathrm{gas}}$, is given by \begin{equation} R_{\mathrm{gas}} = k n_\mathrm{e} n(\mathrm{HCO}^+), \end{equation} where $n_{\mathrm{e}}$ is the electron density. The reaction rate coefficient, $k$, has a temperature dependence, and is $4-8 \times 10^{-7}$ cm$^{3}$ s$^{-1}$ for temperatures between 150 and 50 K (UMIST database; \citealt{McElroy2013}). This means that the grain recombination rate becomes larger than the gas-phase recombination rate for electron abundances, $n_\mathrm{e}/n_\mathrm{H}$, $\lesssim$10$^{-11}$. Since the electron abundance is approximately equal to the \hcop abundance, this condition is only met in the disk midplane inside $\sim$16 au for the fiducial model (Fig.~\ref{fig:Chemistry_fiducial}) and only inside $\sim$5 au for the model with a CR ionization rate of $10^{-17}$ s$^{-1}$ (Fig.~\ref{fig:Chemistry_CR}). Destruction of \hcop via electron recombination on grains is thus unlikely to affect the predicted \hcop abundance.

While we cannot fully rule out a canonical CR ionization rate, the different ionization rates derived from \hcop and \htcop are not necessarily in conflict with each other. The lower $J$ transition and lower velocities probed with \htcop make the \htcop observations more sensitive to the inner envelope than the \hcop observations. This would then suggest that the CR ionization rate is lower in the disk compared to the envelope, which could simply be the result of stronger attenuation of external cosmic rays due to the higher density of the disk. The cosmic ray ionization rate is expected to decrease exponentially with an attenuation column of 96 g cm$^{-2}$ \citep{Umebayashi1981,Umebayashi2009} or even higher \citep{Padovani2018}. However, a column larger than 96 g cm$^{-2}$ is only reached in the inner 0.5 au in our L1527 model. Another explanation for a low cosmic ray ionization rate in the disk may be the exclusion of cosmic rays by stellar winds and/or magnetic fields, as proposed by \citet{Cleeves2015} for the protoplanetary disk around TW Hya. The same mechanism could explain the gradient in cosmic ray ionization rate derived for the IM Lup disk, where the steep increase in CR ionization rate in the outer disk may indicate the boundary of the ``T Tauriosphere'', that is, a stellar-wind-induced boundary analogous to the Sun's heliosphere \citep{Seifert2021}. While models show that cosmic rays can be produced by jet shocks and by accretion shocks at protostellar surfaces \citep{Padovani2015,Padovani2016}, the transport of cosmic rays in protostellar disks is very complicated (as shown for external CRs by \citealt{Padovani2018}). Models by \citet{Gaches2018} for the simpler case of protostellar cores show five orders of magnitude difference in CR ionization rate between the two limiting cases of transport of internally created CRs through the core.
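The critical electron abundance quoted above follows directly from equating the two rates, $a_g n_\mathrm{H} = k n_\mathrm{e}$, i.e., $n_\mathrm{e}/n_\mathrm{H} = a_g/k$; a one-line check with the coefficients given above:

\begin{verbatim}
a_g = 1e-17                 # grain recombination coefficient [cm^3 s^-1]
for k in (4e-7, 8e-7):      # gas-phase coefficient at ~150 K and ~50 K
    print(f"k = {k:.0e} cm^3/s -> critical n_e/n_H ~ {a_g / k:.1e}")
# -> ~1e-11 to 2.5e-11; grains only dominate for electron abundances
#    below ~1e-11, as stated in the text
\end{verbatim}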
In our chemical network, all ionization is provided by cosmic rays, but UV and X-ray ionization can play a role as well, in particular in higher layers in the disk, with X-rays penetrating deeper than UV radiation (see e.g., \citealt{Cleeves2014,Notsu2021,Seifert2021}). However, it is not clear at what point during stellar evolution the dynamo turns on and X-rays are emitted, and no X-ray emission has been detected toward L1527 with \textit{Chandra} \citep{Gudel2007}. Since the observations constrain the \hcop abundance, a significant contribution of UV and/or X-rays to the \hcop chemistry would mean that the cosmic ray ionization rate is even lower than $10^{-18}$ s$^{-1}$. Other observational constraints on the CR ionization rate in the L1527 disk do not currently exist. \citet{Favre2017} used \textit{Herschel} observations of the ratio between \hcop $J=6-5$ and N$_2$H$^+$ $J=6-5$ to constrain the ionization rate in Class 0 protostars, but the upper limit resulting from the non-detection of N$_2$H$^+$ toward L1527 only constrains the CR ionization rate to be smaller than $10^{-14}$ s$^{-1}$. The signal-to-noise ratio of the $J=6-5$ observations is too low to detect emission at velocity offsets $\gtrsim$2 \kms, so they do not help in constraining the \hcop distribution or disk temperature structure.

If confirmed, a low CR ionization rate in a young disk may have profound consequences, as high ionization levels are crucial for disk evolution. For example, for angular momentum transport through MRI, the gas needs to be coupled to the magnetic field \citep{Gammie1996}, and hence insufficient ionization may suppress MRI and create a low-turbulence ``dead zone'', favorable for planetesimal formation \citep[e.g.,][]{Gressel2012}. From a chemical perspective, the CO depletion of up to two orders of magnitude observed in protoplanetary disks can only be reproduced by models with CR ionization rates on the order of $10^{-17}$ s$^{-1}$ \citep{Bosman2018,Schwarz2019}. Currently the CO snowline is located outside the L1527 disk \citep{vantHoff2018b}, but unless the CR ionization rate increases at later stages, a CR ionization rate of $10^{-18}$ s$^{-1}$ would suggest that no chemical processing of CO will occur in the L1527 disk once the disk has cooled enough for the CO snowline to shift inward.

\section{Conclusions} \label{sec:Conclusions}

We have presented $\sim0\farcs5$ ($\sim$70 au) resolution ALMA observations of \hcop $J=4-3$ and \htcop $J=3-2$ toward the embedded disk L1527. In order to constrain, for the first time, the water snowline location in a young disk, we modeled the \hcop abundance and emission using a physical model specific to L1527 \citep{Tobin2013} and a small chemical network (based on \citealt{Leemker2021}). Our main results are summarized below. \begin{itemize} \item The observed \hcop emission traces the disk down to a radius of $\sim$40 au. The emission can be reproduced with the L1527-specific physical structure that has the water snowline at 3.4 au, given that the cosmic ray ionization rate is lowered to $10^{-18}$ s$^{-1}$. \item Even though the observations are not sensitive to the expected \hcop abundance change across the midplane snowline, the change across the radial snow surface and the global temperature structure allow us to constrain the snowline location to between 1.8 and 4.1 au by multiplying the fiducial temperature structure by a constant factor.
The snowline can be inward of 1.8 au if the CR ionization rate is $10^{-17}$ s$^{-1}$, but a previous analysis showed that a temperature structure with the snowline at 1.5 au is too cold to reproduce the $^{13}$CO and C$^{18}$O observations. \item The \hcop abundance structure in the disk predicted by the small chemical network is very robust with respect to the initial H$_2$O and CO abundances, and only depends significantly on the cosmic ray ionization rate. \item The observed \htcop emission extends out to lower velocity offsets than the \hcop emission, indicating that the emission predominantly originates in the inner envelope. For the adopted physical structure, a canonical CR ionization rate of $10^{-17}$ s$^{-1}$ is required to reproduce the \htcop emission. Together, the \hcop and \htcop results suggest that the CR ionization rate has a canonical value of $10^{-17}$ s$^{-1}$ in the inner envelope and may be attenuated to $\sim10^{-18}$ s$^{-1}$ in the disk. \end{itemize}

These results demonstrate the use of \hcop as a snowline tracer in embedded disks. However, as long integration times with ALMA are required to detect emission at high velocities to eliminate the envelope contribution and to constrain the snowline to within 0.5 au, the direct detection of the snowline through observations of water isotopologues may still prove to be a viable strategy. Deep water observations of a range of different sources are required to constrain when water observations are viable and when we have to resort to indirect tracing with \hcop. Observations of water ice with the James Webb Space Telescope may provide constraints on the (vertical) snowline as well. In sources with a direct measurement of the snowline location, \hcop observations will make it possible to constrain the cosmic ray ionization rate.

\acknowledgments We would like to thank the referee, Ewine van Dishoeck, Martijn van Gelder, and Naomi Hirano for feedback on the manuscript. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2012.1.00193.S and ADS/JAO.ALMA\#2012.1.00346.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. M.L.R.H. acknowledges support from the Michigan Society of Fellows. M.L. acknowledges support from the Dutch Research Council (NWO) grant 618.000.001. J.J.T. acknowledges funding from NSF grant AST-1814762. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. J.K.J. acknowledges support from the Independent Research Fund Denmark (grant No. 0135-00123B). E.A.B. acknowledges support from NSF AAG Grant \#1907653. Astrochemistry in Leiden is supported by the Netherlands Research School for Astronomy (NOVA).
\section{Introduction}

Knowing the relative importance of gravitational, rotational, turbulent, and magnetic energies is essential to understand the process of star formation, as all of these effects shape structures and affect disk formation in protostellar sources \citep{McKee07, Li14a}. Theoretical studies show that if the magnetic field and rotational axis of a dense core are aligned, magnetic braking is very efficient during the collapse under the ideal magnetohydrodynamic (MHD) condition \citep{Galli06, Allen03, Mellon08}. As a consequence, disk formation and growth are suppressed, and Keplerian disks with sizes larger than 10 au are unlikely to form around protostars. When other mechanisms are incorporated, such as non-ideal MHD effects, misaligned magnetic field and rotational axes in dense cores, or turbulence, the efficiency of magnetic braking is reduced, and a large Keplerian disk with a size of tens of au can form around a protostar in numerical simulations \citep{Dapp12,Joos12,Joos13,Santos12,Seifried13,Li13,Krumholz13,Tsukamoto15,Tsukamoto17,Zhao16,Zhao18,Masson16,Matsumoto17,Wurster19a,Wurster19b}. Keplerian disks with sizes of tens of au have been observed around several young protostars \citep{Murillo13,Lee14,Lee18,Sakai14,Yen17}. However, it remains unclear which mechanisms play the more important roles in regulating disk formation and growth in protostellar sources.

B335 is an isolated Bok globule with an embedded Class 0 protostar at a distance of 100 pc \citep{Keene80, Keene83, Stutz08, Olofsson09}. B335 is slowly rotating \citep{Saito99,Yen11,Kurono13}, and is associated with a 0.1 pc bipolar molecular outflow as well as several Herbig--Haro objects along the east--west direction \citep{Hir88,Gal07}. Infalling motion has been observed on scales from 3000 au to 10 au in B335 \citep{Zhou93, Zhou95, Choi95, Evans05, Evans15, Saito99, Yen10, Yen11, Yen15, Kurono13, Imai19, Bjerkeli19}. On the other hand, the rotational motion in the protostellar envelope on scales from 1000 au to 10 au in B335 has been found to be slow \citep{Yen10, Yen15, Imai19, Bjerkeli19}, and no clear sign of Keplerian rotation is observed even on a 10 au scale \citep{Bjerkeli19}. The James Clerk Maxwell Telescope (JCMT) and Atacama Large Millimeter/submillimeter Array (ALMA) polarimetric observations show that the magnetic field is along the east--west direction, and it is ordered and highly pinched on a 1000 au scale \citep{Maury18,Yen19}. These results suggest that the magnetic field likely plays an important role in the dynamics of B335. On the other hand, the presence of the jets and outflows in B335 could hint at the existence of a circumstellar disk \citep[e.g.,][]{Blandford82}. In addition, a compact continuum component with a size of $\sim$6 au oriented perpendicular to the outflow direction, as well as velocity gradients due to rotational motion on a similar scale, have been observed with ALMA \citep{Imai19,Bjerkeli19}. Thus, a small disk with a size of a few au has likely formed in B335. Therefore, B335 is an excellent target to study disk formation under the influence of a dynamically important magnetic field. To observationally investigate the effects of the magnetic field on disk formation in B335, we conducted ALMA polarimetric observations at an angular resolution of 0\farcs2 (20 au) to probe the magnetic field structures on a scale close to the disk forming region.
In the present paper, we describe the details of our observations in Section \ref{observations}, and introduce our observational results in Section \ref{results}. In addition, we compare our polarization data at 0.87 mm with those at 1.3 mm retrieved from the ALMA archive. In Section \ref{discussion}, we present the results from our non-ideal MHD simulations for B335, and discuss the observed magnetic field structures in comparison with the simulations. Lastly, in Section \ref{summary} we summarize the possible effects of turbulence as well as of the misalignment between the magnetic field and rotational axis on the magnetic field structures in B335.

\section{Observations}\label{observations}

The polarimetric observations with ALMA at 0.87 mm toward B335 were conducted from 2016 to 2018, consisting of 13 successful executions (project code: 2015.1.01018.S). In the observations, 40 to 47 antennas were used in configurations with baseline lengths ranging from 15 m to 1400 m. The pointing center was $\alpha({\rm J2000}) = 19^{\rm h}37^{\rm m}00.\!\!^{\rm s}89$, $\delta({\rm J2000}) = +7\arcdeg34\arcmin09\farcs6$. The on-source integration time was 7.4 hours. The observations were conducted with the full polarization mode at the frequency ranges of 335.5--339.5 GHz and 347.5--351.5 GHz with a total bandwidth of 8 GHz. In these observations, J1751$-$0939 was observed as the bandpass calibrator, J1938$+$0448 or J1935$+$2021 as the gain calibrators, and J1924$-$2914 or J2000$-$1748 as the polarization calibrators. The flux calibration was performed with observations of quasars or the asteroid Pallas. The data were manually calibrated by the EA ARC node using the Common Astronomy Software Applications (CASA) package, version 5.1.1 \citep{McMullin07}. We additionally performed self-calibration of the phase using the Stokes {\it I} data. The calibrated visibility data were then Fourier-transformed with Briggs weighting with a robust parameter of +0.5 to generate Stokes {\it IQU} images, and the images were cleaned using the CASA task {\it tclean}. The achieved synthesized beam is $0\farcs19 \times 0\farcs17$. The noise level in the Stokes {\it I} image is 40 $\mu$Jy beam$^{-1}$, and it is 9 $\mu$Jy beam$^{-1}$ in the Stokes {\it Q} and {\it U} images. When we generated the polarized intensity map, we debiased the polarized intensity ($I_{\rm p}$) as $I_{\rm p} = \sqrt{Q^2 + U^2 - {\sigma_{Q,U}}^2}$, where $\sigma_{Q,U}$ is the noise level in Stokes {\it Q} and {\it U} \citep{Wardle74,Simmons85}. To extract polarization detections, we first binned the Stokes {\it IQU} and $I_{\rm p}$ maps to a pixel size of 0\farcs1, which is approximately half of the beam size, and computed polarization orientations and fractions. Thus, the minimal separation between two polarization detections is 0\farcs1. The Stokes {\it I} and $I_{\rm p}$ maps with their original pixel size of 0\farcs02 are presented below. Polarization detections are extracted where the signal-to-noise ratios (S/N) of both Stokes {\it I} and $I_{\rm p}$ are larger than three, and thus the expected uncertainties in the polarization orientations are $\lesssim$9\arcdeg.
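For reference, a minimal Python sketch of the polarization quantities described above: the debiased polarized intensity, the polarization position angle, and the detection criterion. The array handling and the angle convention (measured in the Stokes $Q$/$U$ frame) are illustrative; this is not the actual reduction script.

\begin{verbatim}
import numpy as np

def polarization_products(I, Q, U, sigma_QU):
    I, Q, U = (np.asarray(a, dtype=float) for a in (I, Q, U))
    # debiased polarized intensity (Wardle & Kronberg 1974)
    P = np.sqrt(np.clip(Q**2 + U**2 - sigma_QU**2, 0.0, None))
    PA = 0.5 * np.degrees(np.arctan2(U, Q))  # polarization position angle
    frac = np.divide(P, I, out=np.full_like(P, np.nan), where=(I > 0))
    return P, PA, frac

def detections(I, P, sigma_I, sigma_QU, snr=3.0):
    # keep pixels where both Stokes I and P exceed 3 sigma, as in the text
    return (np.asarray(I) > snr * sigma_I) & (np.asarray(P) > snr * sigma_QU)
\end{verbatim}

Rotating the resulting position angles by 90\arcdeg{} then gives the inferred magnetic field orientations used below.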
\section{Results}\label{results}

\begin{figure*} \centering \includegraphics[width=\textwidth]{polmap.eps} \caption{(a) 0.87 mm continuum map (contours) overlaid on the polarized intensity map (color) obtained with our ALMA observations. The polarized intensity map is in units of mJy Beam$^{-1}$. The contour levels are from 5$\sigma$ in steps of a factor of two, where 1$\sigma$ is 40 $\mu$Jy Beam$^{-1}$. (b) Magnetic field orientations (orange segments) inferred by rotating the polarization orientations by 90$\arcdeg$. The contours show the Stokes {\it I} intensity, identical to panel (a). The length of all the segments is the same. The minimal separation between two segments is half of the beam size. The zero offset refers to the pointing center ($19^{\rm h}37^{\rm m}00.\!\!^{\rm s}89$, $+7\arcdeg34\arcmin09\farcs6$). The filled ellipses at the bottom right corners present the size of the synthesized beam.}\label{polmap} \end{figure*}

In the 0.87 mm continuum map obtained with our ALMA observations, there is a compact continuum source at the center with a peak position of $\alpha({\rm J2000}) = 19^{\rm h}37^{\rm m}00.\!\!^{\rm s}9$, $\delta({\rm J2000}) = +7\arcdeg34\arcmin09\farcs5$ (Fig.~\ref{polmap}a). This peak position is consistent with the one measured at 1.3 mm at a higher angular resolution of 0\farcs03 \citep{Bjerkeli19}. The apparent size of this compact continuum source in our observations is 0\farcs2 (20 au). Thus, it is only marginally resolved, and we could not constrain its orientation. The peak brightness temperature is measured to be 30 K, which is more than a factor of two lower than the dust temperature of 67 K at a radius of 25 au estimated by modeling molecular-line and continuum data and the spectral energy distribution of B335 \citep{Shirley11,Evans15}. Therefore, the continuum emission detected with our observations is most likely optically thin. The compact source is embedded in a flattened structure with an apparent size of $\sim$2$\arcsec$ and a position angle (PA) of its major axis of 17\arcdeg. On a larger scale of $\sim$4$\arcsec$ (400 au), the continuum emission shows a bent structure, and its distribution is along the outflow cavity wall. In addition, the continuum emission on the 400 au scale in B335 lies primarily beyond the edge of the outflow traced by the CO (2--1) emission, as seen in Fig.~5 in \citet{Bjerkeli19}. The overall structures in the continuum emission in our observations are consistent with the previous results obtained with ALMA at 1.3 mm and angular resolutions of 0\farcs03 to 0\farcs8 \citep[e.g.,][]{Yen15,Imai16,Maury18,Bjerkeli19}.

The polarized continuum emission at 0.87 mm is primarily detected along the outflow cavity and around the central compact continuum source. The polarized intensity is highest in the south and north close to the Stokes {\it I} intensity peak but becomes lower at the peak (Fig.~\ref{polmap}a). As discussed in \citet{Maury18}, the stronger polarized continuum emission detected along the outflow cavity wall could be due to more efficient dust grain alignment caused by stronger illumination from the central star in the outflow cavity \citep{Lazarian07}. Similar enhancements in polarized continuum emission along outflow cavity walls have also been observed in other protostellar sources \citep[e.g.,][]{Hull17b,Hull19,Gouellec19}. The orientations of the polarization detections are rotated by 90$\arcdeg$, displaying the orientations of the magnetic field (Fig.~\ref{polmap}b). As discussed in Section \ref{discussion}, the detected polarized emission is unlikely to be induced by dust scattering based on the measured polarization fractions \citep{Kataoka15, Yang16, Yang17}. The magnetic field orientations on a 400 au scale are primarily along the outflow cavity wall. There is a northern patch of detections with orientations more along the north--south direction. These features on a scale larger than 100 au are consistent with those observed at 1.3 mm with ALMA at an angular resolution of 0\farcs8 \citep{Maury18,Yen19}.
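A rough quantification of the optical-depth argument made earlier in this section: for thermalized dust emission in the Rayleigh--Jeans regime, $T_{\rm B} \approx T_{\rm dust}(1 - e^{-\tau})$, so the quoted peak values imply a modest optical depth. The sketch below neglects beam dilution and is only indicative.

\begin{verbatim}
import numpy as np

def tau_from_TB(T_B, T_dust):
    # invert T_B = T_dust * (1 - exp(-tau))
    return -np.log(1.0 - T_B / T_dust)

print(f"tau ~ {tau_from_TB(30.0, 67.0):.1f}")  # ~0.6 for 30 K against 67 K
\end{verbatim}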
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{bmap.eps} \caption{Same as Fig.~\ref{polmap}(b) but zooming in on the central 2$\arcsec$~$\times$~2$\arcsec$ region.}\label{bmap} \end{figure}

With a three times higher resolution, our observations reveal the magnetic field structures significantly closer to the central source than was possible with the previous, lower-resolution observations \citep{Maury18,Yen19}. In the present paper, we focus on the features on a small scale within radii of 0\farcs5--1$\arcsec$ (50--100 au). The magnetic field structures on a larger scale in B335 have been discussed in detail in \citet{Maury18} and \citet{Yen19}. Figure \ref{bmap} presents the magnetic field orientations in the central 2$\arcsec$ (200 au) region. At offsets of $\sim$0\farcs5--1$\arcsec$ in the north, the detected magnetic field orientations are along the wall of the outflow cavity. In the inner northern region at offsets of $\sim$0--0\farcs5, the magnetic field is along the northwest--southeast direction with PA of 130\arcdeg--150\arcdeg. Here, the magnetic field segments appear to connect to the continuum peak, with a few segments also along the north--south direction with PA of 0\arcdeg--10\arcdeg. In the south, the magnetic field is primarily along the north--south direction, and it becomes more tilted from the north--south direction by 20\arcdeg--30\arcdeg at outer radii of $\sim$0\farcs5--1\arcsec. In addition, there are a few magnetic field segments along the northwest--southeast direction detected at $\sim$0\farcs4 southeast from the continuum peak. In the east, at RA offsets of 0\farcs6--1$\arcsec$, there is a patch of segments with their PA gradually changing from 10$\arcdeg$ to 60$\arcdeg$ toward the center.

\begin{figure*} \centering \includegraphics[width=\textwidth]{2pol_3p.eps} \caption{Comparison of (a) the magnetic field orientations and (b) the polarization percentages inferred from the polarization data at 0.87 mm and 1.3 mm. The data were convolved to the same beam size. The dotted lines in panel (a) show differences of $\pm$10\arcdeg. Panel (c) presents the polarization percentages at 0.87 mm (orange) and 1.3 mm (black) as a function of projected radius with respect to the continuum peak. Orange solid and black dashed curves show the running means in radial bins of 0\farcs2.}\label{2pol} \end{figure*}

For comparison, we retrieved the polarization maps at 1.3 mm obtained with ALMA \citep{Maury18,Yen19}. We convolved our polarization maps at 0.87 mm to the same beam size as those at 1.3 mm and extracted polarization detections with the same procedure and criteria as in \citet{Yen19}. Then we compared the polarization orientations and percentages at 0.87 mm and 1.3 mm at the same positions. Figure \ref{2pol} presents the comparison of the inferred magnetic field orientations and measured polarization percentages at 0.87 mm and 1.3 mm. The magnetic field orientations inferred from the polarization data at 0.87 mm and 1.3 mm are consistent within the uncertainties. The mean difference in the orientations at the two wavelengths is 5$\arcdeg$ with a standard deviation of 4$\arcdeg$.
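When comparing position angles between the two bands, the 180\arcdeg{} ambiguity of polarization orientations has to be taken into account. A minimal sketch of such a comparison, with illustrative input angles:

\begin{verbatim}
import numpy as np

def pa_difference(pa1, pa2):
    # smallest angle between two orientations, folded into [0, 90] deg
    d = np.abs(np.asarray(pa1) - np.asarray(pa2)) % 180.0
    return np.minimum(d, 180.0 - d)

d = pa_difference([10.0, 175.0, 60.0], [12.0, 5.0, 50.0])
print(d.mean(), d.std())   # mean offset and scatter, as quoted above
\end{verbatim}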
More than 90\% of the polarization detections show higher polarization percentages at 1.3 mm than at 0.87 mm. On average, the polarization percentages at 1.3 mm are a factor of 1.7 higher. We note that the polarization percentages at both wavelengths increase with increasing radius, and the percentages at 0.87 mm are systematically lower than those at 1.3 mm at all radii (Fig.~\ref{2pol}c).

\section{Discussion}\label{discussion} \subsection{Origins of the polarized continuum emission}

With the ALMA polarization data at 0.87 mm and 1.3 mm, we found higher polarization percentages at the longer wavelength. This is the opposite of the expected trend for scattering-induced polarization by dust grains with sizes up to $\sim$100 $\mu$m, where the scattering cross-section drops steeply with wavelength as $\lambda^{-4}$ \citep{Kataoka15, Yang16}, unless the region is optically thick. In addition, the observed polarization fraction is higher than typically predicted by scattering models. In B335, the continuum emission on the scales which we observe is optically thin (Section \ref{results}). Thus, the polarized continuum emission down to a 20 au scale in B335 most likely originates from magnetically aligned grains. Therefore, the magnetic field structures in B335 can be inferred by rotating the observed polarization orientations by 90\arcdeg.

The polarization percentages as a function of wavelength could depend on the compositions and sizes of dust grains \citep[e.g.,][]{Bethell07,Draine09,Ashton18,Valdivia19}. Among the few dust models that reproduce the spectral energy distributions and extinction curve of the interstellar medium (ISM), the one with spheroidal silicate and spherical carbon grains shows increasing polarization percentages from submillimeter to millimeter wavelengths \citep{Draine09}, similar to the trend observed in B335. Nevertheless, the properties of dust grains and the environments, such as grain sizes and radiation fields, are expected to be different in the ISM and in dense cores \citep{Bethell07,Ashton18}. Consequently, ISM dust models might not be directly applicable to B335. More specifically, \citet{Valdivia19} computed polarized radiative transfer in typical protostellar envelopes with the radiative torque theory \citep{Lazarian07} and dust size distributions similar to the Mathis-Rumpl-Nordsieck (MRN) distribution \citep{Mathis77}. They further studied the dependence of the polarization percentages at 0.8 mm and 1.3 mm on the maximum grain size. They found that the presence of dust grains with sizes larger than 10 $\mu$m in protostellar envelopes on a scale of a few hundred au is needed to explain the observed polarization percentages of a few to more than 10\% in protostellar sources. In addition, in their model calculations with maximum grain sizes of 30--50 $\mu$m in protostellar envelopes, the polarization percentages on a scale of a few hundred au are higher at 1.3 mm than at 0.8 mm. These trends are consistent with our observational results for B335. Thus, our results may hint at the presence of large dust grains with sizes of a few tens of $\mu$m on a 100 au scale in B335.

\subsection{Magnetic field structures}

The previous JCMT and ALMA polarimetric observations \citep{Maury18, Yen19} show that the magnetic field structures in B335 are ordered and along the outflow (east--west) direction on the scale of the natal dense core of 6000 au, and then become highly pinched on the scale of the infalling envelope of 1000 au.
In addition, close to the center, the magnetic field orientations are almost along the north--south direction, which is the direction of the disk midplane. These observed magnetic field structures can be explained with non-ideal MHD simulations of a collapsing dense core with a weak magnetic field with a mass-to-flux ratio of $\sim$6--10 aligned with the rotational axis \citep{Maury18, Yen19}. With our observations at higher angular resolution, the magnetic field along the north--south direction is detected within a 100 au scale north and south of the center. In addition to these field lines along the direction of the disk midplane, there are also magnetic field lines tilted with respect to the midplane in the 100 au region around the central compact source (Fig.~\ref{bmap}). Our observations also show that the orientation of the surrounding flattened envelope on a 100 au scale is not aligned with the disk midplane but is tilted eastwards by $\sim$20$\arcdeg$ (Fig.~\ref{bmap}). Such more complicated magnetic field structures around the central disk and the different orientations of the flattened envelope and the disk are not expected in non-turbulent simulations of a collapsing dense core with aligned magnetic field and rotational axis \citep[e.g.,][]{Li14b, Masson16, Tsukamoto17, Zhao18, Machida19}.

\subsubsection{Simulation with the aligned magnetic field and rotational axis}

We have compared the magnetic field structures detected with our ALMA observations with those in the non-ideal MHD simulation of a collapsing dense core with the aligned magnetic field and rotational axis. The synthetic {\it IQU} maps of the simulation results were obtained from \citet{Yen19}. Among the three non-ideal MHD effects, the simulation only includes ambipolar diffusion, which is the most dominant non-ideal MHD effect in protostellar envelopes on a scale of hundreds of au \citep{Zhao18}. In this simulation, the dense core has the same mass and angular velocity as the observational estimates of B335, and an initial mass-to-flux ratio of 9.6. A cosmic ray ionization rate of $5 \times 10^{-17}$ s$^{-1}$ was adopted. The simulation was stopped when the central stellar mass reached 0.1 $M_\sun$, at which point the disk size was $<$10 au. This stellar mass and disk size are comparable to the observational estimates of B335 \citep{Evans15, Yen15, Bjerkeli19, Imai19}. The synthetic {\it IQU} maps were generated using the radiative transfer code, Simulation Package for Astronomical Radiative Xfer (SPARX; \url{https://sparx.tiara.sinica.edu.tw/}), on the assumption of a constant polarization efficiency, meaning that the polarized intensity is simply proportional to the column density. The details of the simulation are described in \citet{Yen19}. This simulation can explain the observed magnetic field structures on scales of a few hundred au to 6000 au in B335. We then synthetically observed the Stokes {\it IQU} maps output by SPARX using the CASA simulator. The resulting simulated maps are presented in Fig.~\ref{simob} (a)--(c).
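For context, the normalized mass-to-flux ratio quoted for the initial core is defined as $\lambda = (M/\Phi_B)/(M/\Phi_B)_{\rm crit}$ with $(M/\Phi_B)_{\rm crit} = 1/(2\pi\sqrt{G})$. The sketch below evaluates it for a uniform spherical core; the core mass, radius, and field strength are illustrative values chosen to give $\lambda \approx 9.6$, not the actual initial conditions of the simulation.

\begin{verbatim}
import numpy as np

G, M_sun, pc = 6.674e-8, 1.989e33, 3.086e18   # cgs

def mass_to_flux(M, B, R):
    # lambda = 2 pi sqrt(G) M / Phi, with Phi = pi R^2 B
    return 2.0 * np.pi * np.sqrt(G) * M / (np.pi * R**2 * B)

# e.g. a 1 M_sun core of 0.05 pc radius threaded by ~4.5 uG
print(f"lambda ~ {mass_to_flux(1.0 * M_sun, 4.5e-6, 0.05 * pc):.1f}")
\end{verbatim}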
\begin{figure*} \centering \includegraphics[width=\textwidth]{simob.eps} \caption{(a) Density (color; in units of g~cm$^{-3}$) and magnetic field lines (white lines) in our non-ideal MHD simulation of a collapsing dense core with the aligned magnetic field and rotational axis. The outflow direction is along the vertical direction. (b) Synthetic ALMA observational results of our simulation in (a). The synthetic observations are rotated to have an inclination angle of 80$\arcdeg$ and an outflow direction along the east--west direction, the same as in B335. The contours show the distribution of the Stokes {\it I} intensity, and the segments show the magnetic field orientations inferred by rotating the polarization orientations by 90$\arcdeg$. (c) Comparison of the observed (orange) and model (blue) magnetic field orientations. The blue segments are the same as those in (b). Panels (d)--(f) are the same as (a)--(c) but for our simulation of a collapsing dense core with its magnetic field misaligned with the rotational axis by 15\arcdeg. The outflow direction is also along the vertical direction in (d). In (a) and (d), the magnetic field lines were plotted using iso-magnetic flux contours as an approximation, and the dashed contours near the polar axis denote the regions with negative magnetic fluxes, where the structures of the magnetic field lines cannot be properly evaluated in two-dimensional plots.}\label{simob} \end{figure*}

In the simulation with the aligned magnetic field and rotational axis, the Stokes {\it I} intensity from large to small scales is elongated along the direction of the disk midplane, which is the north--south direction (Fig.~\ref{simob}b). The magnetic field inferred from the synthetic polarization maps is perpendicular to the disk midplane within 0\farcs2 north and south of the intensity peak. It then becomes almost parallel to the disk midplane at outer radii in the north and the south (Fig.~\ref{simob}b). Thus, this simulation can explain the magnetic field orientations parallel to the disk midplane observed in the north and the south on a 100 au scale in B335. In particular, it reproduces most of the observed magnetic field orientations in the south (Fig.~\ref{simob}c). However, the magnetic field near the intensity peak in the synthetic maps is not along the northwest--southeast direction, different from the observations. Thus, the observed magnetic field structures within a 100 au scale cannot be fully explained with the simulation of a collapsing dense core with the aligned magnetic field and rotational axis, although this simulation was able to explain the pinched magnetic field observed on scales from 6000 au down to a few hundred au \citep[Fig.~\ref{jcmt_simob} upper; ][]{Yen19}.

\subsubsection{Simulation with the misaligned magnetic field and rotational axis}

On the contrary, in simulations of a collapsing dense core with the misaligned magnetic field and rotational axis or with turbulence, the magnetic field structures tend to become irregular, different from a simple hourglass shape \citep[e.g.,][]{Li14b, Seifried15, Masson16, Hull17a, Tsukamoto17, Wurster19a}. Thus, we additionally performed a non-ideal MHD simulation of a collapsing dense core with its magnetic field misaligned with its rotational axis by 15\arcdeg. The mass, angular velocity, and initial mass-to-flux ratio of the dense core are the same as those in our simulation with the aligned magnetic field and rotational axis. Due to the slight misalignment, the disk grows in size more easily than in the simulation without the misalignment. Thus, in this simulation with the misalignment, the cosmic-ray ionization rate was increased to $1\times10^{-16}$~s$^{-1}$, which enhances the coupling between the magnetic field and the matter and thereby suppresses the disk size to $<$10 au, matching the observational constraint on the disk size in B335.
The simulation was stopped at a similar evolutionary time, when the central stellar mass reached 0.1 $M_\sun$. Thus, the only differences between the initial conditions of these two simulations are the cosmic-ray ionization rates and the angles between the magnetic field and rotational axis. Finally, we produced synthetic observations using the CASA simulator in the same manner as for the simulation with the aligned magnetic field and rotational axis. The synthetic maps from our simulation with the misaligned magnetic field and rotational axis are presented in Fig.~\ref{simob}(d)--(f).

Because the rotational axis and magnetic field are misaligned, the flattened envelope becomes warped on the small scale of 100 au (Fig.~\ref{simob}d), where the rotational energy also becomes important compared to the gravitational and magnetic energies. As a result, the elongations of the Stokes {\it I} intensity distributions in the synthetic map change from large to small scales (Fig.~\ref{simob}e). The magnetic field is dragged by the accretion flows along the warped envelope, resulting in more complicated magnetic field structures. We note that this simulation with the misalignment has a higher cosmic ray ionization rate. In the simulations, the magnetic field structures are shaped by the gas motions. Increasing the cosmic ray ionization rate does not directly affect the magnetic field structures but only strengthens the coupling between the gas motions and the magnetic field \citep{Yen19}. Thus, the complicated magnetic field structures seen in this simulation are due to the changes in the gas motion caused by the misalignment.

In the synthetic map, the magnetic field is parallel to the disk plane at the intensity peak and is along the northwest--southeast direction within 0\farcs5 around the intensity peak. The magnetic field then becomes more aligned with the north--south direction in the outer regions (Fig.~\ref{simob}e). These features are similar to what is observed on a 100 au scale in B335, although the regions showing the magnetic field along the northwest--southeast direction extend further away from the disk midplane in the observations than in our simulation. Outside the central 100 au region, the magnetic field structures in the synthetic maps of the simulations with and without misalignment are similar (Fig.~\ref{simob}b \& e). Thus, our new simulation with the misalignment can also explain the magnetic field structures in the protostellar envelope on a scale of a few hundred au in B335 detected with the previous ALMA observations at a lower spatial resolution of 70--80 au \citep{Maury18,Yen19}.

\subsubsection{Comparison between the aligned and misaligned cases}

In Figure \ref{diff}, we present the number distributions of the angle differences between the magnetic field orientations in the central 100 au region extracted from the observed and synthetic polarization data. The number of model segments with orientations consistent with the observations within 30$\arcdeg$ increases by 50\% when we include the misalignment in our simulations. Thus, the simulation with the misalignment explains the observed magnetic field orientations better than the simulation without the misalignment. We have also compared the magnetic field structures on the large scale in this simulation with those observed with JCMT \citep{Yen19}. The degree of the misalignment between the magnetic field and rotational axis in the initial core in our simulation is only 15$\arcdeg$ in the three-dimensional space.
Thus, after projection on the plane of the sky, the simulation with the misalignment (Fig.~\ref{jcmt_simob}, lower panel) can reproduce the observed magnetic field orientations on the 6000 au scale in B335 as well as the simulation with the aligned magnetic field and rotational axis (Fig.~\ref{jcmt_simob}, upper panel). Hence, the critical difference between the aligned and the misaligned cases only manifests itself in the inner 100 au region.

\begin{figure} \centering \includegraphics[width=0.45\textwidth]{model_diff.eps} \caption{Number distributions of the angle differences between observed and model magnetic field orientations in the central 100 au region. Red solid and blue dashed histograms show the cases for the simulations with the magnetic field misaligned and aligned with the rotational axis, respectively.}\label{diff} \end{figure}

\begin{figure} \centering \includegraphics[width=0.4\textwidth]{jcmt_simob.eps} \caption{Comparison of the magnetic field orientations observed with JCMT (orange segments) and those from our non-ideal MHD simulations (blue segments) of collapsing dense cores with the magnetic field aligned (upper panel) and misaligned (lower panel) with the rotational axes. The contours show the Stokes {\it I} map of B335 obtained with JCMT. The JCMT polarimetric data were retrieved from \citet{Yen19} and were reduced again with the updated software and procedure. The S/N was improved with the new data reduction, and we obtained more detections than \citet{Yen19}.}\label{jcmt_simob} \end{figure}

We note that the intensity distribution is elongated along the northeast--southwest direction on a 1$\arcsec$ scale and then becomes elongated along the northwest--southeast direction on a 0\farcs5 scale in our synthetic Stokes {\it I} map of the simulation with the misalignment (Fig.~\ref{simob}e). This change in the intensity distribution is not fully consistent with the observations. The observed intensity distribution is elongated along the northeast--southwest direction on a 1$\arcsec$ scale, and then transitions into a roundish morphology on a 0\farcs5 scale. Our simulations were performed with the barotropic approximation, which could underestimate the temperature in the inner region of tens of au compared to radiative MHD simulations \citep{Tomida10}. Consequently, the material could be more concentrated in the midplane in our simulations than in reality. Because the Stokes {\it I} intensity distribution is sensitive to the density distribution, this effect could make the warped structures more evident in our synthetic intensity map compared to the observations. On the contrary, the orientations of the magnetic field lines are more sensitive to the directions of the gas flows. In addition, a constant polarization efficiency was adopted in our radiative transfer calculations, so our synthetic polarization maps primarily trace the magnetic field in the envelope. Thus, there are no magnetic field orientations along the outflow cavity wall detected in our synthetic maps, while such detections are seen in the northeast in the observed maps (Fig.~\ref{bmap}). We also note that the magnetic field structures and density distributions in our simulations (even for the misaligned case) are more or less axisymmetric. Thus, the toroidal components of the magnetic field along the line of sight may be canceled out in our radiative transfer calculations, causing our synthetic polarization maps to show axisymmetric patterns (Figs.~\ref{simob} and \ref{jcmt_simob}).
However, the observations show that the magnetic field orientations are different in the north and the south of the center in B335, which is not seen in our synthetic maps and could suggest the presence of asymmetric structures in the magnetic field or the density distributions along the line of sight. This observed asymmetry is often seen in numerical simulations with turbulence \citep[e.g.,][]{Li14b, Masson16, Matsumoto17, Wurster19a}. Thus, in addition to seeing magnetic field perturbations caused by the misalignment of the magnetic field and the rotation axis, we may also be seeing the effect of gravo-turbulence in B335, which could cause asymmetries that become more evident on small scales as the core collapses. Besides B335, ordered magnetic fields in a pinched hourglass shape have been observed on a 1000 au scale in several protostellar sources \citep{Girart06, Stephens13, Rao14, Cox18, Sadavoy18a, Sadavoy19, Kwon19, Ko19}. These results hint at an important role of the magnetic field in the dynamics of these protostellar sources. On the other hand, there are also protostellar sources showing disordered magnetic field structures on a 1000 au scale, which can be explained better with simulations with turbulence \citep{Hull17a, Hull17b, Cox18, Sadavoy18b, Sadavoy19}. Our observations together with the previous JCMT and ALMA observations show that the magnetic field from large to small scales changes from ordered pinched structures to more complicated and asymmetric structures in B335. A similar trend has also been seen in other protostellar sources, such as IRAS 16293$-$2422 \citep{Rao14,Sadavoy18b}. These results suggest that the relative importance among magnetic field, rotation, and turbulence changes as a function of scale. Furthermore, it seems that the influence of rotation and turbulence, which are passed down from larger scales and enhanced by gravitational collapse, may become more significant in the inner envelopes around protostellar disks. \section{Summary}\label{summary} We present our observational results of the polarized 0.87 mm continuum emission at a 0\farcs2 resolution in B335 obtained with ALMA. We compare our results with those from the ALMA polarimetric observations at 1.3 mm with a 0\farcs8 resolution as well as with the synthetic maps generated from our non-ideal MHD simulations of collapsing dense cores. The main results are summarized below. \begin{enumerate} \item{ The polarization orientations at 0.87 mm and 1.3 mm are consistent within the uncertainties. The polarization percentage is higher at 1.3 mm than at 0.87 mm on scales from 1000 au down to tens of au in B335, suggesting that the polarized emission originates from magnetically aligned dust grains. In addition, the peak brightness temperature of the 0.87 mm continuum emission on a 20 au scale is 30 K, more than a factor of two lower than the expected dust temperature. Thus, the 0.87 mm continuum emission is most likely optically thin down to a 20 au scale in B335, and the observed polarization orientations can be rotated by 90$\arcdeg$ to infer the magnetic field orientations.} \item{Our observations show that the magnetic field structures in B335 change from an ordered pinched morphology on a 1000 au scale to more complicated structures on a 100 au scale.
Within a 100 au scale, in addition to the magnetic field along the equatorial plane to the north and south of the center, there are also magnetic field lines along the northwest--southeast direction that are connected to the central disk-forming region.} \item{We have performed non-ideal MHD simulations of collapsing dense cores with their magnetic field aligned and misaligned with their rotational axes, and generated synthetic polarization maps to compare with the observations. The simulation with the aligned magnetic field and rotational axis can explain the observed magnetic field orientations along the equatorial plane on a 100 au scale, but cannot explain those along the northwest--southeast direction connecting to the central disk-forming region. In contrast, the simulation with the misaligned magnetic field and rotational axis can reproduce both the magnetic field orientations along the equatorial plane and those along the northwest--southeast direction observed on a 100 au scale in B335. The misalignment is 15$\arcdeg$ in our simulation, which is relatively small, and thus, this simulation can also explain the observed magnetic field orientations on a 6000 au scale with JCMT.} \item{Our results suggest that the magnetic field and rotational axis in B335 are likely slightly misaligned. In addition, the different magnetic field orientations observed in the north and south on a scale of tens of au, which are not seen in our simulation with the misaligned magnetic field and rotation axis, could hint at the contribution of gravo-turbulence in B335. Therefore, our observational results suggest that the relative importance among magnetic field, rotation, and turbulence changes as a function of scale in protostellar sources. On the small scale around the disk-forming region in B335, the rotational energy and the influence of turbulence become more significant, and thus, the protostellar envelope is warped. The magnetic field lines could be dragged by the accretion flows along the warped envelope, resulting in more complicated and asymmetric structures. } \end{enumerate} \begin{acknowledgements} We thank Alfonso Trejo-Cruz for his efforts on the manual data calibration. We thank I-Ta Hsieh and Sheng-Yuan Liu for their assistance and advice on our radiative transfer calculation using SPARX. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2015.1.01018.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. We thank all the ALMA staff supporting this work. H.-W.Y. acknowledges support from MOST 108-2112-M-001-003-MY2. P.M.K. acknowledges support from MOST 108-2112-M-001-012 and MOST 107-2119-M-001-023, and from an Academia Sinica Career Development Award. ZYL is supported in part by NASA 80NSSC18K1095 and NSF AST-1716259 and 1815784. S.T. acknowledges a grant from JSPS KAKENHI grant No. JP18K03703 in support of this work. This work was supported by NAOJ ALMA Scientific Research grant No. 2017-04A. \end{acknowledgements}
\section{Running of $\alpha_s$(Q) and Power Corrections} Most experimental measurements of $\alpha_s$ are limited in precision by theoretical errors arising from missing higher-order terms and by uncertainties in the hadronisation corrections, which are normally derived from phenomenological Monte Carlo models. Recently, improved perturbative QCD predictions for the event shape observables C-parameter~\cite{Catani} and the Jet Broadenings~\cite{Dokshetal} (B$_W$, B$_T$) have become available to add to those already produced for Thrust (T) and Heavy Jet Mass (M$_H^2$)~\cite{DW}. Leading non-perturbative (1/Q, where Q = $\sqrt s$) power corrections have also been calculated for these observables. They are based on an effective strong coupling, $\alpha_{0}(\mu_{I})$, at an infra-red matching scale, $\mu_{I}$ (usually set to 2 GeV). This coupling is expected to be approximately universal but must be derived from experiment. Once evaluated, power-corrected distributions can be compared directly with experiment, leaving the Monte Carlo models for detector corrections only. Recently, a 2-loop analysis of these 1/Q power corrections has been performed~\cite{DLMS} for Thrust, rescaling the original prediction by a so-called ``Milan'' factor ($M$), which has been shown to apply equally to the other aforementioned observables~\cite{Milan}. The power corrections basically shift the perturbative predictions for the differential spectra of each observable linearly in $\alpha_{0}$, although additional logarithmic terms are thought to be required for the jet broadenings~\cite{Dokshetal}. In a contribution to this conference, ALEPH~\cite{ALEPH940} fit the power-corrected perturbative predictions in the 3-jet regions of the 1-T, C, M$_H^2$ and B$_W$ distributions to the measurements at 5 values of $\sqrt s$ from 91.2 to 183 GeV. The perturbative predictions are ${\cal O}(\alpha_{s}^2)$ with $\ln$(R) (or R) matched NLL terms. The perturbative renormalisation scale parameter, x$_{\mu}$(=$\mu/\sqrt s$), is set to 1. They simultaneously extract values for $\alpha_{0}(\mu_{I})$ and $\alpha_{s}$(M$_Z$) from each observable. The fits are reasonably satisfactory except for the B$_W$ distributions, where the results are far from those obtained for the other observables with either matching scheme. The poor quality of these fits disfavours the concept of a logarithmically enhanced shift~\cite{Dokshetal}. DELPHI~\cite{DELPHI137} and a JADE group~\cite{JADE646}, who also include data from higher energies, fit instead to the means of event shape distributions simultaneously at all $\sqrt s$ values. In this case, only the ${\cal O}(\alpha_{s}^2)$ terms can be used for the perturbative contribution. OPAL~\cite{OPAL305} presented a similar analysis applied also to the second and third moments of the 1-T and C distributions. In this case, the power corrections are suppressed by ($\mu_{I}$/Q)$^n$, where $n$ = 2 or 3. Using Monte Carlo models, they show that the power corrections for the higher moments are small above Q = M$_Z$ and that a low value of x$_{\mu}$ is required to obtain consistent values of $\alpha_{s}$ from them. They fit simultaneously to the first three moments in the data for each observable at 91.2, 133, 161 and 172 GeV. When correlations are taken into account, the fits are not stable if x$_{\mu}$ = 1, indicating that higher-order perturbative terms are large. Thus, x$_{\mu}$ is allowed to vary as well and is found to be highly correlated with $\alpha_{0}(\mu_{I})$.
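All of these determinations ultimately quote values evolved to the common reference scale M$_Z$. As a purely illustrative aside (this is not the fitting code of any of the experiments cited here), the following Python sketch implements the standard one-loop running of $\alpha_s$, which captures the qualitative Q-dependence tested by these analyses; the actual analyses use 2- or 3-loop evolution with flavour-threshold matching, which we omit for brevity. \begin{verbatim}
import math

def alpha_s_one_loop(Q, alpha_mz=0.118, m_z=91.1876, n_f=5):
    # One-loop running:
    # alpha_s(Q) = alpha_s(M_Z) / (1 + alpha_s(M_Z) * b0 * ln(Q^2 / M_Z^2)),
    # with b0 = (33 - 2 n_f) / (12 pi).
    b0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
    return alpha_mz / (1.0 + alpha_mz * b0 * math.log(Q**2 / m_z**2))

for Q in (30.0, 91.2, 133.0, 161.0, 183.0):  # energies probed at LEP
    print(f"Q = {Q:6.1f} GeV  ->  alpha_s(Q) = {alpha_s_one_loop(Q):.4f}")
\end{verbatim}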
As expected from the smallness of the higher-moment power corrections noted above, the results are insensitive to the power terms in the higher moments, which are therefore set to zero. \noindent \begin{table} \begin{center} \caption{Values of $\alpha_{s}$(M$_Z$) and $\alpha_{0}(\mu_{I})$ as obtained from the fits. $\dagger$ Results from C and (1-T) are consistent; $\ddagger$ values are compatible only at the 30\% level.} \label{alpha_zero} \begin{tabular}{|c|l|l|l|} \hline Expt & $\alpha_s$(M$_Z$) & $\alpha_{0}(\mu_{I})$ & x$_{\mu}$ \\ \hline ALEPH & 0.1168$\pm$0.0048 & 0.451$\pm$0.065$\dagger$ & 1.0 \\ DELPHI & 0.1176$\pm$0.0057 & 0.494$\pm$0.009 (T) & 1.0 \\ & 0.1172$\pm$0.0037 & 0.558$\pm$0.025 (C) & \\ JADE & 0.1188$^{+0.0044}_{-0.0034}$ & 0.325 - 0.616$\ddagger$ & 1.0 \\ OPAL(1)& 0.1143$\pm$0.0088 & 0.322 (T) & 0.038 \\ & 0.1158$\pm$0.0107 & 0.249 (C) & 0.052 \\ OPAL(2)& 0.141 & as above & 1.0 \\ \hline \end{tabular} \end{center} \end{table} Table~\ref{alpha_zero} compares the values of $\alpha_{0}(\mu_{I})$ and $\alpha_{s}$(M$_Z$) obtained by each experiment. The latter values are renormalised to M$_Z$ assuming compatibility with QCD; they are in good agreement except for the OPAL(2) result, where x$_{\mu}$ is kept fixed. Theory systematics dominate and are typically evaluated by varying x$_{\mu}$ from 0.25 to 4.0 and $\mu_{I}$ between 1 and 3 GeV. All the LEP experiments have tested the running of $\alpha_{s}$(Q) using event shape observables. L3~\cite{L3-536} have fitted resummed ${\cal O}(\alpha_{s}^2)$ predictions of T, M$_H^2$, B$_W$ and B$_T$ to measured distributions. In this case, the predictions are corrected to the hadron level by Monte Carlo models. Values of $\alpha_{s}$(Q) are determined at 11 $\sqrt s$ energy points between 30 and 183 GeV using events with isolated photons to extend the energy range below the $Z$. The 3-loop QCD running $\alpha_{s}$ curve gives an excellent fit ($\chi^2$), from which $\alpha_{s}$(M$_Z$) = 0.1216$\pm$0.0017$\pm$0.0058. The most precise and comprehensive test was submitted by a JADE group~\cite{JADE646} as described earlier, incorporating data from other PETRA, TRISTAN, LEP and SLD experiments. The 2-loop power-corrected ${\cal O}(\alpha_{s}^2$) predictions for the mean values of 5 event shape observables as a function of $\sqrt s$ are fitted individually to the data over a CM energy range from 13 to 172 GeV. Fig.~\ref{fig:Bethke} shows the best fits to the heavy jet mass and wide jet broadening observables. As seen by ALEPH, the fits to the jet broadenings have poor $\chi^2$s. The values of $\alpha_{s}$ found are averaged with those from the other three observables, $\langle$(1-T)$\rangle, \langle$B$_T\rangle, \langle$C$\rangle$, to give the results quoted in Table~\ref{alpha_zero}. It was pointed out at the conference that the power correction terms provided for the jet broadenings (where fits are generally poor) are incorrect and need to be revised~\cite{Dokshit}. This may improve the level of consistency found. \begin{figure} \center \epsfig{figure=fig1.ps,height=8.0cm} \vskip 1cm \epsfig{figure=fig2.ps,height=8.0cm} \caption{Energy dependence of $\langle M_H^2 \rangle$ and $\langle B_W \rangle$; the dashed line is the perturbative prediction only.} \label{fig:Bethke} \end{figure} \section{Precise determination of $\alpha_{s}$(M$_Z$) from oriented event shapes} The DELPHI Collaboration reported a new precise determination of $\alpha_{s}$(M$_Z$) from a high-statistics study of 1.4 million re-processed hadronic events at the $Z$~\cite{DELPHI142}.
Eighteen event shape distributions were measured as a function of the polar angle of the thrust axis. The corresponding ${\cal O}(\alpha_{s}^2)$ QCD predictions were corrected to the hadron level using Monte Carlo models and fitted to the data in defined ranges, individually chosen for each observable to avoid regions where these corrections are greater than 40\% or the acceptance is less than 80\%. In addition, the range was adjusted until the value of $\alpha_{s}$ obtained was stable. With the renormalisation scale fixed to x$_{\mu}$ = 1, a large scatter is observed among the 18 fitted values of $\alpha_{s}$. This arises mainly from missing higher-order perturbative QCD contributions (resummed terms are not available for many of the observables used). This procedure was repeated using the experimentally-optimised-scale (EOS) procedure~\cite{SBethke}, in which x$_{\mu}$ is allowed to vary along with $\alpha_{s}$ in a 2-parameter fit to each observable. An impressive reduction in the scatter is achieved, although the values of x$_{\mu}$ required vary considerably, from 0.0033 for T to 6.33 for D$_2^{Geneva}$. Such an improvement was not observed in a similar analysis by the SLD experiment~\cite{Burrows}. Although that analysis was based on only 50,000 events, it would appear that statistics do not account for the discrepancy, since the total errors are largely dominated by theory and hadronisation uncertainties in both experiments. It is likely that the choice of fit ranges is crucial, since x$_{\mu}$ and $\alpha_{s}$ are correlated in the EOS procedure. Three alternative, theoretically motivated schemes to determine values of x$_{\mu}$ for each observable were tried by DELPHI, all of which reduce the scatter observed with fixed x$_{\mu}$ but are not as successful as the EOS procedure. Table~\ref{tab:scales} shows the results using the ECH scheme~\cite{Grunberg}, the PMS scheme~\cite{Stevenson} and the BLM scheme~\cite{Brodsky}, compared with the EOS and fixed-scale procedures. \begin{table} \begin{center} \caption{Weighted values of $\alpha_{s}$(M$_Z$) obtained from 18 event shape distributions in the DELPHI experiment using various choices of renormalisation scales. $\dagger$ 4 fits fail to converge and the correlation with the experimentally optimised procedure is poor.} \label{tab:scales} \begin{tabular}{|c|l|l|l|} \hline Scale choice & $\alpha_s$(M$_Z$) & $\chi^2$/dof & x$_{\mu}$ range \\ \hline Optimised & 0.1164$\pm$0.0025 & 7.3/16 & 0.0033 - 6.33 \\ fixed & 0.1243$\pm$0.0080 & 40/15 & 1.0 \\ ECH & 0.1148$\pm$0.0038 & 18/16 & - \\ PMS & 0.1147$\pm$0.0040 & 21/16 & - \\ BLM & 0.1168$\pm$0.0053 & 24/13$\dagger$ & - \\ \hline \end{tabular} \end{center} \end{table} The DELPHI submission~\cite{DELPHI142} to the conference also includes further studies of selected observables for which resummed NLL terms, to be combined with the ${\cal O}(\alpha_{s}^2)$ predictions, are available. Fits are made to the data in two regions of the shape distributions: (a) the 2-jet region, where pure NLLA should apply, and (b) the 2+3-jet region, using the full theory. The renormalisation scale, x$_{\mu}$, is set to 1. Acceptable fits to the data are obtained, but in general the fits using the EOS procedure without resummed terms are superior over a wider range of the distributions. In conclusion, although apparently successful, the EOS procedure remains a controversial and somewhat unsatisfactory method of producing a precise measurement of $\alpha_{s}$.
A better method may be to restrict the analysis to observables for which resummed and power-corrected predictions are available. A proper treatment of the missing higher orders and of other non-perturbative effects is still highly desirable. \section*{References}
\section{Introduction} Temporal networks have become an important framework to understand the dynamics of complex systems over the last decade~\cite{Holme2012Temporal, Masuda2016Guide, Holme2019Temporal}. By integrating the topological knowledge of a system, described by a graph, with the information about the temporal nature of the interaction between its components, represented by time series, we can precisely track who interacts with whom and when. The interaction dynamics can be captured at several different levels. First, the interaction between each pair of nodes specifies the dynamics of the link. Second, by aggregating the interactions between a node and all of its neighbors, one obtains the dynamics of the node, which shows how the node interacts with others. Lastly, by collecting all the interactions between every pair of nodes, one can characterize the dynamics of the entire system. For instance, in communications systems where people interact by sending messages, the link dynamics corresponds to the message correspondence between a pair of individuals, while the node dynamics corresponds to the inbox of messages sent or received by an individual. Human communication is known to exhibit non-Markovian, inhomogeneous temporal patterns which are commonly referred to as being \textit{bursty}~\cite{Barabasi2005Origin, Karsai2018Bursty}. When each communication event is instantaneous or lasts for a short period so that its duration can be neglected compared to other time scales, one can regard the communication sequence as a realization of a point process. The burstiness of a point process is mainly characterized by a heavy-tailed distribution of time intervals between consecutive events, or inter-event times (IETs), in contrast to Poisson processes, for which the IET distributions are exponential. Interestingly, empirical data suggest that in communications systems, the communication sequences of nodes and of links are both characterized by power-law distributions with a similar scaling exponent~\cite{Karsai2012Correlated, Saramaki2015Seconds}. This cannot be taken for granted for the following reason. As mentioned above, the communication sequence of a node is the superposition of the communication events on all the links between the node and its neighbors. However, a superposition of independent renewal processes does not retain the statistics of the original processes in general. In fact, the IET distribution of the superposed process tends to an exponential distribution in the limiting case where the number of independent source processes is large~\cite{Cox1954Superposition, Hoopen1966Superposition, Lindner2006Superposition}. Therefore, the observation that both node dynamics and link dynamics are bursty suggests the presence of correlations across communication processes on different links. Such link-link correlations can have a significant impact on the dynamical processes taking place in the network~\cite{Miritello2011Dynamical, Kivela2012Multiscale, Backlund2014Effects, Saramaki2015Seconds}, but their origin has yet to be understood. Here, we study the mechanisms behind the burstiness in node and link activity patterns by considering a model in which the nodes are activated randomly in time with non-Poissonian statistics and two nodes may communicate if and only if they are simultaneously activated. In Sec.~\ref{sec:model}, we introduce two variants of the model with different communication rules.
In Sec.~\ref{sec:numerical}, we report results of numerical simulations performed on networks with various topologies. We show that, for both models and for all the networks, the communication patterns are characterized by heavy-tailed IET distributions for both nodes and links. Section~\ref{sec:analysis} is devoted to explaining the origin of the burstiness in node and link activity patterns for each model. We describe the behavior of the model in a system of two nodes by relating it to the statistics of the sum of a random number of random variables. We use the same approach to derive the activity patterns of nodes and links in larger networks. Finally, we conclude our work in Sec.~\ref{sec:conclusion}. \section{Model} \label{sec:model} We consider a network of size $N$ with a given structure, in which each node is activated randomly at discrete times and its activation pattern is modeled by a renewal process with a given inter-activation time (IAT) distribution, denoted by $P(r)$. In order to start the activation process at equilibrium, the first activation time $t_0\geq 0$ of each node is assigned according to the residual time distribution~\cite{Cox1962Renewal} \begin{equation} P_0(t_0)=\frac{1}{\langle r \rangle}\sum_{r = t_0}^\infty P(r), \label{eq:residual_distribution} \end{equation} where $\langle r \rangle$ denotes the average IAT. The node is then activated at times $t_l = t_0 + \sum_{l'=1}^l r_{l'}$ for $l = 1, 2, \cdots$, where each IAT, denoted by $r$, is independently drawn from $P(r)$ \footnote{The dummy variable $l'$ is omitted for the sake of simplicity. The same notation rule applies in the rest of the paper.}. In our work, we adopt a power-law IAT distribution, \begin{equation} P(r)=\frac{r^{-\alpha}}{\zeta(\alpha)}\ \textrm{for}\ r=1,2,\cdots, \label{eq:power_law_IAT_distribution} \end{equation} where $\zeta(\alpha) \equiv \sum_{x = 1}^\infty 1 / x^\alpha$ is the Riemann zeta function. We choose $\alpha > 2$ to make Eq.~\eqref{eq:residual_distribution} converge. \begin{figure}[tb] \centering \includegraphics[width=0.95\columnwidth]{Fig1.pdf} \caption{Schematically illustrated snapshots of communication according to the two models given the same set of activated nodes, which are enclosed by thick solid lines. The red filled circles and red solid lines represent the nodes and links with a communication event, respectively. (a) The polyvalent model assumes that the activated nodes communicate with all the neighbors that are simultaneously activated. (b) In the monovalent model, each activated node communicates with at most one neighbor. } \label{fig:schematic} \end{figure} At each time step $t$ ($0 \leq t \leq T$), pairs of activated nodes communicate with each other. As depicted in Fig.~\ref{fig:schematic}, here we consider two variants of the model. The first variant, which we call the \textit{polyvalent} model, assumes that an activated node communicates with all the activated neighbors. The case where an activation does not lead to communication only occurs when the node does not have any simultaneously activated neighbors. In the second variant, which we refer to as the \textit{monovalent} model, an activated node is randomly paired with one of its activated neighbors to have at most one communication partner at the same time, as is the case for one-to-one phone calls. An activated node cannot communicate with others if none of the neighbors are simultaneously activated, or if all the simultaneously activated neighbors are already paired with other nodes. 
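To make these model definitions concrete, the following Python sketch simulates both variants on an arbitrary graph. It is an illustrative implementation rather than the code used for the results below: the IAT distribution is truncated at a finite $r_{\max}$ for sampling purposes, the first activation is drawn as a plain IAT instead of from the residual-time distribution of Eq.~\eqref{eq:residual_distribution}, and node labels are assumed to be sortable. \begin{verbatim}
import numpy as np
import networkx as nx

def make_iat_sampler(alpha=2.5, r_max=10**5, rng=None):
    # Discrete power-law IATs, P(r) ~ r^-alpha on {1, ..., r_max}
    # (the truncation is a sampling convenience, not part of the model).
    rng = rng or np.random.default_rng()
    pmf = np.arange(1, r_max + 1, dtype=float) ** (-alpha)
    cdf = np.cumsum(pmf / pmf.sum())
    return lambda: int(np.searchsorted(cdf, rng.random())) + 1

def simulate(G, T=10**5, alpha=2.5, monovalent=True, seed=0):
    # Returns {link: [communication times]} for either model variant.
    rng = np.random.default_rng(seed)
    draw_iat = make_iat_sampler(alpha, rng=rng)
    # Crude start: first activation drawn as a plain IAT rather than
    # from the residual-time distribution of Eq. (1).
    next_act = {v: draw_iat() - 1 for v in G}
    comms = {tuple(sorted(e)): [] for e in G.edges}
    for t in range(T):
        active = [v for v in G if next_act[v] == t]
        for v in active:
            next_act[v] = t + draw_iat()
        if monovalent:
            free = set(active)
            rng.shuffle(active)  # random order approximates random matching
            for v in active:
                if v not in free:
                    continue
                partners = [u for u in G[v] if u in free]
                if partners:
                    u = partners[rng.integers(len(partners))]
                    free -= {u, v}
                    comms[tuple(sorted((u, v)))].append(t)
        else:
            act = set(active)  # polyvalent: all co-activated pairs communicate
            for u, v in G.edges:
                if u in act and v in act:
                    comms[tuple(sorted((u, v)))].append(t)
    return comms

# Example: link ICTs of the monovalent model on a random regular graph
G = nx.random_regular_graph(6, 100, seed=1)
comms = simulate(G, T=10**5, alpha=2.5, monovalent=True)
icts = np.concatenate([np.diff(ts) for ts in comms.values() if len(ts) > 1])
\end{verbatim} Node communication sequences are then obtained by merging the per-link event lists over the links incident to each node.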
In both models, the communication events between a pair of adjacent nodes can be associated not only with the link but also with the nodes. In other words, we can define a communication event sequence for each link as well as for each node, the latter being the union of the communication events on all the links attached to the node. Hereafter, we refrain from the wording ``inter-event time'' to avoid confusion and instead use ``inter-communication time'' (ICT) to represent the time interval between consecutive communication events on the sequence affiliated with a node or with a link. Note that the communication sequence of a node does not agree with the activation sequence of the node in general, because an activated node may not communicate with anyone, as shown in Fig.~\ref{fig:schematic}. We denote the ICTs by $\tau$. Whenever necessary, subscripts $i$ and $ij$ will distinguish between node $i$'s properties and link $ij$'s properties; sub- or superscripts $\mathrm{p}$ and $\mathrm{m}$ will indicate variables and functions related to the polyvalent and monovalent models, respectively. In the following sections, we discuss the statistics of node and link ICTs. \section{Numerical Results} \label{sec:numerical} \begin{figure*}[tb] \centering \includegraphics[width=\textwidth]{Fig2.pdf} \caption{The inter-communication time (ICT) distributions $\psi(\tau)$ of nodes and links for the polyvalent model (top panels) and for the monovalent model (bottom panels). The network structures are, from left to right, a complete graph (a,~f), random regular graphs with degree $k = 15$ (b,~g) and with $k = 6$ (c,~h), a scale-free graph with degree distribution $\propto k^{-2.1}$ (d,~i), and Zachary's karate club network (e,~j). The parameters used are $N = 100$ and $T = 10^7$ for the complete and random regular graphs, $N = 1000$ and $T = 10^6$ for the scale-free graph, and $N = 34$ and $T = 10^7$ for Zachary's karate club network. The inter-activation time (IAT) distribution, which follows Eq.~\eqref{eq:power_law_IAT_distribution} with $\alpha = 2.5$, is represented by the dashed line.} \label{fig:ICT_distributions} \end{figure*} We carry out numerical simulations for synthetic networks with different topologies such as complete graphs, random regular graphs, and scale-free graphs, as well as Zachary's karate club network~\cite{Zachary1977Information} as an example of a real-world network. Figure~\ref{fig:ICT_distributions} summarizes the node and link ICT distributions $\psi(\tau)$. Here we set $\alpha = 2.5$. The polyvalent model yields node and link ICTs, both of which are distributed almost indistinguishably from the power-law IAT distribution, for all the network topologies considered. In contrast, the monovalent model results in different communication patterns depending on the network structure. For homogeneous graphs such as complete and random regular graphs, the node ICT distributions are almost identical to those of IATs, while the link ICT distributions show a hump at short time scales that is absent from the power-law IAT distribution. As we make the network sparser by reducing the degree of random regular graphs, the hump becomes smaller and the range of $\tau$ in which the distribution approximately follows a power law becomes wider, implying that the sparseness of networks plays an important role in realizing bursty communication patterns on links. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{Fig3.pdf} \caption{(a) Node ICT distributions grouped by degree $k_i$.
(b) Link ICT distributions grouped by the product of the degrees of the end nodes, $k_i k_j$. Both figure panels are based on the simulation results for the monovalent model on the scale-free graph shown in Fig.~\ref{fig:ICT_distributions}(i). } \label{fig:degree_based_ICT_distributions} \end{figure} For scale-free graphs, both the node and link ICTs differ from the IATs in terms of the distribution. To examine the effect of structural heterogeneity, we group the nodes by degree and consider the node ICT distribution for each group. Figure~\ref{fig:degree_based_ICT_distributions}(a) shows that the deviation between the ICT and IAT distributions is larger for nodes with smaller degrees. This can be understood intuitively as follows: When activated, a node with more neighbors is more likely to find a communication partner, i.e., a neighbor who is simultaneously activated and available. In the extreme case where the degree of a node is large enough so that the node almost always finds a partner whenever activated, the node ICT distribution would coincide with the IAT distribution. On the other hand, the ICT distribution would deviate more from the IAT distribution for nodes with fewer neighbors because of the difficulty of finding a partner. Similarly, we group the links by the product of the degrees of the end nodes of the link, $k_i$ and $k_j$, and measure the distribution of $\tau_{ij}$ conditioned on each value of $k_i k_j$. As shown in Fig.~\ref{fig:degree_based_ICT_distributions}(b), the ICT distributions for links with larger values of $k_i k_j$ show larger deviations from the IAT distribution. Suppose that two adjacent nodes are simultaneously activated. If the degrees of the nodes are larger, then the nodes are likely to have larger numbers of other simultaneously activated neighbors with which they can potentially communicate. Therefore, the probability that the two nodes communicate with each other decreases. In contrast, if the degrees are small, then the two nodes are more likely to communicate with each other. We discuss these intuitions more quantitatively in the following section. Finally, for Zachary's karate club network, the results are similar to those for the homogeneous graphs. The node ICT distribution shows a small deviation from the IAT distribution, which is due to the degree heterogeneity. \section{Analytical results} \label{sec:analysis} In this section, we provide an analytical examination of the behavior of the models. To this end, we start by considering a minimal system that consists of a pair of adjacent nodes, which we call a \textit{dimer}. We show that the number of IATs that compose each ICT is a random variable that follows a power-law distribution. Thus, we can describe an ICT as the sum of a random number of random variables, and we show that the sum is also distributed as a power law. Then, we derive the statistics of the link and node ICTs for the polyvalent model directly from the results for a dimer. Finally, we describe the monovalent communication as a result of random success and failure of the polyvalent communication. This leads to expressions of link and node ICTs for the monovalent model as geometric sums of polyvalent link and node ICTs, respectively. We obtain power-law statistics in this case as well. \subsection{Case of dimers} \label{subsec:dimer} A dimer is a pair of nodes connected only to each other.
Under this setup, the monovalent model is equivalent to the polyvalent model because the nodes have no other nodes to communicate with except for each other. Moreover, the communication sequences of the two nodes are identical to each other as well as to that of the link connecting them. \begin{figure*}[tb] \centering \includegraphics[width=\textwidth]{Fig4.pdf} \caption{ Schematic illustrations of communication sequences between a pair of nodes $i$ and $j$. The red and white rectangles represent activations with and without communication, respectively. The solid vertical lines identify communication events between the pair. In (b) and (c), nodes are activated with the same temporal pattern as in the left panel of (a). (a) The case where the two nodes form a dimer. Variable $n_i$ denotes the number of activations (represented by rectangles) of node $i$ between two communication events (red rectangles). The right panel in (a) shows how the activation pattern of node $j$ can be mapped to an activation-modulated Poisson process where each IAT is associated with an activation probability $\lambda_j$. Variable $\tilde{n}_i$ denotes the number of activations of $i$ in each period segmented by activations of $j$. (b) The polyvalent model. The communication pattern on the link is the same as that of the dimer, while each node also communicates with other neighbors and experiences more frequent communication with smaller ICANs and ICTs, which are denoted by $n^\mathrm{p}_i$ and $\tau^\mathrm{p}_i$, respectively. (c) The monovalent model. Simultaneous activation of adjacent nodes may fail to trigger communication, as indicated by light blue rectangles and vertical dotted lines. Variable $m_{ij}$ denotes the number of events that nodes $i$ and $j$ are simultaneously activated (represented by vertical lines) between two consecutive communication events on link $ij$ (solid vertical lines), while $m_i$ denotes the number of activations of $i$ that concurred with activations of any of its neighbors (colored rectangles) between two consecutive communication events of $i$ with one of its neighbors (red rectangles). See main text for details. } \label{fig:schematic2} \end{figure*} In general, each of the two nodes of the dimer, denoted by $i$ and $j$, can be activated more than once between two consecutive communication events, as sketched in Fig.~\ref{fig:schematic2}(a). This leads to expressions of an ICT, denoted by $\tau$, as the sum of successive IATs of each node: \begin{equation} \tau = \sum_{n'=1}^{n_i} r_{i, n'} = \sum_{n'=1}^{n_j} r_{j, n'}, \label{eq:ICT_is_equal_to_sum_of_IAT} \end{equation} where $r_{\omega, n'}$ denotes the $n'$th IAT of node $\omega \, (\omega = i, j)$ within the ICT and $n_\omega$ denotes the number of times that node $\omega$ is activated between the two communication events. We call $n_\omega$ an inter-communication activation number (ICAN) of node $\omega$. The random variables $n_i$ and $n_j$ will have the same statistics by symmetry, which allows us to focus on node $i$'s point of view from now on. Keeping Eq.~\eqref{eq:ICT_is_equal_to_sum_of_IAT} in mind, our goal is (i) to derive the statistics of $n_i$ and (ii) to compute the statistics of $\tau$ as the sum of the $n_i$ independent random variables $r_i$. Let us consider the activation processes of the two nodes between two consecutive communication events as follows (see the right panel of Fig.~\ref{fig:schematic2}(a)). 
Node $j$ is activated and communicates with node $i$ at time $t_{j, 0}$ for the first time, and is then activated at $t_{j, 1}, t_{j, 2}, \dots, t_{j, n_j - 1}$ until it communicates with $i$ again at $t_{j, n_j}$ for the second time. The number of activations of node $i$ between the two consecutive activations of node $j$ at time $t_{j, n'-1}$ and $t_{j, n'}$ is denoted by $\tilde{n}_{i, n'}$, where $1 \leq n' \leq n_j$. The ICAN is then written as the sum of the numbers of activations in each segment indexed by $n'$, \begin{equation} n_i = \sum_{n'=1}^{n_j}\tilde{n}_{i, n'}. \end{equation} In order to derive the distribution of ICANs, we map the renewal process of the activations of node $j$ to an inhomogeneous Bernoulli process, in which the activation probability is a time-dependent parameter. In particular, we adopt the framework of mapping a continuous renewal process into an event-modulated Poisson process~\cite{Masuda2018Gillespie}. An event-modulated Poisson process is one in which the event rate $\lambda$ is independently redrawn from a distribution $F(\lambda)$ after every event and remains constant until the next event occurs at that rate. The cumulative IET distribution is then shown to be the Laplace transform of $F(\lambda)$~\cite{Masuda2018Gillespie}. In our case, we should instead consider an activation-modulated Bernoulli process since time is discrete. In this framework, node $j$ is activated at each time step with a probability $\lambda_j$, which is independently redrawn from a distribution $F(\lambda_j)$ upon every activation. Then, from node $i$'s point of view, each of its $\tilde{n}_{i, n'}$ activations between two consecutive activations of node $j$ can be considered as an independent Bernoulli trial with the success probability $\lambda_{j, n'}$ that node $j$ is activated at the same time (see the right panel of Fig.~\ref{fig:schematic2}(a)). Now, we hypothesize that a large $n_i$ is likely to be dominated by the numbers of activations that occur within a few long IATs of node $j$ governed by small activation rates. This leads to a simplification of the argument: instead of calculating $n_i$ by a combination of processes with different activation rates, we regard the process between two communication events as almost entirely homogeneous and approximate the distribution of large $n_i$ by a geometric distribution with parameter \begin{equation} \underline{\lambda}_j \simeq \min_{n'} \lambda_{j, n'}. \end{equation} In other words, we replace the activation-modulated process by a communication-modulated process. The distribution of ICANs is then given as \begin{equation} \begin{aligned} \phi(n_i) &= \int_0^\infty \underline{\lambda}_j(1 - \underline{\lambda}_j)^{n_i - 1} G(\underline{\lambda}_j)d\underline{\lambda}_j\\ &\simeq \int_0^\epsilon \underline{\lambda}_j\exp(-\underline{\lambda}_j n_i) G(\underline{\lambda}_j)d\underline{\lambda}_j, \end{aligned} \label{eq:p_n} \end{equation} where $G(\underline{\lambda}_j)$ is the distribution of the activation probability for each ICT. The approximation in the second line follows from the fact that the tail behavior of $\phi(n_i)$ will be dominated by the contributions from small $\underline{\lambda}_j$. We put a small finite cutoff $\epsilon$ and perform the integration up to this value.
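The mixture structure of Eq.~\eqref{eq:p_n} can be checked numerically: drawing a rate from a density that behaves as $\lambda^{\alpha-2}$ near zero (here a gamma distribution with shape $\alpha - 1$, consistent with the gamma form of $F(\lambda)$ noted below) and then a geometric ICAN with that parameter should produce the power-law tail $\phi(n) \sim n^{-\alpha}$. A minimal, illustrative Python sketch: \begin{verbatim}
import numpy as np

alpha = 2.5
rng = np.random.default_rng(1)

# Rate density G(lambda): gamma with shape alpha - 1, which behaves as
# lambda^(alpha - 2) near zero (see the derivation of F(lambda) below).
lam = np.minimum(rng.gamma(alpha - 1.0, 1.0, size=10**6), 1.0)
n = rng.geometric(lam)  # geometric ICAN given the activation probability

# Crude tail estimate from the survival function P(N > n) ~ n^-(alpha-1);
# the pmf exponent is then 1 - slope of the log-log survival curve.
n_sorted = np.sort(n)
surv = 1.0 - np.arange(1, n.size + 1) / n.size
mask = (n_sorted > 10) & (surv > 1e-5)
slope, _ = np.polyfit(np.log(n_sorted[mask]), np.log(surv[mask]), 1)
print("estimated pmf exponent:", 1.0 - slope, "expected:", alpha)
\end{verbatim}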
\begin{figure*}[tb] \centering \includegraphics[width=\textwidth]{Fig5.pdf} \caption{ (a,~b) The scaling exponents of the ICT and ICAN distributions, each for nodes and links, as a function of the scaling exponent $\alpha$ of the IAT distribution in the cases of (a) the polyvalent and (b) the monovalent model, respectively. The results are obtained from numerical simulations run for $T = 4 \times 10^7$ time steps on a random regular graph with $N = 100$ and $k = 6$. The dotted lines represent the identity line. The insets show the ICAN distributions when $\alpha = 3.2$. (c) The probability $q_{ij}$ of communication between pairs of simultaneously activated adjacent nodes of degree $k_i$ and $k_j$ in a scale-free graph. The parameters are the same as in Fig.~\ref{fig:ICT_distributions}(i). The continuous lines, for each $k_i$, represent the theoretical curves given by Eq.~\eqref{eq:communication_prob_given_coactivation} with a numerically obtained value of $\rho$. (d, e) The distributions of (d) link ICCANs and (e) node ICCANs, both obtained from simulations of the monovalent model with $\alpha = 3.2$. The network topology is the same as in (b).} \label{fig:exponents} \end{figure*} In order to estimate the functional form of $G(\underline{\lambda})$, we go back to the original activation-modulated picture. For a given activation rate $\lambda$, the IATs are distributed as \begin{equation} p(r | \lambda) = \lambda e^{-\lambda r}. \end{equation} Conversely, the distribution of $\lambda$ conditional on $r$ is given by Bayes' theorem, \begin{equation} p(\lambda|r) = \frac{p(r|\lambda)p(\lambda)}{\int_0^\infty p(r|\lambda) p(\lambda) d\lambda}. \label{eq:Bayes} \end{equation} As we do not assume anything about the prior $p(\lambda)$ except $\lambda > 0$, we adopt the non-informative density, which is uniform throughout its domain. Eq.~\eqref{eq:Bayes} then reads \begin{equation} p(\lambda | r) = r^2\lambda e^{-\lambda r} \end{equation} with the normalization factor $r^2$. When the IAT distribution scales as $P(r) \sim r^{-\alpha}$ at the tail, the following scaling holds for small values of $\lambda$: \begin{equation} \begin{aligned} F(\lambda) &= \int_0^\infty p(\lambda | r) P(r) dr \\ &\sim \lambda \int_0^\infty r^{2-\alpha} e^{-\lambda r}dr \sim \lambda^{\alpha - 2}. \end{aligned} \end{equation} This is consistent with the fact that for event-modulated Poisson processes, $P(r)$ will have a power-law tail with exponent $\alpha$ when $F(\lambda)$ is a gamma distribution with shape parameter $\alpha - 1$~\cite{Masuda2018Gillespie}, which scales as $F(\lambda) \sim \lambda^{\alpha - 2}$ for small $\lambda$. We assume that $G(\underline{\lambda}) \simeq F(\underline{\lambda})$ for $0 < \underline{\lambda} \leq \epsilon$ and plug this scaling into Eq.~\eqref{eq:p_n} to obtain \begin{equation} \phi(n_i) \sim \int_0^\epsilon \underline{\lambda}_j^{\alpha - 1} \exp(-\underline{\lambda}_j n_i)d\underline{\lambda}_j \sim {n_i}^{- \alpha}. \label{eq:ICAN_distribution_is_power_law} \end{equation} This derivation tells us that the statistics of the ICANs of node $i$ is determined by the activation process of node $j$. If the IAT distributions for nodes $i$ and $j$ are characterized by different exponents $\alpha_i$ and $\alpha_j$, respectively, then $\phi(n_i) \sim n_i^{- \alpha_j}$. We now turn to our second question regarding the statistics of ICTs as the sum of an ICAN of IATs.
Since the ICAN and IAT are independent random variables, we exploit the analytical results in Ref.~\cite{Jo2013Contextual}: We consider the following sum \begin{equation} \tau = \sum_{n' = 1}^n r_{n'} \end{equation} where the summands $r$ and the number of summands $n$ are independent random variables and both follow power-law distributions, $P(r) \sim r^{-\alpha}$ and $\phi(n) \sim n^{-\beta}$. Then, $\tau$ also asymptotically obeys a power-law distribution $\psi(\tau) \sim \tau^{-\alpha'}$ where \begin{equation} \alpha' = \min\{(\alpha - 1)(\beta - 1) + 1, \alpha, \beta\}. \label{eq:exponent_rel} \end{equation} In our case, since the IAT $r_i$ and ICAN $n_i$ in Eq.~\eqref{eq:ICT_is_equal_to_sum_of_IAT} are shown to have the same scaling exponent, i.e., $\beta = \alpha$, the ICT distribution also follows a power law with the same exponent $\alpha' = \alpha$, that is, \begin{equation} \psi(\tau) \sim \tau^{-\alpha}. \end{equation} \subsection{Polyvalent model} The polyvalent model assumes that communication occurs on a link every time the two end nodes are activated at the same time, irrespective of the states of other nodes in the system. Therefore, the dimer case discussed in the previous subsection directly translates into the communication patterns on links (compare the left panel of (a) to (b) in Fig.~\ref{fig:schematic2}). Indeed, Fig.~\ref{fig:exponents}(a) shows that the link ICAN distributions follow power laws, and that the scaling given by Eq.~\eqref{eq:ICAN_distribution_is_power_law} agrees well with the numerical results. The distribution of polyvalent link ICTs is the same as that of ICTs for the dimer case and is given by \begin{equation} \psi_\mathrm{p}(\tau^\mathrm{p}_{ij}) \sim \left[\tau^\mathrm{p}_{ij}\right]^{-\alpha}. \end{equation} In order to investigate the node communication processes, we introduce a time frame defined by counting the number of activations of a node, formally expressed as \begin{equation} \nu_i(t) = \sum_{l = 0}^{\infty}\theta\left(t - t_{i, l}\right), \label{eq:activation_clock} \end{equation} where $t$ is the wall-clock time, $t_{i, l}$ are the times at which node $i$ is activated, and $\theta(\cdot)$ is the Heaviside step function. Simply put, the activation-based time $\nu_i$ is measured by a clock that ticks one unit forward upon every activation of node $i$. This time transformation $t \mapsto \nu_i(t)$ rescales an ICT $\tau = t' - t''$ into an ICAN $n_i = \nu_i(t') - \nu_i(t'')$, meaning that an ICAN is an ``inter-communication time'' for the processes in the time frame $\nu_i$. We note that similar concepts of time frame transformation, named ``relative clock'' and ``activity clock,'' are used in recent studies~\cite{Zhou2012Relative, Panisson2012Dynamics, Gauvin2013Activity}. In the wall-clock time frame, the communication processes on adjacent links $ij$ and $ij'$ are correlated because of the underlying activation process of node $i$. However, in the activation-based time frame $\nu_i$, in which the activations of node $i$ are regularized, the two communication processes are independent because when $i$ is activated, communication between nodes $i$ and $j$ depends only on whether $j$ is simultaneously activated and is not affected by the behavior of node $j'$.
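As an aside, the scaling relation of Eq.~\eqref{eq:exponent_rel} can be probed by direct Monte Carlo; the Python sketch below (with illustrative exponents and a truncation that is a sampling convenience only) sums a power-law-distributed number of power-law summands and estimates the tail exponent of the result. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 2.5, 3.2  # illustrative exponents of P(r) and phi(n)
x_max = 10**6           # truncation, a sampling convenience only

def make_cdf(gamma):
    # Cumulative distribution of P(x) ~ x^-gamma on {1, ..., x_max}.
    w = np.arange(1, x_max + 1, dtype=float) ** (-gamma)
    return np.cumsum(w / w.sum())

cdf_r, cdf_n = make_cdf(alpha), make_cdf(beta)
draw = lambda cdf, size: np.searchsorted(cdf, rng.random(size)) + 1

n = draw(cdf_n, 10**5)                        # number of summands
tau = np.array([draw(cdf_r, k).sum() for k in n])

# Eq. (exponent_rel) predicts the tail exponent
# min{(alpha-1)(beta-1)+1, alpha, beta} = min{4.3, 2.5, 3.2} = 2.5.
t = np.sort(tau)
surv = 1.0 - np.arange(1, t.size + 1) / t.size
mask = (t > 100) & (surv > 1e-4)
slope, _ = np.polyfit(np.log(t[mask]), np.log(surv[mask]), 1)
print("estimated exponent:", 1.0 - slope)
\end{verbatim} With this relation verified numerically, we now return to the node communication process.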
Since the communication process on each link is characterized by power-law distributed ICANs as in Eq.~\eqref{eq:ICAN_distribution_is_power_law}, the node communication process, as the superposition of independent link processes, has a thinner-tailed ICAN distribution, that is, \begin{equation} \phi_\mathrm{p}\left(n^\mathrm{p}_i\right) \sim \left[n^\mathrm{p}_i\right]^{-\beta} \end{equation} with $\beta > \alpha$ (see Fig.~\ref{fig:exponents}(a)). By taking the same approach as in the previous subsection, we write a node ICT as follows: \begin{equation} \tau^\mathrm{p}_i = \sum_{n' = 1}^{n^\mathrm{p}_i} r_{n'}, \end{equation} where $P(r) \sim r^{-\alpha}$. Then, by following the scaling relation given by Eq.~\eqref{eq:exponent_rel}, we find that the polyvalent node ICTs are distributed as \begin{equation} \psi_\mathrm{p}\left(\tau^\mathrm{p}_i\right) \sim \left[\tau^\mathrm{p}_i\right]^{-\alpha}. \end{equation} \subsection{Monovalent model} In contrast to the polyvalent model, simultaneous activation of two adjacent nodes does not necessarily trigger communication between them in the monovalent model. In order to study the statistics of link ICTs, we first need to account for the random pattern of successful communication when a pair of adjacent nodes are simultaneously activated. Suppose that node $i$ with degree $k_i$ is activated at a time step along with $\kappa_i + 1$ activated neighbors including node $j$. Because activation processes of different nodes are independent of each other, the variable $\kappa_i$ is binomially distributed as \begin{equation} \kappa_i \sim B(\kappa_i; k_i - 1, \rho) = \binom{k_i - 1}{\kappa_i} \rho^{\kappa_i} (1 - \rho)^{(k_i - 1) - \kappa_i}, \end{equation} where $\rho = 1 / \langle r \rangle$ denotes the probability that each neighbor of node $i$ is activated when node $i$ is activated. If $\kappa_i > 0$, the communication between nodes $i$ and $j$ occurs only if $i$ selects $j$ as the counterpart as a result of random matching. Although the probability of selecting each of the activated neighbors is not uniform in general, we assume uniformity for simplicity, so that the probability that node $j$ is selected is equal to $1 / (\kappa_i + 1)$. The same holds for node $j$ with its $\kappa_j + 1$ activated neighbors including node $i$. Then, the probability that simultaneous activation of nodes $i$ and $j$ leads to communication between them is approximated by \begin{equation} \begin{aligned} q_{ij} &= \sum_{\kappa_i = 0}^{k_i - 1}\frac{B(\kappa_i; k_i - 1, \rho)}{\kappa_i + 1}\sum_{\kappa_j = 0}^{k_j - 1}\frac{B(\kappa_j; k_j - 1, \rho)}{\kappa_j + 1}\\ &= \frac{\left[1 - (1 - \rho)^{k_i}\right]\left[1 - (1 - \rho)^{k_j}\right]}{\rho^2 k_i k_j}. \end{aligned} \label{eq:communication_prob_given_coactivation} \end{equation} This form reduces to $q_{ij} = 1$ for the dimer case of $k_i = k_j = 1$, in which the two nodes communicate with each other every time they are simultaneously activated. In the limit where $k_i, k_j \gg 1$, we have $q_{ij} \simeq 1 / \rho^2 k_i k_j$. Equation~\eqref{eq:communication_prob_given_coactivation} is, on the whole, numerically supported, as shown in Fig.~\ref{fig:exponents}(c), although deviations and fluctuations are notable. We attribute these deviations to perturbations by higher-order effects involving more than two nodes, which violate the uniformity assumption.
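The closed form in Eq.~\eqref{eq:communication_prob_given_coactivation} follows from the identity $\sum_{\kappa=0}^{k-1} B(\kappa; k-1, \rho)/(\kappa+1) = [1-(1-\rho)^{k}]/(\rho k)$ applied to each node factor, which is easy to verify numerically; a short, illustrative Python check (the degrees and $\rho$ are chosen arbitrarily): \begin{verbatim}
import math

def factor_sum(k, rho):
    # One node's factor as the explicit binomial sum in Eq. (q_ij).
    return sum(math.comb(k - 1, kap) * rho**kap
               * (1 - rho)**(k - 1 - kap) / (kap + 1)
               for kap in range(k))

def factor_closed(k, rho):
    # The same factor in closed form: [1 - (1 - rho)^k] / (rho * k).
    return (1.0 - (1.0 - rho)**k) / (rho * k)

rho = 0.1  # illustrative value of 1 / <r>
for ki, kj in [(1, 1), (3, 5), (10, 10), (20, 4)]:
    q_sum = factor_sum(ki, rho) * factor_sum(kj, rho)
    q_closed = factor_closed(ki, rho) * factor_closed(kj, rho)
    print(ki, kj, round(q_sum, 10), round(q_closed, 10))
\end{verbatim}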
Let $m_{ij}$ be the number of times that adjacent nodes $i$ and $j$ are simultaneously activated between two consecutive communication events, including their activation that triggered the latter of the two communication events but excluding the one that triggered the former (see Fig.~\ref{fig:schematic2}(c)). We call $m_{ij}$ an inter-communication coactivation number (ICCAN) of link $ij$. Because random pairing is done independently at each time step, $m_{ij}$ is geometrically distributed (see Fig.~\ref{fig:exponents}(d)) as \begin{equation} \chi(m_{ij}) = (1 - q_{ij})^{m_{ij} - 1} q_{ij}. \label{eq:ICCAN_distribution_is_geometric} \end{equation} As depicted in Fig.~\ref{fig:schematic2}(c), a monovalent link ICT, denoted by $\tau^\mathrm{m}_{ij}$, is equal to the sum of $m_{ij}$ successive polyvalent link ICTs, \begin{equation} \tau^\mathrm{m}_{ij} = \sum_{m' = 1}^{m_{ij}} \tau^\mathrm{p}_{ij, m'}. \label{eq:monovalent_link_ICT_is_equal_to_sum_of_polyvalent_link_ICT} \end{equation} The distribution of $\tau^\mathrm{m}_{ij}$ is written as \begin{equation} \psi_\mathrm{m}(\tau^\mathrm{m}_{ij}) = \sum_{m_{ij}=1}^\infty h(\tau^\mathrm{m}_{ij}; m_{ij}) \chi(m_{ij}), \end{equation} where \begin{equation} \begin{aligned} h(\tau; m) \equiv &\sum_{\tau_1 = 0}^{\infty} \dots \sum_{\tau_m = 0}^{\infty} \psi_\mathrm{p}(\tau_1) \dots \psi_\mathrm{p}(\tau_m) \\ &\times \delta\left(\tau - \sum_{m'=1}^m \tau_{m'}\right) \end{aligned} \end{equation} is the probability that a monovalent link ICT is equal to $\tau$ and is composed of $m$ polyvalent link ICTs. Here $\delta(\cdot)$ denotes the Dirac delta function. An analytical evaluation of the discrete power-law distribution $\psi_\mathrm{p}(\tau)$ is not straightforward. Instead, we consider a continuous counterpart given by \begin{equation} \psi_\mathrm{p}(\tau^\mathrm{p}_{ij}) = (\alpha - 1) \left[\tau^\mathrm{p}_{ij}\right]^{-\alpha}\theta(\tau^\mathrm{p}_{ij} - 1), \label{eq:continuous_polyvalent_link_ICT_distribution} \end{equation} where $\alpha > 1$. The Laplace transform of Eq.~\eqref{eq:continuous_polyvalent_link_ICT_distribution} is given as \begin{equation} \tilde{\psi}_\mathrm{p}(s)=(\alpha-1)s^{\alpha-1}\Gamma(1-\alpha,s), \end{equation} where $\Gamma(\cdot, \cdot)$ denotes the upper incomplete gamma function. In the asymptotic limit of $s\to 0$, one gets \begin{equation} \tilde{\psi}_\mathrm{p}(s) = 1 + bs^{\alpha-1} + cs + \mathcal{O}(s^2) \end{equation} with $b \equiv \Gamma(1-\alpha)(\alpha-1)$, where $\Gamma(\cdot)$ is the gamma function, and $c\equiv (\alpha-1)/(2-\alpha)$. By keeping only the leading terms of the expansion with respect to $s$, we have \begin{equation} \tilde{\psi}_\mathrm{m}(s) \simeq 1 + \frac{bs^{\alpha - 1} + cs}{q_{ij}}. \label{eq:Laplace_transform_of_monovalent_link_ICT_distribution} \end{equation} The inverse Laplace transform of Eq.~\eqref{eq:Laplace_transform_of_monovalent_link_ICT_distribution} in the limit of $\tau \to \infty$ yields \begin{equation} \psi_\mathrm{m}(\tau^\mathrm{m}_{ij}) \simeq \frac{\alpha - 1}{q_{ij}}\left[\tau^\mathrm{m}_{ij}\right]^{-\alpha}. \label{eq:monovalent_link_ICT_follows_power_law} \end{equation} Equation~\eqref{eq:monovalent_link_ICT_follows_power_law} is valid for any value of $\alpha$ because considering higher-order terms in the expansion of Eq.~\eqref{eq:Laplace_transform_of_monovalent_link_ICT_distribution} does not affect the asymptotic form given by Eq.~\eqref{eq:monovalent_link_ICT_follows_power_law}.
This result indicates that the link ICT distribution for the monovalent model has a power-law tail with the same exponent as the link ICT distribution for the polyvalent model, which is consistent with the numerical results presented in Fig.~\ref{fig:exponents}(b). At the same time, the geometric distribution of ICCANs contributes to the hump in the bulk part of the monovalent link ICT distribution. For a dense network where the degrees are generally large, $q_{ij}$ is small and the geometric decay of $\chi$ in Eq.~\eqref{eq:ICCAN_distribution_is_geometric} is slow; this is why the hump is larger in denser networks, as shown in Fig.~\ref{fig:ICT_distributions}. Lastly, we discuss the node ICT distribution for the monovalent model. Unlike the polyvalent case, the monovalent communication events on adjacent links (i.e., links sharing a node) are not independent of each other even in the activation-based time frame, because communication between a pair of nodes prevents those nodes from communicating with other nodes at the same time. Nevertheless, Fig.~\ref{fig:exponents}(e) shows that node ICCANs $m_i$, i.e., the numbers of times that node $i$ is activated simultaneously with any of its neighbors until it communicates with one of them, are geometrically distributed. This observation suggests that the probability that a node succeeds in communicating with another node is constant every time it is simultaneously activated with at least one of its neighbors. A monovalent node ICT, denoted by $\tau^\mathrm{m}_i$, can be written as the sum of $m_i$ successive polyvalent node ICTs as follows: \begin{equation} \tau^\mathrm{m}_i = \sum_{m' = 1}^{m_i} \tau^\mathrm{p}_{i,m'}. \label{eq:monovalent_node_ICT_is_equal_to_sum_of_polyvalent_node_ICTs} \end{equation} Equation~\eqref{eq:monovalent_node_ICT_is_equal_to_sum_of_polyvalent_node_ICTs} is analogous to the relation between the monovalent and polyvalent link ICTs given by Eq.~\eqref{eq:monovalent_link_ICT_is_equal_to_sum_of_polyvalent_link_ICT}. By the same argument as led to Eq.~\eqref{eq:monovalent_link_ICT_follows_power_law}, one obtains the monovalent node ICT distribution with a power law at its tail as follows: \begin{equation} \psi_\mathrm{m} \left(\tau^\mathrm{m}_i\right) \sim \left[\tau^\mathrm{m}_i\right]^{-\alpha}. \end{equation} This result is in good agreement with the numerically obtained scaling relations shown in Fig.~\ref{fig:exponents}(b). \section{Conclusion} \label{sec:conclusion} In order to explain the origin of the bursty activity patterns of nodes and links observed in empirical communication systems, we have proposed a temporal network model where the nodes communicate with each other according to their non-Poissonian random activation. The two variants of the model that we discussed are both able to reproduce heavy tails in the inter-communication time (ICT) distributions for nodes and links for various network topologies. We have shown that the polyvalent ICTs are power-law distributed because each of them is a sum of inter-activation times (IATs) in which both the summands and the number of summands are power-law-distributed random variables, which stem from the node activation processes. A monovalent ICT is described as a sum of polyvalent ICTs, where the number of summands is geometrically distributed because the activated nodes are paired randomly and independently at each time step.
As a result, an exponential hump appears in the bulk part of the ICT distributions, most prominently for small-degree nodes and for links between large-degree nodes; nevertheless, the tail part of the distributions follows a power law with the same scaling exponent as the power law in the IAT distribution. The superposition of independent communication sequences with power-law distributed ICTs does not yield a sequence whose ICTs are distributed as a power law with the same scaling exponent. Therefore, the assumption of independence between links is unable to account for the real-world observations. Our results suggest a possible mechanism behind the reconciliation between bursty dynamics of nodes and of links: Link-link correlations emerge as a result of underlying node activations, each of which may or may not realize actual communication. Further steps can be taken in this line of research. In this work, we have considered a homogeneous population of nodes that shares the same activation statistics. In reality, the activity levels of nodes and the weights of links, i.e., the frequency of communication between pairs of individuals, are heterogeneous~\cite{Onnela2007Analysis}. It would be straightforward to include such heterogeneity in our modeling framework by considering nodes endowed with different scaling exponents of the IAT distribution. We have also assumed that a node behaves in an equal manner toward every neighbor. However, empirical data show that individuals allocate their efforts to communicate with others unevenly among alters~\cite{Saramaki2014Persistence}. This effect can be taken into account in the monovalent model by setting biases toward certain links when pairing communication partners. Another possible extension is to implement communication among a group of nodes, which corresponds to ``conference calls'' or ``group chats,'' in a direction similar to Ref.~\cite{Petri2018Simplicial}. One can also tailor the temporal structure of node activation patterns to account for the empirical observation of long-range correlated node ICTs in human communication~\cite{Karsai2012Universal, Jo2019Bursttree}. Future work also includes investigating how the presence of link-link correlations affects dynamical processes taking place in temporal networks, as well as the associated network control problems~\cite{Li2017Fundamental}. \begin{acknowledgments} T.H. thanks M.I.D. Fudolig for careful reading of the manuscript. A.L. was supported by the International Human Frontier Science Program (HFSP) Postdoctoral Fellowship (Grant No. LT000696/2018-C) and the Foster Lab at Oxford. H.-H.J. was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1A09081919). \end{acknowledgments}
\section{Introduction} Increasingly, transparency is becoming a key practical and ethical concern for developers of natural language technologies \cite{candello_cuichi_2020}. Conversational research suggests knowledge about the type of partner a person is talking to (human or computer) has a significant impact: on language choices in dialogue \cite{cowan_whats_2019,cowan_does_2015}; on perceptions of a partner’s knowledge and capabilities \cite{doyle_mapping_2019}; on perceptions of interpersonal connection \cite{doyle_mapping_2019, purington_alexa_2017}; and on perceptions of trustworthiness \cite{torre_trust_2018}. Failure to disclose when someone is talking to a computer rather than a person can also lead to heightened expectations about system capability. Subsequent failure to meet these expectations can lead to frustration, limited use and even abandonment of conversational user interfaces (CUIs) \cite{luger_like_2016}. Transparency is also a key element of recent policy implementations such as GDPR, which guarantees European citizens a right to transparency in their interactions with technology \cite{european_2016}. Here, we propose a structure for `levels of automation' that can be used to clearly delineate the roles of humans and machines in generating output. Our hope is that the levels of automation posited here, inspired by the SAE taxonomy of driving automation, will be a first step toward greater transparency in the field of conversational technology, inspiring iterative refinement of this taxonomy that produces universal descriptions for these technologies. \section{SAE Levels} Issued in 2014, the SAE taxonomy of driving automation levels clearly categorizes automated driving systems according to the level of control possessed by the driver and the vehicle, respectively. The clarity of language afforded by the SAE taxonomy allows for marketing of new technologies that avoids misleading consumers \cite{barry_levels_2019}, as well as allowing researchers, policymakers and journalists to discuss emerging technologies \cite{milakis_long-term_2019}. It also provides consumers with clarity around levels of automation and control. Table 1 presents a summarized version of the SAE levels of driving automation. \vspace*{-\baselineskip} \newcolumntype{s}{>{\hsize=.22\hsize}X} \begin{table}[h] \caption{SAE Levels of driving automation} \label{Table 1:} \begin{tabularx}{\linewidth}{s|X} SAE Level & Description \\ \hline 0 & No automation \\ \hline 1 & Driver assistance (a single automated system, like cruise control) \\ \hline 2 & Partial automation (vehicle can operate autonomously, but human monitors the environment and can take control at any time) \\ \hline 3 & Conditional automation (vehicle can monitor environment and operate autonomously, but human must be available to take over in some situations) \\ \hline 4 & High automation (under certain circumstances, the vehicle is fully autonomous; human takeover is optional in other circumstances) \\ \hline 5 & Full automation (no human interaction required; takeover may be disabled) \end{tabularx} \end{table} SAE levels are defined in terms of the role and responsibility of the driver in relation to the vehicle's automated features. As such, they give insight into varying levels of human/system control across different in-car automation systems and across various driving events a system may or may not be designed to handle. As seen in Table 1, human operators are in control at levels 2 and below, whilst the automated driving system is in control at levels 3 and above.
This delineation helps in setting appropriate expectations for drivers, aids policymakers in establishing appropriate legal frameworks, and allows for greater accuracy in reporting when these human-computer interactions fail. The SAE levels were developed through committee discussion among automotive engineers and industry stakeholders in order to establish a common set of terms for this specific issue. It is our hope that by structuring similar levels in the field of language generation, we too can give a common language to our research community, enhancing transparency in the field. \section{Levels of Language Generation Automation} Like the SAE levels, our proposed structure comprises six levels of automation that define the role of a human author when supported by automated language generation systems, including those using both rule-based and probabilistic approaches. Below, we posit definitions and examples of each of the six levels of language generation. \textbf{Level 0: Fully human-written language: } indicative of language written and selected exclusively by a human. While editing assistance like spell check may be used at this level, lexical choices are entirely controlled by a human. \textbf{Level 1: Language assistance:} includes language that is entirely written and selected by a human, but may be scripted to present automatically. These may include highly constrained chatbots designed for a specific role, or phone trees limited by predetermined sequences. Systems like this present users with a limited set of dialogue options per turn, rather than allowing a user to freely enter language. Dialogues may take varied branching paths, but only within the confines of predetermined sets of commands and responses that were generated by a human author. No novel language is generated throughout these interactions. In this way, level 1 automation allows for automation of scripted interactions rather than automation of novel generation. \textbf{Level 2: Partial automation:} includes language generated through shared effort between a human author and an automated system. This may include language written by a human then selected algorithmically, and/or language generated algorithmically then selected by a human. Many modern Twitterbots take the former approach, including Twitterbots built using Tracery (e.g., Lost Tesla Bot\footnote{twitter.com/LostTesla}). Tracery is a language generation approach that generates text through slot filling using random or conditional selection \cite{compton_tracery_2015}. An author can create a number of templates, called origins, and define slots and lists of potential entries for each slot. A Twitterbot employing Tracery thus produces novel text by combining several human-written texts. An example of partial automation can also be seen in output from Voicebox, a predictive text tool by Botnik Studios\footnote{botnik.org}. Voicebox is a Markov chain-based predictive text keyboard that can be trained on existing text uploaded by the author (e.g., a compilation of an artist’s lyrics, a body of text from a novel). The author then generates a new text by selecting one word at a time. Level 2 partial automation still requires a high level of human involvement as language must be initially provided by a human author, but novel generation is possible. \textbf{Level 3: Conditional automation:} Here language generation is accomplished by automation, with human effort reserved primarily for selecting the generated language for use.
At this level, the role of authorship is shared between a human and a machine. As an example, in a recent article in The Economist, contributor Tom Standage interviewed GPT-2 by inputting interview questions about technology to watch in 2020 \cite{standage_artificial_2019}. GPT-2, a large stochastic language model \cite{radford_language_2019}, was instantiated in a shared writing environment in which an author could input text cues for GPT-2 to continue to generate language. Standage authored questions, received several generated answers, and selected which answers to publish. Although a higher level of automation was asserted, this masks the role Standage had in composing the interview, particularly in selecting which answers to use. An interaction like this is more accurately described as Level 3: conditional automation involving a shared task between a human and a machine. \textbf{Level 4: High automation: } Level 4 language automation requires no human supervision. Here, language is generated and selected stochastically, though constrained to a specific domain or language task. One currently available implementation of level 4 language automation is AI Dungeon, a GPT-2-based text adventure role-playing game\footnote{aidungeon.io}. Gameplay in AI Dungeon is generated by an instantiation of GPT-2 trained on text from role-playing games, with the game responding to human text input with stochastically generated story development, creating emergent gameplay between the game and player. Similar implementations could be used in a variety of other use-cases, like educational tools to encourage children to write, and marketing tools such as embodied agents promoting brand engagement. While training models and defining domains may require high human effort, language generation is not performed by a human at this level of automation. This fully automated generation differentiates level 4 from level 3, while domain constraints differentiate it from level 5. \textbf{Level 5: Fully automated language: }Level 5 represents fully automated language generation. Like level 4, fully automated language requires no human supervision and is both produced and selected stochastically. However, unlike level 4, language is not constrained to a particular domain, task, or topic. This is a long-held goal for general artificial intelligence \cite{nilsson_quest_2010} and has not yet been accomplished. To merit consideration as a level 5 automated language system, a system would need to be capable, without modification, of various language tasks including both task-oriented transactional dialogue and open-ended social dialogue. Level 5 systems may be most useful as a template that users could then constrain for different specific purposes, thus turning specific implementations into level 4 systems. While there is some evidence that people enjoy chatting with natural-sounding chatbots \cite{shum_eliza_2018}, other work casts doubt on this, suggesting these preferences may be due to the novelty of the interaction \cite{clark_what_2019}, which may mean there are limited use-cases for this level of automation. It should be made clear that the level of automation of a language system does not necessarily correspond to the quality of outputs nor to the utility of the system overall. \section{Conclusion} Conversational systems and language generation tools are becoming increasingly advanced, blurring lines between human-generated and computer-generated language.
Degrees of automation have been clearly delineated in the field of automated driving through the use of a shared set of definitions that can be understood by a variety of stakeholders. By using a similar taxonomy in the field of conversational technology, we can ensure that this field maintains transparency in discussions of generated language and that consumers are not misled when interacting with these systems. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Let $ E $ be an elliptic curve defined over $ \mathbb{Q} $, with conductor $ N = N(E) $ and Tamagawa product (also called `fudge factor') $ \tau = \tau(E) $. In my 1998 paper \cite{dW2} I proved a (conditional) upper bound for $ \tau $ in terms of $ N $, namely \begin{lemma}[de Weger, 1998] Szpiro's Conjecture implies that for all elliptic curves \[ \tau \ll N^{\epsilon} . \] \end{lemma} This is short for: For each $ \epsilon > 0 $ there exists an absolute constant $ C_{\epsilon} $ such that for all elliptic curves over $ \mathbb{Q} $ it is true that $ \tau < C_{\epsilon} N^{\epsilon} $. In fact, I proved a slightly better result, namely the existence (assuming Szpiro's Conjecture) of an absolute constant $ C $ such that $ \tau < N^{C/\log\log N} $, but this was not needed in that paper. Actually I wanted $ \tau $ to be small, as the goal was to find curves with exceptionally big Tate-Shafarevich groups, and the Birch and Swinnerton-Dyer Conjecture suggests that then $ \tau $ being small is helpful. Recently this little auxiliary lemma has received interest from, a.o., Hector Pasten, in relation to the $ abc $ Conjecture, see \cite{P1}, \cite{P3}. In his paper \cite{P2} Pasten improved upon Lemma 1 by the following conjecture. \begin{conjecture}[Pasten, 2020] For all elliptic curves \[ \tau \ll N^{\left(\frac73\log 3 + \epsilon\right)/\log\log N} . \] \end{conjecture} Note that $ \frac73\log 3 = 2.563\ldots $, and note that I left out the term $ + O_{\epsilon}(1) $ from \cite[Conjecture 1.1]{P2}, as it seems superfluous. Pasten remarks that the constant $ \frac73\log 3 $ might not be optimal, but also believes there is `some evidence' for it. Anyway, large $ \tau $'s are rare, as it is known \cite{GOT} that the average $ \tau $ is $ 1.8193\ldots $. This leads me to define the \emph{Tamagawa quality} of an elliptic curve by \[ q_{\tau} = \dfrac{\log\tau\log\log N}{\log N} . \] It is the purpose of this note to report on first experimental results searching for elliptic curves with an exceptionally high Tamagawa quality $ q_{\tau} $, and also for elliptic curves with an exceptionally large Tamagawa product $ \tau $ itself. The methods used here are restricted to picking the low hanging fruit only, and it is my hope that this note spurs interest from others to find better methods and results. Tables with all found curves can be found online \cite{dW3}. \section{Curves from the LMFDB and Cremona Databases} LMFDB \cite{LMFDB} is a database with web interface containing a wealth of data on a.o.\ elliptic curves. In particular it contains a collection, called \verb=ec_mwbsd=\footnote{\url{https://www.lmfdb.org/api/ec_mwbsd/}}, of data related to the Birch and Swinnerton-Dyer Conjecture for $ 3824372 $ curves, containing for each curve a.o.\ the conductor and the Tamagawa product. So this is a good place to start. However, the web interface is not well suited for directly checking the Tamagawa quality for all curves in the database. I found it easiest to download the underlying database in two parts. The full data for all $ 3064705 $ curves with conductor up to $ 500000 $ can be downloaded directly from John Cremona's database \cite{C}. I did so, and found the following results (see the green dots in Figure 1 below). 
\begin{itemize} \item $ 10795 $ curves have $ q_{\tau} > 1.5 $, \item $ 135 $ curves have $ q_{\tau} > 2 $, \item the curve with largest $ q_{\tau} = 2.30681 $ is $ y^2 + x y = x^3 - 1054050116 x - 12046088636400 $, with $ N = 39270 = 2 \, 3 \, 5 \, 7 \, 11 \, 17 $ and $ \tau = 31104 = 2^7 \, 3^5 $, \item the curve with largest $ \tau = 87040 = 2^{10} \, 5 \, 17 $ is $ y^2 + x y = x^3 - 4456595642213 x - 1538486355950810000 $, with $ N = 364650 = 2 \, 3 \, 5^2 \, 11 \, 13 \, 17 $ and $ q_{\tau} = 2.26473 $. \item Full tables \verb!output_cremona_qua.txt!, \verb!output_cremona_tam.txt! are on \cite{dW3}. \end{itemize} Note that Cremona's Database contains all curves with conductor up to $ 500000 $. This leaves $ 759667 $ curves from the LMFDB \verb=ec_mwbsd= collection with conductor between $ 500000 $ and $ 300000000 $. The web interface does allow an easy download of data through \url{https://www.lmfdb.org/EllipticCurve/Q/?conductor=500000-}, but this does not include Tamagawa products; one only gets conductors and Weierstrass coefficients $ a_1, a_2, a_3, a_4, a_6 $. However, computing $ \tau $ with SageMath \cite{S} is then trivial, with the code snippet \\ \verb!tau = EllipticCurve([a1,a2,a3,a4,a6]).tamagawa_product()! \\ Doing this, I found the following results. \begin{itemize} \item no curve has $ q_{\tau} > 1.5 $, \item the largest $ q_{\tau} $ found is $ 1.22859 $, \item the largest $ \tau $ found is $ 576 $. \end{itemize} This somewhat disappointing result is probably due to the fact that of the curves of conductor between $ 500000 $ and $ 300000000 $ only those of prime or $ 7 $-smooth conductor have been incorporated, whereas $ \tau $ seems to get large only when the conductor has many small prime factors. \section{Curves from $ abc $-triples} \subsection{$ abc $-triples and Frey-Hellegouarch curves} Let $ a, b, c $ be a triple of positive integers satisfying $ a + b = c $, $ a < b $, and $ \gcd(a,b) = 1 $. Its \emph{radical} $ r(a,b,c) $ is the product of the distinct prime divisors of $ a $, $ b $ and $ c $, i.e.\ $ r(a,b,c) = \displaystyle \prod_{\text{prime }p|abc} p $. Such a triple $ a, b, c $ is called an \emph{$ abc $-triple} if it satisfies $ c > r(a,b,c) $. The $ abc $ Conjecture states that $ c \ll r(a,b,c)^{1+\epsilon} $, and this has led to the definition of the \emph{quality} of an $ abc $-triple as $ q(a,b,c) = \dfrac{\log c}{\log r(a,b,c)} $ (so a triple is an $ abc $-triple if and only if $ q(a,b,c) > 1 $). This concept of quality, although not yet under that name, seems to appear for the first time in my paper \cite{dW1}. Later also the \emph{merit} $ m(a,b,c) = (q(a,b,c)-1)^2 \log r(a,b,c) \log\log r(a,b,c) $ was introduced as an interesting measure for $ abc $-triples. See Bart de Smit's website \cite{dS} for a wealth of experimental information on high quality and high merit $ abc $-triples. In this note I adopt the terminology \emph{medium quality} for an $ abc $-triple with $ 1.3 < q(a,b,c) < 1.4 $, next to \emph{high quality} for an $ abc $-triple with $ q(a,b,c) > 1.4 $ (as used in \cite{dW1}), and following \cite{dS} I also use \emph{high merit} for an $ abc $-triple with $ m(a,b,c) > 24 $, and \emph{unbeaten} if there is no $ abc $-triple known with larger $ c $ and larger $ q(a,b,c) $.
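These measures are easy to compute. The following minimal Python sketch (the naive trial-division radical is illustrative only and does not scale to the largest triples in the tables) evaluates the quality and merit of the triple $ 10 + 2187 = 2197 $, which reappears in Section 3.6:

\begin{verbatim}
from math import gcd, log

def radical(n):
    # product of the distinct primes dividing n, by trial division
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r

def quality_merit(a, b, c):
    assert a + b == c and a < b and gcd(a, b) == 1
    r = radical(a * b * c)
    q = log(c) / log(r)
    return q, (q - 1)**2 * log(r) * log(log(r))

print(quality_merit(10, 2187, 2197))   # q = 1.28975...
\end{verbatim}

The printed quality matches the value quoted for this triple in Section 3.6.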
For an $ abc $-triple $ a, b, c $ it makes sense to look at its Frey-Hellegouarch curve, defined by $ y^2 = x(x-a)(x+b) $, because its conductor equals $ r(a,b,c) $ up to a bounded power of $ 2 $, so that such elliptic curves have exceptionally small conductors precisely for good quality $ abc $-triples. The Birch and Swinnerton-Dyer Conjecture for elliptic curves over $ \mathbb{Q} $ states \[ \Omega \tau | \text{Sha} | = \dfrac{T^2}{R} \lim_{s\to1} \dfrac{L(s)}{(s-1)^r} , \] where $ \Omega = \omega $ or $ 2 \omega $ for the period $ \omega $, $ \text{Sha} $ is the Tate-Shafarevich group, $ T $ is the order of the torsion group, $ R $ is the regulator, $ L(s) $ is the $ L $-series, and $ r $ is the rank of the elliptic curve. This conjecture is believed to hold for all elliptic curves, but in this note I restrict to Frey-Hellegouarch curves. The conductor is not explicitly there in the Birch and Swinnerton-Dyer formula, but its influence comes via $ \Omega $, as it is known that $ \Omega \ll \dfrac{\log c}{\sqrt{c}} $, which in the case of a good $ abc $-triple implies $ \Omega \ll N^{-1/2+\epsilon} $. In other words, the Frey-Hellegouarch curve then has an exceptionally small period, and the Birch and Swinnerton-Dyer Conjecture then suggests that this must be compensated for somewhere. The somewhat unusual way I have used above to present the Birch and Swinnerton-Dyer formula is suggestive for where I will be looking for this compensation. The main idea of \cite{dW2} was that if one can show that, next to $ \Omega $, also the Tamagawa product $ \tau $ is small compared to the conductor, then one may expect large $ \text{Sha} $'s, and this idea turned out to be fruitful, see also \cite{N}, \cite{DW}, \cite{B}. In this note I now complement this with the idea that, although there is a subpolynomial upper bound $ \tau \ll N^{C/\log\log N} $, it may still occur that large $ \tau $ accounts for a substantial part of this compensation of very small periods. In other words, one may expect big Tamagawa products also at Frey-Hellegouarch curves, and, like in \cite{dW2}, their quadratic twists and isogenous curves. So this sets the program for the remainder of this note: to search for elliptic curves with large Tamagawa products $ \tau $, and large Tamagawa qualities $ q_{\tau} $, by looking at curves isogenous to quadratic twists of Frey-Hellegouarch curves for known good $ abc $-triples. For those $ abc $-triples the website of Bart de Smit \cite{dS} is an amazingly good source. Let's start with a picture giving an overview of the results. \begin{figure}[h] \centering \captionsetup{justification=centering} \includegraphics[width=\textwidth]{tamagawa.png} \caption{Elliptic curves with large Tamagawa product $ \tau $ and Tamagawa quality $ q_{\tau} > 1.5 $. \\[1ex] {\small Legend: \begin{tabular}[t]{r@{\;}l} curved lines: & $ q_{\tau} = \dfrac73\log 3 = 2.563\ldots, 2.4, 2.2, 2, 1.8, 1.5 $, \\ green dots: & curves from the Cremona and LMFDB databases, \\ yellow dots: & curves from high merit $ abc $-triples, \\ red dots: & curves from medium quality $ abc $-triples, \\ blue dots: & curves from high quality $ abc $-triples, \\ black circles: & curves from `triples from triples'. \end{tabular}}} \end{figure} \subsection{Curves from high quality $ abc $-triples} There are $ 241 $ high quality $ abc $-triples known, nicely presented as such on \cite{dS}. 
For each I took the twisted Frey-Hellegouarch curves $ d y^2 = x(x-a)(x+b) $ for $ d = \pm1, \pm2, \pm3, \pm5, \pm6 $, and some isogenous curves, computed by SageMath \cite{S} as follows: \\ \verb!E = EllipticCurve([0, d*(b-a), 0, -d^2*a*b, 0])! \\ \verb![E.isogeny(E(te)).codomain() for te in E.torsion_subgroup()]! \\ and then using SageMath's \verb!E.conductor()!, \verb!E.tamagawa_product()! to compute the conductor and the Tamagawa product (aborting the computation for each curve when it took more than 1 minute). This gave the following results, made visible in Figure 1 as blue dots. \begin{itemize} \item $ 841 $ curves have $ q_{\tau} > 1.5 $, \item $ 28 $ curves have $ q_{\tau} > 2 $, \item the curve with largest $ q_{\tau} = 2.39875 $ is \\ $ y^2 + x y = x^3 - 2713479277841926834110 x - 53674762419393192464788215315900 $, \\ with $ N = 105872910 = 2 \, 3 \, 5 \, 11 \, 13 \, 23 \, 29 \, 37 $ and $ \tau = 3981312 = 2^{14} \, 3^5 $, \\ isogenous to the Frey-Hellegouarch curve, twisted by $ -1 $, for the $ abc $-triple with \\ $ a = 22771715409 = 3^{16} \, 23^2 $, \\ $ b = 348972425216 = 2^{13} \, 29^2 \, 37^3 $, \\ $ c = 371744140625 = 5^9 \, 11^4 \, 13 $, \\ $ q(a,b,c) = 1.44181 $, $ m(a,b,c) = 10.5196 $, \item the curve with largest $ \tau = 152202903552 = 2^{28} \, 3^4 \, 7 $ is \\ $ y^2 + x y = x^3 - 243293616838005191387643029131295594469691482466549330x - $ \\ $ 46189598313302475345413359036293931548705009829803079259456070575837049845007100 $, \\ with $ q_{\tau} = 2.18988 $ and $ N = 25180873035975641490 = 2 \, 3 \, 5 \, 11 \, 17 \, 19 \, 23 \, 37 \, 43 \, 61 \, 127 \, 173 \, 4817 $, \\ isogenous to the Frey-Hellegouarch curve, twisted by $ -1 $, for the $ abc $-triple with \\ $ a = 44790692380548068359375 = 5^9 \, 17^2 \, 23^4 \, 37^2 \, 43 \, 4817 $, \\ $ b = 3417300183328464869570036529 = 3^{14} \, 11^8 \, 61^2 \, 173^4 $, \\ $ c = 3417344974020845417638395904 = 2^{52} \, 19^6 \, 127^2 $, \\ $ q(a,b,c) = 1.41918 $, $ m(a,b,c) = 29.8237 $, \\ and also for the $ abc $-triple with \\ $ a = 146767394485224241 = 23^8 \, 37^4 $ \\ $ b = 13669290314405085785446416384 = 2^{28} \, 3^7 \, 11^4 \, 19^3 \, 61 \, 127 \, 173^2 $ \\ $ c = 13669290314551853179931640625 = 5^{18} \, 17^4 \, 43^2 \, 4817^2 $ \\ $ q(a,b,c) = 1.45022 $, $ m(a,b,c) = 34.4028 $, twisted by $ -1 $; \\ this peculiar situation is explained in Section 3.6. \item Full tables \verb!output_high_quality_qua.txt!, \verb!output_high_quality_tam.txt! are on \cite{dW3}. \end{itemize} \subsection{Curves from medium quality $ abc $-triples} There are $ 1947 $ medium quality $ abc $-triples known. They have been extracted from the two big tables \verb!triples_below_1018_revised! (all $ 14482065 $ $ abc $-triples below $ 10^{18} $) and \verb!big_triples! (an additional $ 9345651 $ $ abc $-triples between $ 10^{18} $ and $ 2^{63} $). With those medium quality $ abc $-triples I did exactly the same as I did with the high quality $ abc $-triples. This gave the following results, made visible in Figure 1 as red dots. 
\begin{itemize} \item $ 6342 $ curves have $ q_{\tau} > 1.5 $, \item $ 172 $ curves have $ q_{\tau} > 2 $, \item the curve with largest $ q_{\tau} = 2.39177 $ is \\ $ y^2 + x y = x^3 - 986143769212695065 x - 376928045756312748465752775 $, \\ with $ N = 13232310 = 2 \, 3 \, 5 \, 7 \, 13 \, 37 \, 131 $ and $ \tau = 1228800 = 2^{14} \, 3 \, 5^2 $, \\ isogenous to the Frey-Hellegouarch curve, twisted by $ -1 $, for the $ abc $-triple with \\ $ a = 658489 = 13 \, 37^3 $, \\ $ b = 6879707136 = 2^{20} \, 3^8 $, \\ $ c = 6880365625 = 5^5 \, 7^5 \, 131 $, \\ $ q(a,b,c) = 1.38137 $, $ m(a,b,c) = 6.67124 $, \item the (isogenous) curves with the largest $ \tau = 509607936 = 2^{21} \, 3^5 $ are \\ $ y^2 + x y = x^3 - 13290632950903796089218578404113705 \, x - $ \\ $ 589748175639869043839018535079512741047463438442023 $, and \\ $ y^2 + x y = x^3 - 13300785823649058521269530800913705 \, x - $ \\ $ 588802020491225519147911238670522467018762520682023 $, \\ both with $ q_{\tau} = 2.19900 $ and $ N = 44947841915130 = 2 \, 3 \, 5 \, 7 \, 11 \, 17 \, 19 \, 23 \, 59 \, 103 \, 431 $, \\ isogenous to the Frey-Hellegouarch curve, twisted by $ -1 $, for the $ abc $-triple with \\ $ a = 40675641638471 = 7^3 \, 17^9 $, \\ $ b = 798697622664921529 = 11^3 \, 19^3 \, 23 \, 59^2 \, 103^3 $, \\ $ c = 798738298306560000 = 2^{20} \, 3^8 \, 5^4 \, 431^2 $, \\ $ q(a,b,c) = 1.31127 $, $ m(a,b,c) = 10.5021 $. \item Full tables \verb!output_medium_quality_qua.txt!, \verb!output_medium_quality_tam.txt! are on \cite{dW3}. \end{itemize} \subsection{Curves from high merit $ abc $-triples} There are $ 202 $ high merit $ abc $-triples available on \cite{dS}. For almost all of them I succeeded in applying the same procedure described above for high and medium quality $ abc $-triples. This produced the following results, made visible in Figure 1 as yellow dots. \begin{itemize} \item $ 190 $ curves have $ q_{\tau} > 1.5 $, \item $ 172 $ curves have $ q_{\tau} > 2 $, \item the curve with largest $ q_{\tau} = 2.18988 $ is the same as found with the largest $ \tau $ for the curves coming from high quality $ abc $-triples, \item the curve with largest $ \tau = 30644423884800 = 2^{25} \, 3^4 \, 5^2 \, 11 \, 41 $ is \\ $ y^2 + x y + y = x^3 - x^2 - 621574482712904069167623787332097562003319547609729892578\backslash $ \\ $ 282444666996972826570320570862 x - 5964690030130337213799773148403012416346828\backslash $ \\ $ 0237789284970994254311306494753746869776095054625199343391902064488625755369\backslash $ \\ $ 77569403651 $, \\ with $ q_{\tau} = 1.70510 $ and $ N = 43081596887429422193675039055866970 = $ \\ $ 2 \, 3^2 \, 5 \, 7 \, 11 \, 17 \, 19 \, 29 \, 43 \, 73 \, 83 \, 97 \, 103 \, 151 \, 577 \, 751 \, 3167 \, 1230379 $, \\ isogenous to the Frey-Hellegouarch curve, twisted by $ -3 $, for the $ abc $-triple with \\ $ a = 695606563606442148006101677581923 = 73^3 \, 97^2 \, 103^4 \, 577 \, 751 \, 3167 \, 1230379 $, \\ $ b = 57576591665034362126590541368210176398589952 = 2^{45} \, 17 \, 19^{10} \, 29^4 \, 43^5 \, 151 $, \\ $ c = 57576591665729968690196983516216278076171875 = 3^{11} \, 5^{33} \, 7^9 \, 11^2 \, 83^3 $, \\ $ q(a,b,c) = 1.28114 $, $ m(a,b,c) = 27.1356 $. \item Full tables \verb!output_high_merit_qua.txt!, \verb!output_high_merit_tam.txt! are on \cite{dW3}. \end{itemize} \subsection{Curves from unbeaten $ abc $-triples} Finally, there are $ 160 $ unbeaten $ abc $-triples available on \cite{dS}, not necessarily different from the triples in the categories treated above.
Because those $ abc $-triples quickly get amazingly large, I was only able to process the 30 with the smallest values of $ c $, and this did not yield any examples not already found above in other categories. Full tables \verb!output_unbeaten_qua.txt!, \verb!output_unbeaten_tam.txt! are on \cite{dW3}. \subsection{Curves from triples from triples} There is a nice trick to create new, hopefully better quality, $ abc $-triples from triples of lesser quality, provided some miracle occurs. Assume $ a, b, c $ is an $ abc $-triple of reasonable quality, and write $ d = a + c $, $ e = b + c $. Then look at the derived triples $ a, c, d $ and $ b, c, e $, and hope for the miracle that one of them is of reasonable quality as well, maybe even of quality $ > 1 $ so that it is an $ abc $-triple again. In that case I show how to get new triples of probably better quality than $ q(a,b,c) $. Note that $ a < b < c < d < e $, and observe that \[ \left\{ \begin{array}{ll} c + a = d \\ c - a = b \end{array} \right. \Longleftrightarrow \left\{ \begin{array}{ll} d + b = 2c \\ d - b = 2a \end{array} \right. , \quad\quad \left\{ \begin{array}{ll} c + b = e \\ c - b = a \end{array} \right. \Longleftrightarrow \left\{ \begin{array}{ll} e + a = 2c \\ e - a = 2b \end{array} \right. . \] Now put \[ \begin{array}{lcll} (A_1,B_1,C_1) & = & (a^2,bd,c^2), & \\ (A_2,B_2,C_2) & = & (b^2,4ac,d^2) & \text{ (if } b \text{ is even, divide by 4; if } b^2 > 4ac \text{, swap)}, \\ (A_3,B_3,C_3) & = & (b^2,ae,c^2) & \text{ (if } b^2 > ae \text{, swap)}, \\ (A_4,B_4,C_4) & = & (a^2,4bc,e^2) & \text{ (if } a \text{ is even, divide by 4)}. \end{array} \] Clearly all four new triples have $ A_i + B_i = C_i $ and $ \gcd(A_i,B_i) = 1 $ and $ A_i < B_i $, and to find out if they are $ abc $-triples only their quality has to be checked. Compared to the $ abc $-triple $ a, b, c $, the numerator of the quality function almost doubles, namely from at most $ \log e < \log c + \log 2 $ to at least $ \log \frac14 c^2 = 2 \log c - 2 \log 2 $. The denominator however also increases, but most probably by a factor smaller than $ 2 $, as it grows from a three-term radical like $ r(a,b,c) $ to a four-term radical like $ r(a,b,c,d) $. In an ideal case where $ r(a) \approx r(b) \approx r(c) \approx r(d) \approx r(e) $ this growth factor in the denominator is $ \approx 4/3 $. So in such an ideal case the quality goes up by a factor $ \approx \dfrac{2}{4/3} = \dfrac{3}{2} $. No practical case is ideal, but I will give some examples where the idea bears fruit, producing high quality or high merit $ abc $-triples, which in turn produce Frey-Hellegouarch curves with high Tamagawa quality. Note that also $ a + e = 2c $ and $ b + d = 2c $, so that computing $ A_1, B_1, C_1 $ from $ a, 2b, e $ gives the same result as computing $ A_4, B_4, C_4 $ from $ a, b, c $, and computing $ A_2, B_2, C_2 $ from $ 2a, b, d $ gives the same result as computing $ A_3, B_3, C_3 $ from $ a, b, c $. This idea of creating triples from triples is not new; it occurs in J.P.\ van der Horst's master thesis \cite{vdH}. I tried this out for all $ abc $-triples found on \cite{dS}. This resulted in 13 examples with quality above $ 1.4 $ or merit above $ 24 $. They are shown by black circles in Figure 1. Needless to say, they were all already present in these tables, so no new interesting curves with respect to Tamagawa quality were found, although the larger examples certainly lead to large $ \tau $ and $ q_{\tau} $. But I found three cases of interest.
The first is from $ a = 10 = 2 \, 5 $, $ b = 2187 = 3^7 $, $ c = 2197 = 13^3 $, with $ q(a,b,c) = 1.28975 $, which for $ e = 4384 = 2^5 \, 137 $ leads to a not too bad quality $ q(b,c,e) = 0.90396 $, and then produces two high quality $ abc $-triples: \\ $ A_3 = 43840 = 2^6 \, 5 \, 137 $, $ B_3 = 4782969 = 3^{14} $, $ C_3 = 4826809 = 13^6 $, \\ with $ q(A_3,B_3,C_3) = 1.41370 $, and \\ $ A_4 = 25 = 5^2 $, $ B_4 = 4804839 = 3^7 \, 13^3 $, $ C_4 = 4804864 = 2^8 \, 137^2 $, \\ with $ q(A_4,B_4,C_4) = 1.41328 $. The second is a similar case, from $ a = 383102329 = 23^4 \, 37^2 $, $ b = 58457678566023 = 3^7 \, 11^4 \, 61 \, 173^2 $, $ c = 58458061668352 = 2^{26} \, 19^3 \, 127 $, with $ q(a,b,c) = 1.13257 $, which for $ e = 116915740234375 = 5^9 \, 17^2 \, 43 \, 4817 $ leads to a not too bad quality $ q(b,c,e) = 0.854092 $, and then produces two high quality and high merit $ abc $-triples: \\ $ A_3 = 44790692380548068359375 = 5^9 \, 17^2 \, 23^4 \, 37^2 \, 43 \, 4817 $, \\ $ B_3 = 3417300183328464869570036529 = 3^{14} \, 11^8 \, 61^2 \, 173^4 $, \\ $ C_3 = 3417344974020845417638395904 = 2^{52} \, 19^6 \, 127^2 $, \\ with $ q(A_3,B_3,C_3) = 1.41918 $, $ m(A_3,B_3,C_3) = 29.8237 $, and \\ $ A_4 = 146767394485224241 = 23^8 \, 37^4 $, \\ $ B_4 = 13669290314405085785446416384 = 2^{28} \, 3^7 \, 11^4 \, 19^3 \, 61 \, 127 \, 173^2 $, \\ $ C_4 = 13669290314551853179931640625 = 5^{18} \, 17^4 \, 43^2 \, 4817^2 $, \\ with $ q(A_4,B_4,C_4) = 1.45022 $, $ m(A_4,B_4,C_4) = 34.4028 $. \\ Those two $ abc $-triples appeared above, at the curve with the largest Tamagawa product found from high quality $ abc $-triples as well as the largest Tamagawa quality found from high merit $ abc $-triples. The third is again a similar case, from $ a = 158810997195450625 = 5^4 \, 53^2 \, 67^6 $, \\ $ b = 4025783917396764928 = 2^8 \, 23^2 \, 61^6 \, 577 $, $ c = 4184594914592215553 = 17^6 \, 311 \, 823^3 $, with $ q(a,b,c) = 1.08916 $, which for $ d = 4343405911787666178 = 2 \, 3^4 \, 7 \, 13 \, 109^4 \, 2087219 $ leads to a not too bad quality $ q(a,c,d) = 0.847863 $, and then produces two high merit $ abc $-triples: \\ $ A_1 = 25220932830213426279247196812890625 = 5^8 \, 53^4 \, 67^{12} $, \\ $ B_1 = 17485613666400818352222466948402205184 = 2^9 \, 3^4 \, 7 \, 13 \, 23^2 \, 61^6 \, 109^4 \, 577 \, 2087219 $, \\ $ C_1 = 17510834599231031778501714145215095809 = 17^{12} \, 311^2 \, 823^6 $, \\ with $ m(A_1,B_1,C_1) = 30.0604 $, and \\ $ A_2 = 664559691245401291839706490968570625 = 5^4 \, 17^6 \, 53^2 \, 67^6 \, 311 \, 823^3 $, \\ $ B_2 = 4051734037392610655275387090022711296 = 2^{14} \, 23^4 \, 61^{12} \, 577^2 $, \\ $ C_2 = 4716293728638011947115093580991281921 = 3^8 \, 7^2 \, 13^2 \, 109^8 \, 2087219^2 $, \\ with $ m(A_2,B_2,C_2) = 26.5098 $. Full tables \verb!output_triples_from_triples_qua.txt!, \verb!output_triples_from_triples_tam.txt! are on \cite{dW3}.
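The construction above is mechanical. A minimal Python sketch (the function name \verb!derived_triples! is mine, for illustration) mirrors the rules for $ (A_i,B_i,C_i) $, including the parity divisions and swaps, and reproduces the four candidate triples of the first example:

\begin{verbatim}
def derived_triples(a, b, c):
    # the four candidate triples (A_i, B_i, C_i) defined in Section 3.6
    d, e = a + c, b + c
    t1 = (a*a, b*d, c*c)
    t2 = (b*b // 4, a*c, d*d // 4) if b % 2 == 0 else (b*b, 4*a*c, d*d)
    t3 = (b*b, a*e, c*c)
    t4 = (a*a // 4, b*c, e*e // 4) if a % 2 == 0 else (a*a, 4*b*c, e*e)
    # swap where needed so that A_i < B_i
    return [(min(A, B), max(A, B), C) for A, B, C in (t1, t2, t3, t4)]

for t in derived_triples(10, 2187, 2197):
    print(t)   # (25, 4804839, 4804864) is the high quality triple A_4, B_4, C_4
\end{verbatim}

Whether each candidate really is an $ abc $-triple (coprimality and quality $ > 1 $) still has to be checked, e.g., with the quality and merit sketch from Section 3.1.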
\section{Discussion} \begin{table}[h] $ \begin{array}{|l|rr|r|r|r|} \hline \text{database} & \text{\# curves} & \text{\# $abc$-triples} & \text{\# curves} & \text{largest } q_{\tau} & \text{largest } \tau \\ & & & \text{with } q_{\tau} > 1.5 & & \\ \hline \text{Cremona} & 3064705 & & 10795 & 2.30681 & 87040 \\ \text{LMFDB} & 759667 & & 0 & 1.22859 & 576 \\ \text{high quality} & & 241 & 841 & 2.39875 & 152202903552 \\ \text{medium quality} & & 1947 & 6342 & 2.39177 & 509607936 \\ \text{high merit} & & 202 & 190 & 2.18988 & 30644423884800 \\ \text{unbeaten} & & 160 & 24 & 2.18988 & 20615843020800 \\ \text{triples from triples} & & 13 & 42 & 2.18988 & 1594506608640 \\ \hline \text{all together} & & & 17760 & 2.39875 & 30644423884800 \\ \hline \end{array} $ \caption{Overview of found data on curves from different sources. Full data on \cite{dW3}.} \end{table} For a summary of my results, see Table 1, and also Figure 1. Note that not all collections are necessarily disjoint. As a first conclusion it seems justified to state that Frey-Hellegouarch curves for good $ abc $-triples are a good source for high quality Tamagawa products. Finding large $ \tau $ is probably not that hard, just by looking at curves with a large number of bad primes; the curve with largest $ \tau $ found here ($ \tau \approx 3 \times 10^{13} $) is a good example of that, having $ 18 $ bad primes. But finding curves with a high Tamagawa quality is another matter. One might describe Pasten's Conjecture 2 as cited above by the simple inequality \[ \limsup_{E/\mathbb{Q}} q_{\tau} \leq \dfrac{7}{3}\log 3 . \] Pasten already remarks that it could be open for improvement, and my experiments so far seem not to enthusiastically support a conjecture that $ \dfrac{7}{3}\log 3 $ is the true constant. As may be clear, I am also interested in a lower bound for $ \displaystyle\limsup_{E/\mathbb{Q}} q_{\tau} $. My experiments do not shed much light on this problem. It seems plausible to me to conjecture that at the very least $ \displaystyle\limsup_{E/\mathbb{Q}} q_{\tau} > 0 $, and I leave it to others to elaborate, and to come up with a well-argued conjectured value, or at least a positive lower bound, for $ \displaystyle\limsup_{E/\mathbb{Q}} q_{\tau} $. It does seem that Frey-Hellegouarch curves show split multiplicative reduction at a substantial portion of the bad primes, causing the local Tamagawa numbers at such primes to be the exponents of the primes in the discriminant $ 16 (abc)^2 $ and thus contributing to large $ \tau $'s, and this gives some hope that Pasten's analysis points in the right direction. I also leave it for others to investigate whether the methods of papers searching for large $ \text{Sha} $'s, in particular \cite{N}, \cite{DW} and \cite{B}, now geared towards small $ \tau $ for obvious reasons, can be adapted to searching for big $ \tau $'s at not too big conductors as well. The code I wrote for this little project is so embarrassingly trivial that I did not want to set up an online repository for it. Some of the most essential code snippets are mentioned in the text above; everything else is simple data manipulation.
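For completeness, here is the core computation as a minimal SageMath sketch (the curve is the Tamagawa quality record holder from Cremona's database in Section 2):

\begin{verbatim}
from math import log

def tamagawa_quality(E):
    # q_tau = log(tau) * log(log(N)) / log(N)
    N = float(E.conductor())
    tau = float(E.tamagawa_product())
    return log(tau) * log(log(N)) / log(N)

E = EllipticCurve([1, 0, 0, -1054050116, -12046088636400])
print(E.conductor(), E.tamagawa_product(), tamagawa_quality(E))
# 39270, 31104, and q_tau = 2.30681...
\end{verbatim}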
\section{Introduction}\label{sec:intro} Mass cytometry data have been used for high-throughput characterization of cell subpopulations based on unique combinations of surface or intracellular markers that may be expressed by each cell. Cytometry by time-of-flight (CyTOF) is a new technology that can rapidly quantify a large number of biological, phenotypic, or functional markers on single cells through use of metal-tagged antibodies. For example, CyTOF can identify up to 40 cell surface or intracellular markers in less time and at a higher resolution than previously available methods, such as fluorescence cytometry \citep{cheung2011screening}. Because CyTOF can reveal cellular diversity and heterogeneity that could not be seen previously, it has the potential to rapidly advance the study of cellular phenotype and function in immunology. Despite the potential of CyTOF, analysis of the data that it generates is computationally expensive and challenging, and statistical tools for making inferences about cell subpopulations identified by CyTOF are quite limited. Manual \lq\lq gating\rq\rq\ is a traditional method in which homogeneous cell clusters are sequentially identified and refined based on a given set of surface markers. Manual gating has several severe shortcomings, however, including its inherent subjectivity, since it requires manual analysis, and its lack of scalability for high-dimensional data with large numbers of markers. While manual gating is used very commonly in practice, a variety of computational methods that automatically identify cell clusters have been proposed to analyze high-dimensional cytometry data. Many existing automated methods use dimension reduction techniques and/or clustering methods, such as density-based or model-based clustering. For example, FlowSOM in \cite{van2015flowsom} uses an unsupervised neural-network-based method, called a self-organizing map (SOM), for clustering and dimension reduction. A low-dimensional representation of the marker vectors is obtained by using unsupervised neural networks for easy visualization in a graph called a map. FlowSOM is fast and can be used either as a starting point for manual gating, or as a visualization tool after gating has been performed. Other common approaches are density-based clustering methods, including DBSCAN \citep{ester1996density} and ClusterX \citep{chen2016cytofkit}, and model-based clustering methods, including flowClust \citep{lo2009flowclust} and BayesFlow \citep{johnsson2016bayesflow}, among many others. More sophisticated clustering methods based on Bayesian nonparametric models also have been proposed, for example, by \cite{soriano2019mixture}. \cite{weber2016comparison} performed a study to compare several clustering methods for high-dimensional cytometry data. They analyzed six publicly available cytometry datasets and compared identified cell subpopulations to cell population identities known from expert manual gating. They found that, in many scenarios, FlowSOM had significantly shorter runtimes, and that in many studies where manual gating was performed, FlowSOM produced the best clusterings, in terms of a metric that characterizes how well a clustering algorithm reproduces the cell clustering given by manual gating. While conventional clustering methods identify subgroups of cells with similar marker expression values, they often fail to provide direct inference on the identification and characterization of cell subpopulations.
With clustering methods, cells are clustered together if their expression levels are similar, and it is assumed implicitly that underlying cell subpopulations can be identified and constructed from clusters estimated directly from the marker expression levels. The usefulness of such conventional clustering approaches also is limited by the fact that observed numerical marker expression values may differ substantially due to variability between samples or between markers. Fig~\ref{fig:overview} illustrates a toy example. Suppose that the respective log expression levels of markers 1 and 2 are -2 and -4 on a given cell, and that the respective log expression levels of the markers on a second cell are -6 and -4. A negative (positive) log expression level implies that it is unlikely (likely) that a surface marker is expressed. Although the two cells have the same expression pattern (neither marker is expressed), and hence belong to the same subpopulation, a conventional clustering method is unlikely to include them in the same cluster because their marker 1 expression levels are very different. Furthermore, expression levels can differ significantly between samples, often due to technical variation in the cytometry measurement process, and cell clusters based on actual expression values may not serve as a useful surrogate for cell subpopulations. As a result, most existing clustering methods are used to analyze different samples separately. \begin{figure}[t!] \begin{center} \includegraphics[width=0.7\columnwidth]{img/misc/cytof-overview-improved.pdf} \end{center} \vspace{-0.1in} \caption{\small A stylized overview of the proposed feature allocation model (FAM). $\mbox{\boldmath $Z$}$ is a binary matrix whose columns define latent subpopulations, and $\mbox{\boldmath $w$}$ is a vector of abundances of the cell subpopulations. Two subpopulations are constructed in $\mbox{\boldmath $Z$}$ based on their marker expression patterns. Cells are clustered to the subpopulations based on the patterns of their observed expression levels. } \label{fig:overview} \end{figure} In this paper, we propose a Bayesian feature allocation model (FAM) to identify and place probabilities on cell subpopulations based on multiple cytometry samples of cell surface marker expression values. Our proposed FAM characterizes cell subpopulations as latent features defined in terms of their expression patterns, and clusters individual cells to one of the identified subpopulations. We will refer to each latent feature as a \lq\lq subpopulation.\rq\rq\ Markers often are expressed in more than one cell subpopulation, and different subpopulations can be characterized by distinctive patterns of marker expression. To represent subpopulation configurations, we introduce a random binary matrix $\mbox{\boldmath $Z$}$ whose rows and columns correspond to markers and subpopulations, respectively. We let 0 and 1 represent the non-expression and expression, respectively, of a marker in a subpopulation. Using the toy example in Fig~\ref{fig:overview}, in contrast to clustering methods, the FAM constructs latent subpopulations based on marker expression patterns as in $\mbox{\boldmath $Z$}$ (top of the figure). It assigns cells 1 and 2 to subpopulation 1, for which neither marker is expressed, and it assigns cell 3 to a subpopulation where marker 1 is expressed and marker 2 is not expressed (bottom right). We assume a finite Indian buffet process (IBP) as a prior distribution for $\mbox{\boldmath $Z$}$.
The IBP is a popular model for latent binary features, and may be obtained by taking the infinite limit of a Beta-Bernoulli process \citep{ghahramani2006infinite}. Applications of the IBP as FAMs for a range of biological applications are given by \cite{hai2011inferring, chen2013phylogenetic, xu2013nonparametric, sengupta2014bayclone, xu2015mad, lee2015bayesian, lee2016bayesian, ni2018bayesian}. \cite{griffiths2011indian} reviews some earlier applications of the IBP. Furthermore, we introduce a vector of subpopulation abundances $\mbox{\boldmath $w$},$ and allow the cell samples to have different values of $\mbox{\boldmath $w$}$. This approach provides a framework for joint analysis of multiple samples, and includes structures to account for large sample-to-sample variation and abnormalities, such as missing values due to technical artifacts in the cytometry data, while quantifying uncertainty in posterior inferences. This work is motivated by a dataset comprised of three CyTOF samples of surface marker expression levels in umbilical cord blood (UCB)--derived natural killer (NK) cells. NK cells play a critical role in cancer immune surveillance, and are the first line of defense against viruses and transformed tumor cells. NK cells have the intrinsic ability to infiltrate cancer tissues. Recently, NK cells have been used therapeutically to treat a variety of diseases \citep{wu2003natural, lanier2008up}. In particular, NK cells have emerged as a potentially powerful treatment modality for advanced cancers refractory to conventional therapies \citep{rezvani2015application, suck2016natural, shah2017phase, miller2005successful, lupo2019natural}. Because cell-surface protein expression levels are used as markers to examine the behavior of NK cells, accurate identification of diverse NK-cell subpopulations along with their composition is crucial to the process of obtaining more complete characterizations of their biological processes and functions. The goal of our statistical analysis is to identify and characterize NK cell subpopulations and functions across heterogeneous collections of these cells. This may provide critical information for guiding selective {\it ex vivo} expansion of UCB-derived NK cells for treating specific cancer types. The remainder of the paper is organized as follows. We present the proposed statistical model in \S~\ref{sec:prob-model}, simulation studies in \S~\ref{sec:sim-study}, and an analysis of the NK cell mass cytometry data in \S~\ref{sec:cb-analysis}. We close with concluding remarks in \S~\ref{sec:conclusions}. \section{Probability Model}\label{sec:prob-model} \subsection{Sampling Model} Index cell samples by $i = 1,2,...,I$. Suppose that $N_i$ cells, indexed by $n=1, \ldots, N_i$, are obtained from the $i^{th}$ sample, and the expression levels of $J$ markers on each cell within each sample are measured. Let $\tilde{y}_{i,n,j} \in \mathbb{R}^+$ denote the raw measurement of the expression level of marker $j$ on cell $n$ in sample $i$. While raw measurement values reflect actual expression or non-expression of markers on cells, they also vary between cells and between samples for several reasons, including biological heterogeneity in the range of expression among different populations, as well as experimental artifacts or batch effects, such as instrument fluctuations or signal crosstalk among channels designed for different markers. 
While, compared to conventional flow cytometry and the use of fluorescent antibodies, the use of pure metal isotopes minimizes spectral overlap among measurement channels in CyTOF, crosstalk still may be observed due to the presence of isotopic impurity, oxide formation, and properties related to the mass cytometer. Raw measurements are normalized using cutoff values computed by a flow (rather than mass) cytometry algorithm called flowDensity \citep{malek2014flowdensity}, which aims to gate predefined cell populations of interest, in settings where the gating strategy is known. This frees practitioners from the need to manually gate analysis results, but it relies substantially on user-provided information to produce good results. Consequently, cutoffs obtained from such algorithms are crude, but are useful as a starting point for our analysis. Let $c_{i,j}$ denote the cutoff obtained for marker $j$ in sample $i$. A marker of a cell is likely to be expressed if its observed expression level $\tilde{y}_{i,n,j} > c_{i,j}$, while a value $\tilde{y}_{i,n,j} < c_{i,j}$ may imply that marker $j$ is not expressed on cell $n$ in sample $i$. To reduce skewness of the marker distributions, we will consider the log transformed values $ y_{i,n,j}=\log\p{\tilde{y}_{i,n,j}/c_{i,j}} \in \mathbb{R}. $ This transformation makes 0 the reference point for dichotomizing marker expression and non-expression. To account for the fact that some $y_{i,n,j}$ may be missing due to experimental artifacts, we define the binary indicator $m_{i,n,j} = 1$ if $y_{i,n,j}$ is observed, and $m_{i,n,j} = 0$ if missing. Denote the probability that $y_{i,n,j}$ is observed by Pr$(m_{i,n,j}=1)=1-\rho_{i,n,j}(y_{i,n,j})$. Below, we will define the latent subpopulation membership indicator, $\lambda_{i,n},$ of cell $n$ in sample $i.$ For each cell in the $i^{th}$ sample, we assume conditional independence of the cell's $J$ marker values given its latent subpopulation, formally $y_{i,n,1},\cdots, y_{i,n,J} \mid \lambda_{i,n}$ are independent, and we assume the following joint model for $y_{i,n,j}$ and $m_{i,n,j}$, \begin{eqnarray} y_{i,n,j} \mid \mu_{i,n,j}, s_{i,n}^2, \lambda_{i,n} \overset{ind}{\sim} \text{N}(\mu_{i,n,j}, s^2_{i,n}), \mbox{ and } m_{i,n,j} \mid \rho_{i,n,j}(y_{i,n,j}), \lambda_{i,n} \overset{ind}{\sim} \text{Ber}(1-\rho_{i,n,j}(y_{i,n,j})). \label{eq:joint-like} \end{eqnarray} Below, we will relate the mean expression $\mu_{i,n,j}$ to the configuration of cell subpopulation $\lambda_{i,n}$. To reflect expert biological knowledge of the investigators, a model for $\rho_{i,n,j}$ as a function of $y_{i,n,j}$ will be given in the following section. \subsection{Priors}\label{priors} \paragraph*{Priors for latent cell subpopulation}\ \noindent We assume that each sample has a heterogeneous cell population, and denote the number of different latent subpopulations by $K$. The cell subpopulations are defined by columns of a $J \times K$ (marker, subpopulation) stochastic binary matrix $\bm{Z}$. The element $z_{j, k} \in \{0, 1\}$ of $\bm{Z}$ determines marker expression by subpopulation, with $z_{j,k}=0$ if marker $j$ is not expressed and $z_{j,k}=1$ if it is expressed for subpopulation $k$. We construct a {\it feature allocation prior} for $\bm{Z}$ as follows: For $j=1, \ldots J$ and $k=1, \ldots, K,$ \begin{eqnarray} z_{j,k} \mid v_k \overset{ind}{\sim} \text{Ber}\p{v_k} ~~ \mbox{ and }~~ v_k \mid \alpha \overset{iid}{\sim} \text{Be}(\alpha/K, 1). 
\label{eq:FAM} \end{eqnarray} As $K \rightarrow \infty$, the limiting distribution of $\bm{Z}$ in \eqref{eq:FAM} is the IBP \citep{ghahramani2006infinite} with parameter $\alpha$, after removing all columns that contain only zeros. We assume hyperprior $\alpha \sim \text{Gamma}(a_\alpha, b_\alpha)$ with mean $a_\alpha/b_\alpha$. The IBP, which is one of the most popular FAMs, thus defines a distribution over binary matrices having an unbounded number of columns (features). In the present context, this Bayesian model provides a very useful statistical tool for identifying marker expression patterns to define latent cell subpopulations. We assume that each of the $K$ cell subpopulations is possible in each sample, but allow their cellular fractions to differ between samples. In addition, we include a special, $(K+1)^{st}$ cell type, called a \lq\lq noisy cell,\rq\rq\ to address the problem that some cells do not belong to any of the $K$ cell subpopulations. In sample $i$, let $0 < \epsilon_i < 1$ denote the proportion of noisy cells and $(1-\epsilon_i)w_{ik}$ the proportion of subpopulation $k$, where $\mbox{\boldmath $w$}_i$ =$(w_{i,1},\ldots, w_{i,K})$ with $\sum_{k=1}^K w_{i,k}=1$ and $w_{i,k}>0,$ is a probability distribution on $\{1,\cdots,K\}.$ We assume priors $\epsilon_i \overset{iid}{\sim} \text{Be}(a_\epsilon, b_\epsilon)$ with fixed hyperparameters $a_\epsilon$ and $b_\epsilon$, and $\mbox{\boldmath $w$}_i \mid K \overset{iid}{\sim} \text{Dir}_K(d/K)$ with fixed hyperparameter $d$. For cell $n=1,\ldots, N_i$ in sample $i=1, \ldots, I,$ we introduce stochastic {\it latent subpopulation indicators} (equivalently, cell cluster memberships) $\lambda_{i,n} \in \{0, 1, \ldots, K\}$. We set $\lambda_{i,n}=0$ if cell $n$ in sample $i$ does not belong to any of the cell subpopulations in $\bm{Z}$, and set $\lambda_{i,n}=k>0$ if cell $n$ in sample $i$ belongs to subpopulation $k$. For the latent subpopulation indicators, we assume $\Pr(\lambda_{i,n}=0 \mid \epsilon_i) = \epsilon_i$ to account for noisy cells, and $\Pr(\lambda_{i,n}=k \mid \lambda_{i,n} \neq 0, \mbox{\boldmath $w$}_i) = w_{ik}$. Within each sample $i=1,\cdots,I,$ assigning cells to subpopulations using $\{\lambda_{i,n},\ i=1,\cdots,N_i\}$ induces cell clusters. Thus, in contrast with clustering methods that infer only cell clusters in the $i^{th}$ sample based on $\{y_{i,n,j}\},$ our proposed method produces direct inferences on both characterization of cell subpopulations and cell clusters, simultaneously for all samples. This is highly desirable because a primary aim is to identify and make inferences about cell subpopulations. Since the number of columns containing non-zero entries under the IBP is random, the dimensions of $\bm{Z}$ and $\mbox{\boldmath $w$}_i$ may vary during posterior computation. Because this dimension change may cause a high computational cost, especially for big datasets such as those obtained by CyTOF, we use a finite version of the IBP by fixing $K$. To accommodate the fact that the number of latent subpopulations is not known {\it a priori}, we consider a set of different values for $K,$ from which we select one value of $K$ using Bayesian model selection criteria. We will discuss this selection process in detail below. \paragraph*{Priors for mean expression level}\ \noindent The mean expression level $\mu_{i,n,j}$ of marker $j$ on cell $n$ in sample $i$ in \eqref{eq:joint-like} is determined by characterizing the cell's latent subpopulation. 
Recall that a cell $n$ either belongs to a subpopulation $\lambda_{i,n}=k >0$ in column $k$ of $\mbox{\boldmath $Z$},$ or the noisy cell subpopulation $\lambda_{i,n}=0$. For cells with a noisy cell subpopulation, we fix $\mu_{i,n,j}=0$ for all $j$ and $s^2_{i,n}=s^2_\epsilon$, where $s^2_\epsilon$ is fixed at a large value. For a cell with $\lambda_{i,n} \in \{1,\cdots,K\}$, if the marker is not expressed in cell subpopulation $\lambda_{i,n}$ (i.e., $z_{j, \lambda_{i,n}}=0$), we let its mean expression level take a negative value, $\mu_{i,n,j} < 0$. In particular, for $(i,n,j)$ with $z_{j, \lambda_{i,n}}=0$, we introduce a set of means for expression levels of markers not expressed, $\mu^\star_{0,\ell}= \sum_{r=1}^\ell \delta_{0,r}$, where $\delta_{0, \ell} \overset{iid}{\sim} \text{TN}^-(\psi_0, \tau_0^2)$, $\ell=1, \ldots, L_0$ with fixed $L_0$. Here $\text{TN}^-(\psi, \tau^2)$ denotes the normal distribution with mean $\psi$ and variance $\tau^2$ truncated above at zero. This construction induces the ordering $0 > \mu^\star_{0,1} > \ldots > \mu^\star_{0,L_0}$. We then let $\mu_{i,n,j}=\mu^\star_{0, \ell}$ with probability $\eta^0_{i,j,\ell}$. Note that even for a marker not expressed, positive $y_{i,n,j}$ can be observed due to measurement error or estimation error in the cutoff $c_{i,j}$, and the model accounts for such cases through $s^2_{i,n}$. Similarly, we assume that the mean expression level of marker $j$ takes a positive value ($\mu_{i,n,j} > 0$) if the marker is expressed ($z_{j, \lambda_{i,n}}=1$). For cases with $z_{j, \lambda_{i,n}}=1$, we construct another set of $\delta$, $\delta_{1, \ell} \overset{iid}{\sim} \text{TN}^+(\psi_1, \tau_1^2)$, $\ell=1, \ldots, L_1$ with fixed $L_1$, where $\text{TN}^+(\psi, \tau^2)$ denotes the normal distribution truncated below at zero with mean $\psi$ and variance $\tau^2.$ We let $\mu^\star_{1,\ell}= \sum_{r=1}^\ell \delta_{1,r}$, so $0 < \mu^\star_{1,1} < \ldots < \mu^\star_{1,L_1}$. We then let $\mu_{i,n,j}=\mu^\star_{1,\ell}$ with probability $\eta^1_{i,j,\ell}$. We also let $s_{i,n}^2=\sigma_i^2$ for $\lambda_{i,n} >0$ and assume $\sigma^2_i \overset{ind}{\sim} \text{IG}(a_\sigma, b_\sigma)$. This leads to a mixture of normals for $y_{i,n,j}$ whose location parameters are determined by the cell's (latent) subpopulation, \begin{eqnarray} y_{i,n,j} \mid z_{j,\lambda_{i,n}}=z, \bm\mu^\star_z, \bm\eta^z_{i,j}, \sigma^2_i ~\overset{ind}{\sim}~ F_{i,j}^z=\sum_{\ell=1}^{L_z} \eta^z_{i,j,\ell} \text{N}(\mu^\star_{z,\ell}, \sigma_i^2), ~ z \in \{0, 1\}, ~ k > 0. \label{eq:mixture-like} \end{eqnarray} Finally, we let $\bm\eta^z_{i,j} \overset{iid}{\sim}\text{Dir}_{L_z}(a_{\eta^z}/L_z)$, $z=0, 1$, $i=1, \ldots, I$ and $j=1, \ldots, J$. The mixture model in \eqref{eq:mixture-like} encompasses a wide class of distributions, such as multi-modal or skewed distributions. It captures virtually any departure from a conventional distribution, such as a parametric exponential family model, that may appear to give a good fit to the log-transformed expression values. A key property of (\ref{eq:mixture-like}) is that it allows cells with very different numerical expression values to have the same subpopulation if their marker expression/non-expression pattern is the same. This provides a basis for obtaining a succinct representation of cell subpopulations. 
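To fix ideas, the following minimal Python sketch simulates data from this model (all dimensions and hyperparameter values are illustrative assumptions; noisy cells and missing values are omitted, and a common noise standard deviation stands in for the sample-specific $\sigma^2_i$):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
I, J, K = 3, 8, 4            # samples, markers, subpopulations (illustrative)
N = [300, 200, 250]          # cells per sample
alpha, d = 1.0, 1.0          # IBP mass and Dirichlet hyperparameters
L0 = L1 = 3                  # numbers of mixture locations
sigma = 0.3                  # common noise s.d. (illustrative)

# finite feature allocation prior for Z, Eq. (2)
v = rng.beta(alpha / K, 1.0, size=K)
Z = (rng.random((J, K)) < v).astype(int)

def trunc_normal(psi, tau, size, negative):
    # rejection sampler for TN^-(psi, tau^2) or TN^+(psi, tau^2)
    out = np.empty(0)
    while out.size < size:
        x = rng.normal(psi, tau, size=10 * size)
        out = np.concatenate([out, x[x < 0] if negative else x[x > 0]])
    return out[:size]

# ordered locations via cumulative sums of truncated normals
mu0 = np.cumsum(trunc_normal(-1.0, 1.0, L0, True))    # 0 > mu*_{0,1} > ...
mu1 = np.cumsum(trunc_normal(1.0, 1.0, L1, False))    # 0 < mu*_{1,1} < ...

y = []
for i in range(I):
    w = rng.dirichlet([d / K] * K)                    # abundances for sample i
    lam = rng.choice(K, size=N[i], p=w)               # latent subpopulations
    eta0 = rng.dirichlet([1.0] * L0, size=J)          # mixture weights (illus.)
    eta1 = rng.dirichlet([1.0] * L1, size=J)
    yi = np.empty((N[i], J))
    for j in range(J):
        z = Z[j, lam]                                 # expression indicators
        mu = np.where(z == 1,
                      mu1[rng.choice(L1, size=N[i], p=eta1[j])],
                      mu0[rng.choice(L0, size=N[i], p=eta0[j])])
        yi[:, j] = rng.normal(mu, sigma)
    y.append(yi)
\end{verbatim}

This sketch only illustrates the generative form of the likelihood; posterior inference proceeds by MCMC as described in \S~\ref{sec:sampling-via-mcmc}.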
Because all $(i, n, j)$ share the locations $\bm\mu^\star_{z}$ in \eqref{eq:mixture-like}, the model borrows strength across both samples and markers, while $\bm \eta^z_{i,j}=(\eta^z_{i,j,1}, \ldots, \eta^z_{i,j,L_z})$ allows the distribution of $y_{i,n,j}$ to vary across both sample $i$ and marker $j$. The construction of $\mu^\star_{z, \ell}$ through $\delta_{z, \ell}$ also ensures the ordering of the $\mu^\star_{z, \ell}$ and circumvents potential identifiability and label-switching issues that may be present in conventional Bayesian mixture models \citep{celeux2000computational, stephens2000dealing, jasra2005markov, fruhwirth2006finite}. \paragraph*{Model for data missingness mechanism}\ \noindent We next build a model for the data missingness mechanism. To do this, we incorporate information provided by a subject area expert, namely that a marker expression level is recorded as \lq\lq missing\rq\rq\ in a cell when the marker has a very weak signal, strongly implying that the marker is not expressed on that cell. There is an extensive literature on analyzing data with observations missing not at random, including methods for Bayesian data imputation and frequentist multiple imputation \citep{rubin1974characterizing, rubin1976inference, allison2001missing, schafer2002missing, franks2016non}. The dataset does not contain information for inferring the missingness mechanism, and any assumptions about the distribution of unobserved data are not testable. Consequently, it cannot be anticipated that the imputed value of a missing $y_{i,n,j},$ under any assumed missingness mechanism, will be close to its potentially observed numerical value, beyond the key fact that the potential value is very likely negative. We thus focus on estimating the probability that a missing value corresponds to non-expression of a marker, since the task of recovering $\mbox{\boldmath $Z$}$, $\bm w$ and $\bm \lambda$, which are the primary interest, is not affected. We model missingness conditional on $y_{i,n,j}$ by assuming a logit regression model for the probability $\rho_{i,n,j}$ that $y_{i,n,j}$ is missing, \begin{eqnarray} \text{logit}(\rho_{i,n,j}) = \beta_{0i} + \beta_{1i} y_{i,n,j} + \beta_{2i} y_{i,n,j}^2. \label{eq:link} \end{eqnarray} This quadratic function of $y_{i,n,j}$ is assumed in the real-valued domain of $\text{logit}(\rho_{i,n,j})$ to allow values of $\bm{\beta}_i = (\beta_{0i},\beta_{1i},\beta_{2i})$ in the $i^{th}$ sample for which negative $y_{i,n,j}$ yield a larger probability $\rho_{i,n,j}$ of being missing. To specify values of $\bm{\beta}_i$ in \eqref{eq:link} for each sample $i=1,\cdots,I$, we take an empirical approach: we use the minimum, first quartile, and median of the negative $y_{i,n,j}$ values, set their $\rho_{i,n,j}$ values to .05, .80 and .50, respectively, and solve for $\bm{\beta}_i$. Under this specification of $\bm{\beta}_i$, imputed values of $y_{i,n,j}$ take a negative value with large probability and their distributions are very similar to those of the observed $y_{i,n,j}<0$ in sample $i.$ We performed sensitivity analyses on this specification of the $\bm{\beta}_i$'s to examine robustness of the estimates of $\mbox{\boldmath $Z$}$, $\bm w$ and $\bm \lambda$, the parameters of primary interest. Additionally, in our simulation studies, missing values were generated under a mechanism different from that in \eqref{eq:link} to further examine robustness. \S~\ref{sec:sim-study} and \S~\ref{sec:cb-analysis} provide details of the sensitivity analyses.
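The empirical calibration of $\bm\beta_i$ just described amounts to solving a $3\times 3$ linear system in $(\beta_{0i},\beta_{1i},\beta_{2i})$. A minimal sketch follows, with illustrative anchor points; in the actual analysis the anchors are the sample-specific quantiles described above.
\begin{verbatim}
# Solve logit(rho_q) = beta0 + beta1*y_q + beta2*y_q^2 at three
# anchor points (y_q, rho_q); a sketch with illustrative values.
import numpy as np

def solve_beta(y_pts, rho_pts):
    y = np.asarray(y_pts, dtype=float)
    rho = np.asarray(rho_pts, dtype=float)
    V = np.column_stack([np.ones_like(y), y, y ** 2])  # quadratic design
    return np.linalg.solve(V, np.log(rho / (1.0 - rho)))

# e.g., minimum, first quartile, and median of the negative y's in a
# sample, mapped to missingness probabilities .05, .80, and .50:
beta_i = solve_beta([-6.0, -4.0, -2.0], [0.05, 0.80, 0.50])
\end{verbatim}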
\def\CPO{\text{CPO}} \def\LPML{\text{LPML}} \def\data{\text{data}} \paragraph*{Selection of $K$}\label{par:sel-K}\ \noindent Instead of estimating $K$, we cast the problem of selecting a value for $K$ as a model comparison problem. This approach reduces computational burden, especially for large datasets, but identifying a value of $K$ that optimizes model fit while penalizing high model complexity is still challenging. We choose $K$ using two model selection criteria, the deviance information criterion (DIC, \cite{dic}) and the log pseudo marginal likelihood (LPML, \cite{geisser1979predictive, gelfand1994bayesian}). The DIC and LPML are commonly used to quantify goodness-of-fit for model comparison in the Bayesian paradigm. The DIC measures posterior prediction error based on deviance penalized by model complexity, with smaller values corresponding to a better fit. The LPML is a metric based on cross-validated posterior predictive probability, and is defined as the sum of the logarithms of conditional predictive ordinates (CPOs), with larger LPML corresponding to a better fit to the data. Details of the computation of DIC and LPML are given in Supp.\ \S \ref{sec:lpml-dic}. In addition, we count the number of subpopulations having negligible weights, $\sum_{i,k} \mbox{I}(w_{i,k} < 1\%)$, for each value of $K$ and plot the LPML against the number of such subpopulations. A model with larger $K$ may produce cell subpopulations with very small $w_{i,k}$ that make only subtle contributions to model fit in terms of LPML. We thus search for the value of $K$ at which the rate of increase in LPML drops. \cite{miller2018robust} used a similar calibration method to tune a model hyperparameter that determines how much coarsening is required to obtain a model that maximizes model fit while maintaining low model complexity. \subsection{Posterior Computation}\label{sec:sampling-via-mcmc} Let $\bm{\theta}=\bc{\mbox{\boldmath $Z$}, \mbox{\boldmath $w$}, \bm \delta_0, \bm \delta_1, \bm \sigma^2, \bm \eta^0, \bm \eta^1, \bm \lambda, \bm v, \bm \epsilon, \alpha}$ denote all model parameters. Let $\bm{y}$ and $\m$ denote the vectors of $y_{i,n,j}$ and $m_{i,n,j}$ values, respectively, for all $(i,n,j)$. The joint posterior distribution is \begin{eqnarray} p(\bm{\theta} \mid \bm{y}, \m, K) &\propto & p(\bm{\theta} \mid K) \prod_{i,n,j} p(m_{i,n,j} \mid y_{i,n,j}, \bm{\theta}, K) p(y_{i,n,j} \mid \bm{\theta}, K) \nonumber\\ &\propto & p(\bm{\theta} \mid K) \prod_{i,n} \bk{ \prod_j \rho_{i,n,j}^{1 - m_{i,n,j}} \times \sum_{\ell=1}^{L_{z_{j, \lambda_{i,n}}}} \eta^{z_{j, \lambda_{i,n}}}_{i,j,\ell} \phi(y_{i,n,j} \mid \mu^\star_{{z_{j, \lambda_{i,n}}}, \ell}, \sigma^2_i) }^{1(\lambda_{i,n}>0)} \nonumber \\ && \qquad \times \bk{ \prod_j \rho_{i,n,j}^{1 - m_{i,n,j}} \times \phi(y_{i,n,j} \mid 0, s^2_\epsilon) }^{1(\lambda_{i,n}=0)}, \label{eq:joint-post} \end{eqnarray} where $\phi(y \mid \mu, \sigma^2)$ is the density of the normal distribution with mean $\mu$ and variance $\sigma^2$ evaluated at $y$. Since $\rho_{i,n,j}$ is constant for an observed $y_{i,n,j}$ with fixed $\bm\beta_i$, the factors $p(m_{i,n,j}=1 \mid y_{i,n,j})=(1-\rho_{i,n,j})^{m_{i,n,j}}$ for observed $y_{i,n,j}$ do not appear in \eqref{eq:joint-post}. Posterior simulation can be done via standard Markov chain Monte Carlo (MCMC) methods with Gibbs and Metropolis steps to draw samples from the posterior distribution. Each parameter is updated by sampling from its full conditional distribution. Details of the posterior simulation are described in Supp.\ \S \ref{sec:post-comp}.
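Given the resulting posterior draws, the CPO and LPML used above for selecting $K$ can be estimated with the standard Monte Carlo identity $\widehat{\CPO}_{i,n} = \{B^{-1}\sum_{b=1}^B 1/p(\bm y_{i,n} \mid \bm{\theta}^{(b)})\}^{-1}$. A minimal sketch, assuming a precomputed array of per-cell log-likelihoods (the likelihood terms are those of \eqref{eq:joint-post}):
\begin{verbatim}
# LPML = sum over cells of log CPO, with CPO estimated as the harmonic
# mean of the per-draw likelihoods; computed stably on the log scale.
import numpy as np

def lpml(loglik):                  # loglik[b, c] = log p(y_c | theta^(b))
    B = loglik.shape[0]
    neg = -loglik
    m = neg.max(axis=0)            # log-sum-exp over draws, per cell
    log_cpo = np.log(B) - (m + np.log(np.exp(neg - m).sum(axis=0)))
    return log_cpo.sum()
\end{verbatim}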
Summarizing the joint posterior distribution $p(\bm{\theta} \mid \bm{y}, \m, K)$ is challenging, especially for $\bm{Z}$, which may be susceptible to the label-switching problems common in mixture models. The distributions of $\mbox{\boldmath $w$}_i$ and $\bm \lambda_i$ depend on $\bm{Z}$. To summarize the posterior distribution of $(\bm{Z},\mbox{\boldmath $w$}_i, \bm \lambda_i)$ with point estimates, we extend the sequentially-allocated latent structure optimization (SALSO) method in \cite{salso} to incorporate $\mbox{\boldmath $w$}_i$. To summarize random feature allocation matrices, we first construct $\bm A_i=\{A_{i,(j,j')}(\mbox{\boldmath $Z$})\}$, the $J \times J$ pairwise allocation matrix corresponding to a binary matrix $\mbox{\boldmath $Z$}$, where \begin{eqnarray} A_{i,(j, j')}(\mbox{\boldmath $Z$}) = \sum_{k=1}^K w_{i,k} \times 1(z_{j,k}=1) \times 1(z_{j',k}=1), ~~~\text{for } 1\leq j, j^\prime \leq J, \label{eq:compute-A} \end{eqnarray} is the number of active features that markers $j$ and $j'$ have in common, weighted by the $w_{i,k}$. The form of \eqref{eq:compute-A} encourages the selection of $\mbox{\boldmath $Z$}$ based on subpopulations that are prevalent in the samples. We then use constrained optimization to find a point estimate $\hat{\mbox{\boldmath $Z$}}_i$ for sample $i$ that minimizes the sum of the element-wise squared distances, \begin{eqnarray*} \text{argmin}_{\bm Z}\sum_{j=1}^J\sum_{j'=1}^J(A_{i, (j,j')}(\mbox{\boldmath $Z$}) - \bar{A}_{i, (j,j')})^2, \label{eq:salso} \end{eqnarray*} where $\bar A_{i, (j, j^\prime)}$ is the pairwise allocation matrix averaged over the posterior distribution of $\mbox{\boldmath $Z$}$ and $\mbox{\boldmath $w$}_i$. We use posterior Monte Carlo samples to obtain posterior point estimates $\hat{\bm{Z}}_i$ as follows. Suppose that we obtain $B$ posterior samples simulated from the posterior distribution of $\bm{\theta}$. For the $b^{th}$ posterior sample of $\bm{Z}$ and $\mbox{\boldmath $w$}_i$, we compute the $J \times J$ pairwise allocation matrix, $\bm A_i^{(b)} =\{A^{(b)}_{i,(j,j')}\}$, $b=1, \ldots, B$, and then the mean pairwise allocation matrix $\bar{\bm A}_i = \sum_{b=1}^B \bm A_i^{(b)} / B$. We determine a posterior point estimate of $\mbox{\boldmath $Z$}$ for sample $i$ by minimizing the mean squared deviation over the posterior draws, $ \hat{\bm Z}_i = \bm{Z}^{(\hat b)}$ with $\hat b = \text{argmin}_{b} \sum_{j,j'} (A_{i,(j,j')}^{(b)} - \bar A_{i,(j,j')})^2, $ so that $\hat\bm{Z}_i \in \bc{\bm{Z}^{(1)}, \dots, \bm{Z}^{(B)}}$. For $\hat{\bm{Z}}_i = \bm{Z}^{(\hat b)}$, we report the posterior point estimates $\hat{\mbox{\boldmath $w$}}_i=\mbox{\boldmath $w$}_i^{(\hat b)}$ and $\hat{\lambda}_{i,n}=\lambda_{i,n}^{(\hat b)}$.
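A minimal sketch of this point-estimate computation (illustrative Python, not the authors' implementation; \texttt{Z\_draws} and \texttt{w\_draws} stand for the $B$ posterior draws for one sample):
\begin{verbatim}
# Weighted pairwise allocation matrix A_i(Z), and the draw minimizing
# the squared distance to the posterior mean matrix A_bar.
import numpy as np

def pairwise_alloc(Z, w):                # Z: (J, K) binary; w: (K,)
    return (Z * w) @ Z.T                 # A[j,j'] = sum_k w_k z_jk z_j'k

def point_estimate(Z_draws, w_draws):
    A = np.stack([pairwise_alloc(Z, w) for Z, w in zip(Z_draws, w_draws)])
    A_bar = A.mean(axis=0)
    b_hat = int(np.argmin(((A - A_bar) ** 2).sum(axis=(1, 2))))
    return Z_draws[b_hat], w_draws[b_hat], b_hat   # Z-hat, w-hat, index
\end{verbatim}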
In addition, we implemented variational inference (VI), which approximates the posterior distribution of $\bm{\theta}$ through optimization \citep{wainwright2008graphical, blei2017variational, zhang2018advances}. Because VI tends to be faster than MCMC, it is a popular emerging alternative, especially when models are complex and/or a dataset is large. In particular, we used automatic differentiation variational inference (ADVI) \citep{advi}, which makes use of automatic differentiation to simplify the process of implementing variational inference for differentiable models. ADVI requires no model-specific hand derivations, and is relatively simple to implement when an automatic differentiation library such as PyTorch \citep{paszke2017automatic}, Tensorflow \citep{tensorflow2015-whitepaper}, or Flux \citep{flux} is available. Details of the VI implementation using ADVI are included in Supp.\ \S~\ref{sec:vi}. \section{Simulation Studies}\label{sec:sim-study} \begin{table}[t!] \begin{subtable}{.5\linewidth} \centering \begin{tabular}{c} \includegraphics[width=.95\linewidth]{img/post/sim/small/best/img/Z_true-improved-1.pdf} \end{tabular} \caption{ $\bm{Z}^{\mbox{\tiny TR}}$} \end{subtable}% \begin{subtable}{.5\linewidth} \centering \begin{tabular}{|c|rrrrr|} \hline & \multicolumn{5}{c|}{Cell Subpopulations}\\ & $k=1$ & $k=2$ & $k=3$ & $k=4$ & $k=5$ \\ \hline sample 1 & 0.068& 0.163& 0.351& 0.297& 0.118\\ sample 2 & 0.194& 0.282& 0.066& 0.257& 0.199\\ sample 3 & 0.112& 0.141& 0.224& 0.119& 0.402\\ \hline \end{tabular} \caption{$\mbox{\boldmath $w$}^{\mbox{\tiny TR}}$} \end{subtable} \caption{Design of Simulation 1. $\bm{Z}^{\mbox{\tiny TR}}$ and $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}$ are illustrated in (a) and (b), respectively. $K^{\mbox{\tiny TR}}=5$, $J=20$, and $I=3$ are assumed. In (a), black represents $z^{\mbox{\tiny TR}}_{j,k}=1$ (marker expression) and white represents $z^{\mbox{\tiny TR}}_{j,k}=0$ (marker non-expression).} \label{tab:sim1-tr} \end{table} In this section, we summarize simulations to assess the performance of the proposed FAM-based method for identifying features and clustering cells within each sample, and compare it to an alternative method. We simulated data for three samples, each with 20 markers, consisting of 4000, 500, and 1000 cells, respectively. Thus, $I=3$, $J=20$, and $N_i=4000$, 500, and 1000 for $i=1$, 2, and 3. We let the true number of latent features (subpopulations) be $K^{\mbox{\tiny TR}}=5$ and specified a $J \times K^{\mbox{\tiny TR}}$ (binary) feature-allocation matrix $\mbox{\boldmath $Z$}^{\mbox{\tiny TR}}$ and $K^{\mbox{\tiny TR}}$-dimensional vectors $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}_i$ as follows. We first simulated $\mbox{\boldmath $Z$}^{\mbox{\tiny TR}}$ by setting $z^{\mbox{\tiny TR}}_{j,k}=1$ with probability 0.6. If any row or column of $\mbox{\boldmath $Z$}^{\mbox{\tiny TR}}$ consisted entirely of 0's, the entire matrix was re-sampled. We then simulated $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}_i$ from a Dirichlet distribution with parameters given by a random permutation of $(1, \ldots, K^{\mbox{\tiny TR}})$ for each $i$. This makes it likely that $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}_{i}$ contains both large and small values. $\mbox{\boldmath $Z$}^{\mbox{\tiny TR}}$ and $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}_i$ are shown in Table~\ref{tab:sim1-tr}. We set the abundance of noisy cells to $\epsilon_i^{\mbox{\tiny TR}} = 0.05$ for all $i$. We specified the mixture models for the expression levels by setting $\bm\mu^{\star, {\mbox{\tiny TR}}}_{0} = (-1, -2.3, -3.5)$ and $\bm\mu^{\star, {\mbox{\tiny TR}}}_{1} = (1, 2, 3)$ with $L^{0, {\mbox{\tiny TR}}}=L^{1, {\mbox{\tiny TR}}}=3$, and simulating mixture weights $\bm{\eta}_{i,j}^{z,{\mbox{\tiny TR}}}$ from a Dirichlet distribution with parameters given by a random permutation of $(1,\ldots, L^{z,{\mbox{\tiny TR}}})$, for $z \in \bc{0, 1}$ and for each $(i, j)$. The values of $\sigma^{2, {\mbox{\tiny TR}}}_i$ were set to 0.2, 0.1, and 0.3 for samples 1, 2, and 3, respectively. We then simulated latent subpopulation indicators $\lambda_{i,n}^{\mbox{\tiny TR}}$ with probabilities $\Pr(\lambda_{i,n}^{\mbox{\tiny TR}}=0)=\epsilon_i^{\mbox{\tiny TR}}$ and $\Pr(\lambda_{i,n}^{\mbox{\tiny TR}}=k \mid \lambda_{i,n}^{\mbox{\tiny TR}} \neq 0)=w_{i,k}^{\mbox{\tiny TR}}$.
We generated $y_{i,n,j} \overset{iid}{\sim} \text{N}(0, 9)$ for all $(i, n, j)$ with $\lambda^{\mbox{\tiny TR}}_{i,n}=0$. Otherwise, we generated $y_{i,n,j}$ from a mixture of normals, $\sum_{\ell=1}^{L^{z,{\mbox{\tiny TR}}}} \eta^{z, {\mbox{\tiny TR}}}_{i,j,\ell}\times \text{N}(\mu^{\star,{\mbox{\tiny TR}}}_{z,\ell}, \sigma^{2, {\mbox{\tiny TR}}}_{i})$ given $z^{\mbox{\tiny TR}}_{j,\lambda_{i,n}^{\mbox{\tiny TR}}}=z$ for each $(i, n, j)$. To simulate the missingness indicators, $m_{i,n,j}$, we first generated the proportions $p_{i,j}$ of missing values for each $(i, j)$ from a $ \text{Unif} \p{0, 0.7\sum_k w^{\mbox{\tiny TR}}_{i,k}(1-z^{\mbox{\tiny TR}}_{j,k})}$ and sampled $p_{i,j}\times N_i$ cells without replacement with probability proportional to $\{1 + \exp\p{9.2 + 2.3 y_{i,n,j}}\}^{-1}$ (see the code sketch below). Under the true missingness mechanism, a marker having a lower expression level thus has a higher chance of being recorded as missing. Note that the true mechanism is different from that assumed in \eqref{eq:link}. Heatmaps of the simulated $\bm{y}$ are shown in Fig~\ref{fig:sim1-post}(b), (d) and (f). The $y_{i,n,j}$'s are sorted within a sample according to their posterior subpopulation indicator estimates $\hat{\lambda}_{i,n}$ (explained later). The red, blue, and black colors represent high expression levels, low expression levels, and missing values, respectively. We fit the model separately for each $K = 2,3, \ldots, 10$. For all $K$, we fixed $L_0=L_1=5$ and $s_\epsilon^2=10$. We specified the remaining fixed hyperparameters as follows: $a_\alpha=b_\alpha=0.1$ for $\alpha$; $\psi_z=1$ and $\tau^2_z=1$ for $\delta_{z, \ell}$; $a_\sigma=3$ and $b_\sigma=2$ for $\sigma^2_i$; $a_{\eta^z}=1$ for $\bm{\eta}^z_{i,j}$; $d=1$ for $\mbox{\boldmath $w$}_i$; and $a_\epsilon=1$ and $b_\epsilon=99$ for $\epsilon_i$. We used the empirical approach in \S~\ref{sec:prob-model} to obtain values of $\bm\beta$ for the missingness mechanism. For each $i$, we initialized the missing values at $-\beta_{1i} / (2\beta_{2i})$, which corresponds to the largest missingness probability {\it a priori}. To initialize $\lambda_{i,n}$, $\mbox{\boldmath $w$}_i$, $\mbox{\boldmath $Z$}$, $\alpha$ and $\bm{\eta}^z_{i,j}$, we applied model-based clustering via finite Gaussian mixture models using the R package mclust \citep{mclust}, and used the resulting clustering of the $y_{i,n,j}$. We then drew samples of $\bm{\theta}$ and imputed missing values of $y_{i,n,j}$ using MCMC simulation based on 16,000 iterations, discarding the first 10,000 iterations as burn-in for each model and thinning by keeping every other draw. We diagnosed convergence and mixing of the posterior MCMC simulations using trace plots and found no evidence of practical convergence problems. For a model with $K=5$, it took 38 minutes per 1000 iterations on an interactive Linux server with four Intel Xeon E5-4650 processors and 512 GB of random access memory.
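For reference, the simulated missingness mechanism described at the beginning of this section can be sketched as follows. The helper below is hypothetical, and $m=1$ denotes an observed value, as in \eqref{eq:joint-post}.
\begin{verbatim}
# Sample p_ij * N_i cells without replacement, with selection weights
# that are larger for smaller y (lower expression -> more likely missing).
import numpy as np

def simulate_missing(y_ij, p_ij, seed=0):     # y_ij: (N_i,) for marker j
    rng = np.random.default_rng(seed)
    n_miss = int(round(p_ij * y_ij.size))
    w = 1.0 / (1.0 + np.exp(9.2 + 2.3 * y_ij))
    idx = rng.choice(y_ij.size, size=n_miss, replace=False, p=w / w.sum())
    m = np.ones(y_ij.size, dtype=int)         # m = 1: observed
    m[idx] = 0                                # m = 0: missing
    return m
\end{verbatim}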
\begin{figure} \begin{center} \begin{tabular}{ccc} \includegraphics[width=.32\columnwidth]{img/sim-paper/metrics/Nfac500/lpml.pdf} & \includegraphics[width=.32\columnwidth]{img/sim-paper/metrics/Nfac500/dic.pdf} & \includegraphics[width=.32\columnwidth]{img/sim-paper/metrics/Nfac500/lpml-vs-numsmallclus-improved.pdf} \\ {(a) LPML} & {(b) DIC} & {(c) Calibration of $K$} \\ \end{tabular} \end{center} \vspace{-0.1in} \caption{Results of Simulation 1. Plots of (a) LPML = log pseudo marginal likelihood, (b) DIC = deviance information criterion, and (c) the calibration metric, for $K=2, \dots, 10$.} \label{fig:metrics-sim1} \end{figure} For each value of $K,$ we computed the LPML and DIC, and obtained point estimates $\hat{\mbox{\boldmath $Z$}}_i$, $\hat{\mbox{\boldmath $w$}}_i$ and $\hat{\bm\lambda}_i$ using the method described in \S~\ref{sec:sampling-via-mcmc}. Figures \ref{fig:metrics-sim1}(a) and (b) respectively show plots of LPML and DIC as functions of $K$. Fig \ref{fig:metrics-sim1}(c) plots LPML against the number of subpopulations with $\hat{w}_{i,k} < 1\%$. For $K > 5$, the increase in LPML is minimal while subpopulations with negligible weights are added. The plots clearly indicate that $\hat{K}=5$ yields a parsimonious model with a good fit. Fig~\ref{fig:sim1-post} illustrates $\hat{\mbox{\boldmath $Z$}}_i$, $\hat{\mbox{\boldmath $w$}}_i$ and $\hat{\lambda}_{i,n}$ for $\hat{K}=5$. Panels (a), (c) and (e) show $\hat{\mbox{\boldmath $Z$}}_i$ and $\hat{\mbox{\boldmath $w$}}_i$ for samples 1, 2, and 3, respectively. The subpopulations with $\hat w_{i,k} > 1\%$ are included in the plots of $\hat{\mbox{\boldmath $Z$}}_i$. The estimates $\hat{\mbox{\boldmath $Z$}}_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are close to their true values in Table~\ref{tab:sim1-tr} for all samples, implying that the true cell population structure is well recovered. The heatmaps of $\bm y$ rearranged by the cell clustering membership estimates $\hat{\lambda}_{i,n}$ are shown in panels (b), (d), and (f) of Fig~\ref{fig:sim1-post}, where the colors red, blue, and black represent high, low, and missing expression levels, respectively. The horizontal yellow lines separate cells by $\hat{\lambda}_{i,n}$. The figures show that the cell clustering based on the estimated subpopulations captures the true clustering of $\bm y$ quite well. \begin{figure}[h!] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\columnwidth]{img/post/sim/small/best/img/Z1-improved-1.pdf} & \includegraphics[width=0.45\columnwidth]{img/post/sim/small/best/img/y1.pdf} \\ {(a) $\hat{\bm{Z}}^\prime_1$ \& $\hat{\mbox{\boldmath $w$}}_1$} & {(b) heatmap of $y_{1nj}$} \\ \includegraphics[width=0.45\columnwidth]{img/post/sim/small/best/img/Z2-improved-1.pdf} & \includegraphics[width=0.45\columnwidth]{img/post/sim/small/best/img/y2.pdf} \\ {(c) $\hat{\bm{Z}}^\prime_2$ \& $\hat{\mbox{\boldmath $w$}}_2$} & {(d) heatmap of $y_{2nj}$}\\ \end{tabular} \end{center} \vspace{-0.1in} \caption{Results of Simulation 1. In (a) and (c), the transpose $\hat{\bm{Z}}^\prime_i$ of $\hat \mbox{\boldmath $Z$}_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for samples 1 and 2, respectively, with markers that are expressed denoted by black and not expressed by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. Heatmaps of $\bm y_i$ are shown for sample 1 in (b) and sample 2 in (d). Cells are ordered by posterior point estimates of their subpopulation indicators, $\hat{\lambda}_{i,n}$. Cells are given in rows and markers are given in columns. High and low expression levels are represented by red and blue, respectively, and black represents missing values. Yellow horizontal lines separate cells into five subpopulations.} \label{fig:sim1-post} \end{figure} \begin{figure}[h!]
\begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\columnwidth]{img/post/sim/small/best/img/Z3-improved-1.pdf} & \includegraphics[width=0.45\columnwidth]{img/post/sim/small/best/img/y3.pdf} \\ {(e) $\hat{\bm{Z}}^\prime_3$ \& $\hat{\mbox{\boldmath $w$}}_3$} & {(f) heatmap of $y_{3nj}$}\\ \end{tabular} \end{center} \vspace{-0.1in} \caption*{Fig~\ref{fig:sim1-post}: Results of Simulation 1 (continued). In (e), the transpose $\hat{\bm{Z}}^\prime_i$ of $\hat \mbox{\boldmath $Z$}_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for sample 3, with markers that are expressed denoted by black and not expressed by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. The heatmap of $\bm y_i$ for sample 3 is shown in (f). Cells are ordered by posterior point estimates of their subpopulation indicators, $\hat{\lambda}_{i,n}$. Cells are given in rows and markers are given in columns. High and low expression levels are represented by red and blue, respectively, and black represents missing values. Yellow horizontal lines separate cells into five subpopulations.} \end{figure} We also fit the model to the simulated data using ADVI, with a mini-batch size of 2000, $K=30$, and 20000 iterations. The time required to fit the model was approximately 4 hours, which is substantially less than that of the analogous MCMC method. Supp.\ Fig \ref{fig:sim-vb-1} shows the posterior estimates of $\mbox{\boldmath $Z$}$, $\mbox{\boldmath $w$}$ and $\lambda_{i,n}$ under ADVI. Inferences for the model parameters using ADVI are similar to those using MCMC, and the simulation truth for the model parameters $\bm{\theta}$ is well recovered, as in the MCMC implementation. We assessed sensitivity of the model to the specification of the data missingness mechanism by fitting the FAM using different specifications of $\bm\beta$ with $K=\hat{K}$ and comparing the inferences. The two different specifications of $\bm\beta$ are given in Supp.\ Table~\ref{tab:missmechsen-sim}. The estimates of $\bm{\theta}$ do not change significantly across the different specifications of $\bm\beta$. Point estimates of $\mbox{\boldmath $Z$}$, $\mbox{\boldmath $w$}_i$, and $\lambda_{i,n}$ are shown in Supp.\ Figures \ref{fig:Z-w-sim1-missmechsen-1} and~\ref{fig:Z-w-sim1-missmechsen-2}. The estimates $\hat \mbox{\boldmath $Z$}$ remain the same for all specifications of $\bm\beta$, and the $\hat \mbox{\boldmath $w$}_i$ values also are very similar. Supp.\ Table \ref{tab:missmechsen-sim} shows that LPML and DIC are slightly better for the data missingness mechanisms that encourage imputing smaller missing values $y_{i,n,j}$. This results in $\mu^\star_{0, L_0}$ (the smallest of the mixture component locations for non-expressed markers) being smaller than that obtained under the other specifications, incidentally more closely resembling the simulation truth. Details of the sensitivity analysis are in Supp.\ \S \ref{sec:sim}. \begin{figure}[t!] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.3\columnwidth]{img/sim-paper/data/kills-flowsom/N500/K5/90/YZ001_FlowSOM_reduced.png}& \includegraphics[width=0.3\columnwidth]{img/sim-paper/data/kills-flowsom/N500/K5/90/YZ002_FlowSOM_reduced.png}& \includegraphics[width=0.3\columnwidth]{img/sim-paper/data/kills-flowsom/N500/K5/90/YZ003_FlowSOM_reduced.png}\\ {\small (a) Sample 1} & {\small(b) Sample 2} & {\small (c) Sample 3} \\ \end{tabular} \vspace{-0.1in} \caption{Results of Simulation 1 (continued).
Heatmaps of $\bm y_{i}$ for the clusters estimated by FlowSOM, with cells ordered by the cluster labels $\lambda_{i,n}.$ Cells are in rows and markers are in columns. High, low, and missing expression levels are in red, blue, and black, respectively. Yellow horizontal lines separate the identified cell clusters.} \label{fig:sim1-FlowSOM} \end{center} \end{figure} We compared our model via simulation to FlowSOM \citep{van2015flowsom}, which is implemented in the R package FlowSOM \citep{rflowsom}. FlowSOM fits a model with a varying number of clusters and selects a value of $K$ that minimizes the within-cluster variance while also minimizing the number of clusters via an \lq\lq elbow\rq\rq\ criterion, an {\it ad hoc} graphical method that chooses $K$ such that $K+1$ does not substantially increase the percentage of variation explained. FlowSOM does not impute missing values, so we used all of $\bm y$, treating it as if there were no missing values. In practice, missing values could be pre-imputed, or multiple imputation could be employed. Note that FlowSOM does not account for variability between samples. We combined the samples for analysis to avoid a further {\it ad hoc} process of finding common clusters among the samples. If desired, one might do separate analyses for each of the samples. FlowSOM was considerably faster than our model, with a computation time of 11 seconds on the simulated dataset. FlowSOM identified four cell clusters, as summarized in Fig~\ref{fig:sim1-FlowSOM}, where the cells are rearranged by their cluster membership estimates in each sample. The fourth cluster (shown near the top of the heatmaps) is a mix of cells from the true subpopulations 1 and 2, which differ only in markers 4 and 17, so the cell clustering performance deteriorates. More importantly, FlowSOM does not model latent cell subpopulations, and no inference on cell subpopulations is produced. For this simulation scenario, the FAM easily recovers the truth, but a clustering-based method such as FlowSOM may perform poorly in inferring the cell population structure. We examined the performance of our model through an additional simulation study, Simulation 2. In this simulation, we kept most of the set-up used in Simulation 1, but assumed a more complex subpopulation structure with much larger numbers of cells, setting $K^{\mbox{\tiny TR}}=10$ and $N=(40000, 5000, 10000)$. $\mbox{\boldmath $Z$}^{\mbox{\tiny TR}}$ and $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}_i$ are illustrated in Supp.\ Fig~\ref{tab:sim2-tr}. We considered ten models with $K = 2, 4, \cdots, 20$. For the fixed hyperparameters, we let $L_0=L_1=5$, and the remaining specifications were the same as those in Simulation 1. The model comparison metrics strongly suggest $\hat{K}=10$, for which the posterior point estimates of the underlying structure, including $\mbox{\boldmath $Z$}$, $\mbox{\boldmath $w$}$ and $\lambda_{i,n}$, recover the simulation truth quite well, as shown in Supp.\ Fig~\ref{fig:sim2-post}. In contrast, in this case FlowSOM again groups together cells from two subpopulations that have similar configurations, as in Simulation 1, and estimates nine cell clusters. The FAM provides direct inference on cell subpopulations, and the cell clustering by subpopulations is better than that under FlowSOM. Details of Simulation 2, including a sensitivity analysis for the data missingness mechanism and fast computation using ADVI, are given in Supp.\ \S~\ref{sec:sim-2}.
\section{Analysis of Cord Blood Derived NK Cell Data}\label{sec:cb-analysis} We next report an analysis of the CyTOF dataset of surface marker expression levels on UCB-derived NK cells. Identifying and characterizing NK cell subpopulations in terms of marker expression may serve as a critical step toward developing disease-specific therapies in a variety of severe hematologic malignancies. Our NK cell dataset consists of three samples collected from different cord blood donors, containing 41,474, 10,454, and 5,177 cells, respectively. Thirty-two cell surface proteins were evaluated. We removed markers having positive values in more than 90\% of the cells in all samples, or with missing or negative values in over 90\% of the cells in all samples. We also removed all cells with an expression level $< -6$ for any marker. After this preprocessing, $J=20$ markers remained and the numbers of cells in the samples were $N_i=$ 38,636, 9,555, and 4,827. Supp.\ Table~\ref{tab:marker-codes} lists the markers included in the analysis. Figures \ref{fig:cb-post}(b), (d) and (f) show heatmaps of $\bm y$ after rearranging the cells by the posterior estimates $\hat{\lambda}_{i,n}$ of the cell clusterings for each sample. We also visualize the data using t-SNE (t-distributed stochastic neighbor embedding) in Supp.\ Fig~\ref{fig:CB-tsne}. t-SNE is a popular method for visualizing high-dimensional data in a two- or three-dimensional map through stochastic neighbor embedding \citep{maaten2008visualizing, van2014accelerating}, and also is used for detecting clusters in data. We used Barnes-Hut t-SNE, implemented in the Python library sklearn, to obtain two-dimensional t-SNE embeddings separately for each sample. We fit our FAM over a grid for $K$ from 3 to 33 in increments of 3, with $L_0=5$ and $L_1=3$. We set priors and the data missingness mechanism as outlined in \S~\ref{sec:sim-study}. The random parameters $\bm{\theta}$ also were initialized in a similar manner. After a burn-in of 10000 iterations, 6000 samples from the posterior distribution of the model parameters were obtained and thinned by keeping every other sample, yielding a total of 3000 samples. Figures \ref{fig:cb-select-K} (a) and (b) display LPML and DIC as functions of $K$. The LPML changes sharply for small values of $K$ and tapers off at $K=21$, indicating that $\hat{K}=21$. A similar pattern is seen for DIC. As depicted in Fig \ref{fig:cb-select-K} (c), our additional calibration method also indicates that the models with $K > 21$ include more cell subpopulations comprising less than one percent of a sample (i.e., $\sum_{i,k}\mbox{I}(\hat{w}_{i,k} < 1\%)$ is larger), but improve fit only minimally. \begin{figure}[t] \begin{center} \begin{tabular}{ccc} \includegraphics[width=.31\columnwidth]{img/cb-paper/metrics/L0_MCMC5/lpml.pdf} & \includegraphics[width=.31\columnwidth]{img/cb-paper/metrics/L0_MCMC5/dic.pdf} & \includegraphics[width=.31\columnwidth]{img/cb-paper/metrics/lpml-vs-numsmallclus-improved.pdf} \\ {(a) LPML} & {(b) DIC} & {(c) Calibration of $K$} \\ \end{tabular} \end{center} \vspace{-0.1in} \caption{\small Analysis of UCB-derived NK cell data. Plots of (a) LPML, (b) DIC, and (c) the calibration metric, for $K=3, 6, \ldots, 33$.} \label{fig:cb-select-K} \end{figure} \begin{figure}[t!]
\begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\columnwidth]{img/post/cb/best/img/Z1-improved-1.pdf}& \includegraphics[width=0.45\columnwidth]{img/post/cb/best/img/y1-1.pdf}\\ (a) $\hat{\bm{Z}}^\prime_1$ and $\hat{\mbox{\boldmath $w$}}_1$ & (b) Clustering of $y_{1nj}$\\ \includegraphics[width=0.45\columnwidth]{img/post/cb/best/img/Z2-improved-1.pdf}& \includegraphics[width=0.45\columnwidth]{img/post/cb/best/img/y2-1.pdf}\\ (c) $\hat{\bm{Z}}^\prime_2$ and $\hat{\mbox{\boldmath $w$}}_2$ & (d) Clustering of $y_{2nj}$\\ \end{tabular} \end{center} \vspace{-0.1in} \caption{Analysis of the UCB-derived NK cell data. $\hat{\bm{Z}}_i^{\prime}$ and $\hat{\mbox{\boldmath $w$}}_i$ of samples $i=1$ and 2 are illustrated in panels (a) and (c), respectively, with markers that are expressed denoted by black and not expressed by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. Heatmaps of the expression levels $\bm y_i$ are shown in panels (b) and (d) for samples 1 and 2, respectively, with cells in rows and markers in columns. Each column thus contains the expression levels of one marker for all cells in a sample. High, low, and missing expression levels are red, blue, and black, respectively. Cells are ordered by the posterior estimates of their clustering memberships, $\hat{\lambda}_{i,n}$. Yellow horizontal lines separate cells by different subpopulations.} \label{fig:cb-post} \end{figure} \begin{figure}[t!] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\columnwidth]{img/post/cb/best/img/Z3-improved-1.pdf}& \includegraphics[width=0.45\columnwidth]{img/post/cb/best/img/y3-1.pdf}\\ (e) $\hat{\bm{Z}}^\prime_3$ and $\hat{\mbox{\boldmath $w$}}_3$ & (f) Clustering of $y_{3nj}$\\ \end{tabular} \end{center} \vspace{-0.1in} \caption*{Fig~\ref{fig:cb-post}: Analysis of the UCB-derived NK cell data (continued). $\hat{\bm{Z}}_i^{\prime}$ and $\hat{\mbox{\boldmath $w$}}_i$ of sample 3 are illustrated in panel (e), with markers that are expressed denoted by black and not expressed by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. The heatmap of $\bm y_i$ for sample 3 is shown in panel (f). Cells are in rows and markers in columns. Each column contains the expression levels of one marker for all cells in the sample. High, low, and missing expression levels are red, blue, and black, respectively. Cells are ordered by the posterior estimates of their clustering memberships, $\hat{\lambda}_{i,n}$. Yellow horizontal lines separate cells by different subpopulations.} \end{figure} Fig \ref{fig:cb-post} summarizes posterior inference on the latent cell population structure with $\hat{K}=21$. The cells are grouped by their estimated cell subpopulation indicators $\hat{\lambda}_{i,n}$. The figure shows the estimated cell subpopulations $\hat\mbox{\boldmath $Z$}_i$ (in the left column) and the clustered marker expression levels $\bm y_i$ (in the right column) for the samples. Cells in subpopulations with larger $\hat{w}_{i,k}$ are shown at the bottom of the heatmaps. The subpopulations with the two largest $\hat{w}_{i,k}$ differ across the samples. The resulting inference indicates that the composition of the NK cell population varies across the samples, pointing to variations in the phenotype of NK cells among different cord blood donors. However, we observe similarities in the phenotypes of NK cells from samples 2 and 3, while sample 1 displays a different phenotype and a distinct distribution of cell subsets.
NK cells from all three samples express 2B4, CD94, DNAM-1, NKG2A, NKG2D, Siglec-7, NKp30 and Zap70 in the majority of their identified subpopulations. These markers dictate NK cell functional status. While their interactions are very complicated, taken together they provide a basis for determining whether NK cells have a normal function and whether or not they are mature. Despite great variability between cord blood 1 and the other two cord bloods, all three had a significant subset of cells with an immature phenotype. Cord blood 1 cluster 7, cord blood 2 cluster 17 and cord blood 3 cluster 6 comprise the largest populations of immature cells, defined as EOMES (-), TBET (-), and KIR (-). The markers KIR2DL3 and KIR3DL1 belong to the family of killer-cell immunoglobulin-like receptors (KIRs). These immature clusters of NK cells still retain expression of 2B4, CD94, NKG2A, NKG2D and NKp30. In particular, NKp30 is a natural cytotoxicity receptor, while KIR is not. More generally, the markers EOMES, TBET, Zap70 and KIR are not expressed in the largest subpopulation of each sample, indicating that those are subsets of immature cells. An immature phenotype of NK cells usually is associated with low diversity and low effector function in the absence of exogenous cytokines \citep{li2019, savaria2017}, while a mature NK cell phenotype has been linked to superior cytotoxicity and better clinical outcomes in cancer patients \citep{ilander2017,carlsten2019}. In addition, we identify three subpopulations (12, 15 and 21) that are conserved among the three samples, although at lower percentages in sample 1. In those subpopulations, EOMES and TBET are expressed, indicating a more mature phenotype. The subset with expression of EOMES and TBET can be further divided into three subpopulations based on the expression of the markers CD8, CD16, TIGIT, and KIR. Subpopulations 12 and 21 are very similar, sharing positivity for CD16, CD8 and TIGIT, and are differentiated by KIR expression, which is negative in subpopulation 21 and positive in subpopulation 12. Subpopulation 15, however, is negative for CD16, CD8, TIGIT and KIR, making EOMES and TBET its only differentiation markers. These novel subsets of cord blood NK cells have not been described in the literature previously, and may need to be further validated. We also identified cluster 3 as an important conserved cluster among all three samples; it is positive for NKG2C, CD62L and CD27, which could point towards a memory subset in cord blood NK cells that has not been well described previously. Taken together, these data indicate that the FAM not only allows the definition of biologically recognized subsets of NK cells, but also may be applied for the discovery of novel NK cell subpopulations. Model sensitivity to the specification of the data missingness mechanism in the NK cell data analysis was assessed by fitting the FAM under two additional specifications of $\bm\beta$, which we call data missingness mechanisms (MM) I and II. We refer to the previous (default) missingness mechanism as MM-0. Supp.\ Tables \ref{tab:missmechsen-cb} and \ref{tab:missmechsen-cb-beta} list the different data missingness mechanism specifications and the corresponding $\bm\beta$ values, respectively.
Under the different specifications of $\bm\beta$, the estimates $\hat{\mbox{\boldmath $Z$}}_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are similar, as shown in Supp.\ Figures \ref{fig:Z-w-CB-missmechsen-1} and \ref{fig:Z-w-CB-missmechsen-2}. The subpopulations estimated under MM-I and MM-II are exactly the same as those under MM-0, or differ by no more than three markers. We also fit the model to the UCB-derived NK cell data using ADVI with a mini-batch size of 500 and $K=30$ for 20000 iterations. The runtime was 74 minutes on the previously described machine. Supp.\ Fig \ref{fig:cb-vb-Z} summarizes the posterior distribution of $\mbox{\boldmath $Z$}$ and the posterior mode of the cell clusterings $\hat\lambda_{i,n}$. The cell subpopulations inferred by ADVI are similar to those obtained by MCMC, but the cell clustering estimates are quite different. Notably, subpopulations with large $\hat{w}_{i,k}$ can be found in the estimates obtained by both methods, e.g., the subpopulations with the two largest abundances in sample 1. For subpopulations with small $\hat{w}_{i,k}$, we do not find clear matches. The cluster sizes obtained by ADVI are larger than those obtained from MCMC, and the cells in the clusters are less homogeneous. It thus appears that ADVI should not be used in this type of setting, and that its shorter runtime compared to MCMC is a false economy. \begin{figure}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.48\columnwidth]{img/cb-paper/flowsom/Y1_FlowSOM_by_w_reduced-1.pdf}& \includegraphics[width=0.48\columnwidth]{img/cb-paper/flowsom/Y2_FlowSOM_by_w_reduced-1.pdf}\\ (a) Clustering of $y_{1nj}$ & (b) Clustering of $y_{2nj}$\\ \includegraphics[width=0.48\columnwidth]{img/cb-paper/flowsom/Y3_FlowSOM_by_w_reduced-1.pdf} & \includegraphics[width=0.40\columnwidth]{img/misc/CB-flowsom-prop.pdf} \\ (c) Clustering of $y_{3nj}$ & (d) Proportions\\ \end{tabular} \end{center} \vspace{-0.1in} \caption{Analysis of the UCB-derived NK cell data: comparison to FlowSOM. Heatmaps of cells are shown in (a)-(c) for samples 1-3, respectively. Cells are arranged by the cluster memberships estimated by FlowSOM. The clusters are separated by yellow horizontal lines, with the most abundant clusters in each sample closer to the bottom. High, low, and missing expression levels are red, blue, and black, respectively. The proportions of cells in the estimated clusters are shown in (d).} \label{fig:cb-flowsom} \end{figure} For comparison, we also applied FlowSOM to the UCB data. Prior to the analysis, we fixed the missing values of $y_{i,n,j}$ at the minimum of the negative observed values of $y$ for each $(i, j)$. FlowSOM identified 13 cell clusters in the samples. Heatmaps of $y_{i,n,j}$, rearranged by the cell clustering estimated by FlowSOM, are given in Fig \ref{fig:cb-flowsom} (a)-(c). Heterogeneity between cells within the clusters estimated under FlowSOM is noticeably greater than that under the proposed FAM shown in Fig~\ref{fig:cb-post}. For example, marker 10 shows a mix of red, blue, and black for cluster 1, the largest cluster. The proportions of cells assigned to the clusters are summarized in Fig \ref{fig:cb-flowsom}(d). The clusters are much larger than those under the FAM. In particular, cluster 1 under FlowSOM contains 36.7\%, 53.8\% and 54.1\% of the cells in samples 1-3, respectively. Lastly, FlowSOM does not produce an explicit inference on the characterization of subpopulations.
\section{Discussion}\label{sec:conclusions} We have proposed a Bayesian FAM to identify and estimate cell subpopulations using CyTOF data. Our FAM identifies latent subpopulations, defined as functions of the marker expression levels, and fits the data in multiple samples simultaneously. The model accounts formally for missing values and between-sample variability. The fitted FAM assigns each cell in each sample to exactly one subpopulation, but each surface marker can belong to more than one subpopulation. The method also yields cell clusters within each sample that are defined in terms of the inferred subpopulations. We constructed a data missingness mechanism based on expert knowledge, and we examined the robustness of the model to the specification of the missingness mechanism through simulation. This showed that inferences were not sensitive to changes in the specification of the missingness mechanism. Compared to established clustering methods, including FlowSOM, the proposed FAM is more effective at discovering latent subpopulations when the underlying cell subpopulations are similar. Our proposed FAM can be extended to accommodate similar but more complex data structures, in particular including covariates. For example, samples with similar covariates may also have similar cell subpopulation structures. The model can incorporate such information through appropriate regression submodels, to enhance inferences and to study how the structures may change with covariates. One also may introduce the concept of \lq\lq repulsiveness\rq\rq\ to latent features and obtain a parsimonious representation of the latent subpopulations by discouraging the creation of redundant subpopulations. Repulsive models, which are more likely to produce features that differ from each other substantially, have been developed mostly in the context of mixture models (e.g., see \cite{petralia2012repulsive, quinlan2018density, xie2019bayesian}). \cite{xu2016bayesian} used the determinantal point process (DPP) for a repulsive FAM that uses the determinant of a matrix as a repulsiveness metric. A model that explicitly penalizes the inclusion of similar features also can be developed to replace the IBP in our model. \vskip .2in \noindent {\bf Acknowledgments}\hfil\break This work was supported by NIH 1 R01 CA211044-01, 5 P01CA148600-03, and P50CA100632-16 (Katy Rezvani), a grant (CA016672) to the M.D. Anderson Cancer Center from the NIH (Katy Rezvani) and NSF grant DMS-1662427 (Juhee Lee). \newpage \noindent {\Huge Supplementary Materials} \\ \section{Data and Code} \noindent Data used for this project are available at \url{https://github.com/luiarthur/cytof-data}. \\ \noindent This project was implemented in the Julia programming language. Code for this project is available at \url{https://github.com/luiarthur/CytofResearch}. \\ \section{Posterior Computation}\label{sec:post-comp} \subsection{MCMC Simulation} Recall that $\bm{\theta}=\bc{\mbox{\boldmath $Z$}, \mbox{\boldmath $w$}, \bm \delta_0, \bm \delta_1, \bm \sigma^2, \bm \eta^0, \bm \eta^1, \bm \lambda, \bm v, \bm \epsilon, \alpha}$ denotes all random parameters, and let $\bm{y}$ and $\m$ denote the expression levels $y_{i,n,j}$ and binary indicators $m_{i,n,j}$, respectively, for all $(i,n,j)$. To facilitate the posterior sampling of $\delta_{z,\ell}$, we introduce auxiliary indicators for the normal mixture components, $\gamma_{i,n,j} \in \{1, \ldots, L_{z_{j, \lambda_{i,n}}}\}$, for each $y_{i,n,j}$ with $\lambda_{i,n}\neq 0$.
That is, $p(\gamma_{i,n,j} = \ell \mid z_{j,\lambda_{i,n}}=z, \eta^z_{i,j,\ell}, \lambda_{i,n}\neq 0) = \eta^z_{i,j,\ell}$, where $\ell \in \bc{1, \ldots,L_{z_{j, \lambda_{i,n}}}}$, and let $\mu_{i,n,j}=\mu^\star_{z_{j, \lambda_{i,n}}, \gamma_{i,n,j}}$. We extend the vector of random parameters to $\widetilde{\bm{\theta}}=(\bm{\theta}, \{\gamma_{i,n,j}\})$ by including the $\gamma_{i,n,j}$ for more convenient posterior simulation. Similar to the joint posterior distribution of $\bm{\theta}$ in \eqref{eq:joint-post} of the main text, the joint posterior probability model of $\widetilde{\bm{\theta}}$ under our Bayesian FAM is \begin{eqnarray} p(\widetilde{\bm{\theta}} \mid \bm{y}, \m, K) &\propto& p(\widetilde{\bm{\theta}}\mid K) \prod_{i,n} \bk{ \prod_j \rho_{i,n,j}^{1-m_{i,n,j}} \times \frac{1}{\sqrt{2\pi\sigma^2_{i}}} \exp\bc{-\frac{(y_{i,n,j}-\mu_{i,n,j})^2}{2\sigma^2_{i}}}}^{1(\lambda_{i,n}\neq 0)} \nonumber \\ && \times \bk{\prod_j \rho_{i,n,j}^{1 - m_{i,n,j}} \times \frac{1}{\sqrt{2\pi s^2_\epsilon}} \exp\bc{-\frac{y_{i,n,j}^2}{2 s^2_{\epsilon}}}}^{1(\lambda_{i,n}=0)}. \label{eq:joint-post-supp} \end{eqnarray} Posterior samples of $\widetilde{\bm{\theta}}$ are obtained by iteratively drawing from each of the full conditionals, given the most recent values of the other parameters and the data. For parameters whose full conditionals have standard forms that are easy to sample from, we used Gibbs steps; for the remaining full conditionals, Metropolis-Hastings steps were used. \begin{enumerate} \item Full Conditional for $v_k$ Recall that the prior distribution for $v_k$ is $v_k \mid \alpha \overset{ind}{\sim} \text{Be}(\alpha / K, 1)$, for $k = 1,...,K$, that is, $p(v_k \mid \alpha) = \frac{\alpha}{K} v_k^{\alpha/K-1}$. \begin{align*} p(v_k \mid \bm{y}, \text{rest}) &\propto p(v_k) \prod_{j=1}^J p(z_{j,k} \mid v_k) \\ &\propto \frac{\alpha}{K} v_k^{\alpha/K - 1} \prod_{j=1}^J v_k^{z_{j,k}} (1 - v_k)^{1 - z_{j,k}}\\ &\propto v_k^{\alpha/K + \sum_{j=1}^J z_{j,k}- 1} (1 - v_k)^{J - \sum_{j=1}^J z_{j,k}} \end{align*} $$ \Rightarrow v_k \mid \bm{y}, \text{rest} \sim \text{Be}\p{ \alpha / K + \sum_{j=1}^J z_{j,k}, ~ J + 1 - \sum_{j=1}^J z_{j,k} }. $$ We use ``$\text{rest}$'' to denote all parameters except the parameter(s) being sampled. For example, ``$\text{rest}$'' means $\widetilde{\bm{\theta}} \backslash \{v_k\}$ when updating $v_k$. \item Full Conditional for $z_{j,k}$ Let $S_k = \bc{(i, n): \lambda_{i,n} = k}$, the set of cells assigned to cell subpopulation $k$. \def\pzOne{ v_k \prod_{(i,n) \in S_k} \sum_{\ell=1}^{L_1} \eta^1_{i,j,\ell} \cdot \phi(y_{i,n,j} \mid \mu^\star_{1,\ell}, \sigma^{2}_{i}) } \def\pzZero{ (1-v_k) \prod_{(i,n) \in S_k} \sum_{\ell=1}^{L_0} \eta^0_{i,j,\ell} \cdot \phi(y_{i,n,j} \mid \mu^\star_{0,\ell}, \sigma^{2}_{i}) } \begin{align*} p(z_{j,k} = 1 \mid \bm{y}, \text{rest}) &\propto p(z_{j,k} = 1 \mid v_k) \prod_{(i,n) \in S_k} p(y_{i,n,j} \mid \bm\mu^\star_1, \bm{\eta}_{i,j}^1, \sigma^2_i) \\ &\propto \pzOne, \\ p(z_{j,k} = 0 \mid \bm{y}, \text{rest}) &\propto p(z_{j,k} = 0 \mid v_k) \prod_{(i,n) \in S_k} p(y_{i,n,j} \mid \bm\mu^\star_0, \bm{\eta}_{i,j}^0, \sigma^2_i) \\ &\propto \pzZero, \end{align*} where $\phi(y\mid m, s^2)$ denotes the probability density function of the normal distribution with mean $m$ and variance $s^2$, evaluated at $y$. $$ \Rightarrow z_{j,k} \mid \bm{y}, \text{rest} \sim \text{Ber}\p{ \bk{1 + \frac{\pzZero}{\pzOne}}^{-1} }. $$
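A minimal sketch of these two updates follows (illustrative Python rather than the authors' Julia code; \texttt{log\_p1} and \texttt{log\_p0} stand for hypothetical functions returning the logs of the two unnormalized masses above, and the Bernoulli probability is formed on the log scale for numerical stability).
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def update_v_and_Z(Z, alpha, log_p1, log_p0):
    J, K = Z.shape
    v = np.empty(K)
    for k in range(K):
        s = Z[:, k].sum()
        v[k] = rng.beta(alpha / K + s, J + 1 - s)   # v_k | y, rest
        for j in range(J):
            # Pr(z_jk = 1 | y, rest) = 1 / (1 + p0/p1)
            p1 = 1.0 / (1.0 + np.exp(log_p0(j, k, v[k]) - log_p1(j, k, v[k])))
            Z[j, k] = int(rng.uniform() < p1)
    return v, Z
\end{verbatim}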
\item Full Conditional for $\alpha$ \begin{align*} p(\alpha \mid \bm{y}, \text{rest}) &\propto p(\alpha) \times \prod_{k=1}^K p(v_k \mid \alpha) \\ &\propto \alpha^{a_\alpha - 1} \exp\bc{-b_\alpha \alpha} \times \prod_{k=1}^K \alpha~v_k^{\alpha/K} \\ &\propto \alpha^{a_\alpha + K -1} \exp\bc{-\alpha\p{b_\alpha - \sum_{k=1}^K \log v_k / K}} \end{align*} $$ \Rightarrow \alpha \mid \bm{y}, \text{rest} \sim \text{Ga}\p{a_\alpha + K,~ b_\alpha - \sum_{k=1}^K \log v_k /K}. $$ \item Full Conditional for $\lambda_{i,n}$ \def\Ainjk{ \sum_{\ell=1}^{L_{z_{j,k}}} \eta^{z_{j,k}}_{i,j,\ell} \cdot \phi(y_{i,n,j} \mid \mu^\star_{z_{j,k},\ell}, \sigma^2_{i}) } The prior for $\lambda_{i,n}$ is $$ p(\lambda_{i,n} = k \mid \mbox{\boldmath $w$}_i, \epsilon_i) = \begin{cases} \epsilon_i, &\text{if } k = 0\\ (1 - \epsilon_i) \cdot w_{i,k}, &\text{if } k \in \bc{1, \dots, K}. \end{cases} $$ We thus have \begin{align*} p(\lambda_{i,n}=0\mid \bm{y},\text{rest}) &\propto p(\lambda_{i,n}=0) ~ p(\bm{y} \mid \lambda_{i,n}=0, \text{rest}) \\ &\propto \epsilon_i \prod_{j=1}^J \phi(y_{i,n,j} \mid 0, s_\epsilon^2), \\ p(\lambda_{i,n}=k\mid \bm{y},\text{rest}) &\propto p(\lambda_{i,n}=k) ~ p(\bm{y} \mid \lambda_{i,n}=k, \text{rest}) \\ & \propto (1 - \epsilon_i) w_{i,k} \prod_{j=1}^J \p{ \Ainjk }, \mbox{ for } k =1, \ldots, K. \end{align*} We sample $\lambda_{i,n}$ with probabilities proportional to $p(\lambda_{i,n}=k \mid \bm{y},\text{rest})$ for $k \in \bc{0, \dots, K}$. \item Full Conditional for $\bm w_{i}$ The prior for $\bm{w}_i=(w_{i,1}, \ldots, w_{i,K})$ is $\bm w_i \sim \text{Dir}(d/K, \cdots, d/K)$. The full conditional for $\bm{w}_i$ is \begin{align*} p(\bm w_i \mid \text{rest}) \propto&~~ p(\bm{w}_i) \times \prod_{n=1}^{N_i} p(\lambda_{i,n} \mid \bm{w}_i)\\ \propto&~~ \prod_{k=1}^K w_{i,k}^{\p{d/K + \sum_{n=1}^{N_i}1(\lambda_{i,n}=k)}-1}. \end{align*} Therefore, $$ \bm{w}_i \mid \bm{y},\text{rest} ~\sim~ \text{Dir}\p{d/K+\sum_{n=1}^{N_i}1(\lambda_{i,n}=1),...,d/K+\sum_{n=1}^{N_i}1(\lambda_{i,n}=K)}. $$ \item Full Conditional for $\gamma_{i,n,j}$ For the cells with $\lambda_{i,n} > 0$, \begin{align*} p(\gamma_{i,n,j}=\ell \mid \bm{y}, z_{j,\lambda_{i,n}}=z, \text{rest}) &\propto p(\gamma_{i,n,j}=\ell) \times p(y_{i,n,j} \mid \gamma_{i,n,j}=\ell, \text{rest}) \\ &= \eta^z_{i,j,\ell} \times \phi(y_{i,n,j} \mid \mu^\star_{z, \ell}, \sigma^2_i). \end{align*} Therefore, we sample $\gamma_{i,n,j}$ with probabilities proportional to $p(\gamma_{i,n,j}=\ell \mid \bm{y},\text{rest})$ for $\ell = 1,...,L_{z_{j,\lambda_{i,n}}}$. \item Full Conditional for $\delta_{z,\ell}$ For $\delta_{1,\ell}$, let $S_{1,i,\ell} = \bc{(i,n,j) : \p{z_{j,\lambda_{i,n}} = 1 ~\cap~ \gamma_{i,n,j} \ge \ell}}$ and let $|S_{1,i,\ell}|$ denote the cardinality of $S_{1,i,\ell}$.
\newcommand\dOnePostvarDenom{ \frac{1}{\tau^2_1} + \sum_{i=1}^I\frac{|S_{1,i,\ell}|}{{\sigma^2_{i}}} } \newcommand\dOnePostMeanNum{ \frac{\psi_1}{\tau^2_1} + \sum_{i=1}^I \sum_{S_{1,i,\ell}} \frac{g_{i,n,j}}{{\sigma^2_{i}}} } \begin{align*} p(\delta_{1,\ell} \mid \bm{y}, \text{rest}) &\propto p(\delta_{1,\ell} \mid \psi_1, \tau^2_1) \times p(\bm{y} \mid \delta_{1,\ell},\text{rest}) \\ &\propto 1(\delta_{1,\ell} \ge 0) \times \exp\bc{\frac{-(\delta_{1,\ell} - \psi_1)^2}{2\tau^2_{1}}} \\ & \hspace{5em} \times \prod_{i=1}^I\prod_{(i,n,j)\in S_{1,i,\ell}} \exp\bc{-\p{y_{i,n,j} - \sum_{r=1}^{\gamma_{i,n,j}} \delta_{1,r}}^2 \bigg/ 2\sigma^2_{i}} \\ &\propto \exp\bc{ -\frac{(\delta_{1,\ell})^2}{2}\p{\dOnePostvarDenom} + \delta_{1,\ell}\p{\dOnePostMeanNum} } \\ & \hspace{5em} \times 1(\delta_{1,\ell} \ge 0), \end{align*} where $\displaystyle g_{i,n,j} = y_{i,n,j} - \sum_{r=1,\, r \neq \ell}^{\gamma_{i,n,j}} \delta_{1,r}$. $$ \renewcommand\dOnePostvarDenom{ 1 + \tau^2_1\sum_{i=1}^I(|S_{1,i,\ell}|/{\sigma^2_{i}}) } \renewcommand\dOnePostMeanNum{ \psi_1 + \tau^2_1 \sum_{i=1}^I\sum_{S_{1,i,\ell}} (g_{i,n,j} / {\sigma^2_{i}}) } \Rightarrow \delta_{1,\ell} \mid \bm{y}, \text{rest} \overset{ind}{\sim} \text{TN}^+\p{ \frac{\dOnePostMeanNum}{\dOnePostvarDenom}, \frac{\tau^2_1}{\dOnePostvarDenom} }. $$ Similarly, for $\delta_{0,\ell}$, let $S_{0,i,\ell} = \bc{(i,n,j) : \p{z_{j,\lambda_{i,n}} = 0 ~\cap~ \gamma_{i,n,j} \ge \ell}}$ and let $|S_{0,i,\ell}|$ be the cardinality of $S_{0,i,\ell}$. $$ \newcommand\dZeroPostvarDenom{ 1 + \tau^2_0 \sum_{i=1}^I (|S_{0,i,\ell}|/{\sigma^2_{i}}) } \newcommand\dZeroPostMeanNum{ \psi_0 + \tau^2_0 \sum_{i=1}^I \sum_{S_{0,i,\ell}} (g_{i,n,j} / {\sigma^2_{i}}) } \Rightarrow \delta_{0,\ell} \mid \bm{y}, \text{rest} \overset{ind}{\sim} \text{TN}^+\p{ \frac{\dZeroPostMeanNum}{\dZeroPostvarDenom}, \frac{\tau^2_0}{\dZeroPostvarDenom} }, $$ where $\displaystyle g_{i,n,j} = -y_{i,n,j} - \sum_{r=1,\, r \neq \ell}^{\gamma_{i,n,j}} \delta_{0,r}$. \item Full Conditional for $\sigma^2_i$ Let $r_{i,n,j} = 1(\lambda_{i,n} > 0)$, and let $R_i = \sum_{n=1}^{N_i}\sum_{j=1}^J r_{i,n,j}$. We then have \begin{align*} p(\sigma^2_i \mid \bm{y}, \text{rest}) &\propto p(\sigma^2_i) \times p(\bm{y} \mid \sigma^2_i, \text{rest}) \\ &\propto (\sigma^2_i)^{-a_\sigma-1} \exp\bc{-\frac{b_\sigma}{\sigma^2_i}} \prod_{j=1}^J \prod_{n=1}^{N_i} \bc{ \frac{1}{\sqrt{2\pi\sigma^2_i}} \exp\bc{\frac{-(y_{i,n,j}-\mu_{i,n,j})^2}{2\sigma^2_i}} }^{r_{i,n,j}} \\ &\propto (\sigma^2_i)^{-\p{a_\sigma + \frac{R_i}{2}}-1} \exp\bc{-\p{\frac{1}{\sigma^2_i}}\p{b_\sigma + \sum_{j=1}^J \sum_{n=1}^{N_i} r_{i,n,j}\cdot\frac{(y_{i,n,j}-\mu_{i,n,j})^2}{2} }}. \end{align*} $$ \Rightarrow \sigma^2_i \mid \bm{y}, \text{rest} \overset{ind}{\sim} \text{IG}\p{a_\sigma + \frac{R_i}{2}, ~~ b_\sigma + \sum_{j=1}^J \sum_{n=1}^{N_i} r_{i,n,j}\cdot\frac{(y_{i,n,j}-\mu_{i,n,j})^2}{2} }. $$ \item Full Conditional for $\eta^z_{i,j}$ The prior for $\bm\eta^z_{i,j}$ is $\bm \eta^z_{i,j} \sim \text{Dir}_{L_z}(a_{\eta^z}/L_z)$, for $z\in\bc{0,1}$.
So the full conditional for $\bm\eta^z_{i,j}$ is \def\etaind{ 1\{(\gamma_{i,n,j}=\ell) ~\&~ (z_{j,\lambda_{i,n}}=z) ~\&~ (\lambda_{i,n}>0)\} } \begin{align*} p(\bm \eta^z_{i,j} \mid \text{rest}) \propto&~~ p(\bm{\eta}^z_{i,j}) \times \prod_{n=1}^{N_i} p(\gamma_{i,n,j} \mid \bm \eta^z_{i,j})\\ \propto&~~ \prod_{\ell=1}^{L_z} \p{\eta^z_{i,j,\ell}}^{a_{\eta^z}/L_z-1} \times \prod_{\ell=1}^{L_z} \prod_{n=1}^{N_i} \p{\eta^z_{i,j,\ell}}^{\etaind}\\ \propto&~~ \prod_{\ell=1}^{L_z} \p{\eta^z_{i,j,\ell}}^{\p{a_{\eta^z}/L_z+ \sum_{n=1}^{N_i} \etaind} - 1}. % \end{align*} $$ \Rightarrow \bm{\eta}^z_{i,j} \mid \bm{y},\text{rest} ~\sim~ \text{Dir}_{L_z}\p{a^*_1,...,a^*_{L_z}}, $$ where $a^*_\ell = a_{\eta^z}/L_z+\sum_{n=1}^{N_i}\etaind$. \item Full Conditional for $\epsilon_i$ \begin{align*} p(\epsilon_i \mid y, \text{rest}) &\propto p(\epsilon_i) \prod_{n=1}^{N_i} \epsilon_i^{1(\lambda_{i,n}=0)} (1-\epsilon_i)^{1(\lambda_{i,n}>0)} \\ &\propto \epsilon_i^{a_\epsilon - 1} (1-\epsilon_i)^{b_\epsilon-1} \epsilon_i^{\sum_{n=1}^{N_i}1(\lambda_{i,n}=0)} (1-\epsilon_i)^{\sum_{n=1}^{N_i}1(\lambda_{i,n}>0)} \\ &\propto \epsilon_i^{a_\epsilon + \sum_{n=1}^{N_i}1(\lambda_{i,n}=0) - 1} (1-\epsilon_i)^{b_\epsilon + \sum_{n=1}^{N_i}1(\lambda_{i,n}>0) - 1}. \end{align*} $$ \Rightarrow \epsilon_i \mid y, \text{rest} \sim \text{Be}\p{ a_\epsilon + \sum_{n=1}^{N_i}1(\lambda_{i,n}=0), b_\epsilon + \sum_{n=1}^{N_i}1(\lambda_{i,n}>0)}. $$ \item Full Conditional for Missing $y_{i,n,j}$ Recall that $m_{i,n,j}=0$ indicates that $y_{i,n,j}$ is missing. For a cell with $\lambda_{i,n}>0$, \begin{align*} p(y_{i,n,j} \mid m_{i,n,j}=0, \text{rest}) &\propto p(m_{i,n,j} =0\mid y_{i,n,j}, \text{rest}) ~ p(y_{i,n,j} \mid \text{rest}) \\ &\propto \rho_{i,n,j} \sum_{\ell=1}^{L_{z_{j,\lambda_{i,n}}}} \eta^{z_{j,\lambda_{i,n}}}_{i,j,\ell} \cdot \phi(y_{i,n,j} \mid \mu^\star_{z_{j,\lambda_{i,n}}, \ell}, \sigma^2_{i}). \end{align*} Direct sampling from this full conditional is difficult, so we instead use a Metropolis step with a normal proposal distribution. \end{enumerate} \subsection{Variational Inference Implementation Details}\label{sec:vi} Variational inference (VI) is a popular alternative for fitting Bayesian models \citep{jordan1999introduction, beal2003variational, wainwright2008graphical, blei2017variational}. VI tends to be faster and more scalable with data size than traditional MCMC methods. In particular, we utilize automatic differentiation variational inference (ADVI) \citep{advi}, a derivation-free, gradient-based stochastic optimization method that is amenable to common machine learning techniques, such as stochastic gradient descent, which makes inference for large datasets more tractable. For a comprehensive review of recent advances in VI, see \cite{blei2017variational} and \cite{zhang2018advances}. In VI, the parameters of a tractable approximating \dquote{variational} distribution are iteratively optimized until it \dquote{sufficiently} resembles the target (posterior) distribution. The most common metric for measuring the \dquote{closeness} of the variational distribution to the target distribution is the Kullback-Leibler (KL) divergence \citep{kullback1951information}.
\subsection{Variational Inference Implementation Details}\label{sec:vi} Variational inference (VI) is a popular alternative for fitting Bayesian models \citep{jordan1999introduction, beal2003variational, wainwright2008graphical, blei2017variational}. VI tends to be faster and more scalable with data size than traditional MCMC. In particular, we utilize automatic differentiation variational inference (ADVI) \citep{advi}, a derivation-free, gradient-based stochastic optimization method that is amenable to common machine learning techniques, such as stochastic gradient descent, making inference for large datasets more tractable. For a comprehensive review of recent advances in VI, see \cite{blei2017variational} and \cite{zhang2018advances}. In VI, the parameters of a tractable approximating \dquote{variational} distribution are iteratively optimized until the variational distribution \dquote{sufficiently} resembles the target (posterior) distribution. The most common metric for measuring the \dquote{closeness} of the variational distribution to the target distribution is the Kullback-Leibler (KL) divergence \citep{kullback1951information}. For our Bayesian feature allocation model (FAM), minimizing the KL divergence between the variational distribution and the posterior distribution is equivalent to maximizing the following evidence lower bound (ELBO), because the log marginal likelihood equals the ELBO plus this KL divergence and does not depend on the variational distribution: \begin{align} \text{ELBO} &= \text{E}_Q\bk{\log p(\m, \bm{y} \mid \bm{\theta}) + \log p(\bm{\theta}) - \log q(\bm{\theta}) - \log q(\bm{y}^\text{missing})} \nonumber \\ &= \text{E}_Q\bk{\log p(\m \mid \bm{y}, \bm{\theta}) + \log p(\bm{y} \mid \bm{\theta}) + \log p(\bm{\theta}) - \log q(\bm{\theta}) - \log q(\bm{y}^\text{missing})} \nonumber \\ &= \text{E}_Q\bk{\log p(\m \mid \bm{y}) + \log p(\bm{y} \mid \bm{\theta}) + \log p(\bm{\theta}) - \log q(\bm{\theta}) - \log q(\bm{y}^\text{missing})} \label{eq:elbo}. \end{align} Here $p(\bm m \mid \bm{y})$ and $p(\bm{y}\mid\bm{\theta})$ are the sampling distributions of $m_{i,n,j}$ and $y_{i,n,j}$, $p(\bm{\theta})$ is the prior distribution for all model parameters, and $q(\bm{\theta})$ is the mean-field variational distribution for the model parameters: each parameter is transformed to the unconstrained space and assigned a normal variational distribution there \citep{advi}. $q(\bm{y}^{\text{missing}}) = \prod_{i,n,j} q(y_{i,n,j})^{1(m_{i,n,j}=0)}$ is an amortized variational distribution for the missing values \citep{vae}. Specifically, $q(y_{i,n,j}^\text{missing})$ is a normal probability density function with mean $r_{i,j}$ and standard deviation $s_{i,j}$. This simplification for the missing $y_{i,n,j}$ produces imputed values different from those under our Bayesian FAM, but yields acceptable performance in our simulation studies at greatly reduced computational cost. Computing the gradient in gradient descent requires evaluating the ELBO over the entire dataset, which can be computationally prohibitive for large datasets. Instead, stochastic gradient descent (SGD) is used: a mini-batch of size $B$ (much smaller than the full data size $N$) is sampled at each SGD iteration to estimate the ELBO, and the data-dependent terms are rescaled by $N / B$. This works well in practice provided that the mini-batch is sufficiently large. In our model, the parameters of primary interest, $\mbox{\boldmath $Z$}$ and $\bm \lambda$, are discrete. Since ADVI is only valid for continuous parameters in differentiable models, we let $z_{j,k} = 1(v_k > h_{j,k})$, where $v_k \mid \alpha \sim \text{Be}(\alpha / K, 1)$ and $h_{j,k} \sim \mbox{Unif}(0, 1)$, similar to the construction of the dependent IBP in \cite{williamson2010dependent}. We approximate the gradient of the indicator function with the gradient of the smooth surrogate $\text{sigmoid}\p{(\text{logit}(v_k) - \text{logit}(h_{j,k})) \cdot 1000}$. We marginalize over $\bm \lambda$ for VI, and afterwards sample $\bm\lambda$ from its full conditional using the parameters estimated from the variational distributions.
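As a minimal illustration of the indicator relaxation above (our own sketch; \texttt{expit} and \texttt{logit} are SciPy's sigmoid and logit, and the temperature 1000 is the constant used in the text):
\begin{verbatim}
import numpy as np
from scipy.special import expit, logit

def relaxed_z(v_k, h_jk, temperature=1000.0):
    # Smooth surrogate for z_{j,k} = 1(v_k > h_{j,k}); differentiable in
    # v_k and h_{j,k}, approaching the exact indicator as the temperature
    # grows.
    return expit((logit(v_k) - logit(h_jk)) * temperature)

print(relaxed_z(0.9, 0.3))  # ~1.0: v_k > h_{j,k}, feature active
print(relaxed_z(0.2, 0.7))  # ~0.0: v_k < h_{j,k}, feature inactive
\end{verbatim}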
For completeness, we include below the key terms in the computation of the ELBO using SGD. $p(\m \mid \bm{y})$ is defined as \begin{align*} p(\m \mid \bm{y}) &= \prod_{i=1}^I \prod_{n=1}^{N_i} p(\bm m_{i,n} \mid \bm y_{i,n}) \\ &= \prod_{i=1}^I \prod_{n=1}^{N_i} \prod_{j=1}^J \rho_{i,n,j}^{1 - m_{i,n,j}} \p{1 - \rho_{i,n,j}}^{m_{i,n,j}}\\ &= \prod_{i=1}^I \prod_{n=1}^{N_i} \prod_{j=1}^J \rho_{i,n,j}^{1 - m_{i,n,j}} c_{i,n,j} \\ &= \prod_{i=1}^I \prod_{n=1}^{N_i} \prod_{j=1}^J \rho_{i,n,j}^{1 - m_{i,n,j}} \prod_{i=1}^I \prod_{n=1}^{N_i} \prod_{j=1}^J c_{i,n,j} \\ &= C \prod_{i=1}^I \prod_{n=1}^{N_i} \prod_{j=1}^J \rho_{i,n,j}^{1-m_{i,n,j}}, \end{align*} where $\rho_{i,n,j} = \text{sigmoid}(\beta_{0,i} + \beta_{1,i} y_{i,n,j} + \beta_{2,i} y_{i,n,j}^2)$, $c_{i,n,j} = \p{1-\rho_{i,n,j}}^{m_{i,n,j}}$, and $C=\displaystyle\prod_{i=1}^I \prod_{n=1}^{N_i} \prod_{j=1}^J c_{i,n,j}$ is a constant: for observed entries ($m_{i,n,j}=1$), $\rho_{i,n,j}$ depends only on the fixed $\bm\beta$ and the observed $y_{i,n,j}$, and for missing entries ($m_{i,n,j}=0$), $c_{i,n,j}=1$. Computing $p(\m \mid \bm{y})$ is computationally expensive when $N_i$ is large. Hence, we approximate it by iterating over only a subset of the data and scaling the relevant terms. The log of the resulting expression is \begin{align*} \log p(\m \mid \bm{y}) &= \log C + \sum_{i=1}^I \sum_{n=1}^{N_i} \sum_{j=1}^J (1-m_{i,n,j}) \log \rho_{i,n,j} \nonumber \\ &\approx \log C + \sum_{i=1}^I \frac{N_i}{\abs{S_i}} \sum_{n\in S_i} \sum_{j=1}^J (1-m_{i,n,j}) \log \rho_{i,n,j}, \end{align*} where $S_i$ is a subset of $\bc{1, \dots, N_i}$. The likelihood term $p(\bm{y}\mid\bm{\theta})$ is defined as \begin{align*} p(\bm{y} \mid \bm{\theta}) &= \prod_{i=1}^I \prod_{n=1}^{N_i} \underbrace{\left\{ \epsilon_i \prod_{j=1}^J \text{N}(y_{i,n,j} \mid 0, s^2_\epsilon) + (1-\epsilon_i) \sum_{k=1}^K w_{i,k} \prod_{j=1}^J \sum_{\ell=1}^{L_{z_{j,k}}} \eta_{i,j,\ell}^{z_{j,k}} \text{N}(y_{i,n,j}\mid \mu^\star_{z_{j,k}, \ell}, \sigma^2_i) \right\} }_\text{$=A_{i,n}$}. \end{align*} We thus have \begin{align*} \log p(\bm{y} \mid \bm{\theta}) &= \sum_{i=1}^I \sum_{n=1}^{N_i} \log A_{i,n} \nonumber \\ &\approx \sum_{i=1}^I \frac{N_i}{\abs{S_i}} \sum_{n\in S_i} \log A_{i,n} ~~\text{(if using mini-batches).} \end{align*} Finally, the variational distribution for the missing values in $\bm{y}$ is defined as \begin{align*} q(\bm{y}^{\text{missing}}) &= \prod_{i=1}^I \prod_{n=1}^{N_i} \prod_{j=1}^J q(y_{i,n,j} \mid r_{i,j}, s_{i,j}) ^{1-m_{i,n,j}} \nonumber \\ \Rightarrow \log q(\bm{y}^{\text{missing}}) &= \sum_{i=1}^I \sum_{n=1}^{N_i} \sum_{j=1}^J (1-m_{i,n,j}) \log q(y_{i,n,j} \mid r_{i,j}, s_{i,j}) \nonumber \\ &\approx \sum_{i=1}^I \frac{N_i}{\abs{S_i}} \sum_{n\in S_i} \sum_{j=1}^J (1-m_{i,n,j}) \log q(y_{i,n,j} \mid r_{i,j}, s_{i,j}) ~~\text{(if using mini-batches).} \end{align*} As previously noted, independent Gaussian variational distributions are placed on all other model parameters $\bm{\theta}$ after they are transformed to have support on $\mathbb{R}^{\text{dim}(\bm{\theta})}$. Notably, the parameters with support on simplexes (i.e., $\bm\eta$ and $\mbox{\boldmath $w$}$) are transformed using the stick-breaking transformation \citep{stan2016stan}. \begin{figure}[t] \centering \includegraphics[scale=.5]{img/misc/prob_miss_quad.pdf} \caption{A quadratic data missingship mechanism for imputing missing data that passes through the points $(\tilde{y}_1=-6.0, \tilde{\rho}_1=0.2)$, $(\tilde{y}_2=-4.0, \tilde{\rho}_2=0.8)$, and $(\tilde{y}_3=-2.0, \tilde{\rho}_3=0.05)$.} \label{fig:prob-miss-eg} \end{figure} \section{Specification of Data Missingship Mechanism}\label{sec:missing-spec} We discuss the approach used to specify the data missingship mechanism. Recall that we assume a logit regression model for the probability $\rho_{i,n,j}$ that $y_{i,n,j}$ is missing, given in \eqref{eq:link} of the main text: $\text{logit}(\rho_{i,n,j}) = \beta_{0,i} + \beta_{1,i} y_{i,n,j} + \beta_{2,i} y_{i,n,j}^2$, with $\beta_{p,i} \in \mathbb{R}$, $p \in \bc{0, 1, 2}$. To specify values of $\beta_{p,i}$, we first select three points $(\tilde{y}, \tilde{\rho})$ for each sample, $(\tilde{y}_1, \tilde{\rho}_1)$, $(\tilde{y}_2, \tilde{\rho}_2)$, and $(\tilde{y}_3, \tilde{\rho}_3)$. We then set $\text{logit}(\tilde{\rho}) = \beta_{0,i} + \beta_{1,i} \tilde{y} + \beta_{2,i}\tilde{y}^2$ at each of the three points and solve the resulting $3\times 3$ linear system for $\beta_{p,i}$.
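A minimal sketch of this solve (our own illustration, using the three example points shown in Figure~\ref{fig:prob-miss-eg}):
\begin{verbatim}
import numpy as np
from scipy.special import logit

def solve_beta(y_tilde, rho_tilde):
    # Solve logit(rho_k) = b0 + b1*y_k + b2*y_k^2 for k = 1, 2, 3.
    y = np.asarray(y_tilde, dtype=float)
    V = np.column_stack([np.ones_like(y), y, y ** 2])  # Vandermonde matrix
    return np.linalg.solve(V, logit(np.asarray(rho_tilde, dtype=float)))

b0, b1, b2 = solve_beta([-6.0, -4.0, -2.0], [0.2, 0.8, 0.05])
\end{verbatim}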
We choose the three points $(\tilde{y}, \tilde{\rho})$ to encode the subject knowledge that a missing $y_{i,n,j}$ strongly suggests that the marker is not expressed; the resulting mechanism encourages imputed values to be negative. For instance, Figure~\ref{fig:prob-miss-eg} shows a data missingship mechanism specified by selecting $(-6.0, 0.2)$, $(-4.0, 0.8)$, and $(-2.0, 0.05)$ for $(\tilde{y}, \tilde{\rho})$. This specification imputes values between $-6$ and $-2$ with large probability, and thus strongly implies that the marker is not expressed. We used empirical quantiles of the negative observed values of $y$ to specify $\tilde{y}$. \section{Computation of LPML and DIC}\label{sec:lpml-dic} We use the log pseudo marginal likelihood (LPML) and the deviance information criterion (DIC) to select the number of cell subpopulations ($K$), as discussed in \S \ref{sec:prob-model} of the main text. The LPML \citep{gelfand1992bayesian, gelfand1994bayesian} is defined as $\text{LPML} = \sum_{i=1}^n \log\text{CPO}_i$, where $\text{CPO}_i = \int f(\text{data}_i \mid \text{data}_{-i}, \theta)p(\theta \mid \text{data}_{-i})d\theta \approx \bk{\frac{1}{B}\sum_{b=1}^B \frac{1}{f(\text{data}_i \mid \theta^{(b)})}}^{-1}$ is the conditional predictive ordinate for observation $i$, and $f(\text{data}_i\mid\theta^{(b)})$ is the likelihood for observation $i$ evaluated at Monte Carlo sample $b$ of $B$ posterior samples. The likelihood of cell $n$ in sample $i$ is \begin{align} f(\bm m_{i,n}, \bm y_{i,n} \mid \bm{\theta}) &= \prod_{j=1}^J \rho_{i,n,j}^{1-m_{i,n,j}} (1-\rho_{i,n,j})^{m_{i,n,j}}\cdot \phi(y_{i,n,j} \mid \mu_{i,n,j}, \sigma^2_i) \nonumber\\ &\propto \prod_{j=1}^J \rho_{i,n,j}^{1 - m_{i,n,j}}\cdot \phi(y_{i,n,j} \mid \mu_{i,n,j}, \sigma^2_i), \label{eq:lpml-like-prop} \end{align} where $\phi(y\mid m, s^2)$ denotes the probability density function of the normal distribution with mean $m$ and variance $s^2$, evaluated at $y$. Note that $(1-\rho_{i,n,j})^{m_{i,n,j}}$ in \eqref{eq:lpml-like-prop} is dropped since it remains constant for observed $y_{i,n,j}$. We then compute the LPML as \begin{eqnarray*}\label{eq:cpo} \text{LPML} &=& \sum_{i=1}^I\sum_{n=1}^{N_i} \log\text{CPO}_{i,n}\\ &\approx& \sum_{i=1}^I\sum_{n=1}^{N_i} \log \left\{\frac{1}{B} \sum_{b=1}^B \frac{1}{f(\bm m_{i,n}, \bm y_{i,n} \mid \bm{\theta}^{(b)})}\right\}^{-1} \\ &\propto& \sum_{i=1}^I\sum_{n=1}^{N_i} \log \left\{\frac{1}{B} \sum_{b=1}^B \frac{1}{ \prod_{j=1}^J (\rho^{(b)}_{i,n,j})^{1-m_{i,n,j}}\cdot \phi(y_{i,n,j} \mid \mu^{(b)}_{i,n,j}, \sigma^{2, (b)}_{i}) }\right\}^{-1}. \end{eqnarray*} Deviance is defined as $D = -2\log f(\bm m, \bm y \mid \bm{\theta})$, where $f(\bm m, \bm y \mid \bm{\theta})$ is the likelihood. The deviance information criterion (DIC) \citep{dic} is computed as $\text{DIC} = 2\bar{D} - D(\bar{\bm{\theta}})$, where $\bar D = \text{E} \bk{D}$ is the posterior mean of the deviance and $\bar{\bm{\theta}}$ is the posterior mean of the parameters $\bm{\theta}$. We compute the likelihood as \begin{equation}\label{eq:dic-like} f(\bm m, \bm y \mid \bm{\theta}) = \prod_{i=1}^I\prod_{n=1}^{N_i} \prod_{j=1}^J \rho_{i,n,j}^{1-m_{i,n,j}} \cdot \phi(y_{i,n,j} \mid \mu_{i,n,j}, \sigma^2_i). \end{equation} The parameters that appear in the likelihood are $\mu_{i,n,j}$, $\sigma^2_i$, and the missing values of $y_{i,n,j}$, so $\bar{\bm{\theta}}$ is obtained by computing the posterior means of these quantities.
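Both criteria reduce to simple operations on matrices of per-cell log-likelihood evaluations from the MCMC output. A minimal, numerically stable sketch (our own illustration; the log-likelihood matrices are assumed precomputed, and the DIC function uses the formula given above):
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def lpml(loglike):
    # loglike: (B, N) matrix with loglike[b, i] = log f(data_i | theta^(b)).
    # log CPO_i = log B - logsumexp_b(-loglike[b, i])   (harmonic mean).
    B = loglike.shape[0]
    return np.sum(np.log(B) - logsumexp(-loglike, axis=0))

def dic(loglike, loglike_at_post_mean):
    # D = -2 * total log-likelihood;  DIC = 2 * Dbar - D(theta_bar).
    d_bar = np.mean(-2.0 * loglike.sum(axis=1))   # posterior mean deviance
    d_hat = -2.0 * np.sum(loglike_at_post_mean)   # deviance at post. means
    return 2.0 * d_bar - d_hat
\end{verbatim}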
\section{Simulation Study}\label{sec:sim} \subsection{Additional Results for Simulation 1}\label{sec:sim-1} Here we present additional figures and tables for Simulation 1. Figure~\ref{fig:sim-vb-1} summarizes the results from the analysis of Simulation 1 via ADVI. It contains the elementwise posterior means of $\bm Z$ and the posterior means of $\bm w_i$ (panels (a), (c), and (e)), and heatmaps of the simulated data $y_{i,n,j}$ sorted according to the posterior mode of the cell subpopulation indicators $\hat\lambda_{i,n}$ (panels (b), (d), and (f)). Table~\ref{tab:missmechsen-sim} contains the three data missingship mechanisms (MM) used in Simulation 1, with MM0 the default mechanism. Recall that we used empirical $\tilde{\bm q}$-quantiles to specify $\tilde{\bm y}$; different choices of $\tilde{\bm q}$ yield different values of $\bm \beta$. Three different sets of $\tilde{\bm q}$ are used for the sensitivity analysis, while $\tilde{\bm \rho}$ is held fixed. For each mechanism, the LPML and DIC are shown in the last two columns of the table. Figures \ref{fig:Z-w-sim1-missmechsen-1} and \ref{fig:Z-w-sim1-missmechsen-2} summarize the results of the MCMC analysis of Simulation 1 under data missingship mechanisms I and II, respectively. The figures contain the posterior estimates of $\bm Z$ and $\bm w$ in panels (a), (c), and (e), and heatmaps of the simulated data $y_{i,n,j}$ sorted according to the posterior estimate of the cell subpopulation indicator $\hat\lambda_{i,n}$ in panels (b), (d), and (f). \subsection{Simulation 2}\label{sec:sim-2} An additional simulation study, Simulation 2, was performed assuming a larger simulated dataset and a more complex cell subpopulation structure. The dataset was simulated in a manner similar to Simulation 1 in \S~\ref{sec:sim-study} of the main text, but with a larger data size, $N=(40000, 5000, 10000)$, and more cell subpopulations, $K^{\mbox{\tiny TR}}=10$. We first specified $\mbox{\boldmath $Z$}^{\mbox{\tiny TR}}$ and simulated $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}_i$ from a Dirichlet distribution whose parameters are a random permutation of $(1, \ldots, K^{\mbox{\tiny TR}})$. Table~\ref{tab:sim2-tr} illustrates $\mbox{\boldmath $Z$}^{\mbox{\tiny TR}}$ and $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}$. Parameters $\mu^{\star, {\mbox{\tiny TR}}}_{0}$, $\mu^{\star, {\mbox{\tiny TR}}}_{1}$, and $\sigma^{2, {\mbox{\tiny TR}}}_i$ are set in the same way as in Simulation 1. We fit the model over a grid of values of $K$ from 2 to 20 in increments of 2. For all models, we fixed $L_0=5$ and $L_1=5$; recall that $L_0^{\mbox{\tiny TR}}=L_1^{\mbox{\tiny TR}}=3$. All other parameter specifications, MCMC initialization, and MCMC settings were the same as in Simulation 1.
The LPML, DIC, and calibration metric for $K$ are presented in Figure \ref{fig:metrics-sim2}. The metrics indicate that the model with $\hat{K}=10$ fits the data best and achieves a balance between good model fit and low model complexity. Figure \ref{fig:sim2-post} shows posterior estimates of the clusterings for each sample in the large simulated dataset, along with posterior estimates of the subpopulations present ($\hat{\bm{Z}}_i$) and their abundances ($\hat{\mbox{\boldmath $w$}}_i$) in each sample. The red, blue, and black cells represent high, low, and non-observed expression levels, respectively. Horizontal yellow lines separate cells into clusters. The simulation truth for the cell subpopulations in $\mbox{\boldmath $Z$}^{\mbox{\tiny TR}}$ is recovered by $\hat{\bm{Z}}$, and $\hat{\mbox{\boldmath $w$}}_i$ is close to $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}$. Figure~\ref{fig:sim2-FlowSOM-Z} shows the estimated clusterings for each sample $\bm{y}_i$ using FlowSOM. The largest cluster in sample 1, shown in panel (a), contains a mixture of high and low expression levels for marker 9, indicating poor clustering performance; this undesired behavior is not observed under the FAM. Figure~\ref{fig:sim-vb-2} summarizes the posterior inference obtained via ADVI. The posterior means of $\bm Z$ and of $\bm w_i$ are in panels (a), (c), and (e), and heatmaps of the simulated data $y_{i,n,j}$, sorted according to the posterior mode of the cell subpopulation indicators $\hat\lambda_{i,n}$, are in panels (b), (d), and (f). The posterior inference covers the simulation truth well. We performed a sensitivity analysis with respect to the specification of the data missingship mechanism after selecting $K=10$ via DIC and LPML. Table \ref{tab:missmechsen-sim2} summarizes the missingship mechanisms used in the sensitivity analysis. Again, we note that inference on $\bm{Z}$ and $\bm w$ does not change significantly across the various missingship mechanisms. However, the fit on the observed data (in terms of LPML and DIC) was best for missingship mechanism II, which encourages imputing values that are more negative and best matched the simulation truth. Figures \ref{fig:Z-w-sim2-missmechsen-1} and \ref{fig:Z-w-sim2-missmechsen-2} summarize the results of the MCMC analysis of Simulation 2 under data missingship mechanisms I and II, respectively. The figures contain the posterior estimates of $\bm Z$ and $\bm w$ in panels (a), (c), and (e), and heatmaps of the simulated data $y_{i,n,j}$ sorted according to the posterior estimate of the cell subpopulation indicators $\hat\lambda_{i,n}$ in panels (b), (d), and (f). \section{Additional Results for Analysis of Cord Blood Derived NK Cell Data} This section contains additional figures and tables for the CB NK cell data analysis presented in \S~\ref{sec:cb-analysis} of the main text. Table~\ref{tab:marker-codes} lists the marker names and numbers for each marker included in the CB-derived NK cell data analysis. Figure~\ref{fig:CB-tsne} visualizes the CB NK cell data in a two-dimensional space using the data visualization technique ``t-SNE (t-distributed Stochastic Neighbor Embedding)'' \citep{maaten2008visualizing, van2014accelerating}. The two-dimensional embeddings are learned separately for each sample. Cells are represented with different symbols and colors according to their posterior point estimate $\hat{\lambda}_{i,n}$ of the cell clustering.
All cells in the samples are used to obtain the embeddings, but only cells in subpopulations with $\hat{w}_{i,k}\geq 0.05$ are included in the plots for better illustration. Table~\ref{tab:missmechsen-cb} contains the three data missingship mechanisms (MM) used in analyzing the CB-derived NK cell data, with MM0 the default mechanism. Each mechanism defines the parameters $\bm\beta$ through the $\tilde{\bm q}$-quantiles of the negative observed values in each sample and the probabilities $\tilde{\bm\rho}$ that a record is missing at those quantiles. For each mechanism, the LPML and DIC are shown. Table~\ref{tab:missmechsen-cb-beta} lists the implied $\bm\beta$ for each data missingship mechanism. Figures \ref{fig:Z-w-CB-missmechsen-1} and \ref{fig:Z-w-CB-missmechsen-2} summarize the results of the MCMC analysis of the CB NK cell data under data missingship mechanisms I and II, respectively. The posterior estimates of $\bm Z$ and $\bm w$ are shown in panels (a), (c), and (e), and heatmaps of the data $y_{i,n,j}$, sorted according to the posterior estimate of the cell subpopulation indicators $\hat\lambda_{i,n}$, in panels (b), (d), and (f). Figure~\ref{fig:cb-vb-Z} summarizes the results from the analysis of the CB NK cell data via ADVI. The posterior means of $\bm Z$ and of $\bm w_i$ are in panels (a), (c), and (e), and heatmaps of the data $y_{i,n,j}$, sorted according to the posterior mode of the cell subpopulation indicators $\hat\lambda_{i,n}$, in panels (b), (d), and (f). \clearpage \begin{figure}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K5/BS2000/K_VB30/10/img/yz/Z1_est_minpresence0.01-1}.pdf} & \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K5/BS2000/K_VB30/10/img/yz/y1_post}.pdf} \\ (a) $\hat{\bm{Z}}^\prime_1$ and $\hat{\mbox{\boldmath $w$}}_1$ & (b) $y_{1nj}$\\ \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K5/BS2000/K_VB30/10/img/yz/Z2_est_minpresence0.01-1}.pdf} & \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K5/BS2000/K_VB30/10/img/yz/y2_post}.pdf} \\ (c) $\hat{\bm{Z}}^\prime_2$ and $\hat{\mbox{\boldmath $w$}}_2$ & (d) $y_{2nj}$\\ \end{tabular} \vspace{-0.05in} \caption{[ADVI for Simulation 1] In (a) and (c), the transpose $\hat{\bm{Z}}^\prime_i$ of $\hat \mbox{\boldmath $Z$}_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for samples 1 and 2, respectively, with expressed markers denoted by black and unexpressed markers by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. Heatmaps of $\bm y_i$ are shown for sample 1 in (b) and sample 2 in (d). Cells are ordered by the posterior point estimates of their subpopulations, $\hat{\lambda}_{i,n}$. Cells are given in rows and markers in columns. High and low expression levels are represented by red and blue, respectively, and black represents missing values. Yellow horizontal lines separate cells into five subpopulations. Posterior estimates are obtained via ADVI.
} \label{fig:sim-vb-1} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K5/BS2000/K_VB30/10/img/yz/Z3_est_minpresence0.01-1}.pdf} & \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K5/BS2000/K_VB30/10/img/yz/y3_post}.pdf} \\ (e) $\hat{\bm{Z}}^\prime_3$ and $\hat{\mbox{\boldmath $w$}}_3$ & (f) $y_{3nj}$\\ \end{tabular} \vspace{-0.05in} \caption*{Figure~\ref{fig:sim-vb-1} continued: [ADVI for Simulation 1] In (e), the transpose $\hat{\bm{Z}}^\prime_3$ of $\hat \mbox{\boldmath $Z$}_3$ and $\hat{\mbox{\boldmath $w$}}_3$ are shown for sample 3, with expressed markers denoted by black and unexpressed markers by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. The heatmap of $\bm y_3$ is shown in (f). Cells are ordered by the posterior point estimates of their subpopulations, $\hat{\lambda}_{i,n}$. Cells are given in rows and markers in columns. High and low expression levels are represented by red and blue, respectively, and black represents missing values. Yellow horizontal lines separate cells into five subpopulations. Posterior estimates are obtained via ADVI. } \end{center} \end{figure} \clearpage \begin{table}[t] \centering \begin{tabular}{c|cccc} \hline Data Missingship & $\tilde{\bm q}$ & Probability of Missing $(\tilde{\bm\rho})$ & LPML & DIC \\ Mechanism & & & & \\ \hline 0 & (0\%, 25\%, 50\%) & (5\%, 80\%, 5\%) & -16.728 & 172989 \\ I & (0\%, 20\%, 40\%) & (5\%, 80\%, 5\%) & -16.681 & 172914 \\ II & (0\%, 15\%, 30\%) & (5\%, 80\%, 5\%) & -16.462 & 170971 \\ \hline \end{tabular} \caption[Data Missingship Mechanism Specifications]{ Data missingship mechanisms used for Simulation 1. The $\tilde{\bm q}$-quantiles of the negative observed values in each sample are used to specify $\tilde{\bm y}$, and $\tilde{\bm\rho}$ are the probabilities of missing at those $\tilde{\bm y}$. Three different sets of $\tilde{\bm q}$ are used, with $\tilde{\bm \rho}$ held fixed, to examine the sensitivity to the missingship mechanism specification. LPML and DIC are shown in the last two columns under each specification.} \label{tab:missmechsen-sim} \end{table} \clearpage \begin{figure}[t] \centering \begin{tabular}{ccc} \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech1/sep/{y_dat1_only_minpresence0.01_reduced}.png} & \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech1/sep/{y_dat2_only_minpresence0.01_reduced}.png} & \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech1/sep/{y_dat3_only_minpresence0.01_reduced}.png} \\ (a) heatmap of $y_{1nj}$ & (b) heatmap of $y_{2nj}$ & (c) heatmap of $y_{3nj}$\\ % \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech1/sep/{ZT_hat1_minpresence0.01-1}.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech1/sep/{ZT_hat2_minpresence0.01-1}.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech1/sep/{ZT_hat3_minpresence0.01-1}.pdf} \\ % (d) $\hat{\bm{Z}}^\prime_1$ \& $\hat{\mbox{\boldmath $w$}}_1$ & (e) $\hat{\bm{Z}}^\prime_2$ \& $\hat{\mbox{\boldmath $w$}}_2$ & (f) $\hat{\bm{Z}}^\prime_3$ \& $\hat{\mbox{\boldmath $w$}}_3$ \\ \end{tabular} \caption{Data missingship mechanism sensitivity analysis for Simulation 1. Specification I is used for $\bm \beta$. Heatmaps of $\bm{y}_i$ are shown in (a)-(c) for samples 1-3, respectively. Cells are rearranged by the posterior point estimate of cell clustering, $\hat{\lambda}_{i,n}$.
Cells and markers are in rows and columns, respectively. High and low expression levels are in red and blue, respectively, and black is used for missing values. Yellow horizontal lines separate cells by different subpopulations. $\hat{\bm{Z}}^\prime_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for each of the samples in (d)-(f). We include only subpopulations with $\hat{w}_{i,k} > 1\%$.} \label{fig:Z-w-sim1-missmechsen-1} \end{figure} \clearpage \begin{figure}[t] \centering \begin{tabular}{ccc} \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech2/sep/{y_dat1_only_minpresence0.01_reduced}.png} & \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech2/sep/{y_dat2_only_minpresence0.01_reduced}.png} & \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech2/sep/{y_dat3_only_minpresence0.01_reduced}.png} \\ (a) heatmap of $y_{1nj}$ & (b) heatmap of $y_{2nj}$ & (c) heatmap of $y_{3nj}$\\ % \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech2/sep/{ZT_hat1_minpresence0.01-1}.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech2/sep/{ZT_hat2_minpresence0.01-1}.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/small/missmech2/sep/{ZT_hat3_minpresence0.01-1}.pdf} \\ (d) $\hat{\bm{Z}}^\prime_1$ \& $\hat{\mbox{\boldmath $w$}}_1$ & (e) $\hat{\bm{Z}}^\prime_2$ \& $\hat{\mbox{\boldmath $w$}}_2$ & (f) $\hat{\bm{Z}}^\prime_3$ \& $\hat{\mbox{\boldmath $w$}}_3$ \\ \end{tabular} \caption{Data missingship mechanism sensitivity analysis for Simulation 1. Specification II is used for $\bm \beta$. Heatmaps of $\bm{y}_i$ are shown in (a)-(c) for samples 1-3, respectively. Cells are rearranged by the posterior point estimate of cell clustering, $\hat{\lambda}_{i,n}$. Cells and markers are in rows and columns, respectively. High and low expression levels are in red and blue, respectively, and black is used for missing values. Yellow horizontal lines separate cells by different subpopulations. $\hat{\bm{Z}}^\prime_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for each of the samples in (d)-(f). We include only subpopulations with $\hat{w}_{i,k} > 1\%$.} \label{fig:Z-w-sim1-missmechsen-2} \end{figure} \clearpage \begin{table}[t!] \begin{subtable}{.5\linewidth} \centering \begin{tabular}{c} \includegraphics[width=0.95\columnwidth, height=0.35\textheight]{img/post/sim/big/best/img/Z_true-improved-1.pdf}\\ \end{tabular} \caption{ $\bm{Z}^{\mbox{\tiny TR}}$} \end{subtable}% \begin{subtable}{.5\linewidth} \centering \begin{tabular}{|c|rrr|} \hline subpopulations & sample 1 & sample 2 & sample 3 \\ \hline $k=1$ & 0.136 & 0.160 & 0.033 \\ $k=2$ & 0.132 & 0.021 & 0.128 \\ $k=3$ & 0.111 & 0.037 & 0.257 \\ $k=4$ & 0.157 & 0.084 & 0.110 \\ $k=5$ & 0.044 & 0.183 & 0.049 \\ $k=6$ & 0.046 & 0.111 & 0.142 \\ $k=7$ & 0.215 & 0.045 & 0.142 \\ $k=8$ & 0.072 & 0.109 & 0.001 \\ $k=9$ & 0.018 & 0.109 & 0.099 \\ $k=10$ & 0.065 & 0.135 & 0.035 \\ \hline \end{tabular} \caption{$\mbox{\boldmath $w$}^{\mbox{\tiny TR}}$} \end{subtable} \caption{[Simulation 2] $\bm{Z}^{\mbox{\tiny TR}}$ and $\mbox{\boldmath $w$}^{\mbox{\tiny TR}}$ are illustrated in (a) and (b), respectively. $K^{\mbox{\tiny TR}}=10$, $J=20$, $I=3$ and $N=(40000, 5000, 10000)$ are assumed.
Black and white in (a) represent $z^{\mbox{\tiny TR}}_{j,k}=1$ and 0, respectively.} \label{tab:sim2-tr} \end{table} \clearpage \begin{figure} \begin{center} \begin{tabular}{ccc} \includegraphics[width=.3\columnwidth]{img/sim-paper/metrics/Nfac5000/lpml.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/metrics/Nfac5000/dic.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/metrics/Nfac5000/lpml-vs-numsmallclus-improved.pdf} \\ {(a) LPML} & {(b) DIC} & {(c) Calibration of $K$} \\ \end{tabular} \end{center} \caption{[Simulation 2] Plots of (a) LPML, (b) DIC, and (c) the calibration metric for $K=2, 4, \dots, 20$ for the large simulated dataset suggest that $\hat{K}=10$ is sufficient to explain the latent cell subpopulations.} \label{fig:metrics-sim2} \end{figure} \clearpage \begin{figure}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.5\columnwidth]{img/post/sim/big/best/img/Z1-improved-1.pdf} & \includegraphics[width=0.5\columnwidth]{img/post/sim/big/best/img/y1.pdf} \\ {(a) $\hat{\bm{Z}}_1$ \& $\hat{\mbox{\boldmath $w$}}_1$} & {(b) $y_{1nj}$} \\ \includegraphics[width=0.5\columnwidth]{img/post/sim/big/best/img/Z2-improved-1.pdf} & \includegraphics[width=0.5\columnwidth]{img/post/sim/big/best/img/y2.pdf} \\ {(c) $\hat{\bm{Z}}_2$ \& $\hat{\mbox{\boldmath $w$}}_2$} & {(d) $y_{2nj}$}\\ \end{tabular} \end{center} \vspace{-0.05in} % \caption{Results of Simulation 2. In (a) and (c), $\hat{\bm{Z}}^\prime_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for samples 1 and 2, respectively, with expressed markers denoted by black and unexpressed markers by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. Heatmaps of $\bm y_i$ are shown for sample 1 in (b) and sample 2 in (d). Cells are ordered by the posterior point estimates of their subpopulations, $\hat{\lambda}_{i,n}$. Cells are given in rows and markers in columns. High and low expression levels are represented by red and blue, respectively, and black represents missing values. Yellow horizontal lines separate cells into subpopulations.} \label{fig:sim2-post} \end{figure} \clearpage \begin{figure}[h!] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.5\columnwidth]{img/post/sim/big/best/img/Z3-improved-1.pdf} & \includegraphics[width=0.5\columnwidth]{img/post/sim/big/best/img/y3.pdf} \\ {(e) $\hat{\bm{Z}}_3$ \& $\hat{\mbox{\boldmath $w$}}_3$} & {(f) $y_{3nj}$}\\ \end{tabular} \end{center} \vspace{-0.05in} % \caption*{Figure~\ref{fig:sim2-post}. Results of Simulation 2 (continued). In (e), $\hat{\bm{Z}}^\prime_3$ and $\hat{\mbox{\boldmath $w$}}_3$ are shown for sample 3, with expressed markers denoted by black and unexpressed markers by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. The heatmap of $\bm y_3$ is shown in (f). Cells are ordered by the posterior point estimates of their subpopulations, $\hat{\lambda}_{i,n}$. Cells are given in rows and markers in columns. High and low expression levels are represented by red and blue, respectively, and black represents missing values.
Yellow horizontal lines separate cells into subpopulations.} \end{figure} \begin{figure}[t] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K10/BS2000/K_VB30/10/img/yz/Z1_est_minpresence0.01-1}.pdf} & \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K10/BS2000/K_VB30/10/img/yz/y1_post}.pdf} \\ (a) $\hat{\bm{Z}}^\prime_1$ and $\hat{\mbox{\boldmath $w$}}_1$ & (b) $y_{1nj}$\\ \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K10/BS2000/K_VB30/10/img/yz/Z2_est_minpresence0.01-1}.pdf} & \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K10/BS2000/K_VB30/10/img/yz/y2_post}.pdf} \\ (c) $\hat{\bm{Z}}^\prime_2$ and $\hat{\mbox{\boldmath $w$}}_2$ & (d) $y_{2nj}$\\ \end{tabular} \vspace{-0.05in} \caption{[ADVI for Simulation 2] In (a) and (c), the transpose $\hat{\bm{Z}}^\prime_i$ of $\hat \mbox{\boldmath $Z$}_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for samples 1 and 2, respectively, with expressed markers denoted by black and unexpressed markers by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. Heatmaps of $\bm y_i$ are shown for sample 1 in (b) and sample 2 in (d). Cells are ordered by the posterior point estimates of their subpopulations, $\hat{\lambda}_{i,n}$. Cells are given in rows and markers in columns. High and low expression levels are represented by red and blue, respectively, and black represents missing values. Yellow horizontal lines separate cells into subpopulations. Posterior estimates are obtained via ADVI. } \label{fig:sim-vb-2} \end{center} \end{figure} \begin{figure}[thb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K10/BS2000/K_VB30/10/img/yz/Z3_est_minpresence0.01-1}.pdf} & \includegraphics[width=0.5\columnwidth]{{img/vb-sim-paper/K10/BS2000/K_VB30/10/img/yz/y3_post}.pdf} \\ (e) $\hat{\bm{Z}}^\prime_3$ and $\hat{\mbox{\boldmath $w$}}_3$ & (f) $y_{3nj}$\\ \end{tabular} \vspace{-0.05in} \caption*{Figure~\ref{fig:sim-vb-2} continued: [ADVI for Simulation 2] In (e), the transpose $\hat{\bm{Z}}^\prime_3$ of $\hat \mbox{\boldmath $Z$}_3$ and $\hat{\mbox{\boldmath $w$}}_3$ are shown for sample 3, with expressed markers denoted by black and unexpressed markers by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. The heatmap of $\bm y_3$ is shown in (f). Cells are ordered by the posterior point estimates of their subpopulations, $\hat{\lambda}_{i,n}$. Cells are given in rows and markers in columns. High and low expression levels are represented by red and blue, respectively, and black represents missing values. Yellow horizontal lines separate cells into subpopulations. Posterior estimates are obtained via ADVI.} \end{center} \end{figure} \clearpage \begin{table}[t] \centering \begin{tabular}{c|cccc} \hline Data Missingship & $\tilde{\bm q}$ & Probability of Missing $(\tilde{\bm\rho})$ & LPML & DIC \\ Mechanism & & & & \\ \hline 0 & (0\%, 25\%, 50\%) & (5\%, 80\%, 5\%) & -16.215 & 1675117 \\ I & (0\%, 20\%, 40\%) & (5\%, 80\%, 5\%) & -16.052 & 1662834 \\ II & (0\%, 15\%, 30\%) & (5\%, 80\%, 5\%) & -15.771 & 1640255 \\ \hline \end{tabular} \caption[Data Missingship Mechanism Specifications for Simulation 2]{Data missingship mechanisms used for Simulation 2. The $\tilde{\bm q}$-quantiles of the negative observed values in each sample are used to specify $\tilde{\bm y}$, and $\tilde{\bm\rho}$ are the probabilities of missing at those $\tilde{\bm y}$.
Three different sets of $\tilde{\bm q}$ are used, with $\tilde{\bm \rho}$ held fixed, to examine the sensitivity to the missingship mechanism specification. LPML and DIC are shown in the last two columns under each specification. } \label{tab:missmechsen-sim2} \end{table} \begin{figure}[t] \centering \begin{tabular}{ccc} \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech1/sep/{y_dat1_only_minpresence0.01_reduced}.png} & \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech1/sep/{y_dat2_only_minpresence0.01_reduced}.png} & \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech1/sep/{y_dat3_only_minpresence0.01_reduced}.png} \\ (a) heatmap of $y_{1nj}$ & (b) heatmap of $y_{2nj}$ & (c) heatmap of $y_{3nj}$\\ % \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech1/sep/{ZT_hat1_minpresence0.01-1}.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech1/sep/{ZT_hat2_minpresence0.01-1}.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech1/sep/{ZT_hat3_minpresence0.01-1}.pdf} \\ % (d) $\hat{\bm{Z}}^\prime_1$ \& $\hat{\mbox{\boldmath $w$}}_1$ & (e) $\hat{\bm{Z}}^\prime_2$ \& $\hat{\mbox{\boldmath $w$}}_2$ & (f) $\hat{\bm{Z}}^\prime_3$ \& $\hat{\mbox{\boldmath $w$}}_3$ \\ \end{tabular} \caption{Data missingship mechanism sensitivity analysis for Simulation 2. Specification I is used for $\bm \beta$. Heatmaps of $\bm{y}_i$ are shown in (a)-(c) for samples 1-3, respectively. Cells are rearranged by the posterior point estimate of cell clustering, $\hat{\lambda}_{i,n}$. Cells and markers are in rows and columns, respectively. High and low expression levels are in red and blue, respectively, and black is used for missing values. Yellow horizontal lines separate cells by different subpopulations. $\hat{\bm{Z}}^\prime_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for each of the samples in (d)-(f). We include only subpopulations with $\hat{w}_{i,k} > 1\%$.} \label{fig:Z-w-sim2-missmechsen-1} \end{figure} \begin{figure}[t] \centering \begin{tabular}{ccc} \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech2/sep/{y_dat1_only_minpresence0.01_reduced}.png} & \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech2/sep/{y_dat2_only_minpresence0.01_reduced}.png} & \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech2/sep/{y_dat3_only_minpresence0.01_reduced}.png} \\ (a) heatmap of $y_{1nj}$ & (b) heatmap of $y_{2nj}$ & (c) heatmap of $y_{3nj}$\\ % \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech2/sep/{ZT_hat1_minpresence0.01-1}.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech2/sep/{ZT_hat2_minpresence0.01-1}.pdf} & \includegraphics[width=.3\columnwidth]{img/sim-paper/big/missmech2/sep/{ZT_hat3_minpresence0.01-1}.pdf} \\ (d) $\hat{\bm{Z}}^\prime_1$ \& $\hat{\mbox{\boldmath $w$}}_1$ & (e) $\hat{\bm{Z}}^\prime_2$ \& $\hat{\mbox{\boldmath $w$}}_2$ & (f) $\hat{\bm{Z}}^\prime_3$ \& $\hat{\mbox{\boldmath $w$}}_3$ \\ \end{tabular} \caption{Data missingship mechanism sensitivity analysis for Simulation 2. Specification II is used for $\bm \beta$. Heatmaps of $\bm{y}_i$ are shown in (a)-(c) for samples 1-3, respectively. Cells are rearranged by the posterior point estimate of cell clustering, $\hat{\lambda}_{i,n}$. Cells and markers are in rows and columns, respectively. High and low expression levels are in red and blue, respectively, and black is used for missing values. Yellow horizontal lines separate cells by different subpopulations.
$\hat{\bm{Z}}^\prime_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for each of the samples in (d)-(f). We include only subpopulations with $\hat{w}_{i,k} > 1\%$.} \label{fig:Z-w-sim2-missmechsen-2} \end{figure} \clearpage \begin{figure}[h] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.3\columnwidth]{img/sim-paper/data/kills-flowsom/N5000/K10/1/YZ001_FlowSOM_reduced.png}& \includegraphics[width=0.3\columnwidth]{img/sim-paper/data/kills-flowsom/N5000/K10/1/YZ002_FlowSOM_reduced.png}& \includegraphics[width=0.3\columnwidth]{img/sim-paper/data/kills-flowsom/N5000/K10/1/YZ003_FlowSOM_reduced.png}\\ (a) Sample 1 & (b) Sample 2 & (c) Sample 3\\ \end{tabular} \vspace{-0.05in} \caption{\small[FlowSOM for Simulation 2] Heatmaps of $\bm{y}_{i}$ for Simulation 2. Samples 1-3 are in (a)-(c), respectively. The cells are sorted by the cluster labels $\lambda_{i,n}$ for each sample, estimated by FlowSOM.} \label{fig:sim2-FlowSOM-Z} \end{center} \end{figure} \clearpage \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline \textbf{Marker} & \textbf{Marker} \\ \textbf{Number} & \textbf{Name} \\ \hline 1 & 2B4 \\ 2 & KIR2DL3 \\ 3 & KIR3DL1 \\ 4 & CD158B \\ \hline 5 & CD16 \\ 6 & CD27 \\ 7 & CD62L \\ 8 & CD8 \\ \hline 9 & CD94 \\ 10 & DNAM1 \\ 11 & EOMES \\ 12 & KLRG1 \\ \hline 13 & NKG2A \\ 14 & NKG2C \\ 15 & NKG2D \\ 16 & NKP30 \\ \hline 17 & SIGLEC7 \\ 18 & TBET \\ 19 & TIGIT \\ 20 & ZAP70 \\ \hline \end{tabular} \caption{Marker names and numbers for each marker referenced in the CB NK cell data.} \label{tab:marker-codes} \end{table} \clearpage \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.47\columnwidth]{img/cb-tsne//tsne_sample_1.pdf}& \includegraphics[width=0.47\columnwidth]{img/cb-tsne//tsne_sample_2.pdf}\\ (a) Sample 1 & (b) Sample 2 \\ \includegraphics[width=0.47\columnwidth]{img/cb-tsne//tsne_sample_3.pdf}&\\ (c) Sample 3 & \\ \end{tabular} \vspace{-0.05in} \caption{\small[t-SNE plots for the CB data] The CB data are visualized using two-dimensional t-SNE embeddings learned separately for each sample, where each point represents a cell. Cells in different subpopulations estimated by the FAM are marked by different symbols and colors. At the top of the scatterplots, the subpopulation numbers are listed with their corresponding symbols and colors. All cells are used to obtain the t-SNE embeddings, but only cells belonging to subpopulations with $\hat{w}_{ik} \geq 0.05$ are included in the plots for better illustration. } \label{fig:CB-tsne} \end{center} \end{figure} \clearpage \begin{table}[h] \centering \begin{tabular}{|c|cccc|} \hline Data Missingship & $\tilde{\bm q}$ & Probability of Missing $(\tilde{\bm\rho})$ & LPML & DIC \\ Mechanism & & & & \\ \hline 0 & (0\%, 25\%, 50\%) & (5\%, 80\%, 5\%) & -24.90 & 2569097 \\ I & (0\%, 20\%, 40\%) & (5\%, 80\%, 5\%) & -24.93 & 2569098 \\ II & (0\%, 15\%, 30\%) & (5\%, 80\%, 5\%) & -24.98 & 2569098 \\ \hline \end{tabular} \caption[Different data missingship mechanisms in the CB NK cell data analysis]{ The $\tilde{\bm q}$-quantiles of the negative observed values in each sample are used to specify $\tilde{\bm y}$, and $\tilde{\bm\rho}$ are the probabilities of missing at those $\tilde{\bm y}$. Three different sets of $\tilde{\bm q}$ are used, with $\tilde{\bm \rho}$ held fixed, to examine the sensitivity to the missingship mechanism specification. LPML and DIC are shown in the last two columns under each specification.
} \label{tab:missmechsen-cb} \end{table} \begin{table}[h] \centering \begin{tabular}{|c|c|rrr|} \hline Data Missingship Mechanism & $\beta$ & Sample 1 & Sample 2 & Sample 3 \\ \hline \hline 0 & $\beta_0$ & -15.35 & -15.73 & -13.66 \\ & $\beta_1$ & -10.39 & -10.20 & -9.60 \\ & $\beta_2$ & -1.38 & -1.34 & -1.30 \\ \hline \hline I & $\beta_0$ & -20.40 & -21.50 & -18.21 \\ & $\beta_1$ & -12.60 & -12.76 & -11.62 \\ & $\beta_2$ & -1.61 & -1.61 & -1.51 \\ \hline \hline II & $\beta_0$ & -27.43 & -29.21 & -25.26 \\ & $\beta_1$ & -15.52 & -15.86 & -14.62 \\ & $\beta_2$ & -1.90 & -1.91 & -1.81 \\ \hline \end{tabular} \caption{Values of $\bm\beta$ used in the sensitivity analysis to the missingship mechanism in the CB NK cell data analysis.} \label{tab:missmechsen-cb-beta} \end{table} \begin{figure}[t] \centering \begin{tabular}{ccc} \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech1/img/y_dat1.pdf} & \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech1/img/y_dat2.pdf} & \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech1/img/y_dat3.pdf} \\ (a) heatmap of $y_{1nj}$ & (b) heatmap of $y_{2nj}$ & (c) heatmap of $y_{3nj}$\\ % \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech1/img/ZT_hat_1.pdf} & \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech1/img/ZT_hat_2.pdf} & \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech1/img/ZT_hat_3.pdf} \\ % (d) $\hat{\bm{Z}}^\prime_1$ \& $\hat{\mbox{\boldmath $w$}}_1$ & (e) $\hat{\bm{Z}}^\prime_2$ \& $\hat{\mbox{\boldmath $w$}}_2$ & (f) $\hat{\bm{Z}}^\prime_3$ \& $\hat{\mbox{\boldmath $w$}}_3$ \\ \end{tabular} \caption{Data missingship mechanism sensitivity analysis for the CB NK cell data analysis. Specification I is used for $\bm \beta$. Heatmaps of $\bm{y}_i$ are shown in (a)-(c) for samples 1-3, respectively. Cells are rearranged by the posterior point estimate of the cell clustering, $\hat{\lambda}_{i,n}$. Cells and markers are in rows and columns, respectively. High and low expression levels are in red and blue, respectively, and black is used for missing values. Yellow horizontal lines separate cells by different subpopulations. $\hat{\bm{Z}}^\prime_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for each of the samples in (d)-(f). We include only subpopulations with $\hat{w}_{i,k} > 1\%$.} \label{fig:Z-w-CB-missmechsen-1} \end{figure} \begin{figure}[t] \centering \begin{tabular}{ccc} \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech2/img/y_dat1.pdf} & \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech2/img/y_dat2.pdf} & \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech2/img/y_dat3.pdf} \\ (a) heatmap of $y_{1nj}$ & (b) heatmap of $y_{2nj}$ & (c) heatmap of $y_{3nj}$\\ % \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech2/img/ZT_hat1.pdf} & \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech2/img/ZT_hat2.pdf} & \includegraphics[width=.3\columnwidth]{img/cb-paper/missmech2/img/ZT_hat3.pdf} \\ (d) $\hat{\bm{Z}}^\prime_1$ \& $\hat{\mbox{\boldmath $w$}}_1$ & (e) $\hat{\bm{Z}}^\prime_2$ \& $\hat{\mbox{\boldmath $w$}}_2$ & (f) $\hat{\bm{Z}}^\prime_3$ \& $\hat{\mbox{\boldmath $w$}}_3$ \\ \end{tabular} \caption{Data missingship mechanism sensitivity analysis for the CB NK cell data analysis. Specification II is used for $\bm \beta$. Heatmaps of $\bm{y}_i$ are shown in (a)-(c) for samples 1-3, respectively. Cells are rearranged by the posterior point estimate of the cell clustering, $\hat{\lambda}_{i,n}$. Cells and markers are in rows and columns, respectively.
High and low expression levels are in red and blue, respectively, and black is used for missing values. Yellow horizontal lines separate cells by different subpopulations. $\hat{\bm{Z}}^\prime_i$ and $\hat{\mbox{\boldmath $w$}}_i$ are shown for each of the samples in (d)-(f). We include only subpopulations with $\hat{w}_{i,k} > 1\%$.} \label{fig:Z-w-CB-missmechsen-2} \end{figure} \begin{figure}[t!] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.5\columnwidth]{img/vb-cb-paper/test/img/yz/Z1_post_VI.pdf} & \includegraphics[width=0.5\columnwidth]{img/vb-cb-paper/test/img/yz/y1_post_1.pdf}\\ (a) $\hat{\bm{Z}}^\prime_1$ and $\hat{\mbox{\boldmath $w$}}_1$ & (b) $y_{1nj}$\\ \includegraphics[width=0.5\columnwidth]{img/vb-cb-paper/test/img/yz/Z2_post_VI.pdf} & \includegraphics[width=0.5\columnwidth]{img/vb-cb-paper/test/img/yz/y2_post_1.pdf}\\ (c) $\hat{\bm{Z}}^\prime_2$ and $\hat{\mbox{\boldmath $w$}}_2$ & (d) $y_{2nj}$\\ \end{tabular} \end{center} \vspace{-0.05in} \caption{\small[CB NK cell data] Inference obtained by VI is illustrated. $\hat{\bm{Z}}^\prime_i$ and $\hat{\mbox{\boldmath $w$}}_i$ for samples 1 and 2 are illustrated in panels (a) and (c), respectively, with expressed markers denoted by black and unexpressed markers by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. Heatmaps of $\bm{y}_i$ are shown in panels (b) and (d) for samples 1 and 2, respectively. Cells and markers are in rows and columns, respectively. Each column contains the expression levels of a marker for all cells in the sample. High and low expression levels are red and blue, respectively. Missing values are black. Cells are rearranged by the posterior estimate of their subpopulation indicator, $\hat{\lambda}_{i,n}$. Yellow horizontal lines separate cells by different subpopulations.} \label{fig:cb-vb-Z} \end{figure} \begin{figure}[t!] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.5\columnwidth]{img/vb-cb-paper/test/img/yz/Z3_post_VI.pdf} & \includegraphics[width=0.5\columnwidth]{img/vb-cb-paper/test/img/yz/y3_post_1.pdf}\\ (e) $\hat{\bm{Z}}^\prime_3$ and $\hat{\mbox{\boldmath $w$}}_3$ & (f) $y_{3nj}$\\ \end{tabular} \end{center} \vspace{-0.05in} \caption*{Figure~\ref{fig:cb-vb-Z} continued: [CB NK cell data] Inference obtained by VI is illustrated. $\hat{\bm{Z}}^\prime_3$ and $\hat{\mbox{\boldmath $w$}}_3$ for sample 3 are illustrated in panel (e), with expressed markers denoted by black and unexpressed markers by white. Only subpopulations with $\hat{w}_{i,k} > 1\%$ are included. The heatmap of $\bm{y}_3$ is shown in panel (f). Cells and markers are in rows and columns, respectively. Each column contains the expression levels of a marker for all cells in the sample. High and low expression levels are red and blue, respectively. Missing values are black. Cells are rearranged by the posterior estimate of their subpopulation indicator, $\hat{\lambda}_{i,n}$. Yellow horizontal lines separate cells by different subpopulations.} \end{figure} \clearpage \bibliographystyle{natbib}
\section{Risks of Hidden Failure Correlations} \label{sec-avail} Ensuring high availability is usually a high priority for cloud infrastructure and services, and state replication and fault tolerance mechanisms are the focus of much industry and research attention. Most of this attention is focused {\em within} a particular cloud service, however. In addition to the stability risks discussed above, interactions between multiple interdependent cloud services could lead to availability risks not yet addressed in mainstream research, in which hardware infrastructure interdependencies hidden by proprietary business relationships produce unexpected failure correlations. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{stack.eps} \caption{Cloud service stack illustrating risks of correlated failures due to hidden service interdependencies.} \label{fig-stack} \end{figure} As another contrived but illustrative example, consider the ``cloud service stack'' in Figure~\ref{fig-stack}. The provider at the top offers a cloud-based application intended to provide mission-critical reliability. To ensure this reliability, the application replicates all critical application state across the storage services provided by two nominally independent cloud storage providers, A and B, each of which in turn provides storage at multiple geographic sites with separate network connectivity at each site. Unbeknownst to the application provider, however, each storage provider obtains its network connections from a common underlying network provider, C. The application's access to its critical storage proves highly reliable as long as provider C operates normally. If provider C encounters a rare disaster or administrative glitch, however---or enters a dispute with another top-tier network provider~\cite{bray05dispute}---the mission-critical application may suddenly lose connectivity to {\em both} of its critical storage repositories. This correlated failure results from the shared dependencies on C being hidden by the proprietary business relationships through which the application provider obtains services from A and B. As the cloud computing industry matures and produces ever more complex cloud-based services, it seems inevitable that the depth and complexity of inter-service relationships will continue to explode, which may create unpredictable availability risks due to ever more subtle cross-layer interdependencies, of which the above example is merely the most simplistic representative. Furthermore, one of the fundamental attractions of cloud computing is that it makes computing infrastructure, services, and applications into generic, almost arbitrarily ``fungible'' resources that can be bought, sold, and resold as demanded by business objectives~\cite{williams12xen}. It does not seem far-fetched to predict that cloud services will arise that represent a thin veneer over, or ``repackaging'' of, other services or combinations of services: e.g., businesses that resell, trade, or speculate on complex cocktails or ``derivatives'' of more basic cloud resources and services, much like the modern financial and energy trading industries operate.
If this prediction bears out, the cloud services industry could similarly start yielding speculative bubbles and occasional large-scale failures, due to ``overly leveraged'' composite cloud services whose complex interdependencies hide correlated failure modes that do not become apparent until the bubble bursts catastrophically---perhaps not wholly unlike the causes of the recent financial meltdown or the earlier Enron energy bubble~\cite{healy03fall}. Once again, while this risk is pure speculation at this point, it seems worth taking seriously and exploring in advance.
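Returning to the example of Figure~\ref{fig-stack}, a crude back-of-the-envelope calculation (with purely illustrative numbers) shows how sharply a hidden shared dependency can change the availability arithmetic: if storage providers A and B each become unreachable independently with probability $p = 10^{-2}$ over some interval, replicating across both should drive the probability of losing access to both repositories down to $p^2 = 10^{-4}$; but if both depend on network provider C, which fails with probability $q = 10^{-2}$, then the probability of simultaneous loss is at least $q = 10^{-2}$---a hundredfold underestimate by any analysis that treats A and B as independent.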
\section{Conclusion} \label{sec-concl} While the cloud computing model is promising and attractive in many ways, the author hopes that this paper has made the case that the model may bring risks beyond obvious information security concerns. At the very least, it would be prudent for us to study some of these risks {\em before} our socioeconomic system becomes completely and irreversibly dependent on a computing model whose foundations may still be incompletely understood. \subsection*{Acknowledgments} Jeff Mogul, Michael Schapira, John Edwards, and the anonymous HotCloud reviewers offered valuable feedback on early drafts of this paper. This research was sponsored by the NSF under grant CNS-1149936. \section{Introduction} Attractive features and industry momentum make cloud computing appear destined to be the next dominant computing paradigm: it offers the convenience of central management and the elasticity of resource provisioning. Moving critical information infrastructure to the cloud also presents risks, however, some of which are well-known and already hot research topics.
The much-discussed challenge of ensuring the privacy of information hosted in the cloud, for example~\cite{ gellman09privacy}, has resulted in an emerging breed of ``cloud-hardened'' virtualization hardware~\cite{ keller10nohype} and security kernels~\cite{zhang11cloudvisor}. Similarly, the challenge of ensuring high availability in the cloud has in part fueled recent research on robust data center networking~\cite{ wang10r3,raiciu11improving}. This paper assumes that a large fraction of the computing industry is, for better or worse, ``moving to the cloud,'' and that current research addressing the immediate information security risks is well underway and will (eventually) succeed. Setting aside these known challenges, therefore, this paper attempts to identify and focus on several {\em less} well-understood---and perhaps less ``imminent''---risks that {\em may} emerge from the shift to cloud computing. In particular, this paper addresses: (1) {\bf stability} risks due to unpredictable interactions between independently developed but interacting cloud computations; (2) {\bf availability} risks due to non-transparent layering resulting in hidden failure correlations; and (3) {\bf preservation} risks due to the unavailability of a cloud service's essential code and data outside of the provider. This paper is speculative and forward-looking; the author cannot yet offer definitive evidence that any of these risks {\em will} fully materialize or become vitally important, but rather can offer only informal arguments and anecdotal evidence that these risks {\em might} become important issues. The above list is also probably incomplete: it is likely that other important risks will emerge only as the industry continues its shift to the cloud. Nevertheless, I argue that it is worth proactively investigating longer-term risks such as these before they are certain or imminent, as the stakes may be high. Further, once any of these risks {\em do} become important, it may be too late to reconsider or slow the movement of critical infrastructure to the cloud, or to rethink the architecture of important cloud infrastructure or services once they are already perceived as ``mature'' in the industry. Section~\ref{sec-stab} addresses stability risks, Section~\ref{sec-avail} explores availability risks, and Section~\ref{sec-pres} explores preservation risks. Section~\ref{sec-sol} briefly points out a few possible research directions in which solutions might be found---% though this paper cannot and does not pretend to offer ``answers.'' Finally, Section~\ref{sec-concl} concludes. \section{Digital Preservation Risks} \label{sec-pres} The final risk considered here is more long-term. With the tremendous economic momentum toward cloud-based and cloud-dependent applications and services, it appears inevitable that these cloud-based ``digital artifacts'' will soon represent a considerable and ever-increasing component of our social and cultural heritage. In 100 years, however, will today's culturally important cloud-based digital artifacts still be available in a historically accurate form---% or in any form? A physical book has an inherent {\em decentralized archivability} property. In order to make money on a book, its author or publisher must make complete copies available to customers. 
Customers in turn are free to---and cannot effectively be prevented from---% independently storing books for any amount of time, relocating copies to a safe long-term repository (e.g., a library), copying them to other media as the original media deteriorates, etc. Preservation of digital works presents many known challenges---% principally the faster deterioration or obsolescence of electronic media, and the obsolescence of computing environments needed to interpret old data formats~\cite{ rothenberg99avoiding,bearman99reality,maniatis05lockss}. Yet despite these known challenges, traditional software and associated documents stored on a floppy or hard disk, USB stick, or even a ``cloud drive'' holding raw files, still have the same {\em decentralized archivability} property as a book. The vendor of a traditional software application or digital document must, in order to make money, make essentially {\em complete} copies available to customers, and these customers can work in an arbitrarily decentralized fashion using their own resources to preserve digital works deemed worth saving. Cloud-based applications and services, however, completely eliminate this property of decentralized archivability. Unlike users of Microsoft Office, users of Google Search or Maps never gain access to anything remotely resembling a ``complete copy'' of the entire digital artifact represented by the Google Search or Maps service. At most, users might save the results of particular queries or interactions. Unlike players of Doom, players of World of Warcraft (WoW) cannot independently archive and preserve a copy of the WoW universe---% or even a small portion of interest---% because the provider of the cloud-based application need not, and typically does not, make publicly available the server-side software and data comprising the service. Given the number of scholarly papers written on the technological and social implications of each, it would be hard to argue that Google Search and WoW do not represent historically significant digital artifacts. Yet given the rate at which Google and Blizzard evolve their services to compete more effectively in the search and gaming markets, respectively, it is almost certain that ten years from now, no one outside these companies---% perhaps not even anyone {\em inside} them---% will be able to reproduce a faithful, functioning copy of the Google Search or WoW service {\em as it exists today}. In 100 years, these services will probably have evolved beyond recognition, assuming they survive at all. If today's digital archivists do their jobs well, in 100 years we will be able to run today's Microsoft Word or play Doom (in an emulator if necessary)---% but nothing today's digital archivists can do will preserve historically relevant snapshots of today's cloud-based services, because the archivists never even get access to a ``complete'' snapshot for preservation. The historical record of today's Google Search or WoW will consist merely of second-hand accounts: articles written about them, saved search queries or screen shots, captured videos of particular WoW games, etc.
While better than nothing, such second-hand accounts would not suffice for future historians to answer questions such as: ``How did the focus or breadth of search results for interesting queries evolve over the last 10 or 100 years?'' Or, ``How did social-interaction and player-reward mechanisms change in MMOGs historically?'' These particular examples may or may not seem interesting or important, but the point is that {\em we don't know} what future historians or social scientists will deem important about today's world. As more of today's culture shifts to the cloud, our failure to preserve our cloud-based digital artifacts could produce a ``digital dark age'' far more opaque and impenetrable to future generations than what media or OS obsolescence alone will produce. \com{ While this problem is clearly not likely to present a simple or easy solution, Section~\ref{sec-preserv} proposes one way in which cloud infrastructure could be evolved so as to give all stakeholders in a cloud service---% not just the provider but also customers and archivists---% some ability to preserve functioning snapshots of cloud services. } \com{ - incentives don't encourage developer to make archival copies available or preserve them; economic incentives normally only make the ``latest version'' interesting, and encourage constant upgrades to keep up with the competition. - even if app developer is willing to make the code and data available for archival storage and historical use, the relevant datasets may be huge, widely distributed across thousands of machines in multiple data centers, and constantly-changing: who will pay for the cost of storing snapshots of these huge artifacts? Customers and librarians probably can't afford to, at least individually. - app developer may be willing to make only subsets of their dataset available to a particular stakeholder: e.g., only the portion of map data requested by a customer at a particular time. \section{Digital Preservation of Cloud-Based Artifacts} \label{sec-preserv} Again for space reasons, this proposal merely sketches one preliminary approach to addressing the problem of preserving cloud-based digital artifacts, discussed in Section~\ref{sec-motiv-preserv}. Two major factors make preservation difficult for cloud artifacts: an incentive factor and a size factor. We discuss and address each in turn. The incentive problem is that in a cloud-based computing model, the application or service provider need not, and has little incentive to, make publicly available all the software and data underlying the service that would be necessary for accurate historical preservation. Competitiveness considerations, in fact, offer the provider considerable incentive {\em not} to make available the ``secret sauce'' underlying their products, in any form. This incentive has long led traditional software vendors to release their software only in binary form---% often with deliberate obfuscation to thwart analysis---% but only the cloud model frees the vendor entirely from the need to release their code in {\em any} form directly executable by the customer. This incentive problem is probably unsolvable by technical means alone: we may have to institute social, commercial, and/or governmental incentives for providers to make their cloud-based artifacts preservable. 
One first step might be for a major potential purchaser of cloud-based services---% the US government, for example---% to demand that purchased services meet some archivability criterion, much as the government already drives the development and deployment of many security standards such as SHA and AES. Assuming the proper incentives can be established, the more technical part of the problem is that cloud-based services often rely on enormous, frequently-changing datasets, such as the massive distributed databases underlying Google Search or Maps or an MMOG's virtual world. It would probably be impractically costly for providers to ship regular snapshots of their entire datasets to digital archivists---% even well-provisioned ones such as the Library of Congress---% not to mention impractically costly for the receiving archivists to do anything with such enormous snapshots beyond saving the raw bits in the hope they might become useful (and manageable) sometime in the future. An approach more likely to be practical takes advantage of two observations. First, in the short term, it is much more efficient---% hence much more likely to be technically and economically feasible---% for archival snapshots of the provider's datasets to remain largely resident on the same storage infrastructure holding the ``live,'' master datasets from which these snapshots are taken. Standard copy-on-write cloning and deduplication technologies~\cite{ santry99elephant,quinlan02venti} offer building blocks for maintaining such snapshots efficiently (see the sketch below). The more significant challenge may be {\em protecting} and {\em administering} these archival snapshots appropriately. Archivists will desire strong guarantees that historical snapshots will not be accidentally lost or deleted, or maliciously tampered with by rogue provider employees, while held on the provider's infrastructure. Techniques currently being explored to protect a user's information privacy from a host cloud provider~\cite{gu11certikos,zhang11cloudvisor} may be adaptable to this purpose. Further, since it is primarily the service's customers, and not the provider, who wish for these snapshots to be stored, the storage infrastructure should be able to account for and ``charge'' appropriately the relevant storage costs, sharing them proportionately among the (potentially many) stakeholders wishing for the provider's service to be historically preserved. Ideally, using virtualization and shadowing technologies~\cite{ dunlap02revirt,alimi08shadow}, historical snapshots of the cloud service should remain ``runnable'' and dynamically usable by all stakeholders in the snapshots: e.g., a customer or historian could use the provider's infrastructure to run a new Web search query on a working copy of the search engine and its database as they existed at a particular historical moment. The second observation is that in the longer term, the raw cost of safely {\em storing} historical snapshots is likely to decrease rapidly with the age of the stored data, as the density and cost-efficiency of mass storage continues to explode---% while the difficulty and potential costs of making snapshots {\em usable} are likely to increase with age, as the original hardware and OS infrastructure that hosted the cloud server becomes obsolete and is replaced with newer, not-quite-compatible hardware and operating systems.
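To make the first observation concrete, the following is a minimal sketch of copy-on-write, deduplicated snapshotting over a content-addressed block store. The classes and block layout are hypothetical, invented purely for illustration; real systems in the spirit of Elephant or Venti~\cite{ santry99elephant,quinlan02venti} are far more sophisticated.
\begin{verbatim}
import hashlib

class DedupStore:
    """Content-addressed block store: identical blocks are stored once."""
    def __init__(self):
        self.blocks = {}                      # digest -> block bytes

    def put(self, block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(digest, block) # duplicate put is free
        return digest

class Dataset:
    """A 'live' dataset as a list of block digests; snapshots share blocks."""
    def __init__(self, store):
        self.store = store
        self.digests = []                     # current version of the data
        self.snapshots = {}                   # snapshot name -> frozen digests

    def write(self, index, block):
        digest = self.store.put(block)
        if index == len(self.digests):
            self.digests.append(digest)
        else:
            self.digests[index] = digest      # old block survives in snapshots

    def snapshot(self, name):
        self.snapshots[name] = tuple(self.digests)   # no blocks are copied

store = DedupStore()
ds = Dataset(store)
ds.write(0, b"index shard 0")
ds.write(1, b"index shard 1")
ds.snapshot("2012-01-01")
ds.write(1, b"index shard 1, updated")
print(len(store.blocks))                      # 3 blocks stored, not 4
\end{verbatim}
The point of the sketch is only that a snapshot is a frozen list of block digests: taking one is cheap, and its marginal storage cost is limited to the blocks that subsequently change.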
While it may be too costly to transfer and store complete snapshots of a cloud service's state off the provider's own physical infrastructure immediately after the snapshot is taken, after 10 years for example, the intervening increases in readily-available network and storage capacity may make it practical to transfer and store complete 10-year-old snapshots off the provider's infrastructure (e.g., at long-term digital libraries). The PI has already done work towards addressing the latter issue of maintaining the usability of archived software. VXA~\cite{ford05vxa,ford08vx32} is an archival storage system that uses lightweight virtualization and sandboxing techniques to create OS-independent ``executable archives'' that may be more amenable to long-term preservation than conventional software. This and other emulation-based preservation techniques~\cite{ rothenberg99avoiding,lorie00archiving,lorie02uvc} are merely a start, however, and will require substantial adaptation to the large-scale cloud context. \section{In Search of Possible Solutions} \label{sec-sol} This paper cannot hope to---and makes no attempt to---% offer solutions or answers to the problems outlined above. Instead, we merely conjecture at a few potential directions in which solutions {\em might} be found. \paragraph{Stabilizing Cloud Services:} One place we might begin to study stability issues between interacting cloud services, and potential solutions, is the extensive body of work on the unexpected inter-AS (Autonomous System) interactions frequently observed in BGP routing~\cite{ varadhan00persistent,griffin02stable}. In particular, the ``dependency wheel'' model, useful for reasoning about BGP policy loops, seems likely to generalize to higher-level control loops in the cloud, such as load balancing policies. Most of the potential {\em solutions} explored so far in the BGP space, however, appear largely specific to BGP---or at least to routing---% and may have to be rethought ``from scratch'' in the context of more general, higher-level cloud services. Beyond BGP, classic control theory may offer a broader source of inspiration for methods of understanding and ensuring cloud stability. Most conventional control-theoretic techniques, however, are unfortunately built on the assumption that some ``master system architect'' can control or at least describe all the potentially-interacting control loops in a system to be engineered. The cloud computing model violates this assumption at the outset by juxtaposing many interdependent, reactive control mechanisms that are by nature {\em independently} developed, and are often the proprietary and closely-guarded business secrets of each provider. \paragraph{Deep Resource (In)Dependence Analysis:} \begin{figure*}[t] \centering \includegraphics[width=0.74\textwidth]{andor.eps} \caption{AND/OR Graph Representing Service Composition and Infrastructure Dependencies} \label{fig-andor} \end{figure*} The availability risks discussed in Section~\ref{sec-avail} result from the fact that cloud service and infrastructure providers usually do not reveal the deep dependency structure underlying their services. The key to this risk is the non-transparency of the dependency graph: the application provider in Figure~\ref{fig-stack} {\em does not know} that both A and B depend on the same network provider C, resulting in hidden failure correlations.
Supposing the providers were to make these dependencies visible in an explicit dependency graph, however, we might be able to estimate {\em actual} dependence or independence between different services or resources for reliability analysis. Hardware design techniques such as fault tree analysis~\cite{vesely81fault,bedford01probabilistic} may offer some tools that could be adapted to the purpose of reasoning about cloud service and infrastructure dependencies. Consider for example a simplistic AND/OR resource dependency graph, shown in Figure~\ref{fig-andor}. AND nodes reflect design composition and hence conjunctive dependency: {\em all} components underneath an AND node must function correctly in order for the component above to operate. OR nodes reflect design redundancy and hence disjunctive dependency: if {\em any} component underneath the OR node operates, the dependent component above the OR will operate. Given such a graph, annotated with expected failure rates, one might compute or estimate a system's {\em effective} reliability after accounting for unanticipated common dependencies, such as Network Provider C in the example (a toy sketch of such a computation appears below). Cloud providers may be reluctant to release detailed dependency information publicly for business reasons, but might be willing to release it to a trusted third party, such as an organization analogous to Underwriters Laboratories (UL) offering cloud reliability analysis services. More ambitiously, cloud providers might leverage TPM-attested, IFC-enforcing kernels~\cite{ zeldovich06making} to exchange and analyze dependency graph information, without allowing proprietary information to ``leak'' beyond this analysis. \paragraph{Preserving Cloud Artifacts:} Enabling the long-term preservation of cloud artifacts will require solving both incentive problems and technical challenges. In a cloud-based computing model, application and service providers currently need not, and have little incentive to, make publicly available all the software and data underlying the service that would be necessary for accurate historical preservation. Competition encourages providers to closely guard the ``secret sauce'' underlying their products. This incentive has long led traditional software vendors to release their software only in binary form---% often with deliberate obfuscation to thwart analysis---% but only the cloud model frees the vendor entirely from the need to release their code in {\em any} form directly executable by the customer. Solving this incentive problem will likely require social, commercial, and/or governmental incentives for providers to make their cloud-based artifacts preservable in some way. On the technical side, cloud-based services often rely on enormous, frequently-changing datasets, such as the massive distributed databases underlying Google Search or Maps or an MMOG's virtual world. Even if willing, it might be impractically costly for providers to ship regular snapshots of their entire datasets to digital archivists---% even well-provisioned ones such as the Library of Congress---% not to mention costly for receiving archivists to do anything with such enormous snapshots beyond saving the raw bits. A more practical approach may be for providers themselves to be responsible for saving historical snapshots in the short term, using standard copy-on-write cloning and deduplicated storage technologies for efficiency~\cite{ santry99elephant,quinlan02venti}.
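The following is a minimal sketch of the AND/OR reliability computation referenced above. The graph shape, node names, and failure probabilities are all invented for illustration, and leaf failures are assumed statistically independent; the point is only that a shared leaf such as the network provider dominates the failure probability that a naive independence analysis would predict.
\begin{verbatim}
from itertools import product

# Leaves have an (assumed independent) failure probability; internal nodes
# are ("AND", [children]) or ("OR", [children]).  All numbers are invented.
leaves = {"A_core": 0.01, "B_core": 0.01, "C_net": 0.02}
graph = {
    "ProviderA": ("AND", ["A_core", "C_net"]),  # A secretly relies on C
    "ProviderB": ("AND", ["B_core", "C_net"]),  # ...and so does B
    "App":       ("OR",  ["ProviderA", "ProviderB"]),  # A/B look redundant
}

def works(node, state):
    if node in state:                            # leaf: up or down
        return state[node]
    op, children = graph[node]
    results = [works(c, state) for c in children]
    return all(results) if op == "AND" else any(results)

def availability(root):
    # Exact: enumerate leaf states (fine for small graphs), weight each
    # state by its probability, and sum over states where the root works.
    total = 0.0
    names = list(leaves)
    for ups in product([True, False], repeat=len(names)):
        state = dict(zip(names, ups))
        p = 1.0
        for name, up in state.items():
            p *= (1 - leaves[name]) if up else leaves[name]
        if works(root, state):
            total += p
    return total

print(f"actual availability: {availability('App'):.6f}")
# Treating ProviderA and ProviderB as independent would predict
# unavailability of about 0.03**2 ~ 1e-3; the shared Network Provider C
# alone contributes 0.02, over twenty times worse.
\end{verbatim}
Exact enumeration is exponential in the number of leaves, but for a sketch it sidesteps the correlations that make closed-form analysis of shared dependencies tricky.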
After some time period, say 5--10 years, a select subset of these historical snapshots might then be transferred to external archives for long-term preservation, at considerably reduced cost-per-bit in terms of both network bandwidth and storage due to intervening technological evolution. Any solution would need to address many other challenges, such as ensuring the durability and integrity of online digital archives~\cite{ maniatis05lockss} and the honesty of their providers~\cite{shah11auditing}, maintaining information security of sensitive data in snapshots of cloud-based artifacts, and preserving artifacts' practical usability in addition to their raw bits, but we leave these issues to future work. \section{Stability Risks from Interacting Services} \label{sec-stab} Cloud services and applications increasingly build atop one another in ever more complex ways, such as cloud-based advertising or mapping services used as components in other, higher-level cloud-based applications, all of these building on computation and storage infrastructure offered by still other providers. Each of these interacting, codependent services and infrastructure components is often implemented, deployed, and maintained independently by a single company that, for reasons of competition, shares as few details as possible about the internal operation of its services. The resource provisioning and moment-by-moment operation of each service is often managed by dynamic, reactive control processes that constantly monitor the behavior of customer load, internal infrastructure, and other component services, and implement complex proprietary policies to optimize the provider's cost-benefit ratio. Each cloud service's control loop may change the service's externally visible behavior, in policy-specific ways, based on its neighboring services' behavior, creating cyclic control dependencies between interacting cloud services. These dependency cycles may lead to unexpected feedback and instability, in much the way that policy-based routing in BGP is already known to lead to instability or ``route flapping'' in the much more restricted ``control domain'' of Internet routing~\cite{varadhan00persistent,griffin02stable}. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig-stab.eps} \caption{Example instability risk from unintended coupling of independently developed reactive controllers} \label{fig-stab} \end{figure} To illustrate this risk, we consider a minimalistic, perhaps contrived, but hopefully suggestive example in Figure~\ref{fig-stab}. Application provider $A$ develops and deploys a cloud-based application, which runs on virtual compute and storage nodes from infrastructure provider $B$. For simplicity, assume $A$ leases two virtual nodes from $B$, and dynamically load-balances incoming requests across the web/application servers running on these nodes. Assume $A$'s load balancer operates in a control loop with a 1-minute period: after each minute it evaluates each server's current load based on that server's response time statistics during the past minute, and shifts more traffic during the next minute to the less-loaded server. Assume that $A$'s load shifting algorithm is well-designed and stable assuming the servers in the pool behave consistently over time, like dedicated physical servers would. 
Unbeknownst to $A$, however, suppose $B$ also runs a control loop, which attempts to optimize the power consumption of its physical servers by dynamically adjusting the servers' clock rates based on load. This control loop also happens to have a 1-minute period: after each minute, $B$'s controller measures each CPU core's utilization during the past minute, then reduces the core's voltage and speed if the core was underutilized or increases voltage and speed if the core was overutilized. Again, assume that $B$'s controller is well-designed and stable assuming that the servers' load stays relatively constant or varies independently of $B$'s control actions. Although both $A$'s and $B$'s control loops would be stable if operating alone, by the misfortune of their engineers (independently) picking similar control loop periods, the combination of the two control loops may risk a positive feedback loop. Suppose during one minute the load is slightly imbalanced toward virtual server 1, and the two control loops' periods happen to be closely aligned; this will happen sooner or later in the likely event their clocks run at slightly different rates. $A$'s load balancer notices this and shifts some load away from the node in the next minute, while $B$'s power optimizer notices the same thing and increases the node's voltage and clock speed. While either of these actions alone would lead toward convergence, the two in combination cause overcompensation: during the next minute, server 1 becomes {\em more} underutilized than it was overutilized in the previous minute. The two controllers each compensate with a stronger action---% a larger shift of traffic back to server 1 by $A$ and a larger decrease in voltage and clock speed by $B$---% causing a larger swing the next minute. Soon all incoming load is oscillating between the two servers, cutting the system's overall capacity in half---% or worse, if more than two servers are involved. This simplistic example might be unlikely to occur in exactly this form on real systems---% or might be quickly detected and ``fixed'' during development and testing---% but it suggests a general risk. When multiple cloud services independently attempt to optimize their own operation using control loops that both monitor, and affect, the behavior of upstream, downstream, or neighboring cloud services, it is hard to predict the outcome: we might well risk deploying a combination of control loops that behaves well ``almost all of the time,'' until the emergence of the rare, but fatal, cloud computing equivalent of the Tacoma Narrows Bridge~\cite{billah91resonance,mckenna99large}. Comparable forms of ``emergent misbehavior'' have been observed in real computing systems outside of the cloud context~\cite{mogul06emergent}, and some work has studied the challenge of coordinating and stabilizing multiple interacting control loops, such as in power management~\cite{raghavendra08power}. Current approaches to solving or heading off such instability risks, however, generally assume that {\em some} single engineer or company has complete information about, and control over, all the interacting layers and their control loops. The cloud business model undermines this design assumption, by incentivizing providers {\em not} to share with each other the details of their resource allocation and optimization algorithms---% crucial parts of their ``secret sauce''---% that would be necessary to analyze or ensure the stability of the larger, composite system. 
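The scenario above is easy to reproduce in a toy simulation. The following sketch is purely illustrative: the utilization model, gains, and clamping bounds are invented to show how two individually stable controllers with similar periods can destabilize each other, and are not calibrated to any real system.
\begin{verbatim}
# Toy model: utilization = (traffic share) / (clock speed); total load 1.0.
ka = 0.9   # load balancer gain (stable alone: |1 - 2*ka| < 1)
kb = 3.0   # power manager gain (stable alone: |1 - kb/2| < 1)

w, s1, s2 = 0.55, 1.0, 1.0     # slight initial imbalance toward server 1
for minute in range(12):
    u1, u2 = w / s1, (1 - w) / s2
    # A's balancer shifts load toward the less-utilized server...
    w = min(1.0, max(0.0, w - ka * (u1 - u2)))
    # ...while B's power manager simultaneously adjusts clock speeds.
    s1 = min(4.0, max(0.25, s1 + kb * (u1 - 0.5)))
    s2 = min(4.0, max(0.25, s2 + kb * (u2 - 0.5)))
    print(f"minute {minute:2d}: share to server 1 = {w:.2f}, "
          f"speeds = ({s1:.2f}, {s2:.2f})")
\end{verbatim}
Run with either controller disabled, the system converges; run together, the traffic share swings between the clamps within a few simulated minutes, a software analogue of the overcompensation narrated above.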
\com{ While it is unclear to what extent this is or will be a problem or whether a general and realistic solution exists, Section~\ref{sec-stab} explores one potential approach, based on a variation of the same labeling technique with which we hope to address the timing channel problem above. \section{Timing Information Flow Control} \label{sec-tifc} This section introduces a model for controlling an application's perception of time, which we call {\em Timing Information Flow Control} or TIFC. TIFC is inspired by recent work in Decentralized Information Flow Control (DIFC)~\cite{ efstathopoulos05labels,zeldovich06making,krohn07information}, but serves the purpose of controlling the propagation of {\em information about time} into, out of, or within a software system. With TIFC, an operating system can attach explicit labels or {\em taints} to processes and other objects, describing what sources, types, and granularities of timing information may have affected the state of the labeled object. Using these labels, the OS can enforce policies constraining how timing-derived information may flow among processes and affect their results. \com{ (from NSF Det proposal...) While many organizations see strong economic or practical motivations to move compute-intensive applications onto shared clusters or cloud services such as EC2~\cite{amazon-ec2}, sharing can create considerable information security and privacy concerns. Even if the cloud provider itself is trustworthy and its virtualization software correctly prevents clients from directly interfering with each other, any client's software can mount a variety of timing attacks on other client processes running on the same or nearby hosts to steal valuable secrets~\cite{brumley03remote,percival05cache}, provided the attacking process has fine-grained timing capability. But many compute-bound applications suited to cloud computing have no inherent need for fine-grained timing: most data analysis applications merely compute a result from a well-defined input dataset. If a provider offered a compute cloud in which no client has fine-grained timing capability, the only timing information with which one client could attack another would be the much coarser and noisier total job completion times observed from the clients' own hosts. A provider cannot create such a ``timing channel-free'' compute cloud merely by disabling client access to hardware clocks and timers, however: an attacker's hosted code could use many other sources of nondeterminism to regain fine-grained timing capability, such as by spawning a thread that counts iterations of a spin-loop. Only fully timing-insensitive execution can guarantee a shared environment free of timing attack channels. (from CRASH proposal...) The second unique aspect of our kernel design is that it will address not just conventional, explicit interactions between processes, but also covert timing channels~\cite{kemmerer83shared,wray91analysis}, which have been largely avoided in previous IFC work but are becoming increasingly critical to real-world security~\cite{ brumley03remote,percival05cache,wang06covert, aciicmez07predicting,aciicmez07yet,ristenpart09cloud}. Further leveraging our work in deterministic execution and combining it with classic IFC techniques, we will design the kernel to provide pervasive controls over how and when potentially sensitive timing information can enter or affect the results of any untrusted application computation.
We describe these ideas in more detail elsewhere~\cite{ford10determinating}. If we wish to timeshare a CPU core between two untrusted processes and prevent timing channels between them, for example, a classic approach would be to arrange a fixed timeslice for each process, not varying depending on either process's actual usage, and clear all caches and other state affecting timings on context switches. While this approach may be useful in some situations, it is undesirable due to the resources it wastes: flushing potentially useful state, and giving up the ability of one process to fully utilize any resources left underutilized by others. An alternative solution we will explore is to timeshare the processes without restriction, but run them deterministically and thus prevent them from being able to ``tell the time'' locally while running on the timeshared CPU core. If one process has a semantic need to tell the time, its ``read time'' request leads to an IFC ``taint'' fault, e.g., causing the process to be migrated to some other CPU core that is not timeshared at fine granularity between untrusted processes, and on which the system time is thus ``untainted'' by information from other processes. Taking this approach further, suppose a process wishes to run on timeshared cores for performance, but also use fine-grained internal timers to make decisions for load-balancing parallel computations across cores or similar internal optimization purposes. In this case, instead of reading ``tainted'' high-resolution timers directly, the process can fork off a parallel process to make dynamic load-balancing decisions on behalf of the original process. This new load-balancing process will become tainted by timing information from other processes sharing the same core. The kernel's determinism and IFC enforcement mechanisms, however, will allow the tainted process to affect only the {\em scheduling} (and hence execution performance) of the original process it was forked from, and not the actual {\em results} computed by that process; the original process will thus run (deterministically) without itself becoming tainted with potentially leaked timing information. To create a preliminary TIFC model, we choose to build on the DIFC model used in Flume~\cite{ krohn07information} due to its simplicity and elegance. Comparable TIFC models could probably be built ``stand-alone'' or as extensions to other DIFC systems, however. Like Flume, our model uses {\em tags} and {\em labels} to track information as it flows through a system---% potentially any type of information, but here we focus exclusively on timing information. \subsection{TIFC Model Overview} As in Flume, our TIFC model assigns {\em labels} to information-bearing system objects such as processes, messages, and files. A label can contain any number of {\em tags}, each of which indicates that the labelled object has a particular ``taint,'' or may be derived from information owned by a particular user. Unlike conventional DIFC, however, TIFC labels reflect not only the {\em content} contained in such an object---% i.e., the information contained in the bits comprising a message or a process's state---% but also information that may have affected the timing of observable {\em events} associated with that object---% a process starting or stopping, a message being sent or received, etc.
Consistent with conventional, informal practices for reasoning about timing channels~\cite{kemmerer83shared,wray91analysis}, our TIFC model does not attempt the likely-infeasible task of eliminating timing channels entirely, but rather seeks to impose strong limits on the {\em rate} at which information might leak via timing channels. To distinguish content and timing taint explicitly, we give TIFC labels the form $\{L_C/L_T\}$, where $L_C$ is a set of tags representing content taint, and $L_T$ is a set of tags representing timing taint. As in Flume, content tags in the set $L_C$ simply identify a user, such as Alice or Bob. Timing tags, however, we give the form $U_f$, where $U$ is a user such as Alice or Bob, and $f$ is a frequency representing the maximum rate at which user $U$'s information might leak via this timing event, in bits per second. The frequency part of a timing tag may be $\infty$, indicating that information leakage may occur at an unbounded rate. Thus, the label $\{A/A_\infty,B_f\}$ attached to a message might indicate that the content (bits) comprising the message contains Alice's (and only Alice's) information, but that the {\em timing} with which the message was sent might contain (and hence leak) both Alice's and Bob's information---% at an arbitrarily high rate in Alice's case, but up to at most $f$ bits per second in Bob's case. \subsubsection{Declassification Capabilities} To enforce information security policies, we similarly build on Flume's DIFC model. In particular, the system allows a given process $P$ to transmit information to another process or target object $O$ only if $P$'s label is a subset of $O$'s, or if $P$ holds {\em declassification capabilities} for any tags in $P$'s label that are not in $O$'s. A {\em content declassification capability} has the form $U^-$, and represents the ability to remove content tag $U$, as in Flume. TIFC also adds {\em timing declassification capabilities} of the form $U^-_f$, representing the ability to declassify information carried by timing channels at a rate up to frequency $f$. We consider the ``maximum-strength'' timing declassifier $U^-_\infty$ to be equivalent to the content declassifier $U^-$; other timing capabilities with finite frequencies represent weakened versions of these infinite-rate capabilities. Suppose process $P_1$ has label $\{A/A_\infty,B_f\}$, and process $P_2$ has the empty label $\{-/-\}$. If process $P_1$ were allowed to send a message to $P_2$, this action would leak $A$'s information via both message content and the timing of the message's transmission, and would leak $B$'s information (at a rate up to $f$) via timing alone. The system will disallow this communication, therefore, unless the processes hold and use the relevant capabilities to adjust their labels before interacting. In particular: (a) $P_1$ must hold the capability $A^-$ and use it to remove its content tag $A$ before sending the message; and (b) $P_1$ must hold and use a timing capability $B^-_f$ (or stronger) to declassify timing tag $B_f$ before sending the message. \subsection{Controlling Timing Channels with Determinism and Pacing} Timing labels and capabilities alone would not be very useful without practical mechanisms to control the timing information flows they represent. We briefly introduce two specific tools useful for this purpose: {\em deterministic execution} and {\em pacing}.
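Before turning to those tools, the flow rule just described can be made concrete in a short sketch. The encoding below is ours, not Flume's actual API: labels are (content tags, timing tags) pairs, timing tags map a user to a leak-rate bound, and a single capability map stands in for the held declassifiers, collapsing label adjustment and the send check into one predicate.
\begin{verbatim}
INF = float("inf")

def can_send(src, dst, capabilities):
    """src/dst are (content_tags, timing_tags) pairs.  capabilities maps a
    user to the maximum timing rate the sender may declassify; INF stands
    for the content declassifier U^- (equivalent to U^-_infinity)."""
    src_content, src_timing = src
    dst_content, dst_timing = dst
    for user in src_content - dst_content:
        if capabilities.get(user, 0) != INF:   # needs U^- to drop content tag
            return False
    for user, rate in src_timing.items():
        allowed = dst_timing.get(user, 0)
        if rate > allowed and capabilities.get(user, 0) < rate:
            return False                       # needs U^-_f with f >= rate
    return True

# P1 has label {A / A_inf, B_f}; P2 has the empty label {-/-}.
P1 = ({"A"}, {"A": INF, "B": 5.0})
P2 = (set(), {})
print(can_send(P1, P2, {}))                    # False: would leak A and B
print(can_send(P1, P2, {"A": INF, "B": 5.0}))  # True: A^- and B^-_5 held
\end{verbatim}
In the example from the text, $P_1$ with label $\{A/A_\infty,B_f\}$ may send to the unlabeled $P_2$ only after applying $A^-$ and $B^-_f$, which is exactly what the second call encodes.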
This section briefly describes these tools, and the next section illustrates how we may be able to use them to control timing channels in practical systems. \paragraph{Deterministic Execution:} In general, a process whose label contains content tag $U$ must also have timing tag $U_\infty$, because the process's flow of control---and hence execution time---% can vary depending on the $U$-owned bits contained in its process state. (We could envision special-purpose ``data-invariant'' execution models or processors that avoid flow control and guarantee the same execution time on any input, but such processors would be of limited use and are not our focus.) It might also seem that the converse should be true: if a process has timing tag $U_f$ for any frequency $f$, and the process reads the current time via \verb|gettimeofday()|, for example, then the process's content subsequently depends on its prior execution timing, hence the process would have to be given content tag $U$. Even if we disable system calls like \verb|gettimeofday()|, conventional programming models---especially parallel, multithreaded models---% offer innumerable ways in which processes and threads can observe and depend on their execution timing in implicit ways, such as by measuring the relative execution speed of different threads. For example, one thread might simply remain in a tight loop forever incrementing a counter in shared memory, which other threads read and use as a ``timer.'' The PI's recent work in system-enforced deterministic parallel execution~\cite{ ford10efficient}, however, offers a new tool to decouple a process's timing and content labels. With system-enforced determinism, as implemented in the PI's Determinator OS, the kernel can prevent unprivileged processes from exhibiting {\em any} timing dependencies---% even if the process maliciously attempts to introduce such dependencies---% except via explicit inputs obtained through controlled channels. In effect, deterministic processes cannot ``tell time'' except via timing information contained in their explicit inputs controlled by their content labels. With system-enforced deterministic execution, therefore, it becomes ``safe'' for a process's content and timing labels to differ. If a process's explicit inputs were derived from user $A$'s information, but the timing of its execution was also affected by $B$'s information at rate $f$, we can give the process the label $\{A/A_\infty,B_f\}$ rather than $\{A,B/A_\infty,B_f\}$, safe in the knowledge that system-enforced determinism prevents $B$'s ``timing domain'' information from leaking into the process's ``content domain'' (i.e., its register/memory state). \paragraph{Pacing:} Processes often interact with each other and with the external world via queued messages or I/O requests, and we can leverage ``traffic shaping'' techniques common in the networking world to limit the rate at which information might leak across these queues via timing channels. In particular, we assume that we can {\em pace} the output of a message queue, such that regardless of how messages build up in the queue, the queue's output ``releases'' at most one message per tick of some recurring timer firing at a given frequency $f$. That is, after each $1/f$-time period, the queue's output releases exactly one message if the queue is non-empty, and no message if the queue is empty at that point. Between clock ticks, the queue releases no information at all. 
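A paced queue of this kind is simple enough to sketch directly; the code below is an illustrative toy, not a real scheduler, and the tick-based discrete-event loop is our own simplification.
\begin{verbatim}
import heapq

class Pacer:
    """Releases at most one queued message per 1/f seconds, on tick
    boundaries, regardless of when messages arrive."""
    def __init__(self, f_hz):
        self.period = 1.0 / f_hz
        self.queue = []                # (arrival_time, message), FIFO

    def submit(self, t, msg):
        heapq.heappush(self.queue, (t, msg))

    def run_until(self, t_end):
        """Yield (release_time, message) for every tick up to t_end."""
        tick = self.period
        while tick <= t_end:
            # Only messages that arrived before this tick are eligible.
            if self.queue and self.queue[0][0] <= tick:
                _, msg = heapq.heappop(self.queue)
                yield (tick, msg)      # one message per non-empty tick
            tick += self.period

pacer = Pacer(f_hz=1.0)                # leaks at most ~1 bit/second
for t, name in [(0.1, "job1"), (0.2, "job2"), (3.7, "job3")]:
    pacer.submit(t, name)
print(list(pacer.run_until(5.0)))
# [(1.0, 'job1'), (2.0, 'job2'), (4.0, 'job3')]
\end{verbatim}
An observer of the output learns, per tick, only whether the queue was empty at that moment: one bit per $1/f$ period, matching the bound discussed next.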
Ignoring any information contained in the {\em content} and {\em order} of the queued messages---% which we will control via content labels---% we can see that such a paced queue leaks at most one bit of timing information per $1/f$-time period: namely, whether or not the queue was empty at that particular timer tick. That is, if the messages flowing into a paced queue have a timing tag of $U_{f'}$ for $f' > f$ (including $f' = \infty$), then we can safely ``downgrade'' the timing tags of those messages to $U_f$ as they emerge from the queue, if $f$ is the queue's pacing frequency. If messages with label $\{A/A_\infty,B_\infty\}$ flow into a pacer with frequency $f$, for example, then the messages emerging from the queue's output have label $\{A/A_f,B_f\}$. While we for now offer only an intuitive argument for the correctness of this rate-limiting mechanism, we believe this argument can be precisely formalized, and intend to do so as part of the further development of the TIFC model. \subsection{TIFC Examples} Instead of delving into this preliminary TIFC model in more detail, we merely illustrate its potential uses via three simple examples. In these examples, two customers---% Alice and Bob---% each wish to perform some privacy-sensitive computation on cloud hardware managed by a trusted cloud provider. Each customer would like strong assurance from the provider that his sensitive data cannot leak to other customers beyond some well-defined rate---% even if his code is infected with malware that deliberately attempts to use timing channels to leak his data. For these examples we make the simplifying assumption that timing channels arise only from shared compute resources, such as processor cores and the caches and functional units supporting them. We neglect for now other sources of timing channels, such as those arising from network communication paths either within the cloud or between cloud and customers~\cite{ ristenpart09cloud}, although this TIFC model should readily extend to the handling of communication-related timing channels as well. \paragraph{Dedicated Hardware Scenario:} The first example, in Figure~\ref{fig-private}(a), illustrates a trivial ``base case'' scenario, in which the cloud provider controls timing channels merely by imposing a fixed partitioning of hardware compute resources between Alice and Bob. Alice submits compute job requests via a cloud gateway node that the provider dedicates exclusively to Alice, and similarly for Bob. Each customer's gateway node forwards each submitted compute job to some compute core (on the same or a different node) that is also dedicated exclusively to the same customer. The provider's trusted gateway nodes attach TIFC labels to each incoming request, and the provider's trusted OS kernel or hypervisor managing each compute core uses these labels to prevent either customer's compute jobs from leaking information to the other via either the content or timing of messages within the cloud. Figure~\ref{fig-private}(b) and (c) illustrates the (intuitively trivial) reason this example provides timing isolation, by contrasting the system's timing when Bob submits a ``short'' job (b) with the timing when Bob submits a ``long'' job (c). Since Alice's job runs on a separate compute core from Bob's, Alice's job completion time depends only on the content of that job and Alice's prior jobs---% information represented by the timing tag $A_\infty$---% and is not ``tainted'' by any timing dependency on Bob's jobs.
\begin{figure}[t] \centering \includegraphics[width=0.90\textwidth]{private.eps} \caption{Content and Timing Information Labels: Private Per-Client Hardware Resources} \label{fig-private} \end{figure} \paragraph{Fixed-Reservation Timeslicing Scenario:} Figure~\ref{fig-reserved}(a) shows a similar but slightly less trivial example, in which a shared compute core processes both Alice's and Bob's jobs on a ``fixed reservation'' schedule that does {\em not} depend on either Alice's or Bob's {\em demand} for the shared core. This example assumes that the shared compute core maintains and isolates the state of each customer's job using standard process or virtual machine mechanisms. The scheduling of these per-customer processors onto the shared compute core, however, is controlled by a separate entity we call the {\em reservation scheduler}. The scheduler conceptually runs on its own dedicated CPU hardware, and sends a message to the shared compute core at the beginning of each timeslice indicating which customer's job should be run during that timeslice. The code implementing the scheduling policy need not be trusted for information flow control purposes, as long as trusted code attaches and checks TIFC labels appropriately. In particular, the scheduler and the messages it sends have the empty label $\{-/-\}$, which allows the scheduler's messages to affect the timing of Alice's and Bob's labeled jobs running on the shared core, without adding any new ``taint.'' With its empty label, however, the reservation scheduler cannot {\em receive} any messages from the shared core that might depend on either the content or timing of the customers' jobs. In particular, TIFC enforcement prevents the scheduler from obtaining any feedback about whether either Alice's or Bob's processes actually demand CPU time at any given moment, forcing the scheduler to implement a ``demand-insensitive'' policy, which isolates the timing of different customers' jobs sharing the core at the cost of wasting shared core capacity when any customer's demand is low. Figure~\ref{fig-reserved}(b) and (c) shows execution schedules for the shared core in the cases in which Bob's job is short or long, respectively, illustrating why Alice's job completion time depends only on Alice's information---% hence the timing label of $A_\infty$---% even though Bob's job may have executed on the same core during different (demand-independent) timeslices. \begin{figure}[t] \centering \includegraphics[width=0.90\textwidth]{reserved.eps} \caption{Labeling Scenario: Shared Resource with Reservation-based Scheduling} \label{fig-reserved} \end{figure} \paragraph{Demand-Sensitive Statistical Multiplexing Scenario:} The above ``dedicated hardware'' and ``reservation scheduling'' scenarios embody well-known timing channel control techniques~\cite{ kemmerer83shared,wray91analysis}, to which TIFC merely adds an explicit, enforceable labeling model. These standard timing channel control techniques unfortunately undermine the cloud {\em business model}, however, by limiting or eliminating the cloud provider's ability to obtain efficiencies of scale through oversubscription and statistical multiplexing of hardware resources~\cite{ford10determinating}. Figure~\ref{fig-statmux} illustrates a final scenario that {\em does} allow statistical multiplexing, at the cost of a controlled-rate timing information leak. 
\begin{figure}[t] \centering \includegraphics[width=0.90\textwidth]{statmux.eps} \caption{Labeling Scenario: Shared Resource with Demand-driven Scheduling} \label{fig-statmux} \end{figure} As in the previous example, this scenario includes a shared compute core and a separate scheduler entity. In this case, however, instead of the empty (minimum) label, we give the scheduler a ``high'' (maximum) label containing all customers' content and timing taints. This high label allows the scheduler to receive demand information about the customers' jobs from the shared compute core, and even to receive messages from customers' jobs themselves containing explicit scheduling requests or ``hints.'' Since the scheduler's content label ($A,B$) is higher than the content labels of either Alice's or Bob's jobs, TIFC disallows the scheduler from sending messages {\em to} Alice or Bob, or otherwise affecting the {\em content} (process state) of their jobs. The scheduler can send messages to the shared compute core's trusted control logic, however, which uses these messages only to determine which customer's jobs run during a particular timeslice. The shared core runs these jobs deterministically, ensuring that regardless of how the scheduler runs them, each job's result content depends only on that job's input content and not on any aspect of the job's execution timing. The scheduler's control messages therefore ``taint'' all jobs with the timing tags---but {\em not} the content tags---% of all customers. The results of Alice's job, for example, have the label $\{A/A_\infty,B_\infty\}$, indicating that the result content contains only Alice's information but the timing of the job's completion may also contain Bob's information. Without additional measures, this high timing label would prevent Alice's gateway from sending Alice's job results back to Alice, since the timing of these job completion messages could leak Bob's information at an arbitrarily high rate. To rate-limit this timing leak, we assume that when requesting service from the cloud provider, Alice and Bob agreed to allow timing information leaks up to a specific rate $f$ fixed throughout this particular cloud. To enforce this policy, the cloud provider inserts a pacer on the path of each customer's job results queue, which releases the results of at most one queued job at each ``tick'' of some trusted provider-wide clock of frequency $f$. In addition, since all customers have agreed to allow timing information leaks up to rate $f$, each user's gateway node gives all other gateways a timing declassification capability for rate $f$: thus, Alice's and Bob's gateways can declassify each other's timing labels up to rate $f$. With this system design, the TIFC rules allow Alice's job results to flow back to Alice at a rate of up to $f$ jobs per second, while leaking at most $f$ bits per second of Bob's information. As in the previous scenarios, Figure~\ref{fig-statmux}(b) and (c) compares two execution schedules resulting from Bob's job being ``short'' and ``long,'' respectively. Due to demand-sensitive multiplexing, each job's completion time depends on the prior jobs of all users, possibly embodying all users' information at an arbitrary rate. Alice's output pacer, however, delays the release of each job's results to a unique clock tick boundary, ``scrubbing'' this timing channel down to the frequency $f$ at which the gateways can declassify the timing labels.
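Putting the pieces together, the gateway-side release path in this last scenario might look like the sketch below, which reuses the hypothetical \texttt{Pacer} class from the earlier sketch; the downgrade rule and the rate constant are again purely illustrative.
\begin{verbatim}
INF = float("inf")
f = 1.0                      # provider-wide timing-leak budget, bits/second

def release(results_pacer, t_end, label):
    """Release paced results, downgrading every timing tag U_inf to U_f."""
    content, timing = label
    for tick, msg in results_pacer.run_until(t_end):
        new_timing = {user: min(rate, f) for user, rate in timing.items()}
        yield tick, msg, (content, new_timing)

pacer = Pacer(f_hz=f)        # Alice's job-results queue
pacer.submit(0.3, "alice job 1 results")
pacer.submit(0.4, "alice job 2 results")
for tick, msg, lbl in release(pacer, 3.0, ({"A"}, {"A": INF, "B": INF})):
    print(tick, msg, lbl)    # timing tags emerge as {A: 1.0, B: 1.0}
\end{verbatim}
Alice's gateway can then use its $B^-_f$ capability, granted under the agreed leak budget, to strip $B$'s residual rate-$f$ timing tag before forwarding the results to Alice.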
\def\section{\setcounter{equation}{0} \setcounter{theorem}{0} \@startsection{section}{1}{\z@}{-4.0ex plus -1ex minus -.2ex}{2.3ex plus .2ex}{\bf\Large}} \def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus-1ex minus-.2ex}{1.5ex plus.2ex}{\reset@font\bf\LARGE}} \begin{document} \maketitle \begin{abstract} The local Minkowski tensors are valuations on the space of convex bodies in Euclidean space with values in a space of tensor measures. They generalize at the same time the intrinsic volumes, the curvature measures and the isometry covariant Minkowski tensors that were introduced by McMullen and characterized by Alesker. In analogy to the characterization theorems of Hadwiger and Alesker, we give here a complete classification of all locally defined tensor measures on convex bodies that share with the local Minkowski tensors the basic geometric properties of isometry covariance and weak continuity.\\[1mm] {\em 2010 Mathematics Subject Classification:} primary 52A20, secondary 52A22\\[1mm] {\em Keywords:} valuation, Minkowski tensor, tensor valuation, support measure, characterization theorem, weak continuity, normal cycle \end{abstract} \renewcommand{\thefootnote}{{}} \footnote{We acknowledge the support of the German Research Foundation (DFG) through the Research Unit `Geometry and Physics of Spatial Random Systems' under the grant HU 1874/2-1.} \section{Introduction}\label{sec1} Additivity of set functions is a basic notion of great use in different parts of mathematics. While in its stronger version of countable additivity it constitutes the foundation of measure theory and thus is ubiquitous in analysis, a seemingly rudimentary form of additivity leads to a surprisingly rich theory in the geometry of sets in Euclidean and other spaces. Restricting ourselves here to the space ${\mathcal K}^n$ of convex bodies (nonempty, compact, convex subsets) in Euclidean space ${\mathbb R}^n$, we say that a function $\varphi$ on ${\mathcal K}^n$ with values in an abelian group (or an abelian semigroup with cancellation law) is {\em additive} or a {\em valuation} if $$ \varphi(K\cup M)+\varphi(K\cap M)=\varphi(K)+\varphi(M)$$ whenever $K,M,K\cup M\in{\mathcal K}^n$. The space ${\mathcal K}^n$ may be replaced by an intersectional subclass, and also suitable classes of sets more general than convex bodies may be admitted. Real valuations on convex polytopes were first put to good use in Dehn's solution of Hilbert's third problem, and since then they play an essential role in the dissection theory of polytopes (see \cite{McM93}). An important development was initiated when Blaschke \cite[\S43]{Bla37} treated the kinematic integral $$ \psi(K,M)= \int_{G_n}\chi(K\cap gM)\,\mu({\rm d} g)$$ for $K,M\in{\mathcal K}^n$ (it is insignificant that Blaschke considered a slightly different situation). Here, $\chi(L)=1$ for $L\in{\mathcal K}^n$, $\chi(\emptyset)=0$, and $\mu$ denotes the suitably normalized Haar measure on $G_n$, the motion group of ${\mathbb R}^n$. Thus, $\psi(K,M)$ is the total invariant measure of the rigid motions that bring $M$ into a position of nonempty intersection with $K$. The determination of such integrals is a prerequisite for answering some classical questions on geometric probabilities. Blaschke pointed out that the function $\psi(K,\cdot)$, for fixed $K$, is a rigid motion invariant valuation on ${\mathcal K}^n$, and that, therefore, it would be helpful to have an axiomatic characterization of the classical examples of such valuations, the intrinsic volumes.
These functionals on ${\mathcal K}^n$ can be derived from the notion of volume, via the coefficients in the Steiner formula $$ V_n(K+\rho B^n) = \sum_{j=0}^n \rho^{n-j}\kappa_{n-j}V_j(K),\qquad \rho\ge 0,$$ for $K\in{\mathcal K}^n$. Here $V_n$ denotes the volume, $+$ is vector addition, $B^n$ is the unit ball, and $\kappa_n=V_n(B^n)$. The function $V_j:{\mathcal K}^n\to{\mathbb R}$ defined in this way for $j=0,\dots,n$, the $j$th {\em intrinsic volume}, is a rigid motion invariant valuation, but it shares these properties with many other functions on ${\mathcal K}^n$. To single out the intrinsic volumes, Blaschke originally suggested the property of local boundedness, and Hadwiger repeatedly (\cite[p. 346]{Had50}, \cite[footnote 3]{Had51}) emphasized that a characterization on this basis would be desirable and useful. However, an example in \cite[p. 229]{McMS83} shows that this is not possible. It is the condition of continuity, with respect to the Hausdorff metric, with which Hadwiger succeeded. \vspace{2mm} \noindent{\bf Hadwiger's characterization theorem} \;{\em If $\varphi:{\mathcal K}^n\to{\mathbb R}$ is a continuous valuation which is invariant under proper rigid motions, then there are constants $c_0,\dots,c_n$ such that $$ \varphi(K)= \sum_{j=0}^n c_jV_j(K)$$ for $K\in{\mathcal K}^n$.} \vspace{2mm} Hadwiger proved this in \cite{Had51} for $n=3$ and in \cite{Had52} for general $n$ (the proof is reproduced in \cite[6.1.10]{Had57}) and presented integral-geometric applications in \cite{Had50} and \cite{Had56}. For example, since the function $\psi(K,\cdot)$ defined above is continuous, Hadwiger's theorem yields that it is of the form $\psi(K,M)= \sum_{j=0}^n c_j(K)V_j(M)$. A repetition of the argument with variable $K$ shows that $\psi(K,M)= \sum_{i,j=0}^n c_{ij}V_i(K)V_j(M)$, with constants $c_{ij}$, which are then easily determined by inserting balls of different radii. The result is the {\em principal kinematic formula} of Blaschke, Chern and Santal\'{o}, for convex bodies. In this elegant way, several integral-geometric formulas can be proved. Usually, they also admit other, though more complicated, proofs. However, for the following result, called {\em Hadwiger's general integral-geometric theorem}, the approach via Hadwiger's characterization theorem (reproduced in \cite[Theorem 5.1.2]{SW08}) is up to now the only known proof. If $\varphi:{\mathcal K}^n\to{\mathbb R}$ is a continuous valuation, then $$ \int_{G_n} \varphi(K\cap g M)\,\mu({\rm d} g) =\sum_{j=0}^n\varphi_{n-j}(K) V_j(M)$$ for $K,M\in{\mathcal K}^n$, where the coefficients $\varphi_{n-j}(K)$ are given by $$ \varphi_{n-j}(K) =\int_{A(n,j)} \varphi(K\cap E)\,\mu_j({\rm d} E);$$ here $A(n,j)$ is the affine Grassmannian of $j$-planes in ${\mathbb R}^n$ and $\mu_j$ is its suitably normalized motion invariant measure. In view of such applications and their interpretations, Hadwiger's characterization theorem is esteemed so highly that, for example, Gian-Carlo Rota \cite{Rot98}, in a Colloquium Lecture at an Annual Meeting of the American Mathematical Society, called it the `Main Theorem of Geometric Probability'. Nowadays, the use of real translation invariant valuations (called `geometric functionals', if they have certain boundedness and measurability properties) in stochastic geometry goes far beyond classical geometric probabilities. We refer to \cite[Chap.
9]{SW08}, where densities of geo\-metric functionals for random sets are treated, and for example to \cite{HLS13}, which investigates asymptotic covariances and multivariate central limit theorems for geometric functionals in relation to Boolean models. The valuation property is shared by many functions arising naturally in convex geometry; they may, for example, be vector-valued, measure-valued, body-valued, or function-valued. In particular, the intrinsic volumes have local generalizations in the form of different versions of measures, known as {\em area measures} (defined on the unit sphere), {\em curvature measures} (defined on ${\mathbb R}^n$), and {\em support measures} (defined on sets of support elements). For each of these, characterization theorems of Hadwiger type, with suitably modified assumptions, have been proved, in \cite{Sch75}, \cite{Sch78}, \cite{Gla97} (see also the formulations in \cite[Sec. 4.2, Notes 11, 12]{Sch14}). Among the integral-geometric applications are a short proof of Federer's \cite{Fed59} kinematic formulas for curvature measures of convex bodies in \cite{Sch78} and a new type of kinematic formulas for support measures in \cite{Gla97} (a technical restriction was later removed in \cite{Sch99}). The theory of valuations on convex bodies comprised already an impressive body of results, as documented by the survey articles \cite{McMS83}, \cite{McM93}, when Klain \cite{Kla95} published a new and shorter proof of Hadwiger's characterization theorem and noticed some new consequences. Klain's proof is reproduced in the book of Klain and Rota \cite{KR97}, which gives an excellent introduction to valuations and their integral-geometric applications, with side-views to some discrete aspects. (The proof is also reproduced in \cite[Thm. 6.4.14]{Sch14}). Klain's proof gave new impetus to the theory of valuations and, in particular, Hadwiger's theorem became anew the incentive and template for a large number of characterization and classification results for valuations. This second phase of valuation theory, beginning in the late 1990s, has two main branches. One of these is a deep algebraic structure theory for valuations, developed mainly by Semyon Alesker. Hints to the literature are found in the very brief sketch in \cite[Sec. 6.5]{Sch14}. To illuminate the considerable impact that this new theory had on integral geometry, we mention the articles \cite{BF10}, \cite{BFS14} and the surveys given by Bernig \cite{Ber10} and Fu \cite{Fu11}, both under the title of `Algebraic integral geometry'. The role of measure-valued valuations in this new theory is revealed in \cite{BFS14} and \cite{Wan14}. The other branch of valuation theory, initiated by Monika Ludwig, has produced a series of important characterization theorems where the assumed invariance, covariance or contravariance is with respect to the groups ${\rm GL}(n)$ or ${\rm SL}(n)$, with or without translation invariance. We refer the reader to \cite[Sec. 10.16]{Sch14} for a brief survey with hints to the original literature. The simple geometric nature of the properties appearing in Hadwiger's characterization theorem and the close relation between the notions of `additivity' (as defined above) and that of an `extensive property' (as coined by Richard Tolman and used in the physics of interacting particle systems) may be reasons why the intrinsic volumes have found surprising applications in statistical physics (under the name of `Minkowski functionals'). 
The survey by Mecke \cite{Mec00} explains how Minkowski functionals are used to describe the morphology of random spatial configurations and how they are applied to the investigation of physical properties of materials such as complex fluids and porous media. A more recent trend employs, even more surprisingly, the natural tensor-valued generalizations of the intrinsic volumes, called `Minkowski tensors', in physics. For small dimensions and low ranks, Minkowski tensors have been applied, and are finding increasing interest, in the investigation of real materials, in particular in the analysis of the morphology and anisotropy of cellular, granular or porous structures. We refer (in chronological order) to \cite{BDMW02, SchT10, SchT11, SchT12, HHKM13}, for example. With the Minkowski tensors, we come to the central goal of the present paper. For these tensor functions, a natural extension of Hadwiger's characterization theorem has been established previously, and we now aim at a similar classification of their local versions. In early analogues of Hadwiger's characterization theorem, real-valued valuations were replaced by vector-valued valuations, resulting in characterizations of moment vectors and curvature centroids (\cite{HS71}, \cite{Sch72}). The next step of extension took some time. When McMullen \cite{McM97} initiated a thorough study of tensor-valued versions of the intrinsic volumes, he also took a possible characterization into consideration. As it turned out, Alesker was in possession of the right results from his work \cite{Ale99a} on rotation invariant valuations to prove in \cite{Ale99b} a characterization theorem for the (suitably modified) Minkowski tensors. The step from vector-valued to tensor-valued valuations with covariance properties with respect to the isometry group required new methods for their characterization. Alesker made use of representation theory for the rotation group. An approach to Alesker's characterization theorem using essentially only Hadwiger's techniques has so far not been successful. Alesker's characterization of Minkowski tensors and the previously mentioned characterizations of the local versions of the intrinsic volumes immediately call for a classification of local versions of the Minkowski tensors, in the form of tensor-valued measures, by their most essential geometric properties. Such a local characterization theorem, whose proof turned out to be a non-trivial task, is the subject of the present paper. One motivation for this is the expectation that the local versions of the Minkowski tensors should be more flexible for integral-geometric applications, since in contrast to the global Minkowski tensors they do not satisfy non-trivial linear relations (see the next section). A first step toward a characterization theorem, namely for local tensor valuations on convex polytopes, was accomplished in \cite{Sch12}. The extension to general convex bodies is, however, far from straightforward, since unexpected local tensor valuations have come up in the polytopal case. Among these, we shall single out those admitting a continuous extension to general convex bodies. For the precise description of the situation, some more elaborate explanations are needed, which we postpone to the next section.
Here we only point out that Alesker's characterization theorem will be recalled as Theorem \ref{Theorem2.1}, the local characterization theorem for polytopes will be repeated and refined in Theorem \ref{Theorem2.2}, and the main result of this paper is Theorem \ref{Theorem2.3}. As indicated, if one wants to extend the characterization theorem for local tensor valuations from polytopes to general convex bodies, a crucial issue is whether the tensor measure valued mappings that appear in the polytopal case admit weakly continuous extensions to general convex bodies. Section \ref{sec4} provides a positive answer in certain cases, and Section \ref{sec5} gives a negative answer in the remaining cases. A suitable refinement of this negative result finally leads to a proof of the main theorem. The methods employed in the two parts are very different. In Section \ref{sec4}, a tensor-valued, rotation covariant differential form is defined and evaluated at the normal cycle of a convex set. Then basic geometric measure theory is used to show that this defines a (weakly) continuous extension of some of the tensor measure valued mappings known from the polytopal case. The main tool of Section \ref{sec5} is the approximation of a highly symmetric convex body by polytopes with controllable symmetries. The approximating polytopes are constructed by lifting a polytopal complex, which is defined by a lattice in ${\mathbb R}^{n-1}$, to a paraboloid of revolution. For reasons indicated in Section \ref{sec5}, a distinction has to be made between dimensions at least four and dimension three, where the construction is more delicate. \section{Explanations and results}\label{sec2} We work in $n$-dimensional Euclidean space ${\mathbb R}^n$ ($n\ge 2$) with a fixed scalar product $\langle\cdot\,,\cdot\rangle$ and induced norm $\|\cdot\|$. The scalar product is also used to identify ${\mathbb R}^n$ with its dual space. We write $B^n$ for the unit ball, ${\mathbb S}^{n-1}$ for the unit sphere of ${\mathbb R}^n$, and $\Sigma^n$ for the product space ${\mathbb R}^n\times{\mathbb S}^{n-1}$. Sometimes we identify ${\mathbb R}^n\times{\mathbb R}^n$ with ${\mathbb R}^{2n}$. The $k$-dimensional Hausdorff measure in a Euclidean space is denoted by ${\mathcal H}^k$. The constant $\omega_n=2\pi^{n/2}/\Gamma(n/2)$ is the $(n-1)$-dimensional Hausdorff measure of ${\mathbb S}^{n-1}$, and $\kappa_n=\omega_n/n$ is the volume of $B^n$. For $p\in{\mathbb N}_0$, let ${\mathbb T}^p$ be the vector space of symmetric tensors of rank $p$ on ${\mathbb R}^n$. The symmetric tensor product (see, e.g., Satake \cite[Chap.~5, Sec.~4.2]{Sat75}) is denoted by $\odot$, but we shall throughout use the abbreviations $$ a\odot b=:ab,\qquad \underbrace{a\odot\cdots\odot a}_r=:a^r$$ for symmetric tensors $a,b$ and for $r\in {\mathbb N}$. Moreover, $a^0:= 1$ for $a\not=0$. Since we have identified ${\mathbb R}^n$ with its dual space via the scalar product, each symmetric $p$-tensor is a symmetric $p$-linear functional on ${\mathbb R}^n$, and for $x\in{\mathbb R}^n$ and $r\in{\mathbb N}$ we have $x^r(y_1,\dots,y_r)=\langle x,y_1\rangle\cdots \langle x,y_r\rangle$ for $y_1,\dots,y_r\in{\mathbb R}^n$. For a topological space $X$, we write ${\mathcal B}(X)$ for the $\sigma$-algebra of Borel sets of $X$. If $X$ is a metric space, we denote by ${\mathcal B}_b(X)$ the ring of bounded Borel sets in ${\mathcal B}(X)$. By ${\mathcal K}^n$ we denote the space of convex bodies (nonempty compact convex subsets) of ${\mathbb R}^n$, equipped with the Hausdorff metric.
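To illustrate the tensor notation just introduced, we record for later orientation a simple identity (it is not needed verbatim in the sequel, but it is a useful model for several computations below): if $(e_1,\dots,e_n)$ is an orthonormal basis of ${\mathbb R}^n$, then $$ \sum_{i=1}^n e_i^2(y_1,y_2)=\sum_{i=1}^n \langle e_i,y_1\rangle\langle e_i,y_2\rangle = \langle y_1,y_2\rangle \qquad\mbox{for }y_1,y_2\in{\mathbb R}^n,$$ so that $e_1^2+\cdots+e_n^2$ represents the scalar product as a symmetric tensor of rank two; it will reappear below as the metric tensor $Q$.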
For basic facts about convex bodies used in the following, we refer to \cite{Sch14}. The subset of convex polytopes is denoted by ${\mathcal P}^n$. Let $K\in{\mathcal K}^n$. By a {\em support element} of $K$ we understand a pair $(x,u)$ where $x$ is a boundary point of $K$ and $u$ is an outer unit normal vector of $K$ at $x$. The set of all support elements of $K$ is denoted by ${\rm Nor}\,K$ and is called the {\em normal bundle} of $K$. It is a closed subset of the space $\Sigma^n$. For $x\in{\mathbb R}^n\setminus K$, the point $p(K,x)$ is the unique point in $K$ nearest to $x$, and the unit vector $u(K,x):= (x-p(K,x))/\|x-p(K,x)\|$ points from $p(K,x)$ to $x$. Clearly, $(p(K,x),u(K,x))\in{\rm Nor}\,K$. For $\rho>0$ and $\eta\in{\mathcal B}(\Sigma^n)$, the volumes of the local parallel sets $$ M_\rho(K,\eta):=\{x\in (K+\rho B^n)\setminus K:(p(K,x),u(K,x))\in\eta\} $$ permit a polynomial expansion of Steiner-type, $$ {\mathcal H}^n(M_\rho(K,\eta))=\sum_{k=0}^{n-1}\rho^{n-k}\kappa_{n-k}\Lambda_{k}(K,\eta). $$ This defines the {\em support measures} $\Lambda_0(K,\cdot),\dots,\Lambda_{n-1}(K,\cdot)$ of $K$ (also known as generalized curvature measures). They are re-normalized versions of the measures $\Theta_k(K,\cdot)$ introduced in \cite[Sec. 4.2]{Sch14}, namely \begin{equation}\label{2.2a} n\kappa_{n-k}\Lambda_k(K,\cdot)=\binom{n}{k}\Theta_k(K,\cdot). \end{equation} The measures $\Lambda_k(K,\cdot)$ are concentrated on ${\rm Nor }\,K$. We need them here for a description of the Minkowski tensors. For $K\in{\mathcal K}^n$, the {\em Minkowski tensors} are defined by \begin{equation}\label{2.2} \Psi_r(K) = \Phi^{r,0}_n(K):= \frac{1}{r!}\int_K x^r\,{\mathcal H}^n({\rm d} x) \end{equation} and \begin{equation}\label{2.3} \Phi_k^{r,s}(K):= c_{n,k}^{r,s} \int_{\Sigma^n} x^ru^s\,\Lambda_k(K,{\rm d}(x,u)) \end{equation} with $$ c_{n,k}^{r,s} := \frac{1}{r!s!}\frac{\omega_{n-k}}{\omega_{n-k+s}}$$ for $r,s\in{\mathbb N}_0$ and $k\in\{0,\dots,n-1\}$; for convenience, we put $\Phi_k^{r,s}=0$ for all other $r,s,k$. We remark that the moment tensor $\Psi_r$ is a very natural construction and that the tensors (\ref{2.3}) necessarily come up if $\Psi_r$ is applied to a parallel body. In fact, for $\rho>0$, the Steiner-type formula $$ \Psi_r(K+\rho B^n) = \sum_{k=0}^{n+r} \rho^{n+r-k}\kappa_{n+r-k}\sum_{s\in{\mathbb N}_0}\Phi_{k-r+s}^{r-s,s}(K)$$ holds, which was proved in \cite{Sch00} with other notations. See also \cite[Subsection 5.4.2]{Sch14}. Each Minkowski tensor $\Phi_k^{r,s}$ defines a mapping $\Gamma:{\mathcal K}^n\to{\mathbb T}^p$, for $p=r+s$, which is a valuation and is continuous (with respect to the topology on ${\mathcal K}^n$ induced by the Hausdorff metric and the standard topology on ${\mathbb T}^p$). Moreover, $\Gamma$ is isometry covariant, that is, it is rotation covariant and has polynomial translation behavior. Here, rotation covariance is defined by $\Gamma(\vartheta K)=\vartheta\Gamma(K)$ for $\vartheta\in{\rm O}(n)$, where the rotation group ${\rm O}(n)$ operates in the standard way on ${\mathbb T}^p$, namely by $$ (\vartheta T)(x_1,\dots,x_p)= T(\vartheta^{-1}x_1,\dots,\vartheta^{-1}x_p)$$ for $T\in{\mathbb T}^p$ and $x_1,\dots,x_p\in{\mathbb R}^n$. Polynomial translation behavior of $\Gamma$ means that $\Gamma(K+t)$ is a tensor polynomial in $t\in{\mathbb R}^n$, that is, there are tensors $\Gamma_{p-j}(K)\in{\mathbb T}^{p-j}$, independent of $t$, such that $$ \Gamma(K+t)= \sum_{j=0}^p \Gamma_{p-j}(K)t^j$$ for all $t\in{\mathbb R}^n$ and all $K\in{\mathcal K}^n$. 
(We define $0^0:=1$ here.) The listed properties are sufficient to essentially characterize the Minkowski tensors. Here `essentially' refers to the facts that linear combinations with constant coefficients preserve the properties and that the metric tensor $Q$, defined by $$ Q(x,y):= \langle x,y\rangle\qquad\mbox{for }x,y\in {\mathbb R}^n,$$ is also isometry covariant. Therefore, multiplication by a power of the metric tensor also preserves the listed properties. The following characterization theorem was proved in \cite{Ale99b}. \begin{theorem}[Alesker]\label{Theorem2.1} Let $p\in{\mathbb N}_0$. The real vector space of continuous, isometry covariant valuations on ${\mathcal K}^n$ with values in ${\mathbb T}^p$ is spanned by the tensor valuations $Q^m\Phi_k^{r,s}$, where $m,r,s \in{\mathbb N}_0$ satisfy $2m+r+s=p$ and where $k\in\{0,\dots,n\}$, but $s=0$ if $k=n$. \end{theorem} The dimensions of the vector spaces of continuous, isometry covariant tensor valuations of a fixed rank and a given degree of homogeneity were explicitly determined in \cite{HSS08a}. This required some effort, since there exist non-trivial linear relations between the tensor functions $Q^m\Phi_k^{r,s}$, which had been discovered by McMullen \cite{McM97}. The existence of these linear relations was an obstruction to applying the characterization theorem in Hadwiger's fashion, to derive integral-geometric formulas. Instead, such formulas for Minkowski tensors were proved by direct, cumbersome computations in \cite{HSS08b}. Recently, the modern structure theory of valuations has provided a new approach to some of these formulas, as well as additional new integral-geometric formulas for tensor valuations; see \cite{BH14}. \vspace{2mm} The natural local versions of (\ref{2.3}), which we call {\em local Minkowski tensors}, are defined by \begin{equation}\label{2.4} \phi_k^{r,s}(K,\eta):= c_{n,k}^{r,s} \int_\eta x^ru^s\,\Lambda_k(K,{\rm d}(x,u)) \end{equation} for $\eta\in{\mathcal B}(\Sigma^n)$ and $r,s\in{\mathbb N}_0$, $k\in\{0,\dots,n-1\}$. Each local Minkowski tensor $\phi_k^{r,s}$ defines a mapping from ${\mathcal K}^n\times{\mathcal B}(\Sigma^n)$ into ${\mathbb T}^p$, for $p=r+s$. Generally, for a mapping $\Gamma:{\mathcal K}^n\times {\mathcal B}(\Sigma^n) \to {\mathbb T}^p$ we consider the following properties. Here we write $\eta+t:=\{(x+t,u):(x,u)\in\eta\}$ and $\vartheta \eta:= \{(\vartheta x,\vartheta u):(x,u)\in\eta\}$ for $\eta\in{\mathcal B}(\Sigma^n)$, $t\in{\mathbb R}^n$ and $\vartheta\in{\rm O}(n)$. \noindent $\bullet$\; $\Gamma$ is {\em translation covariant of degree} $q$, where $0\le q\le p$, if \begin{equation}\label{1} \Gamma(K+t,\eta+t)= \sum_{j=0}^q \Gamma_{p-j}(K,\eta)\frac{t^j}{j!} \end{equation} with tensors $\Gamma_{p-j}(K,\eta)\in{\mathbb T}^{p-j}$, for all $K\in{\mathcal K}^n$, $\eta\in{\mathcal B}(\Sigma^n)$ and $t\in{\mathbb R}^n$ (the denominator $j!$ appears for convenience); here $\Gamma_p=\Gamma$. In particular, $\Gamma$ is called {\em translation invariant} if it is translation covariant of degree zero. \noindent $\bullet$\; $\Gamma$ is {\em rotation covariant} if $\Gamma(\vartheta K,\vartheta\eta)= \vartheta \Gamma(K,\eta)$ for all $K\in{\mathcal K}^n$, $\eta\in{\mathcal B}(\Sigma^n)$ and $\vartheta\in{\rm O}(n)$. \noindent $\bullet$\; $\Gamma$ is {\em isometry covariant} (of degree $q$) if it is translation covariant of some degree $q\le p$ (and hence of degree $p$) and rotation covariant.
\noindent $\bullet$\; $\Gamma$ is {\em locally defined} if for $\eta\in {\mathcal B}(\Sigma^n)$ and $K,K'\in{\mathcal K}^n$ with $\eta\cap {\rm Nor}\,K = \eta\cap {\rm Nor}\,K'$ the equality $\Gamma(K,\eta)=\Gamma(K',\eta)$ holds. \noindent $\bullet$\; If $\Gamma(K,\cdot)$ is a ${\mathbb T}^p$-valued measure for each $K\in{\mathcal K}^n$, then $\Gamma$ is {\em weakly continuous} if for each sequence $(K_i)_{i\in{\mathbb N}}$ of convex bodies in ${\mathcal K}^n$ converging to a convex body $K$ the relation $$ \lim_{i\to\infty} \int_{\Sigma^n} f\,{\rm d}\Gamma(K_i,\cdot) = \int_{\Sigma^n} f\,{\rm d}\Gamma(K,\cdot)$$ holds for all continuous functions $f:\Sigma^n\to{\mathbb R}$ (the integral is defined coordinate-wise). In the previous definitions, the set ${\mathcal K}^n$ may be replaced by ${\mathcal P}^n$. The particular mapping $\Gamma= \phi_k^{r,s}$ has the following properties, as a consequence of the known properties of the support measures (which are found in \cite[Sec.~4.2]{Sch14}). For each $K\in{\mathcal K}^n$, $\Gamma(K,\cdot)$ is a ${\mathbb T}^p$-valued measure, and $\Gamma$ is weakly continuous. For each $\eta\in{\mathcal B}(\Sigma^n)$, $\Gamma(\cdot,\eta)$ is measurable and is a valuation. The mapping $\Gamma$ is isometry covariant. In fact, the translation covariance follows from \begin{eqnarray} \phi_k^{r,s}(K+t,\eta+t)&=& c_{n,k}^{r,s} \int_{\eta+t} x^ru^s\,\Lambda_k(K+t,{\rm d}(x,u))\nonumber\\ & =& c_{n,k}^{r,s} \int_\eta (x+t)^ru^s\,\Lambda_k(K,{\rm d}(x,u))\nonumber\\ &=& \sum_{i=0}^r\phi_k^{r-i,s}(K,\eta)\frac{t^i}{i!}. \label{2.7} \end{eqnarray} Finally, the mapping $\Gamma$ is locally defined. As mentioned, these properties follow from the corresponding properties of the support measures. If one replaces ${\mathcal K}^n$ by the space ${\mathcal P}^n$ of polytopes, then it was noted by Glasauer \cite[Lem.~1.3]{Gla97} that the latter properties, without the valuation property and the weak continuity, are already sufficient to characterize the linear combinations of the support measures. This was one of the motivations for the following considerations about local Minkowski tensors of polytopes. For a polytope $P\in{\mathcal P}^n$, we denote by ${\mathcal F}_k(P)$ the set of $k$-dimensional faces of $P$, for $k\in\{0,\dots,n\}$. For $F\in{\mathcal F}_k(P)$, the set $\nu(P,F)=N(P,F)\cap{\mathbb S}^{n-1}$ is the set of outer unit normal vectors of $P$ at its face $F$ (see \cite[Sec.~2.4]{Sch14} for the normal cone $N(P,F)$). The local Minkowski tensors of a polytope $P$ have the explicit representation \begin{equation}\label{2.6a} \phi_k^{r,s}(P,\eta)= C_{n,k}^{r,s} \sum_{F\in{\mathcal F}_k(P)} \int_F \int_{\nu(P,F)} {\bf 1}_\eta(x,u) x^r u^s\, {\mathcal H}^{n-k-1}({\rm d} u)\,{\mathcal H}^k({\rm d} x) \end{equation} with \begin{equation}\label{2.7a} C_{n,k}^{r,s}:= (r!s!\omega_{n-k+s})^{-1} \end{equation} and with ${\bf 1}_\eta$ denoting the indicator function of the set $\eta$. This follows from a corresponding representation of the support measures, see \cite[(4.3)]{Sch14} and (\ref{2.2a}). The local Minkowski tensors of polytopes can be generalized while preserving their properties, except weak continuity. Let $L\subset {\mathbb R}^n$ be a linear subspace, let $\pi_L:{\mathbb R}^n\to L$ be the orthogonal projection and define $Q_L\in{\mathbb T}^2$ by $$ Q_L(a,b):=\langle \pi_L a,\pi_L b\rangle,\qquad \mbox{for }a,b\in{\mathbb R}^n.$$ Then $Q_{\vartheta L}=\vartheta Q_L$ for $\vartheta\in{\rm O}(n)$. 
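To illustrate the last definition, suppose that $L$ is the subspace spanned by the first $k$ vectors of an orthonormal basis $(e_1,\dots,e_n)$ of ${\mathbb R}^n$. Then $\pi_L a=\sum_{i=1}^k\langle a,e_i\rangle e_i$, and hence $$ Q_L=e_1^2+\cdots+e_k^2 \qquad\mbox{and}\qquad Q_L+Q_{L^\perp}=Q,$$ where $L^\perp$ denotes the orthogonal complement of $L$. Both identities will be used repeatedly in the following.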
For $P\in{\mathcal P}^n$ and $F\in{\mathcal F}_k(P)$, let $L(F)$ be the linear subspace parallel to ${\rm aff}\, F$ (the {\em direction space} of $F$). Then we define the {\em generalized local Minkowski tensor} \begin{equation}\label{2.6} \phi_k^{r,s,j}(P,\eta) := C_{n,k}^{r,s} \sum_{F\in{\mathcal F}_k(P)} Q_{L(F)}^{\hspace*{1pt}j}\int_F \int_{\nu(P,F)} {\bf 1}_\eta(x,u) x^r u^s\, {\mathcal H}^{n-k-1}({\rm d} u)\,{\mathcal H}^k({\rm d} x), \end{equation} for $r,s,j,k\in{\mathbb N}_0$ with $1\le k\le n-1$. This definition is supplemented by $\phi_0^{r,s,0}:=\phi_0^{r,s}$, but $\phi_0^{r,s,j}$ will remain undefined for $j\ge 1$. Each mapping $\Gamma= \phi_k^{r,s,j}$ has the following properties. It is isometry covariant and locally defined. For each $P\in{\mathcal P}^n$, $\Gamma(P,\cdot)$ is a ${\mathbb T}^p$-valued measure. For each $\eta\in{\mathcal B}(\Sigma^n)$, $\Gamma(\cdot,\eta)$ is a valuation. This was stated without proof in \cite{Sch12}. We shall provide a proof in the next section (Theorem \ref{T3.3}). Not all of the listed properties are needed for a characterization. \begin{theorem}\label{Theorem2.2} For $p\in{\mathbb N}_0$, let $T_p({\mathcal P}^n)$ denote the real vector space of all mappings $\Gamma:{\mathcal P}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^p$ with the following properties.\\[1mm] $\rm (a)$ $\Gamma(P,\cdot)$ is a ${\mathbb T}^p$-valued measure, for each $P\in{\mathcal P}^n$;\\[1mm] $\rm (b)$ $\Gamma$ is isometry covariant;\\[1mm] $\rm (c)$ $\Gamma$ is locally defined.\\[1mm] Then a basis of $T_p({\mathcal P}^n)$ is given by the mappings $Q^m\phi^{r,s,j}_k$, where $m,r,s,j\in{\mathbb N}_0$ satisfy $2m+2j+r+s=p$ and where $k\in\{0,\dots,n-1\}$, but $j=0$ if $k\in\{0,n-1\}$. \end{theorem} This is a stronger version of a theorem proved in \cite{Sch12}, including the linear independence result of Theorem \ref{Theorem3.1}. We shall explain the further modifications, in comparison to \cite{Sch12}, in the next section. A similar theorem for ${\mathcal K}^n$ instead of ${\mathcal P}^n$ can hardly be expected without a continuity assumption. This raises the question whether the modified local Minkowski tensors $\phi^{r,s,j}_k$ with $k\ge 1$ and $j\ge 1$ have weakly continuous extensions from ${\mathcal P}^n$ to ${\mathcal K}^n$. The answer is easily seen to be positive for $k=n-1$, see Lemma \ref{Lemma3.4}. In Section \ref{sec4} we shall show that the answer is positive for $j=1$, and we shall suggest a representation of $\phi_k^{r,s,1}(K,\cdot)$ for general convex bodies $K$. For $j\ge 2$ and $1\le k\le n-2$ the answer is negative, in a stronger sense, which will allow us to prove in Section \ref{sec5} the following main result. \begin{theorem}\label{Theorem2.3} For $p\in{\mathbb N}_0$, let $T_p({\mathcal K}^n)$ denote the real vector space of all mappings $\Gamma:{\mathcal K}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^p$ with the following properties.\\ $\rm (a)$ $\Gamma(K,\cdot)$ is a ${\mathbb T}^p$-valued measure, for each $K\in{\mathcal K}^n$;\\ $\rm (b)$ $\Gamma$ is isometry covariant;\\ $\rm (c)$ $\Gamma$ is locally defined;\\ $\rm (d)$ $\Gamma$ is weakly continuous.\\ Then a basis of $T_p({\mathcal K}^n)$ is given by the mappings $Q^m\phi^{r,s,j}_k$, where $m,r,s\in{\mathbb N}_0$ and $j\in\{0,1\}$ satisfy $2m+2j+r+s=p$ and where $k\in\{0,\dots,n-1\}$, but $j=0$ if $k\in\{0,n-1\}$. 
\end{theorem} \section{Reductions and the polytopal case}\label{sec3} We assume that $\Gamma:{\mathcal K}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^p$ is a mapping which has the following properties.\\ $\bullet$ For each $K\in{\mathcal K}^n$, $\Gamma(K,\cdot)$ is a ${\mathbb T}^p$-valued measure;\\ $\bullet$ $\Gamma$ is isometry covariant;\\ $\bullet$ $\Gamma$ is locally defined.\\ Here ${\mathcal K}^n$ may be replaced by ${\mathcal P}^n$. It is our goal to classify the mappings $\Gamma$ with these and possibly a continuity property. As a preparation, in the present section we establish several auxiliary results and provide reduction steps in order to deduce the general results from some simpler classification problems. If $\Gamma$ is translation covariant of degree $q$, then there are mappings $\Gamma_{p-j}:{\mathcal K}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^{p-j}$, $j=0,\dots,q$, (possibly zero for some $j$ and with $\Gamma_p=\Gamma$) such that $$ \Gamma(K+t,\eta+t)= \sum_{j=0}^q \Gamma_{p-j}(K,\eta)\frac{t^j}{j!}$$ for all $K\in {\mathcal K}^n$, $\eta\in {\mathcal B}(\Sigma^n)$ and $t\in{\mathbb R}^n$. The following lemma extends an observation of McMullen \cite[Thm.~2.3]{McM97}. \begin{lemma}\label{Lemma3.1} If $\Gamma$ is translation covariant of degree $q\le p$, then the mappings $\Gamma_{p-j}$ satisfy $$ \Gamma_{p-j}(K+t,\eta+t) =\sum_{r=0}^{q-j} \Gamma_{p-j-r}(K,\eta)\frac{t^r}{r!}$$ for $j=0,\dots,q$ and all $K\in {\mathcal K}^n$, $\eta\in {\mathcal B}(\Sigma^n)$ and $t\in{\mathbb R}^n$, in particular (case $j=q$) $$ \Gamma_{p-q}(K+t,\eta+t) = \Gamma_{p-q}(K,\eta).$$ \end{lemma} \begin{proof} For $s,t\in{\mathbb R}^n$ we have \begin{eqnarray*} & & \sum_{j=0}^q \Gamma_{p-j}(K+t,\eta+t)\frac{s^j}{j!} = \Gamma(K+t+s,\eta+t+s) = \sum_{i=0}^q \Gamma_{p-i}(K,\eta)\frac{(t+s)^i}{i!}\\ & & = \sum_{i=0}^q \Gamma_{p-i}(K,\eta) \sum_{j=0}^i\frac{t^{i-j}}{(i-j)!} \frac{s^j}{j!} = \sum_{j=0}^q\left(\sum_{i=j}^q\Gamma_{p-i}(K,\eta)\frac{t^{i-j}}{(i-j)!}\right)\frac{s^j}{j!}. \end{eqnarray*} It follows that $$ \Gamma_{p-j}(K+t,\eta+t)=\sum_{i=j}^q \Gamma_{p-i}(K,\eta)\frac{t^{i-j}}{(i-j)!} = \sum_{r=0}^{q-j} \Gamma_{p-j-r}(K,\eta) \frac{t^r}{r!}.$$ Here we have used the subsequent lemma, together with the fact that the symmetric tensor algebra has no zero divisors. \end{proof} In order to derive properties of $\Gamma_{p-j}$ from those of $\Gamma$, the following lemma is useful. It is simpler than \cite[Lem.~1]{Sch12}, which was also used for that purpose. \begin{lemma}\label{Lemma3.2} Let $\Gamma$ be translation covariant of degree $q\le p$. Then there are constants $a_{jm}$ $($$j=0,\dots,q$, $m=1,\dots,q+1$$)$, depending only on $q,j,m$, such that \begin{equation}\label{3} \Gamma_{p-j}(K,\eta)\frac{t^j}{j!} =\sum_{m=1}^{q+1} a_{jm}\Gamma(K+mt,\eta+mt) \end{equation} for all $K\in {\mathcal K}^n$, $\eta\in {\mathcal B}(\Sigma^n)$ and $t\in{\mathbb R}^n$. \end{lemma} \begin{proof} For fixed $K,\eta,t$, let $f(\lambda)$, for $\lambda\in{\mathbb R}$, be a coordinate of $\Gamma(K+\lambda t,\eta+\lambda t)$ with respect to some given basis, and let $f_j$ be the corresponding coordinate of $\Gamma_{p-j}(K,\eta) t^j/j!$. (The following argument is similar to one used in \cite[p.~213]{Sch14}.) In the equation $f(\lambda)=\sum_{j=0}^q \lambda^j f_j$, which holds by (\ref{1}), we insert for $\lambda$ the values $1,\dots,q+1$. 
The resulting system of linear equations for $f_0,\dots,f_q$ has a (Vandermonde) determinant different from zero, hence there is a solution of the form $f_j=\sum_{m=1}^{q+1} a_{jm}f(m)$, $j=0,\dots,q$, with certain constants $a_{jm}$, depending only on $q,j,m$. Since this holds for all coordinates, equation (\ref{3}) results. \end{proof} A typical application is as follows. From the isometry covariance of $\Gamma$ we get, for $\vartheta\in{\rm O}(n)$, \begin{eqnarray*} \Gamma_{p-j}(\vartheta K,\vartheta\eta)\frac{(\vartheta t)^j}{j!} &= &\sum_{m=1}^{q+1} a_{jm} \Gamma(\vartheta(K+mt),\vartheta(\eta+mt))\\ & = &\vartheta\left( \sum_{m=1}^{q+1} a_{jm} \Gamma(K+mt,\eta+mt)\right) = \vartheta\left( \Gamma_{p-j}(K,\eta) \frac{t^j}{j!}\right)\\ &=& \vartheta \Gamma_{p-j}(K,\eta)\frac{(\vartheta t)^j}{j!}. \end{eqnarray*} Since the symmetric tensor algebra has no zero divisors, it follows that $$ \Gamma_{p-j}(\vartheta K,\vartheta\eta) = \vartheta\Gamma_{p-j}(K,\eta).$$ Together with Lemma \ref{Lemma3.1}, this shows that $\Gamma_{p-j}$ is isometry covariant. In a similar way, one shows that $\Gamma_{p-j}(K,\cdot)$ is a ${\mathbb T}^{p-j}$-valued measure, for each $K\in{\mathcal K}^n$. Further, if $\Gamma$ is weakly continuous or is a valuation in its first argument, then each $\Gamma_{p-j}$ has the corresponding property. The following lemma extends an argument from \cite[p.~124]{Sch78}. A similar argument (in a simpler situation) appears in \cite{Gla97} (proof of Lemma 1.3). \begin{lemma}\label{Lemma3.3} For each $K\in {\mathcal K}^n$, the measure $\Gamma(K,\cdot)$ is concentrated on ${\rm Nor}\,K$. \end{lemma} \begin{proof} Recall that ${\mathcal B}_b(\Sigma^n)$ is the ring of bounded Borel sets in $\Sigma^n$. Let $\eta\in {\mathcal B}_b(\Sigma^n)$. Then we can choose points $x_1,x_2\in{\mathbb R}^n$ with $\eta\cap{\rm Nor}\,\{x_i\}=\emptyset$ for $i=1,2$. Since $\Gamma$ is locally defined, we have $\Gamma(\{x_1\},\eta)= \Gamma(\{x_2\},\eta)$. Therefore, we can define $$ F(\eta):= \Gamma(\{x\},\eta) \quad \mbox{for }\eta\in{\mathcal B}_b(\Sigma^n) \mbox{ and arbitrary $x$ with } \eta\cap{\rm Nor}\,\{x\}=\emptyset.$$ Let $(\eta_i)_{i\in{\mathbb N}}$ be a disjoint sequence in ${\mathcal B}_b(\Sigma^n)$ such that $\bigcup_{i\in{\mathbb N}} \eta_i\in{\mathcal B}_b(\Sigma^n)$. Then we can choose $x$ with $\eta_i\cap{\rm Nor}\,\{x\}=\emptyset$ for all $i\in{\mathbb N}$ and deduce that $$ \sum_{i\in{\mathbb N}} F(\eta_i)= \sum_{i\in{\mathbb N}} \Gamma\left(\{x\},\eta_i\right)=\Gamma\left(\{x\},\bigcup_{i\in{\mathbb N}} \eta_i\right) = F\left(\bigcup_{i\in{\mathbb N}} \eta_i\right).$$ Thus, $F$ is a ${\mathbb T}^p$-valued measure on the ring ${\mathcal B}_b(\Sigma^n)$. Let $\eta\in{\mathcal B}_b(\Sigma^n)$ and $t\in{\mathbb R}^n$. Choosing $x$ with $\eta\cap{\rm Nor}\,\{x\}=\emptyset$, we have $(\eta+t)\cap{\rm Nor}\,\{x+t\}=\emptyset$ and hence $$ F(\eta+t)= \Gamma(\{x\}+t,\eta+t) = \sum_{j=0}^p \Gamma_{p-j}(\{x\},\eta)\frac{t^j}{j!}.$$ This is independent of $x$ (as long as $\eta\cap{\rm Nor}\,\{x\}=\emptyset$), hence we can define $$ F_0(\eta):= \Gamma_0(\{x\},\eta).$$ From Lemma \ref{Lemma3.1} it follows that $F_0(\eta+t)=F_0(\eta)$. Since $\Gamma_{p-j}(\{x\},\cdot)$ is a ${\mathbb T}^{p-j}$-valued measure, $F_0$ is a real-valued signed measure on ${\mathcal B}_b(\Sigma^n)$. 
Let $\omega\in{\mathcal B}({\mathbb S}^{n-1})$ be fixed and define $$ \mu(\beta):= F_0(\beta\times \omega)\qquad\mbox{for } \beta\in{\mathcal B}_b({\mathbb R}^n).$$ Then $\mu$ is a translation invariant finite signed measure on ${\mathcal B}_b({\mathbb R}^n)$ and hence a multiple of Lebesgue measure. Thus, there is a constant $c$ such that $\Gamma_0(\{x\}, \beta\times\omega)=c\,{\mathcal H}^n(\beta)$ for all bounded Borel sets $\beta$ with $x\notin\beta$, but since both sides are measures in $\beta$, the equality holds for arbitrary Borel sets $\beta$ with $x\notin\beta$. If $c\not=0$, then $\Gamma_0(\{x\}, \beta\times\omega)=\infty$ if, for instance, $\beta$ is a halfspace and $x$ a point not contained in it. This is a contradiction. From $F_0(\beta\times\omega)=0$ for all $\beta\in{\mathcal B}({\mathbb R}^n)$ and $\omega\in{\mathcal B}({\mathbb S}^{n-1})$ it follows that $F_0(\eta)=0$ for all $\eta\in{\mathcal B}(\Sigma^n)$. Since $F_0=0$, we now have $$ F(\eta+t)= \sum_{j=0}^{p-1} \Gamma_{p-j}(\{x\},\eta)\frac{t^j}{j!}$$ for $\eta\in{\mathcal B}_b(\Sigma^n)$, $x\in{\mathbb R}^n$ with $\eta\cap{\rm Nor}\{x\}=\emptyset$ and $t\in{\mathbb R}^n$. Repeating the argument above and arguing as in the proof of Lemma \ref{Lemma3.1}, we obtain that $F_1(\eta):= \Gamma_1(\{x\},\eta)$ defines a translation invariant ${\mathbb T}^1$-valued measure $F_1$ on ${\mathcal B}_b(\Sigma^n)$. As above, we deduce that each coordinate of $F_1$ must be zero, and from $F_1=0$ we conclude that $$ F(\eta+t)= \sum_{j=0}^{p-2} \Gamma_{p-j}(\{x\},\eta)\frac{t^j}{j!}.$$ The argument can now be repeated and after finitely many steps we arrive at $F=0$. Now let $K\in{\mathcal K}^n$ and let $\eta\in{\mathcal B}_b(\Sigma^n)$ be a set with $\eta\cap {\rm Nor}\,K=\emptyset$. We can choose a point $x$ with $\eta\cap{\rm Nor}\,\{x\}=\emptyset$. Since $\Gamma$ is locally defined, it follows that $$ \Gamma(K,\eta)=\Gamma(\{x\},\eta)=0.$$ From this, we can deduce that $\Gamma(K,\eta)=0$ for arbitrary sets $\eta\in{\mathcal B}(\Sigma^n)$ with $\eta\cap{\rm Nor}\,K=\emptyset$. This means that $\Gamma(K,\cdot)$ is concentrated on ${\rm Nor}\,K$. \end{proof} Now we turn to polytopes and remark first that all of the previous statements of this section remain true if ${\mathcal K}^n$ is replaced by ${\mathcal P}^n$. We study the generalized local Minkowski tensors $\phi_k^{r,s,j}$ of polytopes defined by (\ref{2.6}). For $k=n-1$, we show that they yield nothing new, as they can be expressed as linear combinations of the mappings $Q^m\phi_{n-1}^{r,l}$. \begin{lemma}\label{Lemma3.4} \begin{equation}\label{5} \phi_{n-1}^{r,s,j} = \sum_{i=0}^j (-1)^i\binom{j}{i}\frac{(s+2i)!\omega_{1+s+2i}}{s!\omega_{1+s}} Q^{j-i}\phi_{n-1}^{r,s+2i}. \end{equation} \end{lemma} \begin{proof} Let $P\in{\mathcal P}^n$. For $F\in{\mathcal F}_{n-1}(P)$, let $\pm u_F$ be the two unit vectors orthogonal to $L(F)$. Then $Q_{L(F)}+u_F^2=Q$, hence $$ \phi_{n-1}^{r,s,j}(P,\eta) = C_{n,n-1}^{r,s} \sum_{F\in{\mathcal F}_{n-1}(P)} \int_F \int_{\nu(P,F)}\left(Q-u_F^2\right)^j {\bf 1}_\eta(x,u) x^r u^s\, {\mathcal H}^0({\rm d} u)\,{\mathcal H}^{n-1}({\rm d} x).$$ This immediately gives the assertion. \end{proof} Whereas the global tensor functions $Q^m\Phi_k^{r,s}$ satisfy the non-trivial linear `McMullen relations', their local versions on polytopes, $Q^m\phi_k^{r,s}$ with $0\le k\le n-1$ and $Q^m\phi_k^{r,s,j}$ with $1\le k\le n-2$ and $j\ge 1$, are linearly independent. This is the subject of the following theorem, which is part of Theorem \ref{Theorem2.2}. 
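For illustration, we point out why the cases $j\ge 1$ with $k=n-1$ have to be excluded: Lemma \ref{Lemma3.4} provides explicit linear relations for them. For instance, for $j=1$ and $s=0$, using $\omega_1=2$ and $\omega_3=4\pi$, formula (\ref{5}) reduces to $$ \phi_{n-1}^{r,0,1} = Q\,\phi_{n-1}^{r,0} - 4\pi\,\phi_{n-1}^{r,2},$$ so that adding $\phi_{n-1}^{r,0,1}$ to the family in the following theorem would destroy the linear independence.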
It is expected that this fact will later be useful for the derivation of integral-geometric relations. \begin{theorem}\label{Theorem3.1} Let $p\in{\mathbb N}_0$. On ${\mathcal P}^n$, the local tensor valuations $Q^m\phi_k^{r,s,j}$ with $$ m,r,s,j\in{\mathbb N}_0,\;2m+2j+r+s=p,\;k\in\{0,\dots,n-1\}, \mbox{ but }j=0 \mbox{ if } k\in\{0,n-1\},$$ are linearly independent. \end{theorem} \begin{proof} Suppose that \begin{equation}\label{7} \sum_{m,r,s,j,k \atop 2m+2j+r+s=p} a_{kmrsj} Q^m\phi_k^{r,s,j}=0 \end{equation} with $a_{kmrsj}\in{\mathbb R}$ and with $a_{0mrsj}=a_{(n-1)mrsj}=0$ for $j\not=0$. Let $F$ be a $k$-dimensional polytope, $k\in\{0,\dots,n-1\}$. Let $\text{relint } M$ denote the relative interior of a convex set $M$ in ${\mathbb R}^n$. For arbitrary Borel sets $\beta\subset \text{relint }F$ and $\omega\subset L(F)^\perp\cap{\mathbb S}^{n-1}$ (where $L^\perp$ denotes the orthogonal complement of $L$) we have $$ \phi_k^{r,s,j}(F,\beta\times\omega) = Q_{L(F)}^j C_{n,k}^{r,s} \int_\beta x^r\,{\mathcal H}^k({\rm d} x) \int_\omega u^s \, {\mathcal H}^{n-k-1}({\rm d} u) $$ and $\phi_{k'}^{r,s,j}(F,\beta\times\omega) =0$ for $k'\not= k$. Therefore, for each $k\in\{0,\dots,n-1\}$ we get $$ \sum_{m,r,s,j \atop 2m+2j+r+s=p} a_{kmrsj}Q^m Q_{L(F)}^j C_{n,k}^{r,s}\int_\beta x^r\,{\mathcal H}^k({\rm d} x) \int_\omega u^s\,{\mathcal H}^{n-k-1}({\rm d} u)=0$$ whenever $F$ is a $k$-dimensional polytope and $\beta\subset \text{relint } F$, $\omega\subset L(F)^\perp\cap{\mathbb S}^{n-1}$ are Borel sets. We can choose a translate of $F,\beta$ such that $\int_\beta x^r\,{\mathcal H}^k({\rm d} x)\not=0$. Replacing $F,\beta$ by multiples and comparing degrees of homogeneity, we conclude that for each fixed $r\in{\mathbb N}_0$ we have $$ \sum_{m,s,j \atop 2m+2j+s=p-r} a_{kmrsj}Q^m Q_{L(F)}^j C_{n,k}^{r,s}\int_\beta x^r\,{\mathcal H}^k({\rm d} x) \int_\omega u^s\,{\mathcal H}^{n-k-1}({\rm d} u)=0.$$ Since the symmetric tensor algebra has no zero divisors, it follows that $$ \sum_{m,s,j \atop 2m+2j+s=p-r} a_{kmrsj}Q^m Q_{L(F)}^j C_{n,k}^{r,s} \int_\omega u^s\,{\mathcal H}^{n-k-1}({\rm d} u)=0.$$ This holds for arbitrary Borel sets $\omega \subset {\mathbb S}^{n-1}\cap L(F)^\perp$, hence we obtain $$ \sum_{m,s,j \atop 2m+2j+s=p-r} a_{kmrsj}Q^m Q_{L(F)}^j C_{n,k}^{r,s} u^s=0$$ for all $u\in {\mathbb S}^{n-1}\cap L(F)^\perp$. Here $L(F)$ can be an arbitrary $k$-dimensional linear subspace of ${\mathbb R}^n$. Since $n$, $k$ and $r$ are fixed, we set $a_{kmrsj} C_{n,k}^{r,s}=:b_{msj}$ and write the latter relation in the form \begin{equation}\label{4} \sum_{m,s,j \atop 2m+2j+s=p-r} b_{msj}Q^m Q_{L(F)}^j u^s=0. \end{equation} Let $(e_1,\dots,e_n)$ be an orthonormal basis of ${\mathbb R}^n$ such that $L(F)$ is the subspace spanned by $e_1,\dots,e_k$. Applying both sides of (\ref{4}) to the $(p-r)$-tuple $$ (\underbrace{x,\dots,x}_{2m+2j+s}),\qquad x=x_1e_1+\cdots+x_ne_n\in{\mathbb R}^n,$$ we obtain $$ \sum_{m,s,j \atop 2m+2j+s=p-r} b_{msj} (x_1^2+\dots+x_n^2)^m(x_1^2+\dots+x_k^2)^j(u_{k+1} x_{k+1}+\dots+u_nx_n)^s=0.$$ This holds for all $x_1,\dots,x_n\in{\mathbb R}$ and all $u_{k+1},\ldots,u_n\in{\mathbb R}$ such that $u_{k+1}e_{k+1}+\dots +u_ne_n\in{\mathbb S}^{n-1}$. If $k=0$, then $b_{msj}=0$ for $j\neq 0$ and for $j=0$ the factor $(x_1^2+\dots+x_k^2)^j$ is omitted (respectively, taken as 1). First we assume now that $0\le k\le n-2$. 
For fixed $x_{k+1},\dots,x_n\not=0$, the values $$ t = u_{k+1} x_{k+1}+\dots+u_nx_n$$ with $u_{k+1}e_{k+1}+\dots+u_ne_n\in{\mathbb S}^{n-1}$ fill a nondegenerate interval, hence it follows that $$ \sum_{m,j \atop 2m+2j=p-r-s} b_{msj}(x_1^2+\dots+ x_n^2)^m(x_1^2+\dots+ x_k^2)^j=0$$ for all $s\in{\mathbb N}_0$. Only such $s$ occur here for which $p-r-s$ is even, say equal to $2q$. For fixed $s$, the latter relation can be written in the form $$ \sum_{m=0}^q b_m (x_1^2+\dots+ x_n^2)^m(x_1^2+\dots+ x_k^2)^{q-m}=0$$ with $b_m:= b_{ms(q-m)}$. It holds for all $x_1,\dots,x_n\not=0$ and hence for arbitrary $x_1,\dots, x_n$. Since the highest appearing power of $x_n$ must appear with coefficient zero, we have $b_q=0$, then $b_{q-1}=0$, and so on until $b_0=0$. Thus, all coefficients $a_{kmrsj}$ in (\ref{7}) with $k\le n-2$ are zero. It remains to consider $k=n-1$. In this case, we have $$ b_{msj}=a_{(n-1)mrsj}C_{n,n-1}^{r,s}=0 \quad\mbox{for }j\not=0,$$ hence (\ref{4}) reduces to $$ \sum_{m,s \atop 2m+s=p-r} b_{ms0}Q^m u^s=0.$$ Similarly as above, this implies that $$ \sum_{m=0}^{\lfloor \frac{q}{2}\rfloor} b_{m(q-2m)0}(x_1^2+\dots+ x_n^2)^m x_n^{q-2m}=0$$ for all $x_1,\ldots,x_n\in{\mathbb R}$ with $|x_n|= 1$, where $q=p-r$. Comparing powers of $x_1$, we conclude that all coefficients $b_{m(q-2m)0}$ must be zero. Thus, also all coefficients $a_{(n-1)mrsj}$ in (\ref{7}) are zero. \end{proof} The local characterization for polytopes, Theorem \ref{Theorem2.2} without the assertion about linear independence, was essentially proved in \cite{Sch12}, as mentioned in the previous section. One of the further differences is that in \cite{Sch12} it was additionally assumed that $\Gamma(P,\cdot)$ is concentrated on ${\rm Nor}\,P$. By Lemma \ref{Lemma3.3}, this follows from the other assumptions. Another difference is the fact that the mappings corresponding to $\phi_k^{r,s,j}$ in \cite[Thm.~1]{Sch12} involve (with slightly different notations) tensors of the form $Q_L^lQ_{L^\perp}^{m-l}$. But the latter is a linear combination of tensors of the form $Q^iQ_L^j$, since $Q_L+Q_{L^\perp}=Q$. Further, we point out that in Theorem \ref{Theorem2.3} the terms $\phi_{n-1}^{r,s,j}$ with $j\ge 1$ are not really needed, due to Lemma \ref{Lemma3.4}. Next, we wish to point out that the proof of \cite[Thm. 1]{Sch12}, and thus the proof of Theorem \ref{Theorem2.2}, can be simplified slightly. We consider first the translation invariant case, then we show how the general case can be deduced from this special one. \begin{theorem}\label{Theorem3.2} Let $p\in{\mathbb N}_0$. Let $\Gamma:{\mathcal P}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^p$ be a mapping with the following properties.\\ $\rm (a)$ $\Gamma(P,\cdot)$ is a ${\mathbb T}^p$-valued measure, for each $P\in{\mathcal P}^n$; \\ $\rm (b)$ $\Gamma$ is translation invariant and rotation covariant;\\ $\rm (c)$ $\Gamma$ is locally defined.\\ Then $\Gamma$ is a linear combination, with constant coefficients, of the mappings $Q^m\phi^{0,s,j}_k$, where $m,s,j\in{\mathbb N}_0$ satisfy $2m+2j+s=p$ and where $k\in\{0,\dots,n-1\}$, but $j=0$ if $k\in\{0,n-1\}$. \end{theorem} \begin{proof} We sketch the proof, to show where the proof given in \cite{Sch12} can be simplified. We have to prove (for each polytope $P$) the equality of two tensor-valued measures on ${\mathcal B}(\Sigma^n)$. It is sufficient to prove equality on sets of the form $\beta\times \omega$ with $\beta\in{\mathcal B}({\mathbb R}^n)$ and $\omega\in{\mathcal B}({\mathbb S}^{n-1})$.
Disjoint decomposition of $P$ into relatively open faces gives $$ (\beta\times \omega)\cap{\rm Nor}\,P = \bigcup_{k=0}^{n-1}\bigcup_{F\in{\mathcal F}_k(P)}(\beta\cap {\rm relint}\, F)\times (\omega\cap\nu(P,F)).$$ By Lemma \ref{Lemma3.3}, $\Gamma(P,\cdot)$ is concentrated on ${\rm Nor}\,P$, hence \begin{equation}\label{6} \Gamma(P,\beta\times \omega) = \sum_{k=0}^{n-1} \sum_{F\in{\cal F}_k(P)} \Gamma(P,(\beta\cap{\rm relint}\,F) \times (\omega\cap \nu(P,F))). \end{equation} Thus, it is sufficient to determine $\Gamma(P, \beta\times \omega)$ for the case where $\beta\subset {\rm relint}\,F$ and $\omega\subset \nu(P,F)$, for some face $F\in{\mathcal F}_k(P)$. Therefore, we consider the following data: a number $k\in\{0,\dots,n-1\}$, a $k$-dimensional linear subspace $L\subset{\mathbb R}^n$, a bounded Borel set $\beta \subset L$, a Borel set $\omega\subset{\mathbb S}^{n-1}\cap L^\perp$, a $k$-dimensional polytope $P\subset L$ with $\beta\subset{\rm relint}\,P$, and we determine $\Gamma(P,\beta\times \omega)$ in this case. First, for fixed $\omega$, we consider the functional $$ \beta\mapsto \Gamma(P,\beta\times \omega)$$ for bounded Borel sets $\beta\subset L$, where $P$ is chosen such that $\beta\subset{\rm relint}\,P$ (the particular choice of $P$ is irrelevant, since $\Gamma$ is locally defined). Let $f(\beta)$ be a coordinate of $\Gamma(P,\beta\times\omega)$. We choose a polytope $P\subset L$ with $\beta\subset {\rm relint}\,P$. For $t\in L$ we then have $\beta+t\subset {\rm relint}(P+t)$ and it follows that $f(\beta+t)=f(\beta)$. Thus, $f$ is a translation invariant finite signed measure on ${\mathcal B}_b(L)$ and hence a constant multiple of Lebesgue measure in $L$, where the factor depends on $L$ and $\omega$. We conclude that $$ \Gamma(P,\beta\times\omega)=a(L,\omega){\mathcal H}^k(\beta)$$ with a tensor $a(L,\omega)\in{\mathbb T}^p$. It is shown in \cite{Sch12} that $a(L,\cdot)$ is a ${\mathbb T}^p$-valued measure satisfying $$ a(\vartheta L,\vartheta \omega) =\vartheta a(L,\omega)\qquad\mbox{for }\vartheta\in{\rm O}(n)$$ and $\vartheta a(L,\omega)= a(L,\omega)$ if $\vartheta$ fixes $L^\perp$ pointwise. From this, it is deduced in \cite{Sch12} (in particular, Lemmas 3 and 4) that $$ a(L,\omega) = \sum_{j=0}^{\lfloor p/2\rfloor} Q_L^j \sum_{i=0}^{\lfloor p/2\rfloor} c_{pkij}Q_{L^\perp}^i\int_\omega u^{p-2j-2i}\,{\mathcal H}^{n-k-1}({\rm d} u) $$ with real constants $c_{pkij}$ (the dependence of the coefficients on $k$ was not shown explicitly in \cite{Sch12}) and $c_{p0ij}=0$ for $j\ge 1$. It follows that $$ \Gamma(P,\beta\times\omega)= \sum_{i,j=0}^{\lfloor p/2\rfloor}c_{pkij} Q_L^j (Q-Q_L)^i \int_\beta {\mathcal H}^k({\rm d} x)\int_\omega u^{p-2j-2i}\,{\mathcal H}^{n-k-1}({\rm d} u). $$ Now let $P\in{\mathcal P}^n$, $\beta\in {\mathcal B}({\mathbb R}^n)$ and $\omega\in {\mathcal B}({\mathbb S}^{n-1})$ be arbitrary. From (\ref{6}) we get \begin{eqnarray*} \Gamma(P,\beta\times \omega) &=& \sum_{k=0}^{n-1} \sum_{F\in{\cal F}_k(P)} \Gamma(P,(\beta\cap{\rm relint}\,F)\times(\omega\cap \nu(P,F)))\\ & =& \sum_{k=0}^{n-1} \sum_{F\in{\cal F}_k(P)} \sum_{i,j=0}^{\lfloor p/2\rfloor}c_{pkij} Q_{L(F)}^j (Q-Q_{L(F)})^i\\ & & \times\int_{\beta\cap F} {\mathcal H}^k({\rm d} x)\int_{\omega\cap \nu(P,F)} u^{p-2j-2i}\,{\mathcal H}^{n-k-1}({\rm d} u). 
\end{eqnarray*} This extends to \begin{eqnarray*} \Gamma(P,\eta) &= & \sum_{i,j=0}^{\lfloor p/2\rfloor} \sum_{k=0}^{n-1}c_{pkij} \sum_{F\in{\cal F}_k(P)} Q_{L(F)}^j (Q-Q_{L(F)})^i \\ & &\times \int_F \int_{\nu(P,F)} {\bf 1}_\eta(x,u) u^{p-2j-2i}\,\,{\mathcal H}^{n-k-1}({\rm d} u)\,{\mathcal H}^k({\rm d} x) \end{eqnarray*} for all $\eta\in{\mathcal B}(\Sigma^n)$. Thus, $\Gamma$ is a linear combination, with constant coefficients, of the mappings $Q^m\phi^{0,s,j}_k$, with $m,s,j,k\in{\mathbb N}_0$, $k\le n-1$ and $2m+2j+s=p$, but $j=0$ if $k\in\{0,n-1\}$. \end{proof} Now we indicate how Theorem \ref{Theorem2.2} can be derived from Theorem \ref{Theorem3.2}. Since we already know that the mappings $Q^m\phi_k^{r,s,j}$ have the properties (a), (b), (c) of Theorem \ref{Theorem2.2} and since linear independence follows from Theorem \ref{Theorem3.1}, it remains to show that a mapping $\Gamma$ with the properties in Theorem \ref{Theorem2.2} is a linear combination of mappings $Q^m\phi_k^{r,s,j}$. For this, we extend an argument used by Alesker \cite{Ale99b}. By (\ref{1}) we have $$ \Gamma(P+t,\eta+t)= \sum_{i=0}^p \Gamma_{p-i}(P,\eta)\frac{t^i}{i!}.$$ By Lemma \ref{Lemma3.1}, $\Gamma_0$ is translation invariant. Hence, $\Gamma_0$ has all the properties of $\Gamma$ in Theorem \ref{Theorem3.2} (where we have to put $p=0$). It follows that $$ \Gamma_0 =\sum_{m,s,j,k \atop 2m+2j+s=0} c_{msjk} Q^m\phi_k^{0,s,j}$$ with real constants $c_{msjk}$. Precisely as in the derivation of (\ref{2.7}) we get \begin{equation}\label{n3.1} \phi_k^{r,s,j}(P+t,\eta+t) =\sum_{i=0}^r \phi_k^{r-i,s,j}(P,\eta)\frac{t^i}{i!}. \end{equation} Hence, if we define $$ \Delta:= \sum_{m,s,j,k \atop 2m+2j+s=0} c_{msjk} Q^m\phi_k^{p,s,j},$$ then $$ \Delta(P+t,\eta+t)= \sum_{i=0}^p \Delta_{p-i}(P,\eta)\frac{t^i}{i!}$$ with tensors $\Delta_{p-j}(P,\eta)\in{\mathbb T}^{p-j}$, and here $\Delta_0=\Gamma_0$. Therefore, the mapping $\Gamma':=\Gamma-\Delta$ satisfies $$ \Gamma'(P+t,\eta+t) = \sum_{j=0}^{p-1} \Gamma'_{p-j}(P,\eta)\frac{t^j}{j!}.$$ By Lemma \ref{Lemma3.1}, $\Gamma'_1$ is translation invariant. Hence, $\Gamma'_1$ has all the properties of $\Gamma$ in Theorem \ref{Theorem3.2} (where now we have to put $p=1$). Therefore, $$ \Gamma'_1 =\sum_{m,s,j,k \atop 2m+2j+s=1} c'_{msjk} Q^m\phi_k^{0,s,j}$$ with real constants $c'_{msjk}$. Subtracting from $\Gamma'$ a suitable linear combination of tensor valuations $Q^m\phi_k^{p-1,s,j}$, we obtain a mapping $\Gamma''$ which is translation covariant of degree $p-2$. We can now repeat the argument, apply Theorem \ref{Theorem3.2} with $p=2$, and so on. After finitely many steps, $\Gamma$ is represented as a linear combination of mappings $Q^m\phi_k^{r,s,j}$, thus Theorem \ref{Theorem2.2} is proved. \qed The local Minkowski tensors appearing in Theorem \ref{Theorem2.2} are valuations, though this is not one of the assumptions of the theorem. The valuation property was asserted in \cite{Sch12}. We give a proof. \begin{theorem}\label{T3.3} For each $\eta\in{\mathcal B}(\Sigma^n)$, the mapping $\phi_k^{r,s,j}(\cdot,\eta)$ is a valuation on ${\mathcal P}^n$. \end{theorem} \begin{proof} To show that a function $\varphi$ on ${\mathcal P}^n$ with values in an abelian group is a valuation, it suffices to show that, after supplementing the definition by $\varphi(\emptyset)=0$, one has \begin{equation}\label{A} \varphi(P\cap H^-) + \varphi(P\cap H^+) = \varphi(P)+\varphi(P\cap H) \end{equation} for $P\in{\mathcal P}^n$ and every hyperplane $H$, where $H^-,H^+$ are the two closed halfspaces bounded by $H$.
This was first noticed by Sallee \cite{Sal68}. Hence, using the representation (\ref{2.6}) and the abbreviation $$ J(P,F):= C_{n,k}^{r,s}\, Q_{L(F)}^{\hspace*{1pt}j}\int_F \int_{\nu(P,F)} {\bf 1}_\eta(x,u) x^r u^s\, {\mathcal H}^{n-k-1}({\rm d} u)\,{\mathcal H}^k({\rm d} x),$$ we have to show that \begin{eqnarray}\label{B} & & \sum_{F\in{\mathcal F}_k(P\cap H^-)} J(P\cap H^-,F) \enspace+ \sum_{F\in{\mathcal F}_k(P\cap H^+)} J(P\cap H^+,F) \nonumber\\ & & = \sum_{F\in{\mathcal F}_k(P)} J(P,F) \enspace+ \sum_{F\in{\mathcal F}_k(P\cap H)} J(P\cap H,F). \end{eqnarray} Let a polytope $P\in {\mathcal P}^n$ and a hyperplane $H$ be given. We consider a face $$ F\in {\mathcal F}_k(P)\cup {\mathcal F}_k (P\cap H)$$ and distinguish the following five cases. Case 1: $F\not\subset H$ and $F\subset H^-$. Then $$ F\in {\mathcal F}_k(P) \cap {\mathcal F}_k(P\cap H^-), \qquad F\notin{\mathcal F}_k(P\cap H^+)\cup {\mathcal F}_k(P\cap H).$$ Since $\nu(P\cap H^-,F)=\nu(P,F)$, it follows that $J(P\cap H^-,F)=J(P,F)$. Case 2: $F\not\subset H$ and $F\subset H^+$. Similarly as in Case 1, it follows that $J(P\cap H^+,F)=J(P,F)$. Case 3: $F\not\subset H$, $F\not \subset H^-$, and $F\not \subset H^+$. Then $F^-:= F\cap H^-\in{\mathcal F}_k(P\cap H^-)$ and $F^+:= F\cap H^+\in{\mathcal F}_k(P\cap H^+)$. Moreover, $L(F^-)=L(F^+)=L(F)$, $\nu(P\cap H^-,F^-)=\nu(P\cap H^+,F^+)=\nu(P,F)$, $F^- \cup F^+= F$, and ${\mathcal H}^k(F^-\cap F^+)=0$. It follows that $J(P\cap H^-,F^-) + J(P\cap H^+,F^+) = J(P,F)$. Case 4: $F\subset H$ and $F\notin{\mathcal F}_k(P)$. Then there is a unique face $G\in{\mathcal F}_{k+1}(P)$ such that $F= G\cap H$. We have $$ F\in {\mathcal F}_k(P\cap H^-)\cap {\mathcal F}_k(P\cap H^+)\cap {\mathcal F}_k(P\cap H).$$ We choose a point $q$ in the relative interior of $F$. For $Q\in\{P\cap H^-,P\cap H^+,P\cap H\}$ and for $u\in {\mathbb S}^{n-1}$ we then have ${\bf 1}_{\nu(Q,F)}(u)= j(Q,q,q+u)$, where $j$ denotes the index function defined in \cite[p. 231]{Sch14}; further, ${\bf 1}_{\nu(P,G)}(u)= j(P,q,q+u)$. From the additivity of $j(\cdot,q,q+u)$ ({\em loc. cit.}) we obtain $$ {\bf 1}_{\nu(P\cap H^-,F)} + {\bf 1}_{\nu(P\cap H^+,F)} = {\bf 1}_{\nu(P,G)} + {\bf 1}_{\nu(P\cap H,F)}.$$ Since ${\mathcal H}^{n-k-1}(\nu(P,G))=0$, it follows that $J(P\cap H^-,F)+J(P\cap H^+,F) =J(P\cap H,F)$. Case 5: $F\subset H$ and $F\in{\mathcal F}_k(P)$. Then (irrespective of whether $H$ supports $P$ or not) we have $$ F\in {\mathcal F}_k(P\cap H^-)\cap {\mathcal F}_k(P\cap H^+)\cap {\mathcal F}_k(P\cap H)$$ and, using the index function similarly as in Case 4, $$ {\bf 1}_{\nu(P\cap H^-,F)} + {\bf 1}_{\nu(P\cap H^+,F)} = {\bf 1}_{\nu(P,F)} + {\bf 1}_{\nu(P\cap H,F)}.$$ This gives $J(P\cap H^-,F)+J(P\cap H^+,F) = J(P,F)+J(P\cap H,F)$. In the five cases, each of the pairs $(P,F)$ with $F\in{\mathcal F}_k(P)$ and $(P\cap H,F)$ with $F\in{\mathcal F}_k(P\cap H)$ was considered precisely once. But also each of the pairs $(P\cap H^-,F)$ with $F\in{\mathcal F}_k(P\cap H^-)$ and $(P\cap H^+,F)$ with $F\in{\mathcal F}_k(P\cap H^+)$ was obtained precisely once (in Case 3 with the notation $F^-$ or $F^+$). Hence, if we add all the established equations for $J$, we obtain (\ref{B}). \end{proof} We return to general convex bodies and prepare the proof of Theorem \ref{Theorem2.3} in Section \ref{sec5} by the final lemma of this section. 
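Before stating it, we note, for illustration, the scaling behavior of the mappings in question, which follows from the representations (\ref{2.6a}) and (\ref{2.6}) by the substitution $x=\lambda y$: for $P\in{\mathcal P}^n$, $\eta\in{\mathcal B}(\Sigma^n)$ and $\lambda>0$, $$ \phi_k^{r,s,j}(\lambda P,\lambda\eta)=\lambda^{k+r}\,\phi_k^{r,s,j}(P,\eta)$$ (with $\lambda\eta$ as defined below), since the faces of $\lambda P$ are the sets $\lambda F$ with $F\in{\mathcal F}_k(P)$, where $L(\lambda F)=L(F)$ and $\nu(\lambda P,\lambda F)=\nu(P,F)$, and since ${\mathcal H}^k$ scales with the factor $\lambda^k$. In particular, the mappings $\phi_k^{0,s,j}$ are homogeneous of degree $k$ in the sense of the following definition.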
We say that $\Gamma:{\mathcal K}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^p$ is {\em homogeneous of degree} $k$ if $\Gamma(\lambda K,\lambda \eta) = \lambda^k\Gamma(K,\eta)$ for all $K\in{\mathcal K}^n$, $\eta\in{\mathcal B}(\Sigma^n)$ and $\lambda>0$, where $\lambda\eta:=\{(\lambda x,u):(x,u)\in\eta\}$. \vspace{2mm} \begin{lemma}\label{Lemma3.5} Let $p\in{\mathbb N}_0$. Let $\Gamma:{\mathcal K}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^p$ be a mapping with the following properties.\\ $\rm (a)$ $\Gamma(K,\cdot)$ is a ${\mathbb T}^p$-valued measure, for each $K\in{\mathcal K}^n$;\\ $\rm (b)$ $\Gamma$ is translation invariant and rotation covariant;\\ $\rm (c)$ $\Gamma$ is locally defined;\\ $\rm (d)$ $\Gamma$ is weakly continuous.\\ Then $\Gamma= \sum_{k=0}^{n-1} \Gamma_k$, where each $\Gamma_k$ has properties $\rm (a) - (d)$ and is homogeneous of degree $k$. \end{lemma} \begin{proof} To the restriction $\Gamma'$ of $\Gamma$ to ${\mathcal P}^n\times{\mathcal B}(\Sigma^n)$ we can apply Theorem \ref{Theorem3.2} and deduce that $\Gamma'$ is a linear combination of the mappings $Q^m\phi_k^{0,s,j}$, $0\le k\le n-1$ and $m,s,j\in{\mathbb N}_0$. Each $Q^m\phi_k^{0,s,j}$ has properties $\rm (a) - (c)$ and is homogeneous of degree $k$. Hence, we can write $\Gamma'= \sum_{k=0}^{n-1} \Gamma_k$, where each $\Gamma_k:{\mathcal P}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^p$ has properties $\rm (a) - (c)$ and is homogeneous of degree $k$. We argue similarly as in the proof of Lemma \ref{Lemma3.2}. For $P\in{\mathcal P}^n$, $\eta\in{\mathcal B}(\Sigma^n)$ and $\lambda>0$ we have \begin{equation}\label{10} \Gamma(\lambda P,\lambda \eta)=\sum_{k=0}^{n-1} \lambda^k\Gamma_k(P,\eta). \end{equation} For fixed $P$ and $\eta$, let $f(\lambda)$ be a coordinate of $\Gamma(\lambda P,\lambda\eta)$, and let $f_k$ be the corresponding coordinate of $\Gamma_k(P,\eta)$. Then $f(\lambda)=\sum_{k=0}^{n-1} \lambda^k f_k$. We insert $\lambda =1,\dots,n$ and solve the resulting system of linear equations for the $f_k$, to obtain $f_k=\sum_{q=1}^n b_{kq}f(q)$, $k=0,\dots,n-1$, with constants $b_{kq}$ depending only on $n,k,q$. Thus, \begin{equation}\label{8} \Gamma_k(P,\eta)=\sum_{q=1}^n b_{kq} \Gamma(qP,q\eta),\quad k=0,\dots,n-1. \end{equation} This holds for arbitrary $P\in{\mathcal P}^n$ and $\eta\in{\mathcal B}(\Sigma^n)$. Now we extend the definition of $\Gamma_k$ by \begin{equation}\label{9} \Gamma_k(K,\eta):=\sum_{q=1}^n b_{kq} \Gamma(qK,q\eta),\quad k=0,\dots,n-1, \end{equation} for $K\in{\mathcal K}^n$ and $\eta\in{\mathcal B}(\Sigma^n)$. Then $\Gamma_k$ has properties $\rm (a) - (d)$. For given $K\in{\mathcal K}^n$, let $(P_i)_{i\in{\mathbb N}}$ be a sequence of polytopes converging to $K$. Since $\Gamma(q P_i,q\,\cdot) \xrightarrow{w} \Gamma(qK,q\,\cdot)$ (where $\xrightarrow{w}$ denotes weak convergence), it follows from (\ref{8}) and (\ref{9}) that $\Gamma_k(P_i,\cdot) \xrightarrow{w} \Gamma_k(K,\cdot)$. This implies that the extended $\Gamma_k$ is homogeneous of degree $k$. From (\ref{10}) and weak continuity it follows that $$ \Gamma(\lambda K,\lambda \eta)=\sum_{k=0}^{n-1} \lambda^k\Gamma_k(K,\eta).$$ Thus, $\Gamma=\sum_{k=0}^{n-1} \Gamma_k$, where each $\Gamma_k$ has the properties $\rm (a) - (d)$ and is homogeneous of degree $k$. \end{proof} \section{Weakly continuous extensions}\label{sec4} So far, we have defined the generalized local Minkowski tensors $\phi_k^{r,s,j}$ with $j\ge 1$ only for polytopes. 
Lemma \ref{Lemma3.4} allows us to define $\phi_{n-1}^{r,s,j}$, for general convex bodies $K\in{\mathcal K}^n$, by $$\phi_{n-1}^{r,s,j}(K,\cdot) = \sum_{i=0}^j (-1)^i\binom{j}{i} \frac{(s+2i)!\omega_{1+s+2i}}{s!\omega_{1+s}} Q^{j-i}\phi_{n-1}^{r,s+2i}(K,\cdot).$$ Then $\phi_{n-1}^{r,s,j}$ is weakly continuous, since this holds for $\phi_{n-1}^{r,l}$. For $1\le k\le n-2$ and $j\ge 2$, we shall see in the next section that no weakly continuous extension of $\phi_k^{r,s,j}$ to ${\mathcal K}^n$ is possible. The remaining case, $j=1$, is the subject of this section. We assume that $1\le k\le n-2$ and construct a weakly continuous extension of $\phi_k^{r,s,1}$ from ${\mathcal P}^n$ to ${\mathcal K}^n$. For this, we need some basic terminology and results of geometric measure theory, for which we refer to Federer's book \cite{Fed69}. We start by describing support measures in terms of currents. This point of view was first suggested and explored (for sets of positive reach) by Z\"ahle \cite{Zae86}. We shall need the normal cycle associated with a convex body. To introduce it, we remark that $\Nor K\subset{\mathbb R}^n\times{\mathbb R}^n={\mathbb R}^{2n}$, the normal bundle of $K$, is an $(n-1)$-rectifiable set. For $\mathcal{H}^{n-1}$-almost all $(x,u)\in\Nor K$, the set of $(\mathcal{H}^{n-1}\,\rule{.1mm}{.20cm}\rule{.20cm}{.1mm}\, \Nor K,n-1)$ approximate tangent vectors at $(x,u)$ is an $(n-1)$-dimensional linear subspace of $\ensuremath{\mathbb{R}}^{2n}$, which is denoted by $\textrm{Tan}^{n-1}(\mathcal{H}^{n-1}\,\rule{.1mm}{.20cm}\rule{.20cm}{.1mm}\, \Nor K,(x,u))$. This {\em approximate tangent space} is spanned by the orthonormal basis $(a_1(x,u),\dots,a_{n-1}(x,u))$, where $$ a_i(x,u):=\left( \frac{1}{\sqrt{1+{k_i(x,u)}^2}}\,b_i(x,u),\frac{k_i(x,u)}{\sqrt{1+{k_i(x,u)}^2}}\,b_i(x,u)\right). $$ Here, $(b_1(x,u),\ldots,b_{n-1}(x,u))$ is a suitable orthonormal basis of $u^\perp$ (the orthogonal complement of the linear subspace spanned by $u$), which is chosen so that the basis $(b_1(x,u),\ldots,b_{n-1}(x,u),u)$ has the same orientation as the standard basis $(e_1,\ldots,e_n)$ of ${\mathbb R}^n$, and $k_i(x,u)\in [0,\infty]$ for $i=1,\ldots,n-1$, with the understanding that $$ \frac{1}{\sqrt{1+{k_i(x,u)}^2}}=0\quad\mbox{and}\quad\frac{k_i(x,u)}{\sqrt{1+{k_i(x,u)}^2}}=1 \qquad\mbox{if }k_i(x,u)=\infty.$$ Note that the dependence of $a_i,b_i,k_i$ on $K$ is not made explicit by our notation. The data $b_i, k_i$, $i=1,\ldots,n-1$, are essentially uniquely determined (cf.~\cite[Prop.~3 and Lem.~2]{RZ05}). Moreover, we can assume that $b_i(x+\varepsilon u,u)=b_i(x,u)$, independent of $\varepsilon>0$, where $(x,u)\in\Nor K$ and $(x+\varepsilon u,u)\in\Nor K_\varepsilon$ with $K_\varepsilon:=K+\varepsilon B^n$. See \cite{Zae86, RZ01, RZ05, Hug95, Hug98} for a geometric interpretation of the numbers $k_i(x,u)$ as generalized curvatures and for arguments establishing these facts. Hence, for $\mathcal{H}^{n-1}$-almost all $(x,u)\in\Nor K$, we can define an $(n-1)$-vector $$ a_K(x,u):=a_1(x,u)\wedge\ldots\wedge a_{n-1}(x,u), $$ which defines an orientation of $\textrm{Tan}^{n-1}(\mathcal{H}^{n-1}\,\rule{.1mm}{.20cm}\rule{.20cm}{.1mm}\, \Nor(K),(x,u))$. Then an $(n-1)$-dimensional current in ${\mathbb R}^{2n}$ is defined by $$ T_K:=\left(\mathcal{H}^{n-1}\,\rule{.1mm}{.20cm}\rule{.20cm}{.1mm}\,\Nor K\right)\wedge a_K, $$ which is known as the {\em normal cycle} of $K$. 
More explicitly, $$ T_K(\varphi)=\int_{\Nor K}\langle a_K(x,u),\varphi(x,u)\rangle\, \mathcal{H}^{n-1}(d(x,u)), $$ for all $\mathcal{H}^{n-1}\,\rule{.1mm}{.20cm}\rule{.20cm}{.1mm}\, \Nor K$-integrable functions $\varphi:{\mathbb R}^{2n}\to \bigwedge^{n-1}{\mathbb R}^{2n}$, where we write $\langle\cdot\,,\cdot\rangle$ for the pairing of $m$-vectors and $m$-covectors, as in \cite[p.~17]{Fed69} (but we continue to use $\langle\cdot\,,\cdot\rangle$ also for the scalar product in ${\mathbb R}^n$, which cannot lead to ambiguities). Here we use that $T_K$ is a rectifiable current, which has compact support, and thus $T_K$ can be defined for a larger class of functions than just for the class of smooth differential forms. In order to define the Lipschitz--Killing forms $\varphi_k$, $k\in\{0,\ldots,n-1\}$, we need the projection maps $\Pi_1:{\mathbb R}^n\times{\mathbb R}^n\to{\mathbb R}^n$, $(x,u)\mapsto x$, and $\Pi_2:{\mathbb R}^n\times{\mathbb R}^n\to{\mathbb R}^n$, $(x,u)\mapsto u$. Let $\Omega_n$ be the volume form on ${\mathbb R}^n$ with the orientation chosen so that $$ \Omega_n(e_1,\ldots,e_n)=\langle e_1\wedge\ldots\wedge e_n,\Omega_n\rangle=1, $$ where $(e_1,\dots,e_n)$ is the standard orthonormal basis of ${\mathbb R}^n$. Then differential forms $\varphi_k:{\mathbb R}^{2n} \to \bigwedge^{n-1}{\mathbb R}^{2n}$, $k\in\{0,\ldots,n-1\}$, of degree $n-1$ on ${\mathbb R}^{2n}$ are defined by \begin{eqnarray*} & & \varphi_k(x,u)(\xi_1,\ldots,\xi_{n-1})\\ & & := \frac{1}{k!(n-1-k)!\omega_{n-k}}\sum_{\sigma\in S(n-1)}\sgn(\sigma)\left\langle \bigwedge_{i=1}^k\Pi_1\xi_{\sigma(i)}\wedge \bigwedge_{i=k+1}^{n-1}\Pi_2\xi_{\sigma(i)}\wedge u,\Omega_n\right\rangle, \end{eqnarray*} where $(x,u)\in {\mathbb R}^n\times{\mathbb R}^n={\mathbb R}^{2n}$, $\xi_1,\ldots,\xi_{n-1}\in {\mathbb R}^{2n}$, and $S(n-1)$ denotes the set of all permutations of $\{1,\ldots,n-1\}$. Note that this definition is equivalent to the one given in \cite{Zae86}. A straightforward calculation yields that $$ \langle a_K(x,u),\varphi_k(x,u)\rangle =\frac{1}{\omega_{n-k}}\sum_{|I|=n-1-k}\frac{\prod_{i\in I}k_i(x,u)}{\prod_{i=1}^{n-1}\sqrt{1+k_i(x,u)^2}} $$ for $\mathcal{H}^{n-1}$-almost all $(x,u)\in\Nor K$. The summation extends over all subsets $I$ of $\{1,\ldots,n-1\}$ of cardinality $n-1-k$. We adopt the convention that a product over an empty set is defined as 1. Then, for $\eta\in\mathcal{B}(\Sigma^n)$ we obtain $$ T_K\left(\mathbf{1}_\eta {\varphi}_k \right)=\Lambda_k(K,\eta), $$ which provides a representation of the $k$th support measure of $K$ in terms of the normal cycle of $K$, evaluated at the $k$th Lipschitz--Killing form $\varphi_k$. Taking this procedure as a model, we now introduce new tensor-valued differential forms of degree $n-1$. By evaluating these forms at the normal cycle, we shall obtain the requested continuous extension of $\phi_k^{r,s,1}$ to general convex bodies. Let $n\ge 3$, $k\in \{1,\ldots,n-2\}$ and $r,s\in {\mathbb N}_0$. Let $(x,u)\in {\mathbb R}^n\times{\mathbb R}^n={\mathbb R}^{2n}$ and $\mathbf{v}=(v_1,\ldots,v_{r+s+2})\in ({\mathbb R}^n)^{r+s+2}$. 
For $\xi_1,\ldots,\xi_{n-1}\in{\mathbb R}^{2n}$, we define \begin{eqnarray*} & & \widetilde{\varphi}_k^{r,s}(x,u;\mathbf{v};\xi_1,\ldots,\xi_{n-1}):=\frac{C^{r,s}_{n,k}}{(k-1)!(n-1-k)!}\, x^r(v_1,\ldots,v_r)u^s(v_{r+1},\ldots,v_{r+s})\\ & & \times\sum_{\sigma\in S(n-1)}\sgn(\sigma)\left\langle v_{r+s+1},\Pi_1\xi_{\sigma(1)}\right\rangle \left\langle v_{r+s+2}\wedge\bigwedge_{i=2}^k\Pi_1\xi_{\sigma(i)}\wedge\bigwedge_{i=k+1}^{n-1}\Pi_2\xi_{\sigma(i)}\wedge u,\Omega_n\right\rangle . \end{eqnarray*} We omit the wedge product $\bigwedge_{i=2}^k$ if $k=1$. For fixed $x,u,\mathbf{v}$, the map $$ \widetilde{\varphi}_k^{r,s}(x,u;\mathbf{v};\cdot):({\mathbb R}^{2n})^{n-1}\to{\mathbb R} $$ is multilinear and alternating, and therefore it is an element of $\bigwedge^{n-1}{\mathbb R}^{2n}$. Next, we symmetrize by defining \begin{eqnarray*} & & {\varphi}_k^{r,s}(x,u;\mathbf{v};\xi_1,\ldots,\xi_{n-1})\\ & & :=\frac{1}{(r+s+2)!}\sum_{\tau\in S(r+s+2)} \widetilde{\varphi}_k^{r,s}(x,u;v_{\tau(1)},\ldots,v_{\tau(r+s+2)};\xi_1,\ldots,\xi_{n-1}). \end{eqnarray*} Then the mapping $$(\xi_1,\dots,\xi_{n-1})\mapsto \varphi_k^{r,s}(x,u;\,\cdot\,;\xi_1,\dots,\xi_{n-1}),$$ which we denote briefly by ${\varphi}_k^{r,s}(x,u)$, is an $(n-1)$-covector of $\mathbb{T}^{r+s+2}$, hence an element of $\bigwedge^{n-1}({\mathbb R}^{2n},\mathbb{T}^{r+s+2})$. Therefore, the map $$ {\varphi}_k^{r,s}:{\mathbb R}^{2n}\to \bigwedge\nolimits^{n-1}({\mathbb R}^{2n},\mathbb{T}^{r+s+2}),\qquad (x,u)\mapsto {\varphi}_k^{r,s}(x,u), $$ is a differential form of degree $n-1$ on ${\mathbb R}^{2n}$ with coefficients in $\mathbb{T}^{r+s+2}$ (see \cite[p.~351]{Fed69}), that is, an element of $\mathcal{E}^{n-1}({\mathbb R}^{2n},\mathbb{T}^{r+s+2}):=\mathcal{E} ({\mathbb R}^{2n},\bigwedge^{n-1}({\mathbb R}^{2n},\mathbb{T}^{r+s+2}))$. In particular, $$ \langle a,{\varphi}_k^{r,s}(x,u)\rangle\in \mathbb{T}^{r+s+2} $$ for all $(x,u)\in {\mathbb R}^{2n}$ and $a\in \bigwedge_{n-1}{\mathbb R}^{2n}$, where we use the linear isomorphism mentioned in \cite[p. 17]{Fed69} to identify $ \bigwedge^{n-1}({\mathbb R}^{2n},W)$ and $ {\rm Hom}\left(\bigwedge_{n-1}{\mathbb R}^{2n},W\right)$, for an arbitrary vector space $W$. We remark that a straightforward calculation shows that \begin{equation}\label{eqcov} \langle \vartheta a,{\varphi}_k^{r,s}(\vartheta x,\vartheta u)\rangle=\vartheta \langle a,{\varphi}_k^{r,s}( x, u)\rangle, \end{equation} for all $\vartheta\in {\rm O}(n)$, where in each case the natural operation of the rotation group is used (in particular, $\vartheta \xi:=(\vartheta p,\vartheta q)$ for $\xi=(p,q)\in{\mathbb R}^n\times{\mathbb R}^n ={\mathbb R}^{2n}$). The next lemma shows that the differential forms $\varphi_k^{r,s}$ serve the intended purpose. \begin{lemma}\label{lemma4.2} If $P\in\mathcal{P}^n$ and $\eta\in{\mathcal B}(\Sigma^n)$, then $$ T_P\left(\mathbf{1}_\eta {\varphi}_k^{r,s} \right)=\phi_k^{r,s,1}(P,\eta). $$ \end{lemma} \begin{proof} For given $P\in\mathcal{P}^n$ and $\eta\in{\mathcal B}(\Sigma^n)$, we have to show that \begin{eqnarray*} T_P\left(\mathbf{1}_{\eta} \widetilde{\varphi}_k^{r,s}(\cdot\,;\mathbf{v};\cdot) \right)&=&C^{r,s}_{n,k}\sum_{F\in\mathcal{F}_k(P)} Q_{L(F)}(v_{r+s+1},v_{r+s+2})\\ & & \times\, \int_{\eta\cap(F\times\nu(P,F))} x^r(v_1,\ldots,v_r)u^s(v_{r+1},\ldots,v_{r+s}) \,\mathcal{H}^{n-1}({\rm d}(x,u)), \end{eqnarray*} for all $\mathbf{v}=(v_1,\ldots,v_{r+s+2})\in ({\mathbb R}^n)^{r+s+2}$. Subsequent symmetrization then yields the result, in view of (\ref{2.6}). 
To see this, we use the disjoint decomposition $$ \eta\cap \Nor P=\bigcup_{j=0}^{n-1}\bigcup_{F\in\mathcal{F}_j(P)} \eta\cap(\textrm{relint }F\times\nu(P,F)) $$ to get \begin{eqnarray*} & & T_P\left(\mathbf{1}_{\eta} \widetilde{\varphi}_k^{r,s}(\cdot\,;\mathbf{v};\cdot)\right)\\ & & =\int_{\eta\cap\Nor P} \left\langle a_P(x,u), \widetilde{\varphi}_k^{r,s}(x,u;\mathbf{v};\cdot)\right\rangle \mathcal{H}^{n-1}({\rm d} (x,u)) \allowdisplaybreaks\\ & & = \frac{C^{r,s}_{n,k}}{(k-1)!(n-1-k)!}\sum_{j=0}^{n-1} \sum_{F\in\mathcal{F}_j(P)} \int_{\eta\cap(F\times \nu(P,F)) } x^r(v_1,\ldots,v_r)u^s(v_{r+1},\ldots,v_{r+s})\\ & & \hspace{4mm} \times \sum_{\sigma \in S(n-1)}\sgn(\sigma)\frac{\prod_{i=k+1}^{n-1}k_{\sigma(i)}(x,u)} {\prod_{i=1}^{n-1}\sqrt{1+k_i(x,u)^2}}\left\langle v_{r+s+1},b_{\sigma(1)}(x,u)\right\rangle \\ & & \hspace{4mm}\times \left\langle v_{r+s+2}\wedge\bigwedge_{i=2}^{n-1}b_{\sigma(i)}(x,u)\wedge u,\Omega_n\right\rangle\mathcal{H}^{n-1}({\rm d}(x,u)) \allowdisplaybreaks\\ & & =\frac{C^{r,s}_{n,k}}{(k-1)!(n-1-k)!}\sum_{j=0}^{n-1} \sum_{F\in\mathcal{F}_j(P)} \int_{\eta\cap(F\times \nu(P,F)) }x^r(v_1,\ldots,v_r)u^s(v_{r+1},\ldots,v_{r+s})\\ & &\hspace{4mm}\times \sum_{\sigma\in S(n-1)}\frac{\prod_{i=k+1}^{n-1}k_{\sigma(i)}(x,u)} {\prod_{i=1}^{n-1} \sqrt{1+k_i(x,u)^2}}\, b_{\sigma(1)}(x,u)^2(v_{r+s+1},v_{r+s+2})\,\mathcal{H}^{n-1}({\rm d}(x,u)). \end{eqnarray*} In the final step we have used that $(b_1(x,u),\dots,b_{n-1}(x,u),u)$ is an orthonormal basis of ${\mathbb R}^n$ with the same orientation as the standard basis and hence $$ \left\langle v_{r+s+2}\wedge\bigwedge_{i=2}^{n-1}b_{\sigma(i)}(x,u)\wedge u,\Omega_n\right\rangle =\left\langle v_{r+s+2},b_{\sigma(1)}(x,u)\right\rangle\sgn(\sigma).$$ If $F\in\mathcal{F}_j(P)$, then, for $\mathcal{H}^{n-1}$-almost all $(x,u)\in F\times\nu(P,F)$, exactly $j$ of the numbers $k_i(x,u)$ are zero and $n-1-j$ of these numbers are infinite. Moreover, in this situation, $k_i(x,u)=0$ if and only if the corresponding vector $b_i(x,u)$ is in $L(F)$. Hence, if $j\neq k$, then $$ \frac{\prod_{i=k+1}^{n-1}k_{\sigma(i)}(x,u)}{\prod_{i=1}^{n-1}\sqrt{1+k_i(x,u)^2}}=0\quad \textrm{for all }\sigma\in S(n-1). $$ In fact, if $j>k$, then the numerator is zero, and for $j<k$ the number of indices $i\in\{1,\ldots,n-1\}$ such that $k_i(x,u)=\infty$ in the numerator is smaller than the corresponding number of indices in the denominator. If $F\in{\mathcal F}_k(P)$, $(x,u)\in F\times\nu(P,F)$ and, say, $b_1(x,u),\dots,b_k(x,u)\in L(F)$, then $$ Q_{L(F)}=b_1(x,u)^2+\dots+ b_k(x,u)^2.$$ Hence, we conclude that \begin{align*} &T_P\left(\mathbf{1}_{\eta} \widetilde{\varphi}_k^{r,s}(\cdot\,;\mathbf{v};\cdot)\right)\\ &=\frac{C^{r,s}_{n,k}}{(k-1)!(n-1-k)!} \sum_{F\in\mathcal{F}_k(P)} (k-1)!(n-1-k)!Q_{L(F)}(v_{r+s+1},v_{r+s+2})\\ &\hspace{4mm}\times \int_{\eta\cap(F\times \nu(P,F))}x^r(v_1,\ldots,v_r)u^s(v_{r+1},\ldots,v_{r+s})\,\mathcal{H}^{n-1} ({\rm d}(x,u)), \end{align*} as stated. \end{proof} \begin{lemma}\label{Lemma4.1} {\rm (a)} For $K\in{\mathcal K}^n$, $T_K$ is a cycle.\\ {\rm (b)} The map $K\mapsto T_K$ is a valuation on ${\mathcal K}^n$.\\ {\rm (c)} If $K_i,K\in{\mathcal K}^n$, $i\in{\mathbb N}$, and $ K_i\to K$ in the Hausdorff metric, as $i\to\infty$, then $T_{K_i}\to T_K$ in the dual flat seminorm for currents. \end{lemma} \begin{proof} These facts are provided, for instance, in the more general context of sets with positive reach as Proposition 2.6 (assertion (a)), Theorem 2.2 (assertion (b)) and Theorem 3.1 (assertion (c)) in \cite{RZ01}. 
\end{proof} We remark that a strengthened form of the continuity assertion (c) of Lemma \ref{Lemma4.1}, namely local H\"older continuity of the normal cycles of convex bodies with respect to the Hausdorff metric and the dual flat seminorm, is proved in \cite{HS13}. The third statement of Lemma \ref{Lemma4.1} implies that if $f:{\mathbb R}^{2n}\to{\mathbb R}$ is of class $C^\infty$, then the map $$ {\mathcal K}^n\to{\mathbb R},\qquad K\mapsto T_K\left(f {\varphi}_k^{r,s} \right), $$ is continuous. But then the same is true if $f$ is merely continuous, and thus $(K,\eta)\mapsto T_K\left(\mathbf{1}_\eta {\varphi}_k^{r,s} \right)$ is the weakly continuous extension of $(P,\eta)\mapsto \phi_k^{r,s,1}(P,\eta)$ from polytopes $P$ to general convex bodies. \begin{theorem}\label{Theorem4.1} The map ${\mathcal K}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^{r+s+2}$ defined by $(K,\eta)\mapsto T_K\left(\mathbf{1}_\eta \varphi_k^{r,s}\right)$ satisfies the properties {\rm (a) -- (d)} listed in Theorem $\ref{Theorem2.3}$. \end{theorem} \begin{proof} (a) Since $\Nor K$ has finite $(n-1)$-dimensional Hausdorff measure, the measure property follows from the dominated convergence theorem. (b) holds since $\phi_k^{r,s,1}$ is isometry covariant on polytopes and this property is preserved under weak convergence. ((b) can also be shown directly. The rotation covariance follows, for instance, from \eqref{eqcov}.) (d) follows from Lemma \ref{Lemma4.1} (c). It remains to verify that the tensor-valued measure is locally defined. For this, let $K,K'\in{\mathcal K}^n$ and $\eta\in{\mathcal B}(\Sigma^n)$ be such that $\eta\cap\Nor K=\eta\cap\Nor K'$. Since $\Nor K$ and $\Nor K'$ are $(n-1)$-rectifiable, so is $\eta\cap\Nor K\cap \Nor K'$. Therefore, for $\mathcal{H}^{n-1}$-almost all $(x,u)\in \eta\cap\Nor K\cap \Nor K'$ the approximate tangent space of this intersection coincides with the one of $\Nor K$ and the one of $\Nor K'$ at $(x,u)$, and the orientations coincide. Hence we have $a_K(x,u)=a_{K'}(x,u)$ for $\mathcal{H}^{n-1}$-almost all $(x,u)\in \eta\cap\Nor K=\eta\cap\Nor K'$, which yields the assertion. \end{proof} \begin{corollary}\label{Corollary4.1} Let $r,s\in{\mathbb N}_0$ and $k\in\{1,\ldots,n-2\}$. Then, for each $\eta\in \mathcal{B}(\Sigma^n)$, the map $K\mapsto \phi_k^{r,s,1}(K,\eta)$ is additive (a valuation) and Borel measurable on ${\mathcal K}^n$. \end{corollary} \begin{proof} Since $K\mapsto T_K$ is a valuation on ${\mathcal K}^n$ by Lemma \ref{Lemma4.1} (b), the first assertion follows from $\phi_k^{r,s,1}(K,\eta)=T_K\left(\mathbf{1}_\eta {\varphi}_k^{r,s} \right)$ for $K\in{\mathcal K}^n$. Let $f:\Sigma^n\to{\mathbb R}$ be a continuous function with compact support. Then $$ K\mapsto \int_{\Sigma^n} f(x,u)\, \phi_k^{r,s,1}(K,{\rm d} (x,u)),\qquad K\in{\mathcal K}^n, $$ is continuous by Lemma \ref{Lemma4.1} (c), and therefore measurable. The second assertion is then implied by \cite[Lem.~12.1.1]{SW08}. \end{proof} \begin{remark}\label{Remarksec4} {\rm Since the global functionals $\phi_k^{r,s,1}(P,\Sigma^n)$ are continuous, Alesker's characterization theorem must yield a representation for them. Such a representation was explicitly known before.
In fact, for $r=0$ it follows from a relation of McMullen \cite[p.~269]{McM97} (see also \cite[Lem.~3.3]{HSS08b}) that $$ \phi_k^{0,s,1}(P,\Sigma^n) = Q\Phi_k^{0,s}(P) -2\pi (s+2)\, \Phi_k^{0,s+2}(P).$$ The general case is provided by the second and the third displayed formula in \cite[p.~505]{HSS08b}.} \end{remark} Finally in this section, we express the new local tensor valuations $\phi^{r,s,1}_k(K,\cdot)$ for a general convex body $K$ in terms of the generalized curvatures $k_i(x,u)$ and the corresponding principal directions of curvature, given by the unit vectors $b_i(x,u)$, $i=1,\ldots, n-1$ (the result is relation (\ref{nn}) below). We put $$ \mathbb{K}(x,u):=\prod_{i=1}^{n-1}\sqrt{1+k_i(x,u)^2}. $$ Similarly as in the beginning of the proof of Lemma \ref{lemma4.2} and with the notations introduced there, we obtain that \begin{eqnarray*} & & T_K\left(\mathbf{1}_{\eta} \widetilde{\varphi}_k^{r,s}(\cdot\,;\mathbf{v};\cdot)\right)\\ & & =\int_{\eta\cap\Nor K} \left\langle a_K(x,u), \widetilde{\varphi}_k^{r,s}(x,u;\mathbf{v};\cdot)\right\rangle \mathcal{H}^{n-1}({\rm d} (x,u))\\ & & = \frac{C^{r,s}_{n,k}}{(k-1)!(n-1-k)!}\int_{\eta\cap\Nor K}x^r(v_1,\ldots,v_r)u^s(v_{r+1},\ldots,v_{r+s}) \\ & & \hspace{4mm} \times \sum_{\sigma \in S(n-1)}\sgn(\sigma)\frac{\prod_{i=k+1}^{n-1}k_{\sigma(i)}(x,u)} {\mathbb{K}(x,u)}\left\langle v_{r+s+1},b_{\sigma(1)}(x,u)\right\rangle \\ & & \hspace{4mm}\times \left\langle v_{r+s+2},b_{\sigma(1)}(x,u)\right\rangle\sgn(\sigma)\, \mathcal{H}^{n-1}({\rm d}(x,u)) \allowdisplaybreaks\\ & & = \frac{C^{r,s}_{n,k}}{(k-1)!(n-1-k)!}\int_{\eta\cap\Nor K}x^r(v_1,\ldots,v_r)u^s(v_{r+1},\ldots,v_{r+s}) \\ & & \hspace{4mm}\times \sum_{\sigma \in S(n-1)}b_{\sigma(1)}(x,u)^2(v_{r+s+1},v_{r+s+2})\frac{\prod_{i=k+1}^{n-1} k_{\sigma(i)}(x,u)}{\mathbb{K}(x,u)}\,\mathcal{H}^{n-1}({\rm d}(x,u)) \allowdisplaybreaks\\ & & = C^{r,s}_{n,k}\int_{\eta\cap\Nor K}x^r(v_1,\ldots,v_r)u^s(v_{r+1},\ldots,v_{r+s})\\ & & \hspace{4mm}\times \sum_{i=1}^{n-1}b_{i}(x,u)^2(v_{r+s+1},v_{r+s+2})\sum_{|I|=n-1-k \atop i\notin I} \frac{\prod_{j\in I} k_{j}(x,u)}{\mathbb{K}(x,u)}\,\mathcal{H}^{n-1}({\rm d}(x,u)). \end{eqnarray*} From this we deduce that \begin{eqnarray}\label{nn} & & \phi^{r,s,1}_k(K,\eta)\\ & & =C^{r,s}_{n,k}\int_{\eta\cap\Nor K}x^ru^s\sum_{i=1}^{n-1}b_{i}(x,u)^2 \sum_{|I|=n-1-k\atop i\notin I}\frac{\prod_{j\in I} k_{j}(x,u)}{\mathbb{K}(x,u)}\,\mathcal{H}^{n-1}({\rm d}(x,u)).\nonumber \end{eqnarray} If $k=1$, then $$ \phi^{r,s,1}_1(K,\eta)=C^{r,s}_{n,1}\int_{\eta\cap\Nor K} x^ru^s \sum_{i=1}^{n-1}b_{i}(x,u)^2 \frac{\prod_{j:j\neq i} k_{j}(x,u)}{\mathbb{K}(x,u)}\,\mathcal{H}^{n-1}({\rm d}(x,u)), $$ and for $k=n-2$, we have $$ \phi^{r,s,1}_{n-2}(K,\eta)=C^{r,s}_{n,n-2}\int_{\eta\cap\Nor K}x^ru^s\sum_{i=1}^{n-1}b_{i}(x,u)^2\sum_{j:j\neq i} \frac{k_{j}(x,u)} {\mathbb{K}(x,u)}\,\mathcal{H}^{n-1}({\rm d}(x,u)). $$ For $n=3$, these two special cases coincide and we get $$ \phi^{r,s,1}_{1}(K,\eta)=C^{r,s}_{3,1}\int_{\eta\cap\Nor K} x^ru^s \frac{k_1(x,u)b_{2}(x,u)^2+k_2(x,u)b_{1}(x,u)^2} {\mathbb{K}(x,u)}\,\mathcal{H}^{2}({\rm d}(x,u)). $$ For a convex body of class $C^2$, we write $u_x$ for the unique exterior unit normal of $K$ at the boundary point $x\in\partial K$ of $K$. 
An application of the coarea formula (together with Lemma 3.1 from \cite{Hug98b}) then yields $$ \phi^{r,s,1}_k(K,\eta)=C^{r,s}_{n,k}\int_{\partial K}\mathbf{1}_\eta(x,u_x)x^ru_x^s\sum_{i=1}^{n-1}b_{i}(x)^2 \sum_{|I|=n-1-k\atop i\notin I}\prod_{j\in I} k_{j}(x)\,\mathcal{H}^{n-1}({\rm d} x), $$ where the $k_j(x)$ are the principal curvatures and the unit vectors $b_j(x)$ give the principal directions of curvature of $K$ at $x\in\partial K$. In particular, for a convex body $K$ in ${\mathbb R}^3$ with a $C^2$ boundary we get $$ \phi^{r,s,1}_{1}(K,\eta)=C^{r,s}_{3,1}\int_{\partial K }\mathbf{1}_\eta(x,u_x)x^ru_x^s \left(k_1(x)b_{2}(x)^2+k_2(x)b_{1}(x)^2 \right)\,\mathcal{H}^{2}({\rm d} x). $$ \section{Proof of Theorem \ref{Theorem2.3}}\label{sec5} For the proof of Theorem \ref{Theorem2.3}, it suffices to prove the following. \begin{theorem}\label{Theorem5.1} Let $p\in{\mathbb N}_0$. Let $\Gamma:{\mathcal K}^n\times{\mathcal B}(\Sigma^n)\to{\mathbb T}^p$ be a mapping with the following properties.\\ $\rm (a)$ $\Gamma(K,\cdot)$ is a ${\mathbb T}^p$-valued measure, for each $K\in{\mathcal K}^n$;\\ $\rm (b)$ $\Gamma$ is translation invariant and rotation covariant;\\ $\rm (c)$ $\Gamma$ is locally defined;\\ $\rm (d)$ $\Gamma$ is weakly continuous.\\ Then $\Gamma$ is a linear combination, with constant coefficients, of the mappings $Q^m\phi^{0,s,j}_k$, where $m,s\in{\mathbb N}_0$ and $j\in\{0,1\}$ satisfy $2m+2j+s=p$ and where $k\in\{0,\dots,n-1\}$, but $j=0$ if $k\in\{0,n-1\}$. \end{theorem} Since we know that the mappings $Q^m\phi_k^{0,s,j}$ with $j\in\{0,1\}$ have the properties (a), (b), (c), (d) and since linear independence follows from Theorem \ref{Theorem3.1}, Theorem \ref{Theorem2.3} can be derived from Theorem \ref{Theorem5.1} in the same way as, in Section \ref{sec3}, Theorem \ref{Theorem2.2} was derived from Theorem \ref{Theorem3.2}. For this, we use the weak continuity to extend (\ref{n3.1}), for $j\in\{0,1\}$, from polytopes to general convex bodies. For the proof of Theorem \ref{Theorem5.1} we note first that, by Lemma \ref{Lemma3.5}, it is sufficient to prove the theorem under the additional assumption that $\Gamma$ is homogeneous of some fixed degree $k\in\{0,\dots,n-1\}$. We assume this and then distinguish two cases. If $k\in\{0,n-1\}$, then Theorem \ref{Theorem3.2} shows that on polytopes $P$ the mapping $\Gamma$ is of the form $$ \Gamma(P,\cdot) = \sum_{m,s\ge 0 \atop 2m+s=p} c_{ms} Q^m\phi^{0,s}_k(P,\cdot) $$ with constants $c_{ms}$. From the weak continuity of $\Gamma$ and of $\phi^{0,s}_k$ it follows that $$ \Gamma(K,\cdot) = \sum_{m,s\ge 0 \atop 2m+s=p} c_{ms} Q^m\phi^{0,s}_k(K,\cdot) $$ for all $K\in{\mathcal K}^n$. Now let $k\in\{1,\dots,n-2\}$. Thus, from now on we are dealing only with dimensions $n\ge 3$. By Theorem \ref{Theorem3.2}, on polytopes $P$ the mapping $\Gamma$ is of the form $$ \Gamma(P,\cdot) = \sum_{m,j,s\ge 0 \atop 2m+2j+s=p} c_{mjs} Q^m\phi^{0,s,j}_k(P,\cdot) $$ with constants $c_{mjs}$. Since $\Gamma$ and the mappings $\phi_k^{0,s,0}$ and $\phi_k^{0,s,1}$ are weakly continuous, the mapping $\Gamma'$ defined by $$ \Gamma':=\Gamma-\sum_{m,j,s\ge 0,\,j\le 1 \atop 2m+2j+s=p} c_{mjs} Q^m\phi^{0,s,j}_k$$ has again the properties (a) -- (d) of Theorem \ref{Theorem5.1}, and on polytopes $P$ it is of the form $$ \Gamma'(P,\cdot) = \sum_{m,s\ge 0,\,j\ge 2 \atop 2m+2j+s=p} c_{mjs} Q^m\phi^{0,s,j}_k(P,\cdot). $$ We have to show that here all the remaining constants $c_{mjs}$ are zero.
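For orientation, the remaining index set is easy to enumerate: the triples $(m,j,s)$ with $m,s\ge 0$, $j\ge 2$ and $2m+2j+s=p$ form a finite list determined by $p$ alone. The following Python fragment (an illustration only; the function name is ours) lists them:

\begin{verbatim}
def remaining_indices(p):
    """All (m, j, s) with m, s >= 0, j >= 2 and 2m + 2j + s = p."""
    return [(m, j, p - 2*m - 2*j)
            for j in range(2, p // 2 + 1)
            for m in range((p - 2*j) // 2 + 1)]

# For p = 7, Gamma' can only contain the terms indexed by:
print(remaining_indices(7))   # [(0, 2, 3), (1, 2, 1), (0, 3, 1)]
\end{verbatim}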
The main idea of the proof is to construct a particular sequence of polytopes which converges to a convex body that has more rotational symmetries than the approximating polytopes. The mapping $\Gamma'$ must be covariant under the rotations mapping the limit body into itself. If $\Gamma'$ is not identically zero, it can be shown that for our special choice of the approximating polytopes this covariance is violated so strongly that it is still violated in the weak limit, which is a contradiction. This approach requires some preparations. In the following, we write again $\Gamma$ instead of $\Gamma'$. Let $(e_1,\dots,e_n)$ be an orthonormal basis of ${\mathbb R}^n$ and let ${\mathbb R}^{n-1}$ be the subspace spanned by $e_1,\dots,e_{n-1}$. \vspace{2mm} \noindent{\bf Definition.} Let $\omega\in{\mathcal B}({\mathbb S}^{n-1})$ and $0<\varepsilon<1$. We say that $\omega$ is $\varepsilon$-{\em close to} $-e_n$ if each $u\in\omega$ satisfies \begin{equation}\label{34a} \langle u,-e_n\rangle > 1-\varepsilon \end{equation} and \begin{equation}\label{35} |\langle u,a\rangle| < \varepsilon\|a\| \quad\mbox{for each }a\in{\mathbb R}^{n-1}. \end{equation} \vspace{2mm} For example, if $\mu>0$ is sufficiently small (depending on $\varepsilon$), then the nonempty, open set $$ \omega = \{u\in {\mathbb S}^{n-1}:\langle u,-e_n\rangle >1-\mu\}$$ is $\varepsilon$-close to $-e_n$. In the following, we write $$ \Gamma(K,f):= \int_{\Sigma^n} f(u)\, \Gamma(K,{\rm d} (x,u))$$ for $K\in {\mathcal K}^n$ and any continuous real function $f$ on the unit sphere ${\mathbb S}^{n-1}$. For a polytope $P\in {\mathcal P}^n$, we define $$ W_k(P,f):= \sum_{F\in{\mathcal F}_k(P)} {\mathcal H}^k(F)\int_{\nu(P,F)}f\,{\rm d}{\mathcal H}^{n-k-1}.$$ Using the $k$th area measure $\Psi_k(P,\cdot)=\Lambda_k(P,{\mathbb R}^n\times\cdot)$ (see \cite[(4.20), (4.24)]{Sch14}), this can also be written as \begin{equation}\label{35a} W_k(P,f) = \omega_{n-k}\int_{{\mathbb S}^{n-1}} f\,{\rm d}\Psi_k(P,\cdot). \end{equation} For polytopes $P$ and for suitable $f$ and $E$, the following lemma approximates $\Gamma(P,f)(E)$ by a simpler expression. \vspace{2mm} \begin{lemma}\label{Lemma5.1} Let $k\in\{1,\dots,n-2\}$. Suppose that $\Gamma$ satisfies the assumptions listed in Theorem $\ref{Theorem5.1}$ and that for $P\in{\mathcal P}^n$ it is of the form \begin{equation}\label{31a} \Gamma(P,\cdot) = \sum_{m,s\ge 0,\,j\ge 2 \atop 2m+2j+s=p} c_{mjs} Q^m\phi^{0,s,j}_k(P,\cdot) \end{equation} with constants $c_{mjs}$ which are not all zero. Let $s_0$ be the smallest number $s$ for which $c_{mjs}\not=0$ for some $m,j$, set $q:=(p-s_0)/2$ and $c_j:=((2q)!s_0!/p!)c_{(q-j)js_0}C_{n,k}^{0,s_0}$ $($recall definition $(\ref{2.7a}))$. Let $d$ be the largest $j\in \{2,\dots,q\}$ for which $c_j\not=0$. Let $\omega\in{\mathcal B}({\mathbb S}^{n-1})$ and $0<\varepsilon<1$ be such that $\omega$ is $\varepsilon$-close to $-e_n$. Let $f$ be a nonnegative, continuous real function on ${\mathbb S}^{n-1}$ with support in $\omega$. For $P\in{\mathcal P}^n$, define $\Delta(P,f)\in{\mathbb T}^{2q}$ by \begin{equation}\label{5.2ab} \Delta(P,f) := \sum_{j=2}^d c_j Q^{q-j}\sum_{F\in{\mathcal F}_k(P)} Q^j_{L(F)}{\mathcal H}^k(F) \int_{\nu(P,F)} f\,{\rm d}{\mathcal H}^{n-k-1}. \end{equation} Let $$ E':=(\underbrace{a,\dots,a}_{2q})\quad\mbox{with } a\in{\mathbb R}^{n-1}, \|a\|=1,$$ and \begin{equation}\label{5.2a} E:=(b_1,\dots,b_p):=(\underbrace{a,\dots,a}_{2q},\underbrace{-e_n,\dots,-e_n}_{s_0}). 
\end{equation} Then \begin{equation}\label{5.2c} \left|\Gamma(P,f)(E)-\Delta(P,f)(E')\right|\le C_3 W_k(P,f)\varepsilon \end{equation} with a constant $C_3$ depending only on $\Gamma$. \end{lemma} \begin{proof} For a polytope $P$ and any set $\eta={\mathbb R}^n\times\omega'$ with $\omega'\in{\mathcal B}({\mathbb S}^{n-1})$, the representation (\ref{31a}) can explicitly be written as \begin{equation} \Gamma(P,\eta)= \sum_{m,s\ge 0,\,j\ge 2 \atop 2m+2j+s=p} c_{mjs} C_{n,k}^{0,s} \sum_{F\in{\mathcal F}_k(P)} {\mathcal H}^k(F)\int_{\omega'\cap \nu(P,F)} Q^m Q_{L(F)}^j u^s\,{\mathcal H}^{n-k-1}({\rm d} u), \end{equation} according to (\ref{2.6}). Therefore, \begin{equation}\label{32} \Gamma(P,f)= \sum_{m,s\ge 0,\,j\ge 2 \atop 2m+2j+s=p} c_{mjs} C_{n,k}^{0,s} \sum_{F\in{\mathcal F}_k(P)} {\mathcal H}^k(F)\int_{\nu(P,F)} Q^m Q_{L(F)}^j u^sf(u)\,{\mathcal H}^{n-k-1}({\rm d} u). \end{equation} In the following estimates, we need only consider vectors $u\in\omega$, since $f$ has its support in $\omega$. Let $u\in\omega$. For a $p$-tuple $E$ as given by (\ref{5.2a}), we have \begin{eqnarray*} & & (Q^mQ_{L(F)}^ju^s)(E)\\ & &= \frac{1}{p!} \sum_{\sigma\in S(p)} Q^m(b_{\sigma(1)},\dots,b_{\sigma(2m)})Q_{L(F)}^j (b_{\sigma(2m+1)},\dots,b_{\sigma(2m+2j)}) u^s(b_{\sigma(2m+2j+1)},\dots,b_{\sigma(p)}), \end{eqnarray*} where $S(p)$ is the group of permutations of $1,\dots,p$. If at least one of the arguments of $u^s$, the vectors $b_{\sigma(2m+2j+1)},\dots,b_{\sigma(p)}$, is not equal to $- e_n$ and hence is equal to $a$, then \begin{equation}\label{33} |u^s(b_{\sigma(2m+2j+1)},\dots,b_{\sigma(p)})|=|\langle u,b_{\sigma(2m+2j+1)}\rangle| \cdots |\langle u,b_{\sigma(p)}\rangle|\le \varepsilon \end{equation} by (\ref{35}), since we have assumed that $\omega$ is $\varepsilon$-close to $-e_n$ and $\|a\|=1$. When $s>s_0$, this holds for all $\sigma\in S(p)$. The absolute value of $Q^m(\cdot)Q_{L(F)}^j(\cdot)$ in the last sum above is at most $1$. Hence, we have $$ |(Q^mQ_{L(F)}^ju^s)(E)| \le \varepsilon \qquad\mbox{for }s>s_0.$$ For each fixed $s>s_0$, this yields the estimate \begin{eqnarray*} & & \left| \sum_{m\ge 0,\,j\ge 2 \atop 2m+2j=p-s} c_{mjs}C_{n,k}^{0,s} \sum_{F\in{\mathcal F}_k(P)} {\mathcal H}^k(F)\int_{\nu(P,F)} (Q^m Q_{L(F)}^j u^{s})(E)f(u)\,{\mathcal H}^{n-k-1}({\rm d} u)\right|\\ & & \le \sum_{m\ge 0,\,j\ge 2 \atop 2m+2j=p-s} |c_{mjs}| C_{n,k}^{0,s} \sum_{F\in{\mathcal F}_k(P)} {\mathcal H}^k(F)\int_{\nu(P,F)} \varepsilon f\,{\rm d}{\mathcal H}^{n-k-1}\\ & & \le C_1W_k(P,f)\varepsilon \end{eqnarray*} with a constant $C_1$ that (for given dimension) depends only on $\Gamma$. For $s=s_0$ we have $$ |u^{s_0}(b_{\sigma(2m+2j+1)},\dots,b_{\sigma(p)})|\le \varepsilon $$ if at least one of the vectors $b_{\sigma(2m+2j+1)},\dots,b_{\sigma(p)}$ is not equal to $- e_n$, and otherwise $$ u^{s_0}(b_{\sigma(2m+2j+1)},\dots,b_{\sigma(p)})=\langle u,-e_n\rangle^{s_0}.$$ Hence, we obtain $$ (Q^m Q_{L(F)}^j u^{s_0})(E) = \frac{(2q)!s_0!}{p!}(Q^mQ_{L(F)}^j)(E')\langle u,-e_n\rangle^{s_0} + R_1$$ with $$ |R_1|\le\left[1-\binom{p}{s_0}^{-1}\right]\varepsilon.$$ For the vectors $u\in\omega$ we have the estimate $ 1-\varepsilon \le \langle u,-e_n\rangle \le 1$ by (\ref{34a}) and hence we can write $$ (Q^m Q_{L(F)}^j u^{s_0})(E) = \frac{(2q)!s_0!}{p!}(Q^mQ_{L(F)}^j)(E') + R_2$$ with $$ |R_2|\le C_2\varepsilon,$$ where the constant $C_2$ depends only on $\Gamma$.
Splitting the sum in (\ref{32}) in the form $\sum_{m,j,s} = \sum_{m,j,s_0}+\sum_{m,j, s>s_0}$, we conclude that \begin{eqnarray*} & & \Gamma(P,f)(E)\\ & & = \frac{(2q)!s_0!}{p!} \sum_{m\ge 0,\,j\ge 2\atop 2m+2j=p-s_0} c_{mjs_0}C_{n,k}^{0,s_0} \sum_{F\in{\mathcal F}_k(P)}(Q^mQ_{L(F)}^j)(E') {\mathcal H}^k(F) \int_{\nu(P,F)}f\,{\rm d}{\mathcal H}^{n-k-1} + R_3 \end{eqnarray*} with $$ |R_3|\le C_3W_k(P,f)\varepsilon,$$ where $C_3$ depends only on $\Gamma$. With $((2q)!s_0!/p!)c_{(q-j)js_0}C_{n,k}^{0,s_0}=c_j$, we obtain \begin{eqnarray*} \Gamma(P,f)(E) &=& \sum_{j=2}^q c_j\sum_{F\in{\mathcal F}_k(P)} (Q^{q-j}Q_{L(F)}^j)(E') {\mathcal H}^k(F)\int_{\nu(P,F)}f \,{\rm d}{\mathcal H}^{n-k-1} + R_3\\ &=& \Delta(P,f)(E')+R_3 \end{eqnarray*} (recall that $c_j\neq 0$ for some $j\in \{2,\ldots,q\}$ and that $d$ was defined as the largest number $j\in \{2,\dots,q\}$ such that $c_j\not=0$). \end{proof} As already indicated, we construct a sequence of polytopes that converges to a convex body having more rotational symmetries than the approximating polytopes. If $\Gamma\not\equiv 0$, then on these polytopes the mapping $\Gamma$ violates the rotation covariance in a way that is preserved under the weak limit. This contradiction will show that $\Gamma\equiv 0$. We turn to the construction of the polytopes. Recall that $(e_1,\dots,e_n)$ is the standard orthonormal basis of ${\mathbb R}^n$ and that the linear hull ${\rm lin}\{e_1,\dots,e_{n-1}\}$ is identified with ${\mathbb R}^{n-1}$. We write the points $y\in {\mathbb R}^n$ in the form $$ y= y_1e_1+\dots +y_ne_n= (y_1,\dots, y_n),$$ thus $(x_1,\dots,x_{n-1},0)\in{\mathbb R}^{n-1}$. We define the lifting map $\ell:{\mathbb R}^{n-1}\to{\mathbb R}^n$ by \begin{equation}\label{lift} \ell(x):= x+\|x\|^2e_n \qquad\mbox{for } x\in{\mathbb R}^{n-1}. \end{equation} Then ${\mathcal R}:= \ell({\mathbb R}^{n-1})$ is a paraboloid of revolution. Let $t>0$ be given and consider the lattice $2t{\mathbb Z}^{n-1}$ of all points $$2t(m_1,\dots,m_{n-1},0)\in{\mathbb R}^{n-1}$$ with integers $m_1,\dots,m_{n-1}$. The points of $2t{\mathbb Z}^{n-1}$ are the vertices of a tessellation of ${\mathbb R}^{n-1}$ into $(n-1)$-cubes, which together with all their faces form a polytopal complex, which we denote by ${\mathcal C}_t$. We consider the polyhedral set $$ R_t:= {\rm conv}\,\ell(2t{\mathbb Z}^{n-1}).$$ For given $z\in{\mathbb R}^{n-1}$, we define an affine map $\alpha_z:{\mathbb R}^{n-1}\to{\mathbb R}^n$ by \begin{equation}\label{aff} \alpha_z(y):= y+2\langle z,y\rangle e_n +\left(r_t^2-\|z\|^2\right)e_n, \qquad y\in{\mathbb R}^{n-1}, \end{equation} where $r_t$ ($=t\sqrt{n-1}$) is the radius of the sphere through the vertices of a cube in ${\mathcal C}_t$. Then $$ \alpha_z(z+x)=\ell(z+x)+\left(r_t^2-\|x\|^2\right)e_n$$ for all $x\in{\mathbb R}^{n-1}$. Hence, if $\|x\|=r_t$, then $$ \alpha_z(z+x)=\ell(z+x).$$ Let $C_z$ be an $(n-1)$-cube of the complex ${\mathcal C}_t$, with center $z$. On the vertices of $C_z$, the mapping $\ell$ coincides with the affine map $\alpha_z$. Therefore, the $\ell$-images of the vertices of $C_z$ lie in the hyperplane $H:=\alpha_z({\mathbb R}^{n-1})$. Every point of the lattice $2t{\mathbb Z}^{n-1}$ which is not a vertex of $C_z$ is mapped by $\ell$ into the `upper' open halfspace bounded by $H$. It follows that $H$ is a supporting hyperplane of $R_t$. Therefore, the convex hull of the $\ell$-images of the vertices of $C_z$ is a facet of $R_t$. 
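In analytic terms, the vertical gap between the lifted point $\ell(x)$ and the hyperplane $\alpha_z({\mathbb R}^{n-1})$ equals $\|x\|^2-2\langle z,x\rangle-(r_t^2-\|z\|^2)=\|x-z\|^2-r_t^2$, which vanishes exactly on the sphere through the vertices of $C_z$. The following NumPy sketch (an illustration only; the case $n=3$ and the particular values of $t$ and $z$ are arbitrary choices) verifies this observation on a finite patch of the lattice:

\begin{verbatim}
import itertools
import numpy as np

t = 0.3                               # lattice parameter of 2t Z^{n-1}
nm1 = 2                               # n - 1, i.e. n = 3
z = 2*t*np.array([1.0, 2.0]) + t      # center of a cube C_z of the complex
r2 = nm1 * t**2                       # r_t^2 = (n-1) t^2

lift = lambda x: x @ x                         # e_n-height of ell(x)
plane = lambda x: 2*(z @ x) + (r2 - z @ z)     # e_n-height of alpha_z(x)

for m in itertools.product(range(-5, 6), repeat=nm1):
    x = 2*t*np.array(m, dtype=float)
    gap = lift(x) - plane(x)          # equals ||x - z||^2 - r_t^2
    if np.allclose(np.abs(x - z), t):           # x is a vertex of C_z
        assert abs(gap) < 1e-12                 # lifted into the hyperplane
    else:
        assert gap > 1e-12                      # lifted strictly above
print("facet-above-cube property confirmed on this patch")
\end{verbatim}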
(A more general version of this observation is well known in the theory of Voronoi and Delaunay tessellations; see \cite{For04}, \cite[Sec.~4.4]{DO11}, \cite[Sec.~7]{JT13}, for example.) From this it follows further that each face $G$ of ${\mathcal C}_t$ is in one-to-one correspondence with a face $F$ of $R_t$ of the same dimension, in such a way that $F$ is the convex hull of the $\ell$-images of the vertices of $G$. We say in this case that $F$ {\em lies above} $G$. In particular, $R_t$ has no other faces than those lying above the faces of ${\mathcal C}_t$. For a given face $F$ of $R_t$, we denote by $F^\Box$ the face of ${\mathcal C}_t$ above which it lies. If $F$ is the facet of $R_t$ for which $F^\Box=C_z$, then $F=\alpha_z(C_z)$. For $h>0$, let $$ H^-_h := \{y\in{\mathbb R}^n:\langle y,e_n\rangle \le h\}.$$ We define a convex body of revolution by $$ K_h:= \left\{y\in{\mathbb R}^n:y_n \ge y_1^2+\dots+y_{n-1}^2\right\}\cap H^-_h.$$ This is the intersection of the epigraph of $\ell$ with the closed halfspace $H^-_h$. Further, we define a polytope $P_{h,t}$ by $$ P_{h,t}:= R_t\cap H^-_h.$$ Then $P_{h,t}\to K_h$ for $t\to 0$. By $\omega_h$ we denote the spherical image (that is, the set of outer unit normal vectors) of the part of $K_h$ that lies in the interior of the halfspace $H^-_{h/2}$. Now let $0<\varepsilon<1$ be given. We choose $h$ in dependence of $\varepsilon$, in the following way. Recall that $\pi_L:{\mathbb R}^n\to L$ denotes the orthogonal projection to a subspace $L$. Let $\lambda_z:{\mathbb R}^{n-1}\to{\mathbb R}^n$ be the linear part of the affine map $\alpha_z$ used above, that is, $$\lambda_z(x):= x+2\langle z,x \rangle e_n\qquad\mbox{for } x\in{\mathbb R}^{n-1}.$$ Since $\lambda_z$ for $z=0$ is just the inclusion map ${\mathbb R}^{n-1} \hookrightarrow {\mathbb R}^n$, we can choose $h>0$ so small that for $z\in{\mathbb R}^{n-1}$ with $\|z\|^2\le h$ the following holds. For every linear subspace $L$ of ${\mathbb R}^{n-1}$ and for any unit vector $a\in{\mathbb R}^n$, we have \begin{equation}\label{13} \left|\|\pi_L a\|^2 -\|\pi_{\lambda_z L} a\|^2\right| \le \varepsilon. \end{equation} Moreover, we choose $h$ so small that the set $\omega_h$ is $\varepsilon$-close to $-e_n$. We point out that $K_h,P_{h,t},\omega_h$ all depend on $\varepsilon$, although this is not made explicit by the notation. We define $$ {\mathcal F}_{k,h}(P_{h,t}):=\{F\in{\mathcal F}_k(P_{h,t}): \omega_h\cap\nu(P_{h,t},F)\not=\emptyset\}.$$ For $u\in\omega_h$, let $H(K,u)$ denote the supporting hyperplane of the convex body $K$ with outer normal vector $u$. Let $H_h$ be the boundary hyperplane of the halfspace $H^-_h$. There is a number $\delta>0$ such that each hyperplane $H(K_h,u)$ with $u\in\omega_h$ has distance at least $\delta$ from the set $K_h\cap H_h$. Therefore, there is a number $t_0>0$ (depending on $\varepsilon$) such that for $0<t<t_0$ each supporting hyperplane $H(P_{h,t},u)$ with $u\in\omega_h$ has distance at least $\delta/2$ from $K_h\cap H_h$. For these $t$, a face $F\in{\mathcal F}_{k,h}(P_{h,t})$ cannot contain a point of $H_h$ and hence must be a face of $R_t$ lying above some face $F^\Box$ of ${\mathcal C}_t$. Decreasing $t_0$, if necessary, we can further assume that each such face $F^\Box$ is a face of some cube $C_z$ of ${\mathcal C}_t$ whose center satisfies $\|z\|^2\le h$. We assume in the following that $0<t<t_0$. Let ${\mathcal L}_k$ denote the set of all linear subspaces spanned by any $k$ vectors of $e_1,\dots,e_{n-1}$. 
This set of $k$-dimensional coordinate subspaces of ${\mathbb R}^{n-1}$ contains $b(n,k)=\binom{n-1}{k}$ elements. We numerate them by $L_1,\dots,L_{b(n,k)}$. By ${\mathcal F}^i_{k,h}(P_{h,t})$ we denote the set of $k$-faces $F\in{\mathcal F}_{k,h}(P_{h,t})$ for which $F^\Box$ is parallel to $L_i$. Now let $f$ be a nonnegative, continuous function on ${\mathbb S}^{n-1}$ that is invariant under all rotations fixing $e_n$, has its support in $\omega_h$ and is not identically zero. The number \begin{equation}\label{36a} W^i_k(P_{h,t},f):= \sum_{F\in{\mathcal F}^i_{k,h}(P_{h,t})} {\mathcal H}^k(F)\int_{\nu(P_{h,t},F)}f\, {\rm d}{\mathcal H}^{n-k-1} \end{equation} is independent of $i$, since the polytope $P_{h,t}$ and the function $f$ are invariant under the rotations permuting the basis vectors $e_1,\dots,e_{n-1}$ and fixing $e_n$. Since $$ \sum_{i=1}^{b(n,k)}\sum_{F\in{\mathcal F}^i_{k,h}(P_{h,t})} {\mathcal H}^k(F)\int_{\nu(P_{h,t},F)}f\, {\rm d}{\mathcal H}^{n-k-1} = \sum_{F\in{\mathcal F}_k(P_{h,t})} {\mathcal H}^k(F)\int_{\nu(P_{h,t},F)}f\, {\rm d}{\mathcal H}^{n-k-1},$$ where we used that $f$ has its support in $\omega_h$, we have \begin{equation}\label{36} b(n,k) W^i_k(P_{h,t},f)= W_k(P_{h,t},f). \end{equation} This finishes the construction of the polytopes $P_{h,t}$ and the description of their properties. Suppose now that $\Gamma$ is a mapping with the properties listed in Theorem \ref{Theorem5.1} and such that for $P\in{\mathcal P}^n$ it is given by $$ \Gamma(P,\cdot) = \sum_{m,s\ge 0,\,j\ge 2 \atop 2m+2j+s=p} c_{mjs} Q^m\phi^{0,s,j}_k(P,\cdot) $$ with constants $c_{mjs}$ which are not all zero. Let $f$ be a function as described above. As in Lemma \ref{Lemma5.1}, we define $s_0$ as the smallest number $s$ for which $c_{mjs}\not=0$ for some $m,j$. Definition (\ref{5.2ab}) for the polytope $P_{h,t}$ reads \begin{equation}\label{50a} \Delta(P_{h,t},f) = \sum_{j=2}^d c_j Q^{q-j}\sum_{F\in{\mathcal F}_{k,h}(P_{h,t})} Q^j_{L(F)}{\mathcal H}^k(F) \int_{\nu(P,F)} f\,{\rm d}{\mathcal H}^{n-k-1}, \end{equation} where only faces $F\in{\mathcal F}_{k,h}(P_{h,t})$ appear, since $f$ has its support in $\omega_h$. Let \begin{equation}\label{40b} E:= (E',\underbrace{-e_n,\dots,-e_n}_{s_0}), \end{equation} where $$ E':=(\underbrace{a,\dots,a}_{2q})\quad\mbox{with } a\in{\mathbb R}^{n-1},\, \|a\|=1.$$ Since $\omega_h$ is $\varepsilon$-close to $-e_n$, it follows from Lemma \ref{Lemma5.1} that \begin{equation}\label{51} \left|\Gamma(P_{h,t},f)(E)-\Delta(P_{h,t},f)(E')\right|\le C_3W_k(P_{h,t},f)\varepsilon. \end{equation} Let $F\in{\mathcal F}_{k,h}(P_{h,t})$. The face $F$ lies above some $k$-face $F^\Box$ of ${\mathcal C}_t$. The face $F^\Box$ belongs to some cube $C_z$ with center $z$ satisfying $\|z\|^2\le h$, and $F$ is a translate of $\lambda_zF^\Box$, therefore $L(F)=\lambda_z(L(F^\Box))$. Let $a\in{\mathbb R}^{n-1}$ be a unit vector. By (\ref{13}), $$ \left|\|\pi_L a\|^2-\|\pi_{\lambda_z L} a\|^2\right|\le\varepsilon.$$ This yields $$ (Q^{q-j}Q_{L(F)}^j)(E') = (Q^{q-j}Q_{L(F^\Box)}^j)(E')+R_4$$ with $$|R_4|\le C_4\varepsilon,$$ where the constant $C_4$ can be chosen to depend only on $\Gamma$. Together with (\ref{50a}) and (\ref{51}) this yields \begin{eqnarray}\label{41} & & \Gamma(P_{h,t},f)(E)\\ & & = \sum_{j=2}^d c_j\sum_{F\in{\mathcal F}_{k,h}(P_{h,t})} \left(Q^{q-j}Q_{L(F^\Box)}^j\right)(E') {\mathcal H}^k(F)\int_{\nu(P_{h,t},F)} f\,{\rm d}{\mathcal H}^{n-k-1} +R_5(E) \nonumber \end{eqnarray} with $$ |R_5(E)|\le C_5W_k(P_{h,t},f)\varepsilon,$$ where $C_5$ depends only on $\Gamma$. 
(We have written $R_5(E)$, for fixed $P_{h,t},f$, since later we shall have to distinguish between remainder terms for different arguments $E$.) For each $F\in{\mathcal F}_{k,h}(P_{h,t})$ we have $L(F^\Box)=L_i$ for suitable $i\in \{1,\dots,b(n,k)\}$, hence, using (\ref{36}), \begin{eqnarray}\label{53} & & \Gamma(P_{h,t},f)(E)\\ & & = \sum_{j=2}^d c_j \sum_{i=1}^{b(n,k)}\sum_{F\in{\mathcal F}^i_{k,h}(P_{h,t})} (Q^{q-j}Q_{L_i}^j)(E') {\mathcal H}^k(F)\int_{\nu(P_{h,t},F)} f\,{\rm d}{\mathcal H}^{n-k-1} +R_5(E) \nonumber\\ & &= b(n,k)^{-1}W_k(P_{h,t},f)\left(\sum_{j=2}^d c_j Q^{q-j}\sum_{i=1}^{b(n,k)} Q_{L_i}^j\right)(E')+R_5(E).\nonumber \end{eqnarray} Before we continue, it seems appropriate to point out why the proof cannot be finished more quickly at this point. For a shorter proof, we would need the following. Suppose that $d\le q$. If the polynomial $$ \sum_{j=2}^d c_j(x_1^2+\cdots+x_n^2)^{q-j} \sum_{I\subset \{1,\dots,n\},|I|=k}\left(\sum_{i\in I} x_i^2\right)^j$$ is invariant under $SO(n)$, then $c_2=\cdots=c_d=0$. However, this is not true. The simplest example is obtained for $n=2$, $k=1$, $q=d=3$, by $$ P(x_1,x_2)=c_2(x_1^2+x_2^2)\left[(x_1^2)^2+(x_2^2)^2\right]+c_3\left[(x_1^2)^3+(x_2^2)^3\right].$$ With $x_1^2+x_2^2=1$ we get $$ P(x_1,x_2)=(2c_2+3c_3)x_1^4-(2c_2+3c_3)x_1^2+(c_2+c_3).$$ Hence, if $2c_2+3c_3=0$, then, for arbitrary $x_1,x_2$, $$ P(x_1,x_2)=(c_2+c_3)(x_1^2+x_2^2)^3.$$ Thus, $P$ is rotation invariant, but not necessarily $c_2=c_3=0$. There are many other examples of this kind. The existence of these examples forces us to perform a case distinction, involving some geometric arguments. As it turns out, our approach requires us to treat dimension three and the higher dimensions separately. First we consider the case $n\ge 4$ and prove the following lemma. \vspace{2mm} \begin{lemma}\label{Lemma5.2} Let $n\ge 4$. Under the assumptions made above, there exist a convex body $K\in{\mathcal K}^n$, a continuous function $f$ on ${\mathbb S}^{n-1}$, a $p$-tuple $E$, and a rotation $\vartheta\in{\rm O}(n)$ such that $K$ and $f$ are invariant under $\vartheta$, but $\Gamma(K,f)(\vartheta E)\not=\Gamma(K,f)(E)$. \end{lemma} If this is proved, then the invariance of $K$ and $f$ under $\vartheta$ and the rotation covariance of $\Gamma$ give $$ \Gamma(K,f)(\vartheta E)=\Gamma(\vartheta K,\vartheta f)(\vartheta E) = \Gamma(K,f)(E).$$ This is a contradiction, which finishes the proof of Theorem 5.1 for $n\ge 4$. \vspace{2mm} \noindent{\em Proof of Lemma} 5.2. Let ${\rm O}(n,e_n)$ denote the group of rotations of ${\mathbb R}^n$ that fix $e_n$. First we want to show that the tensor $$ \Upsilon:= \sum_{j=2}^d c_j Q^{q-j}\sum_{i=1}^{b(n,k)} Q_{L_i}^j $$ is not invariant under ${\rm O}(n,e_n)$. For $x=(x_1,\dots, x_{n-1},0)\in{\mathbb R}^n$ with $\|x\|=1$ we have $$ p_\Upsilon(x):= \Upsilon(\underbrace{x,\dots,x}_{2q}) = \sum_{j=2}^d c_j \sum_{I\subset\{1,\dots,n-1\},\,|I|=k}\left(\sum_{i\in I} x_i^2\right)^j.$$ First let $d$ be even. Consider $x= (x_1,x_2,0,\dots,0)\in{\mathbb R}^n$ with $\|x\|=1$. For such $x$, $$ p_\Upsilon(x)=\sum_{j=2}^d c_j\left\{\binom{n-3}{k-2}(x_1^2+x_2^2)^j+\binom{n-3}{k-1}(x_1^{2j}+x_2^{2j})\right\}.$$ Here and below we make use of the convention that $\binom{n}{m}=0$ for $m<0$.
With $x(\lambda)= (\lambda,\sqrt{1-\lambda^2},0,\dots,0)$, $\lambda\in[0,1]$, we get $$ p_\Upsilon(x(\lambda)) = \sum_{j=2}^d c_j\left\{\binom{n-3}{k-2}+\binom{n-3}{k-1}(\lambda^{2j}+(1-\lambda^2)^j)\right\}=2c_d\binom{n-3}{k-1}\lambda^{2d}+\dots,$$ where the dots stand for terms where the exponent of $\lambda$ is less than $2d$. Since $\lambda\mapsto p_\Upsilon(x(\lambda))$ is not a constant function, the polynomial $p_\Upsilon$ is not constant on unit vectors in ${\mathbb R}^{n-1}$ and hence is not invariant under ${\rm O}(n,e_n)$. Now let $d$ be odd. For $x=(x_1,x_2,x_3,0,\dots,0)\in{\mathbb R}^n$ with $\|x\|=1$ we have \begin{eqnarray*} p_\Upsilon(x) &=& \sum_{j=2}^d c_j \Bigg\{\binom{n-4}{k-3} +\binom{n-4}{k-2} \left[\left(x_1^2+x_2^2\right)^j+ \left(x_1^2+x_3^2\right)^j +\left(x_2^2+x_3^2\right)^j \right]\\ &&+ \binom{n-4}{k-1}\left(x_1^{2j}+x_2^{2j}+x_3^{2j}\right)\Bigg\}. \end{eqnarray*} With $$ x(\lambda,\mu)= \left(\lambda,\mu\sqrt{1-\lambda^2},\sqrt{1-\mu^2}\sqrt{1-\lambda^2},0,\dots,0\right)\in{\mathbb R}^n,\qquad \lambda,\mu\in[0,1],$$ we get by an elementary calculation that \begin{eqnarray*} p_\Upsilon(x(\lambda,\mu)) &=& dc_d\left[\binom{n-4}{k-2}-\binom{n-4}{k-1}\right]\mu^{2d-2}\lambda^{2d}+\dots, \end{eqnarray*} where the dots stand for terms in which the exponent of $\lambda$ is less than $2d$ or the exponent of $\mu$ is less than $2d-2$. If $2k\not=n-1$, then $\binom{n-4}{k-2}-\binom{n-4}{k-1}\not=0$, hence the function \begin{equation}\label{43} (\lambda,\mu)\mapsto p_\Upsilon(x(\lambda,\mu)),\quad \lambda,\mu\in[0,1], \end{equation} is not constant. Let us first assume that $2k\not=n-1$. In both cases, $d$ even and $d$ odd, we have seen that the polynomial $p_\Upsilon$ and hence the tensor $\Upsilon$ is not ${\rm O}(n,e_n)$ invariant. Hence, there are a unit vector $a\in{\mathbb R}^{n-1}$ and a rotation $\vartheta\in{\rm O}(n,e_n)$ such that the $(2q)$-tuple $E':= (a,\dots,a)$ satisfies $$ |\Upsilon(\vartheta E')-\Upsilon(E')|=M>0.$$ By (\ref{53}), for $E:= (E',-e_n,\dots,-e_n)$ we have $$ \Gamma(P_{h,t},f)(E) = b(n,k)^{-1}W_k(P_{h,t},f)\Upsilon(E')+R_5(E)$$ with $|R_5(E)|\le C_5 W_k(P_{h,t},f)\varepsilon$, and similarly for $\vartheta E$. Thus, we get \begin{eqnarray*} & & |\Gamma(P_{h,t},f)(\vartheta E)-\Gamma(P_{h,t},f)(E)|\\ & & = |b(n,k)^{-1} W_k(P_{h,t},f)\Upsilon(\vartheta E')+ R_5(\vartheta E)-b(n,k)^{-1}W_k(P_{h,t},f)\Upsilon(E')- R_5(E)|\\ & & \ge b(n,k)^{-1} W_k(P_{h,t},f)|\Upsilon(\vartheta E')-\Upsilon(E')|-|R_5(\vartheta E)|-|R_5(E)|\\ & & \ge b(n,k)^{-1} W_k(P_{h,t},f)(M-C_6\varepsilon) \end{eqnarray*} with $C_6:= 2b(n,k)C_5$. The constants $M>0$ and $C_6$ are independent of $\varepsilon$. Hence, we can choose $\varepsilon>0$ so small that $M-C_6\varepsilon>0$. By (\ref{35a}) we have $$W_k(P_{h,t},f)=\omega_{n-k}\int_{{\mathbb S}^{n-1}}f\,{\rm d} \Psi_k(P_{h,t},\cdot).$$ From $\lim_{t\to 0}P_{h,t}=K_h$ and the weak continuity of the $k$th area measure we deduce that $$ \lim_{t\to 0} W_k(P_{h,t},f)= \omega_{n-k}\int_{{\mathbb S}^{n-1}} f\,{\rm d}\Psi_k(K_h,\cdot).$$ Since $\int f\,{\rm d}\Psi_k(K_h,\cdot)>0$ (which follows from \cite[(4.20), (4.26)]{Sch14} and the fact that all principal radii of curvature of the paraboloid ${\mathcal R}$ are positive), there is a positive number $W$ such that $b(n,k)^{-1}W_k(P_{h,t},f)\ge W$ for all sufficiently small $t>0$. We assume that $t$ is sufficiently small in this sense. 
Then we have $$ |\Gamma(P_{h,t},f)(\vartheta E)-\Gamma(P_{h,t},f)(E)|\ge W(M-C_6\varepsilon)>0.$$ From the convergence $P_{h,t}\to K_h$ for $t\to 0$ and from the assumed weak continuity of $\Gamma$ it follows that $$ |\Gamma(K_h,f)(\vartheta E) - \Gamma(K_h,f)(E)| \ge W(M-C_6\varepsilon)>0.$$ This completes the proof of Lemma 5.2 in the case where $2k\not= n-1$. It remains to consider the case where \begin{equation}\label{44} 2k=n-1\quad\mbox{and} \quad d\mbox{ is odd}. \end{equation} In this case, $n\ge 5$. As before, we denote by ${\mathbb R}^{n-1}$ the space spanned by the basis vectors $e_1,\dots,e_{n-1}$, and in addition by ${\mathbb R}^{n-2}$ the space spanned by $e_1,\dots,e_{n-2}$. In ${\mathbb R}^{n-1}$ we construct $(n-1)$-dimensional polytopes $P_{h,t}$ and convex bodies $K_h$ as above, just by replacing the previous triple $({\mathbb R}^n,{\mathbb R}^{n-1},e_n)$ by $({\mathbb R}^{n-1},{\mathbb R}^{n-2},e_{n-1})$. The numbers $t,h,\varepsilon$ and the set $\omega_h$ have the same meaning as before. The set $\omega_h$ is now a subset of the unit sphere of ${\mathbb R}^{n-1}$. We define the set $$ \Omega_h:= \left\{ \sqrt{1-\alpha^2}\,v +\alpha e_n: v\in \omega_h,\,|\alpha|\le\varepsilon\right\}.$$ This set is invariant under the group ${\rm O}(n,e_n,e_{n-1})$, consisting of the rotations of ${\mathbb R}^n$ that fix $e_n$ and $e_{n-1}$, and it has nonempty interior in ${\mathbb S}^{n-1}$. Since $\omega_h$ is $\varepsilon$-close to $-e_{n-1}$, we can, in view of (\ref{34a}) and (\ref{35}), choose $\varepsilon>0$ so small that \begin{equation}\label{47} \langle u,-e_{n-1}\rangle\ge 1-\varepsilon \quad \mbox{for all }u\in\Omega_h \end{equation} and \begin{equation}\label{48} |\langle u,a\rangle| \le\varepsilon \|a\| \quad\mbox{for all }u\in\Omega_h\mbox{ and all }a\in{\mathbb R}^{n-2}. \end{equation} Let $f$ be a nonnegative, continuous function on ${\mathbb S}^{n-1}$ that is invariant under ${\rm O}(n,e_n,e_{n-1})$, has its support in $\Omega_h$ and is not identically zero. We define $$ {\mathcal F}_{k,h}(P_{h,t})=\{F\in{\mathcal F}_k(P_{h,t}):\Omega_h\cap \nu(P_{h,t},F)\not=\emptyset\}.$$ As above, we can choose $t_0>0$ so small that for $0<t<t_0$ each face of ${\mathcal F}_{k,h}(P_{h,t})$ lies above some face $F^\Box$ of the complex ${\mathcal C}_t$ in ${\mathbb R}^{n-2}$. The subspaces $L_1,\dots,L_{b(n-1,k)}\subset{\mathbb R}^{n-2}$ and the function $W_k(P_{h,t},f)$ are defined as before.
Similar estimates as above show that for each $p$-tuple $$ E:= (E',-e_{n-1},\dots,-e_{n-1}),\quad E':=(\underbrace{a,\dots,a}_{2q})$$ with $a\in {\mathbb R}^{n-2}$, $\|a\|=1$, an estimate corresponding to (\ref{53}) is valid, namely \begin{equation}\label{49} \Gamma(P_{h,t},f)(E) = b(n-1,k)^{-1}W_k(P_{h,t},f)\left(\sum_{j=2}^d c_j Q^{q-j}\sum_{i=1}^{b(n-1,k)} Q_{L_i}^j\right)(E')+R_6(E) \end{equation} with $$ |R_6(E)|\le C_7W_k(P_{h,t},f)\varepsilon.$$ The tensor $$ \Upsilon':= \sum_{j=2}^d c_j Q^{q-j}\sum_{i=1}^{b(n-1,k)} Q_{L_i}^j $$ now corresponds to the polynomial $$ p_{\Upsilon'}(x)= \Upsilon'(\underbrace{x,\dots,x}_{2q}) =\sum_{j=2}^d c_j \|x\|^{2(q-j)} \sum_{I\subset\{1,\dots,n-2\},\,|I|=k}\left(\sum_{i\in I} x_i^2\right)^j.$$ For $$ x(\lambda,\mu)= \left(\lambda,\mu\sqrt{1-\lambda^2},\sqrt{1-\mu^2}\sqrt{1-\lambda^2},0,\dots,0\right)\in{\mathbb R}^n,\qquad \lambda,\mu\in[0,1],$$ we obtain (observing that $d$ is odd) \begin{eqnarray*} p_{\Upsilon'}(x(\lambda,\mu)) &=& dc_d\left[\binom{n-5}{k-2}-\binom{n-5}{k-1}\right]\mu^{2d-2}\lambda^{2d}+\dots, \end{eqnarray*} where the dots stand for terms in which the exponent of $\lambda$ is less than $2d$ or the exponent of $\mu$ is less than $2d-2$. Since $\binom{n-5}{k-2}-\binom{n-5}{k-1}\not=0$ for $2k=n-1$, this function is not constant. The proof can now be completed as before. The essential fact that $\int f\,{\rm d}\Psi_k(K_h,\cdot)>0$, although now $K_h$ is of dimension $n-1$, remains true since $k\le n-2$. \qed \vspace{2mm} Finally, we consider the case $n=3$, $k=1$. We shall need Lemma \ref{Lemma5.1} for this case without change, hence all assumptions about $\Gamma, \varepsilon,\omega,s_0,q, E',E,f,c_j,d$ are as in that lemma. For even numbers $d$, the distinction between dimension three and higher dimensions in the proof of Lemma \ref{Lemma5.2} was not necessary, hence we can now assume that $d$ is odd. For odd numbers $d$, the previous proof breaks down, since for special values of $c_2,\dots,c_d$ with $c_d\not=0$ it can happen that the tensor $\Upsilon$ is invariant under ${\rm O}(3,e_3)$. Therefore, we need to modify the approximating polytopes $P_{h,t}$. We do this as follows. Recall that we have identified ${\mathbb R}^2$ with the plane spanned by $e_1$ and $e_2$. Let $\beta_d:=\pi/d$. In ${\mathbb R}^2$, we define the vectors $$ b_1 = e_1,\quad b_2 = (\cos\beta_d)e_1 +(\sin\beta_d)e_2,\quad b_3 = b_2-b_1. $$ The triangle $T$ with vertices $0$, $b_1$ and $b_2$ has angles $\beta_d$ at $0$ and $((d-1)/2)\beta_d$ at $b_1$ and at $b_2$. For $t>0$, the lines $$ {\mathbb R} b_1+\zeta t b_2,\quad {\mathbb R} b_2+\zeta t b_3,\quad {\mathbb R} b_3+\zeta t b_1,\quad \zeta\in{\mathbb Z},$$ divide the plane ${\mathbb R}^2$ into triangles congruent to $tT$. Together with their edges and vertices they define a polygonal complex in ${\mathbb R}^2$, which we denote by ${\mathcal T}_t$. In the construction of the polytopes $P_{h,t}$, as described after the proof of Lemma \ref{Lemma5.1}, we replace (for $n=3$) the complex ${\mathcal C}_t$ by the complex ${\mathcal T}_t$. Thus, we use the lifting map $\ell$, defined by (\ref{lift}), to lift the vertices of ${\mathcal T}_t$ to the paraboloid ${\mathcal R}$. The polyhedral set $R_t$ is the convex hull of the images. For $z\in{\mathbb R}^2$, the affine map $\alpha_z$ is again defined by (\ref{aff}), where now $r_t$ is the radius of the circumcircle (that is, the circle through the vertices) of $tT$.
The closed disc bounded by the circumcircle of $tT$ contains no other vertices of ${\mathcal T}_t$ besides the vertices of $tT$. By symmetry, the corresponding statement is true for all triangles of ${\mathcal T}_t$. Therefore, under orthogonal projection to ${\mathbb R}^2$, the faces of $R_t$ correspond precisely to the faces of ${\mathcal T}_t$. The role of the cubes $C_z$ of ${\mathcal C}_t$ is now played by the triangles $T_z$ of ${\mathcal T}_t$, where $z$ is the circumcenter (the center of the circumcircle) of $T_z$. On the vertices of $T_z$, the lifting map $\ell$ coincides with the affine map $\alpha_z$. Each face $F$ of $R_t$ is of the form $F=\alpha_z T_z$ for some triangle $T_z$ of ${\mathcal T}_t$. In ${\mathbb R}^2$ we define the one-dimensional subspaces $$ L_r:= {\mathbb R}((\cos r\beta_d)e_1 +(\sin r\beta_d)e_2),\qquad r=0,1,\dots, d-1.$$ We observe that each edge of the complex ${\mathcal T}_t$ is parallel to one of the lines $L_0, L_1, L_{(d+1)/2}$; note that $(d+1)/2$ is an integer $\le d-1$. The definitions of $K_h,P_{h,t},\omega_h,\lambda_z$ and the choices of $h,t_0$ are now, {\em mutatis mutandis}, the same as before. We assume that $0<t<t_0$. Let $\vartheta_d\in{\rm SO}(3,e_3)$ be the rotation by the angle $\beta_d$. The crucial new polytopes are the Minkowski averages $$ P_{h,t}^d:= \frac{1}{d}\sum_{l=0}^{d-1}\vartheta_d^lP_{h,t}.$$ These polytopes clearly satisfy $$ \vartheta_dP_{h,t}^d = P_{h,t}^d$$ and $$ \lim_{t\to 0}P_{h,t}^d=K_h.$$ Define $$ {\mathcal F}_{1,h}(P_{h,t}^d) := \{F\in{\mathcal F}_1(P_{h,t}^d):\omega_h\cap \nu(P_{h,t}^d,F)\not=\emptyset\}.$$ Let $F\in{\mathcal F}_{1,h}(P_{h,t}^d)$. Since $\omega_h$ is open and because of relation (2.26) in \cite{Sch14}, there exists a vector $u\in\omega_h\cap \nu(P_{h,t}^d,F)$ such that $F(P_{h,t}^d,u)=F$, where $F(K,u)$, for a convex body $K$, denotes the support set of $K$ with outer normal vector $u$. By \cite[Thm. 1.7.5(c)]{Sch14} we have $$ F= \frac{1}{d}\sum_{l=0}^{d-1}F(\vartheta_d^lP_{h,t},u).$$ Since $\dim F=1$, for each $l\in\{0,\dots,d-1\}$ the support set $F(\vartheta_d^lP_{h,t},u)$ is either a vertex or an edge of $\vartheta_d^lP_{h,t}$ parallel to $F$, and in the latter case its orthogonal projection to ${\mathbb R}^2$ is parallel to one of the one-dimensional subspaces $L_r$, $r\in\{0,\dots,d-1\}$; here $r$ is independent of $l$, since all the considered edges are parallel. It follows that the orthogonal projection of the edge $F$ to ${\mathbb R}^2$ is parallel to $L_r$. We denote by ${\mathcal F}_{1,h}^{\, r}(P_{h,t}^d)$ the set of edges $F\in {\mathcal F}_{1,h}(P_{h,t}^d)$ for which the projection to ${\mathbb R}^2$ is parallel to $L_r$, $r=0,\dots,d-1$. Let $f$ be a nonnegative, continuous function on ${\mathbb S}^2$ that is invariant under ${\rm O}(3,e_3)$, has its support in $\omega_h$ and is not identically zero. The fact that each edge of $P_{h,t}^d$ is parallel to an edge of some $\vartheta_d^lP_{h,t}$ has the consequence that the former estimates, which depend only on the directions of such edges, remain valid.
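The assertion on the edge directions can also be checked numerically. In the following sketch (an illustration only; $d=5$ is an arbitrary odd choice) the directions $b_1,b_2,b_3$ are matched against the angles $r\beta_d$ modulo $\pi$:

\begin{verbatim}
import numpy as np

d = 5                          # any odd d >= 3
beta = np.pi / d
b1 = np.array([1.0, 0.0])
b2 = np.array([np.cos(beta), np.sin(beta)])
b3 = b2 - b1

def line_index(v):
    """Index r with R*v = L_r, i.e. the angle of v modulo pi is r*beta."""
    ang = np.arctan2(v[1], v[0]) % np.pi
    r = ang / beta
    assert abs(r - round(r)) < 1e-12
    return int(round(r)) % d

print(line_index(b1), line_index(b2), line_index(b3))   # 0 1 3
assert line_index(b3) == (d + 1) // 2
\end{verbatim}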
In particular, in the same way as (\ref{53}) was proved, we obtain \begin{eqnarray}\label{5.1} & & \Gamma(P_{h,t}^d,f)(E)\\ & & = \sum_{j=2}^d c_j \sum_{r=0}^{d-1} (Q^{q-j}Q_{L_r}^j)(E')\sum_{F\in{\mathcal F}^{\, r}_{1,h}(P_{h,t}^d)} {\mathcal H}^1(F)\int_{\nu(P_{h,t}^d,F)} f\,{\rm d}{\mathcal H}^1 +R_7(E) \nonumber \end{eqnarray} with $$ |R_7(E)|\le C_8 W_1(P_{h,t}^d,f)\varepsilon.$$ Here $$ \sum_{F\in{\mathcal F}^{\, r}_{1,h}(P_{h,t}^d)} {\mathcal H}^1(F)\int_{\nu(P_{h,t}^d,F)} f\,{\rm d}{\mathcal H}^1$$ is independent of $r$, since $P_{h,t}^d$ is invariant under $\vartheta_d$. The sum of these terms over all $r$ is equal to $W_1(P_{h,t}^d,f)$. Thus, we obtain \begin{equation}\label{5.2} \Gamma(P_{h,t}^d,f)(E)=\frac{1}{d}W_1(P_{h,t}^d,f)\left(\sum_{j=2}^d c_j Q^{q-j}\sum_{r=0}^{d-1} Q_{L_r}^j\right)(E')+R_7(E). \end{equation} We want to show that the tensor $$ \Upsilon_d:= \sum_{j=2}^d c_jQ^{q-j}\sum_{r=0}^{d-1}Q^j_{L_r}$$ is not invariant under ${\rm SO}(3,e_3)$. For $x=(x_1,x_2,0)\in{\mathbb R}^3$ with $\|x\|=1$ we have $$ p_{\Upsilon_d}(x):=\Upsilon_d(\underbrace{x,\dots,x}_{2q}) =\sum_{j=2}^dc_j\sum_{r=0}^{d-1}(x_1\cos r\beta_d+x_2\sin r\beta_d)^{2j}.$$ For $x(\lambda):=(\lambda,\sqrt{1-\lambda^2},0)$, $\lambda\in[0,1]$, this reads $$ p_{\Upsilon_d}(x(\lambda))=\sum_{j=2}^d c_j \sum_{r=0}^{d-1}\left(\lambda \cos r\beta_d + \sqrt{1-\lambda^2}\,\sin r \beta_d \right)^{2j}.$$ The right-hand side is a polynomial in $\lambda$, defined for all real $\lambda$, and for the coefficient $A_{2d}$ of $\lambda^{2d}$ in this polynomial we obtain \begin{eqnarray*} A_{2d} &=& \lim_{\lambda\to\infty}\lambda^{-2d}\sum_{j=2}^d c_j\sum_{r=0}^{d-1}\left(\lambda\cos r\beta_d + \sqrt{1-\lambda^2}\,\sin r\beta_d \right)^{2j}\\ &=& c_d \sum_{r=0}^{d-1}(\cos r\beta_d+{\rm i}\sin r\beta_d)^{2d}\\ &=& c_d \sum_{r=0}^{d-1} \exp\left(r\frac{\pi}{d}{\rm i}\cdot 2d\right) =dc_d \not=0. \end{eqnarray*} Hence, $p_{\Upsilon_d}(x(\lambda))$ is not a constant function of $\lambda$. Once we know this, the proof can be completed precisely as before.
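The value $A_{2d}=dc_d$ can be confirmed symbolically for a small odd $d$. The following SymPy sketch (an illustration only, shown for $d=3$; the variable names are ours) expands $p_{\Upsilon_d}(x(\lambda))$, in which all odd powers of $\sqrt{1-\lambda^2}$ cancel, and extracts the coefficient of $\lambda^{2d}$:

\begin{verbatim}
import sympy as sp

d = 3                                     # any small odd d will do
lam = sp.Symbol('lambda', real=True)
c = sp.symbols('c2:%d' % (d + 1))         # the constants c_2, ..., c_d
beta = sp.pi / d
u = sp.sqrt(1 - lam**2)

p = sp.expand(sum(c[j - 2]*sum((lam*sp.cos(r*beta)
                                + u*sp.sin(r*beta))**(2*j)
                               for r in range(d))
                  for j in range(2, d + 1)))
# after expansion, p is a genuine polynomial in lambda
A = sp.Poly(p, lam).coeff_monomial(lam**(2*d))
print(sp.simplify(A - d*c[d - 2]))        # 0, i.e. A_{2d} = d*c_d
\end{verbatim}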
\section{Introduction}\label{sec:intro} Due to recent advances in nanofabrication and nanopatterning, dimensionality effects in physical phenomena have become a very active field of research. Even nanowires or two-dimensional films of decidedly three-dimensional materials such as perovskites have become available for further studies.\cite{Schlaus19, Ji19} Still, the majority of publications in this field are dedicated to a narrow range of materials with structurally inherent low-dimensional features. These are for example NbSe$_3$ \cite{Meerschaut75, Hodeau78, Regueiro92, Yang19} and K$_{0.3}$MoO$_3$ \cite{Graham66, Travaglini81, Pouget83a, Inagaki18} as representatives of quasi-one-dimensional (quasi-1D) compounds and graphite/graphene \cite{Novoselov04, Cao18} and transition-metal dichalcogenides \cite{Williams74, Lee19, Hughes77, Shu19, Pyon12, Oike18} as representatives of quasi-two-dimensional (quasi-2D) compounds. On the one hand, this focus comes from the weak bonding between their low-dimensional building units, which eases the fabrication of nano-devices. On the other hand, but even more importantly, intriguing effects of the strongly anisotropic atomic interactions can already be observed in the bulk material. A characteristic physical phenomenon in many structurally low-dimensional materials is the existence of a subtle competition between a structurally distorted state (\textit{e.g.} due to the formation of a charge-density wave) and a superconducting state. The balance between these usually conflicting states may be influenced by external factors such as hydrostatic pressure,\cite{Ido90, Regueiro92, Yasuzuka05, Monceau12, Kiswandhi13} rapid quenching \cite{Oike18} and chemical pressure \textit{via} the intercalation or substitution of additional elements.\cite{Yang12, Pyon12} The complex carbide Sc$_3$CoC$_4$\ represents a promising new member of this family of materials: its structure is characterized by quasi-1D infinite $\left[\right.$Co(C$_2$)$_2\left.\right]$ ribbons oriented along the crystallographic $b$-axis \cite{jeitschko_carbon_1989,Tsokol86,Rohrmoser07, Scherer10, Scheidt11, Scherer12, Eickerling13, He15}, and it shows a phase transition into a superconducting state below $T_{\mathtt{c}} =$~4.5~K.\cite{Scheidt11,Scherer10,Eickerling13} As was recently demonstrated by Wang \textit{et al.}, the superconducting volume fraction of polycrystalline Sc$_3$CoC$_4$\ samples significantly increases with pressure.\cite{Wang16} At the same time, we have shown in a previous combined X-ray and neutron diffraction study that below approx. 72~K Sc$_3$CoC$_4$\ undergoes a Peierls-type transition to a low-temperature phase with a doubled translational period along the $\left[\right.$Co(C$_2$)$_2\left.\right]$ ribbons. \cite{Scherer10, Scherer12, Eickerling13} Crystallographically, the transition from the orthorhombic high-temperature (HT) phase (space group $Immm$) to the monoclinic low-temperature (LT) phase (space group $C2/m$) proceeds \textit{via} a $t2$- followed by an $i2$-transition, leading to a systematic twinning of single crystalline samples in the LT phase.\cite{Vogt09} Yet, the driving forces and the exact path to this structurally distorted LT phase remain controversial. In earlier works, we interpreted anomalies in the electrical resistivity and the magnetic susceptibility of polycrystalline Sc$_3$CoC$_4$\ samples as indications of the emergence of a charge-density wave (CDW) below $\approx$~140~K.
\cite{Eickerling13, Scherer10, Scherer12} This interpretation has been challenged by Zhang \textit{et al.} \cite{Zhang12} Their theoretical study provided no evidence of a Fermi surface instability with respect to a CDW of Sc$_3$CoC$_4$\ in its HT phase. In order to correlate the anomalies in the electrical transport properties with potential structural changes we performed temperature-dependent X-ray diffraction and resistivity measurements on single-crystalline Sc$_3$CoC$_4$\ samples. \section{Methods}\label{sec:experimental} Single crystals of Sc$_3$CoC$_4$\ were grown according to methods described in the literature\cite{Vogt09, Rohrmoser07, He15} and in addition from a lithium metal flux.\cite{Haas19} Needle-like samples with a thickness of $\approx 20$~$\mu$m and a length of $\approx 200$~$\mu$m were obtained from the first method and platelet-like samples with a thickness of $\approx 150$~$\mu$m and a lateral size of $\approx 300$~$\mu$m from the latter. X-ray diffraction measurements\footnote{See Supplemental Material at [URL will be inserted by publisher] for full experimental details and all data recorded between room temperature and 12~K.} of the temperature dependent superstructure reflection intensities (10~K~$< T <$~160~K) were performed on a HUBER eulerian cradle equipped with a MAR345 image-plate detector and operated at the window of a BRUKER FR591 rotating anode (Mo K$_{\alpha}$). Additional temperature-dependent single crystal data ($T > 100$~K) was collected on a BRUKER-NONIUS $\kappa$-goniometer operated at the window of an INCOATEC MicroFocus Tube (Mo K$_{\alpha}$). Mapping of diffuse scattering intensities was done employing a HUBER eulerian cradle goniometer equipped with an INCOATEC MicroFocus Tube (Ag K$_{\alpha}$) and a PILATUS 300K CdTe detector. Cryogenic temperatures $T >$~80~K were generated by a standard OXFORD open-flow N$_2$ cooler,\cite{Cosier86} measurements at $T <$~80~K were performed employing an ARS closed-cycle He-cryostat. The handling of parasitic scattering from the vacuum and radiation shields in the closed-cycle cryostat (beryllium domes) was described elsewhere.\cite{Reisinger07} Numerical values for the intensities of representative superstructure reflections and diffuse features were extracted by image analysis of the collected X-ray diffraction data. Resistivity measurements of single crystals of Sc$_3$CoC$_4$\ contacted in a four-point geometry were carried out using a Physical Property Measurement System (PPMS, QUANTUM DESIGN). Geometry relaxations of the HT and LT phase of Sc$_3$CoC$_4$\ starting from the structural parameters from Eickerling \textit{et al.}\cite{Eickerling13} were performed employing the VASP code.\cite{kresse_efficiency_1996,kresse_efficient_1996,kresse_ab_1994,kresse_ab_1993} The PBE density functional was used throughout,\cite{perdew96, perdew97} the energy cutoff for the plane wave basis set was set to 550~eV and a Brillouin grid sampling of $4~\times 4~\times 2$ and $2~\times 2~\times 2$ was used for the HT- and LT-phase, respectively. Optimizations were stopped when forces were smaller than 0.001~eV/\AA. The PHONOPY code\cite{Togo15} was used for phonon dispersion calculations on Sc$_3$CoC$_4$\ with a $2~\times 2~\times 2$ supercell. 
Forces were calculated with the VASP program employing the PBE density functional,\cite{perdew96, perdew97} a $4 \times 4 \times 2$ $k$-point mesh and an energy cutoff for the plane-wave basis set of 500~eV.\cite{kresse_efficiency_1996,kresse_efficient_1996,kresse_ab_1994,kresse_ab_1993} Temperature-dependent thermal diffuse scattering (TDS) simulations were obtained using the \textit{ab2tds} code.\cite{Wehinger13} The phonon eigenvectors underlying the simulations were generated with PHONOPY on a $24 \times 24 \times 22$ $q$-mesh (containing $\Gamma$). In \textit{ab2tds}, the Fourier transform of the dynamical matrix was calculated on a $9 \times 9 \times 9$ mesh of points. Debye-Waller factors for each temperature were computed and reciprocal-space planes of TDS intensity were sampled on 100~$\times$~100 $q$-points for a wavelength of 0.56087~\AA\ and a lower eigenvalue-cutoff of 0.001~meV. \section{Electrical Resistivity}\label{sec:res} We focus on the results of the electrical resistivity measurements on single-crystalline samples of Sc$_3$CoC$_4$\ first (see Fig.~\ref{fig:res-xrd}a). In accordance with earlier resistivity data from polycrystalline samples,\cite{Eickerling13, Scherer10} two anomalies are observed at $T \approx$~82~K and 149~K. At both temperatures, the overall metallic decrease of $\rho(T)$\ is interrupted by a local increase of the resistivity. The anomaly at 82~K is marked by a sharp jump in $\rho(T)$\ and corresponds to an irreversibility in $\rho(T)$\ for polycrystalline samples that permanently offsets the heating curve from the cooling curve.\cite{Eickerling13, Scherer10} This contrasts with the broader and more gradual character of the anomaly at 149~K. Details about structural changes connected to the two anomalies in the electrical resistivity are provided by the results of detailed temperature-dependent single-crystal X-ray diffraction measurements on Sc$_3$CoC$_4$\ outlined in the following. \begin{figure} \includegraphics[height=0.58\textwidth]{Figure1.ps} \caption{\label{fig:res-xrd} Temperature-dependence of (a) the electrical resistivity $\rho(T)$, (b) the X-ray scattering intensity of superstructure reflections and (c) the X-ray scattering intensity of diffuse rods $I_{XRD}(T)$\ connecting the superstructure reflection positions along $c^\ast$.} \end{figure} \section{X-ray diffraction}\label{sec:recspace-map} A concise analysis of temperature-dependent changes in diffraction space allows insight into the evolution of the LT structure of Sc$_3$CoC$_4$\ from the HT structure. The intensity of the superstructure reflections $I_{XRD}(T)$\ with $k = \left(\pm \frac{1}{2}, \pm \frac{1}{2}, 0\right)$ that result from a fourfold enlargement of the orthorhombic HT unit cell in its $ab$-plane \cite{Vogt09} represents an appropriate order parameter for the transition from the HT to the LT structure. The simultaneous existence of additional reflections at $k = \left(+\frac{1}{2}, +\frac{1}{2}, 0\right)$ and $k = \left(+\frac{1}{2}, -\frac{1}{2}, 0\right)$ is due to the systematic twinning caused by the $t2$-transition. Note that all real-space and reciprocal-space coordinates given hereafter refer to the orthorhombic HT-phase unit cell.
\begin{figure}[h] \centering \includegraphics[height=0.56\textwidth]{Figure2.ps} \caption{Comparison of the X-ray scattering features in the $(h, 1.5, l)$-plane of Sc$_3$CoC$_4$\ as obtained from experiments at different temperatures (a and b) and thermal diffuse scattering (TDS) simulations based on \textit{ab-initio} calculated phonon dispersion relations for the HT-phase structure (c and left part of d) and the LT-phase structure of Sc$_3$CoC$_4$ (right part of d). Note that the Miller indices refer to the orthorhombic HT-phase and that twinning was not considered in the calculations. 1D-profiles of (a) and (c) at $l \approx 2$ are given in the top panel. For details on the simulations, see main text.} \label{fig:tds-sim} \end{figure} Our measurements reveal an increase of $I_{XRD}(T)$\ (see Fig.~\ref{fig:res-xrd}b) in two phenomenologically distinct steps at temperatures between 150~K and 80~K.\footnote{An account of the employed image analysis techniques for the extraction of $I_{XRD}(T)$\ from experimental X-ray diffraction data is given in the Supplemental Material at [URL will be inserted by publisher]} Thereby, $T\approx 150$~K marks the onset of $I_{XRD}(T)$, followed by a steady increase down to 80~K. At about 80~K, a sharp jump of $I_{XRD}(T)$\ is observed. Further cooling towards 10~K entails a saturation of $I_{XRD}(T)$\ already below $\approx$~70~K. We note the close resemblance of this temperature-dependence of the superstructure reflection intensity to the observed behavior of $I_{XRD}(T)$\ in the charge-density wave material $2H$-TaSe$_2$.\cite{Moncton75, Moncton77, Williams76} In this compound, a sharp step in the superstructure reflection intensities marks a lock-in transition from an incommensurate modulation of the atomic positions at higher temperatures to a commensurate modulation at lower temperatures \cite{Moncton75, Moncton77, Williams76}. However, within the available experimental accuracy we could not find hints of the existence of an incommensurate phase in Sc$_3$CoC$_4$, \textit{i.e.} significant temperature-dependent changes in the superstructure reflection positions or the appearance of higher-order satellite reflections. This puts Sc$_3$CoC$_4$\ in line with the extensively studied transition-metal dichalcogenide $1T$-TiSe$_2$ that shows a Peierls-type structural distortion with a twofold commensurate modulation wave vector $k$ down to a temperature of 8.3~K.\cite{Snow03,Kusmartseva09,Sugai80, Wakabayashi78,Hughes77,Holt01,DiSalvo76,Brown80,Kidd02,Joe14} In addition to the pinpoint superstructure reflections, strongly temperature-dependent diffuse rods connecting the superstructure reflection positions along $c^\ast$ can be observed for Sc$_3$CoC$_4$. Representative $(h, 1.5, l)$ reciprocal-space planes reconstructed from measurement data at 200~K and 12~K and showing exclusively superstructure reflections and diffuse rods are shown in Fig.~\ref{fig:tds-sim}a-b.\cite{Note1} Above 200~K, diffuse rod-shaped features without significant intensity modulation along $c^\ast$ are observed. Upon cooling towards 80~K, a monotonic increase of the intensity at the superstructure reflection positions is paralleled by a lambda-shaped peaking of the diffuse intensity between the superstructure reflection positions at 150~K and its subsequent decay to zero (see Fig.~\ref{fig:res-xrd}c).\cite{Note2} Below 80~K, only intensity at the superstructure reflection positions remains.
This marked temperature-dependence, along with an anomalous modulation of the diffuse intensity with varying $h$-index (indicated by the profile in the top panel of Fig.~\ref{fig:tds-sim}a), rules out crystal defects (\textit{e.g.} stacking disorder) as the predominant origin of these diffuse features. A detailed discussion of the characteristic variation of the diffuse intensity in reciprocal space can be found in Appendix~B. Moreover, the coinciding positions of both diffuse rods and pinpoint superstructure reflections can be taken as a hint of their common origin. Similar transitions from precursor diffuse features in reciprocal space to pinpoint superstructure reflections are known for other structurally low-dimensional materials featuring low-temperature periodic distortions, \textit{e.g.} $1T$-TaS$_2$, \cite{Williams74, Scruby75} K$_{0.3}$MoO$_3$\cite{Pouget83a, Pouget16} or NbSe$_3$.\cite{Pouget83, Pouget16} Based on the temperature-dependent changes in diffraction space, three different temperature regimes for the structural properties of Sc$_3$CoC$_4$\ may be assigned: (\textit{\textbf{I}})~a HT-regime above $\approx$~150~K characterized by unmodulated (or only weakly modulated) diffuse rods along $c^\ast$ in reciprocal space, (\textit{\textbf{II}})~a pre-LT-regime between $\approx$~150~K and $\approx$~80~K with coexistent diffuse rods and weak superstructure reflections, and (\textit{\textbf{III}})~a LT-regime below $\approx$~80~K marked by the exclusive presence of strong and pinpoint superstructure reflections. This partitioning into temperature regimes fits equally well with the anomalies in the electrical resistivity $\rho(T)$\ (see Fig.~\ref{fig:res-xrd}a). More specifically, the steady transition between (\textit{\textbf{I}}) and (\textit{\textbf{II}}) in $I_{XRD}(T)$\ is reflected by a broad rise in $\rho(T)$\ at 149~K. At the same time, the step-like increase of $I_{XRD}(T)$\ between (\textit{\textbf{II}}) and (\textit{\textbf{III}}) relates to the sharp increase in $\rho(T)$\ at 82~K. The differing nature of the transitions at around 80~K and 150~K is further emphasized by powder neutron diffraction studies on Sc$_3$CoC$_4$\ performed earlier.\cite{Eickerling13} Therein, step-like lattice parameter changes with sudden increases in $b$ and $c$ and a decrease in $a$ were observed upon cooling of the samples to below 80~K. There was no evidence for a comparable anomaly in $a$, $b$ and $c$ in the temperature region around 150~K. \section{Discussion}\label{sec:discussion} The starting point for the interpretation of the above results is the structural model of the Sc$_3$CoC$_4$\ HT-phase. The existence of rod-shaped features in diffraction space may be related to layered structural moieties in real space. Taking into account the orientation of the rods along $c^\ast$, these layers must extend parallel to the crystallographic $ab$-plane of the orthorhombic HT cell and can be associated with stacked ribbons of interconnected $\left[\right.$Co(C$_2$)$_2$Co$\left.\right]$ hexagons (shaded in red and green in Fig.~\ref{fig:struct-layers}a). \begin{figure*} \includegraphics[height=0.38\textwidth]{Figure3.ps} \caption{Ball-and-stick representation of the layered building units of HT Sc$_3$CoC$_4$\ in (a) the crystallographic $bc$-plane and (b) the crystallographic $ab$-plane of the orthorhombic unit cell.
In (b) the sinusoidal displacive modulation of the cobalt and scandium atom positions as observed for the low-frequency phonon modes between W and T (see text) and the monoclinic LT-phase\cite{Eickerling13} is indicated by arrows.} \label{fig:struct-layers} \end{figure*} In a simplistic picture, the diffuse rods may be attributed to disorder between the layered building units of the HT-structure along $c$. However, the characteristic intensity modulation of the rod intensity perpendicular to $c^\ast$ (see profile above Fig.~\ref{fig:tds-sim}a) precludes an explanation in terms of a static stacking disorder involving the slippage of complete layers (see Appendix~\ref{appendix2}). An alternative explanation for the occurrence of diffuse intensity above 80~K might be provided by precursor dynamic fluctuations along the displacement coordinates of the static Peierls-type distortion evolving below 80~K. The temperature-dependent contraction of the diffuse intensity into superstructure reflections between 150~K and 80~K may in turn be linked to the softening of a phonon mode at $k = \left(\frac{1}{2}, \frac{1}{2}, 0\right)$.\cite{Eickerling13} \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{Figure4.ps} \caption{Calculated phonon dispersion along selected high-symmetry paths in the Brillouin zone of HT Sc$_3$CoC$_4$. The path between W ($\frac{1}{2}, \frac{1}{2}, 0$) and T ($\frac{1}{2}, \frac{1}{2}, \frac{1}{2}$) is highlighted.} \label{fig:ht-phon-disp} \end{figure} In Fig.~\ref{fig:ht-phon-disp} the phonon dispersion of Sc$_3$CoC$_4$\ is shown along selected lines of the first Brillouin zone (BZ) of the orthorhombic HT phase. Along the path W~($\frac{1}{2}, \frac{1}{2}, 0$)--T~($\frac{1}{2}, \frac{1}{2}, \frac{1}{2}$) a low-frequency branch with marginal dispersion can be identified. The displacement pattern corresponding to the mode at W is in close analogy to the static displacement pattern in the LT-phase of Sc$_3$CoC$_4$.\cite{Eickerling13} Furthermore, the path W--T of the low-lying phonon branch can be correlated with the course of the diffuse rods in reciprocal space (see Appendix~\ref{appendix1}).\cite{Ishida73, Xu05} Figuratively, its flat course can be interpreted in terms of equal excitation energies for an infinite set of dynamic LT-phase-like displacements of the cobalt and scandium atoms (illustrated in Fig.~\ref{fig:struct-layers}b for a layer section in the $ab$-plane) with differing modulations along the stacking direction $c$.\footnote{The fact that the frequencies along the branch remain positive and the calculations do not predict the instability of the HT-phase most likely highlights the shortcomings of the standard GGA functional employed within this study to properly resolve the very flat energy surface region separating the HT- and LT-phase of Sc$_3$CoC$_4$.} The superposition of all these dynamic displacements yields the picture of disorder between the layered building units of Sc$_3$CoC$_4$. In fact, similar behavior connected to weak coupling between layered building units has been observed for other compounds such as the francisite Cu$_3$Bi(SeO$_3$)$_2$O$_2$Cl.
In the phonon dispersion of its HT-phase, a nearly dispersionless branch connects a zone-center mode at $\Gamma$ with equal atom displacements in all constituting layers to a modulated variant of the mode at Z with layer-wise inverted atom displacements.\cite{Milesi20} To underline the correspondence between diffuse rods in X-ray diffraction and a soft branch in the phonon dispersion, we performed simulations of the thermal diffuse scattering (TDS) contribution to the diffracted intensity in the $(h, 1.5, l)$-plane. Consistent with the inferences made above, simulations based on the phonon dispersion of the HT-phase of Sc$_3$CoC$_4$\ and assuming a temperature of 200~K (Fig.~\ref{fig:ht-phon-disp}) reproduce the experimental observations not only in the general positions and direction of the rods, but even in details like the non-trivial intensity variations along the $a^\ast$ direction. A comparison between the experimentally obtained $(h, 1.5, l)$-plane at 200~K and the TDS simulation for the HT-phase of Sc$_3$CoC$_4$\ is given in Figs.~\ref{fig:tds-sim}a and c with corresponding profiles in the top panel. The weak modulation of the simulated TDS intensity with a period of two reciprocal lattice constants along $c^\ast$ in Fig.~\ref{fig:tds-sim}c is most likely due to numerical artifacts introduced by a Fourier interpolation step and not supported by reciprocal-space reconstructions based on X-ray diffraction data.\footnote{Key properties of the phonon dispersion remain unaffected by this problem as is demonstrated in the Supplemental Material at [URL will be inserted by publisher].} The phonon dispersion of the HT-phase of Sc$_3$CoC$_4$, however, cannot explain the gradual vanishing of the diffuse rods between 150~K and 80~K and the appearance of pinpoint superstructure reflections. Fig.~\ref{fig:tds-sim}b shows a reconstruction of the $(h, 1.5, l)$-plane from measurement data at 12~K, illustrating the extensive reorganization of the X-ray diffraction pattern below 80~K. To account for these changes, a phonon softening mechanism may be invoked that selectively reduces the frequency of the phonon mode at W in the soft W--T branch to zero. Simultaneously with the increasing dispersion between W and T comes a preference for LT-phase-like dynamic atom displacements without superimposed modulation along $c$ and a progressive structural ordering. Zero phonon frequency at W is reached at 80~K, resulting in the formation of the LT-phase of Sc$_3$CoC$_4$\ with static atom displacements from the equilibrium positions in the HT-phase. Consequently, no more prominent diffuse features are found in TDS simulations of the $(h, 1.5, l)$-plane at 12~K employing the phonon eigenvalues of the LT-phase\cite{Eickerling13} (see right part of Fig.~\ref{fig:tds-sim}d).\footnote{A plot of the phonon dispersion for the LT-phase structure of Sc$_3$CoC$_4$\ can be found in the Supplemental Material at [URL will be inserted by publisher].} Only localized TDS contributions at positions of the experimentally observed superstructure reflections remain, consistent with a doubling of the unit cell. We note that twinning was not considered within the simulations and thus every second reflection position is missing in Fig.~\ref{fig:tds-sim}d. That the rearrangement of diffuse features in reciprocal space cannot be attributed to temperature effects alone is illustrated by TDS simulations for the HT-phase structure of Sc$_3$CoC$_4$\ at 12~K (left part of Fig.~\ref{fig:tds-sim}d).
Without a structural transition, weak diffuse rods would still be present at this temperature. Again, parallels to the francisite Cu$_3$Bi(SeO$_3$)$_2$O$_2$Cl may be drawn, where cooling induces a successive frequency reduction of the phonon mode at Z in the soft $\Gamma$-Z branch setting the stage for a displacive transition into a static LT-phase with doubled $c$-parameter at 115~K.\cite{Constable17,Milesi20} We may thus propose a consistent model for the observations in electrical resistivity and intensity-weighted reciprocal space: In the HT-regime (\textit{\textbf{I}}, see Fig.~\ref{fig:res-xrd}) above 150~K, dynamic disorder driven by the phonon modes along W--T leads to the occurrence of diffuse rods in reciprocal space. At approx. 150~K, the mode at W starts to soften, thus continuously reducing the degree of thermal fluctuations in the pre-LT regime (\textit{\textbf{II}}) and leading to the successive ordering of the layers stacked in the $c$ direction. Around 80~K, the softening process is complete and the displacement pattern of the phonon mode freezes into the static atomic positions observed in the LT-phase (\textit{\textbf{III}}) of Sc$_3$CoC$_4$. \section{Conclusion}\label{sec:conclusion} To conclude, we provide experimental and theoretical evidence for a soft-phonon-driven formation of a Peierls-type structurally distorted state in Sc$_3$CoC$_4$\ upon cooling. Based on the new results, the interplay between two distinct transitions, \textit{i.e.} a charge-density wave transition at 150~K and a Peierls-type distortion at 80~K, discussed in earlier publications \cite{Scherer10,Scherer12,Eickerling13,Zhang12} can now be consistently described by a single extended structural phase transition \textit{via} an intermediate state between 80~K and 150~K. This intermediate state is characterized by phonon-driven dynamic atom displacements in the crystallographic $ab$-plane with strongly temperature-dependent frequency and correlation-length along the $c$-axis. Inelastic X-ray or neutron scattering experiments might provide further information on the progression of the phonon-softening mechanism towards the static structurally distorted LT-phase observed below 80~K. \section{Investigated samples} \label{sec:investigated-samples} Two types of Sc$_3$CoC$_4$\ single-crystals were investigated in this work: needle-shaped single-crystals obtained according to methods described in Refs.~\citenum{Rohrmoser07,Vogt09,He15} and platelet-like single-crystals from heat treatment of Sc$_3$CoC$_4$\ powder in a lithium flux.\cite{Haas19} The needle-shaped single-crystals were used in the determination of the temperature-dependent X-ray scattering intensity of pinpoint superstructure reflections and the temperature-dependent electrical resistivity. Sample sizes were characterized by an approximate thickness of 20~$\mu$m and an approximate length of 200~$\mu$m (see Fig.~\ref{fig:needle}a for a photographic image of a typical sample). Reconstructions of common reciprocal-space planes from room-temperature X-ray diffraction data (Figs.~\ref{fig:needle}b-d) indicate high crystalline quality with only minor imperfections. Comparison with corresponding reciprocal-space planes at 150~K (Figs.~\ref{fig:diffraction-data-150K}a-c) underlines that the sample quality is not degraded by systematic twinning in the transition from the high-temperature to the low-temperature phase of Sc$_3$CoC$_4$.
Due to their larger size with an approximate thickness of 150~$\mu$m and an approximate lateral extension of 300~$\mu$m (see Fig.~\ref{fig:platelet}a for a photographic image of the sample), the platelet-like single-crystals were employed in the temperature-dependent tracking of diffuse X-ray scattering features. Significant but non-interfering crystal imperfections are apparent from scattered intensity at non-indexed positions in reconstructed reciprocal-space planes at room-temperature (Figs.~\ref{fig:platelet}b-d) and 150~K (Figs.~\ref{fig:diffraction-data-150K}d-f). In order to verify that the type of the sample does not affect the results discussed in the main text, measurements were performed on both types of single-crystals. In analogy to the platelet-like single-crystals, weak diffuse rods can be recognized in the room-temperature reconstruction of the $(h,1.5,l)$-plane for a needle-shaped single-crystal (Fig.~\ref{fig:needle-diffuse}). Complementarily, the characteristic two-step increase of the superstructure reflection intensity $I_{XRD}(T)$\ found for the needle-shaped single-crystals is reproduced by a platelet-like single-crystal (Fig.~\ref{fig:platelet-ixrd-rho}a). This also holds for the corresponding two jumps in the electrical resistivity $\rho(T)$\ (Fig.~\ref{fig:platelet-ixrd-rho}b). Sample size effects may account for the slight downward shift of the anomalies in $I_{XRD}(T)$\ and $\rho(T)$\ as compared to the needle-shaped single-crystals.\cite{Natarajan69} Notably, the abrupt nature of the lower anomaly in scattering intensity and electrical resistivity is further emphasized by a sizable temperature hysteresis between the cooling and heating cycles for the platelet-like single-crystal. \begin{figure}[p] \includegraphics[width=1.0\textwidth]{Figure1S.ps} \caption{(a)~Photographic image of a typical needle-shaped Sc$_3$CoC$_4$\ single-crystal with crystal axes $a$, $b$ and $c$ referring to the orthorhombic high-temperature phase unit cell indicated by coloured lines. Reconstructions of reciprocal-space planes (b)~$(hk0)$, (c)~$(h0l)$ and (d)~$(0kl)$ from room-temperature X-ray diffraction data. Predicted reflection positions for HT-Sc$_3$CoC$_4$\ are indicated by red circles.} \label{fig:needle} \end{figure} \begin{figure}[p] \centering \includegraphics[width=1.0\textwidth]{Figure2S.ps} \caption{(a)~Photographic image of a platelet-like Sc$_3$CoC$_4$\ single-crystal with crystal axes $a$, $b$ and $c$ referring to the orthorhombic high-temperature phase unit cell indicated by coloured lines. Reconstructions of reciprocal-space planes (b)~$(hk0)$, (c)~$(h0l)$ and (d)~$(0kl)$ from room-temperature X-ray diffraction data.
Predicted reflection positions are indicated by red circles.} \label{fig:platelet} \end{figure} \begin{figure}[p] \centering \includegraphics[width=0.85\textwidth]{Figure3S.ps} \caption{Reconstructions of reciprocal-space planes $(hk0)$, $(h0l)$ and $(0kl)$ from X-ray diffraction data for a platelet-like (a-c) and a needle-shaped Sc$_3$CoC$_4$\ single-crystal (d-f) at 150~K.} \label{fig:diffraction-data-150K} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{Figure4S.ps} \caption{Diffuse X-ray scattering features in the $(h,1.5,l)$-plane for a needle-shaped single-crystal at room-temperature.} \label{fig:needle-diffuse} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.57\textwidth]{Figure5S.ps} \caption{Temperature-dependence of (a)~the electrical resistivity $\rho(T)$\ and (b)~the X-ray scattering intensity at superstructure reflection positions $I_{XRD}(T)$\ for a platelet-like single-crystal.} \label{fig:platelet-ixrd-rho} \end{figure} \FloatBarrier \newpage \section{X-ray diffraction experiments}\label{sec:x-ray} The temperature-dependence of the superstructure reflection intensity was determined by means of variable-temperature X-ray diffraction experiments on a single-crystalline needle of Sc$_3$CoC$_4$\ mounted on a Kapton mesh (MiTeGen). Depending on the temperature range, two different experimental setups were employed: Sample temperatures in the range 100~K~$<T<$~300~K were reached on a CAD4 $\kappa$-goniometer (BRUKER) fitted with a micro-focus tube (INCOATEC, $\lambda$(Mo~K$_\alpha$) = 0.71073~\AA) and an open-flow N$_2$ gas stream cooler (OXFORD).\cite{Cosier86} The scattering intensity was recorded in $\omega$-scans ($\Delta\omega$ = 2$^{\circ}$, $t$ = 582.5~s) on an XPAD~3.2 hybrid pixel detector.\cite{Wenger14} $\phi$-scans ($\Delta\phi$ = 0.5$^{\circ}$, $t$ = 30~min) in the temperature range 10~K$<T<$100~K were performed employing a Huber Eulerian cradle goniometer equipped with a modified closed-cycle He cryostat (ARS Cryo),\cite{Reisinger07} a MAR345 image plate detector (MARXPERTS), and a graphite-monochromated FR591 rotating anode with molybdenum target (BRUKER, $\lambda$(Mo~K$_\alpha$) = 0.71073~\AA). An updated version of the latter setup featuring a micro-focus tube (INCOATEC, $\lambda$(Ag~K$_\alpha$)~= 0.56087~\AA) and a Pilatus3 R CdTe detector (DECTRIS) was used to collect long-exposure diffraction data for a lithium-flux-grown plate-shaped single crystal of Sc$_3$CoC$_4$\ (12~K~$< T <~$100~K, $\phi$-scans, $\Delta \phi$ = 0.5$^{\circ}$, $t$ = 120~s). The sample was glued to a Kapton microloop (MiTeGen) using nail varnish. In the temperature range 100~K~$< T <$~300~K the closed-cycle He cryostat was replaced by an open-flow N$_2$ gas stream cooler (OXFORD).\cite{Cosier86} \newpage \section{Twinning}\label{sec:twinning} A direct observation of the twin domains in Sc$_3$CoC$_4$, e.g. by means of transmission electron microscopy (TEM), has not been achieved so far. Yet, inferences about the most probable orientation of the twin domain boundaries can be made on group-theoretical grounds. Hints at the domain sizes come from single-crystal X-ray diffraction data.
As already pointed out by Vogt \textit{et al.},\cite{Vogt09} systematic twinning in Sc$_3$CoC$_4$\ is induced by a $t2$ step in the symmetry reduction pathway from the high-temperature to the low-temperature phase space-group $I\frac{2}{m}\frac{2}{m}\frac{2}{m}$ $\overset{t2}{\rightarrow}$ $I11\frac{2}{m}$ $\overset{i2}{\rightarrow}$ $B11\frac{2}{m}$ ($\hat{=}$~$C1\frac{2}{m}1$). Thereby, twin domains with two distinct orientation states are formed. Consideration of the point groups of the involved space groups is sufficient for a determination of the twin operations relating the two possible twin domain orientations to each other.\cite{ITC-VolD-Twin} Coset decomposition of the point group $\frac{2}{m}\frac{2}{m}\frac{2}{m}$ ($I\frac{2}{m}\frac{2}{m}\frac{2}{m}$) with respect to the point group $11\frac{2}{m}$ ($I11\frac{2}{m}$) yields the set of formally equivalent twin operations $\{m_{[100]}, 2_{[100]}, m_{[010]}, 2_{[010]}\}$. The mirror plane $m_{[100]}$ has the highest chance of actually defining the macroscopic twin element that partitions the crystal into twin domains. This is for two reasons: In the formation of twin domains, mirror planes mostly take precedence over rotation axes.\cite{ITC-VolD-Twin} Additionally, hypothetical twin domain boundaries aligning with the mirror plane $m_{[010]}$ would bisect strongly covalent bonds in the infinite $\left[\right.$Co(C$_2$)$_2$Co$\left.\right]$ ribbons of Sc$_3$CoC$_4$. Thus, a polysynthetic twin state with a stacking of twin domains along the $a$-axis of the high-temperature phase unit cell may be proposed. An approximate upper bound for the twin domain sizes can be obtained by twin integration of low-temperature single-crystal X-ray diffraction data with \textit{EVAL14}\cite{Duisenberg03} and subsequent data reduction with \textit{TWINABS}.\cite{Krause15} Frame-wise scaling of collected reflection intensities as implemented in \textit{TWINABS}\cite{Krause15} compensates for variations in absorption and irradiated crystal volume during sample rotation. The corresponding scale-factors for each frame minimize intensity differences between symmetry-equivalent reflections and can be refined separately for each twin domain.\cite{Sheldrick07,Sheldrick15} Taking into account the presumed polysynthetic twin domain arrangement, the domain centers in Sc$_3$CoC$_4$\ cannot be moved to the rotation axis of the goniometer simultaneously. As a result, large twin domains with dimensions comparable to the X-ray beam diameter of approx. 90~$\mu$m (FWHM)\cite{Incoatex_ImuS_AgKa} should be affected by significant and domain-specific staggering around the rotation axis. This translates into a different variation of symmetry-equivalent reflection intensities for the individual twin domains and a different behavior of the frame-wise scale-factor refined by \textit{TWINABS}.\cite{Krause15} If the rotation axis is oriented perpendicular to the plane of the domain walls, however, no differences in the domain-specific scale-factors can be observed and a domain size assessment is not possible. This is the case for the needle-shaped crystals, whose long axis, corresponding to the crystallographic $a$-axis, was oriented approximately along the rotation axis (see crystal coordinate system in Fig.~\ref{fig:needle}a).
In contrast, the $c$-axis of a platelet-like crystal (dimensions 140~$\mu$m~$\times$ 338~$\mu$m~$\times$ 350~$\mu$m) was oriented parallel to the rotation axis, so that differences in the scale-factor variation may be expected in the case of large twin domains. Yet, the obtained scale-factors for the twin domains at a temperature of 12~K vary almost perfectly in sync (see Fig.~\ref{fig:scale-factor}a). Such behavior points to twin domains with a thickness much smaller than the beam diameter of 90~$\mu$m. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{Figure6S.ps} \caption{(a)~Frame-wise scale factor variation as obtained for a platelet-like crystal at 12~K and (b)~simulation setup taking into account the crystal shape and a Gaussian beam-profile. Simulated scale factor variations for a domain thickness of 87~$\mu$m and 17~$\mu$m are given in c and d.} \label{fig:scale-factor} \end{figure} For a more accurate estimate of the maximum domain size, the rotation of a twinned crystal in an X-ray beam with Gaussian profile (90~$\mu$m FWHM) was simulated. Thereby, the crystal shape was taken from the experiment and partitioned into lamellar domains of varying thickness (see Fig.~\ref{fig:scale-factor}b). The intensity-weighted irradiated volumes of domains~1 and 2 were then sampled for different rotation angles. Conversion into scale-factors was achieved by normalization of the irradiated volumes to their average and subsequent calculation of the reciprocal values. As can be recognized from Figs.~\ref{fig:scale-factor}c and d, only twin domains with a thickness below approx. 20~$\mu$m along the $a$-axis lead to a synchronous scale-factor variation. \newpage \section{Generation of reciprocal-space maps}\label{sec:gen-rec-space-map} For the creation of reciprocal layer reconstructions from experimental X-ray diffraction data, the program \textit{htd2predict} was used.\cite{Langmann19} Similar to the program \textit{XCAVATE},\cite{Estermann98} it is capable of extracting scattering intensities in arbitrary pre-defined slices of reciprocal space from diffraction images. Unlike precession imaging, which relies on a defined specimen alignment with respect to the rotation axis, no pre-orientation of the sample is needed for this purpose. The only requirement is the use of an area detector for the acquisition of the diffraction data. Thereby, the following procedure is used: First, the orientation matrix describing the crystal orientation with respect to the laboratory axis system is determined with the program \textit{DIRAX}\cite{Duisenberg92} and refined with the program \textit{EVAL14}.\cite{Duisenberg03} Building on this information, the positions on the diffraction images corresponding to the points in the desired reciprocal-space layer are predicted. The detected scattering intensities at these positions are sampled and assigned to their reciprocal-space coordinates. \newpage \section{Temperature-dependent reciprocal-space maps}\label{sec:rec-space-map} \begin{figure}[h] \includegraphics[width=0.7\textwidth]{Figure7S.ps} \caption{\label{fig:rec-space-maps-above-130K} Temperature-dependence of the X-ray scattering intensities in the $(h, 1.5, l)$ reciprocal-space plane between 300~K and 130~K. Note that the Miller indices refer to the orthorhombic high-temperature phase unit cell.
The X-ray scattering intensity at irregular positions can be attributed to imperfections in the investigated large single-crystal.} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{Figure8S.ps} \caption{\label{fig:rec-space-maps-below-130K} Temperature-dependence of the X-ray scattering intensities in the $(h, 1.5, l)$ reciprocal-space plane between 100~K and 12~K. Note that the Miller indices refer to the orthorhombic high-temperature phase unit cell. The X-ray scattering intensity at irregular positions can be attributed to imperfections in the investigated large single-crystal. Weak ring-shaped intensity between 80~K and 12~K is due to an incomplete subtraction of the scattering contributions from the beryllium vacuum shrouds of the closed-cycle He cryostat (see Sec.~\ref{sec:x-ray}).} \end{figure} \newpage \section{Temperature-dependent intensity profiles} \label{sec:profiles} \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{Figure9S.ps} \caption{Horizontal cuts through the reciprocal-space maps from Sec.~\ref{sec:rec-space-map}. The path $(h, 1.5, 2.0)$ including superstructure reflection positions was sampled for temperatures between 300~K and 150~K (a) and between 130~K and 12~K (b). Note that the Miller indices refer to the orthorhombic high-temperature phase unit cell and that a Gaussian background was subtracted.} \label{fig:profile_at_1} \end{figure} \begin{figure}[p] \centering \includegraphics[width=0.8\textwidth]{Figure10S.ps} \caption{Horizontal cuts through the reciprocal-space maps from Sec.~\ref{sec:rec-space-map}. The path $(h, 1.5, 1.5)$ excluding superstructure reflection positions was sampled for temperatures between 300~K and 100~K. Note that the Miller indices refer to the orthorhombic high-temperature phase unit cell and that a Gaussian background was subtracted.} \label{fig:profile_at_2} \end{figure} \FloatBarrier \newpage \section{Extraction of scattering intensities} \label{sec:extr-scatt-intens} Due to the different characteristics of diffuse rods and superstructure reflections, adapted approaches had to be used in the extraction of their intensities. The superstructure reflections are pinpoint and spread only over a few X-ray diffraction images collected in $\phi$-scanning mode. This situation allows the use of single unprocessed diffraction images to track the temperature-dependent intensity of a representative $(-1.5, 0.5, 0)$ reflection.\footnote{referring to the unit cell of the high-temperature phase} The intensity sampling was done using the program ImageJ\cite{Schneider12} by summing up the detected intensity of the pixels inside a quadratic box around the predicted position of the $(-1.5, 0.5, 0)$ reflection. From this, a background intensity, obtained by multiplying the average pixel intensity at the edges of the integration box by the number of contained pixels, was subtracted. The use of different $\phi$-increments of 0.5$^\circ$ and 2.0$^\circ$ per frame in the temperature ranges 10~K~$< T <$ 100~K and 100~K~$< T <$ 300~K was compensated for by adding up four frames with 0.5$^\circ$ increment. Both temperature-dependent intensity data sets were scaled to the reflection intensity at 100~K. The diffuse rods, by contrast, are weak and span a large number of X-ray diffraction images. Thus, the intensity extraction scheme was based on reciprocal-space reconstructions that were generated with the program \textit{htd2predict}\cite{Langmann19} (see Sec.~\ref{sec:gen-rec-space-map}).
Thereby, the intensity at positions $(2.5, 1.5, 0.5)$, $(2.5, 1.5, 1.5)$ and $(2.5, 1.5, 2.5)$,\cite{Note1} \textit{i.e.} midway between the superstructure reflection positions in a row along $c^\ast$, was sampled by a box integration method in analogy to the superstructure reflections. Due to their continuous nature along $c^\ast$, however, only the pixel intensity at the two edges of the integration box parallel to the diffuse rods was used in the determination of the subtracted background intensity. In a final step, the intensities at all three positions were averaged. \newpage \section{Simulations of thermal diffuse scattering (TDS)} \label{sec:TDS-sim} In this work, the simulation of thermal diffuse scattering (TDS) contributions with the program \textit{ab2tds}\cite{Wehinger13} relies on \textit{ab-initio} dynamical matrices obtained by the finite-displacement method. The necessary Fourier interpolation of the dynamical matrix\cite{Wehinger14a} during the simulation process may introduce artifacts such as the observed weak modulation of the diffuse rod intensity for the high-temperature (HT) phase of Sc$_3$CoC$_4$\ along the $c^\ast$-axis. To ensure that the dynamical matrix is essentially left intact by the Fourier interpolation step, inelastic X-ray scattering (IXS) intensity maps for a temperature of 300~K were simulated in \textit{ab2tds}.\cite{Wehinger13} By plotting the calculated variation of the inelastically scattered X-ray intensity $I(E, \mathbf{Q})$ with energy transfer $E$ and momentum transfer $\mathbf{Q}$, salient features of the phonon dispersion should be reproduced. Differences may only arise due to the dependence of $I(E, \mathbf{Q})$ on the scattering factors $S_\alpha(\lambda, \mathbf{Q})$ for atoms $\alpha$ and the scalar product $\mathbf{Q} \cdot \mathbf{e}^\alpha_{v\mathbf{Q}}$ of the momentum transfer $\mathbf{Q}$ and the polarisation vector for phonon branch $v$. Additionally, convolution of $I(E, \mathbf{Q})$ with a resolution function leads to a blurring of details in the phonon dispersion. Comparison of the simulated IXS intensity map for the HT-phase of Sc$_3$CoC$_4$\ in Fig.~\ref{fig:tds-dispers-ht}a with the unprocessed phonon dispersion in Fig.~\ref{fig:tds-dispers-ht}b shows excellent overall agreement. Most notably, the soft phonon branch between the high-symmetry points W and T in the phonon dispersion reappears unaltered in the IXS intensity map. Vanishing of the IXS intensity for the phonon branches between $\Gamma$ and R at approx. 120~meV may be caused by a small $S_\alpha(\lambda, \mathbf{Q})$ or orthogonality of $\mathbf{Q}$ and $\mathbf{e}^\alpha_{v\mathbf{Q}}$ in this region. There is also good correspondence between the simulated IXS intensity map for the LT-phase of Sc$_3$CoC$_4$\ in Fig.~\ref{fig:tds-dispers-lt}a and the unprocessed phonon dispersion in Fig.~\ref{fig:tds-dispers-lt}b. Although high-energy regions of the IXS map suffer from weak intensity, the absence of a soft phonon branch in the LT-phase is clearly reflected.
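For concreteness, the box-integration scheme of Sec.~\ref{sec:extr-scatt-intens} can be summarized in a few lines of Python/NumPy. This is a minimal sketch under our assumptions about the array layout; the function name and the \texttt{rod\_axis} argument are illustrative placeholders and not part of the actual analysis pipeline:
\begin{verbatim}
import numpy as np

def box_intensity(image, center, half_width, rod_axis=None):
    # Sum the pixel intensities inside a quadratic box around the
    # predicted reflection (or rod) position and subtract an edge-based
    # background estimate (average edge pixel value times box size).
    r, c = center
    box = image[r - half_width:r + half_width + 1,
                c - half_width:c + half_width + 1]
    if rod_axis is None:
        # pinpoint reflection: background from all four box edges
        edges = np.concatenate([box[0, :], box[-1, :],
                                box[1:-1, 0], box[1:-1, -1]])
    elif rod_axis == 0:
        # diffuse rod running along axis 0: use only the two box
        # edges parallel to the rod
        edges = np.concatenate([box[:, 0], box[:, -1]])
    else:
        edges = np.concatenate([box[0, :], box[-1, :]])
    return box.sum() - edges.mean() * box.size
\end{verbatim}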
\begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{Figure11S.ps} \caption{(a)~Simulated inelastic X-ray scattering (IXS) map at 300~K and (b)~unprocessed phonon dispersion relation for the high-temperature phase of Sc$_3$CoC$_4$\ along the same high-symmetry paths in the Brillouin zone.} \label{fig:tds-dispers-ht} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{Figure12S.ps} \caption{(a)~Simulated inelastic X-ray scattering (IXS) map at 300~K and (b)~unprocessed phonon dispersion relation for the low-temperature phase of Sc$_3$CoC$_4$\ along the same high-symmetry paths in the Brillouin zone.} \label{fig:tds-dispers-lt} \end{figure} \FloatBarrier
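As a pointer for reproducing the phonon input to the above TDS and IXS simulations, the sampling described in Sec.~\ref{sec:TDS-sim} could be scripted with the phonopy Python API along the following lines. This is a minimal sketch, assuming a phonopy~2.x installation and precomputed VASP force sets; the file names are placeholders and not part of our actual workflow:
\begin{verbatim}
import phonopy
from phonopy.phonon.band_structure import (
    get_band_qpoints_and_path_connections)

# Load the HT-phase unit cell and forces from a 2x2x2 supercell
ph = phonopy.load(unitcell_filename="POSCAR",
                  supercell_matrix=[2, 2, 2],
                  force_sets_filename="FORCE_SETS")

# Eigenvectors on the q-mesh handed over to ab2tds (contains Gamma)
ph.run_mesh([24, 24, 22], with_eigenvectors=True, is_gamma_center=True)

# Dispersion along W (1/2,1/2,0) -- T (1/2,1/2,1/2): the soft branch
paths = [[[0.5, 0.5, 0.0], [0.5, 0.5, 0.5]]]
qpoints, connections = get_band_qpoints_and_path_connections(
    paths, npoints=51)
ph.run_band_structure(qpoints, path_connections=connections,
                      labels=["W", "T"])
frequencies = ph.get_band_structure_dict()["frequencies"]  # in THz
\end{verbatim}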
\section{Introduction} \label{intro} \input{intro.tex} \section{Galaxies and Shape Measurement} \label{sec:1} \input{sec1.tex} \section{Sparse Deconvolution} \label{sec:2} \input{sec2.tex} \section{Deconvolution with Shape Constraint} \label{sec:3} \input{sec3.tex} \section{Numerical Experiments} \label{sec:4} \input{sec4.tex} \section{Reproducible Research} \label{sec:5} \input{sec5.tex} \section{Conclusion} \label{conclu} \input{conclu.tex} \begin{acknowledgements} \label{ack} \input{ack.tex} \end{acknowledgements} \clearpage \begin{appendices} \section{Expressing Galaxy Ellipticity with Inner Products} \label{append:ell2inner_prod} \input{append_A.tex} \section{Dataset Generation} \label{append:dataset} \input{append_B.tex} \end{appendices} \clearpage \bibliographystyle{spmpsci} \subsection{Sparse Restoration} \label{subsec:2_1} Sparsity has proven to be an effective regularization technique for denoising~\cite{farrens2017space,starck2015sparse}. Its use in deconvolution along with positivity offers satisfying results regarding the pixel error~\cite{farrens2017space}. Sparse regularization is applied in a space where the solution is known to be sparse. In the case of galaxy images, it has been shown in~\cite{starck2015sparse} that starlets offer such a sparse representation. In the following, we will assume that the noise, $b$ in eq.~\ref{eq:invprob}, is white additive Gaussian noise with variance $\sigma^2I_{n^2}$. Let $\phi=\left(\phi_i\right)_{i\in\left\{1,\dots,I \right\}}$ denote the starlet transform operator, with $I$ its chosen number of components. The loss function for the sparse deconvolution problem can be written as the sum of differentiable and non-differentiable terms: \begin{equation}\label{eq:L0} L_0(x) = \overbrace{\underbrace{\frac{1}{2\sigma^2}\|x\ast h-y\|^2_2}_{\text{data-fidelity}}}^{:=L_{0d}(x),\text{ differentiable}}+\overbrace{\underbrace{\|\lambda_0 \odot \phi(x)\|_1}_{\text{sparsity}}+\underbrace{\iota_+(x)}_{\text{positivity}}}^{:=L_{0p}(x),\text{ non-differentiable}} \quad, \end{equation} where $\lambda_0$ is a weighting matrix with non-negative entries. We now give the major properties of $L_0$ needed to construct a Sparse Restoration Algorithm (SRA) that minimizes it. Straightforwardly, $L_{0d}$ has gradient \begin{equation} \nabla L_{0d}(x) = \frac{1}{2\sigma^2}2h_\pi \ast \left(x\ast h -y\right) \quad. \end{equation} Following from its definition, the Lipschitz constant of $\nabla L_{0d}$, denoted $\alpha_0$, is \begin{equation} \alpha_0 = \frac{1}{2\sigma^2}\rho\left(2h_\pi\ast h \ast I_{n^2}\right) \quad. \end{equation} Following \cite{starck2015sparse}, we set the value of $\lambda_0$ such that it is proportional to the standard deviation map of $\phi \nabla L_{0d}(x_T)$. We notice that for $x=x_T$, we have $x\ast h - y$ equal to $-b$, which is also white Gaussian noise of variance $\sigma^2I_{n^2}$. Consequently, \begin{equation} \phi_i \ast \nabla L_{0d}(x_T) = -\frac{1}{\sigma^2}\phi_i \ast h_\pi\ast b \quad, \forall i \in \{1,\dots,I\}\,. \end{equation} It follows that $\phi \nabla L_{0d}(x_T)$ is colored Gaussian noise, with variance \begin{equation} \Sigma_0 = \underbrace{\frac{1}{\sigma^2}\left[\left(\phi_i \ast h_\pi\ast I_{n^2}\right)\left(\phi_i \ast h_\pi\ast I_{n^2}\right)^\top\right]_{i\in\{1,\dots,I\}}}_{{:=\left(\Sigma_{0i}\right)}_{i\in\{1,\dots,I\}}} \quad.
\end{equation} We then set \begin{equation}\label{eq:lambda0} \lambda_0 = \left[\kappa[i] \cdot \text{diag}\left(\Sigma_{0i}\right)\right]_{i\in\{1,\dots,I\}}=\frac{1}{\sigma^2}\left(\kappa[i] \cdot \|\phi_i \ast h_\pi\|^2_2\mathbf{1}_{n^2}\right)_{i\in\{1,\dots,I\}}\quad, \end{equation} where $\kappa$ is a vector in $\mathbb{R}^{I+1}$ of the form $\left(0,q,\dots,q,q+1\right)$, assuming that the components of $\phi$ are arranged gradually from the coarse scale $\phi_0$ to the finest scale $\phi_I$. In this work, we set $q=4$. Finally, we approximate the proximal operator of the non-differentiable part, $L_{0p}$, of the loss function. To do so, let us first recall the exact forms of the proximal operators of $\iota_+$ and $\|\lambda \odot\cdot\|_1$, with $\lambda\in\mathbb{R}^{(I+1)\times n \times n}_+$: \begin{equation} \text{prox}_{\iota_+}(x) = (x)_+\quad, \end{equation} where $\forall k \in\{1,\dots,n^2\}$, \begin{equation} (x)_+[k] = \text{max}\left(x[k],0\right)\quad, \end{equation} and \begin{equation} \text{prox}_{\|\lambda \odot.\|_1}(x) = \text{ST}_\lambda (x)\quad, \end{equation} where $\text{ST}_\lambda$ is the soft-thresholding operator, defined $\forall k \in\{1,\dots,(I+1)n^2\}$ as \begin{equation} \text{ST}_\lambda(x) [k]=\left\{\begin{array}{ll} x[k]-\text{sgn}\left(x[k]\right)\lambda[k] \text{ if }\left|x[k]\right|\geq \left|\lambda[k]\right|\\ 0 \text{ otherwise} \end{array} \right.\quad. \end{equation} Nevertheless, in practice, the hard-thresholding operator is preferred over the soft-thresholding in order to reduce the bias introduced in image restoration~\cite{starck2002nonlinear,blumensath2009iterative}. Let $\text{HT}_\lambda$ denote the hard-thresholding operator, defined $\forall k \in\{1,\dots,(I+1)n^2\}$ as \begin{equation} \text{HT}_\lambda(x) [k]=\left\{\begin{array}{ll} x[k] \text{ if }\left|x[k]\right|\geq \left|\lambda[k]\right|\\ 0 \text{ otherwise} \end{array} \right.\quad. \end{equation} From these, we define \begin{equation} \text{p}_{\lambda_0}(x) = \left[\phi^{-1}\left(\text{HT}_{\lambda_0}\left[\phi\left(x\right)\right]\right)\right]_+\quad, \end{equation} the approximation of $\text{prox}_{L_{0p}}$ we will use in the present work. \\ \textbf{Remark: }$\text{p}_{\lambda_0}$ relies on two approximations. The first consists in treating the starlet transform, which is redundant (thus non-orthogonal), as if it were an orthogonal transform. The second is due to assuming that the proximal operator of the sum of two non-differentiable terms is the composition of the proximal operators of the terms, which does not hold in general. The implementation of SRA used in this work is based on a proximal splitting framework, more precisely, on forward-backward splitting methods~\cite{starck2015sparse}. The resulting algorithm is given in alg.~\ref{alg:sra}. We still need to set the stopping criterion, $A$; the first guess, $t \in \mathbb{R}^{n\times n}$; and $\alpha_{\epsilon}$ and $\hat{\lambda}_0$, which are respectively the estimates of $\alpha_0$ and $\lambda_0$.
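For illustration, the operators above translate directly into NumPy. The following is a minimal sketch in which the starlet transform $\phi$ and its reconstruction $\phi^{-1}$ are abstracted as callables (an assumption of the sketch, not the actual implementation):
\begin{verbatim}
import numpy as np

def soft_threshold(x, lam):
    # ST_lambda: shrink each coefficient towards zero by lam
    return np.sign(x) * np.maximum(np.abs(x) - np.abs(lam), 0.0)

def hard_threshold(x, lam):
    # HT_lambda: keep coefficients at or above the threshold,
    # zero out the rest
    return np.where(np.abs(x) >= np.abs(lam), x, 0.0)

def prox_approx(x, lam, phi, phi_inv):
    # p_lambda: hard-threshold the starlet coefficients of x,
    # reconstruct, then project onto the positive orthant
    # (approximation of the proximal operator of L_0p)
    return np.maximum(phi_inv(hard_threshold(phi(x), lam)), 0.0)
\end{verbatim}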
\begin{algorithm} \caption{SRA algorithm} \label{alg:sra} \begin{algorithmic} \STATE{\textbf{Task:} Restore $x_T$ using $y$ and $h$.} \STATE{\textbf{Parameters:} $\epsilon >0$, boolean $A$.} \STATE{\textbf{Initialization: $x^{(0)}\gets t$, $i\gets 0$, $\beta \gets \alpha_{\epsilon}^{-1}$}} \WHILE{not$\left(A\right)$} \STATE{$x^{(i+1)}\gets \text{p}_{\beta \hat{\lambda}_0}\left[x^{(i)}-\beta \nabla L_{0d} \left(x^{(i)}\right)\right]$} \STATE{$i\gets i+1$} \ENDWHILE \RETURN $x^{(i)}$ \end{algorithmic} \end{algorithm} We compute $\alpha_\epsilon$ by using the power iteration method to obtain an estimate of $\alpha_0$, and then we multiply the output by $(1+\epsilon)$ to make sure that we did not go below the lowest upper bound (for this paper, we set $\epsilon=0.05$). $\hat{\lambda}_0$ is computed using eq.~\ref{eq:lambda0}, and we set $t$ to $\frac{1}{n^2}\mathbf{1}_{n^2}$. For the stopping criterion, we considered two cases: \begin{description} \item[\textbf{The denoising case} ($h=\delta$)\textbf{:}] here the problem is well-conditioned, and we set $A$ to `$i\geq N_i$' where $N_i$ is the number of iterations. For the numerical experiments we set $N_i$ to 40. \item[\textbf{The general case:}] here the problem is ill-conditioned, prompting us to set $A$ to `$i\geq N_i$ or $\left|\frac{\left[L_0\left(x^{(i)}\right)+L_0\left(x^{(i-1)}\right)\right]-\left[L_0\left(x^{(i-2)}\right)+L_0\left(x^{(i-3)}\right)\right]}{L_0\left(x^{(i-2)}\right)+L_0\left(x^{(i-3)}\right)}\right|\leq c$'. In the present experiments, we set $N_i=150$ and $c=10^{-6}$. \end{description} \subsection{The Shape Constraint} \label{subsec:2_2} Ideally, the shape constraint should take the form of a data fidelity term in a space that corresponds to the ellipticity. However, the ellipticity, $e(x)$, as defined in eq.~\ref{eq:ell_mom}, is a non-linear function of the galaxy image $x$. We thus express it as a combination of linear quantities, which will prove easier to handle mathematically. In~\cite{bernstein2014bayesian}, it is shown that the ellipticity can be rewritten using scalar products. Analogously, we derive in appendix~\ref{append:ell2inner_prod} the following formulae: \begin{equation} \label{eq:ell_scal} \begin{aligned} &\mathrm{e}_1(x) = \frac{\left<x,u_3\right>\left<x,u_5\right>-\left<x,u_1\right>^2+\left<x,u_2\right>^2}{\left<x,u_3\right>\left<x,u_4\right>-\left<x,u_1\right>^2-\left<x,u_2\right>^2} \quad, \\ &\mathrm{e}_2(x) = \frac{2(\left<x,u_3\right>\left<x,u_6\right>-\left<x,u_1\right>\left<x,u_2\right>)}{\left<x,u_3\right>\left<x,u_4\right>-\left<x,u_1\right>^2-\left<x,u_2\right>^2} \quad, \end{aligned} \end{equation} with $\left(u_k\right)_{k\in\{1,\dots,6\}}$ in $\mathbb{R}^{6\times n\times n}$, defined for all $i$ and $j$ in $\{1,\dots,n\}$ as \begin{equation} \label{eq:u_i} \begin{aligned} &u_1[(i-1)n+j] = (i), &&u_2[(i-1)n+j] = (j),\\ &u_3[(i-1)n+j] = (1), &&u_4[(i-1)n+j] = (i^2+j^2),\\ &u_5[(i-1)n+j] = (i^2-j^2), &&u_6[(i-1)n+j] = (ij). \end{aligned} \end{equation} Eq.~\ref{eq:ell_scal} shows that the ellipticity information is contained in the set of $6$ scalar products. Additionally, in eq.~\ref{eq:u_i}, we can see that $\left(u_k\right)_{k\in\{1,\dots,6\}}$ are constant vectors. The scalar products are therefore all linear functions of $x$. Consequently, we choose them as building blocks for the shape constraint, instead of directly using the ellipticity (which is not a linear function of $x$).
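As a worked example, eqs.~\ref{eq:ell_scal} and~\ref{eq:u_i} can be evaluated with a few lines of NumPy. This is an illustrative sketch; the function name and the 1-based pixel grid convention are ours:
\begin{verbatim}
import numpy as np

def ellipticity(x):
    # x: galaxy image of shape (n, n)
    n = x.shape[0]
    i, j = np.indices((n, n)) + 1              # pixel coordinates
    u = [i, j, np.ones((n, n)),                # u_1, u_2, u_3
         i**2 + j**2, i**2 - j**2, i * j]      # u_4, u_5, u_6
    p = [float(np.sum(x * uk)) for uk in u]    # <x, u_k>, k = 1..6
    den = p[2]*p[3] - p[0]**2 - p[1]**2
    e1 = (p[2]*p[4] - p[0]**2 + p[1]**2) / den
    e2 = 2.0 * (p[2]*p[5] - p[0]*p[1]) / den
    return e1, e2
\end{verbatim}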
From this, we give a preliminary formulation of the constraint: \begin{equation}\label{eq:m0} M_0(x) = \sum_{i=1}^6 \omega_i \left<x\ast h-y,u_i\right>^2 \quad, \end{equation} where the components of $\left(\omega_i\right)_{i\in\{1,\dots,6\}}$ are real-valued scalar weights.\\ As discussed in sect.~\ref{sec:1}, the quantities in eq.~\ref{eq:m0} are extremely sensitive to noise. A natural way to increase the robustness of our shape constraint would be, in analogy with the ellipticity estimator of eq.~\ref{eq:ell_int}, to apply a weighting function $g$. This choice, however, comes with the burden of correctly choosing $g$. A fixed window function would lack flexibility and likely lead to poor estimators of ellipticity for some objects $y$. Fitting $g$ to $y$ would improve flexibility, but require an additional preprocessing step. An alternative approach is to apply the constraint on many windows of different sizes and orientations, so that at least one of them is a good fit to $y$. To find such a set of windows, we consider curvelet-like decompositions \cite{starck:sta01_3,starck2015sparse} where all the bands correspond to windows with different orientations, and every scale corresponds to a different size. \textit{Shearlets} are particularly appropriate for our purposes, as can be seen from Fig.~\ref{fig:shearlets}. This choice was also motivated by the following two properties~\cite{kutyniok2012introduction,voigtlaender2017analysis}: \begin{itemize} \item{\textbf{Anisotropy}:} Ellipticity is, itself, a measure of anisotropy, and the use of shearlets, which are an anisotropic transform, should help us discriminate objects according to this criterion. \item{\textbf{Grid conservation:}} The scaling and shearing operations that transition from one shearlet band to another preserve the points on the grid, which adds numerical stability. \end{itemize} \begin{figure} \includegraphics[width=\textwidth]{figures/shearlet_scales.png} \caption{Representation of the shearlet bands with 3 scales.} \label{fig:shearlets} \end{figure} Let $\psi=\left(\psi_j\right)_{j \in\{1,\dots,J\}}$ denote the shearlet transform operator, with $J$ its chosen number of components. We formulate the shape constraint as follows: \begin{equation} M(x)=\sum_{i=1}^6\sum_{j=1}^J\omega_{ij}\left<\psi_j(x\ast h)-\psi_j(y),u_i\right>^2\quad, \end{equation} where $\left(\omega_{ij}\right)_{\substack{i\in \{ 1,\dots,6 \}\\ j\in \{ 1,\dots,J \}}}$ are real-valued scalars (see sect.~\ref{subsec:3_2} for their practical selection). By taking into account the fact that the shearlet transform is a linear operator, and denoting $\psi_j^\ast$ the adjoint operator of $\psi_j$ for all $j$ in $\{1,\dots,J\}$, we have \begin{equation} \label{eq:SC} M(x) = \sum_{i=1}^6\sum_{j=1}^J\omega_{ij}\left<x\ast h-y,\psi_j^\ast\left(u_i\right)\right>^2\quad. \end{equation} Wavelet moments \cite{chen2011wavelet,farokhi2014near} and curvelet moments \cite{murtagh2008wavelet,dhahbi2015breast} have been similarly used in the past, but only for classification applications. After formulating the constraint, we will put it into use by adding it to a sparse restoration algorithm that computes a solution to eq.~\ref{eq:invprob}. We achieve this by creating the corresponding loss function, exhibiting its properties and, finally, building an algorithm that minimizes it: SCORE.
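Note that, since the templates $\psi_j^\ast(u_i)$ do not depend on $x$, they can be precomputed once, after which evaluating eq.~\ref{eq:SC} reduces to $6J$ inner products with the residual. A minimal sketch, assuming the adjoint-transformed templates are supplied as a precomputed array:
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def shape_constraint(x, h, y, psi_adj_u, omega):
    # psi_adj_u: precomputed psi_j^*(u_i), array of shape (6, J, n, n)
    # omega:     weights omega_ij, array of shape (6, J)
    residual = fftconvolve(x, h, mode="same") - y
    # all inner products <x*h - y, psi_j^*(u_i)> at once
    ips = np.tensordot(psi_adj_u, residual, axes=([2, 3], [0, 1]))
    return float(np.sum(omega * ips**2))
\end{verbatim}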
\subsection{The Loss Function} \label{subsec:3_1} Combining the shape constraint from eq.~\ref{eq:SC} and the loss function of SRA from eq.~\ref{eq:L0}, we obtain the SCORE loss function, \begin{equation} \label{eq:loss_OSCAR} L(x) = \overbrace{\frac{1}{2\sigma^2}\|x\ast h-y\|^2_2+\underbrace{\frac{\gamma}{2\sigma^2}M(x)}_{\text{shape constraint}}}^{:=L_d(x),\text{ differentiable part}}+\overbrace{\|\lambda \odot \phi(x)\|_1+\iota_+(x)}^{:=L_p(x),\text{ non-differentiable part}} \quad, \end{equation} where $\gamma \in \mathbb{R}_+$ is the trade-off between the data-fidelity term and the shape constraint and $\lambda$ is, as before, a weighting matrix with non-negative entries. Before expressing the main properties of $L$, let us give a reformulation of its differentiable part, $L_d$. Namely, let us show that it can be recast as a single data-fidelity term with a modified norm. As a starting point, on the one hand, we have \begin{align} \frac{\gamma}{2\sigma^2}M(x) &= \frac{\gamma}{2\sigma^2}\sum_{i,j}\omega_{ij}\left<x\ast h-y,\psi_j^\ast\left(u_i\right)\right>^2 \quad,\\ &= \frac{\gamma}{2\sigma^2}\sum_{i,j}\omega_{ij}\left(\left[\psi_j^\ast\left(u_i\right)\right]^\top\left[x\ast h-y\right]\right)^\top\left(\left[\psi_j^\ast\left(u_i\right)\right]^\top\left[x\ast h-y\right]\right),\\ &= \frac{1}{2\sigma^2}\left(x\ast h-y\right)^\top\gamma\underbrace{\sum_{i,j}\overbrace{\omega_{ij}\psi_j^\ast\left(u_i\right)\left[\psi_j^\ast\left(u_i\right)\right]^\top}^{:=Q_{ij} \succeq 0}}_{:=Q \succeq 0}\left(x\ast h-y\right) \quad. \label{eq:sc_vec} \end{align} Similarly, on the other hand, we have \begin{equation}\label{eq:data_fid} \frac{1}{2\sigma^2}\|x\ast h-y\|^2_2 = \frac{1}{2\sigma^2}\left(x\ast h -y\right)^\top I_{n^2}\left(x\ast h -y\right) \quad. \end{equation} By summing eqs.~\ref{eq:sc_vec} and~\ref{eq:data_fid}, we obtain \begin{align} L_d(x) &= \frac{1}{2\sigma^2}\left(x\ast h -y\right)^\top\underbrace{\left( I_{n^2}+\gamma Q\right)}_{:=S\succ 0}\left(x\ast h -y\right) \quad,\label{eq:loss_vec}\\ L_d(x) &= \frac{1}{2\sigma^2}\|x\ast h -y\|_S^2 \quad. \label{eq:loss_norm} \end{align} We can thus interpret the weighted data-fidelity term in eq.~\ref{eq:loss_norm} as an extension of the data-fidelity space. When using it, we are effectively not only considering the image space by itself, but also taking the space of scalar products of $M$ into account. \subsection{Properties of $L$} \label{subsec:3_2} Analogously to sect.~\ref{subsec:2_1}, we first determine the values of the constants (other than $\gamma$, which we study in detail in sect.~\ref{subsec:4_1}) that appear in $L$, and how to handle its differentiable and non-differentiable parts within an optimization framework. To determine $\left(\omega_{ij}\right)_{\substack{i\in\{1,\dots,6\}\\j\in\{1,\dots,J\}}}$ in \eqref{eq:SC}, let us impose that the unweighted data-fidelity and the shape constraint exert the same relative influence when $\gamma$ is 1. In addition, without any further prior, we want all components of $Q$ to have equal influence. With no guarantee of orthogonality, we then impose the following conditions \begin{equation}\label{eq:mu} \left\{ \begin{array}{ll} \|I_{n^2}\|_\text{F}=\displaystyle \sum_{i=1}^6\sum_{j=1}^J\|Q_{ij}\|_\text{F} \quad,\\ \|Q_{ij}\|_\text{F}=\|Q_{kl}\|_\text{F}\quad,\,\forall i,k \in \left\{1,\dots,6\right\}\quad,\,\forall j,l \in \left\{1,\dots,J\right\}. \end{array} \right.
\end{equation} Solving the system in eq.~\ref{eq:mu} leads to: \begin{equation} \omega_{ij}=\frac{n}{\left\|\psi_j^*\left(u_i\right)\right\|_2^2} \quad,\,\forall i \in \{1,\dots,6\}\,,\,\forall j \in \{1,\dots,J\}. \end{equation} The gradient of $L_d$ follows from eq.~\ref{eq:loss_vec}: \begin{equation} \nabla L_d(x) = \frac{1}{2\sigma^2}2h_\pi\ast S \left(x\ast h -y\right) \quad. \end{equation} Its Lipschitz constant, denoted $\alpha$, is \begin{equation} \alpha = \frac{1}{2\sigma^2}\rho\left(2h_\pi\ast h\ast S\right) \quad. \end{equation} In order to set $\lambda$, we once again propagate residual noise. Since \begin{equation} \phi_i \ast \nabla L_d(x_T) = -\frac{1}{\sigma^2}\phi_i \ast h_\pi\ast Sb \quad, \forall i \in \{1,\dots,I\}\,, \end{equation} $\phi \nabla L_d(x_T)$ is colored Gaussian noise, with variance \begin{equation} \Sigma = \underbrace{\frac{1}{\sigma^2}\left[\left(\phi_i \ast h_\pi\ast S\right)\left(\phi_i \ast h_\pi\ast S\right)^\top\right]_{i\in\{1,\dots,I\}}}_{:=\left(\Sigma_i\right)_{i\in\{1,\dots,I\}}} \quad. \end{equation} This allows us to choose \begin{equation}\label{eq:lambda} \lambda = \left[\kappa[i] \cdot \text{diag}\left(\Sigma_i\right)\right]_{i\in\{1,\dots,I\}}\quad. \end{equation} Lastly, similar to sect.~\ref{subsec:2_1}, we approximate $\text{prox}_{L_p}$ by \begin{equation} \text{p}_\lambda(x) = \left[\phi^{-1}\left(\text{HT}_\lambda\left[\phi\left(x\right)\right]\right)\right]_+\quad. \end{equation} \subsection{Algorithm} \label{subsec:3_3} The SCORE algorithm is given in alg.~\ref{alg:score}. As in sect.~\ref{subsec:2_1}, we also need to set the stopping criterion, $A$; the first guess, $t \in \mathbb{R}^{n\times n}$; and $\alpha_\epsilon$ and $\hat{\lambda}$, which are respectively the estimates of $\alpha$ and $\lambda$. \begin{algorithm} \caption{SCORE algorithm} \label{alg:score} \begin{algorithmic} \STATE{\textbf{Task:} Restore $x_T$ using $y$ and $h$.} \STATE{\textbf{Parameters:} $\gamma$, $\epsilon >0$, boolean $A$.} \STATE{\textbf{Initialization: $x^{(0)}\gets t$, $i\gets 0$, $\beta \gets \alpha_{\epsilon}^{-1}$}} \WHILE{not$\left(A\right)$} \STATE{$x^{(i+1)}\gets \text{p}_{\beta \hat{\lambda}}\left[x^{(i)}-\beta \nabla L_d \left(x^{(i)}\right)\right]$} \STATE{$i\gets i+1$} \ENDWHILE \RETURN $x^{(i)}$ \end{algorithmic} \end{algorithm} As for alg.~\ref{alg:sra}, we compute $\alpha_\epsilon$ by using the power iteration method to obtain an estimate of $\alpha$, and then multiplying the output by $(1+\epsilon)$ (for this paper, $\epsilon=0.05$). For the other variables, we considered: \begin{description} \item[\textbf{The denoising case} ($h=\delta$)\textbf{:}] here the problem is well-conditioned, therefore we set $A$ to `$i\geq N_i$' where $N_i$ is the number of iterations. We set $t$ to the output of SRA. Finally, we directly compute $\hat{\lambda}$ using the formula in eq.~\ref{eq:lambda}. \item[\textbf{The general case:}] here the problem is ill-conditioned, which leads us to set $A$ to `$i\geq N_i$ or $\left|\frac{\left[L\left(x^{(i)}\right)+L\left(x^{(i-1)}\right)\right]-\left[L\left(x^{(i-2)}\right)+L\left(x^{(i-3)}\right)\right]}{L\left(x^{(i-2)}\right)+L\left(x^{(i-3)}\right)}\right|\leq c$'. We choose the first guess $t=\frac{1}{n^2}\mathbf{1}_{n^2}$. To compute $\hat{\lambda}$, we generate $G$ realisations of white Gaussian noise of variance $\sigma^2I_{n^2}$ and use them to empirically estimate the standard deviation maps $\text{diag}\left(\Sigma_i\right)$ of eq.~\ref{eq:lambda}. In this paper, we set $c$ to $10^{-6}$ and $G$ to 100.
\end{description} Assuming that each image contains only one galaxy such that all of its active pixels are connected, we add a post-processing step to remove the other isolated blobs in the output image. To do so, we mask the isolated blobs by first binarizing each output image using its 80\textsuperscript{th} percentile pixel value as a threshold. Then, under the safe assumption that the galaxy of interest should correspond to the largest blob, we set every other blob's pixels to 0 (a minimal sketch of this masking step is given at the end of sect.~\ref{subsec:4_2}). \subsection{Dataset \& Implementation} \label{subsec:4_1} To build our dataset, we generate 300 galaxy images, simulated using parameters fitted on real galaxies from the catalog COSMOS~\cite{Mandelbaum_2011}, and 300 PSF images with a Moffat profile. Each image has $96\times96$ pixels. For further details on the data generation, see appendix~\ref{append:dataset}. To create the observations, we convolve each galaxy with a PSF and then add noise. Regarding noise levels, we use the following definition for the signal-to-noise ratio ($\text{SNR}$) of an observation $y$ of $x_T$: \begin{equation*} \text{SNR}(y) = \frac{\left\|x_T\right\|_2}{\sigma} \quad. \end{equation*} The chosen SNR levels are 40, 75, 150 and 380, with 300 observations generated for each. The implementation was done using \texttt{Python 3.6.8}, \texttt{ModOpt 1.3.0}\footnote{\url{https://github.com/CEA-COSMIC/ModOpt}}, \texttt{Alpha}-\texttt{Transform}\footnote{\url{https://github.com/dedale-fet/alpha-transform}} and \texttt{Matplotlib}~\cite{hunter2007matplotlib}. In order to study the influence of $\gamma$ from eq.~\ref{eq:loss_OSCAR}, we perform a two-step grid search by first determining the order of magnitude of the optimal parameter, and then testing a finer grid of values in that range. The criterion chosen is \begin{equation} \gamma_* = \underset{\gamma}{\text{argmin }} \delta_e (\gamma)\quad,\text{where }\delta_e(\gamma)=\underset{i}{\text{mean}}\left[\text{MSE}\left(e\left(\hat{x}_{\gamma,i}\right)\right)\right], \end{equation} where $\underset{i}{\text{mean}}(x_i)$ denotes the mean of $\left(x_i\right)_i$ over $i$ and $\hat{x}_{\gamma,i}$ is the SCORE estimation of the $i^{th}$ galaxy with trade-off parameter equal to $\gamma$. The resulting $\gamma_*$ are shown, per $\text{SNR}$ level, in Table~\ref{tab:gamma_star}. Their values are close to 1 in all cases. \begin{table} \caption{Values of $\gamma_\ast$.} \centering \label{tab:gamma_star} \begin{tabular}{|c|c|c|} \hline \rule[-1ex]{0pt}{3.5ex} \text{SNR} & Denoising & Full restoration \\ \hline \rule[-1ex]{0pt}{3.5ex} 40 & 1.2 & 1.2 \\ \hline \rule[-1ex]{0pt}{3.5ex} 75 & 0.8 & 1.6 \\ \hline \rule[-1ex]{0pt}{3.5ex} 150 & 1.0 & 1.2 \\ \hline \rule[-1ex]{0pt}{3.5ex} 380 & 0.8 & 0.6 \\ \hline \end{tabular} \end{table} \subsection{Results} \label{subsec:4_2} \subsection*{Denoising} \begin{figure} \vbox{\center \includegraphics[width= 0.95\textwidth]{figures/den_gal38_SNR75.pdf} \includegraphics[width= 0.95\textwidth]{figures/den_residuals38_SNR75.pdf} } \caption{Denoising results of galaxy \#38 for SNR=75. Top: original image and observed data (i.e. blurred image with noise). Center: denoised images with SCORE and SRA. Bottom: residual images with SCORE and SRA.
\href{https://github.com/CosmoStat/score/blob/master/reproducible_research/paper_results/figure_denoising.ipynb}{\faFileCodeO}} \label{fig:den_gal38_SNR75} \end{figure} \begin{figure} \hbox{ \includegraphics[width=0.5\textwidth]{figures/den_SNRvsMSE.pdf} \includegraphics[width=0.5\textwidth]{figures/den_SNRvsDELTAe.pdf} } \caption{Left: relative MSE per SNR of the galaxies for the denoising experiment. Right: ellipticity error $\delta_e$ per SNR. In both cases, the curves correspond to the mean per SNR and the vertical bars to the standard deviation. \href{https://github.com/CosmoStat/score/blob/master/reproducible_research/paper_results/plots_denoising.ipynb}{\faFileCodeO}} \label{fig:denSNR_MSE} \end{figure} We first consider the denoising case ($h=\delta$). The top row of Fig.~\ref{fig:den_gal38_SNR75} shows an example of an original galaxy image and its corresponding degraded observation, the center row shows the denoised images with both SCORE and SRA, and the bottom row the corresponding residual images. Fig.~\ref{fig:denSNR_MSE} shows the MSE and ellipticity errors $\delta_e$ as a function of SNR. We can see that SCORE leads to a slight degradation in pixel MSE, compared to SRA. This is not unexpected, as the latter's data fidelity term is entirely expressed in the image domain, while that of SCORE is shared with a shape component, as shown in sect.~\ref{subsec:3_1}. SCORE's ellipticity errors are significantly reduced, by a factor of about 2. \subsection*{Deconvolution} \begin{figure} \vbox{\center \includegraphics[width= 0.95\textwidth]{figures/dec_gal16_SNR75.pdf}\vspace*{+3mm} \includegraphics[width= 0.95\textwidth]{figures/dec_residuals16_SNR75.pdf} } \caption{Deconvolution results of galaxy \#16 for SNR=75. Top: original image and observed data (i.e. blurred image with noise). Center: deconvolved images with SCORE and SRA. Bottom: residual images for SCORE and SRA, using the same color bar. \href{https://github.com/CosmoStat/score/blob/master/reproducible_research/paper_results/figure_deconvolution.ipynb}{\faFileCodeO}} \label{fig:dec_gal16_SNR75} \end{figure} \begin{figure} \hbox{ \includegraphics[width=0.5\textwidth]{figures/dec_SNRvsMSE.pdf} \includegraphics[width=0.5\textwidth]{figures/dec_SNRvsDELTAe.pdf} } \caption{Same as Fig.~\ref{fig:denSNR_MSE}, for the deconvolution experiment. \href{https://github.com/CosmoStat/score/blob/master/reproducible_research/paper_results/plots_deconvolution.ipynb}{\faFileCodeO}} \label{fig:decSNR_MSE} \end{figure} Similarly, Fig.~\ref{fig:dec_gal16_SNR75} shows an example galaxy, its recovered profiles with both approaches, and the corresponding residuals, while Fig.~\ref{fig:decSNR_MSE} shows the distributions of pixel and ellipticity errors at all SNRs. In the case of deconvolution, SCORE performs better than SRA for both MSE and ellipticity errors. Indeed, the MSE yielded by SCORE is lower by at least 16\% (and 36.3\% at most) compared to SRA. The example of Fig.~\ref{fig:dec_gal16_SNR75} illustrates that SCORE's output has a smoother profile, with a better restoration of the tail of the galaxy compared to SRA. Additionally, the residual of SCORE is, towards the center of the object, fainter than that of SRA. We observe different trends when looking at pixel MSE between the denoising case and the full restoration one. We believe this is due to the different conditioning of the two problems. The deconvolution is more ill-conditioned than a simple denoising.
Therefore, the broader the space of solutions, the higher the chance that an additional constraint would bring the solution closer to the ground truth. In terms of ellipticity, SCORE's $\delta_e$ is not only lower than SRA's, but also seems less biased and more consistent according to the error bars. In the denoising case, it is at least 44.1\% (and at most 70.3\%) lower, and in the deconvolution case, at least 49.5\% (and at most 62.3\%) lower. Figs.~\ref{fig:den_gal38_SNR75} and~\ref{fig:dec_gal16_SNR75} show that the galaxy's profile and its shape are better preserved with SCORE than with SRA.
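For readers who want a self-contained starting point beyond the notebooks linked in the figures, the following sketch illustrates, in plain \texttt{NumPy}/\texttt{SciPy}, two of the computational steps described above: the forward-backward update of alg.~\ref{alg:score} (in its fixed-iteration variant) and the largest-blob post-processing step. It is a minimal sketch, not the actual implementation (which relies on \texttt{ModOpt} and \texttt{Alpha}-\texttt{Transform}); the helpers \texttt{grad\_Ld} and \texttt{prox} are hypothetical placeholders for $\nabla L_d$ and $\text{p}_\lambda$.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def score(y, grad_Ld, prox, lam_hat, alpha_eps, n_iter=150):
    # Forward-backward splitting, fixed-iteration variant of alg. 1:
    #   x <- p_{beta*lam_hat}[x - beta*grad L_d(x)], with beta = 1/alpha_eps.
    beta = 1.0 / alpha_eps
    x = np.ones_like(y) / y.size   # flat first guess, t = (1/n^2) * ones
    for _ in range(n_iter):
        x = prox(x - beta * grad_Ld(x), beta * lam_hat)
    return x

def keep_largest_blob(img, q=80):
    # Post-processing: binarize at the q-th percentile, then zero out
    # every connected blob except the largest (assumed to be the galaxy).
    mask = img > np.percentile(img, q)
    labels, n_blobs = ndimage.label(mask)
    if n_blobs > 1:
        sizes = ndimage.sum(mask, labels, index=range(1, n_blobs + 1))
        keep = 1 + int(np.argmax(sizes))
        img = np.where(labels == keep, img, 0.0)
    return img
\end{verbatim}
Only \texttt{NumPy} and \texttt{SciPy} are assumed; \texttt{keep\_largest\_blob} uses the same 80\textsuperscript{th}-percentile threshold as in the text.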
\section{Introduction} \label{sec:introduction} Let $X$ be a smooth projective curve of genus $g\geq 2$ over the field of complex numbers. Fix $n \geq 2$ and $d\in\mathbb{Z}$. We shall denote by $M_X(n,L_0)$ the moduli space of polystable bundles $E$ over $X$ of rank $n$ and determinant $\det(E)=L_0$, where $L_0$ is a line bundle of degree $d$. This is a projective variety, which is smooth on the locus of stable bundles. If $n$ and $d$ are coprime, then there are no properly semistable bundles, and $M_X(n,L_0)$ is smooth and projective. If $n$ and $d$ are not coprime, then the open subset of stable bundles $M^s_X(n,L_0)\subset M_X(n,L_0)$ is a smooth quasi-projective variety, and $M_X(n,L_0)$ is in general singular. A pair $(E,\phi)$ over $X$ consists of a bundle $E$ of rank $n$ and determinant $\det(E)=L_0$ over $X$ together with a section $\phi\in H^0(E)$. There is a concept of stability for a pair which depends on the choice of a parameter $\tau \in \mathbb{R}$. This gives a collection of moduli spaces of $\tau$-polystable pairs ${\frak M}_X(\tau;n,L_0)$, which are projective varieties. Each of them contains a smooth open subset ${\frak M}_X^s(\tau;n,L_0)\subset {\frak M}_X(\tau;n,L_0)$ consisting of $\tau$-stable pairs. Pairs are discussed at length in \cite{B,BD,GP,MOV1}. The range of the parameter $\tau$ is an open interval $I=I_{n,d}=(\tau_m,\tau_M) \subset \mathbb{R}$. This interval is split by a finite number of critical values $\tau_c$. For a non-critical value $\tau\in I$, there are no properly semistable pairs, so ${\frak M}_X(\tau;n,L_0)={\frak M}_X^s(\tau;n,L_0)$ is smooth and projective. For a critical value $\tau=\tau_c$, ${\frak M}_X(\tau;n,L_0)$ is in general singular at properly $\tau$-semistable points. Our first main result computes the third cohomology group of ${\frak M}_X^s(\tau;n,L_0)$. \begin{theorem} \label{thm:H3-pairs} Let $n\geq 2$. For $n=2$ let $\tau_L=\tau_M-1$, otherwise set $\tau_L=\tau_M$, and let $\tau\in (\tau_m,\tau_L)$. Assume that we are not in one of the following ``bad'' cases: \begin{itemize} \item $(n,g,d)= (3,2,2k)$, $k=1,2,3$; \item $(n,g,d)=(2,2,5),(2,3,5),(2,2,6)$, $\tau= \tau_M-2$; \item $(n,g,d)=(3,2,5)$, $\tau= 2$. \end{itemize} Then \begin{enumerate} \item $H^3({\frak M}_X^s(\tau;n,L_0))$ is a pure Hodge structure which is naturally polarised. \item There is an isomorphism $H^3({\frak M}_X^s(\tau;n,L_0)) \cong H^1(X)$ of polarised Hodge structures. \end{enumerate} \end{theorem} For $n=2$, the moduli space ${\frak M}_X(\tau;n,L_0)$ for $\tau\in (\tau_L,\tau_M)$ is a projective space. Therefore the above result for the third cohomology does not hold. On the other hand, the special cases that we remove are of low genus, rank and degree, and for particular critical values of $\tau$. The following corollary is a Torelli theorem for the moduli spaces of $\tau$-stable pairs. \begin{corollary} \label{cor:Torelli-fixed-det} Let $X$ be a smooth projective curve, $n\geq 2$, $L_0$ a line bundle of degree $d$ over $X$. For $n=2$ let $\tau_L=\tau_M-1$, otherwise set $\tau_L=\tau_M$, and let $\tau\in (\tau_m,\tau_L)$. Consider another collection $X'$, $n'$, $L_0'$, $d'$ and $\tau'$. Assume that $(n,g,d,\tau)$ and $(n',g',d',\tau')$ are not one of the exceptional cases enumerated in Theorem \ref{thm:H3-pairs}. Then \begin{itemize} \item If ${\frak M}_X(\tau;n,L_0)$ and ${\frak M}_{X'}(\tau';n',L_0')$ are isomorphic algebraic varieties, then $X\cong X'$.
\item If ${\frak M}_X^s(\tau;n,L_0)$ and ${\frak M}_{X'}^s(\tau';n',L_0')$ are isomorphic algebraic varieties, then $X\cong X'$. \end{itemize} \end{corollary} The first statement is reduced to the second one since ${\frak M}_X^s(\tau;n,L_0)$ is the smooth locus of ${\frak M}_X(\tau;n,L_0)$. The second one is proved by looking at the polarised Hodge structure $H^3({\frak M}_X^s(\tau;n,L_0))\cong H^1(X)$ and recovering $X$ from $H^1(X)$ via the usual Torelli theorem. For values of $\tau$ slightly bigger than $\tau_m$, there is a natural map ${\frak M}_X(\tau;n,L_0)\to M_X(n,L_0)$. This allows us to prove the following result. \begin{theorem} \label{thm:H3-bundles} Let $n\geq 2$. Assume that $(n,g,d)\neq (2,2,2k), (3,2,3k),(2,3,2k)$, $k\in \mathbb{Z}$. Then \begin{enumerate} \item $H^3(M_X^s(n,L_0))$ is a pure Hodge structure which is naturally polarised. \item There is an isomorphism $H^3(M_X^s(n,L_0)) \cong H^1(X)$ of polarised Hodge structures. \end{enumerate} \end{theorem} For $(n,g,d)=(2,2,even)$, the moduli space $M_X(n,L_0)$ is isomorphic to $\mathbb{P}^3$ (see \cite{NR1}), hence the above result does not hold. Also, $M_X^s(2,L_0)=M_X(2,L_0)-S$, where $S=\mathrm{Jac}^d X/\pm1$. From this it is easy to see that the result does not hold for $M_X^s(2,L_0)$ either. For $(n,g,d)=(2,3,2k), (3,2,3k)$, $k\in\mathbb{Z}$, $H^3(M_X^s(n,L_0))$ is a mixed Hodge structure, whose graded piece $\Gr_W^3 H^3(M_X^s(n,L_0))$ is isomorphic to $H^1(X)$. However, we are not able to polarise it with the methods in this paper. \begin{corollary} \label{cor:torelli-bundles} Let $X$ be a curve, $(n,g,d)\neq (2,2,2k), (3,2,3k), (2,3,2k)$, $k\in\mathbb{Z}$. If $X'$ is another curve, $n'\geq 2$, and $L_0'$ is a line bundle over $X'$ of degree $d'$, and $(n',g',d')\neq (2,2,2k), (3,2,3k), (2,3,2k)$, $k\in\mathbb{Z}$, then a Torelli theorem holds: \begin{itemize} \item If $M_X(n,L_0)$ and $M_{X'}(n',L_0')$ are isomorphic algebraic varieties, then $X\cong X'$. \item If $M_X^s(n,L_0)$ and $M_{X'}^s(n',L_0')$ are isomorphic algebraic varieties, then $X\cong X'$. \end{itemize} \end{corollary} When $n$ and $d$ are coprime, Theorem \ref{thm:H3-bundles} has been proved in \cite{NR2,Tyu,Mum-New}. In the non-coprime case, the Torelli theorem was proved in \cite{Kou-Pan} by different methods. Theorem \ref{thm:H3-bundles} has been proved by Arapura and Sastry \cite{Ara-Sas}, but under the condition $g>\frac{3}{n-1}+\frac{n^2+3n+1}{2}$. Here we remove this lower bound assumption. \medskip Our strategy of proof is the following. First, we find it more convenient to rephrase the problem in terms of triples. A triple $(E_1,E_2,\phi)$ consists of a pair of bundles $E_1$, $E_2$ of ranks $n_1,n_2$, with $\det(E_1)=L_1$, $\det(E_2)=L_2$, respectively, over $X$ and a homomorphism $\phi:E_2\to E_1$. Here $L_1$, $L_2$ are fixed line bundles of degrees $d_1,d_2$, respectively. There is a suitable concept of stability for triples depending on a real parameter $\sigma$. This gives rise to moduli spaces $\mathcal{N}_X(\sigma; n_1,n_2,L_1,L_2)$ of $\sigma$-polystable triples. There is an identification of moduli spaces of pairs and triples given by $$ {\frak M}_X(\tau;n,L_0)\to \mathcal{N}_X(\sigma;n,1,L_0,\mathcal{O}), \qquad (E,\phi)\mapsto (E,\mathcal{O},\phi), $$ where $\mathcal{O}$ is the trivial line bundle, and $\sigma= (n+1) \tau - d$. Actually, this rephrasing is a matter of aesthetics.
The arguments can be carried out directly with the moduli spaces of pairs, but the formulas which appear using triples look more symmetric, and they could eventually be generalised to the case of triples of arbitrary ranks $n_1,n_2$. The range of the parameter $\sigma$ is an interval $I=(\sigma_m,\sigma_M) \subset \mathbb{R}$ split by a finite number of critical values $\sigma_c$. When $\sigma$ moves without crossing a critical value, then $\mathcal{N}_{X,\sigma}=\mathcal{N}_X(\sigma;n,1,L_1,L_2)$ remains unchanged, but when $\sigma$ crosses a critical value, $\mathcal{N}_{X,\sigma}$ undergoes a birational transformation which we call \emph{a flip}. We compute the codimension of the locus where this birational map is not an isomorphism to be at least $2$, except in the bad case $n=2$, $\sigma=\sigma_M-3$ (corresponding to $\tau=\tau_M-1$). This allows us to prove that the Hodge structures $H^3(\mathcal{N}_{X,\sigma}^s)$ are identified for different values of $\sigma\in I$. An explicit description of the moduli space $\mathcal{N}_{X,\sigma_M^-}$, where $\sigma_M^-=\sigma_M-\epsilon$, $\epsilon>0$ small, allows us to compute $H^3(\mathcal{N}_{X,\sigma_M^-})$ by induction on $n$. For $\sigma=\sigma_m^+=\sigma_m+\epsilon$, $\epsilon>0$ small, we have a morphism $\mathcal{N}_{X,\sigma_m^+} \to M_X(n,L_0)$, $L_0=L_1\otimes L_2^{-n}$. This is a fibration over the locus $M^s_X(n,L_0)$, when $d=\deg(L_0)$ is large enough. A computation of the codimension of the locus of strictly semistable bundles allows us to check that $H^3(M_X^s(n,L_0))\cong H^3(\mathcal{N}_{X,\sigma_m^+})$. \bigskip We end with the study of the case of non-fixed determinant. Let $M_X(n,d)$ denote the moduli space of semistable bundles of rank $n$ and degree $d$ over $X$. The open subset consisting of stable bundles will be denoted $M^s_X(n,d) \subset M_X(n,d)$. There is a natural map $M_X(n,d) \to \mathrm{Jac}^d X$, whose fiber over $L_0$ is $M_X(n,L_0)$. Also let ${\frak M}_X(\tau;n,d)$ be the moduli space of $\tau$-semistable pairs $(E,\phi)$, where $E$ is a bundle of rank $n$ and degree $d$. There is a map ${\frak M}_X(\tau ; n,d)\to \mathrm{Jac}^d X$ as before. Denote by ${\frak M}_X^s(\tau;n,d)$ the open subset of $\tau$-stable pairs. The following result is proved by reducing to the case of fixed determinant. \begin{corollary} \label{cor:Torelli-non-fixed-det} The following Torelli theorems hold. Let $X$, $X'$ be projective smooth curves of genus $g,g'\geq 2$. Let $n,n'\geq 2$, and $d,d'\in\mathbb{Z}$. Let $\tau\in (\tau_m,\tau_L)$, $\tau'\in (\tau_m',\tau_L')$. Assume that $(n,g,d,\tau)$ and $(n',g',d',\tau')$ are not in one of the bad cases enumerated in Theorem \ref{thm:H3-pairs}. Then \begin{itemize} \item If ${\frak M}_X(\tau;n,d)\cong {\frak M}_{X'}(\tau';n',d')$ then $X\cong X'$. \item If ${\frak M}_X^s(\tau;n,d)\cong {\frak M}_{X'}^s(\tau';n',d')$ then $X\cong X'$. \end{itemize} Assume that $(n,g,d), (n',g',d')$ are not of the form $(2,2,2k), (3,2,3k), (2,3,2k)$, $k\in\mathbb{Z}$. Then \begin{itemize} \item If $M_X(n,d)\cong M_{X'}(n',d')$ then $X\cong X'$. \item If $M_X^s(n,d)\cong M_{X'}^s(n',d')$ then $X\cong X'$. \end{itemize} \end{corollary} \section{Moduli spaces of triples} \label{sec:triples} Let $X$ be a smooth projective curve of genus $g\geq 2$ over $\mathbb{C}$. A triple $T = (E_{1},E_{2},\phi)$ on $X$ consists of two vector bundles $E_{1}$ and $E_{2}$ over $X$, of ranks $n_1$ and $n_2$ and degrees $d_1$ and $d_2$, respectively, and a homomorphism $\phi \colon E_{2} \to E_{1}$.
We shall refer to $(n_1,n_2,d_1,d_2)$ as the \emph{type} of the triple. For any $\sigma \in \mathbb{R}$ the $\sigma$-slope of $T$ is defined by $$ \mu_{\sigma}(T) = \frac{d_1+d_2}{n_1+n_2} + \sigma \frac{n_{2}}{n_{1}+n_{2}}\ . $$ We say that a triple $T = (E_{1},E_{2},\phi)$ is $\sigma$-stable if $\mu_{\sigma}(T') < \mu_{\sigma}(T)$ for any proper subtriple $T' = (E_{1}',E_{2}',\phi')$. We define $\sigma$-semistability by replacing the above strict inequality with a weak inequality. A triple $T$ is $\sigma$-polystable if it is the direct sum of $\sigma$-stable triples of the same $\sigma$-slope. We denote by $$ \mathcal{N}_X(\sigma;n_1,n_2,d_1,d_2) $$ the moduli space of $\sigma$-polystable triples of type $(n_1,n_2,d_1,d_2)$. This moduli space was constructed in \cite{BGP} and \cite{Sch}. It is a complex projective variety. The open subset of $\sigma$-stable triples will be denoted by $\mathcal{N}_X^s(\sigma;n_1,n_2,d_1,d_2)$. Let $L_1,L_2$ be two bundles of degrees $d_1,d_2$ respectively. Then the moduli spaces of $\sigma$-semistable triples $T=(E_1,E_2,\phi)$ with $\det(E_1)=L_1$ and $\det(E_2)=L_2$ will be denoted $$ \mathcal{N}_X(\sigma;n_1,n_2,L_1,L_2)\, , $$ and $\mathcal{N}_X^s(\sigma;n_1,n_2,L_1,L_2)$ will be the open subset of $\sigma$-stable triples. \medskip Let $\mu(E)=\deg(E)/\rk(E)$ denote the slope of a bundle $E$, and let $\mu_i=\mu(E_i)=d_i/n_i$, for $i=1,2$. Write \begin{align*} \sigma_m = &\, \mu_1-\mu_2\ , \\ \sigma_M = & \left\{ \begin{array}{ll} \left(1+ \frac{n_1+n_2}{|n_1 - n_2|}\right)(\mu_1 - \mu_2)\ , \qquad & \mbox{if $n_1\neq n_2$\ ,} \\ \infty, & \mbox{if $n_1=n_2$\ ,} \end{array} \right. \end{align*} and let $I$ denote the interval $I=(\sigma_m,\sigma_M)$. Then a necessary condition for $\mathcal{N}_X^s(\sigma;n_1,n_2,d_1,d_2)$ to be non-empty is that $\sigma\in I$ (see \cite{BGPG}). Note that $\sigma_m>0$. To study the dependence of the moduli spaces on the parameter $\sigma$, we need to introduce the concept of critical value \cite{BGP,MOV1}. \begin{definition}\label{def:critical} The values of $\sigma_c\in I$ for which there exist $0 \le n'_1 \leq n_1$, $0 \le n'_2 \leq n_2$, $d'_1$ and $d'_2$, with $n_1'n_2\neq n_1n_2'$, such that \begin{equation}\label{eqn:sigmac} \sigma_c=\frac{(n_1+n_2)(d_1'+d_2')-(n_1'+n_2')(d_1+d_2)}{n_1'n_2-n_1n_2'}, \end{equation} are called \emph{critical values}. \end{definition} The interval $I$ is split by a finite number of values $\sigma_c \in I$. The stability and semistability criteria for two values of $\sigma$ lying between two consecutive critical values are equivalent; thus the corresponding moduli spaces are isomorphic. When $\sigma$ crosses a critical value, the moduli space undergoes a transformation which we call a \emph{flip}. We shall study the flips in some detail in the next section. \subsection*{Relationship with pairs} A pair $(E,\phi)$ over $X$ consists of a vector bundle $E$ of rank $n$ with $\det(E)=L_0$, where $L_0$ is some fixed bundle of degree $d$, and $\phi\in H^0(E)$. Let $\tau\in \mathbb{R}$. We say that $(E,\phi)$ is $\tau$-stable (see \cite[Definition 4.7]{GP}) if: \begin{itemize} \item For any subbundle $E'\subset E$, we have $\mu(E')<\tau$. \item For any subbundle $E'\subset E$ with $\phi\in H^0(E')$, we have $\mu(E/E')>\tau$. \end{itemize} The concept of $\tau$-semistability is defined by replacing the strict inequalities by weak inequalities. A pair $(E,\phi)$ is $\tau$-polystable if $E=E'\oplus E''$, where $\phi\in H^0(E')$ and $E''$ is a polystable bundle of slope $\tau$.
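As an elementary illustration of these conditions (a direct unwinding of the definition, not needed in the sequel): if $n=2$ and $E'\subset E$ is a line subbundle with $\phi\in H^0(E')$ and $\deg E'=e$, then $\tau$-stability of $(E,\phi)$ requires $$ e=\mu(E')<\tau \qquad \text{and} \qquad \mu(E/E')=d-e>\tau\,, $$ so such a pair can only be $\tau$-stable in the range $e<\tau<d-e$; in particular, this forces $e<d/2$.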
The moduli space of $\tau$-polystable pairs is denoted by ${\frak M}_X(\tau; n, L_0)$. Interpreting $\phi\in H^0(E)$ as a morphism $\phi:\mathcal{O} \to E$, where $\mathcal{O}$ is the trivial line bundle on $X$, we have a map $(E,\phi)\mapsto (E,\mathcal{O},\phi)$ from pairs to triples. The $\tau$-stability of $(E,\phi)$ corresponds to the $\sigma$-stability of $(E,\mathcal{O},\phi)$, where (see \cite{BGP}) \begin{equation}\label{eqn:s-to-tau} \sigma=(n+1)\tau -d . \end{equation} Therefore we have an isomorphism of moduli spaces \begin{equation}\label{eqn:isom} {\frak M}_X(\tau;n,L_0) \cong \mathcal{N}_X(\sigma;n,1,L_0,\mathcal{O})\, . \end{equation} Alternatively, (\ref{eqn:isom}) may be taken as the definition of the moduli space of pairs. Note that $\sigma_m$ and $\sigma_M$ correspond under (\ref{eqn:s-to-tau}) to $$ \begin{aligned} \tau_m &\,=\frac{d}{n}, \\ \tau_M&\, =\frac{d}{n-1}. \end{aligned} $$ \begin{theorem}\label{thm:pairs} For non-critical values $\sigma\in I$, $\mathcal{N}_{X,\sigma}=\mathcal{N}_X(\sigma;n,1,L_1,L_2)$ is smooth and projective, and it only consists of $\sigma$-stable points (i.e. $\mathcal{N}_{X,\sigma}=\mathcal{N}_{X,\sigma}^s$). For critical values $\sigma=\sigma_c$, $\mathcal{N}_{X,\sigma}$ is projective, and the open subset $\mathcal{N}_{X,\sigma}^s\subset \mathcal{N}_{X,\sigma}$ is smooth. The dimension of $\mathcal{N}_{X,\sigma}$ is $(n^2-n-1)(g-1)+d_1-nd_2-1$. \end{theorem} \begin{proof} In general, if $\sigma$ is not a critical value for triples of type $(n_1,n_2,d_1,d_2)$ and $\mathrm{gcd}(n_1,n_2,d_1+d_2) = 1$, then $\sigma$-semistability is equivalent to $\sigma$-stability. This follows from \cite[Remark 3.8]{MOV1}. Smoothness for the $\sigma$-stable points follows from \cite[Proposition 6.3]{BGP}, since any $\sigma$-stable triple $T=(E_1,E_2,\phi)$ of type $(n,1,d_1,d_2)$ automatically satisfies that $\phi:E_2\to E_1$ is injective. The result is stated for non-fixed determinant, but the proof carries over to the case of fixed determinant. The dimension appears in \cite[Theorem 5.13]{GP} in the case of non-fixed determinant. Going over the proof, we see that we only have to subtract $2g$ from the formula in \cite[Theorem 5.13]{GP}. \end{proof} There is an isomorphism \begin{equation}\label{eqn:lll} \mathcal{N}_X(\sigma; n,1,L_1,L_2) \cong \mathcal{N}_X(\sigma; n,1,L_1\otimes (L_2^*)^{\otimes n},\mathcal{O}), \end{equation} given by $(E_1,L_2,\phi)\mapsto (E_1\otimes L_2^*, \mathcal{O}, \phi)$, so the moduli space (\ref{eqn:isom}) is as general as the moduli spaces $\mathcal{N}_X(\sigma; n,1,L_1,L_2)$. \section{Flips for the moduli spaces of pairs} \label{sec:flips} The homological algebra of triples is controlled by the hypercohomology of a certain complex of sheaves which appears when studying infinitesimal deformations \cite[Section 3]{BGPG}. Let $T'=(E'_1,E'_2,\phi')$ and $T''=(E''_1,E''_2,\phi'')$ be two triples of types $(n_{1}',n_{2}',d_{1}',d_{2}')$ and $(n_{1}'',n_{2}'',d_{1}'',d_{2}'')$, respectively. Let $\Hom(T'',T')$ denote the linear space of homomorphisms from $T''$ to $T'$, and let $\Ext^1(T'',T')$ denote the linear space of equivalence classes of extensions of the form $$ 0 \longrightarrow T' \longrightarrow T \longrightarrow T'' \longrightarrow 0, $$ where by this we mean a commutative diagram $$ \begin{CD} 0@>>>E_1'@>>>E_1@>>> E_1''@>>>0\\ @.@A\phi' AA@A \phi AA@A \phi'' AA\\ 0@>>>E'_2@>>>E_2@>>>E_2''@>>>0.
\end{CD} $$ To analyze $\Ext^1(T'',T')$ one considers the complex of sheaves \begin{equation} \label{eqn:extension-complex} C^{\bullet}(T'',T') \colon ({E_{1}''}^{*} \otimes E_{1}') \oplus ({E_{2}''}^{*} \otimes E_{2}') \overset{c}{\longrightarrow} {E_{2}''}^{*} \otimes E_{1}', \end{equation} where the map $c$ is defined by $$ c(\psi_{1},\psi_{2}) = \phi'\psi_{2} - \psi_{1}\phi''. $$ We introduce the following notation: \begin{align*} \mathbb{H}^i(T'',T') &= \mathbb{H}^i(C^{\bullet}(T'',T')), \\ h^{i}(T'',T') &= \dim\mathbb{H}^{i}(T'',T'), \\ \chi(T'',T') &= h^0(T'',T') - h^1(T'',T') + h^2(T'',T'). \end{align*} By \cite[Proposition 3.1]{BGPG}, there are natural isomorphisms \begin{align*} \Hom(T'',T') &\cong \mathbb{H}^{0}(T'',T'), \\ \Ext^{1}(T'',T') &\cong \mathbb{H}^{1}(T'',T'). \end{align*} We shall use the following results later: \begin{lemma}[{\cite[Lemma 3.10]{Mu}}] \label{lem:H2=0} If $T''=(E_1'',E_2'',\phi'')$ is an injective triple, that is $\phi'':E_2''\to E_1''$ is injective, then $\mathbb{H}^2(T'',T')=0$. \end{lemma} \begin{proposition}[{\cite[Proposition 3.2]{BGPG}}] \label{prop:chi(T'',T')} For any holomorphic triples $T'$ and $T''$ we have \begin{align*} \chi(T'',T') &= (1-g)(n''_1 n'_1 + n''_2 n'_2 - n''_2 n'_1) + n''_1 d'_1 - n'_1 d''_1 + n''_2 d'_2 - n'_2 d''_2 - n''_2 d'_1 + n'_1 d''_2. \end{align*} \end{proposition} \bigskip Fix the type $(n_1,n_2,d_1,d_2)$ for the moduli spaces of triples. For brevity, write $\mathcal{N}_\sigma=\mathcal{N}_X(\sigma;n_1,n_2,L_1,L_2)$. Let $\sigma_c\in I$ be a critical value and set $$ {\s_c^+} = \sigma_c + \epsilon,\quad {\s_c^-} = \sigma_c - \epsilon, $$ where $\epsilon > 0$ is small enough so that $\sigma_c$ is the only critical value in the interval $({\s_c^-},{\s_c^+})$. \begin{definition}\label{def:flip-loci} We define the \textit{flip loci} as \begin{align*} \mathcal{S}_{{\s_c^+}} &= \{ T\in\mathcal{N}_{{\s_c^+}}^s \ ; \ \text{$T$ is ${\s_c^-}$-unstable}\} \subset\mathcal{N}_{{\s_c^+}}^s \ ,\\ \mathcal{S}_{{\s_c^-}} &= \{ T\in\mathcal{N}_{{\s_c^-}}^s \ ; \ \text{$T$ is ${\s_c^+}$-unstable}\} \subset\mathcal{N}_{{\s_c^-}}^s \ . \end{align*} \end{definition} It follows that (see \cite[Lemma 5.3]{BGPG}) $$ \mathcal{N}_{{\s_c^+}}^s-\mathcal{S}_{{\s_c^+}}=\mathcal{N}_{\sigma_c}^s=\mathcal{N}_{{\s_c^-}}^s-\mathcal{S}_{{\s_c^-}}. $$ \begin{definition} \label{def:S-plusminus} Let $\sigma_c\in I$ be a critical value given by $(n_1',n_2',d_1',d_2')$ in (\ref{eqn:sigmac}), and let $(n_1'',n_2'',d_1'',d_2'') = (n_1-n_1',n_2-n_2',d_1-d_1',d_2-d_2')$. \begin{itemize} \item[(1)] Define $\tilde{\mathcal{S}}_{\sigma_c^+}^0(n_1',n_2',d_1',d_2')$ to be the set of all isomorphism classes of extensions \begin{displaymath} 0 \longrightarrow T' \longrightarrow T \longrightarrow T'' \longrightarrow 0, \end{displaymath} where $T'$ and $T''$ are $\sigma_c^+$-stable triples with types $(n_1',n_2',d_1',d_2')$ and $(n_1'',n_2'',d_1'',d_2'')$ respectively, and for which $T$ is $\sigma_c^+$-stable, $T\in \mathcal{N}_{\sigma_c^+}^s$. Note that in this case $\mu_{\sigma_c}(T')=\mu_{\sigma_c}(T)=\mu_{\sigma_c}(T'')$, and $\frac{n'_2}{n'_1+n'_2}<\frac{n''_2}{n''_1+n''_2}$. \item[(2)] Define \begin{align*} \tilde{\mathcal{S}}^0_{\sigma_c^+} = \bigcup \tilde{\mathcal{S}}^0_{\sigma_c^+}(n_1',n_2',d_1',d_2'), \end{align*} where the union is over all $(n'_1,n'_2,d'_1,d'_2)$ and $(n''_1,n''_2,d''_1,d''_2)$ such that the above conditions apply. 
\item[(3)] Similarly, define $\tilde{\mathcal{S}}^{0}_{\sigma_c^-} (n'_1,n'_2,d'_1,d'_2)$ and $\tilde{\mathcal{S}}^0_{\sigma_c^-}$, where now $\frac{n'_2}{n'_1+n'_2}>\frac{n''_2}{n''_1+n''_2}$. \end{itemize} \end{definition} The following is \cite[Lemma 5.8]{BGPG}. Actually, the version in \cite{BGPG} is stated for the case of non-fixed determinant, but the fixed determinant version is completely similar. \begin{lemma}\label{lem:vmaps} There are maps $v^{\pm}:\tilde{\mathcal{S}}^0_{\sigma_c^{\pm}} \longrightarrow\mathcal{N}^s_{\sigma_c^{\pm}}$ which map triples to their equivalence classes. The images contain the flip loci $\mathcal{S}_{\sigma_c^{\pm}}$. \end{lemma} \begin{proposition}\label{prop:codim-estimate} Assume that $\mathbb{H}^0(T'',T')=\mathbb{H}^2(T'',T')=0$ for all $\sigma_c^\pm$-stable triples $T'$, $T''$ of types $(n_1',n_2',d_1',d_2')$, $(n_1'',n_2'',d_1'',d_2'')$ respectively. Then $\mathcal{S}_{\sigma_c^{\pm}}\subset\mathcal{N}^s_{\sigma_c^{\pm}}$ are contained in subvarieties of codimension bounded below by $$ \mathrm{min}\{-\chi(T',T'')\}, $$ where the minimum is over all $(n_1',n_2',d_1',d_2')$ which satisfy (\ref{eqn:sigmac}), $(n_1'',n_2'',d_1'',d_2'') = (n_1-n_1',n_2-n_2',d_1-d_1',d_2-d_2')$ and $\frac{n'_2}{n'_1+n'_2}<\frac{n''_2}{n''_1+n''_2}$ (in the case of $\mathcal{S}_{\sigma_c^{+}}$) or $\frac{n'_2}{n'_1+n'_2}>\frac{n''_2}{n''_1+n''_2}$ (in the case of $\mathcal{S}_{\sigma_c^{-}}$). \end{proposition} \begin{proof} The proof of Proposition 5.10 in \cite{BGPG} goes over to this case. The condition $\sigma_c>2g-2$ in \cite[Proposition 5.10]{BGPG} is only needed to conclude the vanishing of $\mathbb{H}^0(T'',T')$ and $\mathbb{H}^2(T'',T')$. Fixing the determinant of the triples $T$ forces the determinant of $T''$ to be fixed once $T'$ is chosen. This reduces by $2g$ the dimension of the moduli space of $\sigma^\pm$-stable triples, and also reduces by $2g$ the dimension of the flip loci. Therefore the formula of the codimension is the same as in \cite[Proposition 5.10]{BGPG}. \end{proof} \section{Codimension estimates} \label{sec:codim-estimates} We are going to apply Proposition \ref{prop:codim-estimate} to the case of $n_2=1$, $n_1=n\geq 2$. Denote $\mathcal{N}_{X,\sigma}=\mathcal{N}_X(\sigma;n,1,L_1,L_2)$. Here $L_1,L_2$ are line bundles of degrees $d_1,d_2$ respectively. (We shall not need to particularise to $L_2=\mathcal{O}$ for the subsequent arguments to work, so we will not do so.) We start by computing codimension estimates for the flip loci $\mathcal{S}_{\sigma_c^\pm}\subset \mathcal{N}_{X,\sigma_c^\pm} = \mathcal{N}_{X,\sigma_c^\pm}^s$. \begin{proposition}\label{prop:Scmas} Suppose $n_2=1$, $n_1=n\geq 2$. Let $\sigma_c$ be a critical value with $\sigma_m < \sigma_c < \sigma_M$. Then \begin{itemize} \item $\codim \mathcal{S}_{\sigma_c^+}\geq 3$, except in the case $n=2$, $g=2$, $d_1$ odd and $\sigma_c=\sigma_m+\frac32$ (in which case $\codim \mathcal{S}_{\sigma_c^+}=2$). \item $\codim \mathcal{S}_{\sigma_c^-}\geq 2$, except for $n=2$ and $\sigma_c=\sigma_M-3$ (in which case $\codim \mathcal{S}_{\sigma_c^-}=1$). Moreover, for $n=2$ we have that $\codim \mathcal{S}_{\sigma_c^-}=2$ only for $\sigma_c=\sigma_M-6$. \end{itemize} \end{proposition} \begin{proof} Let us do the case of $\mathcal{S}_{\sigma_c^+}$ first. The condition $ \frac{n'_2}{n'_1+n'_2}<\frac{n''_2}{n''_1+n''_2} $ implies that $n_2'=0$ and $n_2''=1$. Since $T'$ and $T''$ are $\sigma_c^+$-stable triples which are not isomorphic, we have that $\mathbb{H}^0(T'',T')=\Hom(T'',T')=0$.
By Lemma \ref{lem:H2=0}, it is clear that $\mathbb{H}^2(T'',T')=0$. Proposition \ref{prop:chi(T'',T')} gives (using that $n_2'=d_2'=0$, $n_2''=1$, and paying attention to the fact that the roles of $T'$ and $T''$ are interchanged) $$ -\chi(T',T'') =(g-1)n_1'n_1'' + n_1''d_1'-n_1'd_1''\, . $$ The equality $\mu_{\sigma_c}(T')=\mu_{\sigma_c}(T)$ is rewritten as $$ \frac{d_1'}{n'_1} = \frac{d_1+d_2+\sigma_c}{n_1+1}\, . $$ Now $\sigma_c>\sigma_m=\frac{d_1}{n_1}-d_2$ implies that $$ \frac{d_1'}{n'_1} > \frac{1}{n_1+1}\left( d_1+ d_2+\frac{d_1}{n_1}-d_2 \right)=\frac{d_1}{n_1}\, . $$ So $\frac{d_1'}{n_1'}>\frac{d_1''}{n_1''}$, and hence $n_1''d_1'-n_1'd_1''>0$. This implies $$ -\chi(T',T'') \geq (g-1)n_1'n_1'' + 1 \geq 2\, , $$ using that $n=n_1'+n_1''$, $0<n_1',n_1''<n$. Moreover, except in the case $(n,g)=(2,2)$, we have that $-\chi(T',T'')\geq 3$. For $n=2$, $g=2$, $n_1'=n_1''=1$, $-\chi(T',T'')=d_1'-d_1''+1$, with $d_1'>d_1''$. For $d_1'-d_1''=1$, $d_1$ is odd, $d_1'=(d_1+1)/2$ and $d_1''=(d_1-1)/2$. Therefore $\sigma_c=(d_1+3)/2-d_2=\sigma_m + 3/2$. \medskip Now we turn to the case of $\mathcal{S}_{\sigma_c^-}$. The condition $$ \frac{n'_2}{n'_1+n'_2}>\frac{n''_2}{n''_1+n''_2} $$ implies that $n_2'=1$ and $n_2''=0$. Since $T'$ and $T''$ are $\sigma_c^-$-stable triples which are not isomorphic, we have that $\mathbb{H}^0(T'',T')=\Hom(T'',T')=0$. Lemma \ref{lem:H2=0} guarantees that $\mathbb{H}^2(T'',T')=0$. Proposition \ref{prop:chi(T'',T')} gives (using that $n_2''=d_2''=0$, $n_2'=1$) $$ -\chi(T',T'')= (g-1)n_1''(n_1'-1) + n_1''d_1'-n_1'd_1'' + d_1''-n_1''d_2\, . $$ Denote \begin{equation} \label{eqn:A} A= n_1''d_1'-n_1'd_1'' + d_1''-n_1''d_2\,. \end{equation} We have $$ A =n_1''(d_1-d_2) -d_1''(n_1'+n_1''-1) = n_1''(d_1-d_2)-d_1''(n_1-1)\, . $$ Also $\mu_{\sigma_c}(T'')=\mu_{\sigma_c}(T)$ means that $$ \frac{d_1''}{n_1''}= \frac{d_1+d_2+\sigma_c}{n_1+1}\, , $$ from which $$ A= n_1''(d_1-d_2)-n_1''(n_1-1)\frac{d_1+d_2+\sigma_c}{n_1+1} \, . $$ Now $$ \sigma_c < \sigma_M = \frac{2n_1}{n_1-1}\left(\frac{d_1}{n_1}-d_2\right) $$ so that $$ A > n_1'' (d_1-d_2) - n_1'' \frac{n_1-1}{n_1+1} \left(d_1+d_2+ \frac{2n_1}{n_1-1}\left(\frac{d_1}{n_1}-d_2\right)\right)=0. $$ This gives that $$ -\chi(T',T'')= (g-1)n_1''(n_1'-1) + A \geq 2, $$ in the case $n_1'>1$. In the case $n_1'=1$, we have a more explicit formula $$ -\chi(T',T'')=A= (n_1-1)(d_1'-d_2)>0. $$ Hence $-\chi(T',T'')\geq 2$, for $n\geq 3$ and for $n=2$ and $d_1'-d_2\geq 2$. The only remaining case corresponds to $-\chi(T',T'')=d_1'-d_2=1$, $n_1'=n_1''=1$, $d_1'=d_2+1$, $\sigma_M=2d_1-4d_2$, $\sigma_m=d_1/2-d_2$ and $\sigma_c=3d_1''-d_1-d_2=\sigma_M-3$. Finally, note that for $n=2$, $-\chi(T',T'')=2$ only in the case $d_1'-d_2=2$, which corresponds to $\sigma_c=\sigma_M-6$. \end{proof} \begin{lemma} \label{lem:codim-triples} Let $\sigma_c$ be a critical value. Assume that $\sigma_c< \sigma_L=\sigma_M-3$ in the case $n=2$. Then $\codim(\mathcal{N}_{X,\sigma_c}-\mathcal{N}_{X,\sigma_c}^s)\geq 5$ except in the following cases: \begin{itemize} \item $n=2$, $g=2,3$, $\sigma_c=\sigma_M-6$ and $d_1-2d_2=5$, \item $n=2$, $g=2$, $\sigma_c=\sigma_M-6$ and $d_1-2d_2=6$, \item $n=3$, $g=2$, $\sigma_c=2$ and $d_1-3d_2=4$, \item $n=3$, $g=2$, $\sigma_c=3$ and $d_1-3d_2=5$. \end{itemize} \end{lemma} \begin{proof} By Theorem \ref{thm:pairs}, the dimension of $\mathcal{N}_{X,\sigma}$ is $$ \dim \mathcal{N}_X(\sigma; n,1,L_1,L_2)=(n^2-n-1)(g-1)+d_1-nd_2 -1 \, .
$$ The set $S=\mathcal{N}_{X,\sigma_c}-\mathcal{N}_{X,\sigma_c}^s$ is formed by strictly $\sigma_c$-polystable triples. Therefore it is covered by the images of the sets $$ \mathcal{X}_{L_1,L_2}\subset \mathcal{N}_X^s(\sigma_c;n_1',1,d_1',d_2) \times M_X(n_1'',d_1'')\,, $$ where $d_1=d_1'+d_1''$, $n_1=n_1'+n_1''$, \begin{equation}\label{eqn:1} \frac{d_1''}{n_1''}= \frac{d_1'+d_2+\sigma_c}{n_1'+1} = \frac{d_1+d_2+\sigma_c}{n_1+1} \,, \end{equation} and $\mathcal{X}_{L_1,L_2}$ corresponds to those triples of the form $T=(E_1,L_2,\phi)=(E_1',L_2,\phi)\oplus (E_1'',0,0)$ with fixed determinant $\det(E_1)=L_1$. Therefore $\mathcal{X}_{L_1,L_2}$ is a fibration over $M_X(n_1'',d_1'')$ whose fibers are moduli spaces of $\sigma_c$-stable triples with fixed determinant $\det(E_1')=L_1\otimes \det(E_1'')^{-1}$. Thus $$ \dim \mathcal{X}= (n_1'^2-n_1'-1)(g-1)+d_1'-n_1'd_2-1 + (n_1''^2(g-1)+1). $$ So $$ \begin{aligned} \codim \mathcal{X} = & \, (n^2-n + n_1'-n_1'^2-n_1''^2)(g-1) +d_1-d_1' -(n-n_1')d_2-1 \\ = & \, (2n_1'n_1''-n_1'')(g-1) + d_1'' - n_1''d_2-1 \\ = &\, n_1''(2n_1'-1) (g-1) +d_1''- n_1''d_2-1 \, . \end{aligned} $$ Note that \begin{equation}\label{eqn:A+B} d_1''- n_1''d_2 = (n_1''d_1'-n_1'd_1'' + d_1''- n_1''d_2)+ (n_1'd_1''-n_1''d_1')\,. \end{equation} Define $A= n_1''d_1'-n_1'd_1'' + d_1''-n_1''d_2$ as in (\ref{eqn:A}). Using (\ref{eqn:1}), we get as in the proof of Proposition \ref{prop:Scmas} that $A>0$. Also the inequality $\sigma_c>\frac{d_1'}{n_1'}-d_2>0$ and (\ref{eqn:1}) give $ \frac{d_1''}{n_1''} > \frac{d_1' +d_2 +d_1'/n_1' - d_2}{n_1'+1}=\frac{d_1'}{n_1'} \, , $ hence $B=n_1'd_1''-n_1''d_1'>0$. To prove that $\codim \mathcal{X}\geq 5$ we need to prove that $$ 1+\codim \mathcal{X}= n_1''(2n_1'-1) (g-1) +A+B \geq 6. $$ This is true except possibly in the following cases: \begin{itemize} \item $n=2$. Then $n_1'=1$ and $n_1''=1$. We have that $A=d_1'-d_2$ and $B=d_1''-d_1'$. In this case we assume that $\sigma_c=3d_1''-d_1-d_2<\sigma_L=\sigma_M-3=2d_1-4d_2-3$, which can be rewritten as $A=d_1'-d_2>1$. So $1+\codim \mathcal{X}=(g-1)+A+B\geq 6$ except for $g=2,3$, $A=2$ and $B=1$ (that is, $\sigma_c=\sigma_M-6$ and $d_1-2d_2=5$), and for $g=2$, $A=2$ and $B=2$ (that is, $\sigma_c=\sigma_M-6$ and $d_1-2d_2=6$). \item $n\geq 3$ and $n_1'=1$. Then $n_1''(2n_1'-1) (g-1) =n_1''(g-1)\geq 2$ and $A=n_1''(d_1'-d_2)\geq 2$. So $1+\codim \mathcal{X}\geq 6$ except if $n_1''=2$, $g=2$, $d_1'-d_2=1$ and $B=d_1''-n_1''d_1'=1$. This means that $\sigma_c=2d_1''-d_1-d_2=3d_2-d_1+6$. As $\sigma_M=d_1-3d_2$, $\sigma_m=\frac13 (d_1-3d_2)$ and $\sigma_m<\sigma_c<\sigma_M$, we have that it must be $\sigma_c=2$ and $d_1-3d_2=4$. \item $n\geq 3$ and $n_1'\geq 2$. Then $n_1''(2n_1'-1) (g-1)\geq 3$. So $1+\codim \mathcal{X}\geq 6$ except if $n_1''(2n_1'-1) (g-1)=3$, $A=1$ and $B=1$. This means $n_1'=2$, $n_1''=1$, $g=2$, $A=d_1'-d_1''-d_2=1$, $B=2d_1''-d_1'=1$. So $\sigma_c=4d_1''-d_1-d_2= 3d_2-d_1+8$. As $\sigma_M=d_1-3d_2$, $\sigma_m=\frac13 (d_1-3d_2)$ and $\sigma_m<\sigma_c<\sigma_M$, we have that it must be $\sigma_c=3$ and $d_1-3d_2=5$. \end{itemize} \end{proof} We also need codimension estimates for the families of properly semistable bundles over $X$. \begin{lemma} \label{lem:codimsemist} Let $S$ be a bounded family of isomorphism classes of strictly semistable bundles of rank $n$ and determinant $L_0$. Then $\dim M_X(n,L_0)- \dim S \geq (n-1)(g-1)$. \end{lemma} \begin{proof} We may stratify $S$ according to the ranks and degrees of the elements in the Jordan-H\"older filtration of the bundles.
So we may assume that $S$ consists only of bundles whose Jordan-H\"older filtration has associated graded object $\gr(Q)=\oplus_{i=1}^r Q_i$, with $n_i=\rk Q_i$, $d_i=\deg Q_i$, $\bigotimes_{i=1}^r \det(Q_i)=L_0$, $r\geq 2$. We now use \cite[Proposition 7.9]{BGPMN}, but it has to be modified to take into account that we are fixing the determinant of $Q$. This fixes the determinant of one of the $Q_i$, say $Q_r$. This reduces the dimension stated in \cite[Proposition 7.9]{BGPMN} by $g$. So $$ \dim S\leq \left( \sum n_i^2 +\sum_{i<j} n_in_j\right)(g-1) +1 -g \,. $$ As $\dim M_X(n,L_0)= n^2(g-1) +1 -g$, we have that $$ \dim M_X(n,L_0)- \dim S \geq \sum_{i<j} n_in_j (g-1)\, . $$ The minimum of the right hand side is attained for $r=2$, $n_1=1$, $n_2=n-1$, whence the statement. \end{proof} \begin{lemma} \label{lem:codimMX} Suppose $n\geq 2$, and let $S=M_X(n,L_0)-M_X^s(n,L_0)$ be the locus of strictly polystable bundles. Then $\dim M_X(n,L_0)- \dim S \geq 2(n-1)(g-1)-1$. \end{lemma} \begin{proof} Working as in the proof of Lemma \ref{lem:codimsemist}, we have that if $S$ consists of polystable bundles $Q=\oplus Q_i$, with $n_i=\rk Q_i$, $d_i=\deg Q_i$, $\bigotimes_{i=1}^r \det(Q_i)=L_0$, $r\geq 2$, then $$ \dim S= \sum \left( n_i^2 (g-1) +1 \right) -g \,. $$ As $\dim M_X(n,L_0)= n^2(g-1) +1 -g$, we have that $$ \dim M_X(n,L_0)- \dim S \geq 2 \sum_{i<j} n_in_j (g-1) +1-r\geq 2(n-1)(g-1)-1\, , $$ since the minimum occurs for $r=2$, $n_1=1$, $n_2=n-1$. \end{proof} \section{Cohomology groups of the moduli spaces of pairs} \label{sec:H1-2-3} Now we aim to compute the cohomology groups $H^i(\mathcal{N}_{X,\sigma}^s)$, for $i=1,2,3$. Note that for $\sigma$ non-critical, $\mathcal{N}_{X,\sigma}=\mathcal{N}_{X,\sigma}^s$, so we are actually computing $H^i(\mathcal{N}_{X,\sigma})$. Moreover, in this case $\mathcal{N}_{X,\sigma}$ is smooth and projective, so these are automatically pure Hodge structures. As a byproduct, we shall obtain the cohomology groups $H^i(M^s_X(n,L))$, for $i=1,2,3$. For $n$ and $d$ coprime, $M_X(n,L)=M^s_X(n,L)$, which is smooth and projective. In this case, these cohomology groups are well-known; however, we shall recover them easily from our arguments. For $n$ and $d$ not coprime, the cohomology groups $H^i(M_X^s(n,L))$ seem to be known to experts, but it is difficult to locate them in the literature. We start with a small lemma. \begin{lemma} \label{lem:M-S} Let $M$ be a smooth projective variety and let $S\subset M$ be a closed subset in the Zariski topology. If either: \begin{itemize} \item[(1)] $\codim S\geq 3$, or \item[(2)] $\codim S=2$ and $H^3(M-S)$ is a pure Hodge structure, \end{itemize} then $$ H^i(M) \cong H^i(M-S), $$ for $i\leq 3$. \end{lemma} \begin{proof} Let $m=\dim_\mathbb{C} M$. Using Poincar\'{e} duality, the statement of the lemma is equivalent to proving an isomorphism $H^{2m-i}_c(M)\cong H^{2m-i}_c(M-S)$, where $H^*_c$ stands for cohomology with compact support. This is clear for $i\leq 2$, since $S$ has real codimension at least $4$. For $i=3$, we have an exact sequence $$ H^{2m-4}_c(S) \stackrel{\partial}{\longrightarrow} H^{2m-3}_c(M-S) \longrightarrow H^{2m-3}_c(M) \longrightarrow 0 \, . $$ The group $H_c^{2m-4}(S)$ is generated by the irreducible components $S_i$ of dimension $m-2$ of $S$. To see this, let $N_1=\bigcup_{i\neq j} (S_i\cap S_j)$. Then let $S_i^o\subset S_i$ be the smooth locus of $S_i-N_1$, and consider $N_2=\bigcup (S_i-S_i^o)$.
Then $N=N_1\cup N_2$ is of positive codimension in $S$, so that $H_c^{2m-4}(S)\cong H_c^{2m-4} (S-N)=\bigoplus_i H_c^{2m-4}(S_i^o)$. Note that $H_c^{2m-4}(S)$ is a pure Hodge structure of weight $(m-2,m-2)$. Since $\partial$ preserves the weight of the Hodge structure, and $H_c^{2m-3}(M-S)$ is assumed to be of pure type, we get $\partial=0$. The result follows. \end{proof} \begin{remark}\label{rem:M-S} If $S\subset M$ is of codimension $2$ and $M$ is a smooth projective variety, then in general, $H^3(M-S)$ is a mixed Hodge structure. It has pieces of weight $2$ and $3$, and the piece of weight $3$ satisfies $\Gr_3^W H^3(M-S) \cong H^3(M)$. \end{remark} \begin{proposition} \label{prop:equalH3} Assume $n\geq 3$. Let $\sigma_c$ be a critical value, $\sigma_m <\sigma_c<\sigma_M$. Then $$ H^i(\mathcal{N}_{X,\sigma_c^+}) \cong H^i(\mathcal{N}_{X,\sigma_c^-})\cong H^i(\mathcal{N}_{X,\sigma_c}^s), $$ for $i\leq 3$. So all Hodge structures $H^i(\mathcal{N}_{X,\sigma}^s)$ are naturally isomorphic, for $i\leq 3$. \newline (For $\sigma$ non-critical, we have that $\mathcal{N}_{X,\sigma}=\mathcal{N}_{X,\sigma}^s$, so we are actually talking about $H^i(\mathcal{N}_{X,\sigma})$.) \end{proposition} \begin{proof} By Proposition \ref{prop:Scmas}, $\codim \mathcal{S}_{\sigma_c^+} \geq 3$ and $\codim \mathcal{S}_{\sigma_c^-} \geq 2$. Then Lemma \ref{lem:M-S} applied to $\mathcal{S}_{\sigma_c^+}$ implies that $H^3(\mathcal{N}_{X,\sigma_c^+}) \cong H^3(\mathcal{N}_{X,\sigma_c}^s)$, since $\mathcal{N}_{X,\sigma_c}^s=\mathcal{N}_{X,\sigma_c^+}-\mathcal{S}_{\sigma_c^+}$. In particular, $H^3(\mathcal{N}_{X,\sigma_c}^s)$ is a pure Hodge structure. Applying Lemma \ref{lem:M-S} to $\mathcal{S}_{\sigma_c^-}$, we have $H^3(\mathcal{N}_{X,\sigma_c^-}) \cong H^3(\mathcal{N}_{X,\sigma_c}^s)$. \end{proof} \begin{proposition} \label{prop:case-n=2} Assume $n=2$. The critical values are the numbers $\sigma_c=\sigma_M-3k$, for $0<k<(\sigma_M-\sigma_m)/3$, $k\in\mathbb{Z}$. Denote $\sigma_L=\sigma_M-3$. If $\sigma_L>\sigma_m$ then for any $\sigma\in (\sigma_m, \sigma_L)$, we have $H^1(\mathcal{N}_{X,\sigma}^s)=0$, $H^2(\mathcal{N}_{X,\sigma}^s)\cong \mathbb{Z}\oplus\mathbb{Z}$, and all Hodge structures $H^3(\mathcal{N}_{X,\sigma}^s)$ are naturally isomorphic. Moreover, $$ H^3(\mathcal{N}_{X,\sigma}^s)\cong H^1(X), $$ for any $\sigma \in (\sigma_m, \sigma_L)$, with the exception of $g=2$, $d_1-2d_2=5$ and $\sigma=\sigma_M-6=4$. \newline (For $\sigma$ non-critical, we have that $\mathcal{N}_{X,\sigma}=\mathcal{N}_{X,\sigma}^s$, so we are actually talking about $H^i(\mathcal{N}_{X,\sigma})$.) \end{proposition} \begin{proof} The collection of moduli spaces $\mathcal{N}_{X,\sigma}$ for $n=2$ is described in detail in \cite{Th}. The critical values are given by \cite[Lemma 5.3]{MOV1} to be of the form $\sigma_c=3(d_1-d_2-k) -d_1-d_2= \sigma_M-3k$, with $k>0$ and the constraint $\sigma_m<\sigma_c<\sigma_M$ (this also follows easily from (\ref{eqn:sigmac})). The last moduli space $\mathcal{N}_{X,\sigma_M^-}$ is a projective space $\mathbb{P}$ (see \cite[(3.1)]{Th}, or argue as in the proof of Proposition \ref{prop:last-H3} below: there, $F$ should be a fixed line bundle, since the determinant is fixed, so $\mathcal{N}_{X,\sigma_M^-}=U$ is the projective space $\mathbb{P}=\mathbb{P} H^1(F^*\otimes L_2)$).
By \cite[(3.4)]{Th}, there is an embedding $X \hookrightarrow \mathbb{P}$, given by $p\mapsto [\delta\left((F^*\otimes L_2(p))_p\right)] \in \mathbb{P}$, where $\delta:H^0((F^*\otimes L_2(p))_p)\to H^1(F^*\otimes L_2)$ is the connecting map associated to the exact sequence $$ F^*\otimes L_2 \to F^*\otimes L_2(p) \to (F^*\otimes L_2(p))_p. $$ By \cite[(3.19)]{Th}, the moduli space $\mathcal{N}_{X,\sigma_L^-}$, for $\sigma_L=\sigma_M-3$, is the blow-up of $\mathbb{P}$ along $X$. The usual computation of the cohomology of a blow-up gives that $H^3(\mathcal{N}_{X,\sigma_L^-}) \cong H^1(E)\cong H^1(X)$, where $E$ is the exceptional divisor, which is a projective bundle over $X$. Also $H^2(\mathcal{N}_{X,\sigma_L^-}) \cong H^2(\mathbb{P})\oplus \mathbb{Z}[E] \cong \mathbb{Z}\oplus \mathbb{Z}$ and $H^1(\mathcal{N}_{X,\sigma_L^-}) =0$. Now we have to prove that $$ H^i(\mathcal{N}_{X,\sigma_c^+})\cong H^i(\mathcal{N}_{X,\sigma_c^-})\cong H^i(\mathcal{N}_{X,\sigma_c}^s), $$ for $i\leq 3$ and $\sigma_m<\sigma_c<\sigma_L$. In the case $g\neq 2$ (or in the case $g=2$ and $d_1$ even or $\sigma_c\neq \sigma_m+\frac32$), it follows from the fact that $\codim \mathcal{S}_{\sigma_c^+} \geq 3$ and $\codim \mathcal{S}_{\sigma_c^-} \geq 2$, which follows in turn from Proposition \ref{prop:Scmas}. This is enough to complete the proof of the proposition. For the exceptional case $(n,g)=(2,2)$, $d_1$ odd and $\sigma_c=\sigma_m+\frac32$, we have the following cases: \begin{itemize} \item If $d_1-2d_2=1$ then there are no flips. So there is no such $\sigma_c$ and nothing to prove. (Actually $\mathcal{N}_{X,\sigma}$ is a projective space for all allowable values of $\sigma$.) \item If $d_1-2d_2=3$ then $\sigma_c=\sigma_m+\frac32=\sigma_M-3$. Again there is no such $\sigma_c$. (Note that $\mathcal{N}_{X,\sigma_m^+}=\mathcal{N}_{X,\sigma_L^-}$, so for all $\sigma\in (\sigma_m,\sigma_L)$ we have $\mathcal{N}_{X,\sigma} =\mathcal{N}_{X,\sigma_m^+}$ whose cohomology has been computed above.) \item If $d_1-2d_2\geq 7$ then $\sigma_m+\frac32<\sigma_M-6$. Then for $\sigma_c=\sigma_m+\frac32$, $\codim \mathcal{S}_{\sigma_c^+}=2$ but $\codim \mathcal{S}_{\sigma_c^-}\geq 3$ (see the last line in Proposition \ref{prop:Scmas}). Then $H^3(\mathcal{N}_{X,\sigma_c^+})\cong H^3(\mathcal{N}_{X,\sigma_c^-})$ as required. \item If $d_1-2d_2=5$ then $\sigma_c=\sigma_m+\frac32=\sigma_M-6$. For $\sigma\in (\sigma_M-6,\sigma_M-3)$, we have $\mathcal{N}_{X,\sigma}=\mathcal{N}_{X,\sigma_L^-}$ for which the cohomology is computed above. For $\sigma\in (\sigma_m,\sigma_M-6)$, we have that $\mathcal{N}_{X,\sigma}=\mathcal{N}_{X,\sigma_m^+}$. As $\mu_1-\mu_2>2$, we have that $H^3(\mathcal{N}_{X,\sigma_m^+}) \cong H^3(M_X(2,L_0))$, where $L_0=L_1\otimes L_2^{-2}$ (see the proof of Theorem \ref{thm:H3-bundles-proof}, and note that $\deg(L_0)$ is odd). We can twist $L_0$ by a large power of a line bundle $\mu$ to arrange that $\deg(L_0\otimes \mu^{2k})$ is large. As $M_X(2,L_0)\cong M_X(2, L_0\otimes \mu^{2k})$, we get that $H^3(\mathcal{N}_{X,\sigma_m^+}(2,1,L_1,L_2))$ is isomorphic to $H^3(\mathcal{N}_{X,\sigma_m^+}(2,1,L_1\otimes \mu^{2k},L_2))$. We have already seen that such a cohomology group is isomorphic to $H^1(X)$ for $d_1-2d_2\gg 0$. So $H^3(\mathcal{N}_{X,\sigma_c^-})\cong H^3(\mathcal{N}_{X,\sigma_c^+})$ in this case. \newline The argument fails exactly for the critical value $\sigma_c=\sigma_M-6= \sigma_m+\frac32$.
However, Remark \ref{rem:M-S} implies that $H^3(\mathcal{N}_{X,\sigma_c}^s)$ is a mixed Hodge structure whose $\Gr^3_W$-piece is isomorphic to $H^1(X)$. \end{itemize} \end{proof} Now we shall compute the cohomology groups of $\mathcal{N}_{X,\sigma}^s$ and $M^s_X(n,L_0)$ simultaneously. We will prove the following two theorems. \begin{theorem}\label{thm:H3-bundles-proof} Suppose $n\geq 2$, and $(n,g,d)\neq (2,2,even)$. Then \begin{itemize} \item $H^1(M_X^s(n,L_0))=0$, \item $H^2(M_X^s(n,L_0))\cong \mathbb{Z}$, \item $H^3(M_X^s(n,L_0)) \cong H^1(X)$, except in the cases $(n,g,d)=(3,2,3k)$ and $(n,g,d)=(2,3,2k)$, $k\in \mathbb{Z}$. In these cases, $H^3(M_X^s(n,L_0))$ is a mixed Hodge structure and $\Gr_W^3H^3(M_X^s(n,L_0))\cong H^1(X)$. \end{itemize} (Recall that when $n$ and $d$ are coprime, $M_X(n,L_0)=M_X^s(n,L_0)$.) \end{theorem} \begin{theorem} \label{thm:pairs-all-H3} Assume $n\geq 2$. Let $\sigma\in (\sigma_m,\sigma_M)$ if $n\geq 3$ and $\sigma\in (\sigma_m,\sigma_M-3)$ if $n=2$. Then \begin{itemize} \item $H^1(\mathcal{N}_{X,\sigma}^s) = 0$, \item $H^2(\mathcal{N}_{X,\sigma}^s) \cong \mathbb{Z}\oplus \mathbb{Z}$, \item $H^3(\mathcal{N}_{X,\sigma}^s) \cong H^1(X)$, \end{itemize} except for the cases $(n,g,d_1-2d_2)=(2,2,5)$, $\sigma=\sigma_M-6=4$, and $(n,g,d_1-3d_2)=(3,2,2k)$, $k=1,2,3$. \end{theorem} We prove Theorems \ref{thm:H3-bundles-proof} and \ref{thm:pairs-all-H3} as follows. First, we know that Theorem \ref{thm:pairs-all-H3} is true for $n=2$ by Proposition \ref{prop:case-n=2}. Then we prove Theorem \ref{thm:H3-bundles-proof} for rank $n$ assuming Theorem \ref{thm:pairs-all-H3} for the same rank $n$. Finally we prove Theorem \ref{thm:pairs-all-H3} for rank $n\geq 3$ using Theorem \ref{thm:H3-bundles-proof} for rank $n-1$. \bigskip \noindent {\em Proof of Theorem \ref{thm:H3-bundles-proof}.\/} Since the moduli spaces $M_X(n,L_0)$ and $M_X(n,L_0\otimes \mu^{n})$ are isomorphic via $E\mapsto E\otimes \mu$, for any fixed line bundle $\mu$, we may assume that the degree $d$ is large, say $d>(2g-2)n$. Fix $d_1=d$, $d_2=0$, $L_1=L_0$ and $L_2=\mathcal{O}$, and consider the moduli spaces $\mathcal{N}_{X,\sigma}=\mathcal{N}_X(\sigma; n,1,L_0,\mathcal{O})$. The moduli space $\mathcal{N}_{X,\sigma}$ for the smallest possible values of the parameter can be explicitly described. Let ${\s_m^+}=\sigma_m+\epsilon$, $\epsilon>0$ small enough. By \cite[Proposition 4.10]{MOV1}, there is a morphism $$ \pi:\mathcal{N}_{X,{\s_m^+}} \to M_X(n,L_0) $$ which sends $T=(E,L_2,\phi)\mapsto E$. Let $U=\pi^{-1}(M^s_X(n,L_0))$. By \cite[Proposition 4.10]{MOV1}, $\pi:U\to M_X^s(n,L_0)$ is a projective fibration whose fibers are the projective spaces $\mathbb{P} H^0(E)$, since $d_1/n-d_2 >2g-2$. Therefore $$ \begin{aligned} H^1(U) \cong & \, H^1(M^s_X(n,L_0)), \\ H^2(U)\cong & \, H^2(M^s_X(n,L_0))\oplus \mathbb{Z}, \\ H^3(U) \cong & \, H^3(M^s_X(n,L_0)). \end{aligned} $$ Let us compute the cohomology groups of $U$. The complement $S=\mathcal{N}_{X,{\s_m^+}}-U$ consists of triples $(E,L_2,\phi)$ where $E$ is properly semistable. By Lemma \ref{lem:codimsemist}, the codimension of the family of such bundles is at least $(n-1)(g-1)$. The fiber over $E$ is contained in (but it may not be equal to) $\mathbb{P} H^0(E)$. As $E$ is semistable and $d_1/n-d_2>2g-2$, this dimension is constant. So $\codim S \geq (n-1)(g-1)$. If $(n,g,d)\neq (3,2,3k)$ and $(n,g,d)\neq(2,3,2k)$, $k\in \mathbb{Z}$, then $\codim S\geq 3$.
Then Lemma \ref{lem:M-S} implies that $H^1(U)=H^1(\mathcal{N}_{X,{\s_m^+}})$, $H^2(U)=H^2(\mathcal{N}_{X,{\s_m^+}})$ and $H^3(U)=H^3(\mathcal{N}_{X,{\s_m^+}})$. The result now follows from Theorem \ref{thm:pairs-all-H3} for rank $n$. If either $(n,g,d)=(3,2,3k)$ or $(n,g,d)=(2,3,2k)$, $k\in \mathbb{Z}$, then $\codim S=2$ and we only know by Remark \ref{rem:M-S} that $H^3(U)$ is a mixed Hodge structure with $\Gr_W^3 H^3(U) =H^3(\mathcal{N}_{X,{\s_m^+}})$. \hfill $\Box$ \bigskip \noindent {\em Proof of Theorem \ref{thm:pairs-all-H3}.\/} We assume $n\geq 3$ since the case $n=2$ is covered by Proposition \ref{prop:case-n=2}. Using Proposition \ref{prop:equalH3}, we see that it is enough to prove Proposition \ref{prop:last-H3}. \qquad $\Box$ \begin{proposition} \label{prop:last-H3} Assume $n\geq 3$, and assume that Theorem \ref{thm:H3-bundles-proof} holds for rank $n-1$. Let $\sigma_M^-=\sigma_M-\epsilon$, $\epsilon>0$ small enough. Then \begin{itemize} \item $H^1(\mathcal{N}_{X,\sigma_M^-}) = 0$, \item $H^2(\mathcal{N}_{X,\sigma_M^-}) \cong \mathbb{Z}\oplus \mathbb{Z}$, \item $H^3(\mathcal{N}_{X,\sigma_M^-}) \cong H^1(X)$, \end{itemize} except for the case $(n,g,d_1-3d_2)=(3,2,2k)$, $k=1,2,3$. \end{proposition} \begin{proof} By Propositions 7.5 and 7.6 in \cite{BGPG}, the triples in $\mathcal{N}_{X,\sigma_M^-}$ satisfy that $\phi:L_2\to E_1$ is injective with torsion-free cokernel. Let $F=E_1/\phi(L_2)$. Then there is a short exact sequence $L_2\to E_1\to F$, and $F$ is a semistable bundle. Moreover, the determinant of $F$ is fixed, since $\det(F)=\det(E_1)\otimes L_2^{-1}=L_1\otimes L_2^{-1}$. The extension is always non-trivial. Moreover, if $F$ is stable, then any non-zero extension gives rise to a $\sigma_M^-$-stable triple. By \cite[Proposition 6.9]{BGPG}, the dimension $\dim H^1(F^*\otimes L_2)$ is constant, for $F$ as above. Let $U\subset \mathcal{N}_{X,\sigma_M^-}$ be the open subset formed by those triples with $F$ a stable bundle. Then there is a fibration $$ U\to M_X^s(n-1,L_1\otimes L_2^{-1}) $$ whose fibers are projective spaces $\mathbb{P} H^1(F^*\otimes L_2)$. Therefore $$ \begin{aligned} H^1(U)= & \, 0, \\ H^2(U) \cong &\, H^2(M_X^s(n-1,L_1\otimes L_2^{-1})) \oplus \mathbb{Z}\cong \mathbb{Z}\oplus \mathbb{Z}, \\ H^3(U)\cong &\, H^3(M_X^s(n-1,L_1\otimes L_2^{-1})). \end{aligned} $$ As the dimension $\dim H^1(F^*\otimes L_2)$ is constant, the codimension of $\mathcal{N}_{X,\sigma_M^-}-U$ is at least the codimension of a locus of semistable bundles. By Lemma \ref{lem:codimsemist} applied to $M_X(n-1,L_1\otimes L^{-1}_2)$, this is at least $(n-2)(g-1)$. If $(n-1,g,d_1-d_2)\neq (2,2,2k)$, $(3,2,3k)$, $(2,3,2k)$, then this codimension is at least three. So $H^i(\mathcal{N}_{X,\sigma_M^-})\cong H^i(U)$, for $i\leq 3$. Applying Theorem \ref{thm:H3-bundles-proof} for rank $n-1$ we get the result. If $(n-1,g,d_1-d_2)=(3,2,3k)$ or $(2,3,2k)$, then $\codim (\mathcal{N}_{X,\sigma_M^-}-U)=2$, so $H^3(\mathcal{N}_{X,\sigma_M^-})\cong \Gr_W^3H^3(U)$. But again Theorem \ref{thm:H3-bundles-proof} gives us the result. Suppose finally that $(n,g,d_1-d_2)=(3,2,2k)$, $k\in \mathbb{Z}$. By Proposition \ref{prop:equalH3}, $H^3(\mathcal{N}_{X,\sigma_M^-})\cong H^3(\mathcal{N}_{X,\sigma_m^+})$. By the proof of Theorem \ref{thm:H3-bundles-proof}, $$ H^3(\mathcal{N}_{X,\sigma_m^+}) \cong \Gr_W^3 H^3(M_X^s(3,L_0)), $$ for $d_1/3-d_2>2g-2=2$, $L_0=L_1\otimes L_2^{-3}$ (note that we are not assuming that the right hand side is known). So assume $d_1-3d_2>6$. 
Twist $L_0$ by a line bundle $\mu$ of degree $1$ so as to change $\deg(L_0)$ to $\deg(L_0\otimes \mu^{3k})=\deg(L_0) + 3k$. This allows us to change the parity of $d_1-3d_2$. Therefore $H^3(\mathcal{N}_{X,\sigma_m^+})$ is independent of the parity of $d_1-d_2 \equiv d_1-3d_2 \pmod 2$. Since the case that $d_1-d_2$ is odd is already known, the result follows. \end{proof} \section{Reconstructing the polarisation}\label{sec:polarisation} We want to show that $H^3(M_X^s(n,L_0))$ and $H^3(\mathcal{N}_{X,\sigma}^s)$ have natural polarisations, which make them into polarised Hodge structures. The word ``natural'' means that they are constructed in families. \begin{proposition} \label{prop:polarisation-MX} Suppose $(n,g,d)\neq (2,2,2k)$, $(3,2,3k)$, $(2,3,2k)$. The Hodge structure $H^3(M_X^s(n,L_0))$ is naturally polarised, and the isomorphism $H^3(M_X^s(n,L_0))\cong H^1(X)$ respects the polarisations. \end{proposition} \begin{proof} For $M=M_X^s(n,L_0)$ the polarisation is constructed as follows (see \cite[Section 8]{Ara-Sas}; a similar argument is in \cite[Section 4]{BM}). Let $\overline M=M_X(n,L_0)$. Since $H^2(M)=\mathbb{Z}$, we have that $\Pic(M)=\mathbb{Z}$, so there is a unique ample generator of the Picard group. Take a general $(k-3)$-fold hyperplane section $Z\subset M$, where $k=\dim M$. By Lemma \ref{lem:codimMX}, $\codim(M_X(n,L_0)-M_X^s(n,L_0))\geq 4$, so $Z$ is smooth. Define \begin{equation} \label{eqn:otroo} \begin{aligned} H^3(M)\otimes H^3(M) &\,\longrightarrow \mathbb{Z}\, ,\\ \beta_1\otimes \beta_2 &\, \mapsto \langle \beta_1\cup\beta_2, [Z]\rangle\,. \end{aligned} \end{equation} That this is a polarisation is proved as follows (see \cite[Proposition 6.2.1]{Ara-Sas}): take a generic $(k-4)$-fold hyperplane section $W\subset M$. As $\codim(M_X(n,L_0)-M_X^s(n,L_0))\geq 5$, $W$ is smooth. By the Lefschetz theorem \cite[Theorem 6.1.1]{Ara-Sas} applied to the open smooth variety $M$, we have that $H^3(W)\cong H^3(M)$. Then by hard Lefschetz, cupping with the hyperplane class gives an isomorphism $H^3(W) \stackrel{\cong}{\longrightarrow} H^5(W)\cong H^3(W)^*$. This map coincides with (\ref{eqn:otroo}), which proves that it is a non-degenerate pairing. Let $\pi:\mathcal{X}\to \mathcal{T}$ be the family of curves of genus $g$ with no automorphisms, and let $\theta$ be the standard polarisation on $R^1\pi_*\underline\mathbb{Z}$ corresponding to the cup product. We consider the universal Jacobian $$ q:\mathcal{J}^d\to \mathcal{T}, $$ that is, the family of Jacobians $\mathrm{Jac}^d X$, for $X\in \mathcal{T}$. Then there is a universal moduli space $$ p:\mathcal{M}_\mathcal{T} (n,d) \to \mathcal{J}^d $$ which puts over any $(X,L_0)\in \mathcal{J}^d$ the moduli space $M_X^s(n,L_0)$. If $\underline \mathbb{Z}$ denotes the local system over $\mathcal{M}_\mathcal{T}(n,d)$, then $R^3p_* \underline\mathbb{Z}$ is the local system over $\mathcal{J}^d$ whose fibers are the Hodge structures $H^3(M_X^s(n,L_0))$. Let $\theta'$ be the natural polarisation on $R^3p_*\underline \mathbb{Z}$ as defined above. There is an isomorphism $$ R^3p_*\underline \mathbb{Z} \cong q^*R^1\pi_*\underline\mathbb{Z}. $$ The natural map $\mathcal{J}^d\to M_g$ is dominant. So \cite[Lemma 8.1.1]{Ara-Sas} implies that there exists an integer $m\neq 0$ such that $\theta'=m\,\theta$. As any polarisation is a unique positive multiple of a primitive polarisation, $\theta'$ determines a unique primitive polarisation on $H^3(M_X^s(n,L_0))$ for any $(X,L_0)$.
So the isomorphism $H^3(M_X^s(n,L_0))\cong H^1(X)$ respects the polarisations. \end{proof} \begin{proposition} \label{prop:polarisation-NX} Let $n\geq 2$. Let $\sigma\in (\sigma_m,\sigma_M)$ if $n\geq 3$ and $\sigma\in (\sigma_m,\sigma_M-3)$ if $n=2$. Assume that we are not in any of the cases enumerated in Lemma \ref{lem:codim-triples}, and also that $(n,g,d_1-3d_2)\neq (3,2,2k)$, $k=1,2,3$. Then $H^3(\mathcal{N}_{X,\sigma}^s)$ is naturally polarised, and the isomorphism $H^3(\mathcal{N}_{X,\sigma}^s)\cong H^1(X)$ respects the polarisations. \end{proposition} \begin{proof} Let $N=\mathcal{N}_{X,\sigma}^s=\mathcal{N}_X^s(\sigma;n,1,L_1,L_2)$ and $\overline N=\mathcal{N}_{X,\sigma}=\mathcal{N}_X(\sigma;n,1,L_1,L_2)$. By (\ref{eqn:lll}), we can assume $L_1=L_0$ and $L_2=\mathcal{O}$, with $d=\deg(L_0)=d_1-nd_2$. As $H^2(N)\cong \mathbb{Z}\oplus \mathbb{Z}$, we have that $\Pic(N)\cong \mathbb{Z}\oplus\mathbb{Z}$. Fix a basis $H_1,H_2$ for $\Pic(N)$. For $\sigma$ rational, $\overline N$ is naturally polarised, that is, there are $a,b\in \mathbb{Z}$ such that $H=aH_1+bH_2$ is a (primitive) polarisation of $N$. Take a generic $(k-3)$-fold hyperplane intersection $Z\subset N$. For $\sigma$ non-critical, $N$ is projective and smooth, so $Z$ is smooth. For $\sigma$ critical, $Z$ is smooth as the codimension of the singular locus $\overline N-N$ is at least $4$. Now consider the polarisation \begin{eqnarray*} H^3(N)\otimes H^3(N) &\longrightarrow& \mathbb{Z}\, ,\\ \beta_1\otimes \beta_2 &\mapsto & \langle \beta_1\cup\beta_2, [Z]\rangle\,. \end{eqnarray*} This is a polarisation since $\codim (\overline N-N)\geq 5$, which is proved as in Proposition \ref{prop:polarisation-MX}. Again let $\pi:\mathcal{X}\to \mathcal{T}$ be the family of curves of genus $g$ with no automorphisms, and consider the universal Jacobian $q:\mathcal{J}^d\to \mathcal{T}$. Then there is a universal moduli space $$ p:\mathcal{N}_{\mathcal{T},\sigma}=\mathcal{N}_\mathcal{T} (\sigma;n,1,d,\mathcal{O}) \to \mathcal{J}^d $$ which puts over any $(X,L_0)\in \mathcal{J}^d$ the moduli space $\mathcal{N}_{X}^s(\sigma; n,1,L,\mathcal{O})$. There is a map $\mathcal{N}_{\mathcal{T},\sigma_M^-}\to \mathcal{M}_\mathcal{T}(n-1,d)$ defined on an open subset whose complement has codimension at least two. Pulling back the relative ample generator of $\mathcal{M}_\mathcal{T}(n-1,d)\to \mathcal{J}^d$, we get an element $H_2$ well defined in the family, $H_2\in \Pic(N)$. As $p$ is a projective bundle (off a subset of codimension at least two), there is another element $H_1$ which is well defined in the family, $H_1\in \Pic(N)$. The construction of the flips can be done in families, so $\Pic(N)\cong \mathbb{Z}[H_1]\oplus \mathbb{Z}[H_2]$ with $H_1,H_2$ defined in families. Let $\sigma$ be non-critical. Then $H+\epsilon_1H_1+\epsilon_2 H_2$ is also a polarisation, for small rational $\epsilon_1,\epsilon_2$. It is well-defined globally for the family. So the result \cite[Lemma 8.1.1]{Ara-Sas} gives us a rational number $m=m(\epsilon)$ so that $$ \beta_1\cup \beta_2 \cup (H+\epsilon_1H_1+\epsilon_2 H_2)^{k-3}= m \, \theta (\beta_1,\beta_2)\,\quad \forall \beta_1,\beta_2\in H^3(N). $$ This implies that $\beta_1\cup \beta_2 \cup H_1^a \cup H_2^{k-3-a}= m_a \, \theta$, for some $m_a\in \mathbb{Q}$, for any $0\leq a\leq k-3$. The conclusion is that for all possible polarisations $\theta'$ of $N$ defined in families, we get that $\theta'$ is a multiple of $\theta$. 
If $\sigma$ is critical and there is only one polarisation for $\mathcal{N}_{X,\sigma}^s$, there is nothing to prove. If there are several, then the ample cone contains an open set. So we can work as above to prove that all of the possible polarisations of $\mathcal{N}_{X,\sigma}^s$ give the same polarisation (up to multiples) for $H^3(\mathcal{N}_{X,\sigma}^s)$. \end{proof} \section{The case of non-fixed determinant} \label{sec:non-fixed-det} In this section we shall prove the Torelli theorem for the moduli spaces of pairs and bundles with non-fixed determinant, that is, Corollary \ref{cor:Torelli-non-fixed-det}. We shall use the following lemma. \begin{lemma} \label{lem:fibers} Let $X$ be a projective connected variety and $f:X\to Y$ a map to another (quasi-projective) variety such that $f^*:H^k(Y)\to H^k(X)$ is zero for all $k>0$. Then $f$ is constant. \end{lemma} \begin{proof} Replacing $X$ by an irreducible component, we can assume $X$ is irreducible. Note that if $f$ is constant on each irreducible component, then it is constant, by the connectedness of $X$. Let $d$ be the dimension of a generic fiber of $f$, and consider a generic $d$-fold hyperplane intersection $Z\subset X$, which is transverse to the generic fiber. Then $f|_Z:Z \to Y$ is a proper and generically finite map. Therefore $f|_Z:Z\to f(Z)$ is of finite degree $N>0$, and $f(Z)\subset Y$ is a closed subvariety. Therefore the induced map $H^{2t}(f(Z))\cong \mathbb{Z} \to H^{2t}(Z)\cong \mathbb{Z}$, where $t=\dim Z$, is multiplication by $N$. If $t>0$, the assumption of the lemma implies that $H^{2t}(Y)\to H^{2t}(f(Z))$ should be zero. But this is impossible, since a generic $t$-fold hyperplane intersection in $Y$ maps to a non-zero element in $H^{2t}(f(Z))$. Therefore $t=0$, i.e. $f$ is constant. \end{proof} \noindent \emph{Proof of Corollary \ref{cor:Torelli-non-fixed-det}.\/} Let ${\frak M}_X(\tau;n,d)$ be the moduli space of $\tau$-polystable pairs of rank $n$ and degree $d$. There is a determinant map $$ \mathrm{det} :{\frak M}_X(\tau;n,d)\to \mathrm{Jac}^d X\, , $$ sending $(E,\phi)\mapsto \det(E)$, whose fiber over $L_0$ is the moduli space ${\frak M}_X(n,L_0)$. Assume that $F:{\frak M}_{X}(\tau;n,d)\stackrel{\cong}{\longrightarrow} {\frak M}_{X'}(\tau';n',d')$, for $X'$ another curve. Fix a line bundle $L_0'$ on $X'$ of degree $d'$ and consider the composition $$ f:{\frak M}_{X'}(\tau';n',L_0') \hookrightarrow {\frak M}_{X'}(\tau';n',d') \cong {\frak M}_{X}(\tau;n,d) \to \mathrm{Jac}^d X\, . $$ As $f^*:H^1(\mathrm{Jac}^d X)\to H^1({\frak M}_{X'}(\tau';n',L_0'))=0$ is the zero map, and $H^*(\mathrm{Jac}^d X)$ is generated by $H^1(\mathrm{Jac}^d X)$, we have that the map $f^*:H^k(\mathrm{Jac}^d X)\to H^k({\frak M}_{X'}(\tau';n',L_0'))$ is zero for all $k>0$. Applying Lemma \ref{lem:fibers}, we have that $f$ is constant. Therefore there exists a line bundle $L_0$ on $X$ of degree $d$ such that $F$ maps $M'={\frak M}_{X'}(\tau';n',L_0')$ to $M={\frak M}_{X}(\tau;n,L_0)$. Working analogously, the map $F^{-1}$ maps ${\frak M}_{X}(\tau;n,L_0)$ into some fiber of the map $\det$, which must be $M'$. This implies that $F|_{M'}:M'\to M$ is an isomorphism. Now we apply Corollary \ref{cor:Torelli-fixed-det} to conclude that $X\cong X'$. Suppose that $\tau'$ is a critical value and that there is an isomorphism $F:{\frak M}_{X}^s(\tau;n,d)\cong {\frak M}_{X'}^s(\tau';n',d')$. Then we have a map $$ f:M'={\frak M}_{X'}^s(\tau';n',L_0') \hookrightarrow {\frak M}_{X'}^s(\tau';n',d') \cong {\frak M}_{X}^s(\tau;n,d) \to \mathrm{Jac}^d X\, .
$$ Now take any compactification $\bar M'$ of $M'$. So there is a rational map $f:\bar M'\dashrightarrow \mathrm{Jac}^d X$. After blowing-up, we have a compactification $\tilde M'$ of $M'$ and a map $\tilde{f}:\tilde M'\to \mathrm{Jac}^d X$ which extends $f$. As $H^1(M')=0$, we have that $H^1(\tilde M')=0$ as well. Applying Lemma \ref{lem:fibers} to $\tilde{f}$, we have that $f$ is a constant map. The rest of the argument is as before. The case of bundles is entirely analogous.\hfill $\Box$
\section{INTRODUCTION} The Ising model is one of the simplest models describing a magnetic system: a spin variable that takes two possible values lives on each lattice site and interacts only with its nearest neighbors\cite{nel,par}. It is a well-known fact that the one-dimensional Ising model does not exhibit a phase transition at a finite temperature. In particular, as $T \to 0$ only the magnetic susceptibility diverges, while the specific heat remains finite, giving rise to a peak with a finite height called the Schottky anomaly~\cite{par,tari}. Although the partition function, as well as the average values of various physical quantities of the one-dimensional Ising model, can be obtained analytically using the transfer matrix formalism for an arbitrary lattice size $N$, the usual focus of analysis has been the thermodynamic limit, where the existence or absence of a phase transition can be discussed. However, the finite-size one-dimensional Ising model is interesting as a model of a biopolymer, where the spin variable corresponds to the direction of each bond~\cite{nel}. In this work, I will consider one-dimensional Ising models of finite sizes, in the absence of the magnetic field. In particular, the focus will be on the dependence of the low-temperature behavior on the boundary conditions. It has been shown for the Ising chain that the finite-size scaling of the fluctuations and the distribution of magnetization depends on the boundary condition in the low-temperature limit\cite{racz}. It has also been noted that the leading term in the low-temperature expansion of the free-field Ising chain is different from that of the infinite chain\cite{wang,saul}, if the periodic boundary condition is imposed. Here, I will derive the explicit expression of the low-temperature expansion for a free-field finite Ising chain, which exhibits a strong dependence on the boundary conditions. I will then show that these distinct behaviors are related to different distributions of the zeros of the partition function on the complex temperature plane. In particular, I show that with the periodic boundary, the size scaling of the leading term is directly related to the approach of the zeros toward the origin. This is in contrast to chains with an open boundary, where neither the partition function zeros nor the specific heat per bond has any size dependence. The results suggest that the partition function zeros contain information on the system beyond phase transitions. \section{The model} The one-dimensional Ising model with $N$ lattice sites, in the absence of the magnetic field, is described by the nearest-neighbor Hamiltonian\cite{nel} \begin{eqnarray} H(\{\sigma_k\}) = - J \sum_{i=1}^{N-1} \sigma_i \sigma_{i+1} \end{eqnarray} for the open boundary condition, where each $\sigma_i$ takes the value of $+1$ or $-1$. Here, $J>0$ corresponds to a ferromagnet, and $J<0$ corresponds to an antiferromagnet. For the periodic boundary condition, there is an additional coupling between the last spin and the first spin, so the Hamiltonian reads\cite{par} \begin{eqnarray} {\cal H}(\{\sigma_k\}) = - J \sum_{i=1}^{N-1} \sigma_i \sigma_{i+1} - J \sigma_N \sigma_1. \end{eqnarray} The partition function $Z \equiv \sum_{\{\sigma_k\}} \exp (-\beta {\cal H}(\{\sigma_k\})) $ can be expressed in terms of the transfer matrix\cite{par,nel} \begin{eqnarray} T_{\sigma \tilde \sigma} \equiv e^{\beta J \sigma \tilde \sigma} \end{eqnarray} as \begin{eqnarray} Z_{\rm open} &=& {\bf v}^\dagger {\bf T}^{N-1} {\bf v} \nonumber\\ Z_{\rm periodic} &=& \Tr {\bf T}^{N}.
\end{eqnarray} where the subscripts denote the corresponding boundary conditions, and $\bf v$ is the vector whose components are all 1. The partition functions can be expressed in terms of the two eigenvalues of ${\bf T}$, \begin{eqnarray} \lambda_\pm = e^{\beta J} \pm e^{-\beta J} = y^{1/2} \pm y^{-1/2}, \end{eqnarray} where $y \equiv e^{ 2 \beta J }$. Note that for a ferromagnet ($J>0$), the positive temperature region corresponds to $1 < y < \infty$, whereas for an antiferromagnet ($J<0$), it is $0 < y < 1$. The vector $\bf v$ is the eigenvector of ${\bf T}$ for $\lambda_+$, and the partition functions are \begin{eqnarray} Z_{\rm open} &=& 2 {\lambda_+}^{N-1} = 2 (y^{1/2}+y^{-1/2})^{N-1} = 2 y^{-(N-1)/2} (1+y)^{N-1} \nonumber\\ Z_{\rm periodic} &=& {\lambda_+}^{N} + {\lambda_-}^N = (y^{1/2}+y^{-1/2})^N + (y^{1/2}-y^{-1/2})^N\nonumber\\ &=& y^{-N/2} \left[(y+1)^N + (y-1)^N \right] \label{part} \end{eqnarray} Note that the partition function is invariant under the transformation $y \leftrightarrow y^{-1}$ for the open boundary condition, but the invariance holds only for even values of $N$ for the periodic boundary condition. In the original Hamiltonian, this transformation corresponds to $J \leftrightarrow -J$, which can be compensated by the change of variables $\sigma_{2j} \leftrightarrow -\sigma_{2j}$. Obviously, such an operation cannot be done on a periodic lattice with an odd number of sites, where odd and even positions cannot be defined consistently. The energy is \begin{eqnarray} \langle E \rangle &=& -\frac{\partial}{\partial \beta} \ln Z = - 2 J y \frac{\partial}{\partial y} \ln Z= \left\{ \begin{array}{ll} J [ N-1 - \frac{ 2(N-1) y}{y+1} ] & ({\rm open}),\\ J [ N - \frac{ 2 N y \left( (y+1)^{N-1} + (y-1)^{N-1} \right) }{(y+1)^N + (y-1)^N} ] & ({\rm periodic}) \end{array} \right. \label{DS} \end{eqnarray} Since the numbers of inter-spin bonds are $N-1$ and $N$ with open and periodic boundary conditions, the energies also scale with these quantities. In particular, we note that for the open boundary condition, the energy is strictly proportional to $N-1$. Therefore we compare \begin{eqnarray} \frac{\langle E_{\rm open} \rangle}{(N-1)} &=& J [ 1 - \frac{ 2 y}{y+1}]\nonumber\\ \frac{\langle E_{\rm periodic} \rangle}{N } &=& J [ 1 - \frac{ 2 y \left( (y+1)^{N-1} + (y-1)^{N-1} \right) }{(y+1)^N + (y-1)^N}] \label{Ene} \end{eqnarray} The specific heat per bond is then \begin{eqnarray} \frac{C_{\rm open}}{4 J^2 (N-1)} &=& \frac{1}{k_B T^2 (N-1)}(y \frac{\partial}{\partial y})^2 \ln Z_{\rm open} = \frac{1}{k_B T^2 } \frac{y}{(y+1)^2},\nonumber\\ \frac{C_{\rm periodic}}{4 J^2 N } &=& \frac{1}{k_B T^2 N}(y \frac{\partial}{\partial y})^2 \ln Z_{\rm periodic} \nonumber\\ &=& \frac{1}{k_B T^2 } \frac{ y \left( (y+1)^{2N-2} - (y-1)^{2N-2} + 4(N-1) y (y^2-1)^{N-2} \right) }{[(y+1)^N + (y-1)^N]^2} \label{sh} \end{eqnarray} Note that both the energy per bond and the specific heat per bond are independent of size with the open boundary condition. This is because this model is equivalent to $N-1$ noninteracting Ising spins in a magnetic field. In fact, the elementary excitation in the free-field one-dimensional Ising model is a kink, defined as the boundary at an anti-aligned pair for a ferromagnet and at an aligned pair for an antiferromagnet, each costing an energy of $2 |J|$ (Fig.\ref{kink}).
Since each kink can appear anywhere among the $N-1$ inter-spin bonds, and the total energy is simply the sum of the energies of the kinks, this model can be mapped onto $N-1$ noninteracting Ising spins in a magnetic field, possessing two energy states per site\footnote{The difference between the free-field Ising chain and the noninteracting spin chain under a magnetic field is that the former has a two-fold reflection symmetry. This appears as an overall factor of 2 in the partition function and does not affect the physical quantities.}. We also see that $(y+1)>|y-1|$ for $0 < y < \infty$, and consequently powers of $y+1$ dominate over those of $y-1$ in the limit of $N \to \infty$ in the equations above. Therefore, the intensive physical quantities such as the energy per bond and the specific heat per bond under the periodic boundary condition converge to those under the open boundary condition. \begin{figure} \includegraphics[width=8.0cm]{kink.eps} \caption{Kinks for open chains, for (a) a ferromagnet and (b) an antiferromagnet, shown as dashed lines. For the periodic boundary condition the kinks appear in pairs, shown for (c) a ferromagnet and (d) an antiferromagnet. (e) A periodic antiferromagnetic chain with an odd number of sites has at least one kink, so the total number of kinks is odd. }\label{kink} \end{figure} \section{Low temperature behavior} Assuming nonnegative temperature, $T \to 0$ corresponds to $y \to \infty$ for $J>0$ (ferromagnet) and $y \to 0$ for $J<0$ (antiferromagnet). As $y \to \infty$, the leading behavior of the specific heat is \begin{eqnarray} \frac{C_{\rm open} k_B T^2 }{4 J^2 (N-1)} &\sim& y^{-1} \nonumber\\ \frac{C_{\rm periodic} k_B T^2 }{4 J^2 N } &\sim& 2 (N-1) y^{-2} \label{yinf} \end{eqnarray} On the other hand, as $y \to 0$, \begin{eqnarray} \frac{C_{\rm open} k_B T^2}{4 J^2 (N-1)} &\sim& y \nonumber\\ \frac{C_{\rm periodic} k_B T^2}{4 J^2 N } &\sim& \left\{ \begin{array}{ll} 2 (N-1) y^2 & ({\rm periodic,\ N\ even}) \\ \frac{2}{3} \frac{(N-1)(N-2)}{N} y^2 & ({\rm periodic,\ N\ odd})\end{array} \right. \label{yzero} \end{eqnarray} The results can be summarized as the limiting behavior as $T \to 0$: \begin{eqnarray} \frac{C k_B T^2}{4 J^2 N_b} &\sim& \left\{ \begin{array}{ll} \exp(-2|J|\beta) & ({\rm open}),\\ 2 (N-1) \exp(-4|J|\beta) & ({\rm periodic,}\ J>0\ {\rm or\ N\ even}) \\ \frac{2}{3}\frac{(N-1)(N-2)}{N} \exp(-4|J|\beta) & ({\rm periodic,}\ J<0\ {\rm and\ N\ odd}). \end{array} \right. \label{sh0} \end{eqnarray} where the number of bonds is $N_b = N-1$ for the open boundary and $N_b=N$ for the periodic boundary. This means that as long as $N$ is finite, even when the overall shapes of the intensive quantities under the periodic boundary condition look very similar to those under the open boundary, they will look different if we magnify the low-temperature region. An example is shown in Figure \ref{shLs}, where I show the specific heat per bond for the periodic boundary for $N=10,100,500$ and compare with that for the open boundary, corresponding to $N=\infty$. As shown in Fig.\ref{shLs} (a), $N=100$ approximates $N=\infty$ much better than $N=10$, but the deviation from $N=\infty$ becomes severe as $T\to 0$. As can be seen from the magnified figure, Fig.\ref{shLs}(b), the specific heat under the periodic boundary condition drops more abruptly than that for the open boundary since it falls as $e^{-4 |J| \beta}/T^2$ instead of $e^{-2 |J| \beta}/T^2$. This is why its coefficient must grow with size, in order to reduce the deviation.
The $O(N)$ divergence of this coefficient in the limit of $N \to \infty$ turns $e^{-4 |J| \beta}/T^2$ into $e^{-2 |J| \beta}/T^2$~\cite{wang}. On the other hand, in the case of the spin-glass, the coefficient grows more slowly than this, leading to a different low-temperature behavior in the thermodynamic limit\cite{saul}. The specific heat for $N=500$ is also shown in Fig.\ref{shLs} (b), which is a much better approximation to $N=\infty$ in this temperature range. Again, the deviation from $N=\infty$ reappears if we zoom into the lower temperature region. \begin{figure} \includegraphics[width=15.0cm]{shL.eps} \includegraphics[width=15.0cm]{shs.eps} \caption{The specific heats under the periodic boundary condition for $N=10$, $N=100$ and $N=500$, compared with $N=\infty$ (open boundary).} \label{shLs} \end{figure} These distinct low-temperature behaviors can be understood from the viewpoint of the general theory of the low-temperature behavior of the specific heat\cite{par,tari,saul,wang}. It is clear that if the energy levels are discrete, then in the limit of $T \to 0$ only the ground state and the next excited state are relevant. Let us assume that the energy gap between the ground state and the first excited state is $\Delta E$, and their degeneracies are $g(0)$ and $g(1)$, respectively. We then have in the limit of $T \to 0$, \begin{eqnarray} Z \sim g(0) e^{-\beta E_0} + g(1) e^{-\beta (E_0 +\Delta E)} \end{eqnarray} and \begin{eqnarray} \langle E \rangle &=& -\frac{\partial}{\partial \beta} \ln Z \sim E_0 + \frac{\omega(1) \Delta E e^{-\beta \Delta E}}{1 + \omega(1) e^{-\beta \Delta E}} = E_0 + \frac{\omega(1) \Delta E }{e^{\beta \Delta E} + \omega(1) },\nonumber\\ C &=& -\frac{1}{k_B T^2 }\frac{\partial}{\partial \beta} \langle E \rangle \sim \frac{\omega(1) (\Delta E)^2 e^{\beta \Delta E}}{k_B T^2 \left(e^{\beta \Delta E} + \omega(1)\right)^2 } \sim \frac{\omega(1) (\Delta E)^2 e^{-\beta \Delta E}}{k_B T^2 } \label{schott} \end{eqnarray} where $\omega(1) \equiv g(1)/g(0)$. In the case of the Ising chain, there are two ground states related by reflection symmetry, where all the spins are aligned or anti-aligned depending on whether $J>0$ (ferromagnet) or $J<0$ (antiferromagnet). Under the open boundary condition, the first excited states are generated from these ground states by the creation of one kink, leading to $\Delta E= 2 |J|$~(Fig.\ref{kink} (a,b)). Since there are $N-1$ places between $N$ spins to create a kink, $\omega(1) = N-1$. For periodic boundary conditions, again there are two ground states for a ferromagnet ($J>0$) and for an antiferromagnet ($J<0$) with even $N$. However, this time the kinks are always created in pairs in order to satisfy the boundary condition, so the first excited state is generated by the creation of a kink pair, leading to $\Delta E= 4 |J|$~(Fig.\ref{kink} (c,d)). There are $N(N-1)/2$ places to create such a pair, so $\omega(1) = N(N-1)/2$. An antiferromagnet ($J<0$) with odd $N$ under the periodic boundary condition is quite distinct from the other cases because it is impossible to construct a state where all the spins are anti-aligned. Therefore there is at least one kink present, and since additional kinks are created in pairs, the number of kinks is always odd (Fig.\ref{kink}(e)). Again the creation of a kink pair costs $\Delta E= 4 |J|$. Counting the possible positions to place a kink, we easily see that the number of the ground states is $g(0)=2 N$, and that of the first excited states is $g(1) = 2 N (N-1) (N-2)/6$, so we get $\omega(1) = (N-1)(N-2)/6$.
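These counting arguments can be confirmed by exhaustive enumeration for small chains. The following minimal sketch (written in Python purely for illustration; the function name is ours and not part of the model) tabulates the gap $\Delta E$ and the ratio $\omega(1)=g(1)/g(0)$ entering Eq.(\ref{schott}):
\begin{verbatim}
# Brute-force check of the two lowest energy levels of the free-field chain.
# Energies are in units of |J|; J = +1 (ferromagnet), J = -1 (antiferromagnet).
from itertools import product
from collections import Counter

def gap_and_omega(N, J, periodic):
    counts = Counter()
    for s in product((-1, 1), repeat=N):
        bonds = range(N) if periodic else range(N - 1)
        E = -J * sum(s[i] * s[(i + 1) % N] for i in bonds)
        counts[E] += 1
    (E0, g0), (E1, g1) = sorted(counts.items())[:2]   # two lowest levels
    return E1 - E0, g1 / g0                           # (Delta E, omega(1))

N = 7
print(gap_and_omega(N, J=+1, periodic=False))  # (2, 6.0)  : omega(1) = N-1
print(gap_and_omega(N, J=+1, periodic=True))   # (4, 21.0) : N(N-1)/2
print(gap_and_omega(N, J=-1, periodic=True))   # (4, 5.0)  : (N-1)(N-2)/6
\end{verbatim}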
Substituting these results into Eq.(\ref{schott}) and multiplying by $k_B T^2/4 J^2 N_b$, we reproduce Eq.(\ref{sh0}). \section{Partition Function Zeros} The partition function zeros are a powerful tool for studying phase transitions\cite{YL,Fisher,IPZ,Bo,JK,AH,YJK,B03,Wang,CL,BDL,SYK,JKPS1,JL,Ar00,BE03,A90,prd92,prd12, Bor04,Zhu06,PRC08,PRL,JKPS2}. I will consider the zeros in the complex temperature plane. For a system that undergoes a phase transition in the thermodynamic limit, the zeros approach the positive real axis as the system size grows, which is usually related to the singularity of the specific heat. Defining $z \equiv e^{-\beta \epsilon}$ with some energy scale $\epsilon$ and denoting the zeros in the complex $z$-plane as $z_i$, the partition function can be written as \begin{eqnarray} Z=A(z) \prod_i (z-z_i) \end{eqnarray} where $A(z)$ is a function that is nonvanishing everywhere on the complex plane. Then the specific heat is obtained as \begin{eqnarray} C &=& \frac{\epsilon^2}{k_B T^2} (z\frac{\partial}{\partial z})^2 \ln Z\nonumber\\ &=& \frac{\epsilon^2}{k_B T^2} (z\frac{\partial}{\partial z}) \left( \sum_i \frac{z}{z-z_i} + \frac{z A'(z)}{A(z)} \right)\nonumber\\ &=& \frac{\epsilon^2}{k_B T^2} \left( -\sum_i \frac{z z_i}{(z-z_i)^2} + z\frac{\partial}{\partial z}\left(\frac{z A'(z)}{A(z)}\right)\right). \label{sh2} \end{eqnarray} Therefore we see that if the zeros approach the positive real axis fast enough as the system size grows, the specific heat will blow up. To see this more clearly, we note that since the coefficients of the equation $Z(z)=0$ are real, any roots with nonzero imaginary values form complex conjugate pairs. Writing them as $z_\pm = p \pm iq$, their contribution to $k_B T^2 C/\epsilon^2$ can be written as \begin{eqnarray} -\frac{z z_+}{(z-z_+)^2}-\frac{z z_-}{(z-z_-)^2} &=& \frac{z[-(z_+ + z_-) z^2 + 4 z_+ z_- z - z_+ z_- (z_+ + z_-) ] }{(z-z_+)^2(z-z_-)^2}\nonumber\\ &=& \frac{z(-2 p z^2 + 4 z (p^2 + q^2) - 2 p (p^2+q^2))}{(z^2 - 2 p z + p^2 + q^2)^2} \label{shpq} \end{eqnarray} which becomes increasingly sharp near $z=p$ as $q$ decreases. In particular, we see that at $z=p$ it becomes $\frac{2 p^2}{q^2}$, showing that the height of the specific heat per site $C/N$ blows up if $q$ vanishes faster than $1/N^{1/2}$. All these standard arguments are valid only for $p>0$, and break down if $p=0$. That is, in contrast to zeros that approach the positive real axis, those approaching the origin, $z=0$ or equivalently $T=0$, do not give rise to a singularity in the specific heat. However, as we will see in the case of the free-field Ising chain, the zeros approaching the origin give rise to the $N$-dependence of the coefficients of the low-temperature expansion, Eq.(\ref{sh0}). Let us first consider the free-field Ising chain under the open boundary condition. We are interested in $0< y < 1$ for the case of an antiferromagnet, and $1<y<\infty$ for a ferromagnet, but we note that since the partition function for the open boundary condition in Eq.(\ref{part}) is invariant under the inversion $y \to 1/y$, we may concentrate on $0< y < 1$ without loss of generality. From Eq.(\ref{part}) we see that the corresponding partition function has a pole at $y=0$, which plays no role in the specific heat, since $(y \frac{\partial}{\partial y})^2 \ln y^{-(N-1)/2}=0$. On the other hand there are zeros concentrated at $y=-1$. Their positions do not change, and only their multiplicity, $N-1$, increases with the size. Therefore they do not give rise to singularities of physical quantities.
They only give an overall factor of $N-1$ in front of the energy and the specific heat, since \begin{eqnarray} y \frac{\partial}{\partial y} \ln (y+1)^{N-1}&=&\frac{(N-1) y}{y+1}\nonumber\\ \left(y \frac{\partial}{\partial y}\right)^2 \ln (y+1)^{N-1}&=&\frac{(N-1) y}{(y+1)^2} \end{eqnarray} This is to be expected, since the free-field Ising chain on $N$ sites with the open boundary condition is equivalent to $N-1$ noninteracting two-state spins in a magnetic field. Extensive quantities of such noninteracting particles are simply $N-1$ times those for a single particle, and consequently intensive quantities have no $N$ dependence, leaving no room for singularity. Next we consider the free-field Ising chain under the periodic boundary condition. Let us first consider the ferromagnetic case. Since $1 < y < \infty$, it is convenient to consider the complex plane of its inverse $z \equiv y^{-1}$ so that the zeros in the region $0 < z < 1$ can be investigated. In terms of $z$, the partition function is written as \begin{eqnarray} Z_{\rm periodic} = z^{-N/2} \left[(1+z)^N + (1-z)^N \right] \end{eqnarray} Again, the pole at $z=0$ is irrelevant. Since the remaining factor is a polynomial of order $N$, $Z_{\rm periodic}$ can be rewritten as \begin{eqnarray} Z_{\rm periodic} = 2 z^{-N/2} \prod_{i=1}^N (z - z_i) \end{eqnarray} where $z_i$s $(i=1, \cdots, N)$ denote the $N$ zeros of the partition function. Therefore the $A(z)$-dependent term in the expression Eq.(\ref{sh2}) is absent and the specific heat is proportional to the sum of the terms of the form Eq.(\ref{shpq}) evaluated at the zeros $z_i$. The zeros are obtained by solving \begin{eqnarray} (1+z)^N + (1-z)^N = 0, \end{eqnarray} which is equivalent to \begin{equation} (1+z)=\exp\left(\frac{(2j+1)i\pi}{N}\right)(1-z) \end{equation} with integer values of $j$, leading to \begin{equation} z_j=i \tan \left(\frac{(2j+1)\pi}{2N}\right) \end{equation} Utilizing the relation $\tan (-\theta) = - \tan \theta$, the roots can alternatively be expressed as conjugate pairs lying on the imaginary axis: \begin{eqnarray} z^{\pm}_j = \pm i \tan \left(\frac{(2j+1)\pi}{2 N}\right) \end{eqnarray} where $j=0, \cdots, N/2-1$ for an even value of $N$, and $j=0, \cdots, (N-3)/2$ for an odd value of $N$. Note that there are only $N-1$ zeros in the finite region when $N$ is odd, since the tangent function blows up for $j=(N-1)/2$. From Eq.(\ref{sh2}) with $A(z)=2z^{-N/2}$ and Eq.(\ref{shpq}) with $p=0$, we see that \begin{eqnarray} \frac{k_B T^2 C_{\rm periodic}}{4 J^2} &=& 4 \sum_{0 \leq j < (N-1)/2} \frac{z^2 \tan^2((2j+1)\pi/2 N)}{\left(z^2 + \tan^2((2j+1)\pi/2 N)\right)^2} \label{shz} \end{eqnarray} The leading order term as $z \to 0$ is \begin{eqnarray} \frac{k_B T^2 C_{\rm periodic}}{4 J^2} &\sim& 4 z^2 \sum_{0 \leq j < (N-1)/2} \cot^{2}\frac{(2j+1)\pi}{2 N} \label{sh0z} \end{eqnarray} To find the zeros relevant for antiferromagnets, the partition function is expressed in terms of the $y$ variable: \begin{eqnarray} Z_{\rm periodic} &=& y^{-N/2} \left[(y+1)^N + (y-1)^N \right] = 2 y^{-N/2} \prod_{i=1}^N (y - y_i) \end{eqnarray} where again, $y_i$s $(i=1, \cdots, N)$ denote the $N$ zeros of the partition function in the complex $y$-plane. The zeros are obtained by solving\cite{JKPS2} \begin{eqnarray} (y+1)^N + (y-1)^N = 0, \end{eqnarray} leading to \begin{equation} y_j=i \cot \frac{(2j+1)\pi}{2N} \end{equation} with integer values of $j$.
Utilizing the relations $\cot(\theta)=\tan(\pi/2-\theta)$ and $\tan(-\theta)=-\tan \theta$, we express them as conjugate pairs of zeros lying on the imaginary axis: \begin{eqnarray} y^{\pm}_j = \pm i \tan \frac{(2j+1)\pi}{2 N}\quad (j=0, \cdots, N/2-1) \end{eqnarray} for even values of $N$. On the other hand, for odd values of $N$, in addition to the pairs of zeros \begin{eqnarray} y^{\pm}_j = \pm i \tan \frac{j\pi}{N} \quad (j=1, \cdots, (N-1)/2), \end{eqnarray} there is an additional zero at the origin \begin{eqnarray} y_0 = 0. \end{eqnarray} The zeros for $N=7$ and $N=8$ are shown in Figure \ref{zero} as examples, where it is to be understood that the complex plane is the $z$-plane for ferromagnets and the $y$-plane for antiferromagnets. The zeros approach the origin as the size increases, which causes the coefficient of the low-temperature expansion in Eq.(\ref{sh0}) to grow with size, as shown next. \begin{figure} \includegraphics[width=15.0cm]{zero2.eps} \caption{The zeros for the one-dimensional free-field Ising model under the periodic boundary condition for $N=7$ and $N=8$. Only the region with nonnegative imaginary values is shown, since there is a reflection symmetry with respect to the real axis. FM and AF denote the zeros for a ferromagnet and an antiferromagnet, respectively. The patterns of zeros for FM and AF are the same for an even value of $N$. }\label{zero} \end{figure} From Eq.(\ref{shpq}) with $p=0$, we see that the specific heat can be written as \begin{eqnarray} \frac{k_B T^2 C_{\rm periodic}}{4 J^2} &=& \left\{ \begin{array}{ll} 4 \sum_{j=0}^{N/2-1} \frac{y^2 \tan^2((2j+1)\pi/2 N)}{\left(y^2 + \tan^2((2j+1)\pi/2 N)\right)^2} & ({\rm N\ even}),\\ 4 \sum_{j=1}^{(N-1)/2} \frac{y^2 \tan^2(j\pi/N)}{\left(y^2 + \tan^2(j\pi/N)\right)^2} & ({\rm N\ odd}) \end{array} \right. \label{shy} \end{eqnarray} Note that the zero at the origin for odd $N$ has a vanishing contribution to the specific heat. The leading order term as $y \to 0$ is \begin{eqnarray} \frac{k_B T^2 C_{\rm periodic}}{4 J^2} &\sim& \left\{ \begin{array}{ll} 4 y^2 \sum_{j=0}^{N/2-1} \cot^{2}\frac{(2j+1)\pi}{2 N} & (N\ {\rm even}),\\ 4 y^2 \sum_{j=0}^{(N-3)/2} \cot^{2}\frac{(j+1)\pi}{N} & (N\ {\rm odd}) \end{array} \right. \label{sh0y} \end{eqnarray} Summarizing Eq.(\ref{sh0z}) and (\ref{sh0y}) as the behavior as $T \to 0$, we now get \begin{eqnarray} \frac{k_B T^2 C_{\rm periodic}}{4 J^2} &\sim& \left\{ \begin{array}{ll} 4 e^{-4|J|\beta} \left( \sum_{j=0}^{n-1} \cot^{2}\frac{(2j+1)\pi}{4 n} \right) & (N=2n) \\ 4 e^{-4|J|\beta} \left( \sum_{j=0}^{n-1} \cot^{2}\frac{(2j+1)\pi}{(4n + 2)} \right) & (N=2n+1\ {\rm and\ }J>0) \\ 4 e^{-4|J|\beta} \left( \sum_{j=0}^{n-1} \cot^{2}\frac{(j+1)\pi}{(2 n + 1)} \right) & (N=2n+1\ {\rm and\ }J<0). \end{array} \right. \label{sh0t} \end{eqnarray} where $n$ is an integer. We again confirm that the leading term is proportional to $e^{-4|J|\beta}$, with its coefficient coming from the zeros. By comparing Eq.(\ref{sh0t}) with Eq.(\ref{sh0}) for $N=2n$ and $N=2n+1$, we prove the following summation formulas for series of cotangents: \begin{eqnarray} \sum_{j=0}^{n-1} \cot^2\frac{(2j+1) \pi}{4n+2} &=& n (2n+1),\nonumber\\ \sum_{j=0}^{n-1} \cot^2\frac{(2j+1) \pi}{4n} &=& 3 \sum_{j=0}^{n-1} \cot^2\frac{(j+1) \pi}{2n+1} = n(2n-1). \end{eqnarray} In particular, the growth of the coefficient of $C_{\rm periodic}$ as $O(N^2)$ as $N \to \infty$ is seen to be directly related to the approach of the zeros to the origin with $O(1/N)$.
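Both the locations of the zeros and the cotangent summation formulas can be checked numerically. A minimal sketch (Python with NumPy, shown for illustration only; it assumes Python 3.8+ for \texttt{comb}):
\begin{verbatim}
# Verify the zeros of (1+z)^N + (1-z)^N and the cotangent sums above.
import numpy as np
from math import comb, pi, tan

N = 8   # an even size; np.roots takes coefficients from highest power down
coeffs = [(1 + (-1) ** j) * comb(N, j) for j in range(N, -1, -1)]
roots = np.sort(np.roots(coeffs).imag)   # the zeros are purely imaginary
predicted = np.sort([tan((2 * j + 1) * pi / (2 * N)) for j in range(N)])
assert np.allclose(roots, predicted)

for n in range(1, 10):                   # the three summation formulas
    s1 = sum(1 / tan((2 * j + 1) * pi / (4 * n + 2)) ** 2 for j in range(n))
    s2 = sum(1 / tan((2 * j + 1) * pi / (4 * n)) ** 2 for j in range(n))
    s3 = sum(1 / tan((j + 1) * pi / (2 * n + 1)) ** 2 for j in range(n))
    assert abs(s1 - n * (2 * n + 1)) < 1e-8
    assert abs(s2 - n * (2 * n - 1)) < 1e-8
    assert abs(3 * s3 - n * (2 * n - 1)) < 1e-8
\end{verbatim}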
\section{CONCLUSIONS} In this work, I explicitly derived the expressions for the low-temperature expansion of one-dimensional free-field Ising models. The intensive physical quantities have no size dependence under the open boundary condition, which is related to the fact that the positions of the partition function zeros are fixed in the complex temperature plane. Therefore, the intensive quantities are in exact agreement with those of the infinite chain, including their low-temperature behavior. On the other hand, even the intensive quantities have a size dependence under the periodic boundary condition. Although these quantities approach those of the open boundary condition in the thermodynamic limit, the leading term of the low-temperature expansion is quite distinct from that of the open boundary condition. I have shown that the size dependence of the leading coefficient comes from the partition function zeros approaching the origin on the complex temperature plane. \begin{acknowledgments} This work was supported by the National Research Foundation of Korea, funded by the Ministry of Education, Science, and Technology (NRF-2012M3A9D1054705). \end{acknowledgments}
\section{Introduction} \label{section:introduction} Many applications in control engineering \cite{Keshavarz2011,Panchea2017,Molloy2016,Molloy2018,Yokoyama2017,Maillot2013}, economics \cite{Konstantakopoulos2017}, and robotics \cite{Mombaur2010,Levine2012,Puydupin2012} involve inferring the underlying objectives of agents and systems from their behaviours. Inverse optimal control (or inverse reinforcement learning) is a promising methodology for computing the objectives of control systems from given state and control trajectories, and its recent applications include learning driving styles \cite{Kuderer2015}, modelling human movement \cite{Mombaur2010}, and inferring the intent of aircraft \cite{Yokoyama2017}. Motivated by applications involving systems subject to control constraints and operating indefinitely in real-time (such as vehicles), in this paper we propose a novel method of inverse optimal control for when the optimisation horizon is unknown, the controls are subject to constraints, and the given trajectories are to be processed recursively online. Inverse optimal control is the problem of computing the (unknown) parameters of an optimal control problem's objective function such that given state and control trajectories are optimal (see \cite{Molloy2018,Keshavarz2011,Johnson2013,Pauwels2014} and references therein). In contrast, the standard problem of (forward) optimal control involves finding optimal state and control trajectories given complete knowledge of the objective function. The solution of (forward) optimal control problems with state and/or control constraints has received considerable recent attention, and a variety of efficient solution techniques now exist including the exact penalty method \cite{Li2011,Li2013} and the constraint transcription method \cite{Liu2014,Liu2017,Li2011a} (see also \cite{Yang2016} for a summary of the implementation and use of these and other techniques). In these methods, constrained (forward) optimal control problems that are difficult (or intractable) to solve analytically are solved by employing novel control parameterisation schemes that parameterise the optimal controls as combinations of basis functions. Despite the recent success in solving constrained (forward) optimal control problems, the inverse optimal control problem has received considerably less attention in settings where the states and controls may be subject to constraints and the horizon is unknown and potentially infinite. Under the assumption that the horizon is known and finite, methods of inverse optimal control have been proposed on the basis of bilevel optimisation \cite{Mombaur2010}, Karush-Kuhn-Tucker (KKT) conditions \cite{Keshavarz2011,Puydupin2012,Jin2018}, Pontryagin's minimum principle \cite{Molloy2018,Molloy2016,Johnson2013}, and the Hamilton-Jacobi-Bellman equation \cite{Pauwels2014}. Few of these methods are directly applicable in discrete-time settings when the given trajectories contain active control constraints. For example, neither the recent methods nor performance guarantees of \cite{Molloy2018} and \cite{Jin2018} are applicable when the given trajectories contain active control constraints. Furthermore, the majority of these finite-horizon inverse optimal control methods (including those in \cite{Molloy2018} and \cite{Jin2018}) store and process the given trajectories in batches or in their entirety. 
They therefore lack efficient online implementations and their memory and computational complexities increase with the length of the given trajectories. Methods of inverse optimal control have also been proposed under the assumption that the horizon is infinite \cite{Priess2015,Keshavarz2011,Molloy2018b,Boyd1994,Kalman1964}. As in the finite-horizon case, few (if any) of these inverse methods are applicable when the given trajectories contain active control constraints. Indeed, most existing infinite-horizon inverse methods are limited to unconstrained linear systems with quadratic objective functions \cite{Priess2015,Keshavarz2011,Molloy2018b,Boyd1994,Kalman1964}. For example, the infinite-horizon method of \cite{Priess2015} is wholly reliant on this linear-quadratic structure since it involves computing a feedback gain matrix and then computing the objective-function parameters by solving a system of linear matrix inequalities (see \cite[Section 10.6]{Boyd1994} and references therein). Similarly, the approach of \cite{Keshavarz2011} assumes a quadratic form of the objective function and relies on the very restrictive assumption that the stage function of the optimal control problem is known. Despite these efforts and our recent proposal of a method of infinite-horizon inverse optimal control for discrete-time unconstrained nonlinear systems in \cite{Molloy2018b}, the problem of control-constrained inverse optimal control remains largely unresolved in both finite and infinite horizon settings. The key contribution of this paper is the proposal of a novel method of online discrete-time inverse optimal control that computes objective-function parameters from trajectories with control constraints. A secondary contribution of this paper is the establishment of conditions under which our proposed online method is guaranteed to compute unique objective-function parameters. We develop our method and performance guarantees by establishing a new combined discrete-time minimum principle for both finite and infinite horizon optimal control problems that involves a forward recursion for the costates (rather than the backward recursions present in prior art, cf.~\cite{Molloy2018,Blot2000,Bertsekas2005,Goodwin2006}). By exploiting this combined minimum principle, our method and performance guarantees are applicable to both finite and infinite horizon problems with constrained controls without requiring explicit knowledge of the horizon. In contrast, the recent treatments of discrete-time inverse optimal control in \cite{Molloy2018, Jin2018, Molloy2018b} are specialised to either finite or infinite horizon settings and are only applicable to trajectories without control constraints. Thus, in the finite-horizon setting, our method contrasts with those of \cite{Molloy2018} and \cite{Jin2018} by handling trajectories with constrained controls, having an efficient online implementation, and not requiring prior knowledge of the horizon. In the infinite-horizon setting, in contrast to the method of \cite{Molloy2018b}, our method handles trajectories with constrained controls. This paper is structured as follows. In Section \ref{sec:problem}, we pose online inverse optimal control. In Section \ref{sec:ioc}, we develop a combined minimum principle for both finite and infinite horizon discrete-time optimal control problems and propose our novel method of online inverse optimal control. In Section \ref{sec:performance} we establish performance guarantees for our method. 
We present an illustrative example in Section \ref{sec:examples} and provide conclusions in Section \ref{sec:conclusion}. \section{Problem Formulation} \label{sec:problem} Let us consider the discrete-time deterministic system \begin{equation} \label{eq:dynamics} x_{k+1} = f_k \left( x_k, u_k \right), \quad x_0 \in \mathbb{R}^n \end{equation} for $k \geq 0$ where $f_k : \mathbb{R}^n \times \mathcal{U} \mapsto \mathbb{R}^n$ are continuously differentiable (possibly nonlinear) functions, $x_k \in \mathbb{R}^n$ are state vectors, and $u_k \in \mathcal{U}$ are (potentially multidimensional) control variables belonging to the closed and convex constraint set $\mathcal{U} \subset \mathbb{R}^m$. Let us define the objective function \begin{equation} \label{eq:ocCost} V \left( \x{0}{K}, \u{0}{K}, \theta \right) \triangleq \sum_{k = 0}^{K} \theta' L_k\left(x_k, u_k \right) \end{equation} with the possibly infinite horizon $K > 0$ where $\theta \in \Theta$ is a time-invariant parameter vector from the parameter set $\Theta \subset \mathbb{R}^N$, and $L_k : \mathbb{R}^n \times \mathbb{R}^m \mapsto \mathbb{R}^N$ for $k \geq 0$ are basis functions that are continuously differentiable in both of their arguments. We shall use $^\prime$ to denote the transpose operator, $\x{0}{K}$ to denote the state sequence $\{x_k : 0 \leq k \leq K\}$ and $\u{0}{K}$ to denote the control sequence $\{u_k : 0 \leq k \leq K\}$. In the (well-posed) discrete-time optimal control problem, we solve \begin{align} \label{eq:ocProblem} \begin{aligned} &\underset{\u{0}{K}}{{\inf}} & & V \left( \x{0}{K}, \u{0}{K}, \theta \right) < \infty\\ &\mathrm{s.t.} & & x_{k+1} = f_k (x_k, u_k), \quad k \geq 0 \\ & & & u_k \in \mathcal{U}, \quad k \geq 0\\ & & & x_{0} \in \mathbb{R}^n \end{aligned} \end{align} for the optimal state $\x{0}{K}$ and control $\u{0}{K}$ trajectories given knowledge of the possibly infinite horizon $K$, the dynamics $f_k$, the constraint set $\mathcal{U}$, the time-invariant parameter vector $\theta$, and the basis functions $L_k$. In this paper, we consider the problem of inverse optimal control in which we seek to compute parameter vector $\theta \in \Theta$ of the objective function \eqref{eq:ocCost} such that a (possibly infinite) pair of state and control trajectories $\x{0}{K}$ and $\u{0}{K}$ constitute an optimal solution to the optimal control problem \eqref{eq:ocProblem}. We shall specifically consider a novel \emph{online} inverse optimal control problem in which we seek to compute the parameter vector $\theta$ from a sequence of state and control pairs $(x_k,u_k)$ drawn from the (possibly infinite) trajectories $\x{0}{K}$ and $\u{0}{K}$ without storing and processing the pairs in batches. In this inverse optimal control problem, we assume that we have knowledge of the dynamics $f_k$, basis functions $L_k$, and constraint set $\mathcal{U}$. We note that in contrast to previous formulations of discrete-time inverse optimal control (cf.~\cite{Molloy2018}), our online inverse optimal control problem assumes no prior knowledge of the (possibly infinite) horizon $K$ and prohibits the storage of the trajectories $\x{0}{K}$ and $\u{0}{K}$. On occasion in this paper, we shall make use of the following assumption to differentiate between cases where the trajectories $\x{0}{K}$ and $\u{0}{K}$ constitute a solution to the optimal control problem \eqref{eq:ocProblem} for some $\theta = \theta^* \in \Theta$, and cases where they do not constitute a solution to \eqref{eq:ocProblem} for any $\theta = \theta^* \in \Theta$. 
\begin{assumption}[Forward Optimality] \label{assumption:trueParameters} The trajectories $\x{0}{K}$ and $\u{0}{K}$ constitute a solution to \eqref{eq:ocProblem} with dynamics $f_k$, basis functions $L_k$, constraint set $\mathcal{U}$, unknown (possibly infinite) horizon $K$, and unknown unique objective-function parameter vector $\theta = \theta^* \in \Theta$. \end{assumption} In this paper, we also seek to investigate conditions under which our inverse optimal control problem has a unique solution (especially under Assumption \ref{assumption:trueParameters}). As a first step towards establishing these conditions, we note that scaling the objective function $V$ of the optimal control problem \eqref{eq:ocProblem} by any $r > 0$ does not change the nature of the optimising trajectories $\x{0}{K}$ and $\u{0}{K}$ but does scale the minimum value of the objective function $V$. Thus, an immediate condition necessary (though not sufficient) for our inverse optimal control problem to possess a unique solution is that the parameter set $\Theta$ must not contain both $\theta = \theta^*$ and $\theta = r\theta^*$ for any $\theta^*$ and any $r > 0$ with $r \neq 1$. In this paper we follow existing approaches (cf.~\cite{Molloy2018,Molloy2016,Molloy2018b}), and consider the parameter set to be of the form $\Theta \triangleq \{ \theta \in \mathbb{R}^N : \theta^1 = a\}$ for some scalar $a > 0$. We note that there is no loss of generality with this choice of parameter set and we expect results analogous to those of this paper to hold when the parameter set is instead constructed as the fixed-normalisation set $\Theta = \{ \theta \in \mathbb{R}^{N} : \|\theta\| = a\}$ as in \cite{Albrecht2011} (see also \cite{Molloy2018b} for a comparison of infinite-horizon inverse optimal control results with fixed-element and fixed-normalisation parameter sets). \section{Online Inverse Optimal Control} \label{sec:ioc} In this section, we exploit minimum principles for both finite-horizon and infinite-horizon discrete-time optimal control problems to propose our novel method of online inverse optimal control. \subsection{Finite and Infinite Horizon Minimum Principles} To present the discrete-time minimum principles that we shall exploit, let us define the Hamiltonian function associated with the optimal control problem \eqref{eq:ocProblem} as \begin{equation} \label{eq:hamDef} H_k \left( x_k, u_k, \lambda_{k+1}, \theta \right) \triangleq \theta' L_k\left( x_k, u_k \right) + \lambda_{k+1}' f_k \left( x_k, u_k \right) \end{equation} where $\lambda_k \in \mathbb{R}^n$ for $k \geq 0$ are costate (or adjoint) vectors. Let us also define $\nabla_{x} H_k \left( x_k, u_k, \lambda_{k+1}, \theta \right) \in \mathbb{R}^n$ and $\nabla_{u} H_k \left( x_k, u_k, \lambda_{k+1}, \theta \right) \in \mathbb{R}^m$ as the column vectors of partial derivatives of the Hamiltonian with respect to $x_k$ and $u_k$, respectively, and evaluated at $x_k$, $u_k$, $\lambda_{k+1}$, and $\theta$. We shall similarly use $\nabla_x f_k \in \mathbb{R}^{n \times n}$, $\nabla_u f_k \in \mathbb{R}^{m \times n}$, $\nabla_x L_k \in \mathbb{R}^{n \times N}$, and $\nabla_u L_k \in \mathbb{R}^{m \times N}$ to denote the matrices of partial derivatives of $f_k$ and $L_k$. We also require the following assumption. \begin{assumption}[Jacobian Invertibility] \label{assumption:invertable} The derivative matrix of the dynamics $\nabla_x f_k$ at $(x_k, u_k)$ is invertible for all $k \geq 0$.
\end{assumption} Assumption \ref{assumption:invertable} is potentially restrictive; for example, it corresponds to requiring the invertibility of the state transition matrix in linear systems. However it has previously been used to establish both finite and infinite horizon discrete-time minimum principles (cf.~\cite{Blot2000} and \cite[Theorem 3.3.1]{Goodwin2006}). We shall use Assumption \ref{assumption:invertable} to combine the finite-horizon minimum principle of \cite[Proposition 3.3.2]{Bertsekas2005} with the infinite-horizon minimum principle of \cite[Theorem 2]{Blot2000}. Before we present this combined minimum principle, let us introduce the following definition. \begin{definition}[Inactive Constraint Times] Given the controls $u_k$ for $k \geq 0$, we shall define the \emph{inactive constraint times} up to and including some time $\ell \geq 0$ as the set of times \begin{equation*} \mathcal{K}_\ell \triangleq \{0 \leq k \leq \ell : u_k \in \interior \mathcal{U} \} \end{equation*} where $u_k \in \interior \mathcal{U}$ denotes that the control $u_k$ is in the interior (i.e., not on the boundary) of the control constraint set $\mathcal{U}$. \end{definition} We now present our combined finite and infinite horizon discrete-time minimum principle. \begin{lemma} \label{lemma:minP} If Assumptions \ref{assumption:trueParameters} and \ref{assumption:invertable} hold so that $\x{0}{K}$ and $\u{0}{K}$ constitute a solution to \eqref{eq:ocProblem} with $\theta = \theta^* \in \Theta$ and potentially infinite $K > 0$, then \begin{equation} \label{eq:backwardsInduction} \lambda_k = \nabla_{x} H_k \left( x_k, u_k, \lambda_{k+1}, \theta \right) \end{equation} for all $0 \leq k \leq K$ with $\lambda_{K+1} = 0$ if $K < \infty$ and $\lambda_{K+1}$ undefined if $K = \infty$. Furthermore, \begin{equation} \label{eq:minPrinciple} \nabla_u H_k \left( x_k, u_k, \lambda_{k+1}, \theta \right) = 0 \end{equation} for all $k \in \mathcal{K}_K$ where $\mathcal{K}_K$ are the \emph{inactive constraint times} up to and including time $K$. \end{lemma} \begin{proof} In the case $0 < K < \infty$, \cite[Proposition 3.3.2]{Bertsekas2005} establishes that $\lambda_k$ satisfies \eqref{eq:backwardsInduction} for $0 \leq k \leq K$ with $\lambda_{K+1} = 0$, and that \begin{equation} \label{eq:variationalIneq} \nabla_{u} H_k \left( x_k, u_k, \lambda_{k+1}, \theta \right)' \left( \bar{u} - u_k \right) \geq 0 \end{equation} for all $\bar{u} \in \mathcal{U}$ and all $0 \leq k \leq K$. The variational inequality \eqref{eq:variationalIneq} simplifies to \eqref{eq:minPrinciple} at times $k \in \mathcal{K}_K$ proving the lemma assertion when $K$ is finite. In the case $K = \infty$, \cite[Theorem 2]{Blot2000} under Assumption \ref{assumption:invertable} establishes that $\lambda_k$ satisfies \eqref{eq:backwardsInduction} for $k \geq 0$ without a defined terminal or initial condition, and $u_k$ satisfies the variational inequality \eqref{eq:variationalIneq} for all $\bar{u} \in \mathcal{U}$ and all $k \geq 0$. Again, the variational inequality \eqref{eq:variationalIneq} simplifies to \eqref{eq:minPrinciple} at times $k \in \mathcal{K}_K$ proving the lemma assertion when $K$ is infinite and completing the proof. \end{proof} Lemma \ref{lemma:minP} describes the properties of the costates and the gradients of the Hamiltonian when the state $\x{0}{K}$ and control $\u{0}{K}$ trajectories constitute a solution to the optimal control problem \eqref{eq:ocProblem} with $\theta = \theta^* \in \Theta$ for any (possibly infinite) horizon $K > 0$. 
We note that the terminal boundary condition for $\lambda_{K+1}$ is only defined in the case of a finite horizon $K < \infty$, and no boundary or initial conditions are imposed on $\lambda_k$ in the case of an infinite horizon $K = \infty$ (consistent with the infinite-horizon minimum principle of \cite{Blot2000}). In the following theorem, we shall omit the terminal boundary condition $\lambda_{K+1} = 0$ when $K < \infty$ and use Assumption \ref{assumption:invertable} in order to convert the costate backward recursion \eqref{eq:backwardsInduction} to a forward recursion. We will later exploit our forward recursion to propose our method of online inverse optimal control. \begin{theorem} \label{theorem:linearSystem} If Assumptions \ref{assumption:trueParameters} and \ref{assumption:invertable} hold so that \eqref{eq:ocProblem} is solved by $\x{0}{K}$ and $\u{0}{K}$ with $\theta = \theta^* \in \Theta$ and potentially infinite $K > 0$, then \begin{align} \label{eq:hamSystemeasier} F_k \mathcal{G}_k \alpha &= 0 \end{align} for all $k \in \mathcal{K}_K$ where $\alpha \triangleq [ \theta' \; \lambda_0' ]' $ and \begin{align} \label{eq:fMatrix} F_k \triangleq \begin{bmatrix} \nabla_u L_k && \nabla_u f_k \end{bmatrix}. \end{align} Here, $\mathcal{G}_k \triangleq \prod_{\ell = 0}^{k} G_{\ell}$ is given by the forward recursion \begin{align} \label{eq:gRecursion} \mathcal{G}_k &= G_{k} \times \mathcal{G}_{k-1} \end{align} for $k \geq 1$ with $\mathcal{G}_{0} = G_0$ and \begin{align} \label{eq:gMatrix} G_k \triangleq \begin{bmatrix} I && 0 \\ - \nabla_x f_k^{-1} \nabla_x L_k && \nabla_x f_k^{-1} \end{bmatrix} \in \mathbb{R}^{(n + N) \times (n + N)}. \end{align} \end{theorem} \begin{proof} The definition of the Hamiltonian \eqref{eq:hamDef} combined with the backward recursion \eqref{eq:backwardsInduction} from Lemma \ref{lemma:minP} holding under Assumption \ref{assumption:invertable} implies that \begin{align} \label{eq:forwardRecursion} \lambda_{k+1} &= \nabla_x f_k^{-1} \lambda_{k} - \nabla_x f_k^{-1} \nabla_x L_k \theta \end{align} for all $k \geq 0$ where we have noted that $\nabla_x f_k$ is invertible under Assumption \ref{assumption:invertable}. By defining $z_k \triangleq [ \theta' \; \lambda_k']'$ for $k \geq 0$ and recalling the definitions of $G_k$ and $\mathcal{G}_k$, \eqref{eq:forwardRecursion} may be rewritten as the forward recursion \begin{align}\label{eq:recursive} z_{k+1} = G_k z_k = \mathcal{G}_{k} \alpha \end{align} for $k \geq 0$ where we note that $z_0 = \alpha$. Similarly, applying the definition of the Hamiltonian \eqref{eq:hamDef} to \eqref{eq:minPrinciple} under Assumption \ref{assumption:invertable} gives \begin{align*} 0 &= \nabla_u L_k \theta + \nabla_u f_k \lambda_{k+1}\\ &= F_k \mathcal{G}_{k} \alpha \end{align*} for $k \in \mathcal{K}_K$ where the last line follows from \eqref{eq:recursive}. The theorem assertion follows and the proof is complete. \end{proof} The matrix equation \eqref{eq:hamSystemeasier} summarises both the costate \eqref{eq:backwardsInduction} and Hamiltonian-gradient \eqref{eq:minPrinciple} conditions of Lemma \ref{lemma:minP}. By rewriting the costate backward recursion \eqref{eq:backwardsInduction} as a forward recursion, we have eliminated the costate vectors $\lambda_k$ for $k \geq 1$ from \eqref{eq:hamSystemeasier}.
When Assumption \ref{assumption:trueParameters} holds, we may thus, in principle, solve \eqref{eq:hamSystemeasier} at any time $k \in \mathcal{K}_K$ for the vector $\alpha$ which will yield values of the parameter vector $\theta$ and initial costates $\lambda_0$. However, in practice the matrices $F_k$ and $\mathcal{G}_k$ may be rank deficient and the equality in \eqref{eq:hamSystemeasier} may not hold exactly due to violation of Assumption \ref{assumption:trueParameters}; for example, the given trajectories may not be optimal for any $\theta \in \Theta$ due to misspecified dynamics or basis functions. To handle these situations, we shall next propose an inverse optimal control method by considering sums of squared residuals $\| F_k \mathcal{G}_k \alpha \|^2$. \subsection{Proposed Online Inverse Optimal Control Method} To propose our online inverse optimal control method, let us consider the \emph{inactive constraint times} up to and including time $k \geq 0$, namely, $\mathcal{K}_k$. Under Assumption \ref{assumption:invertable}, let us also define \begin{align}\notag J_k \left( \alpha \right) &\triangleq \sum_{\ell \in \mathcal{K}_k} \left\| F_\ell \mathcal{G}_\ell \alpha \right\|^2\\ \label{eq:ocost} &= \alpha' \mathcal{Q}_k \alpha \end{align} as the sum of squared residuals of \eqref{eq:hamSystemeasier} where \begin{align*} \mathcal{Q}_k &\triangleq \sum_{\ell \in \mathcal{K}_k} \left( F_\ell \mathcal{G}_\ell \right)'\left(F_\ell \mathcal{G}_\ell \right) \end{align*} is a symmetric positive semidefinite matrix. Our proposed method of online inverse optimal control is then to find vectors $\hat{\alpha}_k$ at each time $k \geq 0$ that solve the optimisation problem \begin{align} \label{eq:method} \begin{aligned} &\inf_{\alpha} & & J_k \left( \alpha \right)\\ &\mathrm{s.t.} & & \mathcal{I}\alpha \in \Theta \end{aligned} \end{align} where $\mathcal{I} \triangleq [I \; 0 ] \in \mathbb{R}^{N \times (N + n)}$. The objective-function parameter vector $\hat{\theta}_k$ and initial costates $\hat{\lambda}_0$ computed by our method are then given by $\hat{\theta}_k = \mathcal{I} \hat{\alpha}_k$ and $\hat{\lambda}_0 = \bar{\mathcal{I}} \hat{\alpha}_k$ where $\bar{\mathcal{I}} \triangleq [ 0 \; I ] \in \mathbb{R}^{n \times (N + n)}$. Our method \eqref{eq:method} has an online form in the sense that it can process the pairs $(x_k,u_k)$ for $k \geq 0$ sequentially since $\mathcal{Q}_k$ is given by the recursion \begin{align} \label{eq:qRecursion} \mathcal{Q}_k &= \begin{cases} \mathcal{Q}_{k-1} + \left( F_k \mathcal{G}_k \right)'\left(F_k \mathcal{G}_k \right) & \text{if } u_k \in \interior \, \mathcal{U},\\ \mathcal{Q}_{k-1} & \text{otherwise} \end{cases} \end{align} for $k \geq 0$ where $\mathcal{Q}_{-1} \triangleq 0$ and $\mathcal{G}_k$ is given by the recursion \eqref{eq:gRecursion}. Furthermore, the dimensionality of the optimisation in our method is $N + n$ and does not grow with time. In contrast, the dimensionality of the optimisation problems in existing minimum principle and KKT methods of inverse optimal control grows linearly with the length of the trajectories considered since, for example, they involve optimisation over the entire costate trajectory $\lambda_0, \lambda_1, \ldots, \lambda_k$ \cite{Molloy2018,Keshavarz2011,Puydupin2012,Jin2018}. The key to the constant dimensionality and efficient online form of our method is the forward-recursive expression of the finite and infinite horizon minimum principles established in Theorem \ref{theorem:linearSystem}.
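To make the data flow concrete, the following sketch (Python with NumPy; the Jacobians \texttt{dfx}, \texttt{dfu}, \texttt{dLx}, \texttt{dLu} and the interiority test are assumed to be supplied by the user, so this is an illustration rather than a definitive implementation) carries out the recursions \eqref{eq:gRecursion} and \eqref{eq:qRecursion}:
\begin{verbatim}
# One online step: dimensions follow the paper, with
# dfx: (n,n), dfu: (m,n), dLx: (n,N), dLu: (m,N).
import numpy as np

def G_matrix(dfx, dLx):
    n, N = dLx.shape
    dfx_inv = np.linalg.inv(dfx)             # exists under Assumption 2
    return np.block([[np.eye(N), np.zeros((N, n))],
                     [-dfx_inv @ dLx, dfx_inv]])

def online_step(Q, Gprod, dfx, dfu, dLx, dLu, u_interior):
    Gprod = G_matrix(dfx, dLx) @ Gprod       # G_k applied to the running product
    if u_interior:                           # k is an inactive constraint time
        FG = np.hstack([dLu, dfu]) @ Gprod   # F_k times the product
        Q = Q + FG.T @ FG                    # residual-matrix update
    return Q, Gprod

# Initialise Q as the zero matrix and Gprod as the identity of size N + n,
# then call online_step once per incoming pair (x_k, u_k).
\end{verbatim}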
By minimising the residual cost function \eqref{eq:ocost} over $\alpha$, our method \eqref{eq:method} computes a parameter vector $\hat{\theta}_k$ and initial costates $\hat{\lambda}_0$ that minimise the violation of the minimum principle conditions \eqref{eq:hamSystemeasier}. If Assumption \ref{assumption:trueParameters} holds so that $\x{0}{K}$ and $\u{0}{K}$ are a solution to the optimal control problem \eqref{eq:ocProblem} for $\theta = \theta^* \in \Theta$, then $\hat{\alpha}_k = [\theta^{*\prime} \; \lambda_0']'$ will be one (possibly nonunique) solution to \eqref{eq:method} for all $k \geq 0$. If Assumption \ref{assumption:trueParameters} does not hold so that $\x{0}{K}$ and $\u{0}{K}$ are suboptimal under \eqref{eq:ocCost} for all $\theta \in \Theta$ (e.g., due to noise or misspecified basis functions and dynamics), then our method \eqref{eq:method} will yield parameters $\hat{\theta}_k$ that minimise the extent to which the minimum principle conditions of \eqref{eq:hamSystemeasier} are violated. This extent of violation can be determined online since $J_k ( \hat{\alpha}_k) = \hat{\alpha}_k' \mathcal{Q}_k \hat{\alpha}_k$ will be equal to zero when Assumption \ref{assumption:trueParameters} holds and greater than zero when it does not. We note that if $J_k ( \hat{\alpha}_k)$ is large, the method is unlikely to yield useful parameters without modification of the dynamics or basis functions, or preprocessing the trajectories to remove noise as in \cite{Johnson2013}. Methods for computing parameter vectors that minimise the extent to which the minimum principle conditions of \eqref{eq:hamSystemeasier} are violated have been previously proposed for problems with known finite horizons $K < \infty$ (cf.~\cite{Molloy2018,Keshavarz2011,Puydupin2012}). The key novelty of our method \eqref{eq:method} is that it exploits our novel reformulation of the minimum principle conditions in Theorem \ref{theorem:linearSystem} to yield parameter vectors online without any prior knowledge of the (potentially infinite) horizon $K$ and without storing or processing batches of states and controls. Furthermore, unlike the existing methods of \cite{Molloy2018,Keshavarz2011,Puydupin2012}, our method handles state and control trajectories with active control constraints. We shall next establish conditions under which our method computes a unique parameter vector, and we will further describe its online implementation. \section{Performance Guarantees and Online Implementation} \label{sec:performance} In this section, we present our main result guaranteeing the uniqueness of solutions to our method \eqref{eq:method}. We also describe its online implementation. \subsection{Performance Guarantees} To establish our main performance result, let us define $\bar{\mathcal{Q}}_k \in \mathbb{R}^{(n + N -1) \times (n + N -1)}$ as the principal submatrix of $\mathcal{Q}_k$ formed by removing its first row and first column, and let us also define $ q_k \in \mathbb{R}^{n + N - 1} $ as the first column of $\mathcal{Q}_k$ without its first element. We now present our main result and performance guarantee. \begin{theorem} \label{theorem:conElement} Consider $(x_k,u_k)$ for $k \geq 0$, suppose that Assumption \ref{assumption:invertable} holds, and let $\Theta = \{ \theta \in \mathbb{R}^N : \theta^1 = a\}$ for some $a > 0$.
For any $k \geq 0$, if $\bar{\mathcal{Q}}_k$ has full rank then the unique solution to \eqref{eq:method} is \begin{align} \label{eq:alphaConElement} \hat{\alpha}_k &=a \begin{bmatrix} 1\\ -\bar{\mathcal{Q}}_k^{-1}q_k \end{bmatrix}. \end{align} If, in addition, Assumption \ref{assumption:trueParameters} holds and there exists an $r > 0$ such that $r \theta^* \in \Theta$, then the unique solution to \eqref{eq:method} is \begin{align*} \hat{\alpha}_k =a \begin{bmatrix} 1\\ -\bar{\mathcal{Q}}_k^{-1}q_k \end{bmatrix} = r\begin{bmatrix} \theta^*\\ \lambda_0 \end{bmatrix}. \end{align*} \end{theorem} \begin{proof} Under Assumption \ref{assumption:invertable} and given $\Theta$, the Lagrangian function for \eqref{eq:method} for any $k \geq 0$ is \begin{align*} \mathcal{L}(\alpha,p) = \alpha' \mathcal{Q}_k \alpha + p (\alpha^1 - a) \end{align*} with the Lagrange multiplier $p \in \mathbb{R}$. Noting that $\mathcal{Q}_k$ is symmetric, and letting $c \triangleq p/2$, we have the derivatives \begin{align*} \dfrac{d\mathcal{L}(\alpha,p)}{dp} &= \alpha^1 - a \text{ and } \dfrac{d\mathcal{L}(\alpha,p)}{d\alpha} = 2 \mathcal{Q}_k \alpha + 2c e_1 \end{align*} where $e_1 \in \mathbb{R}^{N + n}$ is an indicator vector with $1$ in its first component and zeros elsewhere. Following the method of Lagrange multipliers, setting $d\mathcal{L}(\hat{\alpha}_k,p)/dp = 0$ leads to $\hat{\alpha}_k^1 = a$, and setting $d\mathcal{L}(\hat{\alpha}_k,p)/d\alpha = 0$ whilst noting that $\mathcal{Q}_k$ is symmetric leads to the system \begin{align*} \begin{bmatrix} \mathcal{Q}_k^{1,1}& q_k' \\ q_k & \bar{\mathcal{Q}}_k \end{bmatrix} \begin{bmatrix} a \\ \bar{\alpha}_k \end{bmatrix} = -c e_1 \end{align*} where $ \bar{\alpha}_k \triangleq [ \hat{\alpha}_k^2 \; \cdots \; \hat{\alpha}_k^{N + n}]' $ and $\mathcal{Q}_k^{1,1}$ is the first element of $\mathcal{Q}_k$. Since $a$ is known, we may discard the first row of both sides and perform straightforward matrix manipulations to obtain the equation $ \bar{\mathcal{Q}}_k \bar{\alpha}_k+ a q_k = 0. $ Since $\bar{\mathcal{Q}}_k $ is invertible under the theorem conditions, we have $ \bar{\alpha}_k = - a \bar{\mathcal{Q}}_k^{-1} q_k $ and the first theorem result follows. To prove the second theorem result we note that \eqref{eq:hamSystemeasier} established in Theorem \ref{theorem:linearSystem} under Assumptions \ref{assumption:trueParameters} \& \ref{assumption:invertable} holds with $\alpha = \alpha^* = r [\theta^{*\prime} \; \lambda_0']'$ for all $r > 0$. Thus, under Assumptions \ref{assumption:trueParameters} \& \ref{assumption:invertable}, $\hat{\alpha} = \alpha^*$ is a solution to \eqref{eq:method}. Recalling the first theorem assertion, $\hat{\alpha} = \alpha^*$ will be the unique solution when $\bar{\mathcal{Q}}_k$ is full rank and $r > 0$ is such that $r\theta^* \in \Theta$. The proof is complete. \end{proof} The first assertion of Theorem \ref{theorem:conElement} establishes that by constraining $\hat{\theta}^1 = a$, our method \eqref{eq:method} is guaranteed to compute a unique parameter vector given by \begin{align} \label{eq:parameters} \hat{\theta}_k &= a \mathcal{I} \begin{bmatrix} 1\\ -\bar{\mathcal{Q}}_k^{-1}q_k \end{bmatrix} \end{align} when the submatrix $\bar{\mathcal{Q}}_k$ has full rank.
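Computationally, \eqref{eq:alphaConElement} is a single linear solve. The following is a minimal sketch (our own helper name, assuming NumPy) with a first-order optimality check on a randomly generated positive semidefinite $\mathcal{Q}$:
\begin{verbatim}
import numpy as np

def solve_alpha(Q, a):
    """Minimise alpha' Q alpha subject to alpha[0] = a (eq. (alphaConElement))."""
    Qbar = Q[1:, 1:]           # Q without its first row and column
    q = Q[1:, 0]               # first column of Q without its first element
    return np.concatenate(([a], -a * np.linalg.solve(Qbar, q)))

# Sanity check: at the optimum, the gradient 2 Q alpha must be parallel to e_1
# (the Lagrange-multiplier direction), i.e. its components 2..(N+n) vanish.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 4))
Q = M.T @ M
alpha_hat = solve_alpha(Q, a=1.0)
assert np.allclose((2 * Q @ alpha_hat)[1:], 0.0, atol=1e-9)
\end{verbatim}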
The second assertion of Theorem \ref{theorem:conElement} establishes that when Assumption \ref{assumption:trueParameters} holds, the unique parameter vector will correspond to a scaled version of the true unknown parameter vector $\theta^*$ provided that the true parameter vector can be re-scaled by $r > 0$ to belong to the set $\Theta$. This scaling condition is typically not restrictive but may require the permutation of the basis functions $L_k$ (in practice via trial and error) to avoid the first element of $\theta^*$ being zero. The first assertion of Theorem \ref{theorem:conElement} holds without Assumption \ref{assumption:trueParameters} and its rank condition involving $\bar{\mathcal{Q}}_k$ is always useful for determining if the sequence of state and control pairs $(x_\ell,u_\ell)$ for $0 \leq \ell \leq k$ provides sufficient information for the parameter vector and initial costate vector to be computed uniquely with our method (regardless of Assumption \ref{assumption:trueParameters}). The rank condition therefore fulfils a role analogous to the persistence of excitation conditions that appear in adaptive control and parameter estimation in which data or experiments are assessed as sufficient or not to yield unambiguous parameter estimates (without considering the properties of the estimates themselves). A similar rank condition is established for finite-horizon inverse optimal control without control constraints in \cite{Molloy2018,Molloy2016} for offline methods that involve storing and processing the full sequences $\x{0}{K}$ and $\u{0}{K}$. Our rank condition of Theorem \ref{theorem:conElement} will fail to hold when the system dynamics or initial conditions are degenerate and lead to uninformative state and control trajectories (e.g., trajectories that are equilibria of the dynamics). It will also fail to hold when too few state and control pairs are used to construct $\bar{\mathcal{Q}}_k$ (e.g., due to a short horizon $K$ or $\mathcal{K}_k$ having too few elements because $k$ is small or the control constraints being active too frequently). If $\bar{\mathcal{Q}}_k$ is rank deficient, it is therefore advantageous in practice to wait for more state and control pairs to be processed before seeking to compute a unique parameter vector $\theta$. The following proposition reinforces this intuition by showing that the rank of $\bar{\mathcal{Q}}_k$ is non-decreasing as more state and control pairs are processed. \begin{proposition} \label{proposition:rank} Consider $(x_k,u_k)$ for $k \geq 0$ and suppose that Assumption \ref{assumption:invertable} holds. Then, \begin{align*} \rank \left( \bar{\mathcal{Q}}_{k-1} \right) \leq \rank \left( \bar{\mathcal{Q}}_k \right) \end{align*} for all $k \geq 1$. \end{proposition} \begin{proof} Consider any $k \geq 1$ and note that $\bar{\mathcal{Q}}_{k}$ and $\bar{\mathcal{Q}}_{k-1}$ exist under Assumption \ref{assumption:invertable}. The proposition holds trivially when $u_k \not\in \interior \, \mathcal{U}$ since \eqref{eq:qRecursion} implies $ \bar{\mathcal{Q}}_{k} = \bar{\mathcal{Q}}_{k-1}. $ If $u_k \in \interior \, \mathcal{U}$, then \eqref{eq:qRecursion} implies that $ \bar{\mathcal{Q}}_{k} = \bar{\mathcal{Q}}_{k-1} + \bar{\mathcal{F}}_k $ where $\bar{\mathcal{F}}_k$ is the product matrix $(F_k\mathcal{G}_k)'(F_k\mathcal{G}_k)$ without its first row and first column.
By noting that $\bar{\mathcal{F}}_k$ is positive semidefinite, we have that \begin{align*} v'\bar{\mathcal{Q}}_{k}v = v'\bar{\mathcal{Q}}_{k-1}v + v'\bar{\mathcal{F}}_kv \geq v'\bar{\mathcal{Q}}_{k-1}v \end{align*} for all $v \in \mathbb{R}^{n + N - 1}$. The null space of $\bar{\mathcal{Q}}_{k}$ is thus a subset of the null space of $\bar{\mathcal{Q}}_{k-1}$ (here we use that $v'Av = 0$ implies $Av = 0$ for a positive semidefinite matrix $A$), and so $ \nullM(\bar{\mathcal{Q}}_{k-1}) \geq \nullM(\bar{\mathcal{Q}}_{k}). $ The rank-nullity theorem then implies that \begin{align*} \rank(\bar{\mathcal{Q}}_{k-1}) &= \rank(\bar{\mathcal{Q}}_{k}) + \nullM(\bar{\mathcal{Q}}_{k}) - \nullM(\bar{\mathcal{Q}}_{k-1})\\ &\leq \rank(\bar{\mathcal{Q}}_{k}) \end{align*} and the proof is complete. \end{proof} An important consequence of Proposition \ref{proposition:rank} is that if the rank condition of Theorem \ref{theorem:conElement} is satisfied at any time $\ell \geq 0$, then it will also be satisfied for all subsequent times $k \geq \ell$. Theorem \ref{theorem:conElement} and Proposition \ref{proposition:rank} together therefore imply that if $\bar{\mathcal{Q}}_\ell$ has full rank for any $\ell \geq 0$, then our method \eqref{eq:method} will have unique solutions \eqref{eq:alphaConElement} for all $k \geq \ell$ under the conditions of Theorem \ref{theorem:conElement}. \subsection{Online Implementation} In light of Theorem \ref{theorem:conElement}, our method \eqref{eq:method} can be implemented by computing $\mathcal{Q}_k$, $\mathcal{G}_k$, and $F_k$ before solving \eqref{eq:parameters}. If $\bar{\mathcal{Q}}_k$ is rank deficient, then we may substitute the inverse of $\bar{\mathcal{Q}}_k$ in \eqref{eq:alphaConElement} with the Moore-Penrose pseudoinverse of $\bar{\mathcal{Q}}_k$ to yield the minimum-norm solution to \eqref{eq:method} or simply set $\hat{\theta}_k = 0$. The online implementation of our method \eqref{eq:method} is summarised in Algorithm \ref{algorithm:method}. \begin{algorithm}[H] \caption{Online Implementation of \eqref{eq:method}} \label{algorithm:method} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE States and controls $(x_k,u_k)$ for $k \geq 0$, dynamics $f_k$, basis functions $L_k$, constraint set $\mathcal{U}$, and parameter set $\Theta = \{ \theta \in \mathbb{R}^N : \theta^1 = a\}$. \ENSURE Parameter vector $\hat{\theta}_k$ for $k \geq 0$. \FOR{$k = 0, 1, \ldots$} \STATE Receive $(x_k,u_k)$. \STATE Compute $G_k$ and $F_k$ with \eqref{eq:gMatrix} and \eqref{eq:fMatrix}. \IF {$k = 0$} \STATE Initialise $\mathcal{G}_0 = G_0$. \IF {$u_0 \in \interior \, \mathcal{U}$} \STATE Initialise $\mathcal{Q}_0 = (F_0\mathcal{G}_0)'(F_0\mathcal{G}_0)$. \ELSE \STATE Initialise $\mathcal{Q}_0 = 0$. \ENDIF \ELSE \STATE Compute $\mathcal{G}_k = G_k \times \mathcal{G}_{k-1}$. \IF {$u_k \in \interior \, \mathcal{U}$} \STATE Compute $\mathcal{Q}_k = \mathcal{Q}_{k-1} + (F_k\mathcal{G}_k)'(F_k\mathcal{G}_k)$. \ELSE \STATE Set $\mathcal{Q}_k = \mathcal{Q}_{k-1}$. \ENDIF \ENDIF \STATE Extract $\bar{\mathcal{Q}}_k$ and $q_k$ from $\mathcal{Q}_k$. \IF {$\rank (\bar{\mathcal{Q}}_k) = n + N - 1$} \STATE Compute unique $\hat{\theta}_k$ with \eqref{eq:parameters}. \ELSE \STATE Set $\hat{\theta}_k = 0$. \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} The memory complexity of Algorithm \ref{algorithm:method} is dominated by the need to store the most recent $\mathcal{Q}_k$, $\mathcal{G}_k$, and $F_k$ which leads to a total memory complexity of $O(m(n + N) + (n + N)^2)$.
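Algorithm \ref{algorithm:method} translates almost line-for-line into code. The following is one possible Python transcription (a sketch with our own interface: \texttt{grads(k, x, u)} is assumed to return the Jacobians $(\nabla_x L_k, \nabla_u L_k, \nabla_x f_k, \nabla_u f_k)$ and \texttt{in\_interior(u)} the constraint test; neither name comes from the paper):
\begin{verbatim}
import numpy as np

def online_ioc(stream, grads, in_interior, a, n, N):
    """Online inverse optimal control, eqs. (method) and (parameters).

    stream      : iterable of (x_k, u_k) pairs, processed sequentially
    grads       : (k, x, u) -> (dLdx (n x N), dLdu (m x N),
                                dfdx (n x n), dfdu (m x n))
    in_interior : u -> bool, True if u lies in the interior of U
    a           : fixed first parameter, Theta = {theta : theta^1 = a}
    Yields theta_hat_k at every k (zeros while Qbar_k is rank deficient).
    """
    Q = np.zeros((N + n, N + n))
    curlyG = None
    for k, (x, u) in enumerate(stream):
        dLdx, dLdu, dfdx, dfdu = grads(k, x, u)
        dfdx_inv = np.linalg.inv(dfdx)                 # Assumption 2
        G = np.block([[np.eye(N), np.zeros((N, n))],
                      [-dfdx_inv @ dLdx, dfdx_inv]])
        curlyG = G if curlyG is None else G @ curlyG   # eq. (gRecursion)
        if in_interior(u):
            FG = np.hstack([dLdu, dfdu]) @ curlyG      # F_k curlyG_k
            Q = Q + FG.T @ FG                          # eq. (qRecursion)
        Qbar, q = Q[1:, 1:], Q[1:, 0]
        if np.linalg.matrix_rank(Qbar) == N + n - 1:
            yield np.concatenate(([a], -a * np.linalg.solve(Qbar, q)))[:N]
        else:
            yield np.zeros(N)                          # no unique solution yet
\end{verbatim}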
The computational complexity of Algorithm \ref{algorithm:method} is similarly dominated by the computation of $\mathcal{Q}_k$, $\mathcal{G}_k$, and the inversion of $\bar{\mathcal{Q}}_k$ in \eqref{eq:parameters} which leads to a computational complexity of $O(m(n+N)^2 + (n+N)^3)$ at each time $k$. In contrast, the total memory complexity of the recently proposed finite-horizon inverse optimal control method of \cite{Molloy2018} is $O((m + n)NK)$ whilst its total computational complexity is $O(n^3K + (mNK)^3)$. The horizon $K$ will typically be greater than the dimensions $m$, $n$ and $N$, and so the complexities of our method will typically be less than those of the method of \cite{Molloy2018} (and those of other methods, e.g. \cite{Keshavarz2011,Puydupin2012,Jin2018}). Importantly from an online implementation perspective, the memory complexity of our method is independent of $K$ whilst the memory complexity of the method of \cite{Molloy2018} is linear in $K$. The computational complexity of the method of \cite{Molloy2018} is also cubic in time $k$ whilst the total computational complexity of our method is only linear in time $k$. \section{Simulation Examples} \label{sec:examples} In this section, we first illustrate our method in a simple illustrative example alongside the current state-of-the-art method of \cite{Molloy2018}. We then consider an application-inspired example that cannot be solved with the method of \cite{Molloy2018} due to the presence of control constraints. \subsection{Illustrative Example} Consider the single integrator $ x_{k+1} = x_k + u_k $ with $x_k \in \mathbb{R}$ for $0 \leq k \leq K$ regulated with an optimal controller designed with the objective function \begin{align*} V \left( \x{0}{K}, \u{0}{K}, \theta \right) &= \sum_{k = 0}^K (x_k)^2 + 5 (u_k)^2. \end{align*} The parameter vector and basis functions of this objective function are $\theta = \theta^* = [1 \; 5]'$ and $L_k = [(x_k)^2 \; (u_k)^2]$, respectively. Thus, $\nabla_u L_k = [0 \; 2u_k]$, $\nabla_x L_k = [2x_k \; 0]$, and Assumption \ref{assumption:invertable} holds with $\nabla_x f_k^{-1} = 1$. For the purpose of illustration, we simulated the optimal state and control trajectories with $K = 10$ and $x_0 = 10$ shown in Fig.~\ref{fig:exampleOptimal} and applied our method \eqref{eq:method} by following Algorithm \ref{algorithm:method}. Specifically, at $k = 0$, we receive $(x_0,u_0) = (10.0,-3.58)$ and so computing $G_0$ and $F_0$, and initialising $\mathcal{G}_0$ and $\mathcal{Q}_0$, leads to \begin{equation*} \bar{\mathcal{Q}}_0 = \begin{bmatrix} 51.2820 & -7.1611\\ -7.1611 & 1.0000 \end{bmatrix}. \end{equation*} Here, $\bar{\mathcal{Q}}_0$ is rank deficient and so there is no unique solution to \eqref{eq:method}. Thus, we set $\hat{\theta}_0 = 0$ and proceed to $k = 1$. At $k = 1$, we receive $(x_1,u_1) = (6.42, -2.30)$ and so computing $G_1$, $F_1$, $\mathcal{G}_1$, and $\mathcal{Q}_1$ leads to \begin{equation*} \bar{\mathcal{Q}}_1 = \begin{bmatrix} 72.3810 & -11.7545\\ -11.7545 & 2.0000 \end{bmatrix} \end{equation*} which is full rank. Thus, noting that $q_1 = [294.1 \; -52.8]'$, the solution of \eqref{eq:parameters} yields the unique parameter vector $\hat{\theta}_1 = [1 \; 5]'$ solving \eqref{eq:method}. Our method \eqref{eq:method} therefore yields the unknown parameter vector $\theta^*$ online from only two pairs of states and controls, $(x_0,u_0)$ and $(x_1,u_1)$, without knowledge of the horizon $K$, and in a time of $1.2$ ms with our MATLAB implementation.
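This experiment is straightforward to reproduce. A sketch (assuming the hypothetical \texttt{online\_ioc} helper from the previous sketch is in scope) that generates the optimal trajectories by a standard backward Riccati recursion, which yields $u_0 \approx -3.58$ for $x_0 = 10$, and then recovers $\theta^*$:
\begin{verbatim}
import numpy as np

# Finite-horizon LQR for x_{k+1} = x_k + u_k, cost sum_k x_k^2 + 5 u_k^2, K = 10.
K, Qw, Rw = 10, 1.0, 5.0
P, gains = 0.0, []                       # P_{K+1} = 0; u_k = -gain_k * x_k
for _ in range(K + 1):                   # backward Riccati recursion
    gains.append(P / (Rw + P))
    P = Qw + P - P ** 2 / (Rw + P)
gains = gains[::-1]

x, data = 10.0, []
for k in range(K + 1):                   # simulate; u_0 comes out near -3.58
    u = -gains[k] * x
    data.append((x, u))
    x = x + u

def grads(k, x, u):
    # Basis L_k = [x^2, u^2] and dynamics f_k = x + u, so n = m = 1, N = 2.
    return (np.array([[2 * x, 0.0]]), np.array([[0.0, 2 * u]]),
            np.array([[1.0]]), np.array([[1.0]]))

thetas = list(online_ioc(data, grads, lambda u: True, a=1.0, n=1, N=2))
print(thetas[0], thetas[1])              # zeros at k = 0; about [1, 5] at k = 1
\end{verbatim}
Under these assumptions, the sketch reproduces the behaviour reported above: $\bar{\mathcal{Q}}_0$ is singular, and the estimate at $k = 1$ already matches $[1 \; 5]'$ up to numerical precision.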
For comparison, we also processed the trajectories of Fig.~\ref{fig:exampleOptimal} with the state-of-the-art method of \cite{Molloy2018} which must process the entire trajectories $\x{0}{10}$ and $\u{0}{10}$, and requires knowledge of the horizon $K = 10$. Whilst the method of \cite{Molloy2018} also computed the unknown parameter vector $\theta^*$, it took $3.4$ ms in our MATLAB implementation (over two times slower than our online method on this short horizon with the same computer system). \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{fig1Illustrative.eps} \caption{Simulated trajectories for illustrative example.} \label{fig:exampleOptimal} \end{figure} \subsection{Application-Inspired Example} We now consider an example inspired by the study of how human pilots fly aircraft (cf.~\cite{Maillot2013}), and how pilot behaviours can be modelled and mimicked with optimal control problems. We specifically consider the regulation of pitch in a fixed-wing aircraft. Let us therefore consider a discrete-time model of aircraft pitch dynamics\footnote{http://ctms.engin.umich.edu/CTMS/index.php} \begin{align*} x_{k+1} &= Ax_k + Bu_k, \; x_0 \in \mathbb{R}^3 \end{align*} for $k \geq 0$ where the three states $x_k^1, x_k^2,$ and $x_k^3$ are the angle of attack (in radians), the aircraft pitch rate (in radians per second), and the aircraft pitch angle (in radians), respectively, and \begin{align*} A &= \begin{bmatrix} 0.9654 & 5.4572 & 0\\ -0.0013 & 0.9545 & 0\\ -0.0038 & 5.5437 & 1 \end{bmatrix} \text{ and } B = \begin{bmatrix} 0.0284 & 0.0142\\ 0.0020 & 0.0010\\ 0.0056 & 0.0028 \end{bmatrix}. \end{align*} Assumption \ref{assumption:invertable} holds with $\nabla_x f_k^{-1} = (A')^{-1}$. The control input vector $u_k = [u_k^1 \; u_k^2]' \in \mathbb{R}^2$ consists of two components, the first $u_k^1$ being the deflection angle of the elevator control surface (in radians) and the second $u_k^2$ being the deflection angle of a second (smaller trim-tab) elevator control surface (in radians). The control inputs are both constrained to the set $\mathcal{U} = \{ u = [u^1 \; u^2]' \in \mathbb{R}^2 : -\Delta \leq u^1,u^2 \leq \Delta \}$ for some constraint-magnitude $\Delta > 0$. In an experimental setting, the controls $u_k$ would be provided by human test subjects. However, for the purpose of illustrating our method, we simulated the system from an initial state of $x_0 = [ 0.5 \; 0 \; 0.2]'$ with a constraint-magnitude of {$\Delta = 0.09$}, regulated by an optimal controller designed with \begin{align*} &V \left( \x{0}{K}, \u{0}{K}, \theta \right)\\ &\quad= \sum_{k = 0}^K x_k' \begin{bmatrix} 1 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & 2 \end{bmatrix} x_k + u_k' \begin{bmatrix} 3 & 0\\ 0 & 6 \end{bmatrix} u_k. \end{align*} Thus, our aim in this example is to recover the parameter vector $\theta = \theta^* = [1 \; 4 \; 2 \; 3 \; 6]'$ (the diagonal elements of the state and control weighting matrices) without knowledge of the horizon (which we simulated as {$K = 250$}). The simulated (optimal) state and control trajectories are shown in Fig.~\ref{fig:optimal} for $k \leq 100$. We see that the control constraints are active in the time interval {$k \in [6,33]$}. We applied our method to the trajectories in Fig.~\ref{fig:optimal} using Algorithm \ref{algorithm:method}. The unique parameter vectors computed by our method for $k \leq 100$ are shown in Fig.~\ref{fig:computed}, and they correspond to the true parameter vector $\theta^*$ for {$k \geq 35$}.
Prior to {$k = 35$}, no parameter vectors are computed due to $\bar{\mathcal{Q}}_k$ being singular (or numerically close to singular) and the control constraints being active for {$k \in [6,33]$}. \begin{figure}[!t] \centering \includegraphics[width=0.99\columnwidth]{fig1} \caption{Simulated trajectories for application-inspired example. The control constraints are active in the shaded region.} \label{fig:optimal} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.99\columnwidth]{fig2} \caption{Objective-function parameter vector $\hat{\theta}_k$ calculated from the trajectories of Fig.~\ref{fig:optimal} using our method \eqref{eq:method}. The true parameter vector is $\theta = \theta^* = [1 \; 4 \; 2 \; 3 \; 6]'$. The control constraints are active in the shaded region.} \label{fig:computed} \end{figure} To study the impact of control constraints on the number of time steps before a unique parameter vector can be computed with our method, we simulated optimal trajectories from an initial state of $x_0 = [0.5 \; 0 \; 0.2]'$ with a horizon of {$K = 250$} and constraint magnitudes of between {$\Delta = 0.07$ and $\Delta = 0.11$}. We applied our method to each of these trajectories. For comparison purposes, we also applied an ad-hoc version of our method in which we wait until after the constraints are active for the last time before initialising the recursions for $\mathcal{Q}_k$ and $\mathcal{G}_k$. That is, the ad-hoc version does not process any states and controls when the constraints are active. In contrast, our proposed method \eqref{eq:method} processes states and controls in the recursion \eqref{eq:gRecursion} for $\mathcal{G}_k$ but not in the recursion \eqref{eq:qRecursion} for $\mathcal{Q}_k$ when the constraints are active. Fig.~\ref{fig:delay} reports the first time at which a unique parameter vector can be computed with both methods versus the duration of time the control constraints were active. The duration of time that the control constraints are active corresponds directly to the constraint magnitude (i.e., the constraints are active for $19$ time steps when $\Delta = 0.11$ compared to $40$ time steps when $\Delta = 0.07$). From Fig.~\ref{fig:delay}, we see that the time required by both methods to compute a unique parameter vector increases with the length of time the constraints are active. However, our proposed method \eqref{eq:method} uniformly computes the unique parameter vector in fewer time steps than the ad-hoc method. The processing of the state and control pairs in our proposed method \eqref{eq:method} with the recursion for $\mathcal{G}_k$ while the control constraints are active is therefore advantageous (despite our method not computing new values of $\mathcal{Q}_k$ or $\hat{\theta}_k$ when the constraints are active). \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{fig3} \caption{Time before a unique objective-function parameter vector is computed versus the duration of time the control constraints are active in our application-inspired example.} \label{fig:delay} \end{figure} \section{Conclusion} \label{sec:conclusion} We consider the problem of online inverse optimal control on possibly infinite horizons in discrete-time systems subject to control constraints. We exploit both finite and infinite horizon discrete-time minimum principles to propose a novel online inverse optimal control method and to establish novel conditions under which it is guaranteed to compute a unique objective-function parameter vector.
We illustrate our method in simulations and demonstrate that it is able to compute unique parameter vectors online from trajectories with constrained controls.
\section{Principles of Statistical Model Building} \begin{quote} `\textit{Part of a meaningful quantitative analysis is to look at models and try to figure out their deficiencies and the ways in which they can be improved.}' \begin{flushright} \vspace{-.14em} {\rm ----Nobel Lecture by Lars Peter \cite{hansen2014nobel}} \end{flushright} \end{quote} \vspace{-.25em} Scientific investigation never happens in a vacuum. It builds upon previously accumulated knowledge instead of starting from scratch. Statistical modeling is no exception to this rule. \vskip.34em Suppose we are given $n$ random samples $X_1,\ldots,X_n$ from an unknown discrete distribution $p(x)$. Before we jump into the statistical analysis part, the scientist provided us a hint on what might be an initial believable model for the data: `from my years of experience working in this field, I expect the underlying distribution to be somewhat close to $p_0(x)$.' This information came with a disclaimer: `don't take $p_0(x)$ too seriously as it is only a simplified approximation of reality. Use it with caution and care.' \vskip.34em The general problem of statistical learning then aims to address the following questions: Is the `shape of the data' consistent with the presumed model-0? If it is not, then what is the underlying distribution? How is it different from $p_0$? Revealing \textit{new} hidden patterns in the data is often the most essential statistical modeling task in science and engineering. Of course, ultimately, the aim is to search for a rich class of sensible models in an automatic and faster manner, by appropriately changing the misspecified $p_0$. Knowing \textit{how} to change the anticipated $p_0$ is the first step towards scientific discovery that allows scientists to re-evaluate alternative theories to explain the data. If we succeed at this, it will provide a mechanism to build ``hybrid'' knowledge-data integrated models, which are far more interpretable than classical fully data-driven nonparametric models. Full development of these ideas requires a new conceptual framework and mathematical tools. \vskip.34em \textit{Organization}. Section \ref{sec:theory} introduces a new family of nonparametric approximation and smoothing techniques for discrete probability distributions, which is built on the principle of `Density Sharpening.' Section \ref{sec:app} highlights the role of the proposed theoretical framework in the development of statistical methods rich enough to include traditional as well as contemporary techniques: starting from as simple as the one-sample Z-test for a proportion to as sophisticated as compressive chi-square, $d$-sharp negative binomial distribution, universal goodness-of-fit program, relative entropy estimation, Jaynes dice problem, sample-efficient learning of big distributions, etc. The paper ends with a discussion and conclusion in Section \ref{sec:diss}. Additional applications and methodological details are deferred to the Supplementary Appendix to ensure the smooth flow of the main ideas. \section{Density Sharpening: Model and Mechanism} \label{sec:theory} We describe a method of nonparametric approximation of a discrete distribution \textit{by} sharpening the initially assumed $p_0(x)$. The theory is remarkably simple, yet general enough to be vastly applicable in many areas \textit{beyond} density estimation, as described in Section \ref{sec:app}. Here is a bird's eye view of the core mechanism, which is a three-stage process. \vskip.35em \texttt{Stage 1}.
Model-0 elicitation: The modeler starts with a suitable $p_0(x)$ by using his/her experience or subject-matter knowledge. Often a particular parametric form of $p_0(x)$ is selected keeping convenience and simplicity in mind. \vspace{-.1em} \texttt{Stage 2}. Exploratory uncertainty analysis: Assess the uncertainty of the presumed model $p_0(x)$, in a way that can explain `why and how' the assumed model-0 (i.e., $p_0$) is inadequate for the data. \vspace{-.1em} \texttt{Stage 3}. Coarse-to-Refined density: Incorporate the `learned' uncertainty into $p_0(x)$ to produce an improved model $\hp(x)$ that will eliminate the incompatibility with the data. \vskip.65em The required theory is developed in the next few sections, which heavily relies on the following notation: let $X$ be a discrete variable with probability mass function $p_0(x)$, cumulative distribution function $F_0(x)$, and mid-distribution function $\Fmn(x)=F_0(x) - \frac{1}{2}p_0(x)$. The associated quantile function will be denoted by $Q_0(u)=\inf\{x: F_0(x) \ge u\}$ for $0<u<1$. By $\cL^2(dF_0)$ we mean the set of all square integrable functions with respect to the discrete measure $\dd F_0$, i.e., for a function $\psi \in \cL^2(dF_0)$: $\int |\psi|^2 \dd F_0 := \sum_x |\psi(x)|^2 p_0(x) < \infty$. The inner product of two functions $\psi_1$ and $\psi_2$ in $\cL^2(dF_0)$ will be denoted by $\langle \psi_1, \psi_2 \rangle_{F_0}:=\int \psi_1 \psi_2 \dd F_0$. Expectation with respect to $p_0(x)$ will be abbreviated as $\Ex_0(\psi(X)) :=\int \psi \dd F_0$. \subsection{Learning by Comparison: $d$-Sharp Density} We introduce a mechanism for nonparametrically estimating the density of $X_1,\ldots, X_n$ \textit{by comparing and sharpening} the presumed working model $p_0(x)$. \vskip.3em \begin{defn}[$d$-Sharp Density] For a discrete random variable $X$, we have the following universal density decomposition: \beq \label{eq:gd} p(x)\,=\,p_0(x)\,d\big(F_0(x);F_0,F\big), \eeq where the $d(u;F_0,F)$ is defined as \beq d(u;F_0,F)= \dfrac{p(Q_0(u))}{p_0(Q_0(u))}, ~\,0<u<1.\eeq The function $d(u;F_0,F)$ is called the `comparison density' because it \textit{compares} the assumed $p_0$ with the true $p(x)$ and it integrates to one: \[\int _0^1 d(u;F_0,F)\dd u \,=\, \int_x d(F_0(x);F_0,F) \dd F_0(x) \,=\,\sum_x \big(p(x)/p_0(x)\big) p_0(x)\,=\, 1. ~~\] For brevity's sake, we will often abbreviate $d(F_0(x);F_0,F)$ as $d_0(x)$ throughout the article. \end{defn} \begin{rem}[The philosophy of `\textit{learning by comparison}'] The density representation formula \eqref{eq:gd} describes a way of building a general $p(x)$ \textit{by comparing it with} the initial $p_0(x)$. The $d$-modulated class of distributions is constructed by amending (instead of abandoning) the starting imprecise model $p_0(x)$. \end{rem} \begin{rem}[$d$-sharp density] Eq. \eqref{eq:gd} provides a formal statistical mechanism for sharpening the initial vague $p_0(x)$ using the data-guided perturbation function $d_0(x)$. For this reason, we call the improved $p_0(x) \times d_0(x)$ the `$d$-sharp' density. \end{rem} \begin{example}[Earthquake Data]\label{example1} We are given annual counts of major earthquakes (magnitude 6 and above) for the years 1900-2006. It is available in the R package \texttt{astsa}. Seismic engineers routinely use the negative binomial distribution for modeling earthquake frequency \citep{kagan2000prob,kagan2010}. The best-fitted negative binomial (NB) distribution, with $\mu=19$ and $\phi=12$, is shown in Fig \ref{fig:earthq}; we take it as our rough initial model $p_0(x)$.
From the figure, it is clearly evident that the conventional NB distribution is unable to adequately capture the shape of the earthquake count data. \vskip.3em {\bf Model uncertainty quantification}. For earthquake engineers it is of utmost importance to determine the uncertainty of the assumed NB model \citep{bernreuter1981seismic}. The problem of uncertainty quantification is the holy grail of earthquake science due to its importance in estimating risk and in making an accurate forecast of the next big earthquake. The comparison density $d(u;F_0,F)$ captures the uncertainty of the assumed NB model. The left plot in Fig \ref{fig:earthq} displays the estimated $\whd(u;F_0,F)$ for this data, which reveals the nature of the deficiency of the base NB model. A robust nonparametric method of estimating $\whd$ from data will be discussed in the subsequent sections. But before going into the nonparametric approximation theory, we will spend some time on its interpretation. \begin{figure}[ ] \vspace{-1.1em} \centering \includegraphics[width=.46\linewidth,keepaspectratio,trim=1.15cm 1cm 1cm 1cm]{Figs/Earthquake-CompDensity.png}~~~~~~ \includegraphics[width=.46\linewidth,keepaspectratio,trim=1cm 1cm 1.15cm 1cm]{Figs/Earthquake-2desnity.png} \vskip.3em \caption{Modeling the earthquakes distribution. Left: Estimated comparison density $\whd(u;F_0,F)$; Right: The fitted NB distribution and the re-calibrated $d$-sharpened version.}\label{fig:earthq} \end{figure} \vskip.2em {\bf Interpretable Exploratory learning}. In our model \eqref{eq:gd}, $d$ plays the role of a data-driven correction function, measuring the discrepancy between the initial $p_0$ and the unknown $p$. Thus, the non-uniformity of $\whd$ immediately tells us that there's something more in the data than what was expected in light of $p_0(x)$. In fact, the shape of $\whd$ reveals the nature of the most prominent deviations between the data and the presumed $p_0(x)$---which, in this case, are bimodality and the presence of a heavier tail than the anticipated NB distribution. \vspace{-.3em} \end{example} \begin{rem}[Role of $d$] The comparison density $d$ performs dual functions: (i) its graph acts as an exploratory diagnostic tool that exposes the unexpected, forcing decision makers (e.g., legislators, national security agencies, local administrations) to think outside the box: what might have caused this bimodality? how can we repair the old seismic hazard forecast model so that it incorporates this new information? etc. (ii) it provides a formal process of transforming and revising an initially misspecified model into a useful one. The red curve in the right panel is obtained by multiplying (perturbing) the NB pmf with the estimated comparison density, obeying the density representation formula Eq. \eqref{eq:gd}. \end{rem} \subsection{LP-Fourier Analysis} \label{sec:LPFA} The task is to nonparametrically approximate $d(F_0(x);F_0,F)$ to be able to apply the density sharpening equation \eqref{eq:gd}. We approximate $d \hspace{-.08em}\circ \hspace{-.08em}F_0(x) \in \cL^2({dF_0})$ by projecting it into a space of polynomials of $F_0(x)$ that are orthonormal with respect to the base measure $\dd F_0$. How to construct such a system of polynomials in a completely automatic and robust manner for \textit{any} given $p_0(x)$? In the section that follows, we discuss a universal construction.
\subsubsection{Discrete LP-Basis} We describe a general theory of constructing LP-polynomials---a new class of robust polynomials $\{T_j(x;F_0)\}_{j\ge 1}$ that are a function of $F_0(x)$ (not raw $x$) and are orthonormal with respect to a user-specified discrete distribution $p_0(x)$. \begin{figure}[ ] \centering \includegraphics[width=.45\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/Earthquake-basis1.png}~~~~~ \includegraphics[width=.45\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/Earthquake-basis2.png}\\[2em] \includegraphics[width=.45\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/Earthquake-basis3.png}~~~~~ \includegraphics[width=.45\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/Earthquake-basis4.png} \vskip1.5em \caption{Shape of the top four LP-orthonormal basis functions $\{S_j(u;F_0)\}_{1\le j \le 4}$ for $p_0(x) :={\rm NB}(x;\mu=19,\phi=12)$ for the earthquake data; where recall $S_j\hspace{-.084em}\circ \hspace{-.084em} F_0\,(x)=T_j(x;F_0)$. Notice the global nonlinearity (linear, quadratic, cubic and so on) and the local staircase-like (piecewise-constant unequal-length segments) shape of these specially-designed polynomials. They are, by construction, orthonormal with respect to the chosen measure $p_0(x)$, here ${\rm NB}(x;\mu=19,\phi=12)$.}\label{fig:earthT} \end{figure} \texttt{Step 1:} Define the first-order LP-basis function as the \textit{standardized} mid-distribution transform: \beq \label{eq:T1} T_1(x;F_0) \,=\,\dfrac{\sqrt{12} \big[\Fmn(x) - 0.5\big]}{\sqrt{1-\sum_x p_0^3(x)}}. \eeq Verify that $\Ex_0[T_1(X;F_0)]=0$ and $\Ex_0[|T_1(X;F_0)|^2]=1$, since $\Ex[\Fmn(X)]=1/2$ and $\Var[\Fmn(X)]=\big(1-\sum_x p_0^3(x)\big)/12$. \vskip.25em ~~\texttt{Step 2:} Apply a \emph{weighted} Gram-Schmidt procedure on the powers $\{T_1^2,\ldots, T_1^{M}\}$ to construct a higher-order LP orthonormal system $\{T_j(x;F_0)\}_{j=2}^{M}$ with respect to the measure $\dd F_0$ \[\sum\nolimits_x p_0(x) T_j(x;F_0)=0;~~\,\sum\nolimits_x p_0(x) T_j(x;F_0)T_k(x;F_0)=\delta_{jk}, ~~1\le j,k\le M \] where $\delta_{jk}$ is the Kronecker delta function and the highest degree $M$ of the LP-polynomials is always less than the support size of the discrete $p_0$. For example, if $X$ is binary, one can construct at most $2-1=1$ LP-basis function; see Section \ref{sec:bin}. \vskip.3em Fig \ref{fig:earthT} shows the top four LP-basis functions for the earthquake data with $p_0$ as ${\rm NB}(x;\mu=19,\phi=12)$. Here, we have displayed them in a unit interval as a function of $u=F_0(x)$, denoted by $S_j(u;F_0) := T_j(Q_0(u); F_0), 0<u<1$. Notice the typical shape of these custom-constructed discrete orthonormal polynomials: globally nonlinear (linear, quadratic, cubic, and so on) and locally piecewise-constant with unequal step size. \begin{rem}[Role of LP-coordinate system in unification of statistical methods] LP-bases play a unique role in statistical modeling---they provide an efficient coordinate (data-representation) system that is fundamental to developing unified statistical algorithms.
\vspace{-.65em} \end{rem} \subsection{The {\boldmath$\DS(p_0,m)$} Model} \label{sec:dstheory} \begin{defn}[LP-canonical Expansion] Expand the comparison density in the LP-orthogonal series \beq \label{cdm} d(F_0(x);F_0,F)\,=\,1+\sum_j \LP[j;F_0,F] \,T_j(x;F_0), \eeq where the $j$th LP-Fourier coefficient satisfies the following identity: \beq \label{eq:dlp} \LP[j;F_0,F]= \big\langle d \hspace{-.08em}\circ \hspace{-.08em}F_0, T_j \big \rangle_{F_0}.\eeq A change-of-basis perspective: The conventional way to represent a discrete distribution is through the indicator basis (histogram representation): \beq \eta_j(i)\,=\, \ind \big\{ X_i \in [x_j, x_{j+1} ) \big\},~~ {\rm for}\, j=1,2,\ldots, r~ \eeq where $r$ is the domain size (number of unique values) of the empirical distribution $\tp(x)$. In \eqref{cdm}, we have performed a ``change of basis'' from the amorphous indicator-basis to a more structured LP-basis $\{T_j(x;F_0)\}$, where the expansion coefficients $\LP[j;F_0,F]$ act as the coordinates of $p(x)$ \textit{relative to} the assumed $p_0(x)$: \[\big[ F \big]_{F_0} := \Big(\LP[1;F_0,F], \ldots, \LP[m;F_0,F]\Big),~~1\le m < r.\] For that reason, one may call these coefficients the discrete LP-Fourier Transform (LPT) of $p(x)$ relative to $p_0(x)$. \end{defn} \begin{defn} $\DS(p_0,m)$ denotes a class of distributions with the following representation: \beq \label{DSm} p(x)\,=\,p_0(x)\Big[ 1\,+\, \sum_{j=1}^m \LP[j;F_0,F]\, T_j(x;F_0)\Big], \eeq obtained by replacing \eqref{cdm} into \eqref{eq:gd}. Here $\DS(p_0,m)$ stands for {\bf D}ensity-{\bf S}harpening of $p_0(x)$ using the $m$-term LP-series approximated $d_0(x)$. $\DS(p_0,m)$ is a class of nonparametrically-designed parametric models that are flexible enough to capture various \emph{shapes} of discrete $p(x)$, such as multi-modality, excess variation, long tails, and sharp peaks. \vspace{-.65em} \end{defn} To estimate the unknown coefficients $\LP[j;F_0,F]$ of the $\DS(p_0,m)$ model, note the following important identity: \vspace{-.35em} \bea \label{eq:lpeq} \LP[j;F_0,F]&=& \int d(F_0(x);F_0,F) T_j(x;F_0) \dd F_0(x) \nonumber \\ &=& \int T_j(x;F_0) \dd F(x) \nonumber \\ &=& \Ex_F\big[ T_j(X;F_0) \big]. \eea \vspace{-.4em} This immediately leads to the following ``weighted mean'' estimator: \beq \label{eq:eestlp} \tLP_j\,:= \LP[j;F_0;\widetilde F]\,=\, \Ex_{\wtF}\big[T_j(X;F_0)\big]\,=\,\sum_x \tp(x) T_j(x;F_0).~~~~ \eeq Using standard empirical process theory \citep{csorgHo1983quantile,parzen1998statistical} one can show that, under the null hypothesis $H_0:p=p_0$, the sample LP-statistics are approximately i.i.d.\ with $\sqrt{n}\,\tLP_j \sim \cN(0,1)$. Thus one can quickly obtain a sparse estimated $\DS(p_0,m)$ model by retaining only the `significant' LP-coefficients, namely those whose absolute values exceed $2/\sqrt{n}$. {\bf Earthquake Data Example}. The first $m=10$ estimated $\tLP_j$ are shown in Fig. \ref{fig:earthlp} of the appendix, which indicates that the only interesting non-zero LP-coefficient is $\tLP_6$. The explicit form of the estimated $\DS({\rm NB},m=6)$ model for the earthquake data is given by: \beq \label{eq:dsequ} \hp(x) = p_0(x)\big[ 1 + 0.20 T_6(x;F_0) \big],\eeq where $p_0={\rm NB}(x;\mu=19, \phi=12)$. The resulting $\hp(x)$ is plotted as a red curve in Fig. \ref{fig:earthq}. \subsection{LP-Maximum Entropy Analysis} \label{sec:lpmaxent} To ensure non-negativity, we expand $\log d$ (instead of $d$ as we have done in Eq.
\eqref{cdm}) in LP-Fourier series, which results in the following exponential model: \beq \label{eq:dexp} d_{\teb}(u;F_0,F)\,=\,\exp\Big \{ \sum_{j\ge 1} \te_j S_j(u;F_0)\,-\, \Psi(\teb)\Big \},~~0<u<1 \eeq where $\Psi(\teb)=\log \int_0^1 \exp\{ \sum_j \te_j S_j(u;F_0)\}\dd u.$ This model is also called the maximum-entropy (maxent) comparison density model because it maximizes the entropy $-\int d_{\teb} \log d_{\teb}$ (flattest possible; thus promotes smoothness) under the following LP-moment constraints: \beq \label{eq:cons} \Ex_{\teb}[S_j(U;F_0)]\,=\,\LP[j;F_0,\wtF],~~(j=1,2\ldots). \eeq LP-moments $\LP[j;F_0,\wtF]$ are `compressed measurements' (linear combinations of observed data; verify from \eqref{eq:eestlp}), which are sufficient statistics for the comparison density $d_{\teb}$. \vskip.4em \begin{defn}[Maxent $\DS(p_0,m)$ model] Replacing \eqref{eq:dexp} into \eqref{eq:gd}, we have the following maxent $\DS(p_0,m)$ model \beq \label{eq:maxentds} p(x)\,=\,p_0(x) \exp\Big \{ \sum_{j=1}^{m} \te_j T_j(x;F_0)\,-\, \Psi(\teb)\Big\}. \vspace{-.5em} \eeq To estimate a sparse maxent comparison density model, we carry out the optimization routine by choosing only the `significant' LP-moments in \eqref{eq:cons}. \end{defn} \vskip.1em {\bf Earthquake Data Example}. The estimated maxent $\DS(p_0,m)$ model for the earthquake distribution is given by \beq \label{eq:xnbequake} \hhp(x) = p_0(x)\exp\big \{ 0.195 T_6(x;F_0) - 0.02\big \},\eeq whose shape is almost indistinguishable from the LP-Fourier estimated p.m.f. \eqref{eq:dsequ}. \section{Applications in Statistical Modelling} \label{sec:app} We describe how the general principle of `density sharpening' acts as a unified framework for the analysis of discrete data with a wide variety of applications, ranging from basic introductory methods to more advanced statistical modeling techniques. \subsection{One-sample Test of Proportion} \label{sec:bin} Given $n$ samples from a binary $X$, the one-sample proportion test is concerned with testing whether the population proportion $p$ is equal to the hypothesized proportion $p_0$. We approach this problem by reformulating it in our mathematical notation: Step 1. We start with the $\DS(p_0,m=1)$ model \beq \label{eq:dsbin} p(x)=p_0(x) \Big\{1+\LP[1;p_0,p] T_1(x;F_0)\Big\},~~x=0,1.\eeq where the null model $p_0(x)= xp_0 + (1-x) (1-p_0),$ for $x=0,1$. Step 2. We rewrite the original hypothesis $H_0:p=p_0$ in terms of the LP-parameter as $H'_0:\LP[1;p_0,p]=0$. Step 3. We derive an explicit formula for $\LP[1;p_0,\tp]$. It is a two-step process: First, we need the analytic expression of the LP-basis $T_1(x;p_0)$ \beq T_1(x;p_0) = \left\{ \begin{array}{rl} -\dfrac{p_0}{\sqrt{p_0(1-p_0)}} &\mbox{for $x=0$} \\ \dfrac{1-p_0}{\sqrt{p_0(1-p_0)}} &\mbox{for $x=1$.} \end{array} \right. \eeq We then apply formula \eqref{eq:eestlp} to deduce: \beq \label{lp1bin} \LP[1;p_0,\tp] = (1-\tp)T_1(0;p_0) + \tp T_1(1;p_0) = \dfrac{\widetilde p - p_0}{\sqrt{p_0(1-p_0)}}.~~~\eeq Step 4. A remarkable fact is that the test based on \eqref{lp1bin} exactly matches the classical Z-test, whose null distribution is: $\sqrt{n} \LP[1;p_0,\tp] \sim \cN(0,1)$ as $\nti$. This shows how the LP-theoretical device provides a transparent \textit{first-principle derivation} of the one-sample proportion test, by shedding light on its genesis.
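To make the preceding constructions concrete, here is a minimal numerical sketch (Python with NumPy; the function names are ours, and the support is taken as the ordered values indexing \texttt{p0}) of the LP-basis construction of Section \ref{sec:LPFA} and the LP-coefficient estimator \eqref{eq:eestlp}, together with a sanity check that the binary case reproduces the one-sample proportion statistic \eqref{lp1bin}:
\begin{verbatim}
import numpy as np

def lp_basis(p0, m):
    """LP-orthonormal polynomials T_j(x; F_0), j = 1..m, over the support of p0.
    T_1 is the standardized mid-distribution transform; higher orders come from
    a p0-weighted Gram-Schmidt on powers of T_1 (the 'Discrete LP-Basis' steps)."""
    F0 = np.cumsum(p0)
    Fmid = F0 - 0.5 * p0
    T1 = np.sqrt(12.0) * (Fmid - 0.5) / np.sqrt(1.0 - np.sum(p0 ** 3))
    T = []
    for j in range(1, m + 1):
        c = T1 ** j
        c = c - np.sum(p0 * c)                 # centre with respect to p0
        for t in T:                            # remove projections on earlier T_j
            c = c - np.sum(p0 * c * t) * t
        T.append(c / np.sqrt(np.sum(p0 * c ** 2)))
    return np.column_stack(T)

def lp_coefficients(counts, p0, m):
    """Sample LP-Fourier coefficients LP[j] = sum_x ptilde(x) T_j(x; F_0)."""
    ptilde = counts / counts.sum()
    return ptilde @ lp_basis(p0, m)

# Binary sanity check (Sec. on the proportion test):
# LP[1] equals (ptilde - p0)/sqrt(p0 (1 - p0)).
p0_scalar, pt, nobs = 0.4, 0.52, 200
counts = np.array([(1 - pt) * nobs, pt * nobs])
lp1 = lp_coefficients(counts, np.array([1 - p0_scalar, p0_scalar]), m=1)[0]
assert np.isclose(lp1, (pt - p0_scalar) / np.sqrt(p0_scalar * (1 - p0_scalar)))
\end{verbatim}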
\subsection{Expandable Negative Binomial Distribution}\label{sec:dnb} We now focus on one important special case of the maxent $\DS(p_0,m)$ family of distributions \eqref{eq:maxentds}, where the base measure $p_0(x)$ is taken to be a negative binomial (NB) distribution: \beq \label{eq:xnb} p(x) \,= \,\binom{x + \phi - 1}{x} \, \left( \frac{\mu}{\mu+\phi} \right)^{\!x} \, \left( \frac{\phi}{\mu+\phi} \right)^{\!\phi} \! \exp\Big \{ \sum_{j\ge 1} \te_j T_j(x;F_0)\,-\, \Psi(\teb)\Big \},~~x \in \mathbb{N},~~\eeq where the basis functions $\{T_j(x;F_0)\}_{j\ge 1}$ are specially-designed LP-orthonormal polynomials associated with the base measure $p_0 = {\rm NB}(\phi,\mu)$. We call \eqref{eq:xnb} the $m$th-order expandable NB distribution, denoted by \texttt{XNB}(m). A few practical advantages of \texttt{XNB}(m) distributions are: the computational ease they afford for estimating parameters; their compactly parameterizable yet shape-flexible nature; and, finally, their ability to provide explanatory insights into how $p(x)$ is different from the standard NB distribution. Due to their simplicity and flexibility, they have the potential to be a `default choice' for modeling count data. We have already seen an example of the $\texttt{XNB}$-distribution in \eqref{eq:xnbequake} in the context of modeling the earthquake distribution. We now turn our attention to two further real-data examples. \begin{figure}[ ] \centering \includegraphics[width=.48\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/NMES-NB.png}~~~~~ \includegraphics[width=.48\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/NMES-boxplot.png}\\[2em] \includegraphics[width=.48\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/NMES-CD.png}~~~~~~~ \includegraphics[width=.48\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/NMES-XNB.png} \vskip1.5em \caption{Nonparametric modeling of NMES 1988 data. Top left: For clarity, we only show the data over the domain $[0,20]$. Otherwise, the ultra long-tailedness of the data will make it harder to see what's going on in the interesting zones; the arrows point out some specific aspects of the data that were missed by the NB distribution. Top right: compares two boxplots---one is the observed data and the other is the simulated data from ${\rm NB}(1,5.7)$, which captures the difference in the tail behavior. The tail of NB has to be stretched to match the shape of the data. Bottom left: $\whd$ captures the `unexplained shape of the data,' which tells us \textit{how} to change the initial model-0 to fit the data. Bottom right: The red curve is the rectified \texttt{XNB} probability distribution.}\label{fig:NMES} \end{figure} \begin{example}[NMES 1988 Data] This is a part of the US National Medical Expenditure Survey (NMES) conducted in 1987 and 1988. It is available in the R-package \texttt{AER}. We have $n=4,406$ observations of a discrete random variable $X$ which denotes how many times an individual, aged 66 and covered by Medicare, visited a physician's office. As displayed in the Fig. \ref{fig:NMES} boxplot, the distribution has a large support size (varying between $0$ and $89$), with some regions being extremely data-sparse. \vskip.35em The blue curve in the top left plot shows the ${\rm NB}(\hat \phi=1, \hat \mu=5.7)$, where the parameters are maximum-likelihood estimates. Next, we estimate the LP-maxent $\whd_{\teb}$, using the theory of Sec. \ref{sec:lpmaxent}. At this point, it is strongly advisable to pay attention to the shape of $\whd_{\teb}$. Why?
Because it efficiently extracts and exposes `unanticipated' aspects in the data that cannot be explained by the initial NB distribution. The bottom-left Fig. \ref{fig:NMES} immediately reveals a few things: (i) NB underestimates the probability at $x=0$; (ii) it overestimates the probability mass around $x=2$ and $3$; (iii) there seems to be an excess probability mass (`bump' structure) around $x=4$; (iv) NB clearly has a shorter tail than what we see in the data---this can be seen from the sharply increasing right tail of the comparison density. To better understand the tail-behavior, we have simulated $n$ samples from ${\rm NB}(1,5.7)$ and contrasted the two boxplots in the top-right panel, which strongly indicates the long-tailedness of $p(x)$ relative to the NB distribution. Any reader will agree that without the help of $\whd_{\teb}$, even experienced eyes could have easily missed these subtle patterns. Finally, multiply $\whd_{\teb}$ by the ${\rm NB}(1,5.7)$, following eq. \eqref{eq:xnb}, to get the estimated \texttt{XNB} distribution---the red p.m.f curve, shown in the bottom-right panel of Fig. \ref{fig:NMES}. \end{example} \begin{example}[Computer Breaks Data] \label{ex:comp} We are given the number of times a DEC-20 computer broke down at Open University in each of $n=128$ consecutive weeks of operation, starting in late 1983. The data shows positive skewness with a slightly longer tail; see Fig. \ref{fig:comp} in the appendix. The mechanics of \texttt{XNB} modeling proceed as follows: (i) We start by estimating the parameters of $p_0(x)$, which in this case gives the MLE-fitted ${\rm NB}(\hat \phi=1.7, \hat \mu=4)$ (any other method of estimation could be used). (ii) The next step is estimation of $\whd_{\teb}$, which in this case is just the uniform distribution---none of the LP-maxent parameters were large enough to be selected. This is depicted in the left panel of supplementary Fig. \ref{fig:comp}. This graphical diagnostic indicates that the initial $p_0$ fits the data satisfactorily; no repairing is necessary. (iii) Accordingly, our `density sharpening' principle returns the $\texttt{XNB}(m=0)$ as the final model, which is simply the starting parametric model ${\rm NB}(\hat \phi=1.7, \hat \mu=4)$. It is interesting to contrast our finding with \cite{saulo2020family}, where the authors fit a highly specialized nonparametric discrete distribution to this data. The beauty of our approach is that it performs nonparametric correction (through $d$) only when it is warranted. When the reality is already simple, we don't complicate it unnecessarily. \end{example} \vspace{-.3em} \subsection{{\boldmath$\chi^2$} and Compressive-{\boldmath$\chi^2$}} Given a random sample of size $n$ from the unknown population distribution $p(x)$, the chi-square goodness-of-fit statistic between the sample probabilities $\tp(x)$ and the expected $p_0(x)$ can be rewritten as follows: \beq \dfrac{\chi^2}{n} = \sum_x \dfrac{\big( \tp(x) - p_0(x) \big)^2}{p_0(x)} = \sum_x p_0(x) \big[ \tp(x)/p_0(x)\,-1 \big]^2= \int_0^1 \big[ d(u;p_0,\tp) - 1\big]^2\dd u.\eeq By applying Parseval's identity on the LP-Fourier expansion of $d$, we have the following important equality: \beq \label{eq:LPchisq} \dfrac{\chi^2}{n}\, =\, \sum_{j=1}^{r-1}\Big| \LP[j;p_0,\tp] \Big|^2\defeq\,\LP(p_0 \| \tp), \eeq where $r$ is the number of unique values in our sample $X_1,\ldots,X_n$. This shows that the chi-square information statistic is a ``saturated'' raw-nonparametric measure with $r-1$ components.
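The identity \eqref{eq:LPchisq} is easy to verify numerically. A small sketch (reusing the hypothetical \texttt{lp\_coefficients} helper from the earlier sketch, and the die-roll counts of the example that follows):
\begin{verbatim}
import numpy as np

counts = np.array([4.0, 6.0, 17.0, 16.0, 8.0, 9.0])  # observed die frequencies
p0 = np.full(6, 1.0 / 6.0)                           # hypothesized fair die
n, r = counts.sum(), len(counts)
ptilde = counts / n
chisq = n * np.sum((ptilde - p0) ** 2 / p0)          # Pearson chi-square, 14.2
lp = lp_coefficients(counts, p0, m=r - 1)            # all r - 1 = 5 coefficients
assert np.isclose(chisq, n * np.sum(lp ** 2))        # eq. (LPchisq) holds exactly
\end{verbatim}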
\begin{example}[The Gambler's Die] \label{ex:gam} A gambler rolls a die $n=60$ times and gets the following observed counts: \vskip.25em \begin{table}[h] \centering \caption{The observed frequencies} \renewcommand{\tabcolsep}{.3cm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{lcccccc} \hline Number on die & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\ Observed $\tp$ & 4/60 & 6/60 &17/60 &16/60 & 8/60 &9/60\\ Hypothesized $p_0$ & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6\\ \hline \end{tabular} \end{table} \label{tab:gam} The gambler wishes to determine whether the die is fair. If it is fair, we would expect the outcomes $1$ to $6$ to be equally likely, with probability $1/6$. Pearsonian chi-square and a full-rank LP-analysis both lead to the same answer: \[ \chi_{{\rm obs}}^2\, =\, n \times \sum_{j=1}^{6-1}\Big| \LP[j;p_0,\tp] \Big|^2=14.2, ~\,\text{with degrees of freedom $5$} \] with the resulting $p$-value $0.0143$. Note that the sum of squares of the $6-1=5$ LP-Fourier coefficients ``exactly'' reproduces (numerically) the Pearson's chi-square statistic! This further verifies the mathematical fact elucidated in eq. \eqref{eq:LPchisq}. Conclusion: The die is loaded at the 5\% significance level. \begin{figure}[ ] \centering \includegraphics[width=.46\linewidth,keepaspectratio,trim=2cm 1.5cm 1cm .55cm]{Figs/Gam-d.png}~~~ \includegraphics[width=.46\linewidth,keepaspectratio,trim=1cm 1.5cm 2cm .55cm]{Figs/SD-dhat.png} \caption{The estimated $\whd(u;F_0,F)$ for examples \ref{ex:gam} and \ref{ex:SD}. It helps to identify `where' the interesting differences between the data ($\tp$) and the hypothesized model ($p_0$) lie.}\label{fig:gamdie} \end{figure} \textit{Exploratory insight}. Here we want to go beyond classical confirmatory analysis, with the goal of understanding \textit{how the die is loaded}. The answer is hidden in the shape of the comparison density $\whd$. Fig. \ref{fig:gamdie}(a) firmly suggests that the die was loaded heavily in the middle---especially on the sides $3$ and $4$, where it landed most frequently. \end{example} \begin{example}[Sparse Dice problem] \label{ex:SD} It is an example of sparse count data with many groups. Imagine a $k=20$-sided die rolled $n=20$ times: \begin{itemize}[itemsep=1pt,topsep=1pt] \item The hypothesized model:~~ \,$p_0=(1/4,1/4, 1/36,\ldots, 1/36)$ \item The observed probabilities:~~$\tp=(3/4, 1/4,0,\ldots,0)$. \end{itemize} We would like to know whether the postulated model $p_0(x)$ actually reflects the data or not. If it does not, then we want to know how the hypothesized model differs from the observed probabilities. Pearsonian chi-square\footnote{R-function \texttt{chisq.test}() generates the message that ``Chi-squared approximation may be incorrect!''} yields the value $\chi_{{\rm obs}}^2 =30$, with degrees of freedom $19$ and $p$-value $0.052$. Conclusion: there is no evidence of discrepancy at the 5\% level, even though there is a glaring difference between $\tp(1)=3/4$ and $p_0(1)=1/4$. The legacy $\chi^2$ loses its power because of `inflated degrees of freedom' for large sparse problems: A large value of $k$ increases the critical value $\chi^2_{\al;k-1}$, making it harder to detect `small' but important changes. \vskip.4em LP-Analysis of the sparse dice problem: (i) Construct the discrete LP-polynomials $\{T_j(x;p_0)\}$ that are specially-designed for the given $p_0(x)$. Appendix Fig. \ref{fig:SD:basis} displays the shape of those basis functions. (ii) Compute the LP-Fourier coefficients $\LP[j;p_0,\,\tp]$ by $\sum_x \tp(x) T_j(x;p_0)$. The Appx.
figure \ref{fig:sdice} identifies the first two LP-parameters as significant components. (iii) We now compute the compressive-$\chi^2$ based on these interesting components: \beq \LP(p_0 \| \tp) \,=\,\big|\LP[1;p_0,\,\tp]\big|^2 \,+\, \big|\LP[2;p_0,\,\tp]\big|^2=1.49,~\,~\text{with degrees of freedom $2$,} \eeq and $p$-value $3.4\times 10^{-7}$. Also noteworthy is the fact that the compressive LP-chisquare is numerically almost the same as the raw $\chi^2$: \[ \vspace{-1em} \chi^2_{{\rm obs}} \,=\, 30 ~\approx~ n \times \LP(p_0 \| \tp) = 29.8.\] \end{example} \begin{rem}[Auto-adaptability] LP-goodness-of-fit shows an impressive adaptability property: under the usual scenario (like in example \ref{ex:gam}) it reduces to the classical $\chi^2$ analysis, and for large-sparse problems, it automatically produces a chi-square statistic with the fewest possible degrees of freedom, which boosts its power. Our reformulation (in terms of modern LP-nonparametric language) allows a better way of doing chi-square goodness-of-fit analysis that applies to a much broader class of applied statistics problems. In John Tukey's (1954) words: ``Do we need to find new techniques, or to use old ones better?'' \end{rem} \begin{rem}[Ungrouped case] What if we have \textit{ungrouped} data: given a random sample of counts $X_1,\ldots,X_n$, check (confirm) whether the data is compatible with the hypothesised $p_0(x)$, i.e., test the hypothesis $H_0:p=p_0$. One way to approach this problem is to forcefully group the data points into different categories and then apply Pearson's $\chi^2$ test on it. This is (almost) always a bad strategy, since grouping leaks information. Moreover, the arbitrariness involved in choosing the groups makes it an even less attractive option. However, our LP-divergence measure $\LP(p_0 \| \tp)$ \eqref{eq:LPchisq} can be applied to ungrouped data, with no trouble. The next section expands on this. \end{rem} \subsection{Explanatory Goodness-of-Fit} \label{sec:Xgof} What is an explanatory goodness-of-fit? Why do we need it? This is perhaps best answered by quoting the views of John Tukey: \begin{quote} \vspace{-.4em} \textit{``What are we trying to do with goodness of fit tests? (Surely not to test whether the models fits exactly, since we know that no model fits exactly!) What then? How should we express the answer of such test?}'' \begin{flushright} \vspace{-1.35em} {\rm ---John \cite{tukey1954}}\end{flushright} \end{quote} \vspace{-.4em} To satisfactorily answer these questions we need to design a GOF-procedure that is \textit{simultaneously} confirmatory and exploratory in nature: \vskip.3em ~~$\bullet$ On the confirmatory side, it aims to develop a universal GOF statistic that is easy to use and fully automated for \textit{any} user-specified discrete $p_0(x)$. One such universal GOF measure is $\LP(p_0 \| \tp)$, defined as \beq \label{eq:ugof} \LP(p_0 \| \tp)\,=\,\int_0^1 \big(d(u;p_0,\tp) - 1\big)^2 \dd u = \sum_j \Big| \LP[j;p_0,\tp] \Big|^2= \sum_j \Big|\sum_x \tp(x) T_j(x;F_0)\Big|^2,~~ \eeq where the index $j$ runs over the significant components. See Appx. \ref{app:gof} for more details. \vskip.3em ~~$\bullet$ On the exploratory side, the graphical visualization of the comparison density $d(u;p_0,\tp)$ provides explanations as to \textit{why} $p_0(x)$ is inadequate for the data (if so) and \textit{how} to rectify it to reduce its incompatibility with the data. This has special significance for data-driven discovery and hypothesis generation.
In particular, the non-zero LP-coefficients indicate the ``main sources of discrepancies.'' In the following, we illustrate this method using three real-data examples, each of which contains a different degree of lack-of-fit. \begin{example}[Spiegel Family Data] \label{ex:spiegel} \cite{spiegel1972} reported survey data on $n=320$ families with five children. The numbers of families with $0,1,2,3,4$, and $5$ girls were $18,56, 110, 88, 40$, and $8$. As an obvious model for $p_0$ we choose Binomial$(5, \hat \pi=.463)$. The estimated LP-GOF statistic is \beq n \times \LP(p_0 \| \tp) \,=\,n \times \sum_{j=1}^5 \left|\LP[j;p_0,\tp]\right|^2 = 1.489~~\eeq with $p$-value $0.92$ under the chi-square null with df $5$. In other words, the estimated comparison density is statistically indistinguishable from the flat uniform $d_0(u)=1$; hence the binomial distribution is completely acceptable for these data, and no further density sharpening is needed. \begin{rem} Note that in our analysis the prime object of interest is the shape of the estimated $\whd_0(x)$ (because it addresses Tukey's concern about the practical utility of goodness-of-fit), not how big or small the $p$-value is. But if a data analyst is habituated to using a threshold $p$-value as a basis for decision making (not a good practice), then we recommend the `double parametric bootstrap' \citep{beran1988} to compute the $p$-value---admittedly a computationally demanding task. This adjusted $p$-value takes into account the fact that the null parameters (e.g., here the binomial proportion $\pi$) are not given; they are estimated from the data. \end{rem} \vspace{-.1em} \end{example} \begin{example}[Rutherford-Geiger polonium data] \label{ex:polo} \cite*{rutherford1910} presented experimental data on counts of alpha particles emitted by polonium, a radioactive element, in $1/8$-minute intervals. On the whole, $n = 2608$ time intervals were considered, in each of which $k$ ($k=0,1,\ldots,14$) decays were observed. The following table summarizes the data. \begin{table}[ht] \vskip.8em \caption{Observed counts of alpha particles emitted from polonium.} \label{tab:RFord} \centering \renewcommand{\tabcolsep}{.27cm} \renewcommand{\arraystretch}{1.33} \begin{tabular}{|rrrrrrrrrrrrrrr|} \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 13 & 14\\ \hline & 57 & 203 & 383 & 525 & 532 & 408 & 273 & 139 & 45 & 27 & 10 & 4 & 1 & 1 \\ \hline \end{tabular} \vspace{.6em} \end{table} In the 1910 article, Bateman showed (in a note appended at the end of the original paper) that the theoretical distribution of the number of alpha particles observed in a small interval follows the Poisson law, which we select as our model-0. The estimated $\DS(p_0,m)$ model is given by \beq \label{eq:rf} \hp(x)\,=\,e^{-\la_0} \dfrac{\la_0^x}{x!} \,\Big [ 1 -0.03 T_2(x;F_0) -0.04 T_3(x;F_0) \Big ], ~~\text{with}~\la_0=3.88,\eeq which is displayed in Fig. \ref{fig:polo} in the Appendix. The model \eqref{eq:rf} indicates that there is a `gap' between the theoretically predicted Poisson model and the experimental result---second-order (under-dispersed) and third-order (less skewed) corrections are needed. To quantify the lack-of-fit, compute: \[ n \times \LP(p_0 \| \tp) \,=\,n \times \sum_{j\in \{2,3\}} \left|\LP[j;p_0,\tp]\right|^2 = 6.82,~~\text{with $p$-value $0.033$}. \] This is a borderline case, where it is important to consult subject-matter specialists before choosing sides---scientific significance is as important as statistical significance.
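For readers who want to reproduce the quoted number: the $p$-value is simply the upper tail of a $\chi^2$ with $2$ degrees of freedom (a one-line R check of ours, not part of the original analysis).
\begin{verbatim}
# upper-tail chi-square p-value for the 2-component compressive statistic
n_LP <- 6.82                               # n * LP(p0 || ptilde), df = 2
pchisq(n_LP, df = 2, lower.tail = FALSE)   # 0.033, as quoted above
\end{verbatim}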
\cite{hoaglin1980} came to a similar conclusion using an exploratory diagnostic tool called the ``Poissonness plot,'' shown in the Appendix Fig. \ref{fig:poloeda}. \end{example} \begin{example}[Sparrow data] These data consist of the numbers of sparrow nests found in one-hectare plots, with sample average $\bar x=1.10$. \cite{zarbook} previously analyzed this dataset. We choose Poisson($1.10$) as our $p_0(x)$ for the $\DS(p_0,m)$ model. The second-order coefficient $\LP[2;p_0,\tp]=-0.328$ ($p$-value $=.03$) turns out to be the only significant component, which indicates that the data exhibit less dispersion (due to the negative sign) than the postulated ${\rm Poisson}(1.10)$. Our finding is in agreement with \cite{gurtler2000}. Finally, we return the $d$-modified under-dispersed Poisson model for the data: \[\hp(x)\,=\,e^{-\la_0} \dfrac{\la_0^x}{x!} \,\big [ 1 -0.33 T_2(x;F_0)\big ], ~~\text{with}~\la_0=1.10,\] which is displayed in Fig. \ref{fig:sparrow} of the Appendix. \end{example} \subsection{Relative Entropy and Model Uncertainty} \label{sec:rent} How can we quantify the uncertainty of the chosen model $p_0(x)$? A general information-theoretic formula of model uncertainty is derived based on the relative entropy between the true (unknown) $p(x)$ and the hypothesized $p_0(x)$. Express the relative entropy (also called the Kullback–Leibler divergence) as a functional of the maxent comparison density $d_{\teb}$: \beq \KLD(p\|p_0)\,=\,\sum_x p(x) \log \Big\{\dfrac{p(x)}{p_0(x)}\Big\}\,=\,\Ex_F\Big[ \log d_{\teb}\big(F_0(X); p_0, p\big)\Big] \eeq Substituting $F_0(x)=u$ leads to the following important formula for the relative entropy in terms of LP-parameters: \bea \label{eq:LPrent} \KLD(p\|p_0)&=&\int_0^1 d(u;F_0,F) \log d(u;F_0,F) \dd u \nonumber \\ &=& \int_0^1 d(u;F_0,F) \Big\{ \sum_j \te_j S_j(u;F_0)\,-\, \Psi(\teb) \Big\} \dd u \nonumber \\ &=&\sum_j \te_j \LP_j\,-\,\Psi(\teb). \label{eq:klp} \eea The second equality follows from eq. \eqref{eq:dexp} and the last one from eq. \eqref{eq:lpeq}. Based on a random sample $X_1,\ldots,X_n$, a nonparametric estimate of the relative entropy is obtained by replacing the unknown LP-parameters in \eqref{eq:klp} with their sample estimates. \vskip.34em {\bf Statistical Inference}. The relative-entropy-based inference procedure is now applied to a few real datasets in the context of model validation and uncertainty quantification. \vskip.1em $\bullet$ Estimation and standard error: For the earthquake data, we would like to quantify the uncertainty of $p_0={\rm NB}(\mu=19, \phi=12)$. The estimated value of $\KLD(p\|p_0)$ is $0.070 \pm 0.020$ (bootstrap standard error, based on $B=1000$), indicating serious lack-of-fit of the starting NB model---which matches our previous conclusion; see Fig. \ref{fig:earthq}. \vskip.2em $\bullet$ Testing: For the Spiegel family data, the estimated relative entropy is $0.0087$, quite small. Naturally, we perform (parametric-bootstrap-based) testing to check whether $H_0: \KLD(p\|p_0)=0$: generate $n$ samples from $p_0(x)$; compute $\widehat{\KLD}(p\|p_0)$; repeat, say, $1000$ times; return the $p$-value based on the bootstrap null distribution. For this example, the $p$-value we get is $0.093$, which reaffirms that the binomial distribution explains the data fairly well. \subsection{Card Shuffling Problem} \label{sec:card} Consider the following question \citep{aldous1986}: How many times must a deck of cards be shuffled until it is close to random?
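Before analyzing real shuffles, the following minimal R simulation (our sketch, not code from the paper) illustrates the baseline behavior: for a perfectly mixed deck, the number of cards left in their original position is approximately ${\rm Poisson}(1)$, as formalized below.
\begin{verbatim}
# Fixed points of a uniformly random permutation of 52 cards:
# under perfect mixing their count is approximately Poisson(1).
set.seed(7)
fp <- replicate(5000, sum(sample(52) == 1:52))
table(fp) / 5000            # empirical distribution of fixed points
round(dpois(0:5, 1), 3)     # 0.368 0.368 0.184 0.061 0.015 0.003
\end{verbatim}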
To check whether a deck of $52$ cards is uniformly shuffled, we use the fixed-point statistic, defined as the number of cards that remain in their original position after shuffling. A large number of fixed points (`too many cards left untouched') is an indication that the deck is not well mixed. \vskip.23em \textit{Theoretically expected distribution}: One of the classical theorems in this direction is due to Pierre \cite{de1713essay}, who showed that the distribution of the number of fixed points under $H_0$ (a uniformly random permutation of $\{1,2,\ldots,52\}$) is approximately $p_0={\rm Poisson}(1)$. \vskip.23em \textit{Data and notation}: Let $n$ denote the sample size and $k$ the number of shuffles. Then \texttt{CARD}$(k,n)$ stands for a dataset $X_1,X_2,\ldots, X_n$, where $X_i$ is the number of fixed points of a $k$-shuffled deck. By $\widetilde p_k$ we mean the sample distribution of the fixed-point statistic after $k$ shuffles. The goal is to find the minimum value of $k$ such that it is safe to accept $H_0: p_k=p_0$, where, we should recall, the null distribution $p_0$ is ${\rm Poisson}(1)$. \vskip.23em \textit{Modeling via goodness-of-fit}: Fig. \ref{fig:card150} shows a \texttt{CARD}$(k,n)$ dataset with $k=150$ and $n=500$. There is a clear discrepancy between the observed probabilities $\tp_k$ and the theoretical $p_0$. The estimated $d$-sharpened Poisson$(1)$ is given below: \beq \label{eq:card150} \widehat p_k(x)= \dfrac{e^{-1}}{x\,!} \big[ 1+ 0.130 \,T_1(x;F_0)\big], ~~~~x=0,1,\ldots\eeq which shows that a `first-order perturbation' (location correction) is needed: $\LP[1;p_0,\tp_k]=0.130$ with $p$-value $0.003$. The positive sign of $\LP_1$ indicates that the mean of the fixed-point distribution with $k=150$ is larger than the postulated $\la_0=1$; more shuffling is needed to make the deck close to random. The shape of \eqref{eq:card150} is shown in Fig. \ref{fig:card150}. \vskip.3em \textit{New updated mean}. A curious reader might want to know precisely how large the mean of $p_k$ is compared to $1$. For a distribution $F \sim {\rm DS}(p_0,m)$, we can write an expression for the mean of $F$ (denoted $\la_F$) in terms of the mean of $F_0$ (denoted $\la_0$). In this case, we have \bea \vspace{-1em} \la_{k} ~=~\Ex_{F_k}[X]&=& \int x \,\big[ 1+ 0.130 \,T_1(x;F_0) \big] \dd F_0(x) \nonumber\\ &=&\int_0^1 Q_0(u) \big[ 1+ 0.130 \,S_1(u;F_0) \big] \dd u \nonumber\\ &=&\int_0^1 Q_0(u) \dd u + 0.130 \,\langle Q_0, S_1 \rangle_{\cL^2[0,1]} \nonumber\\ &=& 1 + 0.130 \times 0.9596 \approx 1.125. \eea \vspace{-1.5em} A few additional comments: \begin{figure}[ ] \centering \vspace{-.8em} \includegraphics[width=.55\linewidth,keepaspectratio,trim=2cm 1.65cm 2cm 1cm]{Figs/card150.png} \caption{Distribution of fixed points: black is empirical and blue is theoretical---a clear mismatch between data and theory. The rectified $\DS(p_0,m)$ model is displayed in red.}\label{fig:card150} \end{figure} $\bullet$ Thus far, we have verified that $k=150$ is not enough to produce a uniformly shuffled deck. However, we came to this conclusion based on a \textit{single} sample of size $n$. So, we generate (through computer simulation) several replications of \texttt{CARD}$(k,n)$ data with different $(k,n)$. \vspace{-.15em} $\bullet$ To reach a confident decision, we perform the experiment with $n=500$ and $k=150, 160, 170, 180, 190, 200$. The analysis was done based on $B=250$ datasets from each $(n,k)$-pair. The results are summarized in Appendix Fig.
\ref{fig:card2}, which shows that $k=170$ shuffles is probably a safe bet for declaring the deck fair---i.e., uniformly shuffled. \vspace{-.15em} $\bullet$ \cite{diaconis2018bayGOF} describes a Bayesian approach to this problem. In contrast, we have offered a completely nonparametric solution that \textit{leverages} the additional knowledge of the expected $p_0(x)$ and provides more insight into the nature of the discrepancies. \subsection{Jaynes Dice Problem} The celebrated Jaynes dice problem \citep{jaynes62} is as follows: Suppose a die has been tossed $N$ times ($N$ \textit{unknown}) and we are told only that the average of the face values was $4.5$---not $3.5$, as we might expect from a fair die. Given this information (and nothing else), the goal is to determine the probability assignment, i.e., the probability that the next throw will show face $k$, for $k=1, \ldots,6$. \vskip.25em \textit{Solution of Jaynes' dice problem using the density-sharpening principle}. The initial $p_0(x)$ is selected as the discrete uniform distribution $p_0(x)=1/6$ for $x=1,\ldots, 6$, which reflects the null hypothesis of a `fair' die. As we are given only first-order location information (the mean is $4.5$), we consider the following $\DS(p_0,m=1)$ model: \beq \label{eq:Jds} p(x) = \dfrac{1}{6}\Big\{ 1 + \LP[1;p_0,p]\,T_1(x;p_0) \Big\} \eeq The coefficient $\LP[1;p_0,\tp]$ has to be estimated, and for that we also need to know the basis function $T_1(x;F_0)$. Step 1. To find an explicit formula for the discrete basis $T_1(x;F_0)$, apply \eqref{eq:T1} with $\Fmn(x)=x/6-1/12$ and $p_0(x)=1/6$: \beq T_1(x;F_0) \,=\, \sqrt{12}\, \dfrac{\big(\Fmn(x) - .5\big)}{\sqrt{1-\sum_{x=1}^6 (1/6)^3}}\,=\,\sqrt{\dfrac{12}{35}} \big( x - 3.5\big),~~\text{for}\,x=1,\ldots,6. \eeq Step 2. Compute $\LP[1;p_0,\tp]$ by applying formula \eqref{eq:eestlp}: \beq \label{eq:lpJ} \LP[1;p_0,\tp] \,=\, \sum_x \tp(x) T_1(x;p_0) \,= \,\sqrt{\dfrac{12}{35}} \big( \sum_x x \tp(x) - 3.5\big) \,=\, 0.586. \eeq The non-zero $\LP[1;p_0,\tp]$ indicates it was a loaded die. Step 3. Substitute the value of $\LP[1;p_0,\tp]$ in eq. \eqref{eq:Jds} to get the LP-Fourier $\DS(p_0,m=1)$ model \beq \label{eq:Jdses} \widehat{p}(x) = \dfrac{1}{6}\Big\{ 1 + 0.586\,T_1(x;p_0) \Big\}, ~~x=1,\ldots,6. \eeq This is shown as the blue curve in Fig. \ref{fig:Jaynes}. Step 4. Finally, return the estimated exponential $\DS(p_0,m=1)$ probability estimates \beq \label{eq:maxentlp} \hhp(x)\,=\,\dfrac{1}{6}\exp\big\{ -0.193 + 0.634\,T_1(x;F_0) \big\},~~x=1,\ldots,6. \eeq This is shown as the red curve in Fig. \ref{fig:Jaynes}. \begin{figure}[ ] \centering \includegraphics[width=.55\linewidth,keepaspectratio,trim=2cm 1.65cm 2cm 1cm]{Figs/jaynes.png} \vskip.4em \caption{The black dots denote the starting null model---a die with $6$ equally likely outcomes. The blue curve is our LP-Fourier density estimate given in eq. \eqref{eq:Jdses}. The exponential $\DS(p_0,m)$ density estimate, given in eq. \eqref{eq:maxentlp}, is shown as a red line.}\label{fig:Jaynes} \end{figure}
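As a numerical sanity check (a small R sketch of ours, not part of the original derivation), the closed-form quantities above are easy to reproduce; the maxent coefficient is obtained by solving the mean constraint with \texttt{uniroot}:
\begin{verbatim}
x   <- 1:6
T1  <- sqrt(12/35) * (x - 3.5)       # basis function from Step 1
LP1 <- sqrt(12/35) * (4.5 - 3.5)     # = 0.586, as in Step 2
round((1/6) * (1 + LP1 * T1), 3)     # LP-Fourier pmf; cf. the table below
# maxent version: exponentially tilt p0 and solve E[X] = 4.5 for theta
f  <- function(th) {w <- exp(th * T1); sum(x * w / sum(w)) - 4.5}
th <- uniroot(f, c(0, 2))$root       # ~ 0.634
round(exp(th * T1) / sum(exp(th * T1)), 3)   # matches Jaynes' answer
\end{verbatim}
The two resulting rows agree with the table below up to rounding.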
\begin{table}[h] \centering \vskip1em \caption{The estimated LP probability assignments for Jaynes' dice problem.} \vskip.5em \renewcommand{\tabcolsep}{.3cm} \renewcommand{\arraystretch}{1.77} \begin{tabular}{c|cccccc} \hline $p_0(x)$ &1/6&1/6&1/6&1/6&1/6&1/6\\ LP-Fourier $\hp(x)$ &.025&.080 &.140 &.195 &.250 &.310\\ LP-MaxEnt $\hhp(x)$ &0.054 &0.079 &0.114 &0.165&0.240&0.347 \\ Jaynes' answer &0.054 &0.079 &0.114 &0.165&0.240&0.347\\ \hline \end{tabular} \label{tab:Jaynes} \vskip.3em \end{table} \begin{rem} It is a quite remarkable fact that our density-sharpening-based probability assignment \textit{exactly} matches Jaynes' maxent answer; see Table \ref{tab:Jaynes}. \vspace{-.25em} \end{rem} \subsection{Compressive Learning of Big Distributions} An important class of learning problems that has recently attracted researchers from various disciplines---including high-energy physics, neuroscience, theoretical computer science, and machine learning---can be viewed as modeling problems based on samples from a distribution over a large ordered domain. Let ${\bf p}=(p_1,\ldots,p_k)$ be a probability distribution over a very large domain of size $k$, where $p_i \ge 0$ and $\sum_{i=1}^k p_i =1$. Let us look at a realistic example before discussing a general method. \vskip.45em \begin{example}[HEP data] This is an example from high-energy physics\footnote{High-energy physics is not the only discipline where this kind of very large and sparse histogram-like data appears. It frequently arises in many modern scientific domains: inter-spike interval data (neuronal firing patterns); relative abundance/intensity data (mass spectrometry); DNA methylation and ChIP-seq data (genomics); pixel histogram data (astronomical images of stars, galaxies, etc.); histograms of activity intensities (biosignals from wearable sensor devices, mental-illness studies by NIMH); photometric redshift data (photo-z spectra in cosmology), to name a few. There is outstanding interest in developing new computational tools that allow rapid, approximate statistical learning for big histogram-like datasets.} (HEP), motivated by the PHYSTAT 2011 Banff bump-hunting challenge \citep{junk2011banff}. In HEP counting experiments (e.g., at the Large Hadron Collider) one observes data in the following form: $n$ samples from an unknown $p(x)$ as event counts (numbers of collisions) at $k$ finely binned energy cells, which we denote by \texttt{HEP}$(k,n)$. Fig. \ref{fig:PP} displays one such dataset with $n=10,000$ and $k=250$, with the postulated background model (dictated by the known Standard Model) taken as the discretized exponential distribution $f_0(x)=\lambda e^{-\lambda x}$ with $\lambda=1/20$: \beq \label{eq:nullHEP} p_0(i) \, \doteq \, \int_{x_i}^{x_{i+1}} f_0(x) \dd x,~~i=1,\ldots,k.\eeq Particle physicists are interested in discovering new particles that go \textit{beyond} the known Standard Model described by the background model $p_0$. We present a four-phase algorithmic program to address the general problem of data-driven `Learning and Discovery.' \end{example} \vskip.3em {\bf Phase 1.} \textit{Testing}. The first question a scientist would like answered is whether the data are consistent with the background-only hypothesis, i.e., $H_0:p=p_0$. We perform the information-theoretic test described in Sec. \ref{sec:rent}. In particular, we choose the relative-entropy-based formula given in \eqref{eq:LPrent} as our test statistic.
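Schematically, the test works as follows (a hedged sketch; \texttt{stat} is a hypothetical helper standing in for the plug-in estimate of $\KLD(p\|p_0)$ from Sec. \ref{sec:rent}):
\begin{verbatim}
# generic parametric-bootstrap p-value for H0: p = p0
boot_pval <- function(x, support, p0, stat, B = 1000) {
  t_obs  <- stat(x)
  t_null <- replicate(B,
              stat(sample(support, length(x), replace = TRUE, prob = p0)))
  (1 + sum(t_null >= t_obs)) / (B + 1)
}
\end{verbatim}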
The $p$-value obtained using the parametric bootstrap (with $B=50,000$) is almost zero---strongly suggesting that the data contain some surprising new information in light of the known physics model $p_0$. But to figure out whether that information is actually useful for physicists, we have to dig deeper. \begin{figure}[ ] \vspace{-.6em} \centering \includegraphics[width=.478\linewidth,keepaspectratio,trim=1cm 1cm .1cm 1.2cm]{Figs/PPcd2.png}~~~~~ \includegraphics[width=.48\linewidth,keepaspectratio,trim=.35cm 1cm 1cm 1cm]{Figs/PP.png} \vskip.6em \caption{\texttt{HEP}$(k,n)$ data analysis. Left: The estimated comparison density $\whd(u;p_0,p)$, which indicates there may be a bump (around $u=0.64$) that went \textit{unnoticed} by the theoretical model. Right: The data generated from a mixture of $p_0$ and $\cN(125, 2)$ with mixing proportion $0.1$. The theory-driven model $p_0$ is the blue curve and the red curve is the $d$-sharpened $p_0$. The green triangle denotes the true peak at the mass position $125$ GeV. The shaded area under $\dhat_0(u)$ denotes the amount of excess mass on top of the smooth background.}\label{fig:PP} \end{figure} \vskip.15em {\bf Phase 2.} \textit{Exploration and Discovery}. By definition, new discoveries can be made only by ``contrasting'' the data with the existing model. This is what is achieved through $d(u;F_0,\wtF)$. The left panel of Fig. \ref{fig:PP} displays the estimated $\whd_0(u)$ for the HEP data, which compactly encodes all the structure of the data that cannot be described by the assumed $p_0(x)$. \vskip.15em The exploratory graphical display of $\whd_0$ reveals a few noteworthy points. Firstly, the non-uniformity of $\whd_0$ makes us skeptical about $p_0$---this is completely in agreement with the Phase-1 analysis. Secondly and more importantly, the shape of $\whd_0$ provides a refined understanding of the \textit{nature} of the new physics hidden in the data, which, in this case, reveals itself as a bump \textit{above} the smooth background $p_0$. The word `above' is important because we are not interested in bumps on $p$ itself, but on $d_0$, which is the unanticipated `excess mass.' The hunt for new physics is the problem of bump hunting on $d(u;p_0,p)$, not on $p(x)$. For the HEP data, we see a prominent bump in $d_0(u)$ around $u=0.64$, which (in the original data domain) corresponds to $Q_0(.64) \approx 125$ GeV; see the green triangle in the figure. \begin{rem}[The discovery function] Since $d_0(x)$ encapsulates what is new in the data by separating the unknown from the known, we also call it the ``discovery function.'' It is the ``missing piece'' that glues together the known $p_0(x)$ and the unknown $p(x)$. It provides a graphical diagnostic tool that exposes the unexpected. These clues can guide domain scientists to carry out more targeted follow-up studies. \end{rem} \vskip.3em {\bf Phase 3.} \textit{Inference and the Excess-Mass Problem}. Where is the interesting excess mass hiding? Is it a statistical fluke or something real? How substantial is the evidence? The real issue is: can we let the data confidently tell us where to look next for new particles? This would amount to a complete paradigm shift, because traditionally HEP searches (for locating excess events) have been guided by theoretical considerations only.
\begin{quote} \vspace{-.3em} `\textit{One may feel uneasy that we may therefore only find new processes if a theorist has been clever enough to propose the corresponding theory ahead of time.}' \begin{flushright} \vspace{-.24em} {\rm ----Glen \cite{cowan2007}} \end{flushright} \end{quote} \vspace{-.3em} Interested readers may also refer to \cite{lyons08} and the Nature news article by \cite{castelvecchi2018lhc} for a clear exposition of the scientific importance of these issues. \vspace{.24em} {\bf Statistical Discovery: Inference Algorithm}. The following are the main steps of the inference algorithm, whose results are summarized in Fig. \ref{fig:HEPxm}: Step 1. Parametric bootstrap: To measure the natural statistical variation of $\dhat_0(x)$ under the null hypothesis, simulate $n$ samples from $p_0(x)$ and estimate the comparison density. Repeat the whole process a large number of times (say $B=10,000$) to get a bundle of comparison-density curves, all of which fluctuate around the flat uniform line. Step 2. Pointwise $p$-value computation: At a fixed grid point $x \in [100, 250]$, we have the following values of the test statistic \[\Big\{ \whd_0^{(1)}(x), \ldots, \whd_0^{(B)}(x) \Big\} \] calculated from the $B$ bootstrap samples. Compute the bootstrap $p$-value at the point $x$ by \[{\rm Pval}(x)\,=\,\dfrac{1+ \big\{ \# \,{\rm of}\, \whd_0^{(j)}(x) \ge \whd_0(x)\big\} }{B+1 }.\] Fig. \ref{fig:HEPxm} draws the curve $-\log_{10}({\rm Pval}(x))$ as a function of $x$. The $5\sigma$ discovery region $(121.5, 129.5)$ is highlighted in yellow; it includes the true excess-mass point $125$ GeV. This is how modern nonparametric modeling based on the `density sharpening' principle can convincingly guide researchers on \textit{where} to look for evidence of a deeper theory of physics. \begin{figure}[ ] \centering \includegraphics[width=.48\linewidth,keepaspectratio,trim=2.5cm 1cm 2.5cm 1cm]{Figs/HEP-dis.png} \vskip2em \caption{ The $5\sigma$ discovery interval $(121.5,129.5)$ correctly captures the true excess-mass point indicated by the green triangle.}\label{fig:HEPxm} \end{figure} \vskip.3em {\bf Phase 4.} \textit{Sharpening the Scientific Model}. Finally, the goal is to sharpen the initial scientific model $p_0(x)$ to achieve a more precise description of what is loosely known or suspected. The estimated $\DS(p_0,m)$ model sharpens the parametric null \eqref{eq:nullHEP} to provide a nonparametrically-adjusted, parsimonious model: \beq \label{eq:dshep} \hp(x)\,=\,p_0(x)\Big[ 1 +\sum_{j \in \mathcal{J}_5} \LP[j;p_0,\tp]\,T_j(x;F_0)\Big], \eeq where the active set $\mathcal{J}_5=\{2,3,5,7,8\}$ and the associated LP-coefficients are given in Table \ref{tab:HEP}. \begin{table}[ht] \vskip.4em \caption{The selected non-zero LP-coefficients} \label{tab:HEP} \centering \renewcommand{\tabcolsep}{.4cm} \renewcommand{\arraystretch}{1.33} \begin{tabular}{|l |ccc cc|} \hline $\mathcal{J}_5$ & 2& 3& 5& 7 &8\\ \hline $\widehat{\LP}_j$ & -.097 & -.090& 0.117& -.095 & -.060 \\ \hline \end{tabular} \end{table} \vspace{-.3em} \begin{rem} A few remarks: ~1. The part in the square brackets of \eqref{eq:dshep} shows \textit{how} to change the prior scientific model $p_0(x)$ to make it consistent with the data. Knowing the \textit{nature} of the deficiency of the assumed model is an important step towards data-driven discovery. As George \cite{box2001dis} said: ``\textit{discovery usually means learning how to change the model}.'' ~2.
The LP-parameterization requires only a $5$-dimensional sufficient statistic to approximately capture the shape of the distribution! The ``compressiveness'' of the LP-transformation---its ability to extract a low-dimensional representation---makes it less data-hungry, as demonstrated in the next section. ~3. Our model \eqref{eq:dshep} is a `hybrid' between a theory-driven and a data-driven model, which decouples the overall density into two components: the expected $p_0(x)$ and the unexpected $d_0(x)$. \end{rem} \begin{rem}[An Appeal to Physicists: Hypothesis Testing $\neq$ Discovery Science] Classical statistical inference puts too much emphasis on testing, $p$-values, standard errors, confidence intervals, etc. This ideology is reflected in the practice of high-energy physicists---which entirely revolves around the antique tools of hypothesis testing, likelihood ratios, and $p$-values. It is time to break the shackles of outdated data-analysis technology that starts with hypothesis testing and ends with a $p$-value. George \cite{box2001dis} expressed a similar sentiment, arguing that the reason why engineering and the physical sciences rarely use statistics is: ``\textit{Much of what we have been doing is adequate for testing but not adequate for discovery.}'' \vskip.13em In this section my purpose has been to introduce some modern statistical tools and concepts that can help scientists with their everyday tasks of discovery and deeper exploration of data. After all, one of the main goals of data analysis is to sharpen the scientists' mental model by revealing the unexpected---a continuous cycle of knowledge refinement: \vskip2.5em \begin{center} \begin{figure} \begin{tikzpicture}[node distance =4cm, auto] \node [block] (x1) {Theory}; \node [block, right of =x1] (x2) {Measurement}; \node [block, right of =x2](x3){Discovery $d_0(x)$}; \node [block, right of =x3](x4){Better theory}; \path [line] ($(x1.0)+(.1cm,0cm)$)--($(x2.180)+(-.1cm,0cm)$); \path [line] ($(x2.0)+(.1cm,0cm)$)--($(x3.180)+(-.1cm,0cm)$); \path [line] ($(x3.0)+(.1cm,0cm)$)--($(x4.180)+(-.1cm,0cm)$); \path[line] ($(x4.south)+(0,-.1)$) -- +(0, -2em) -| ($(x2.south)+(0,-.1)$); \end{tikzpicture} \vskip1.5em \caption{Continuous learning by iterative model refinement: It develops increasingly `better' theory that explains new phenomena by broadening the scope of the previous theory.} \end{figure} \vspace{-2em} \end{center} \end{rem} \vspace{-.5em} \subsection{Data-efficient Statistical Learning}\label{sec:DEL} We are interested in the following statistical learning problem: given a small handful of samples ($n \ll k$) from a big probability distribution of size $k$, how can we perform time- and storage-efficient learning? When designing such algorithms we have to keep in mind that they must be (i) data-efficient: able to learn from limited sample data ($n \ll k$); and (ii) statistically powerful: able to detect ``small'' deviations. \vskip.3em Classical nonparametric methods characterize big probability distributions using high-dimensional sufficient statistics based on the histogram counts $\{N_j\}_{j=1}^k$, which obviously requires a very large sample for efficient learning. And as the required sample size increases, the algorithm's running time, which scales with $n$, slows down as well. Hence, most `legacy' statistical algorithms (e.g., Pearson's chi-square, the Freeman-Tukey statistic, etc.) become unusable on large-scale discrete data, as `the minimum number of data points required to obtain an acceptable answer is too large to be practical.'
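To see concretely why saturated statistics are data-hungry (recall the `inflated degrees of freedom' issue of Example \ref{ex:SD}), here is a small self-contained R illustration; the noncentrality value $10$ is an arbitrary stand-in for a fixed discrepancy $n\sum_x \big(p(x)-p_0(x)\big)^2/p_0(x)$:
\begin{verbatim}
# a fixed discrepancy becomes harder to detect as the df grows:
# the 5% critical value inflates, and the power of the saturated
# chi-square test decays accordingly
for (df in c(2, 5, 19, 99)) {
  crit  <- qchisq(0.95, df)
  power <- pchisq(crit, df, ncp = 10, lower.tail = FALSE)
  cat(sprintf("df = %3d  critical value = %6.1f  power = %.2f\n",
              df, crit, power))
}
\end{verbatim}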
Indeed, detecting sparse structure in a data-efficient manner from big distributions, such as \texttt{HEP}($k,n$), is a challenging problem and requires a ``new breed'' of computational algorithms. There has been some impressive progress on this front by the theoretical computer science community; see Appendix \ref{app:subl} for a related discussion of sub-linear algorithms for big data. \begin{figure}[ ] \vspace{-1em} \centering \includegraphics[width=.5\linewidth,keepaspectratio,trim=2.25cm 1cm 2.25cm 1.25cm]{Figs/HEPower.png} \vskip.7em \caption{\texttt{HEP}$(k=500,n)$ data for varying sample size $n$: event counts (numbers of collisions) at $k = 500$ finely binned energy cells. \texttt{LPgof} (shown in red) requires 50\% less data to reach the correct conclusion with power 1.}\label{fig:HEPower} \end{figure} \vskip.3em {\bf HEP Data Example}. We generate \texttt{HEP}($k=500,n$) data for varying sample size $n$. We used $350$ simulated null datasets to estimate the 95\% rejection cutoffs for all methods at significance level $0.05$, and $350$ datasets simulated from the alternative to approximate the power, as displayed in Fig. \ref{fig:HEPower}. We compared our \texttt{LPgof} method \eqref{eq:ugof} with two state-of-the-art algorithms (proposed by theoretical computer scientists): (i) \cite{valiant2017automatic} and (ii) \cite{acharya2015optimal}---interestingly, exactly this test was proposed earlier by \cite{zelterman1987}, and in the statistics literature it is known as Zelterman's D-statistic\footnote{`Those who ignore Statistics are condemned to reinvent it'---Brad Efron.}. Conclusion: \texttt{LPgof} requires {\bf 50\% less} data to reach the correct conclusion with power 1. \begin{figure}[ ] \vspace{-.7em} \centering \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power1.png}~~~~~~~~ \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power2.png} \\[1em] \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power3.png}~~~~~~~~ \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power4.png} \\[1em] \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power5.png}~~~~~~~~ \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power6.png} \\[1em] \caption{The red power curve is the \texttt{LPgof} method with $m=8$. First row: the null distribution is $p_0=\big(1/4,\,1/4,\,\tfrac{1}{2(k-2)}, \ldots, \tfrac{1}{2(k-2)}\big)$; the alternatives are $p_{\delta}=\big(1/4+\de,\,1/4-\de,\,\tfrac{1}{2(k-2)}, \ldots, \tfrac{1}{2(k-2)}\big)$, with $\de=1/8,1/10$. Second row: the null distribution is $p_0=U_{[k]}$; the alternatives are $.95U_{[k]} + .05{\rm Zipf}(\al)$, with $\al=1.25,1.15$. Third row: the null distribution is $p_0=U_{[k]}$; the alternative probabilities are the increments $\Delta D(j/k; \mu,\pi)$, $j=1,\ldots,k$, where $D(u)=F(\Phi^{-1}(u))$ and $F(x)=(1-\pi)\Phi(x) + \pi \Phi(x-\mu)$, for $\mu=-1.5$ and $\pi\in\{.2,.1\}$. In all cases the dimension is $k=5000$, and the sample sizes are sublinear in the dimension, i.e., $n \asymp \sqrt{k}$.} \label{fig:power1} \end{figure} {\bf Empirical Power Comparisons}. We compare the power of different methods under six different settings, as described in the caption of Fig. \ref{fig:power1}, and will not repeat them here.
The overall conclusion is pretty clear: \texttt{LPgof} emerges as the most powerful data-efficient test---it can detect new discoveries \textit{quickly and more reliably}. The prime reason for this level of performance is the good sparsity (energy compaction) property of ``LP-domain data analysis.'' The specially designed discrete LP-transformation basis provides an efficient coordinate system that requires far fewer parameters than the size of the distribution to capture the essential information. For additional examples see Figs. \ref{fig:powerh0} and \ref{fig:power2} of the Appendix. \subsection{Discovery-source Separation Problem} ~~``\textit{At this scale it is not possible to keep all the data (the LHC produces up to a Petabyte of data per second) and it is essential to have efficient data-filtering mechanisms so that we can separate the wheat from the chaff.}'' \begin{flushright} \vspace{-1.35em} {\rm --- Bob Jones, Project Leader at CERN.}\end{flushright} Imagine a typical scenario where massive experimental data are distributed across thousands of different computation units (say, servers). Each data source has its own `personal' distribution over a large domain of size $k=1000$, from which we only have access to $n=1000$ samples. Thus, the observed data can be summarized as a collection of highly noisy and sparse empirical distributions $\{\tp_\ell(x)\}$. Suppose we are given $900$ such sparsely-sampled empirical distributions from $p_0=U_{[k]}$, $50$ from $.9U_{[k]} + .1 {\rm Zipf}(1.25)$, and $50$ from the mixture $.9U_{[k]} + .1 \Delta{\rm Beta}(50, 50)$, where $\Delta{\rm Beta}(50, 50)$ denotes the increments of the cdf of ${\rm Beta}(50,50)$. The shapes of these three source distributions are shown in the left panel of Fig. \ref{fig:dss}. \vskip.35em {\bf Discovery-source Separation (DSS)}. The goal here is to design algorithms that can quickly filter out the `interesting' data sources, i.e., those with potential for a new-physics discovery. A more ambitious goal would be to classify the different data sources based on the `nature' of their discoverability. \vskip.35em {\bf Algorithm}. It consists of two main steps: \vskip.2em Step 1. LP-Fourier transform matrix: Define the LP-transform matrix $L \in \cR^{g\times m}$ by \beq L[\ell,j]=\LP[j;p_0,\tp_\ell] = \int T_j(x;F_0) \dd \wtF_\ell, \eeq for $\ell=1,\ldots,g=1000$ and $j=1,\ldots, m=10$. \vskip.25em Step 2. DSS-plot: Perform the singular value decomposition (SVD) $L=U\Lambda V^{T} = \sum_r \la_r u_r v_r^{T}$, where the $u_{j\ell}$ are the elements of the left singular vector matrix $U=(u_1,\ldots,u_m)$ and $\Lambda={\rm diag}(\la_1,\ldots,\la_m)$ with $\la_1 \ge \cdots \ge \la_m \ge 0$. The DSS-plot is the two-dimensional graph in the right panel of Fig. \ref{fig:dss}, formed from the points $(\la_1 u_{1\ell}, \la_2 u_{2\ell})$, for $\ell=1,\ldots,g$, by taking the top two dominant singular vectors. Each data source appears as a point, and the configuration captures the heterogeneity in terms of discoverability. \begin{figure}[ ] \centering \includegraphics[width=.46\linewidth,keepaspectratio,trim=2.25cm 1cm 1cm 1cm]{Figs/DSS-p.png}~~~~ \includegraphics[width=.46\linewidth,keepaspectratio,trim=1cm 1cm 2.25cm 1cm]{Figs/DSS-plot2.png} \vskip1em \caption{Discovery-source Separation (DSS) plot.
Each data batch is HEP$(k=1000,n=1000)$ with a different shape: $900$ were generated from $U_{[k]}$ (grey dotted line), $50$ from $.9U_{[k]} + .1{\rm Zipf}(1.15)$ (red dotted line), and $50$ from a mixture of $U_{[k]}$ and increments of ${\rm Beta}(50,50)$ with mixing proportion $.1$ (blue dotted line). The right panel shows the DSS-plot. The cluster of points around zero indicates the null data sources. The sources are classified into three groups based on the nature of their discoverability.}\label{fig:dss} \end{figure} \vskip.4em \textit{Interpretation}: The DSS-plot displays and compares a large number of (empirical) distributions $\tp_\ell$ by embedding them in 2D Euclidean space. The cluster of points (data sources) near the origin are the ones with the background distribution. The distance from the origin \[\text{discovery-index}_\ell=(\la_1 u_{1\ell})^2\,+\, (\la_2 u_{2\ell})^2,~~~~\ell=1,\ldots,g=1000,~~\] can be interpreted as the ``degree of newness'' of that dataset. The DSS-plot successfully separates the different sources based on the statistical nature of their signal components. Researchers (like Bob Jones) can use this tool to \emph{quickly identify} interesting datasets for careful investigation. \section{Discussion} \label{sec:diss} Compared to the rich and well-developed tools for continuous data modeling, many companion discrete data modeling problems are still quite challenging. This was the main motivation for undertaking this research, which aimed at developing a widely applicable general theory of discrete data modeling. This paper makes three broad contributions to the field of nonparametric statistical modeling: \vskip.5em ~~1) Model-Sharpening Principle: We have introduced a new principle of statistical model building, called `Density-Sharpening,' which performs three tasks in one step: model verification, model exploration, and model rectification. This is the guiding principle behind our systematic theory of discrete data modeling. As future work, we plan to explore how the model-sharpening principle can help in developing `auto-adaptable' machine-learning models. \vskip.5em ~~2) $\DS(p_0,m)$ Model: We have introduced a new class of nonparametric discrete probability models, along with robust estimation techniques, that can leverage a researcher's vague (misspecified) prior knowledge. It comes with novel exploratory graphical methods for `discovering' new knowledge from the data that investigators neither knew nor expected. \vskip.5em ~~3) Unified Statistical Framework: Our modern nonparametric treatment of the analysis of discrete data is shown to be rich enough to subsume a large class of statistical learning methods as special cases. This inclusivity of the general theory has some serious implications: firstly, from a theoretical angle, it deepens our understanding of the ties between different statistical methods; secondly, it simplifies practice by providing unified algorithms with expanded capabilities; and finally, it is expected to be beneficial for modernizing the statistics curriculum in a way that applies to small- as well as large-scale problems. \section*{Supplementary Appendix} \label{SM} The supplementary material includes additional theoretical and algorithmic details.
\bibliographystyle{Chicago} \section{Principles of Statistical Model Building} \begin{quote} `\textit{Part of a meaningful quantitative analysis is to look at models and try to figure out their deficiencies and the ways in which they can be improved.}' \begin{flushright} \vspace{-.14em} {\rm ----Nobel Lecture by Lars Peter \cite{hansen2014nobel}} \end{flushright} \end{quote} \vspace{-.25em} Scientific investigation never happens in a vacuum. It builds upon previously accumulated knowledge instead of starting from scratch. Statistical modeling is no exception to this rule. \vskip.34em Suppose we are given $n$ random samples $X_1,\ldots,X_n$ from an unknown discrete distribution $p(x)$. Before we jump into the statistical analysis, the scientist provides us with a hint about what might be an initial believable model for the data: `from my years of experience working in this field, I expect the underlying distribution to be somewhat close to $p_0(x)$.' This information comes with a disclaimer: `don't take $p_0(x)$ too seriously, as it is only a simplified approximation of reality. Use it with caution and care.' \vskip.34em The general problem of statistical learning then aims to address the following questions: Is the `shape of the data' consistent with the presumed model-0? If it is not, then what is it, and how does it differ from $p_0$? Revealing \textit{new} hidden patterns in the data is often the most essential statistical modeling task in science and engineering. Of course, ultimately, the aim is to search for a rich class of sensible models in an automatic and fast manner, by appropriately changing the misspecified $p_0$. Knowing \textit{how} to change the anticipated $p_0$ is the first step towards scientific discovery, as it allows scientists to re-evaluate alternative theories to explain the data. If we succeed at this, it will provide a mechanism to build ``hybrid'' knowledge-data integrated models, which are far more interpretable than classical fully data-driven nonparametric models. Full development of these ideas requires a new conceptual framework and mathematical tools. \vskip.34em \textit{Organization}. Section \ref{sec:theory} introduces a new family of nonparametric approximation and smoothing techniques for discrete probability distributions, built on the principle of `Density Sharpening.' Section \ref{sec:app} highlights the role of the proposed theoretical framework in developing statistical methods rich enough to include traditional as well as contemporary techniques: from something as simple as the one-sample Z-test for a proportion to tools as sophisticated as the compressive chi-square, the $d$-sharp negative binomial distribution, a universal goodness-of-fit program, relative entropy estimation, the Jaynes dice problem, and sample-efficient learning of big distributions. The paper ends with a discussion and conclusion in Section \ref{sec:diss}. Additional applications and methodological details are deferred to the Supplementary Appendix to ensure the smooth flow of the main ideas. \section{Density Sharpening: Model and Mechanism} \label{sec:theory} We describe a method of nonparametric approximation of a discrete distribution \textit{by} sharpening the initially assumed $p_0(x)$. The theory is remarkably simple, yet general enough to be vastly applicable in many areas \textit{beyond} density estimation, as described in Section \ref{sec:app}. Here is a bird's-eye view of the core mechanism, which is a three-stage process. \vskip.35em \texttt{Stage 1}.
Model-0 elicitation: The modeler starts with a suitable $p_0(x)$, chosen using his/her experience or subject-matter knowledge. Often a particular parametric form of $p_0(x)$ is selected keeping convenience and simplicity in mind. \vspace{-.1em} \texttt{Stage 2}. Exploratory uncertainty analysis: Assess the uncertainty of the presumed model $p_0(x)$ in a way that can explain `why and how' the assumed model-0 (i.e., $p_0$) is inadequate for the data. \vspace{-.1em} \texttt{Stage 3}. Coarse-to-refined density: Incorporate the `learned' uncertainty into $p_0(x)$ to produce an improved model $\hp(x)$ that eliminates the incompatibility with the data. \vskip.65em The required theory is developed in the next few sections and relies heavily on the following notation: let $X$ be a discrete variable with probability mass function $p_0(x)$, cumulative distribution function $F_0(x)$, and mid-distribution function $\Fmn(x)=F_0(x) - \frac{1}{2}p_0(x)$. The associated quantile function will be denoted by $Q_0(u)=\inf\{x: F_0(x) \ge u\}$ for $0<u<1$. By $\cL^2(dF_0)$ we mean the set of all square-integrable functions with respect to the discrete measure $\dd F_0$, i.e., a function $\psi \in \cL^2(dF_0)$ satisfies $\int |\psi|^2 \dd F_0 := \sum_x |\psi(x)|^2 p_0(x) < \infty$. The inner product of two functions $\psi_1$ and $\psi_2$ in $\cL^2(dF_0)$ will be denoted by $\langle \psi_1, \psi_2 \rangle_{F_0}:=\int \psi_1 \psi_2 \dd F_0$. Expectation with respect to $p_0(x)$ will be abbreviated as $\Ex_0(\psi(X)) :=\int \psi \dd F_0$. \subsection{Learning by Comparison: $d$-Sharp Density} We introduce a mechanism for nonparametrically estimating the density of $X_1,\ldots, X_n$ \textit{by comparing and sharpening} the presumed working model $p_0(x)$. \vskip.3em \begin{defn}[$d$-Sharp Density] For a discrete random variable $X$, we have the following universal density decomposition: \beq \label{eq:gd} p(x)\,=\,p_0(x)\,d\big(F_0(x);F_0,F\big), \eeq where $d(u;F_0,F)$ is defined as \beq d(u;F_0,F)= \dfrac{p(Q_0(u))}{p_0(Q_0(u))}, ~\,0<u<1.\eeq The function $d(u;F_0,F)$ is called the `comparison density' because it \textit{compares} the assumed $p_0$ with the true $p(x)$, and it integrates to one: \[\int _0^1 d(u;F_0,F)\dd u \,=\, \int_x d(F_0(x);F_0,F) \dd F_0(x) \,=\,\sum_x \big(p(x)/p_0(x)\big) p_0(x)\,=\, 1. ~~\] For brevity's sake, we will often abbreviate $d(F_0(x);F_0,F)$ as $d_0(x)$ throughout the article. \end{defn} \begin{rem}[The philosophy of `\textit{learning by comparison}'] The density representation formula \eqref{eq:gd} describes a way of building a general $p(x)$ \textit{by comparing it with} the initial $p_0(x)$. The $d$-modulated class of distributions is constructed by amending (instead of abandoning) the starting imprecise model $p_0(x)$. \end{rem} \begin{rem}[$d$-sharp density] Eq. \eqref{eq:gd} provides a formal statistical mechanism for sharpening the initial vague $p_0(x)$ using the data-guided perturbation function $d_0(x)$. For this reason, we call the improved $p_0(x) \times d_0(x)$ the `$d$-sharp' density. \end{rem} \begin{example}[Earthquake Data]\label{example1} We are given annual counts of major earthquakes (magnitude 6 and above) for the years 1900--2006; the data are available in the R package \texttt{astsa}. Seismic engineers routinely use the negative binomial distribution for modeling earthquake frequency \citep{kagan2000prob,kagan2010}. The best-fitted negative binomial (NB) distribution, with $\mu=19$ and $\phi=12$, is shown in Fig. \ref{fig:earthq}; we take it as our rough initial model $p_0(x)$.
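A minimal R sketch of this first modeling step follows; the dataset name \texttt{EQcount} inside \texttt{astsa} is our assumption here, as is the use of \texttt{MASS::fitdistr} for the maximum-likelihood fit:
\begin{verbatim}
library(astsa)    # earthquake count series (assumed name: EQcount)
library(MASS)     # fitdistr()
x   <- as.numeric(EQcount)
fit <- fitdistr(x, "negative binomial")   # 'size' ~ phi, 'mu' ~ mu
p0  <- dnbinom(0:50, size = 12, mu = 19)  # model-0 pmf on a truncated support
\end{verbatim}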
From the figure, it is clearly evident that the conventional NB distribution is unable to adequately capture the shape of the earthquake count data. \vskip.3em {\bf Model uncertainty quantification}. For earthquake engineers it is of utmost importance to determine the uncertainty of the assumed NB model \citep{bernreuter1981seismic}. The problem of uncertainty quantification is the holy grail of earthquake science, due to its importance in estimating risk and in making an accurate forecast of the next big earthquake. The comparison density $d(u;F_0,F)$ captures the uncertainty of the assumed NB model. The left plot in Fig. \ref{fig:earthq} displays the estimated $\whd(u;F_0,F)$ for these data, which reveals the nature of the deficiency of the base NB model. A robust nonparametric method of estimating $\whd$ from data will be discussed in the subsequent sections. But before going into the nonparametric approximation theory, we will spend some time on its interpretation. \begin{figure}[ ] \vspace{-1.1em} \centering \includegraphics[width=.46\linewidth,keepaspectratio,trim=1.15cm 1cm 1cm 1cm]{Figs/Earthquake-CompDensity.png}~~~~~~ \includegraphics[width=.46\linewidth,keepaspectratio,trim=1cm 1cm 1.15cm 1cm]{Figs/Earthquake-2desnity.png} \vskip.3em \caption{Modeling the earthquakes distribution. Left: Estimated comparison density $\whd(u;F_0,F)$; Right: The fitted NB distribution and the re-calibrated $d$-sharpened version.}\label{fig:earthq} \end{figure} \vskip.2em {\bf Interpretable Exploratory Learning}. In our model \eqref{eq:gd}, $d$ plays the role of a data-driven correction function, measuring the discrepancy between the initial $p_0$ and the unknown $p$. Thus, the non-uniformity of $\whd$ immediately tells us that there is something more in the data than what was expected in light of $p_0(x)$. In fact, the shape of $\whd$ reveals the nature of the most prominent deviations between the data and the presumed $p_0(x)$---which, in this case, are bimodality and the presence of a heavier tail than the anticipated NB distribution. \vspace{-.3em} \end{example} \begin{rem}[Role of $d$] The comparison density $d$ performs dual functions: (i) its graph acts as an exploratory diagnostic tool that exposes the unexpected, forcing decision makers (e.g., legislators, national security agencies, local administrations) to think outside the box: what might have caused this bimodality? how can we repair the old seismic-hazard forecast model so that it incorporates this new information? etc.; (ii) it provides a formal process for transforming and revising an initially misspecified model into a useful one. The red curve in the right panel is obtained by multiplying (perturbing) the NB pmf by the estimated comparison density, obeying the density representation formula Eq. \eqref{eq:gd}. \end{rem} \subsection{LP-Fourier Analysis} \label{sec:LPFA} The task is to nonparametrically approximate $d(F_0(x);F_0,F)$ so as to be able to apply the density-sharpening equation \eqref{eq:gd}. We approximate $d \hspace{-.08em}\circ \hspace{-.08em}F_0(x) \in \cL^2({dF_0})$ by projecting it onto a space of polynomials of $F_0(x)$ that are orthonormal with respect to the base measure $\dd F_0$. How can such a system of polynomials be constructed in a completely automatic and robust manner for \textit{any} given $p_0(x)$? In the section that follows, we discuss a universal construction.
\subsubsection{Discrete LP-Basis} We describe a general theory for constructing LP-polynomials---a new class of robust polynomials $\{T_j(x;F_0)\}_{j\ge 1}$ that are functions of $F_0(x)$ (not of raw $x$) and are orthonormal with respect to the user-specified discrete distribution $p_0(x)$. \begin{figure}[ ] \centering \includegraphics[width=.45\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/Earthquake-basis1.png}~~~~~ \includegraphics[width=.45\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/Earthquake-basis2.png}\\[2em] \includegraphics[width=.45\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/Earthquake-basis3.png}~~~~~ \includegraphics[width=.45\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/Earthquake-basis4.png} \vskip1.5em \caption{Shape of the top four LP-orthonormal basis functions $\{S_j(u;F_0)\}_{1\le j \le 4}$ for $p_0(x) :={\rm NB}(x;\mu=19,\phi=12)$ for the earthquake data; where recall $S_j\hspace{-.084em}\circ \hspace{-.084em} F_0\,(x)=T_j(x;F_0)$. Notice the global nonlinearity (linear, quadratic, cubic, and so on) and the local staircase-like (piecewise-constant, unequal-length segments) shape of these specially designed polynomials. They are, by construction, orthonormal with respect to the chosen measure $p_0(x)$, here ${\rm NB}(x;\mu=19,\phi=12)$.}\label{fig:earthT} \end{figure} \texttt{Step 1:} Define the first-order LP-basis function as the \textit{standardized} mid-distribution transform: \beq \label{eq:T1} T_1(x;F_0) \,=\,\dfrac{\sqrt{12} \big[\Fmn(x) - 0.5\big]}{\sqrt{1-\sum_x p_0^3(x)}}. \eeq Verify that $\Ex_0[T_1(X;F_0)]=0$ and $\Ex_0[|T_1(X;F_0)|^2]=1$, since $\Ex[\Fmn(X)]=1/2$ and $\Var[\Fmn(X)]=\big(1-\sum_x p_0^3(x)\big)/12$. \vskip.25em ~~\texttt{Step 2:} Apply a \emph{weighted} Gram-Schmidt procedure to $\{T_1^2,\ldots, T_1^{M}\}$ to construct the higher-order LP orthogonal system $T_j(x;F_0)$ with respect to the measure $\dd F_0$: \[\sum\nolimits_x p_0(x) T_j(x;F_0)=0;~~\,\sum\nolimits_x p_0(x) T_j(x;F_0)T_k(x;F_0)=\delta_{jk}, ~~1\le j,k\le M, \] where $\delta_{jk}$ is the Kronecker delta and the highest degree $M$ of the LP-polynomials is always less than the support size of the discrete $p_0$. For example, if $X$ is binary, one can construct at most $2-1=1$ LP-basis function; see Section \ref{sec:bin}. \vskip.3em Fig. \ref{fig:earthT} shows the top four LP-basis functions for the earthquake data with $p_0$ as ${\rm NB}(x;\mu=19,\phi=12)$. Here, we have displayed them on the unit interval as functions of $u=F_0(x)$, denoted by $S_j(u;F_0) := T_j(Q_0(u); F_0)$, $0<u<1$. Notice the typical shape of these custom-constructed discrete orthonormal polynomials: globally nonlinear (linear, quadratic, cubic, and so on) and locally piecewise-constant with unequal step sizes. A short programmatic sketch of this construction is given below. \begin{rem}[Role of the LP-coordinate system in the unification of statistical methods] LP-bases play a unique role in statistical modeling---they provide an efficient coordinate (data-representation) system that is fundamental to developing unified statistical algorithms. \vspace{-.65em} \end{rem}
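The following is a compact R sketch (ours, under the two-step construction above; not the authors' implementation), returning the matrix $B[x,j]=T_j(x;F_0)$ for a pmf \texttt{p0} given on its support:
\begin{verbatim}
# T_1: standardized mid-distribution transform; T_2..T_m: p0-weighted
# Gram-Schmidt applied to powers of T_1 (requires m < support size)
lp_basis <- function(p0, m) {
  Fmid <- cumsum(p0) - p0 / 2
  T1   <- sqrt(12) * (Fmid - 0.5) / sqrt(1 - sum(p0^3))
  B    <- sapply(1:m, function(j) T1^j)
  for (j in seq_len(m)) {
    v <- B[, j] - sum(p0 * B[, j])              # orthogonal to constants
    if (j > 1) for (l in 1:(j - 1))
      v <- v - sum(p0 * v * B[, l]) * B[, l]    # orthogonal to earlier T_l
    B[, j] <- v / sqrt(sum(p0 * v^2))           # normalize: E_0[T_j^2] = 1
  }
  B
}
\end{verbatim}
For instance, \texttt{lp\_basis(rep(1/6, 6), 5)} reproduces the full-rank fair-die basis used in Example \ref{ex:gam}.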
\subsection{The {\boldmath$\DS(p_0,m)$} Model} \label{sec:dstheory} \begin{defn}[LP-canonical Expansion] Expand the comparison density in the LP-orthogonal series \beq \label{cdm} d(F_0(x);F_0,F)\,=\,1+\sum_j \LP[j;F_0,F] \,T_j(x;F_0), \eeq where the $j$th LP-Fourier coefficient satisfies the following identity: \beq \label{eq:dlp} \LP[j;F_0,F]= \big\langle d \hspace{-.08em}\circ \hspace{-.08em}F_0, T_j \big \rangle_{F_0}.\eeq A change-of-basis perspective: The conventional way to represent a discrete distribution is through the indicator basis (histogram representation): \beq \eta_j(i)\,=\, \ind \big\{ X_i \in [x_j, x_{j+1} ) \big\},~~ {\rm for}\, j=1,2,\ldots, r,~ \eeq where $r$ is the domain size (number of unique values) of the empirical distribution $\tp(x)$. In \eqref{cdm}, we have performed a ``change of basis'' from the amorphous indicator basis to the more structured LP-basis $\{T_j(x;F_0)\}$, where the expansion coefficients $\LP[j;F_0,F]$ act as the coordinates of $p(x)$ \textit{relative to} the assumed $p_0(x)$: \[\big[ F \big]_{F_0} := \Big(\LP[1;F_0,F], \ldots, \LP[m;F_0,F]\Big),~~1\le m < r.\] For that reason, one may call these coefficients the discrete LP-Fourier transform (LPT) of $p(x)$ relative to $p_0(x)$. \end{defn} \begin{defn} $\DS(p_0,m)$ denotes the class of distributions with the following representation: \beq \label{DSm} p(x)\,=\,p_0(x)\Big[ 1\,+\, \sum_{j=1}^m \LP[j;F_0,F]\, T_j(x;F_0)\Big], \eeq obtained by substituting \eqref{cdm} into \eqref{eq:gd}. Here $\DS(p_0,m)$ stands for {\bf D}ensity-{\bf S}harpening of $p_0(x)$ using an $m$-term LP-series approximation of $d_0(x)$. $\DS(p_0,m)$ is a class of nonparametrically-designed parametric models that are flexible enough to capture various \emph{shapes of discrete} $p(x)$, like multimodality, excess variation, long tails, and sharp peaks. \vspace{-.65em} \end{defn} To estimate the unknown coefficients $\LP[j;F_0,F]$ of the $\DS(p_0,m)$ model, note the following important identity: \vspace{-.35em} \bea \label{eq:lpeq} \LP[j;F_0,F]&=& \int d(F_0(x);F_0,F) T_j(x;F_0) \dd F_0(x) \nonumber \\ &=& \int T_j(x;F_0) \dd F(x) \nonumber \\ &=& \Ex_F\big[ T_j(X;F_0) \big]. \eea \vspace{-.4em} This immediately leads to the following ``weighted mean'' estimator: \beq \label{eq:eestlp} \tLP_j\,:= \LP[j;F_0,\widetilde F]\,=\, \Ex_{\wtF}\big[T_j(X;F_0)\big]\,=\,\sum_x \tp(x) T_j(x;F_0).~~~~ \eeq Using standard empirical process theory \citep{csorgHo1983quantile,parzen1998statistical}, one can show that under the null hypothesis $H_0:p=p_0$ the sample LP-statistics are asymptotically i.i.d.\ $\cN(0,n^{-1/2})$. Thus one can quickly obtain a sparse estimated $\DS(p_0,m)$ model by retaining only the `significant' LP-coefficients---those whose magnitudes exceed $2/\sqrt{n}$. {\bf Earthquake Data Example}. The first $m=10$ estimated $\tLP_j$ are shown in Fig. \ref{fig:earthlp} of the appendix, which indicates that the only interesting non-zero LP-coefficient is $\tLP_6$. The explicit form of the estimated $\DS({\rm NB},m=6)$ model for the earthquake data is given by: \beq \label{eq:dsequ} \hp(x) = p_0(x)\big[ 1 + 0.20 T_6(x;F_0) \big],\eeq where $p_0={\rm NB}(x;\mu=19, \phi=12)$. The resulting $\hp(x)$ is plotted as a red curve in Fig. \ref{fig:earthq}.
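In code, this estimation recipe is a short routine on top of the \texttt{lp\_basis} sketch from Section \ref{sec:LPFA} (the helper names are ours, for illustration):
\begin{verbatim}
# LP-coefficient estimates and a sparse DS(p0, m) pmf on `support`
ds_fit <- function(x, support, p0, m) {
  B  <- lp_basis(p0, m)
  LP <- colMeans(B[match(x, support), , drop = FALSE])  # mean of T_j(X)
  LP[abs(LP) < 2 / sqrt(length(x))] <- 0   # keep 'significant' terms only
  as.vector(p0 * (1 + B %*% LP))           # eq. (DSm); the maxent form in
}                                          # the next section ensures positivity
\end{verbatim}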
\subsection{LP-Maximum Entropy Analysis} \label{sec:lpmaxent} To ensure non-negativity, we expand $\log d$ (instead of $d$, as was done in Eq. \eqref{cdm}) in the LP-Fourier series, which results in the following exponential model: \beq \label{eq:dexp} d_{\teb}(u;F_0,F)\,=\,\exp\Big \{ \sum_{j\ge 1} \te_j S_j(u;F_0)\,-\, \Psi(\teb)\Big \},~~0<u<1, \eeq where $\Psi(\teb)=\log \int_0^1 \exp\{ \sum_j \te_j S_j(u;F_0)\}\dd u.$ This model is also called the maximum-entropy (maxent) comparison density model because it maximizes the entropy $-\int d_{\teb} \log d_{\teb}$ (flattest possible, thus promoting smoothness) under the following LP-moment constraints: \beq \label{eq:cons} \Ex_{\teb}[S_j(U;F_0)]\,=\,\LP[j;F_0,\wtF],~~(j=1,2,\ldots). \eeq The LP-moments $\LP[j;F_0,\wtF]$ are `compressed measurements' (linear combinations of the observed data; verify from \eqref{eq:eestlp}), which are sufficient statistics for the comparison density $d_{\teb}$. \vskip.4em \begin{defn}[Maxent $\DS(p_0,m)$ model] Substituting \eqref{eq:dexp} into \eqref{eq:gd}, we have the following maxent $\DS(p_0,m)$ model: \beq \label{eq:maxentds} p(x)\,=\,p_0(x) \exp\Big \{ \sum_{j\ge 1} \te_j T_j(x;F_0)\,-\, \Psi(\teb)\Big\}. \vspace{-.5em} \eeq To estimate a sparse maxent comparison density model, we carry out the optimization routine using only the `significant' LP-moments in \eqref{eq:cons}. \end{defn} \vskip.1em {\bf Earthquake Data Example}. The estimated maxent $\DS(p_0,m)$ model for the earthquake distribution is given by \beq \label{eq:xnbequake} \hhp(x) = p_0(x)\exp\big \{ 0.195 T_6(x;F_0) - 0.02\big \},\eeq whose shape is almost indistinguishable from the LP-Fourier estimated p.m.f. \eqref{eq:dsequ}. \section{Applications in Statistical Modelling} \label{sec:app} We describe how the general principle of `density sharpening' acts as a unified framework for the analysis of discrete data, with a wide variety of applications ranging from basic introductory methods to more advanced statistical modeling techniques. \subsection{One-sample Test of Proportion} \label{sec:bin} Given $n$ samples from a binary $X$, the one-sample proportion test is concerned with testing whether the population proportion $p$ is equal to the hypothesized proportion $p_0$. We approach this problem by reformulating it in our mathematical notation: Step 1. We start with the $\DS(p_0,m=1)$ model \beq \label{eq:dsbin} p(x)=p_0(x) \Big\{1+\LP[1;p_0,p] T_1(x;F_0)\Big\},~~x=0,1,\eeq where the null model is $p_0(x)= xp_0 + (1-x) (1-p_0)$, for $x=0,1$. Step 2. We rewrite the original hypothesis $H_0:p=p_0$ in terms of the LP-parameter as $H'_0:\LP[1;p_0,p]=0$. Step 3. We derive an explicit formula for $\LP[1;p_0,\tp]$. It is a two-step process: first, we need the analytic expression of the LP-basis $T_1(x;p_0)$, \beq T_1(x;p_0) = \left\{ \begin{array}{rl} -\dfrac{p_0}{\sqrt{p_0(1-p_0)}} &\mbox{for $x=0$} \\ \dfrac{1-p_0}{\sqrt{p_0(1-p_0)}} &\mbox{for $x=1$;} \end{array} \right. \eeq we then apply formula \eqref{eq:eestlp} to deduce: \beq \label{lp1bin} \LP[1;p_0,\tp] = (1-\tp)T_1(0;p_0) + \tp T_1(1;p_0) = \dfrac{\widetilde p - p_0}{\sqrt{p_0(1-p_0)}}.~~~\eeq Step 4. A remarkable fact is that the test based on \eqref{lp1bin} exactly matches the classical Z-test, whose null distribution is $\sqrt{n}\, \LP[1;p_0,\tp] \sim \cN(0,1)$ as $\nti$. This shows how the LP-theoretical device provides a transparent \textit{first-principles derivation} of the one-sample proportion test, shedding light on its genesis.
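A short numerical confirmation of this equivalence (our sketch):
\begin{verbatim}
set.seed(1)
n <- 200; p0 <- 0.3
x <- rbinom(n, 1, 0.35)                  # data with true proportion 0.35
pt  <- mean(x)
LP1 <- (pt - p0) / sqrt(p0 * (1 - p0))   # eq. (lp1bin)
c(sqrt(n) * LP1,                         # sqrt(n) * LP[1; p0, ptilde]
  (pt - p0) / sqrt(p0 * (1 - p0) / n))   # classical one-sample Z statistic
\end{verbatim}
Both entries agree, since $\sqrt{n}\,(\tp-p_0)/\sqrt{p_0(1-p_0)} = (\tp-p_0)/\sqrt{p_0(1-p_0)/n}$.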
\subsection{Expandable Negative Binomial Distribution}\label{sec:dnb} We now focus on one important special case of the maxent $\DS(p_0,m)$ family of distributions \eqref{eq:maxentds}, where the base measure $p_0(x)$ is taken to be a negative binomial (NB) distribution: \beq \label{eq:xnb} p(x) \,= \,\binom{x + \phi - 1}{x} \, \left( \frac{\mu}{\mu+\phi} \right)^{\!x} \, \left( \frac{\phi}{\mu+\phi} \right)^{\!\phi} \! \exp\Big \{ \sum_{j\ge 1} \te_j T_j(x;F_0)\,-\, \Psi(\teb)\Big \},~~x \in \mathbb{N}.~~\eeq Note that the basis functions $\{T_j(x;F_0)\}_{j\ge 1}$ are specially-designed LP-orthonormal polynomials associated with the base measure $p_0 = {\rm NB}(\phi,\mu)$. We call \eqref{eq:xnb} the $m$th-order expandable NB distribution, denoted by \texttt{XNB}(m). A few practical advantages of \texttt{XNB}(m) distributions are: the computational ease they afford for estimating parameters; their compactly parameterizable yet shape-flexible nature; and, finally, their ability to provide explanatory insights into how $p(x)$ differs from the standard NB distribution. Due to their simplicity and flexibility, they have the potential to be a `default choice' for modeling count data. We have already seen an example of an $\texttt{XNB}$-distribution in \eqref{eq:xnbequake} in the context of modeling the earthquake distribution. We now turn our attention to two further real-data examples. \begin{figure}[ ] \centering \includegraphics[width=.48\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/NMES-NB.png}~~~~~ \includegraphics[width=.48\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/NMES-boxplot.png}\\[2em] \includegraphics[width=.48\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/NMES-CD.png}~~~~~~~ \includegraphics[width=.48\linewidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/NMES-XNB.png} \vskip1.5em \caption{Nonparametric modeling of the NMES 1988 data. Top left: For clarity, we only show the data over the domain $[0,20]$; otherwise, the ultra long-tailedness of the data would make it harder to see what is going on in the interesting zones. The arrows point out some specific aspects of the data that were missed by the NB distribution. Top right: compares two boxplots---one of the observed data and the other of data simulated from ${\rm NB}(1,5.7)$---which captures the difference in tail behavior. The tail of the NB has to be stretched to match the shape of the data. Bottom left: $\whd$ captures the `unexplained shape of the data,' which tells us \textit{how} to change the initial model-0 to fit the data. Bottom right: The red curve is the rectified \texttt{XNB} probability distribution.}\label{fig:NMES} \end{figure} \begin{example}[NMES 1988 Data] This is a part of the US National Medical Expenditure Survey (NMES) conducted in 1987 and 1988. It is available in the R-package \texttt{AER}. We have $n=4,406$ observations of a discrete random variable $X$ which denotes how many times an individual, aged 66 and covered by Medicare, visited a physician's office. As displayed in the Fig. \ref{fig:NMES} boxplot, the distribution has a large support size (it varies between $0$ and $89$), with some regions being extremely data-sparse. \vskip.35em The blue curve in the top left plot shows the ${\rm NB}(\hat \phi=1, \hat \mu=5.7)$, where the parameters are maximum-likelihood estimates. Next, we estimate the LP-maxent $\whd_{\teb}$, using the theory of Sec. \ref{sec:lpmaxent}. At this point, it is strongly advisable to pay attention to the shape of $\whd_{\teb}$. Why?
Because it efficiently extracts and exposes `unanticipated' aspects in the data that cannot be explained by the initial NB distribution. The bottom-left panel of Fig. \ref{fig:NMES} immediately reveals a few things: (i) NB underestimates the probability at $x=0$; (ii) it overestimates the probability mass around $x=2$ and $3$; (iii) there seems to be an excess probability mass (`bump' structure) around $x=4$; (iv) NB clearly has a shorter tail than what we see in the data---this can be seen from the sharply increasing right tail of the comparison density. To better understand the tail-behavior, we have simulated $n$ samples from ${\rm NB}(1,5.7)$ and contrasted the two boxplots in the top-right panel, which strongly indicates the long-tailedness of $p(x)$ relative to the NB distribution. Any reader will agree that without the help of $\whd_{\teb}$, even experienced eyes could have easily missed these subtle patterns. Finally, multiply $\whd_{\teb}$ by the ${\rm NB}(1,5.7)$, following eq. \eqref{eq:xnb}, to get the estimated \texttt{XNB} distribution---the red p.m.f. curve shown in the bottom-right panel of Fig. \ref{fig:NMES}. \end{example} \begin{example}[Computer Breaks Data] \label{ex:comp} We are given the number of times a DEC-20 computer broke down at the Open University in each of $n=128$ consecutive weeks of operation, starting in late 1983. The data show positive skewness with a slightly longer tail; see Fig. \ref{fig:comp} in the appendix. The mechanics of \texttt{XNB} modeling proceed as follows: (i) We start by estimating the parameters of $p_0(x)$, which in this case is the MLE-fitted (one can use any other method of estimation) ${\rm NB}(\hat \phi=1.7, \hat \mu=4)$. (ii) The next step is the estimation of $\whd_{\teb}$, which in this case is just the uniform distribution---none of the LP-maxent parameters were large enough to be selected. This is depicted in the left panel of supplementary Fig. \ref{fig:comp}. This graphical diagnostic indicates that the initial $p_0$ fits the data satisfactorily; no repairing is necessary. (iii) Accordingly, our `density sharpening' principle returns the $\texttt{XNB}(m=0)$ as the final model, which is simply the starting parametric model ${\rm NB}(\hat \phi=1.7, \hat \mu=4)$. It is interesting to contrast our finding with \cite{saulo2020family}, where the authors fit a highly specialized nonparametric discrete distribution to this data. The beauty of our approach is that it performs nonparametric correction (through $d$) only when it is warranted. When the reality is already simple, we don't complicate it unnecessarily. \end{example} \vspace{-.3em} \subsection{{\boldmath$\chi^2$} and Compressive-{\boldmath$\chi^2$}} Given a random sample of size $n$ from the unknown population distribution $p(x)$, the chi-square goodness-of-fit statistic between the sample probabilities $\tp(x)$ and the expected $p_0(x)$ can be rewritten as follows: \beq \dfrac{\chi^2}{n} = \sum_x \dfrac{\big( \tp(x) - p_0(x) \big)^2}{p_0(x)} = \sum_x p_0(x) \big[ \tp(x)/p_0(x)\,-1 \big]^2= \int_0^1 \big[ d(u;p_0,\tp) - 1\big]^2\dd u.\eeq By applying Parseval's identity to the LP-Fourier expansion of $d$, we have the following important equality: \beq \label{eq:LPchisq} \dfrac{\chi^2}{n}\, =\, \sum_{j=1}^{r-1}\Big| \LP[j;p_0,\tp] \Big|^2\defeq\,\LP(p_0 \| \tp), \eeq where $r$ is the number of unique values in our sample $X_1,\ldots,X_n$. This shows that the chi-square statistic is a ``saturated'' raw-nonparametric measure with $r-1$ components.
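Before turning to examples, a quick numerical sanity check of \eqref{eq:LPchisq} (our illustration, using a made-up toy distribution; Gram--Schmidt orthogonalization is one standard way to realize a discrete orthonormal basis with respect to $p_0$, and Parseval's identity holds for any such basis):
\begin{verbatim}
import numpy as np

# Toy example (hypothetical): a 5-point null p0 and observed probabilities
x  = np.arange(5)
p0 = np.array([0.10, 0.20, 0.40, 0.20, 0.10])
pt = np.array([0.16, 0.14, 0.36, 0.20, 0.14])   # empirical p-tilde

# Orthonormal polynomial basis w.r.t. p0 (Gram-Schmidt on 1, x, ..., x^4)
basis = []
for deg in range(5):
    v = x.astype(float) ** deg
    for t in basis:
        v = v - np.sum(p0 * v * t) * t          # remove earlier projections
    basis.append(v / np.sqrt(np.sum(p0 * v * v)))
T = np.array(basis[1:])                         # drop the constant T_0

LP = T @ pt                                     # LP[j] = sum_x pt(x) T_j(x)
print(np.sum((pt - p0) ** 2 / p0), np.sum(LP ** 2))   # identical values
\end{verbatim}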
\begin{example}[The Gambler's Die] \label{ex:gam} A gambler rolls a die $n=60$ times and gets the following observed counts: \vskip.25em \begin{table}[h] \centering \caption{The observed frequencies} \label{tab:gam} \renewcommand{\tabcolsep}{.3cm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{lcccccc} \hline Number on die & $1$ & $2$ & $3$ & $4$ & $5$ & $6$\\ Observed $\tp$ & 4/60 & 6/60 &17/60 &16/60 & 8/60 &9/60\\ Hypothesized $p_0$ & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6\\ \hline \end{tabular} \end{table} The gambler wishes to determine whether the die is fair. If it is fair, we would expect the outcomes $1$ to $6$ to be equally likely, each with probability $1/6$. Pearsonian chi-square and a full-rank LP-analysis both lead to the same answer: \[ \chi_{{\rm obs}}^2\, =\, n \times \sum_{j=1}^{6-1}\Big| \LP[j;p_0,\tp] \Big|^2=14.2, ~\,\text{with degrees of freedom $5$} \] with the resulting $p$-value $0.0143$. Note that the sum of squares of the $6-1=5$ LP-Fourier coefficients ``exactly'' reproduces (numerically) Pearson's chi-square statistic! This further verifies the mathematical fact elucidated in eq. \eqref{eq:LPchisq}. Conclusion: The die is loaded at the 5\% significance level. \begin{figure}[ ] \centering \includegraphics[width=.46\linewidth,keepaspectratio,trim=2cm 1.5cm 1cm .55cm]{Figs/Gam-d.png}~~~ \includegraphics[width=.46\linewidth,keepaspectratio,trim=1cm 1.5cm 2cm .55cm]{Figs/SD-dhat.png} \caption{The estimated $\whd(u;F_0,F)$ for examples \ref{ex:gam} and \ref{ex:SD}. It helps to identify `where' the interesting differences between the data ($\tp$) and the hypothesized model ($p_0$) lie.}\label{fig:gamdie} \end{figure} \textit{Exploratory insight}. Here we want to go beyond classical confirmatory analysis, with the goal to understand \textit{how the die is loaded}. The answer is hidden in the shape of the comparison density $\whd$. Fig. \ref{fig:gamdie}(a) firmly suggests that the die was loaded heavily in the middle---especially on the sides $3$ and $4$, where it landed most frequently. \end{example} \begin{example}[Sparse Dice problem] \label{ex:SD} This is an example of sparse count data with many groups. Imagine a die with $k=20$ faces rolled $n=20$ times: \begin{itemize}[itemsep=1pt,topsep=1pt] \item The hypothesized model:~~ \,$p_0=(1/4,1/4, 1/36,\ldots, 1/36)$ \item The observed probabilities:~~$\tp=(3/4, 1/4,0,\ldots,0)$. \end{itemize} We would like to know whether the postulated model $p_0(x)$ actually reflects the data or not. If it does not, then we want to know how the hypothesized model differs from the observed probabilities. Pearsonian chi-square\footnote{The R-function \texttt{chisq.test}() generates the message that ``Chi-squared approximation may be incorrect.''} yields the value $\chi_{{\rm obs}}^2 =30$, with degrees of freedom $19$ and $p$-value $0.052$. Conclusion: there is no evidence of discrepancy at the 5\% level, even though there is a glaring difference between $\tp(1)=3/4$ and $p_0(1)=1/4$. The legacy $\chi^2$ loses its power because of `inflated degrees of freedom' for large sparse problems: a large value of $k$ increases the critical value $\chi^2_{\al;k-1}$, making it harder to detect `small' but important changes. \vskip.4em LP-Analysis of the sparse dice problem: (i) Construct the discrete LP-polynomials $\{T_j(x;p_0)\}$ that are specially-designed for the given $p_0(x)$. Appendix Fig. \ref{fig:SD:basis} displays the shape of those basis functions. (ii) Compute the LP-Fourier coefficients $\LP[j;p_0,\,\tp]$ by $\sum_x \tp(x) T_j(x;p_0)$. The Appx.
figure \ref{fig:sdice} identifies the first two LP-parameters as significant components. (iii) We now compute the compressive-$\chi^2$ based on these interesting components: \beq \LP(p_0 \| \tp) \,=\,\big|\LP[1;p_0,\,\tp]\big|^2 \,+\, \big|\LP[2;p_0,\,\tp]\big|^2=1.49,~\,~\text{with degrees of freedom $2$,} \eeq and $p$-value $3.4\times 10^{-7}$. Also noteworthy is the fact that the compressive LP-chisquare is numerically almost the same as the raw $\chi^2$: \[ \vspace{-1em} \chi^2_{{\rm obs}} \,=\, 30 ~\approx~ n \times \LP(p_0 \| \tp) = 29.8.\] \end{example} \begin{rem}[Auto-adaptability] LP-goodness-of-fit shows an impressive adaptability property: under the usual scenario (as in example \ref{ex:gam}) it reduces to the classical $\chi^2$ analysis, while for large-sparse problems it automatically produces a chi-square statistic with the fewest possible degrees of freedom, which boosts its power. Our reformulation (in terms of modern LP-nonparametric language) enables a better way of doing chi-square goodness-of-fit analysis that applies to a much broader class of applied statistics problems. In John Tukey's (1954) words: ``Do we need to find new techniques, or to use old ones better?'' \end{rem} \begin{rem}[Ungrouped case] What if we have \textit{ungrouped} data: given a random sample of counts $X_1,\ldots,X_n$, check (confirm) whether the data are compatible with the hypothesized $p_0(x)$, i.e., test the hypothesis $H_0:p=p_0$. One way to approach this problem is to forcefully group the data points into different categories and then apply Pearson's $\chi^2$ test. This is (almost) always a bad strategy, since grouping leaks information. Moreover, the arbitrariness involved in choosing the groups makes it an even less attractive option. However, our LP-divergence measure $\LP(p_0 \| \tp)$ \eqref{eq:LPchisq} can be applied to ungrouped data with no trouble. The next section expands on this. \end{rem} \subsection{Explanatory Goodness-of-Fit} \label{sec:Xgof} What is an explanatory goodness-of-fit? Why do we need it? This is perhaps best answered by quoting the views of John Tukey: \begin{quote} \vspace{-.4em} \textit{``What are we trying to do with goodness of fit tests? (Surely not to test whether the models fits exactly, since we know that no model fits exactly!) What then? How should we express the answer of such test?}'' \begin{flushright} \vspace{-1.35em} {\rm ---John \cite{tukey1954}}\end{flushright} \end{quote} \vspace{-.4em} To satisfactorily answer these questions we need to design a GOF-procedure that is \textit{simultaneously} confirmatory and exploratory in nature: \vskip.3em ~~$\bullet$ On the confirmatory side, it aims to develop a universal GOF statistic that is easy to use and fully automated for \textit{any} user-specified discrete $p_0(x)$. One such universal GOF measure is $\LP(p_0 \| \tp)$, defined as \beq \label{eq:ugof} \LP(p_0 \| \tp)\,=\,\int_0^1 \big(d(u;p_0,\tp) - 1\big)^2 \dd u = \sum_j \Big| \LP[j;p_0,\tp] \Big|^2= \sum_j \Big|\sum_x \tp(x) T_j(x;F_0)\Big|^2,~~ \eeq where the index $j$ runs over the significant components. See Appx. \ref{app:gof} for more details. \vskip.3em ~~$\bullet$ On the exploratory side, the graphical visualization of the comparison density $d(u;p_0,\tp)$ provides explanations as to \textit{why} $p_0(x)$ is inadequate for the data (if so) and \textit{how} to rectify it to reduce its incompatibility with the data. This has special significance for data-driven discovery and hypothesis generation.
In particular, the non-zero LP-coefficients indicate the ``main sources of discrepancies.'' In the following, we illustrate this method using three real data examples, each of which exhibits a different degree of lack-of-fit. \begin{example}[Spiegel Family Data] \label{ex:spiegel} \cite{spiegel1972} reported survey data on $n=320$ families with five children. The numbers of families with $0,1,2,3,4$ and $5$ girls were $18,56, 110, 88, 40$ and $8$. As an obvious model for $p_0$ we choose Binomial$(5, \hat \pi=.463)$. The estimated LP-Fourier coefficients yield \beq n \times \LP(p_0 \| \tp) \,=\,n \times \sum_{j=1}^5 \left|\LP[j;p_0,\tp]\right|^2 = 1.489,~~\eeq with $p$-value $0.92$ under the chi-square null with df $5$. In other words, the estimated comparison density is indistinguishable from the flat uniform $d_0(u)=1$; hence the binomial distribution is completely acceptable for this data, and no further density sharpening is needed. \begin{rem} Note that in our analysis, the prime object of interest is the shape of the estimated $\whd_0(x)$ (because it addresses Tukey's concern about the practical utility of goodness-of-fit), not how big or small the $p$-value is. But if a data analyst is habituated to using a threshold $p$-value as a basis for decision making (not a good practice), then we recommend the `double parametric bootstrap' \citep{beran1988} to compute the $p$-value---admittedly a computationally demanding task. This adjusted $p$-value takes into account the fact that the null-parameters are not given (e.g., here the binomial proportion $\pi$); they are estimated from the data. \end{rem} \vspace{-.1em} \end{example} \begin{example}[Rutherford-Geiger polonium data] \label{ex:polo} \cite*{rutherford1910} presented experimental data on groups of alpha particles emitted by polonium, a radioactive element, in $1/8$-minute intervals. On the whole, $n = 2608$ time intervals were considered, in which $k$ ($k=0,1,\ldots,14$) decays were observed. The following table summarizes the data. \begin{table}[ht] \vskip.8em \caption{Observed frequencies of the number of alpha-particle decays from polonium ($k=12$, with zero observed intervals, is omitted).} \label{tab:RFord} \centering \renewcommand{\tabcolsep}{.27cm} \renewcommand{\arraystretch}{1.33} \begin{tabular}{|rrrrrrrrrrrrrrr|} \hline $k$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 13 & 14\\ \hline Frequency & 57 & 203 & 383 & 525 & 532 & 408 & 273 & 139 & 45 & 27 & 10 & 4 & 1 & 1 \\ \hline \end{tabular} \vspace{.6em} \end{table} In a note appended at the end of the original 1910 paper, Bateman showed that the theoretical distribution of the number of alpha particles observed in a small interval follows the Poisson law, which we select as our model-0. The estimated $\DS(p_0,m)$ model is given by \beq \label{eq:rf} \hp(x)\,=\,e^{-\la_0} \dfrac{\la_0^x}{x!} \,\Big [ 1 -0.03 T_2(x;F_0) -0.04 T_3(x;F_0) \Big ], ~~\text{with}~\la_0=3.88,\eeq which is displayed in Fig. \ref{fig:polo} in the Appendix. The model \eqref{eq:rf} indicates that there is a `gap' between the theoretically predicted Poisson model and the experimental result---second-order (under-dispersed) and third-order (less skewed) corrections are needed. To quantify the lack-of-fit, compute: \[ n \times \LP(p_0 \| \tp) \,=\,n \times \sum_{j\in \{2,3\}} \left|\LP[j;p_0,\tp]\right|^2 = 6.82,~~\text{with $p$-value $0.033$}. \] This is a borderline case, where it is important to consult subject matter specialists before choosing sides---scientific significance is as important as statistical significance.
\cite{hoaglin1980} came to a similar conclusion using an exploratory diagnostic tool called the ``Poissonness plot,'' shown in the Appendix Fig. \ref{fig:poloeda}. \end{example} \begin{example}[Sparrow data] This dataset consists of the numbers of sparrow nests found in one-hectare plots, the sample average being $\bar x=1.10$. \cite{zarbook} previously analyzed this dataset. We choose Poisson($1.10$) as our $p_0(x)$ for the $\DS(p_0,m)$ model. The second-order $\LP[2;p_0,\tp]=-0.328$ ($p$-value $=0.03$) turns out to be the only significant component, which indicates that the data exhibit less dispersion (due to the negative sign) than the postulated ${\rm Poisson}(1.10)$. Our finding is in agreement with \cite{gurtler2000}. Finally, return the $d$-modified under-dispersed Poisson model for the data: \[\hp(x)\,=\,e^{-\la_0} \dfrac{\la_0^x}{x!} \,\big [ 1 -0.33 T_2(x;F_0)\big ], ~~\text{with}~\la_0=1.10,\] which is displayed in Fig. \ref{fig:sparrow} of the Appendix. \end{example} \subsection{Relative Entropy and Model Uncertainty} \label{sec:rent} How to quantify the uncertainty of the chosen model $p_0(x)$? A general information-theoretic formula of model uncertainty is derived based on the relative entropy between the true (unknown) $p(x)$ and the hypothesized $p_0(x)$. Express the relative entropy (also called Kullback–Leibler divergence) as a functional of the maxent comparison density $d_{\teb}$: \beq \KLD(p\|p_0)\,=\,\sum_x p(x) \log \Big\{\dfrac{p(x)}{p_0(x)}\Big\}\,=\,\Ex_F\Big[ \log d_{\teb}\big(F_0(X); p_0, p\big)\Big]. \eeq Substituting $F_0(x)=u$ leads to the following important formula for the relative entropy in terms of LP-parameters: \bea \label{eq:LPrent} \KLD(p\|p_0)&=&\int_0^1 d(u;F_0,F) \log d(u;F_0,F) \dd u \nonumber \\ &=& \int_0^1 d(u;F_0,F) \Big\{ \sum_j \te_j S_j(u;F_0)\,-\, \Psi(\teb) \Big\} \dd u \nonumber \\ &=&\sum_j \te_j \LP_j\,-\,\Psi(\teb). \label{eq:klp} \eea The second equality follows from eq. \eqref{eq:dexp} and the last one from eq. \eqref{eq:lpeq}. Based on a random sample $X_1,\ldots,X_n$, a nonparametric estimate of the relative entropy is obtained by replacing the unknown LP-parameters in \eqref{eq:klp} with their sample estimates. \vskip.34em {\bf Statistical Inference}. The relative entropy-based inference procedure is now applied to a few real datasets in the context of model validation and uncertainty quantification. \vskip.1em $\bullet$ Estimation and standard error: For the earthquake data, we would like to quantify the uncertainty of $p_0={\rm NB}(\mu=19, \phi=12)$. The estimated value of $\KLD(p\|p_0)$ is $0.070 \pm 0.020$ (bootstrap standard error, based on $B=1000$), indicating serious lack-of-fit of the starting NB model---which matches our previous conclusion; see Fig. \ref{fig:earthq}. \vskip.2em $\bullet$ Testing: For the Spiegel family data, the estimated relative entropy we get is $0.0087$, quite small. Naturally, we perform (parametric bootstrap-based) testing to check if $H_0: \KLD(p\|p_0)=0$: generate $n$ samples from $p_0(x)$; compute $\widehat{\KLD}(p\|p_0)$; repeat, say, $1000$ times; return the $p$-value based on the bootstrap null-distribution. For this example, the $p$-value we get is $0.093$, which reaffirms that the binomial distribution explains the data fairly well. \subsection{Card Shuffling Problem} \label{sec:card} Consider the following question \citep{aldous1986}: How many times must a deck of cards be shuffled until it is close to random?
To check whether a deck of $52$ cards is uniformly shuffled we use the fixed-point statistic, which is defined as the number of cards in the same position after a random permutation. A large number of fixed points (`too many cards left untouched') is an indication that the deck is not well-mixed. \vskip.23em \textit{Theoretically-expected distribution}: One of the classical theorems in this direction is due to Pierre \cite{de1713essay}, who showed that the distribution of the number of fixed points under $H_0$ (random permutation of $\{1,2,\ldots,52\})$ is approximately $p_0={\rm Poisson}(1)$. \vskip.23em \textit{Data and notation}: Let $n$ denote the sample size and $k$ the number of shuffles. Then \texttt{CARD}$(k,n)$ stands for a dataset $X_1,X_2,\ldots, X_n$, where $X_i$ is the number of fixed points of a $k$-shuffled deck. By $\widetilde p_k$, we mean the sample distribution of the fixed-point statistic after $k$ random permutations. The goal is to find the minimum value of $k$ such that it is safe to accept $H_0: p_k=p_0$, where, we should recall, the null-distribution $p_0$ is ${\rm Poisson}(1)$. \vskip.23em \textit{Modeling via goodness-of-fit}: Fig. \ref{fig:card150} shows a \texttt{CARD}$(k,n)$ dataset with $k=150$ and $n=500$. There is a clear discrepancy between the observed probabilities $\tp_k$ and the theoretical $p_0$. The estimated $d$-sharpened Poisson$(1)$ is given below: \beq \label{eq:card150} \widehat p_k(x)= \dfrac{e^{-1}}{x\,!} \big[ 1+ 0.130 \,T_1(x;F_0)\big], ~~~~x=0,1,\ldots\eeq which shows that a `first-order perturbation' (location correction) is needed: $\LP[1;p_0,\tp_k]=0.130$ with $p$-value $0.003$. The positive sign of $\LP_1$ indicates that the mean of the fixed-points distribution with $k=150$ is larger than the postulated $\la_0=1$; more shuffling is needed to make the deck close to random. The shape of \eqref{eq:card150} is shown in Fig. \ref{fig:card150}. \vskip.3em \textit{New updated mean}. A curious reader might want to know precisely how large the mean of $p_k$ is compared to $1$. For a distribution $F \sim {\rm DS}(p_0,m)$, we can write an expression for the mean of $F$ ($\la_F$) in terms of the mean of $F_0$ ($\la_0$). In this case, we have \bea \vspace{-1em} \la_{k} ~=~\Ex_{F_k}[X]&=& \int x \,\big[ 1+ 0.130 \,T_1(x;F_0) \big] \dd F_0(x) \nonumber\\ &=&\int_0^1 Q_0(u) \big[ 1+ 0.130 \,S_1(u;F_0) \big] \dd u \nonumber\\ &=&\int_0^1 Q_0(u) \dd u + 0.130 \,\langle Q_0, S_1 \rangle_{\cL^2[0,1]} \nonumber\\ &=& 1 + 0.130 \times 0.9596 \approx 1.125. \eea \vspace{-1.5em} A few additional comments: \begin{figure}[ ] \centering \vspace{-.8em} \includegraphics[width=.55\linewidth,keepaspectratio,trim=2cm 1.65cm 2cm 1cm]{Figs/card150.png} \caption{Distribution of fixed points: black is empirical and blue is theoretical---a clear mismatch between the data and the theory. The rectified $\DS(p_0,m)$ model is displayed in red.}\label{fig:card150} \end{figure} $\bullet$ Thus far, we have verified that $k=150$ is not enough to produce a uniformly shuffled deck. However, we came to this conclusion based on a \textit{single} sample of size $n$. So, we generate (through computer simulation) several replications of \texttt{CARD}$(k,n)$ data with different $(k,n)$. \vspace{-.15em} $\bullet$ To reach a confident decision, we perform the experiment with $n=500$ and $k=150, 160, 170, 180, 190, 200$. The analysis was done based on $B=250$ datasets from each $(n,k)$-pair. The results are summarized in appendix Fig.
\ref{fig:card2}, which shows that $k=170$ shuffles is probably a safe bet to declare a deck to be fair---i.e., uniformly distributed. \vspace{-.15em} $\bullet$ \cite{diaconis2018bayGOF} describes a Bayesian approach to this problem. In contrast, we have offered a completely nonparametric solution that \textit{leverages} the additional knowledge of the expected $p_0(x)$ and provides more insights into the nature of the discrepancies. \subsection{Jaynes Dice Problem} The celebrated Jaynes' dice problem \citep{jaynes62} is as follows: Suppose a die has been tossed $N$ times (\textit{unknown}) and we are told only that the average of the face values was $4.5$---not $3.5$, as we might expect from a fair die. Given this information (and nothing else), the goal is to determine the probability assignment, i.e., the probability that the next throw will result in face $k$, for $k=1, \ldots,6$. \vskip.25em \textit{Solution of Jaynes' dice problem using the density sharpening principle}. The initial $p_0(x)$ is selected as the discrete uniform distribution $p_0(x)=1/6$ for $x=1,\ldots, 6$, which reflects the null hypothesis of a `fair' die. As we are given only the first-order location information (mean $=4.5$), we consider the following $\DS(p_0,m=1)$ model: \beq \label{eq:Jds} p(x) = \dfrac{1}{6}\Big\{ 1 + \LP[1;p_0,p]\,T_1(x;p_0) \Big\}. \eeq The coefficient $\LP[1;p_0,p]$ has to be estimated, and for that we also need to know the basis function $T_1(x;F_0)$. Step 1. To find an explicit formula for the discrete basis $T_1(x;F_0)$, apply \eqref{eq:T1} with $p_0(x)=1/6$ and the mid-distribution $F^{\rm mid}_0(x)=F_0(x)-\tfrac{1}{2}p_0(x)=x/6-1/12$: \beq T_1(x;F_0) \,=\, \sqrt{12}\, \dfrac{\big(F^{\rm mid}_0(x) - .5\big)}{\sqrt{1-\sum_{x=1}^6 (1/6)^3}}\,=\,\sqrt{\dfrac{12}{35}} \big( x - 3.5\big),~~\text{for}\,x=1,\ldots,6. \eeq Step 2. Compute $\LP[1;p_0,\tp]$ by applying formula \eqref{eq:eestlp}: \beq \label{eq:lpJ} \LP[1;p_0,\tp] \,=\, \sum_x \tp(x) T_1(x;p_0) \,= \,\sqrt{\dfrac{12}{35}} \big( \sum_x x \tp(x) - 3.5\big) \,=\, 0.586. \eeq The non-zero $\LP[1;p_0,\tp]$ indicates it was a loaded die. Step 3. Substitute the value of $\LP[1;p_0,\tp]$ in eq. \eqref{eq:Jds} to get the LP-Fourier $\DS(p_0,m=1)$ model as \beq \label{eq:Jdses} \widehat{p}(x) = \dfrac{1}{6}\Big\{ 1 + 0.586\,T_1(x;p_0) \Big\}, ~~x=1,\ldots,6. \eeq This is shown as the blue curve in Fig. \ref{fig:Jaynes}. Step 4. Finally, return the estimated exponential $\DS(p_0,m=1)$ probability estimates \beq \label{eq:maxentlp} \hhp(x)\,=\,\dfrac{1}{6}\exp\big\{ -0.193 + 0.634\,T_1(x;F_0) \big\},~~x=1,\ldots,6. \eeq This is shown as the red curve in Fig. \ref{fig:Jaynes}. \begin{figure}[ ] \centering \includegraphics[width=.55\linewidth,keepaspectratio,trim=2cm 1.65cm 2cm 1cm]{Figs/jaynes.png} \vskip.4em \caption{The black dots denote the starting null-model---a die with $6$ equally likely outcomes. The blue curve is our LP-Fourier density estimate given in eq. \eqref{eq:Jdses}. The exponential $\DS(p_0,m)$ density estimate, given in eq.
\eqref{eq:maxentlp}, is shown as the red line.}\label{fig:Jaynes} \end{figure} \begin{table}[h] \centering \vskip1em \caption{The estimated LP skew distribution for Jaynes' dice problem.} \vskip.5em \renewcommand{\tabcolsep}{.3cm} \renewcommand{\arraystretch}{1.77} \begin{tabular}{c|cccccc} \hline $p_0(x)$ &1/6&1/6&1/6&1/6&1/6&1/6\\ LP-Fourier $\hp(x)$ &.025&.080 &.140 &.195 &.250 &.310\\ LP-MaxEnt $\hhp(x)$ &0.054 &0.079 &0.114 &0.165&0.240&0.347 \\ Jaynes' answer &0.054 &0.079 &0.114 &0.165&0.240&0.347\\ \hline \end{tabular} \label{tab:Jaynes} \vskip.3em \end{table} \begin{rem} It is quite remarkable that our density-sharpening-based probability assignment \textit{exactly} matches Jaynes' maxent answer; see Table \ref{tab:Jaynes}. \vspace{-.25em} \end{rem} \subsection{Compressive Learning of Big Distributions} An important class of learning problems that has recently attracted researchers from various disciplines---including high-energy physics, neuroscience, theoretical computer science, and machine learning---can be viewed as modeling problems based on samples from a distribution over a large ordered domain. Let ${\bf p}=(p_1,\ldots,p_k)$ be a probability distribution over a very large domain of size $k$, where $p_i \ge 0, \sum_{i=1}^k p_i =1$. Let us look at a realistic example before discussing a general method. \vskip.45em \begin{example}[HEP data] This is an example from high-energy physics\footnote{High-energy physics is not the only discipline where this kind of very large and sparse histogram-like data appears. It frequently arises in many modern scientific domains: inter-spike interval data (neuronal firing patterns); relative abundance/intensity data (mass spectrometry data); DNA methylation and ChIP-seq data (genomics); pixel histogram data (astronomical images of stars, galaxies, etc.); histograms of activity intensities (biosignals from wearable sensor devices, mental illness studies by NIMH); photometric redshift data (photo-z spectra in Cosmology), just to name a few. There is outstanding interest in developing new computational tools that allow rapid and approximate statistical learning for big-histogram-like datasets.} (HEP), motivated by the PHYSTAT 2011 Banff bump-hunting challenge task \citep{junk2011banff}. In HEP counting experiments (e.g., in the Large Hadron Collider) one observes data in the following form: $n$ samples from an unknown $p(x)$ as event counts (number of collisions) at $k$ finely binned energy-cells, which we denote by \texttt{HEP}$(k,n)$. Fig. \ref{fig:PP} displays one such dataset with $n=10,000$ and $k=250$, with the postulated background model (dictated by the known Standard Model) as the discretized exponential distribution $f_0(x)=\lambda e^{-\lambda x}$ with $\lambda=1/20$: \beq \label{eq:nullHEP} p_0(i) \, \doteq \, \int_{x_i}^{x_{i+1}} f_0(x) \dd x,~~i=1,\ldots,k.\eeq Particle physicists are interested in discovering new particles that go \textit{beyond} the known Standard Model described by the background model $p_0$. We present a four-step algorithmic program to address the general problem of data-driven `Learning and Discovery.' \end{example} \vskip.3em {\bf Phase 1.} \textit{Testing}. The first question a scientist would like answered is whether the data are consistent with the background-only hypothesis, i.e., $H_0:p=p_0$. We perform the information-theoretic test described in Sec. \ref{sec:rent}. In particular, we choose the relative entropy-based formula given in \eqref{eq:LPrent} as our test statistic.
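The null distribution of this statistic is calibrated by parametric bootstrap; a bare-bones sketch of the calibration loop is given below (our illustration of the mechanics; \texttt{statistic} is a placeholder for whichever divergence is adopted, e.g., \eqref{eq:LPrent} or \eqref{eq:ugof}):
\begin{verbatim}
import numpy as np

def bootstrap_pvalue(x, p0, statistic, B=1000, seed=None):
    # Parametric bootstrap p-value for H0: p = p0.
    # x: observed sample over the support 0..len(p0)-1;
    # statistic: callable mapping (sample, p0) -> scalar divergence.
    rng = np.random.default_rng(seed)
    t_obs = statistic(x, p0)
    t_null = np.empty(B)
    for b in range(B):
        xb = rng.choice(len(p0), size=len(x), p=p0)   # simulate from p0
        t_null[b] = statistic(xb, p0)
    return (1 + np.sum(t_null >= t_obs)) / (B + 1)
\end{verbatim}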
The $p$-value obtained using parametric bootstrap (with $B=50,000$) is almost zero---strongly suggesting that the data contain some surprising new information in light of the known physics model $p_0$. But to figure out whether that information is actually useful for physicists, we have to dig deeper. \begin{figure}[ ] \vspace{-.6em} \centering \includegraphics[width=.478\linewidth,keepaspectratio,trim=1cm 1cm .1cm 1.2cm]{Figs/PPcd2.png}~~~~~ \includegraphics[width=.48\linewidth,keepaspectratio,trim=.35cm 1cm 1cm 1cm]{Figs/PP.png} \vskip.6em \caption{\texttt{HEP}$(k,n)$ data analysis. Left: The estimated comparison density $\whd(u;p_0,p)$, which indicates there may be a bump (around $u=0.64$) that went \textit{unnoticed} by the theoretical model. Right: The data generated from a mixture of $p_0$ and $\cN(125, 2)$ with mixing proportion $0.1$. The theory-driven model $p_0$ is the blue curve and the red curve is the $d$-sharpened $p_0$. The green triangle denotes the true peak at the mass position 125 GeV. The shaded area under $\dhat_0(u)$ denotes the amount of excess mass on top of the smooth background.}\label{fig:PP} \end{figure} \vskip.15em {\bf Phase 2.} \textit{Exploration and Discovery}. By definition, new discoveries can be made only by ``contrasting'' data with the existing model. This is what is achieved through $d(u;F_0,\wtF)$. The left panel of Fig. \ref{fig:PP} displays the estimated $\whd_0(u)$ for the HEP-data, which compactly encodes all the structure of the data that cannot be described by the assumed $p_0(x)$. \vskip.15em The exploratory graphical display of $\whd_0$ reveals a few noteworthy points. Firstly, the non-uniformity of $\whd_0$ makes us skeptical about $p_0$---this is completely in agreement with the Phase-1 analysis. Secondly and more importantly, the shape of $\whd_0$ provides a refined understanding of the \textit{nature} of the new physics that is hidden in the data, which, in this case, revealed itself as a bump \textit{above} the smooth background $p_0$. The word `above' is important because we are not interested in bumps in $p$ itself, but in $d_0$, which is the unanticipated `excess mass.' The hunt for new physics is thus a problem of bump hunting on $d(u;p_0,p)$, not on $p(x)$. For the HEP-data, we see a prominent bump in $d_0(u)$ around $u=0.64$, which (in the original data domain) corresponds to $Q_0(.64) \approx 125$ GeV; see the green triangle in Fig. \ref{fig:PP}. \begin{rem}[The discovery function] Since $d_0(x)$ encapsulates what's new in the data by separating the unknown from the known, we also call it the ``discovery function.'' It is the ``missing piece'' that glues together the known $p_0(x)$ and the unknown $p(x)$. It provides a graphical diagnostic tool that exposes the unexpected. These clues can guide domain-scientists to carry out more targeted follow-up studies. \end{rem} \vskip.3em {\bf Phase 3.} \textit{Inference and Excess Mass Problem}. Where is the interesting excess mass hiding? Is it a statistical fluke or something real? How substantial is the evidence? The real issue is: can we let the data confidently tell us where to look next for new particles? This would mark a complete paradigm shift, because traditionally the HEP searches (for locating excess events) have been guided by theoretical considerations only.
\begin{quote} \vspace{-.3em} `\textit{One may feel uneasy that we may therefore only find new processes if a theorist has been clever enough to propose the corresponding theory ahead of time.}' \begin{flushright} \vspace{-.24em} {\rm ---Glen \cite{cowan2007}} \end{flushright} \end{quote} \vspace{-.3em} Interested readers may also refer to \cite{lyons08} and the Nature news article by \cite{castelvecchi2018lhc} for a clear exposition of the scientific importance of these issues. \vspace{.24em} {\bf Statistical Discovery: Inference Algorithm}. The following are the main steps of the inference algorithm, whose results are summarized in Fig. \ref{fig:HEPxm}: Step 1. Parametric bootstrap: To measure the natural statistical variation of $\dhat_0(x)$ under the null hypothesis, simulate $n$ samples from $p_0(x)$ and estimate the comparison density. Repeat the whole process for a large number of iterations (say $B=10,000$ times) to get a bundle of comparison density curves, all of which fluctuate around the flat uniform line. Step 2. Pointwise $p$-value computation: At a fixed grid point $x \in [100, 250]$, we have the following values of the test statistic \[\Big\{ \whd_0^{(1)}(x), \ldots, \whd_0^{(B)}(x) \Big\} \] calculated from the $B$ bootstrap samples. Compute the bootstrap $p$-value at the point $x$ by \[{\rm Pval}(x)\,=\,\dfrac{1+ \#\big\{ j: \whd_0^{(j)}(x) \ge \whd_0(x)\big\} }{B+1 }.\] Fig. \ref{fig:HEPxm} draws the curve $-\log_{10}({\rm Pval}(x))$ as a function of $x$. The $5\sigma$ discovery region $(121.5, 129.5)$ is highlighted in yellow, and it includes the true excess-mass point $125$ GeV. This is how modern nonparametric modeling based on the `density sharpening' principle can convincingly guide researchers on \textit{where} to look for evidence of a deeper theory of physics. \begin{figure}[ ] \centering \includegraphics[width=.48\linewidth,keepaspectratio,trim=2.5cm 1cm 2.5cm 1cm]{Figs/HEP-dis.png} \vskip2em \caption{ The $5\sigma$ discovery interval $(121.5,129.5)$ correctly captures the true excess-mass point indicated by the green triangle.}\label{fig:HEPxm} \end{figure} \vskip.3em {\bf Phase 4.} \textit{Sharpen Scientific-Model}. Finally, the goal is to sharpen the initial scientific model $p_0(x)$ to achieve a more precise description of what is loosely known or suspected. The estimated $\DS(p_0,m)$ model sharpens the parametric null \eqref{eq:nullHEP} to provide a nonparametrically-adjusted, parsimonious model: \beq \label{eq:dshep} \hp(x)\,=\,p_0(x)\Big[ 1 +\sum_{j \in \mathcal{J}_5} \LP[j;p_0,\tp]\,T_j(x;F_0)\Big], \eeq where the active set $\mathcal{J}_5=\{2,3,5,7,8\}$ along with the LP-coefficients are given in Table \ref{tab:HEP}. \begin{table}[ht] \vskip.4em \caption{The selected non-zero LP-coefficients} \label{tab:HEP} \centering \renewcommand{\tabcolsep}{.4cm} \renewcommand{\arraystretch}{1.33} \begin{tabular}{|l |ccc cc|} \hline $\mathcal{J}_5$ & 2& 3& 5& 7 &8\\ \hline $\widehat{\LP}_j$ & -.097 & -.090& 0.117& -.095 & -.060 \\ \hline \end{tabular} \end{table} \vspace{-.3em} \begin{rem} A few remarks: ~1. The part in the square brackets of \eqref{eq:dshep} shows \textit{how} to change the prior scientific model $p_0(x)$ to make it consistent with the data. Knowing the \textit{nature} of the deficiency of the assumed model is an important step towards data-driven discovery. As George \cite{box2001dis} said: ``\textit{discovery usually means learning how to change the model}.'' ~2.
LP-parameterization requires only a $5$-dimensional sufficient statistic to approximately capture the shape of the distribution! The ``compressiveness'' of the LP-transformation---the ability to extract a low-dimensional representation---makes it less data-hungry, as demonstrated in the next section. ~3. Our model \eqref{eq:dshep} is a `hybrid' between a theory-driven and a data-driven model, which decouples the overall density into two components: the expected $p_0(x)$ and the unexpected $d_0(x)$. \end{rem} \begin{rem}[An Appeal to Physicists: Hypothesis Testing $\neq$ Discovery Science] Classical statistical inference puts too much emphasis on testing, $p$-values, standard errors, confidence intervals, etc. This ideology is reflected in the practice of high-energy physicists---which entirely revolves around the antique tools of hypothesis testing, likelihood ratios, and $p$-values. It's time to break the shackles of outdated data analysis technology that starts with hypothesis testing and ends with a $p$-value. George \cite{box2001dis} expressed a similar sentiment, arguing that the reason why engineering and the physical sciences rarely use statistics is: ``\textit{Much of what we have been doing is adequate for testing but not adequate for discovery.}'' \vskip.13em In this section my purpose has been to introduce some modern statistical tools and concepts that can help scientists with their everyday tasks of discovery and deeper exploration of data. After all, one of the main goals of data analysis is to sharpen the scientists' mental model by revealing the unexpected---a continuous cycle of knowledge refinement: \vskip2.5em \begin{center} \begin{figure} \begin{tikzpicture}[node distance =4cm, auto] \node [block] (x1) {Theory}; \node [block, right of =x1] (x2) {Measurement}; \node [block, right of =x2](x3){Discovery $d_0(x)$}; \node [block, right of =x3](x4){Better theory}; \path [line] ($(x1.0)+(.1cm,0cm)$)--($(x2.180)+(-.1cm,0cm)$); \path [line] ($(x2.0)+(.1cm,0cm)$)--($(x3.180)+(-.1cm,0cm)$); \path [line] ($(x3.0)+(.1cm,0cm)$)--($(x4.180)+(-.1cm,0cm)$); \path[line] ($(x4.south)+(0,-.1)$) -- +(0, -2em) -| ($(x2.south)+(0,-.1)$); \end{tikzpicture} \vskip1.5em \caption{Continuous learning by iterative model refinement: It develops increasingly `better' theory that explains new phenomena by broadening the scope of the previous theory.} \end{figure} \vspace{-2em} \end{center} \end{rem} \vspace{-.5em} \subsection{Data-efficient Statistical Learning}\label{sec:DEL} We are interested in the following statistical learning problem: Given a small handful of samples $n \ll k$ from a big probability distribution of size $k$, how can one perform time- and storage-efficient learning? When designing such algorithms we have to keep in mind that they must be (i) data-efficient: able to learn from limited sample data $n \ll k$; and (ii) statistically powerful: able to detect ``small'' deviations. \vskip.3em Classical nonparametric methods characterize big probability distributions using high-dimensional sufficient statistics based on histogram counts $\{N_j\}_{j=1}^k$, which obviously require a very large sample for efficient learning. And as the required sample size increases, the algorithm's running time, which scales with $n$, slows down accordingly. Hence, most of the `legacy' statistical algorithms (e.g., Pearson's chi-square, the Freeman-Tukey statistic, etc.) become unusable on large-scale discrete data, as `the minimum number of data points required to obtain an acceptable answer is too large to be practical.'
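To make the contrast concrete, the following schematic sketch shows how the compressive \texttt{LPgof} recipe \eqref{eq:ugof} aggregates only a few `significant' LP-components instead of all $k-1$ histogram degrees of freedom (our illustration; the Gram--Schmidt construction is one convenient, though not numerically optimal, way to build the basis for large supports):
\begin{verbatim}
import numpy as np
from scipy import stats

def lp_gof(counts, p0, m=8):
    # counts: observed frequencies over the support; p0: null probabilities;
    # m: number of LP-components to compute (m << len(p0))
    n = counts.sum()
    pt = counts / n
    x = np.arange(len(p0), dtype=float)
    basis = []
    for deg in range(m + 1):                  # Gram-Schmidt orthonormal basis
        v = x ** deg
        for t in basis:
            v = v - np.sum(p0 * v * t) * t
        basis.append(v / np.sqrt(np.sum(p0 * v * v)))
    LP = np.array(basis[1:]) @ pt             # first m LP-coefficients
    keep = np.abs(LP) > 2 / np.sqrt(n)        # retain 'significant' ones only
    df = max(int(keep.sum()), 1)
    stat = n * np.sum(LP[keep] ** 2)          # compressive chi-square
    return stat, df, stats.chi2.sf(stat, df)
\end{verbatim}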
Indeed, detecting sparse structure in a data-efficient manner from big distributions, such as from \texttt{HEP}($k,n$), is a challenging problem and requires a ``new breed'' of computational algorithms. There has been some impressive progress on this front by the Theoretical Computer Science community; see Appendix \ref{app:subl} for a related discussion on sub-linear algorithms for big data. \begin{figure}[ ] \vspace{-1em} \centering \includegraphics[width=.5\linewidth,keepaspectratio,trim=2.25cm 1cm 2.25cm 1.25cm]{Figs/HEPower.png} \vskip.7em \caption{\texttt{HEP}$(k=500,n)$ data for varying sample size $n$: event counts (number of collisions) at $k = 500$ finely binned energy-cells. \texttt{LPgof} (shown in red) requires 50\% less data to reach the correct conclusion with power 1.}\label{fig:HEPower} \end{figure} \vskip.3em {\bf HEP Data Example}. We generate \texttt{HEP}($k=500,n$) data for varying sample size $n$. We used $350$ null simulated data sets to estimate the 95\% rejection cutoffs for all methods at the significance level $0.05$, and used $350$ simulated data sets from the alternative to approximate the power, as displayed in Fig. \ref{fig:HEPower}. We compared our \texttt{LPgof} method \eqref{eq:ugof} with two state-of-the-art algorithms (proposed by theoretical computer scientists): (i) \cite{valiant2017automatic} and (ii) \cite{acharya2015optimal}---interestingly, this exact test had been proposed earlier by \cite{zelterman1987} and is known in the Statistics literature as Zelterman's D-statistic\footnote{`Those who ignore Statistics are condemned to reinvent it'---Brad Efron.}. Conclusion: \texttt{LPgof} requires {\bf 50\% less} data to reach the correct conclusion with power 1. \begin{figure}[ ] \vspace{-.7em} \centering \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power1.png}~~~~~~~~ \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power2.png} \\[1em] \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power3.png}~~~~~~~~ \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power4.png} \\[1em] \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power5.png}~~~~~~~~ \includegraphics[height=.26\textheight,width=\textwidth,keepaspectratio,trim=1cm 1cm 1cm 1cm]{Figs/power6.png} \\[1em] \caption{The red power curve is the \texttt{LPgof} method with $m=8$. First row: the null distribution is $p_0=\big(1/4,\,1/4,\,\tfrac{1}{2(k-2)},\ldots, \tfrac{1}{2(k-2)}\big)$; the alternatives are $p_{\delta}=\big(1/4+\de,\,1/4-\de,\,\tfrac{1}{2(k-2)},\ldots, \tfrac{1}{2(k-2)}\big)$, with $\de=1/8,1/10$. Second row: the null distribution is $p_0=U_{[k]}$; the alternatives are $.95U_{[k]} + .05{\rm Zipf}(\al)$, with $\al=1.25,1.15$. Third row: the null distribution is $p_0=U_{[k]}$; the alternative probabilities are computed using the increments $\Delta D(j/k; \mu,\pi)$, $j=1,\ldots,k$, where $D(u)=F(\Phi^{-1}(u);X)$, and $F(x;X)=(1-\pi)\Phi(x) + \pi \Phi(x-\mu)$ for $\mu=-1.5$ and $\pi=\{.2,.1\}$. In all cases the dimension is $k=5000$, and the sample sizes are sublinear in the dimension, i.e., $n \asymp \sqrt{k}$.} \label{fig:power1} \end{figure} {\bf Empirical Power Comparisons}. We compare the power of the different methods under six different settings, as described in the captions of Figs. \ref{fig:HEPower} and \ref{fig:power1}, and will not repeat the details here.
The overall conclusion is pretty clear: \texttt{LPgof} emerged as the most powerful data-efficient test---it can detect new discoveries \textit{quickly and more reliably}. This level of performance is largely attributable to the good sparsity (energy compaction) property of ``LP-domain data analysis.'' The specially-designed discrete LP-transformation basis provides an efficient coordinate system that requires far fewer parameters than the size of the distribution to capture the essential information. For additional examples see Figs. \ref{fig:powerh0} and \ref{fig:power2} of the Appendix. \subsection{Discovery-source Separation Problem} ~~``\textit{At this scale it is not possible to keep all the data (the LHC produces up to a Petabyte of data per second) and it is essential to have efficient data-filtering mechanisms so that we can separate the wheat from the chaff.}'' \begin{flushright} \vspace{-1.35em} {\rm --- Bob Jones, Project Leader at CERN.}\end{flushright} Imagine a typical scenario where massive experimental data are distributed across thousands of different computation units (say, servers). Each data source has its own `personal' distribution over a large domain of size $k=1000$, from which we only have access to $n=1000$ samples. Thus, the observed data can be summarized as a collection of highly noisy and sparse empirical distributions $\{\tp_\ell(x)\}$. Suppose we are given $900$ such sparsely-sampled empirical distributions from $p_0=U_{[k]}$, $50$ from $.9U_{[k]} + .1\, {\rm Zipf}(1.25)$, and $50$ from the mixture $.9U_{[k]} + .1\, \Delta{\rm Beta}(50, 50)$, where $\Delta{\rm Beta}(50, 50)$ denotes the increments of the cdf of ${\rm Beta}(50,50)$. The shapes of these three source-distributions are shown in the left panel of Fig. \ref{fig:dss}. \vskip.35em {\bf Discovery-source Separation (DSS)}. The goal here is to design algorithms that can quickly filter out the `interesting' data sources, i.e., those having potential for a new physics discovery. A more ambitious goal would be to classify different data sources based on the `nature' of their discoverability. \vskip.35em {\bf Algorithm}. It consists of two main steps: \vskip.2em Step 1. LP-Fourier Transform Matrix: Define the LP-transform matrix $L \in \cR^{g\times m}$ by \beq L[\ell,j]=\LP[j;p_0,\tp_\ell] = \int T_j(x;F_0) \dd \wtF_\ell, \eeq for $\ell=1,\ldots,g=1000$ and $j=1,\ldots, m=10$. \vskip.25em Step 2. DSS-plot: Perform the singular value decomposition (SVD) $L=U\Lambda V^{T} = \sum_r \la_r u_r v_r^{T}$, where $U=(u_1,\ldots,u_m)$ and $V=(v_1,\ldots,v_m)$ are the left and right singular vector matrices, and $\Lambda={\rm diag}(\la_1,\ldots,\la_m)$ with $\la_1 \ge \cdots \ge \la_m \ge 0$. The DSS-plot is the two-dimensional graph in the right panel of Fig.~\ref{fig:dss}, which is formed using the points $(\la_1 u_{1\ell}, \la_2 u_{2\ell})$, for $\ell=1,\ldots,g$, obtained by taking the top two dominant left singular vectors; here $u_{r\ell}$ denotes the $\ell$th element of $u_r$. Different data sources are shown as points, capturing the heterogeneity in terms of discoverability. \begin{figure}[ ] \centering \includegraphics[width=.46\linewidth,keepaspectratio,trim=2.25cm 1cm 1cm 1cm]{Figs/DSS-p.png}~~~~ \includegraphics[width=.46\linewidth,keepaspectratio,trim=1cm 1cm 2.25cm 1cm]{Figs/DSS-plot2.png} \vskip1em \caption{Discovery-source Separation (DSS) plot.
Each data-batch is \texttt{HEP}$(k=1000,n=1000)$ with a different shape; among them, $900$ were generated from $U_{[k]}$ (grey dotted line), $50$ from $.9U_{[k]} + .1{\rm Zipf}(1.15)$ (red dotted line), and $50$ from a mixture of $U_{[k]}$ and increments of ${\rm Beta}(50,50)$ with mixing proportion $.1$ (blue dotted line). In the right panel, we show the DSS-plot. The cluster of points around the origin indicates the null data sources. The sources are classified into three groups based on the nature of their discoverability.}\label{fig:dss} \end{figure} \vskip.4em \textit{Interpretation}: The DSS-plot displays and compares a large number of (empirical) distributions $\tp_\ell$ by embedding them in 2D Euclidean space. The cluster of points (data sources) near the origin are the ones with the background distribution. The squared distance from the origin \[\text{discovery-index}_\ell=(\la_1 u_{1\ell})^2\,+\, (\la_2 u_{2\ell})^2,~~~~\ell=1,\ldots,g=1000,~~\] can be interpreted as the ``degree of newness'' of that dataset. The DSS-plot successfully separates the different sources based on the statistical nature of their signal components. Researchers (like Bob Jones) can use this tool to \emph{quickly identify} interesting data sets for careful investigation. \section{Discussion} \label{sec:diss} Compared to the rich and well-developed tools for continuous data modeling problems, many companion discrete data modeling problems are still quite challenging. This was the main motivation for undertaking this research, which aimed at developing a widely-applicable general theory of discrete data modeling. This paper makes three broad contributions to the field of nonparametric statistical modeling: \vskip.5em ~~1) Model-Sharpening Principle: We have introduced a new principle of statistical model building, called `Density-Sharpening,' which performs three tasks in one step: model verification, model exploration, and model rectification. This was the guiding principle behind our systematic theory of discrete data modeling. As future work, we plan to explore how the model-sharpening principle can help in developing `auto-adaptable' machine-learning models. \vskip.5em ~~2) $\DS(p_0,m)$ Model: We have introduced a new class of nonparametric discrete probability models, together with robust estimation techniques, that can leverage researchers' vague (possibly misspecified) prior knowledge. It comes with novel exploratory graphical methods for `discovering' new knowledge from the data that investigators neither knew nor expected. \vskip.5em ~~3) Unified Statistical Framework: Our modern nonparametric treatment of the analysis of discrete data is shown to be rich enough to subsume a large class of statistical learning methods as special cases. This inclusivity of the general theory has some serious implications: Firstly, from a theoretical angle, it deepens our understanding of the ties between different statistical methods. Secondly, it simplifies practice by enabling unified algorithms with expanded capabilities. And finally, it is also expected to be beneficial for modernizing the Statistics curriculum in a way that is applicable to small- as well as large-scale problems. \section*{Supplementary Appendix} \label{SM} The supplementary material includes additional theoretical and algorithmic details. \bibliographystyle{Chicago}
\section{Introduction} The interference channel (IFC) models the communication scenario in which a number of transmitters wish to send independent messages simultaneously to their respective receivers using the same channel, while causing interference to each other. It is one of the most important fundamental channel setups in wireless communication systems, especially for contemporary mobile communication networks with almost universal frequency reuse and ever-increasing node densities. The information-theoretic study of the IFC has a long history \cite{1055812}. The largest known achievable rate region for the IFC is the Han-Kobayashi-type region \cite{1056307}, which is achieved by splitting the transmit signal of each user into common and private messages, while each receiver decodes its own designated private message together with the common messages. However, such a capacity-approaching technique requires signal-level encoding/decoding cooperation among the users, which is challenging to implement in practice. Alternatively, a low-complexity approach is to perform signal detection at the receivers by treating the interference as noise, while enabling transmitter-side cooperation via coordinated resource allocation strategies to enhance the achievable rate region of the IFC \cite{1237413,5895091}.\\ \hspace*{\parindent}The Pareto boundary plays an important role in characterizing the achievable rate region of the IFC; it consists of all the rate-tuples at which it is impossible to increase one user's rate without simultaneously decreasing another's. One method for characterizing the Pareto boundary of the IFC is by solving a sequence of weighted sum-rate maximization (WSRMax) problems, which are usually non-convex \cite{5895091}. Alternatively, by using the concept of {\it rate profile}, finding a point on the Pareto boundary of the IFC usually corresponds to a weighted-minimum-rate maximization problem, which is usually convex and hence more efficient to solve than the WSRMax problems \cite{5504193}. It has been found that, compared with the commonly assumed {\it proper} Gaussian signaling, i.e., complex Gaussian signals whose in-phase and quadrature-phase components are independent and identically distributed (i.i.d.), the achievable rate region of the IFC with interference treated as noise can be further improved by using {\it improper} Gaussian signaling \cite{6489066}.\\ \hspace*{\parindent}Recently, a new type of electromagnetic surface structure, called intelligent reflecting surface (IRS) or reconfigurable metasurface, has emerged as a promising component for wireless communications \cite{Smith2017Analysis,8910627}. An IRS is usually composed of a large number of integrated electronic circuits that can be programmed to manipulate the incoming electromagnetic wave in a customizable manner, in which each unit of the IRS is implemented by reflective arrays that use varactor diodes whose resonant frequency is electronically controlled. In wireless communication systems, the IRS can be regarded as a cost-effective implementation of passive phase shifters, which has the capability of intelligent signal reflection without any power amplifier.
Thanks to its low hardware footprint, an IRS can be flexibly deployed on room ceilings, building facades, and aerial platforms \cite{LuICC2020}, or even be integrated into smart wearable devices, which brings a new design degree of freedom (DoF) to deliberately manipulate the wireless communication channels.\\ \hspace*{\parindent} Extensive research efforts on IRS-aided wireless communications have been devoted to jointly optimizing the reflective beamforming at the IRS and the transmit beamforming at the multi-antenna access point \cite{9013288,8811733}. Furthermore, IRS-aided wireless communications in various setups have also been studied, such as orthogonal frequency division multiplexing \cite{9014204}, non-orthogonal multiple access \cite{9000593}, and physical layer security \cite{8743496}.\\ \hspace*{\parindent}To reap the full benefits of IRS-aided wireless communications, in this paper, we consider the IRS-aided IFC, where each transmitter-receiver pair is aided by one IRS, as shown in Fig. 1. Intuitively, the introduction of the IRS brings a new design DoF for both desired signal enhancement and interference suppression. To this end, we focus on the characterization of the Pareto boundary of the achievable rate region for the IRS-aided multiple-input single-output (MISO) IFC with interference treated as noise. Specifically, based on the concept of {\it rate profile} \cite{5504193}, we formulate an optimization problem for Pareto boundary characterization by jointly designing the active transmit beamforming at the transmitters and the passive reflective beamforming at the IRSs. As the problem is non-convex, an efficient algorithm is proposed to find a high-quality suboptimal solution via the block coordinate descent (BCD) method, where the transmit and reflective beamforming vectors are optimized in an alternating manner. In particular, with fixed reflective beamforming, the optimal transmit beamforming can be obtained via a second-order cone program (SOCP). On the other hand, with fixed transmit beamforming, the reflective beamforming vector can be updated via the semi-definite relaxation (SDR) approach. Numerical results show that with the proposed algorithm, the achievable rate region of the IRS-aided IFC is much larger than that without IRS.\\ \begin{figure}[!t] \centering \includegraphics[width=3.2in]{interference.pdf} \caption{MISO interference channel aided by IRS.} \label{fig1} \end{figure} \section{System Model and Problem Formulation} As shown in Fig. 1, we consider an IRS-aided MISO IFC with $K$ transmitter-receiver pairs, each of which is aided by one IRS. Each transmitter is assumed to have $M$ antennas and each IRS has $N$ passive reflecting elements. Denote the direct MISO channel from transmitter $j$ to receiver $k$ as ${\bf h}_{kj}\in {\mathbb C}^{M\times 1}$, where $j, k\in\{1,\cdots,K\}$. Further denote the multiple-input multiple-output (MIMO) channel from transmitter $j$ to IRS $i$ as ${\bf G}_{ij}\in {\mathbb C}^{N\times M}$, and the MISO channel from IRS $i$ to receiver $k$ as ${\bf f}_{ki}\in {\mathbb C}^{N \times 1}$.\\ \hspace*{\parindent}Let ${\bf w}_j\in {\mathbb C}^{M\times 1}$ denote the transmit beamforming vector of transmitter $j\in \{1,\ldots,K\}$, and let $P_j$ be the maximum power of transmitter $j$; we then have $\|{\bf w}_j\|^2\leq P_j$. We assume that each reflective element of the IRS is able to dynamically manipulate the phase of the incoming electromagnetic waves, while keeping their magnitudes unchanged.
For IRS $i \in \{1,\cdots, K\}$, denote $\phi_{in} \in [0, 2\pi)$ as the phase shift introduced by the $n$th element, $n\in \{1,\ldots,N\}$. Further, define the reflective phase shift matrix as the diagonal matrix ${\bf \Phi}_i={\rm diag}({\bf v}_i)$, where ${\bf v}_i=(e^{j\phi_{i1}}, \cdots,e^{j\phi_{iN}})^{\rm T}\in {\mathbb C}^{N\times 1}$. Thus, the baseband complex signal received at receiver $k$ can be written as \begin{align*} y_k=\sum\limits_{j=1}^K{\bf h}_{kj}^{\rm H}{\bf w}_js_j+\sum\limits_{j=1}^K\sum\limits_{i=1}^K{\bf f}_{ki}^{\rm H}{\bf \Phi}_i{\bf G}_{ij}{\bf w}_js_j+z_k \end{align*} \begin{align} &=\underbrace {\left({\bf h}_{kk}^{\rm H}+\sum\limits_{i=1}^K{\bf f}_{ki}^{\rm H}{\bf \Phi}_i{\bf G}_{ik}\right){\bf w}_ks_k}_{\rm {desired \mspace{3mu} signals}}\notag\\ &+\underbrace{\sum\limits_{j \neq k}\left({\bf h}_{kj}^{\rm H}+\sum\limits_{i=1}^K{\bf f}_{ki}^{\rm H}{\bf \Phi}_i{\bf G}_{ij}\right){\bf w}_js_j}_{\rm {interference}}+z_k\,,\forall k\,, \end{align} where $s_k$ is the information-bearing symbol for the $k$th user, and $z_k$ denotes the additive white Gaussian noise (AWGN) at receiver $k$, which is assumed to be circularly symmetric complex Gaussian (CSCG) distributed with zero mean and power $\sigma^2$, i.e., $z_k\sim {\mathbb {CN}}(0,\sigma^2)$. Note that we have ignored the wireless links across different IRSs since, intuitively, the IRSs would only direct their incoming signals towards the receivers rather than other IRSs. It is observed from (1) that both the desired signal and the interference at each receiver arrive not only from its own transmitter and IRS, but also from other IRSs. This provides additional DoF to enhance the achievable rate region as compared to the conventional MISO IFC without IRS. \\ \hspace*{\parindent}With the interference treated as noise at each receiver, the resulting signal-to-interference-plus-noise ratio (SINR) at receiver $k$ can be written as \begin{align} {\rm SINR}_k=\frac{\left|({\bf h}_{kk}^{\rm H}+\sum\limits_{i=1}^K{\bf f}_{ki}^{\rm H}{\bf \Phi}_i{\bf G}_{ik}){\bf w}_k\right|^2}{\sum\limits_{j \neq k}\left| ({\bf h}_{kj}^{\rm H}+\sum\limits_{i=1}^K{\bf f}_{ki}^{\rm H}{\bf \Phi}_i{\bf G}_{ij}){\bf w}_j\right|^2+\sigma^2}\,. \end{align} As ${\bf \Phi}_i$ is a diagonal matrix with diagonal elements given by the vector ${\bf v}_i$, it is not difficult to see that the cascaded link from transmitter $j$ to receiver $k$ through IRS $i$ can be written as \begin{align} {\bf f}_{ki}^{\rm H}{\bf \Phi}_i{\bf G}_{ij}={\bf v}_i^{\rm H}{\boldsymbol \Gamma}_{kij}\,, \end{align} where we have defined the effective channel ${\boldsymbol \Gamma}_{kij}={\rm diag}({\bf f}_{ki}^{\rm H}){\bf G}_{ij}\in {\mathbb C}^{N\times M}$. By substituting (3) into (2), and assuming CSCG signaling, the achievable rate of receiver $k$ is given by: \begin{align}\label{rate1} R_k=\log_2\left(1+\frac{\left|({\bf h}_{kk}^{\rm H}+\sum\limits_{i=1}^K{\bf v}_i^{\rm H}{\boldsymbol \Gamma}_{kik}){\bf w}_k\right|^2}{\sum\limits_{j\neq k}\left|({\bf h}_{kj}^{\rm H}+\sum\limits_{i=1}^K{\bf v}_i^{\rm H}{\boldsymbol \Gamma}_{kij}){\bf w}_j\right|^2+\sigma^2}\right)\,. \end{align} The achievable rate region for the IRS-aided IFC is the set of all rate-tuples $(R_1,\cdots, R_K)$ for the $K$ user pairs that can be simultaneously achieved, which can be written as \begin{align}\label{achievable rate} {\mathcal R}=\mathop\bigcup_{\substack{|v_{in}|=1,||{\bf w}_j||^2\le P_j\\ i,j\in\{1,\ldots,K\}, n\in \{1,\ldots,N\}}}\{(R_1, R_2,\cdots,R_K)\}\,.
\end{align} The outer boundary of $\cal R$ is called the Pareto boundary, which consists of all the rate-tuples at which it is impossible to increase one user's rate without simultaneously decreasing that of other users \cite{6123786}. We are able to characterize the Pareto optimal rate-tuples based on the concept of rate profile \cite{5504193}. Specifically, any rate-tuple on the Pareto boundary of $\cal R$ can be obtained by solving the following optimization problem with a given vector ${\boldsymbol \zeta}=(\zeta_1,\cdots,\zeta_K)$: \begin{subequations}\label{orig_formulation} \begin{align} \mathop{\max}_{R,\{{\bf v}_i\}_{i=1}^K,\{{\bf w}_j\}_{j=1}^K} &\quad R\\ {\rm s.t.} \quad~~~ & R_k\ge \zeta_kR\,, \forall k\,,\\ & |v_{in}|=1\,, \forall i,n\,,\\ & ||{\bf w}_j||^2\le P_j, \forall j\,, \end{align} \end{subequations} where $\zeta_k\geq 0$ denotes the target rate ratio between the achievable rate of receiver $k$ and the sum rate $R$, with $\sum_{k=1}^K\zeta_k=1$. Therefore, with different $\boldsymbol \zeta$, the complete Pareto boundary of the achievable rate region $\cal R$ can be characterized. Note that in the absence of the IRSs, the optimization problem (\ref{orig_formulation}) degenerates to the transmit beamforming optimization for the conventional MISO-IFC, which can be optimally solved via SOCP. However, the introduction of the IRSs renders problem (\ref{orig_formulation}) more challenging, since the transmit and reflective beamforming vectors are coupled, which makes (\ref{orig_formulation}) non-convex and difficult to solve optimally. In the following, we propose an efficient algorithm to solve (\ref{orig_formulation}) based on the BCD technique, for which the transmit and reflective beamforming vectors are updated alternately. \section{Proposed Solution} To gain some insights, we first consider the special case of (\ref{orig_formulation}) with $\zeta_k=1$ for some $k$ and $\zeta_j=0, \forall j\neq k$. This corresponds to the single-user maximum rate point for user $k$. \subsection{Single-user Maximum Rate Point} With $\zeta_j=0, \forall j\neq k$, it is obvious that we should have ${\bf w}_j={\bf 0}$, $\forall j\neq k$, and $\|{\bf w}_k\|^2=P_k$. As a result, problem (\ref{orig_formulation}) reduces to \begin{align}\label{single formuala} \mathop{\max}_{\{{\bf v}_i\}_{i=1}^K,{\bf w}_k} & {\rm log}_2\left(1+\frac{1}{\sigma^2}\left|({\bf h}_{kk}^{\rm H}+\sum\limits_{i=1}^K{\bf v}_i^{\rm H}{\boldsymbol \Gamma}_{kik}){\bf w}_k\right|^2\right)\notag\\ {\rm s.t.} \quad & |v_{in}|=1\,, \forall i,n\,,\notag\\ & ||{\bf w}_k||^2= P_k. \end{align} It is not difficult to see that for any given reflective beamforming vectors $\{{\bf v}_i\}_{i=1}^K$, the optimal transmit beamforming vector ${\bf w}_k^\star$ is given by maximum ratio transmission (MRT) beamforming with respect to the effective channel formed by a superposition of the direct channel and the reflective channels, which is given by \begin{align}\label{opti trans} {\bf w}_k^\star=\sqrt{P_k}\frac{{\bf h}_{kk}+\sum_{i=1}^K{\boldsymbol \Gamma}_{kik}^{\rm H}{\bf v}_i}{\left \|{\bf h}_{kk}+\sum_{i=1}^K{\boldsymbol \Gamma}_{kik}^{\rm H}{\bf v}_i\right\|}\,. \end{align} With (\ref{opti trans}), the resulting SNR for user $k$ is only a function of the reflective beamforming vectors $\{{\bf v}_i\}_{i=1}^K$, which is \begin{align}\label{snr} {\rm SNR}_k=\frac{P_k}{\sigma^2}\left\|{\bf h}_{kk}+\sum_{i=1}^K{\boldsymbol \Gamma}_{kik}^{\rm H}{\bf v}_i \right\|^2=\frac{P_k}{\sigma^2}f(\{{\bf v}_i\})\,.
\end{align} \hspace*{\parindent}As a result, problem (\ref{single formuala}) reduces to the SNR maximization problem via reflective beamforming optimization: \begin{align}\label{formu_snr} \mathop{\max}_{\{{\bf v}_i\}_{i=1}^K} \mspace{3mu}{\rm SNR}_k, \quad {\rm s.t.} \mspace{3mu}|v_{in}|=1\,, \forall i,n\,. \end{align} \hspace*{\parindent}The non-convex unit-magnitude constraints on the elements of the reflective beamforming vectors in (\ref{formu_snr}) make it difficult to find the optimal solution efficiently. In the following, we propose a heuristic algorithm to find an effective locally optimal solution. Specifically, we aim to extract the contribution of each individual element $v_{in}$ to ${\rm SNR}_k$ in (\ref{snr}), with all other elements fixed, and update each element alternately via the classic coordinate ascent method \cite{7389996}. To proceed, let ${\boldsymbol \Gamma}_{kik}^{\rm H}=[{\bf r}_1,{\bf r}_2,\cdots,{\bf r}_{N}]$, where ${\bf r}_n\in {\mathbb C}^{M\times 1}$ is the $n$th column of matrix ${\boldsymbol \Gamma}_{kik}^{\rm H}$. Then, $f(\{{\bf v}_i\})$ in (\ref{snr}) can be expressed as \begin{align}\label{maximizer} f(\{{\bf v}_i\})&=\|\underbrace{{\bf h}_{kk}+\sum_{i'\neq i}{\boldsymbol \Gamma}_{ki'k}^{\rm H}{\bf v}_{i'}}_{\triangleq {\bf g}_i}+ {\boldsymbol \Gamma}_{kik}^{\rm H}{\bf v}_i\|^2\notag\\ &=\|\underbrace{{\bf g}_i+\sum_{n'\neq n}{\bf r}_{n'}e^{j\phi_{in'}}}_{\triangleq {\bf \bar g}_{in}}+{\bf r}_ne^{j\phi_{in}}\|^2=\|{\bf \bar g}_{in}+{\bf r}_ne^{j\phi_{in}}\|^2\notag\\ &=\|{\bf \bar g}_{in}\|^2+\|{\bf r}_n\|^2+2{\rm Re}\{e^{j\phi_{in}}{\bf \bar g}_{in}^{\rm H}{\bf r}_n\}\,. \end{align} With all elements of $\{{\bf v}_i\}_{i=1}^K$ fixed except $e^{j\phi_{in}}$, the optimal value for $\phi_{in}$ should maximize ${\rm Re}\{e^{j\phi_{in}}{\bf \bar g}_{in}^{\rm H}{\bf r}_n\}$, which is $\phi_{in}^\star=-\angle{\bf \bar g}_{in}^{\rm H}{\bf r}_n$. Now, we are able to devise an iterative algorithm that alternately updates each entry $v_{in}=e^{j\phi_{in}^\star}$ with all other elements fixed. Since the ${\rm SNR}_k$ in (\ref{snr}) is non-decreasing at each iteration, the algorithm is guaranteed to converge (an illustrative code sketch of this update is provided at the end of this section). \\ \hspace*{\parindent}It is noted that for the $K$-user MISO-IFC aided by IRSs, in the special case of the single-user maximum rate point, all the transmitters except transmitter $k$ remain silent, and all IRSs work cooperatively to enhance the effective MISO channel of user $k$. Therefore, compared with the conventional IFC, the single-user maximum rate point is guaranteed to be improved in the presence of the IRSs. \subsection{Iterative Transmit and Reflective Beamforming Optimization Design} In this subsection, we consider the general optimization problem (\ref{orig_formulation}) for the multi-user MISO-IFC aided by IRSs. An iterative optimization algorithm based on the BCD technique is proposed, where the active transmit beamforming and the passive reflective beamforming are updated alternately with the other fixed. First, consider the transmit beamforming optimization problem with given reflective beamforming vectors $\{{\bf v}_i\}_{i=1}^K$. In this case, the effective MISO channel from transmitter $j$ to receiver $k$, denoted as ${\bf g}_{kj}\in {\mathbb C}^{M\times 1}$, can be written as \begin{align*} {\bf g}_{kj}=\left({\bf h}_{kj}+\sum_{i=1}^K{\boldsymbol \Gamma}_{kij}^{\rm H}{\bf v}_i\right)\,.
\end{align*} As a result, the sub-problem for transmit beamforming optimization of problem (\ref{orig_formulation}) reduces to \begin {subequations}\label{relax power} \begin{align} \mathop{\max}_{R,\{{\bf w}_j\}_{j=1}^K} &\quad R\\ {\rm s.t.} \quad &\log_2\left(1+\frac{\left|{\bf g}_{kk}^{\rm H}{\bf w}_k\right|^2}{\sum_{j\neq k}\left|{\bf g}_{kj}^{\rm H}{\bf w}_j\right|^2+\sigma^2}\right)\ge \zeta_kR\,,\forall k\,,\\ & ||{\bf w}_j||^2\le P_j, \forall j\,. \end{align} \end{subequations} Problem (\ref{relax power}) reduces to the beamforming optimization problem of the conventional MISO IFC, which can be optimally solved via SOCP together with the bisection search method [8]. Specifically, for any given rate target $R > 0$, we have the following feasibility problem: \begin {subequations}\label{last power} \begin{align} {\rm Find:}\quad & \{{\bf w}_j\}_{j=1}^K,\\ {\rm s.t.}\quad &\frac{|{\bf g}_{kk}^{\rm H}{\bf w}_k|^2}{ (2^{\zeta_kR}-1)}\ge \sum\limits_{j\neq k}\left|{\bf g}_{kj}^{\rm H}{\bf w}_j\right|^2+\sigma^2\,,\forall k\,,\\ & ||{\bf w}_j||^2\le P_j, \forall j\,. \end{align} \end{subequations} If $R$ is feasible for problem (\ref{last power}), then the optimal value of problem (\ref{relax power}), denoted as $R^\star$, is no smaller than the given value $R$, i.e., $R^\star \geq R$; otherwise, we have $R^{\star} < R$. Hence, the optimal solution $\{R^\star, \{{\bf w}_j^\star\}_{j=1}^K\}$ to problem (\ref{relax power}) can be obtained via bisection search over $R$ by solving a sequence of the feasibility problems (\ref{last power}). Moreover, it is noted that, without loss of optimality, a common phase shift can be applied to ${\bf w}_k$ so that ${\bf g}_{kk}^{\rm H}{\bf w}_k$ is real-valued for all $k$. Therefore, by taking square roots on both sides of (\ref{last power}b), the constraints can be recast as second-order cone (SOC) constraints as \begin{align}\label{soc constraint} \frac{{\rm Re}\left({\bf g}_{kk}^{\rm H}{\bf w}_k\right)}{\sqrt{2^{\zeta_k R}-1}} \ge \left\| \begin{array}{l} {\bf{g}}_{k1}^{\rm{H}}{{\bf{w}}_1}\\ \quad \vdots \\ {\bf{g}}_{k(k - 1)}^{\rm{H}}{{\bf{w}}_{k - 1}}\\ {\bf{g}}_{k(k + 1)}^{\rm{H}}{{\bf{w}}_{k + 1}}\\ \quad\vdots \\ {\bf{g}}_{kK}^{\rm{H}}{{\bf{w}}_K}\\ {\sigma} \end{array} \right\|\,,\forall k\,, \end{align} which are convex constraints. Therefore, problem (\ref{last power}) can be further written as: \begin{align}\label{last socp} {\rm Find:}\quad \{{\bf w}_j\}_{j=1}^K\,,\quad {\rm s.t.}\quad (\ref{last power}c) \mspace{6mu} {\rm and} \mspace{6mu} (\ref{soc constraint})\,. \end{align} Problem (\ref{last socp}) is an SOCP, which can be efficiently solved by standard convex optimization solvers \cite{w1995x}. Thus, the optimal transmit beamforming solution to problem (\ref{relax power}) is obtained by solving a sequence of SOCP problems together with a bisection search over $R$ (a code sketch of this step is also provided at the end of this section).\\ \hspace*{\parindent}Next, we focus on optimizing the reflective beamforming vectors $\{{\bf v}_i\}_{i=1}^K$ with fixed transmit beamforming. With any given $\{{\bf w}_j\}_{j=1}^K$, constraints (\ref{orig_formulation}b) can be rewritten as \begin{align}\label{formula constraint} \log_2\left(1+\frac{\left|a_{kk}+\sum\limits_{i=1}^K{\bf v}_i^{\rm H}{\boldsymbol \gamma}_{kik}\right|^2}{\sum_{j\neq k}\left| a_{kj}+\sum\limits_{i=1}^K{\bf v}_i^{\rm H}{\boldsymbol \gamma}_{kij}\right|^2+\sigma^2}\right)\ge \zeta_kR\,, \end{align} where $ a_{kj}\triangleq{\bf h}_{kj}^{\rm H}{\bf w}_j$ and ${\boldsymbol \gamma}_{kij}\triangleq{\boldsymbol \Gamma}_{kij}{\bf w}_j\in {\mathbb C}^{N\times 1}$.
Define \begin{align*} {\bf \bar v}\buildrel \Delta \over = \left[ \begin{array}{l} {1}\\ {{\bf v}_1}\\ \vdots \\ {{\bf v}_K} \end{array} \right]\in {\mathbb C}^{(KN+1)\times 1}\,, {\bf b}_{kj}\buildrel \Delta \over = \left[ \begin{array}{l} a_{kj}\\ {{\boldsymbol \gamma}_{k1j}}\\ \vdots \\ {{\boldsymbol \gamma}_{kKj}} \end{array} \right]\in {\mathbb C}^{(KN+1)\times 1}\,. \end{align*} Then, the sub-problem for reflective beamforming optimization of problem (\ref{orig_formulation}) can be written as \begin{subequations} \label{same power} \begin{align} \mathop{\max}_{{\bf \bar v},R}\quad &R\\ {\rm s.t.}\quad&\log_2\left(1+\frac{|{\bf b}_{kk}^{\rm H}{\bf \bar v}|^2}{\sum_{j\neq k}|{\bf b}_{kj}^{\rm H}{\bf \bar v}|^2+\sigma^2}\right)\ge \zeta_kR\,, \forall k\,,\\ &{\bar v}_1=1\,,\\ &|{\bar v}_{n}|=1\,,n=2,3,\cdots,KN+1\,. \end{align} \end{subequations} Problem (\ref{same power}) is non-convex, due to the non-concave rate expression with respect to the reflective beamforming vectors and the non-convex unit-magnitude constraints. To address this, we leverage the celebrated SDR method to find a high-quality approximate solution \cite{4443878}. To proceed, define the rank-one positive semi-definite matrix ${\bf V}={\bf \bar v}{\bf \bar v}^{\rm H}$. Then, problem (\ref{same power}) is further reformulated as \begin{subequations}\label{relax0} \begin{align} \mathop{{\max}}_{R,{\bf V}}\quad &R\\ {\rm s.t.}\quad &\frac{{\rm Tr}({\bf Q}_{kk}{\bf V})}{\sum_{j\neq k}{\rm Tr}({\bf Q}_{kj}{\bf V})+\sigma^2}\ge 2^{\zeta_kR}-1\,,\forall k\,,\\ &|{\bf V}_{nn}|=1\,, n=1,2,\cdots, KN+1\,,\\ & {\rm rank} ({\bf V})= 1\,,\\ & {\bf V}\succeq {\bf 0}\,, \end{align} \end{subequations} where ${\bf Q}_{kj}={\bf b}_{kj}{\bf b}_{kj}^{\rm H}$. For a given rate target $R>0$, we have the following feasibility problem: \begin{subequations}\label{relax1} \begin{align} {\rm Find}:\quad &{\bf V}\\ {\rm s.t.}\quad &\frac{{\rm Tr}({\bf Q}_{kk}{\bf V})}{ (2^{\zeta_kR}-1)}\ge\sum\limits_{j\neq k}{\rm Tr}({\bf Q}_{kj}{\bf V})+\sigma^2\,, \forall k\,,\\ &(\ref{relax0}c), \mspace{3mu}(\ref{relax0}d) \mspace{3mu}{\rm and}\mspace{3mu} (\ref{relax0}e) \,. \end{align} \end{subequations} Note that (\ref{relax1}) is still non-convex, due to the rank-one constraint. By dropping the rank-one constraint, we have the following relaxed problem: \begin{align}\label{relax2} {\rm Find}:\quad {\bf V}\,,\quad {\rm s.t.}\quad(\ref{relax0}c), \mspace{3mu} (\ref{relax0}e) \mspace{3mu}{\rm and}\mspace{3mu}(\ref{relax1}b)\,. \end{align} \hspace*{\parindent}Problem (\ref{relax2}) is a convex SDP, which can be solved directly. However, due to the relaxation, the resulting $\bf V$ from (\ref{relax2}) may not be a rank-one matrix. To this end, we apply the standard Gaussian randomization steps \cite{4443878} to construct a rank-one solution to problem (\ref{relax1}). Therefore, problem (\ref{relax0}) is solved via a bisection search over $R$, by solving a sequence of SDP problems together with Gaussian randomization to construct a rank-one matrix. \\ \hspace*{\parindent}In summary, the proposed algorithm to solve problem (6) is presented in Alg.~1. In Alg.~1, for a given weight vector $\boldsymbol \zeta$, the transmit and reflective beamforming vectors are optimized alternately. By varying $\boldsymbol \zeta$, the resulting rate-tuples constitute the Pareto boundary of the achievable rate region for the multi-user MISO IFC aided by coordinated IRSs.
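To make the element-wise reflective beamforming update of the single-user case concrete, the following is a minimal NumPy sketch of the coordinate ascent over the phase shifts in (\ref{formu_snr}). The function name and interface are ours, chosen purely for illustration; they are not the implementation used to produce the results in this paper.
\begin{verbatim}
import numpy as np

def coordinate_ascent_phases(h_kk, Gammas, n_sweeps=20):
    # Element-wise coordinate ascent for the single-user objective
    # f({v_i}) = ||h_kk + sum_i Gamma_i^H v_i||^2.
    # Gammas[i] is the N x M effective channel Gamma_{kik};
    # h_kk is the length-M direct channel.
    K, N = len(Gammas), Gammas[0].shape[0]
    phases = np.zeros((K, N))                  # phi_{in}, initialized to 0
    for _ in range(n_sweeps):
        for i in range(K):
            for n in range(N):
                v = np.exp(1j * phases)        # current v_1, ..., v_K
                r_n = Gammas[i].conj().T[:, n] # n-th column of Gamma^H
                # bar g_{in}: everything except the (i, n)-th term
                g_bar = h_kk + sum(Gammas[q].conj().T @ v[q]
                                   for q in range(K)) - r_n * v[i, n]
                # phi* = -angle(bar g^H r_n) maximizes
                # Re{ e^{j phi} bar g^H r_n }
                phases[i, n] = -np.angle(g_bar.conj() @ r_n)
    v_opt = np.exp(1j * phases)
    f_val = np.linalg.norm(h_kk + sum(Gammas[i].conj().T @ v_opt[i]
                                      for i in range(K))) ** 2
    return v_opt, f_val   # SNR_k = (P_k / sigma^2) * f_val
\end{verbatim}
Each update applies the closed-form maximizer $\phi_{in}^\star$, so the objective is non-decreasing across sweeps, mirroring the convergence argument given above.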
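Similarly, the inner transmit beamforming step of Alg.~1, i.e., the bisection over $R$ with the SOCP feasibility check (\ref{last socp}), can be sketched with an off-the-shelf convex solver. The sketch below uses CVXPY; the signature and variable names are ours and this is an illustrative sketch rather than a definitive implementation.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def transmit_beamforming_bisection(G, P, sigma2, zeta, R_max, eps=1e-3):
    # G[k][j]: effective MISO channel g_kj (length-M complex vector).
    K, M = len(P), G[0][0].shape[0]
    R_lo, R_hi, W_best = 0.0, R_max, None
    while R_hi - R_lo > eps:
        R = 0.5 * (R_lo + R_hi)
        W = [cp.Variable(M, complex=True) for _ in range(K)]
        cons = [cp.sum_squares(W[j]) <= P[j] for j in range(K)]
        for k in range(K):
            # stack of interference terms g_kj^H w_j (j != k) and sigma,
            # as in the SOC reformulation above
            stack = cp.hstack([G[k][j].conj() @ W[j]
                               for j in range(K) if j != k]
                              + [np.sqrt(sigma2)])
            thr = np.sqrt(2.0 ** (zeta[k] * R) - 1.0)
            cons.append(thr * cp.norm(stack, 2)
                        <= cp.real(G[k][k].conj() @ W[k]))
        prob = cp.Problem(cp.Minimize(0), cons)  # pure feasibility check
        prob.solve()
        if prob.status in ("optimal", "optimal_inaccurate"):
            R_lo, W_best = R, [w.value for w in W]   # R achievable
        else:
            R_hi = R                                 # R infeasible
    return R_lo, W_best
\end{verbatim}
In the reflective beamforming step, the SOCP feasibility check is replaced by the relaxed SDP (\ref{relax2}) followed by Gaussian randomization; the bisection logic is otherwise unchanged.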
\section{Numerical Results} In this section, we provide an example to evaluate the performance of the proposed algorithms. We consider a two-user IFC with $K=2$ and $M=32$; the two transmitters are located at $(0,50\,{\rm m})$ and $(50\,{\rm m}, 50\,{\rm m})$, respectively, and the receivers are located at $(0,0)$ and $(50\,{\rm m},0)$, respectively. The two IRSs, each with $N=256$ elements, are placed at two random locations between the transmitters and receivers. We assume the distance-dependent path loss model, i.e., $Q_{L}=C_0\left({d}/{d_0}\right)^{\beta}$, where $C_0$ denotes the path loss at the reference distance of $d_0=1$ meter, $\beta$ is the path loss exponent and $d$ denotes the individual link distance. For the transmitter-receiver, transmitter-IRS and IRS-receiver links, the path loss exponents are set to $3.6$, $2$ and $2.5$, respectively. Moreover, for the SDR approach, $3000$ Gaussian randomizations are used to construct a rank-one solution from any higher-rank matrix obtained from (\ref{relax2}). The following benchmark schemes are also considered: \\ $\quad~\bullet$ \textbf{Scheme 1} (\emph{Random reflective beamforming}): The phase shift of each IRS element $v_{in}$ is set as a random value uniformly distributed in $[0,2\pi)$.\\ $\quad~\bullet$ \textbf{Scheme 2} (\emph{Without IRS}): This corresponds to the Pareto optimal solution of the conventional MISO-IFC without IRS.\\ \hspace*{\parindent}In Fig.~2, the achievable rate regions of an example two-user MISO IFC for the various schemes are plotted at ${\rm SNR}=20$~dB. The figure reveals that the achievable rate region for the IRS-aided MISO IFC with the proposed design is much larger than that for the conventional MISO IFC. Moreover, it is observed that the IRS-aided MISO IFC with random beamforming may perform even worse than the IFC without IRS. This is because the reflective links may even weaken the signal strength of the direct link, if the reflective phase shifts are not properly designed. \begin{algorithm} \caption{Iterative transmit and reflective beamforming optimization for problem (6)} \label{alg1} \begin{algorithmic}[1] \STATE Initialize: threshold $\epsilon>0$, $\{{\bf v}_i\}_{i=1}^K$, $R_{\rm min}=0$ and $R_{\rm max}$ \\ to a sufficiently large value. Let $R_L=R_{\rm min}$; \STATE \textbf{Repeat} \STATE \quad Let $R_U=R_{\rm max}$;\\ \STATE \quad \textbf{Repeat} \STATE \quad~ Let $R=(R_L+R_U)/2$;\\ \STATE \quad~ With the given $\{{\bf v}_i\}_{i=1}^K$, solve the SOCP problem (\ref{last socp}) \\ \quad~ and denote the obtained solution as $\{{\bf w}_j^\star\}_{j=1}^K$;\\ \STATE \quad~ \textbf{if} (\ref{last socp}) is feasible, set $R_L=R$, ${\bf w}_j={\bf w}_j^\star,\forall j$ \STATE \quad~ \textbf{else} set $R_U=R$; \STATE \quad \textbf{Until} $R_U-R_L\le \epsilon$; \STATE \quad Reset $R_U=R_{\rm max}$;\\ \STATE \quad \textbf{Repeat} \STATE \quad~ $R=(R_L+R_U)/2$;\\ \STATE \quad With the fixed $\{{\bf w}_j\}_{j=1}^K$, solve problem (\ref{relax2}) and denote\\ \quad the obtained solution as ${\bf V}^\star$; \STATE \quad~ \textbf{if} (\ref{relax2}) is feasible \STATE \quad~~\textbf{if} ${\rm rank}({\bf V}^\star)=1$ \STATE \quad~~~recover ${\bf \bar v}$ (and thus $\{{\bf v}_i\}_{i=1}^K$) from ${\bf V}^\star={\bf \bar v}{\bf \bar v}^{\rm H}$,\\ \quad~~~with resulting rate $R$, \STATE \quad~~\textbf{else} \STATE \quad~~~perform Gaussian randomization to obtain a rank-one\\ \quad~~~vector ${\bf \bar v}$ (and thus $\{{\bf v}_i\}_{i=1}^K$), with resulting rate $R$.
\STATE \quad~~Set $R_L=R$,\\ \STATE \quad~ \textbf{else} set $R_U=R$; \STATE \quad \textbf{Until} $R_U-R_L\le \epsilon$; \STATE \textbf{Until} the increase of the objective function is smaller than $\epsilon$ \STATE \textbf{Output:} Solution $\{{\bf w}_j\}_{j=1}^K$, $\{{\bf v}_i\}_{i=1}^K$ \end{algorithmic} \end{algorithm} \begin{figure}[!t] \includegraphics [width=2.7in]{achieve_coeff.pdf} \caption{Achievable rate region of the two-user MISO IFC with or without IRS.} \label{fig2} \end{figure} \section{Conclusion} In this paper, we studied the achievable rate region of the multi-user MISO IFC aided by coordinated IRSs. By leveraging the additional design DoF provided by multiple IRSs, we proposed an iterative transmit and reflective beamforming design scheme to characterize the achievable rate region of the IFC, based on SOCP and SDR optimization techniques. Numerical results were provided to demonstrate that the achievable rate region of the IRS-aided IFC is significantly larger than that of the conventional IFC, provided that the beamforming is properly designed. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} In the March 2010 issue of the journal {\it Crux Mathematicorum with Mathematical Mayhem}, mayhem problem M429 was proposed (see reference \cite{1}): \vspace{.15in} Determine all positive integers $a,b,c$ that satisfy, $$ \begin{array}{rcll} a^{(b^c)} & = & (a^b)^c; & {\rm or\ equivalently}\\ \\ a^{b^c} & = & a^{bc}. \end{array} $$ \noindent A solution, by this author, was published in the December 2010 issue of {\it Crux Mathematicorum with Mathematical Mayhem} (see \cite{2}). According to this solution, the following ordered triples of positive integers $a,b,c$ are precisely those that satisfy the above exponential equation: \vspace{.15in} \noindent The triples of the form $(1,b,c)$, with $b,c$ being any positive integers; \noindent the triples of the form $(a,b,1)$, with $a,b$ positive integers and with $a \geq 2$; \noindent and the triples of the form $(a,2,2)$ with $a \in {\Bbb Z}^+$, and $a \geq 2$. \vspace{.15in} In the language of diophantine equations, we are dealing with the three-variable diophantine equation \begin{equation} x^{(y^z)} = x^{yz}. \label{E1} \end{equation} \noindent Accordingly, the above results can be expressed in Theorem 1 as follows. \begin{theorem} Consider the three-variable diophantine equation, $x^{(y^z)} = x^{yz}$, over the set of positive integers ${\Bbb Z}^+$. If $S$ is the solution set of the above diophantine equation, then $S = S_1 \bigcup S_2 \bigcup S_3$, where $S_1,S_2,S_3$ are the pairwise disjoint sets, $$ \begin{array}{rcl} S_1 & = & \left\{ \left.(1,b,c)\right| b,c \in {\Bbb Z}^+ \right\};\\ \\ S_2 & = & \left\{\left. (a,b,1)\right| a \geq 2, a,b \in {\Bbb Z}^+\right\};\\ \\ S_3 & = & \left\{ \left.(a,2,2)\right| a \geq 2\ {\rm and}\ a \in {\Bbb Z}^+\right\}. \end{array} $$ \end{theorem} Motivated by mayhem problem M429, in this work we tackle another four exponential, three-variable diophantine equations. These are: \vspace{.15in} \begin{equation} x^{(y^z)} = x^{(z^y)}, \label{E2} \end{equation} \begin{equation} x^{(y^z)} = y^{xz}, \label{E3} \end{equation} \begin{equation} x^{yz} = y^{xz}, \label{E4} \end{equation} \noindent and \begin{equation} x^{(y^z)} = z^{xy}. \label{E5} \end{equation} In Section 2, we state Theorems 2, 3, 4, and 5. In Theorems 2, 3, and 4, the solution sets of the diophantine equations (\ref{E2}), (\ref{E3}), and (\ref{E4}) are stated. These three solution sets are determined with the aid of the two-variable exponential diophantine equation found in Section 3, whose solution set is given in Result 2. The proofs of Theorems 2, 3, and 4 are given in Section 4, together with the proof of Theorem 5. In Theorem 5, some solutions to equation (\ref{E5}) are given. \section{The four theorems} \begin{theorem} Consider the three-variable diophantine equation (over ${\Bbb Z}^+$), $$ x^{(y^z)} = x^{(z^y)}. $$ \noindent Let $S$ be the solution set of this equation. Then, $S = S_1 \bigcup S_2 \bigcup S_3 \bigcup S_4 \bigcup S_5$, where \vspace{.15in} $ \begin{array}{rcl} S_1 & = & \left\{\left. (1,b,c)\right| b,c \in {\Bbb Z}^+ \right\},\\ \\ S_2 & = & \left\{ \left. (a,1,1)\right| a\geq 2,\ a \in {\Bbb Z}^+ \right\},\\ \\ S_3 & = & \left\{\left. (a,b,b) \right| a \geq 2,\ b\geq 2,\ a, b \in {\Bbb Z}^+\right\},\\ \\ S_4 & = & \left\{ \left. (a,4,2)\right| a \geq 2,\ a \in {\Bbb Z}^+\right\},\\ \\ S_5 & = & \left\{ \left.
(a,2,4)\right| a \geq 2,\ a \in {\Bbb Z}^+ \right\}. \end{array} $ \end{theorem} \vspace{.15in} \begin{theorem} Consider the three-variable diophantine equation (over ${\Bbb Z}^+$), $$ x^{(y^z)} = y^{xz}. $$ Let $S$ be the solution set of this equation. Then, $S = S_1 \bigcup S_2 \bigcup S_3 \bigcup S_4 \bigcup S_5$, where $$\begin{array}{lrcl} &S_1& =& \left\{ \left.(1,1,c)\right| c \in {\Bbb Z}^+\right\}, \\ \\ &S_2 & = & \left\{ \left. (a,a,1)\right| a \geq 2, a \in {\Bbb Z}^+\right\}, \\ \\ {\rm (singleton\ set)} & S_3 & =& \left\{( 4,2,1)\right\},\\ \\ {\rm (singleton\ set)} & S_4 & = & \left\{(2,4,1)\right\}, \\ \\ & S_5 & = & \left\{ \left. (b^c,b,c)\right|b \geq 2, c \geq 2, b,c\in {\Bbb Z}^+\right\}. \end{array} $$ \end{theorem} \begin{theorem} Consider the three-variable diophantine equation (over ${\Bbb Z}^+$) $$ x^{yz} = y^{xz}. $$ Let $S$ be its solution set. Then, $$ S = S_1 \bigcup S_2 \bigcup S_3 \bigcup S_4 \bigcup S_5 \bigcup S_6 \bigcup S_7, $$ \noindent where $$ \begin{array}{lrcl} & S_1 & = & \left\{ \left. (1,1,c)\right| c \in {\Bbb Z}^+\right\}, \\ \\ & S_2 & = & \left\{ \left. (a,a,1)\right| a \geq 2, a \in {\Bbb Z}^+\right\},\\ \\ {\rm (singleton \ set)} & S_3 & = & \left\{ (4,2,1)\right\},\\ \\ {\rm (singleton \ set)} & S_4 & = & \left\{ (2,4,1)\right\},\\ \\ & S_5 & = & \left\{\left. (a,a,c)\right| a\geq 2, c\geq 2, a,c \in {\Bbb Z}^+\right\},\\ \\ & S_6 & = & \left\{ \left. (4,2,c)\right| c \geq 2,\ c\in {\Bbb Z}^+\right\}, \\ \\ & S_7 & = & \left\{ \left. (2,4,c) \right| c \geq 2, \ c \in {\Bbb Z}^+\right\}. \end{array} $$ \end{theorem} \vspace{.15in} \begin{theorem} Consider the three-variable equation (over ${\Bbb Z}^+$) $$ x^{(y^z)} = z^{xy}. $$ \begin{enumerate} \item[(i)] Let $S$ be the set of those solutions $(x,y,z)$ such that at least one of $x,y$, or $z$ is equal to $1$. Then $$ S = \left\{ \left. (1,b,1)\right| b \in {\Bbb Z}^+\right\}. $$ \item[(ii)] The only solution $(x,y,z)$ to the above equation, such that $x \geq 2,\ y \geq 2,\ z \geq 2$, and with $x=z$, is the triple $(2,2,2)$. \item[(iii)] Let $F$ be the family of solutions $(x,y,z)$ such that $x \geq 2,\ y \geq 2,\ z\geq 2$ and with $y = z \neq x$. Then $$ F=\left\{ \left.(b^b,b,b) \right| b \geq 2,\ b \in {\Bbb Z}^+\right\}. $$ \end{enumerate} \end{theorem} \section{A key exponential diophantine equation} The diophantine equation, $x^y = y^x$, over the positive integers, is instrumental in determining the solution sets of the diophantine equations (\ref{E2}), (\ref{E3}), and (\ref{E4}). The following, Result 1, can be found in W. Sierpinski's book ``Elementary Theory of Numbers'' (see reference \cite{3}). The proof is about half a page long. \begin{result} Consider the two-variable equation, $x^y = y^x$, over the set of positive rational numbers, ${\Bbb Q}^+$. Then all the solutions to this equation, with $x$ and $y$ being positive rationals, and with $y > x$, are given by $$ x = \left( 1+\dfrac{1}{n}\right)^n,\ \ \ y= \left( 1+\dfrac{1}{n}\right)^{n+1}, $$ \noindent where $n$ is a positive integer: $n = 1,2,3,\ldots$\ . \end{result} A cursory examination of the formulas in Result 1 easily leads to Result 2. Observe that these formulas can be written in the form, $$ x=\left(\dfrac{n+1}{n}\right)^n ,\ \ \ y = \left(\dfrac{n+1}{n} \right)^{n+1}. $$ For $n = 1$, we obtain the integer solution $x = 2$ and $y = 4$. However, for $n \geq 2$, the number $\dfrac{n+1}{n}$ is a proper rational, i.e., a rational which is not an integer.
This is clear since $n$ and $n+1$ are relatively prime, and $n \geq 2$. Thus, since for $n \geq 2$, $\dfrac{n+1}{n}$ is a proper rational, so must be any positive integer power of $\dfrac{n+1}{n}$. This observation takes us immediately to Result 2 below. \begin{result} Consider the two-variable diophantine equation (over ${\Bbb Z}^+$) $$ x^y = y^x. $$ Let $S$ be its solution set. Then, $S = S_1 \bigcup S_2 \bigcup S_3$, where $$ \begin{array}{lrcl} & S_1 & = & \left\{ \left. (a,a)\right| a \in {\Bbb Z}^+ \right\},\\ \\ {\rm (singleton\ set)} & S_2 & = & \left\{ (4,2)\right\},\\ \\ {\rm and\ (singleton\ set)} & S_3 & = & \left\{ (2,4)\right\}. \end{array} $$ \end{result} Result 2 is used in the proofs of Theorems 2, 3, and 4 below; a small computational verification of the stated solution sets is sketched after the proofs. \section{Proofs of Theorems 2, 3, 4, and 5} \begin{enumerate}\item[(1)] \begin{proof} {\bf Theorem 2} Suppose that $(a,b,c)$ is a solution to equation (\ref{E2}). We have \begin{equation} a^{(b^c)} = a^{(c^b)} \label{E6} \end{equation} \noindent If $a=1$, then $b$ and $c$ can be arbitrary positive integers; and (\ref{E6}) is satisfied. \noindent If $b=1$ and $a \geq 2$, then by (\ref{E6}) we get \vspace{.15in} \hspace{2.25in} $a=a^c$. \hfill (6a) \vspace{.15in} \noindent Since $ a \geq 2$, by inspection, we see that (6a) is satisfied only when $c = 1$. So, we obtain the solutions of the form $(a,1,1)$ with $a \geq 2$. If $a \geq 2, \ b\geq 2$, and $c =1$, equation (\ref{E6}) yields $$ a^b = a, $$ \noindent which is impossible with $a \geq 2$ and $b \geq 2$. Finally, assume that $a \geq 2,\ b \geq 2$, and $c\geq 2$ in (\ref{E6}). Then (\ref{E6}) $\Leftrightarrow$ (since $a \geq 2$) $b^c = c^b$; and by Result 2, it follows that either $b=4$ and $c=2$; or $b=2$ and $c =4$; or $b = c$. We have shown that if $(a,b,c)$ is a positive integer solution of equation (\ref{E2}), then $(a,b,c)$ must belong to one of the sets $S_1,S_2,S_3,S_4$, or $S_5$. Conversely, a routine calculation shows that any member of these five sets is a solution to (\ref{E2}). \end{proof} \item[(2)] \begin{proof} {\bf Theorem 3}. Let $(a,b,c)$ be a solution to equation (\ref{E3}). We then have, \begin{equation} a^{(b^c)} = b^{ac} \label{E7} \end{equation} \noindent If $a =1$, then by (\ref{E7}), $1=b^c$, which in turn implies $b=1$; and $c$ is an arbitrary positive integer. \noindent If $a \geq 2$ and $b=1$, (\ref{E7}) becomes impossible for any value of $c$. If $a \geq 2,\ b\geq 2$, and $c = 1$, (\ref{E7}) yields $a^b = b^a$; and by Result 2 we must have either $a = 4$ and $b=2$; or $a = 2$ and $b=4$; or $a =b$. If $a \geq 2,\ b\geq 2,\ c \geq 2$, then by (\ref{E7}), \vspace{.15in} \hspace{2.25in} $a^{(b^c)} = (b^c)^a$ \hfill (7a) \vspace{.15in} \noindent Combining (7a) with Result 2 implies that either $a=4$ and $b^c=2$, which is impossible since $b \geq 2$ and $c \geq 2$; or $a=2$ and $b^c = 4$, which gives $a = b = c = 2$; or, the third possibility, $a = b^c$. We have shown that if $(a,b,c)$ is a positive integer solution of equation (\ref{E3}), then it must belong to one of the sets $S_1,\ S_2,\ S_3,\ S_4$ or $S_5$. Conversely, a routine calculation shows that any member of these five sets is a solution to (\ref{E3}). \end{proof} \item[(3)] \begin{proof} {\bf Theorem 4}. Let $(a,b,c)$ be a positive integer solution to equation (\ref{E4}) \begin{equation} a^{bc} = b^{ac} \label{E8} \end{equation} \noindent If $a=1$, we obtain $1=b^c$; and so $b=1$, with $c$ being an arbitrary positive integer. \noindent If $a \geq 2$ and $b=1$, (\ref{E8}) gives $a^c=1$, which is impossible since $a \geq 2$.
\noindent If $a \geq 2,\ b \geq 2$, and $c=1$, we obtain from (\ref{E8}) \vspace{.15in} \hspace{1.0in} $a^b = b^a$ \hfill (8a) \vspace{.15in} \noindent Equation (8a), combined with Result 2, implies that either $a = 4$ and $b=2$; or $a = 2$ and $b=4$; or $a = b$. If $a \geq 2,\ b\geq 2$, and $c \geq 2$, we have from (\ref{E8}) \vspace{.15in} \hspace{1.0in} $a^{bc} = b^{ac} \Leftrightarrow (a^b)^c = (b^a)^c$ \hfill (8b) \vspace{.15in} \noindent Equation (8b) demonstrates that the $c$th powers of the positive integers $a^b$ and $b^a$ are equal. Since these two integers are greater than $1$, equation (8b) implies $$a^b=b^a,$$ \noindent which once more, when combined with Result 2, implies either $a = 4$ and $b=2$; or $a=2$ and $b=4$; or $a =b$. We have shown that if $(a,b,c)$ is a positive integer solution of equation (\ref{E4}), it must belong to one of the sets $S_1,\ S_2,\ S_3,\ S_4,\ S_5, \ S_6,$ or $S_7$. Conversely, a routine calculation establishes that any member of these seven sets is a solution to (\ref{E4}). \end{proof} \item[(4)] {\bf Proof of Theorem 5} The following lemma can be easily proved by using mathematical induction. We omit the details. We will use the lemma in the proof of Theorem 5. \begin{lemma} \ \ \ \ \begin{enumerate} \item[(i)] If $b \geq 3$, then $b^{n-1} > n$ for all positive integers $n \geq 2$. \item[(ii)] $2^{n-1} > n$, for all positive integers $n \geq 3$. \item[(iii)] If $c \geq 2$, then $c^n > n$, for all positive integers $n$. \end{enumerate} \end{lemma} \begin{proof} {\bf Theorem 5} \ \ \ \ \begin{enumerate} \item[(i)] Let $(a,b,c)$ be a solution to equation (\ref{E5}) with at least one of $a,b,c$ being equal to $1$. \vspace{.15in} \noindent If $a=1$, (\ref{E5}) implies $1 = c^b$, and so $c=1$ as well; and $b$ is an arbitrary positive integer. \noindent If $b=1$ and $a \geq 2$, we get $a = c^a$, which is impossible if $c \geq 2$ by Lemma 1(iii); and clearly $c \neq 1$, since $a \geq 2$. Also, the case $b \geq 2, \ a \geq 2$, and $c=1$ is ruled out by inspection. We conclude that if $(a,b,c)$ is a solution to (\ref{E5}), with one of $a,b,c$ being $1$, then it must be of the form $(1,b,1)$. Conversely, a straightforward calculation establishes that $(1,b,1)$ is a solution of (\ref{E5}) for every positive integer $b$. \item[(ii)] Let $(a,b,c)$ be a solution to (\ref{E5}) with $a \geq 2,\ b\geq 2,\ c\geq 2$, and $a =c$. We have, by (\ref{E5}), $a^{(b^a)} = a^{ab} \Leftrightarrow $ (since $a\geq 2$) $b^a = ab$, or equivalently, $b^{a-1} = a$, which, when combined with Lemma 1, parts (i) and (ii), implies that either $b \geq 3$ and $a=1$, which is ruled out since $a \geq 2$; or $b = 2$ and $a \leq 2$, which gives $a = 2$. We obtain $a = b = c = 2$. Conversely, $(2,2,2)$ is a solution to equation (\ref{E5}), $2^4 = 2^4$. \item[(iii)] Let $(a,b,c)$ be a solution to (\ref{E5}) with $a \geq 2,\ b \geq 2,\ c \geq 2$, and with $b = c \neq a$. We have, $$ a^{(b^b)} = b^{ab}; $$ \noindent or equivalently, \begin{equation} a^{(b^b)} = (b^{b})^a.\label{E9} \end{equation} Equation (\ref{E9}) combined with Result 2 implies that either $a = 4$ and $b^b = 2$; or $a = 2$ and $b^b=4$; or $a = b^b$. The first possibility is ruled out since $b^b \geq 2^2 > 2$, because $b \geq 2$. The second possibility yields $b^b=4,\ b=2$, but then we also have $a=2$ and so $a = b=c=2$, contrary to $b = c \neq a$. The third possibility establishes $(a,b,c) = (b^b,b,b)$. Conversely, $(b^b,b,b)$ is a solution to (\ref{E5}) for any positive integer $b \geq 2$.
Both sides of (\ref{E5}) are equal to $b^{(b^{b+1})}$. \end{enumerate} \end{proof} \end{enumerate} \newpage
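As a sanity check, the solution sets stated above can be confirmed by exhaustive search over a small range of values. The short Python script below is illustrative only and is not part of the original solution; it verifies Result 2 and Theorem 4 for all values of the variables up to $6$.
\begin{verbatim}
# Exhaustive check of Result 2 and Theorem 4 on a finite box.
B = 6  # search bound; any modest bound works

# Result 2: all (x, y) with x^y = y^x
pairs = {(x, y) for x in range(1, B + 1) for y in range(1, B + 1)
         if x ** y == y ** x}
assert pairs == {(a, a) for a in range(1, B + 1)} | {(2, 4), (4, 2)}

# Theorem 4: all (x, y, z) with x^(yz) = y^(xz); the union of
# S_1, ..., S_7 simplifies to the three families checked here.
triples = {(x, y, z)
           for x in range(1, B + 1) for y in range(1, B + 1)
           for z in range(1, B + 1) if x ** (y * z) == y ** (x * z)}
expected = ({(a, a, c) for a in range(1, B + 1) for c in range(1, B + 1)}
            | {(4, 2, c) for c in range(1, B + 1)}
            | {(2, 4, c) for c in range(1, B + 1)})
assert triples == expected
print("Result 2 and Theorem 4 verified for all values up to", B)
\end{verbatim}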
\section*{Acknowledgment} We thank our reviewers for their invaluable comments. We also thank Liqian Ma and Caroline Chan for their great help with comparison; Neng Qian, Vladislav Golyanik, Yang He, Franziska Mueller and Ikhsanul Habibie for data acquisition; Gereon Fox for audio recording; Jiatao Gu and Daniele Panozzo for discussion. This work was supported by ERC Consolidator Grant 4DReply (770784), Lise Meitner Postdoctoral Fellowship, Max Planck Center for Visual Computing and Communications (MPC-VCC) and the Research Grant Council of Hong Kong (GRF 17210718). \bibliographystyle{IEEEtran} \section{Conclusion} We have presented a novel method for video synthesis of human actors. Our method is a data-driven approach that learns, from a monocular video, to generate realistic video footage of an actor, conditioned on skeleton pose input. In contrast to most existing methods that directly translate the sparse pose information into images, our proposed approach explicitly disentangles the learning of time-coherent fine-scale pose-dependent details from the embedding of the human in 2D screen space. As a result, our approach leads to significantly better human video synthesis results, as we have demonstrated both qualitatively and quantitatively.
\section{Experiments} \label{sec:experiments} \newcommand{\methodlabel}[1]{\parbox{2.79cm}{\centering {#1}}} \begin{figure*} \rotatebox[origin=l]{90}{ \methodlabel{\textbf{Ours}} \methodlabel{\textbf{Rendering}} \methodlabel{\textbf{Driving Motion}} } \subfigure{\includegraphics[scale=0.33]{figures/qualitative/0.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/1075.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/19711.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/23370.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/43522.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/44117.png}}% \vspace{-0.3cm} \rotatebox[origin=l]{90}{ \methodlabel{\textbf{Ours}} \methodlabel{\textbf{Rendering}} \methodlabel{\textbf{Driving Motion}} } \subfigure{\includegraphics[scale=0.33]{figures/qualitative/67012.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/69985.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/72911.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/73149.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/86985.png}}% \subfigure{\includegraphics[scale=0.33]{figures/qualitative/87367.png}}% \caption{Example frames for our motion transfer results. The 1st row shows frames from the source videos, the 2nd row shows the meshes rendered with the synthesized textures (input to our RefNet), and the 3rd row shows our final results. See our supplementary video for complete results. } \label{fig:motion_transfer_qualitative} \end{figure*} \begin{figure*} \rotatebox[origin=l]{90}{ \methodlabel{Esser et al. \cite{Esser2018}} \methodlabel{Ma et al. \cite{MaSGVSF2017}} \methodlabel{Chan et al. \cite{Chan2018}} \methodlabel{Wang et al. \cite{wang2018vid2vid}} \methodlabel{Liu et al. \cite{Liu2018Neural}} \methodlabel{\textbf{Ours}} \methodlabel{\textbf{Driving Motion}} } \subfigure{\includegraphics[scale=0.33]{figures/comparison/compare_fig2768.png}}% \subfigure{\includegraphics[scale=0.33]{figures/comparison/compare_fig4299.png}}% \subfigure{\includegraphics[scale=0.33]{figures/comparison/compare_fig4557.png}} \subfigure{\includegraphics[scale=0.33]{figures/comparison/compare_fig6350.png}}% \subfigure{\includegraphics[scale=0.33]{figures/comparison/compare_fig6479.png}}% \subfigure{\includegraphics[scale=0.33]{figures/comparison/compare_fig6741.png}}% \caption{Qualitative comparison against previous state-of-the-art methods on the motion transfer application. The first row shows the input sequence that is used to drive the motion, the second row shows the results obtained from our method, and the remaining rows show results obtained by the methods from~Liu et al. \cite{Liu2018Neural},~Wang et al. \cite{wang2018vid2vid},~Chan et al. \cite{Chan2018}, ~Ma et al. \cite{MaSGVSF2017},~Esser et al. \cite{Esser2018}.} \label{fig:motion_transfer_comparison} \end{figure*} To evaluate our approach and provide comparisons to existing methods, we conduct experiments on the 7 video sequences from~\cite{Liu2018Neural}. Each sequence comprises approximately $12{,}000$ frames, where the subjects are instructed to perform a wide range of different motions, so that the space of motions is sufficiently covered by the training data. We split each sequence into a training sequence and a test sequence, where the last quarter of each sequence is used for testing.
In addition, we captured a new sequence to demonstrate the use of our approach in a novel-view synthesis setting, and we also evaluate our method using a community video as the driving sequence. In the following, we show our qualitative results on the motion transfer and novel-view synthesis tasks and provide comparisons to previous state-of-the-art methods. Then, we perform an ablation study to evaluate the importance of each component of our approach. \begin{figure*}[ht] \includegraphics[width=\linewidth]{./figures/bullet_time_new.jpg} \caption{Bullet time video frame examples. Our method can be used to synthesize new views of the actor using just a monocular video.} \label{fig:bullet_time} \end{figure*} \subsection{Motion Transfer} For the motion transfer application, we make use of pairs of monocular video sequences, and our goal is to synthesize a video of the target actor performing the motion of the source actor, i.e., to transfer the motion from the source video to the target video. To this end, we estimate the optimal pose of the target person for each frame by solving an inverse kinematics (IK) problem as in~\cite{Liu2018Neural}, which encourages the corresponding keypoints on both skeletons, including the joints and facial landmarks, to match each other in 3D as much as possible. Note that directly applying the source's skeletal pose parameters to the target skeleton may fail to produce acceptable results in general for two reasons: First, this would require that both skeletons have exactly the same structure, which may be overly restrictive in practice. Second, even more importantly, differences in the rigging of the skeletons would lead to incorrect poses if the pose parameters of the source skeleton were applied directly to the target skeleton. Several example frames of the motion transfer results are shown in Fig.~\ref{fig:motion_transfer_qualitative}. We can see from the mesh rendered with the synthesized dynamic texture (see Fig.~\ref{fig:motion_transfer_qualitative}, 2nd row) that our TexNet is able to capture the pose-dependent details, such as wrinkles, while the RefNet yields realistic images, where artifacts due to tracking/skinning errors are corrected and the natural blending and interaction (shadows) between foreground and background are synthesized. \rev{We point out that even the results of non-frontal motions look plausible. } In our supplementary video we show additional animated results. Our approach can also take a user-designed motion as source motion input, which allows the user to interactively reenact the actor using a handle-based editor (see the demonstration in our supplementary video). \rev{Furthermore, we stress test our approach by using internet video footage as driving motion. Although the driving motion is very different from the motions in our training corpus, our approach generates plausible results (see Fig.~\ref{fig:dance} and the supplementary video).} \begin{figure}% \includegraphics[width=1\linewidth]{figures/youtube.png} \caption{\rev{Reenactment result of using internet video footage as driving motion.}} \label{fig:dance} \end{figure} We compare our approach with the following five methods on two sequences: Esser et al. \cite{Esser2018}, Ma et al. \cite{MaSGVSF2017}, Liu et al. \cite{Liu2018Neural}, Chan et al. \cite{Chan2018}, and Wang et al. \cite{wang2018vid2vid}. \revnew{For a fair comparison, we provide the input to the networks of the comparison methods in the formats they require. Specifically, the input to Esser et al.
\cite{Esser2018}, Ma et al. \cite{MaSGVSF2017} and Chan et al. \cite{Chan2018} is the motion of a 2D skeleton. A part-based RGBD representation is used as input for Liu et al. \cite{Liu2018Neural}. The tracking results obtained with OpenPose \cite{cao2018openpose} and DensePose \cite{guler2018densepose} are used as input to Wang et al. \cite{wang2018vid2vid}. } The qualitative comparisons are provided in Fig.~\ref{fig:motion_transfer_comparison}. Again, we refer the reader to the supplementary video for better visual comparisons. As can be seen from the video, our approach yields temporally more coherent results and exhibits fewer artifacts than the competing methods. In particular, the artifact of missing limbs is significantly alleviated in our results. Also note that, in contrast to our method, the methods of Esser et al. \cite{Esser2018}, Ma et al. \cite{MaSGVSF2017} and Wang et al. \cite{wang2018vid2vid} do not preserve the identity (body shape) of the actors, since their motion transfer is done in the 2D image space (e.g. with 2D landmark positions), while ours is done in the skeleton pose space. \revnew{Furthermore, our approach yields geometrically more consistent results. For example, wrinkles in our results move coherently with the garments, rather than being attached to a separate, spatially fixed layer in screen space, as can be observed for the other methods. These benefits come from a well-designed three-stage pipeline that first generates a dynamic texture with time-coherent high-frequency details and then renders the mesh with the dynamic texture, which is eventually refined in screen space. To help understand how each component of the pipeline contributes to the final result, we provide thorough ablations in Section \ref{ablation}, including the use of a rendered mesh with dynamic texture (rather than a sparse skeleton or a rendered mesh with average/static texture) as input to the second network, and the importance of a partial normal map as input to the first network.} \rev{We also compare the output of TexNet with the texture map retrieved by a simple nearest-neighbor-based approach. The similarity of two motions is defined as the $\ell_2$-norm of the difference of the motions represented by 30 joint angles $\theta$ ($\theta \in (-\pi, \pi]$). We fetch the texels from the texture map of the closest pose and fill in the invisible region using the average texture. The results are clearly worse and show many spatial and temporal artifacts (see the supplementary video).} \subsection{Novel-View Synthesis} Novel-view synthesis is an important task for many real-world applications, such as VR-based telepresence and the iconic ``bullet time'' visual effect for the film industry. Our proposed approach can deliver such results based on just a monocular video. To demonstrate this, we captured a monocular video sequence and showcase the bullet time visual effect based on our approach. In this video, the actor is asked to perform a similar set of motions (a Karate exercise) for multiple repetitions at \revnew{eight} different global rotation angles \revnew{(rotated in 45 degree steps)} with respect to the camera. This lets the camera capture similar poses from different viewing directions. The captured video is tracked and used for training our networks. For testing, we select a fixed pose out of the motion sequences, and then use a virtual camera orbiting around the actor to generate the conditional input images to our approach.
This allows us to synthesize realistic video of the actor frozen in a certain pose, viewed from different angles. Some example frames are shown in Fig.~\ref{fig:bullet_time} and the complete video can be found in the supplementary material. Note that we do not synthesize the background, i.e., the rotating floor and walls, but render them with Blender\footnote{https://www.blender.org/} with the same orbiting camera. Then, we segment out the foreground of our synthesized video, using the method of~\cite{Cae+17}, and composite the foreground and the rendered background. \subsection{User Study} Following many other image synthesis methods, we evaluate our approach in terms of user perception via a user study and also provide comparisons to existing methods in this manner. To this end, we show pairs of video synthesis results from 6 different methods to 18 users. These six methods include ours and the methods of Esser et al. \cite{Esser2018}, Ma et al. \cite{MaSGVSF2017}, Liu et al. \cite{Liu2018Neural}, Chan et al. \cite{Chan2018}, and Wang et al. \cite{wang2018vid2vid}. Our result is always included in each pair, thus enabling a direct comparison between our method and each of the existing methods. In total, 30 pairs of videos from two sequences are shown to the users. The user study video and the labels of all pairs are provided in the supplementary material. After watching the videos, the users are asked to select the one from each pair that appears more natural and realistic. In Table~\ref{tab:user_study} we provide the percentages of votes for our method, when compared to the respective existing method. We can see that our results are considered more realistic than those of all existing methods. Although Wang et al. \cite{wang2018vid2vid} is slightly preferred on sequence 2, we show in the supplementary video that their method only transfers the appearance but incorrectly scales the person to match the driving actor's shape. Note that this user study does not allow relative comparison among the previous methods, since they are not directly shown to the user side by side. \begin{table}[h] \centering \small \caption{Comparison of our method with existing methods through a user study. The percentages of votes for our method are provided. Numbers larger than 50 mean that our results are considered more realistic.} \begin{tabular}{|l|c|c|c|} \hline \textbf{Methods} & \textbf{Seq 1} & \textbf{Seq 2} & \textbf{All} \\ \hline \textbf{Esser et al. \cite{Esser2018}} & 90.74 & 94.44 & 92.59 \\ \hline \textbf{Ma et al. \cite{MaSGVSF2017}} & 100.00 & 96.30 & 98.15\\ \hline \textbf{Liu et al. \cite{Liu2018Neural}} & 88.68 & 72.55 & 80.61 \\ \hline \textbf{Chan et al. \cite{Chan2018}} & 67.92 & 68.52 & 68.22\\ \hline \textbf{Wang et al. \cite{wang2018vid2vid}} & 79.63 & 46.30 & 62.96\\ \hline \end{tabular} \label{tab:user_study} \end{table} \subsection{Ablation Study} \label{ablation} \rev{ We evaluate the importance of individual components of our approach via a quantitative ablation study. To this end, we split one video into a training set (12130 frames) and a test set (4189 frames). We evaluate the error on the test set with respect to the ground truth. As we are mainly interested in synthesizing the appearance of the human body, we compute the error only on the foreground region. } \rev{ \textbf{Relevance of TexNet.} First, we investigate the importance of using the dynamic texture generation based on TexNet.
For this analysis, we consider the two cases where we train the RefNet based on two alternative inputs: 1) the static texture from the 3D reconstruction (cf.~Fig.~\ref{fig:pipeline} ``Static texture''), and 2) the average texture computed from the visible texels of the texture extracted from the training video (cf.~Sec.~\ref{sec:dynamictexture}). The reconstruction errors of these two baselines and our approach are shown in Tab.~\ref{tab:ablation_quant} (``Average texture (RefNet)'', ``Static texture (RefNet)'', and ``Ours (TexNet + RefNet)''). We can see that our full pipeline significantly outperforms these two baseline methods in terms of average per-pixel mean error and SSIM (see the supplementary video for the visual results).} \rev{ \textbf{Importance of filtering stage.} We have also analyzed the importance of the filtering stage as used for the target texture extraction (Sec.~\ref{sec:datamodel}). To this end, we trained one network on unfiltered data, see Tab.~\ref{tab:ablation_quant} (``Without filtering (TexNet) + RefNet''). It can be seen that our full approach outperforms this network. Although quantitatively the improvements may appear small due to the relatively small area that is affected, we have found that the filtering qualitatively improves the results significantly, see Fig.~\ref{fig:ablation_filtering}. } \rev{ \textbf{Importance of partial normal map input.} We have also analyzed the importance of the partial normal map as input to our TexNet. For this analysis, we consider two cases: 1) we train TexNet using a rendered 3D skeleton and its depth as input (``Rendered 3D skeleton (TexNet) + RefNet''), and 2) a direct mapping (only RefNet) from the rendered 3D skeleton to the final image (``Rendered 3D skeleton (RefNet)''). As shown in Tab.~\ref{tab:ablation_quant}, our full pipeline outperforms these two baselines. \revnew{In the first case, using a partial normal map to encode the pose as input to TexNet is more effective and more robust than using a depth map of a 3D skeleton, since the network is spared the extra effort of learning a translation between two very different domains. In the second case, we can see that the dense mesh representation is more informative than the sparse skeleton and therefore achieves better results (see the supplementary video for the visual results). } } \begin{table}[ht] \centering \small \caption{\rev{Quantitative evaluation. We report the mean (for the whole sequence) of the L2 error and SSIM for the region of the person in the foreground. Our full approach obtains the best scores.}} \begin{tabular}{ | l | c | c |} \hline & \textbf{L2 error} & \textbf{SSIM} \\ \hline \hline \textbf{\rev{Rendered 3D skeleton (TexNet) + RefNet}} & 9.558 & 0.763 \\ \hline \textbf{Rendered 3D skeleton (RefNet)} & 9.726 & 0.755 \\ \hline \textbf{Average texture (RefNet)} & 9.133 & 0.771 \\ \hline \textbf{Static texture (RefNet)} & 8.958 & 0.775 \\ \hline \textbf{Without filtering (TexNet) + RefNet} & 8.744 & 0.781 \\ \hline \textbf{Ours (TexNet + RefNet)} & \textbf{8.675} & \textbf{0.784} \\ \hline \end{tabular} \label{tab:ablation_quant} \end{table} \begin{figure}% \includegraphics[width=1\linewidth]{figures/filtering/ablation_filtering.pdf} \caption{\rev{Ablative results for the proposed filtering procedure used for the target texture extraction. We show three instances, where the left images show the result of the rendered mesh with a dynamically generated texture without filtering, and the right images show the analogous images with filtering.
When filtering is not used, one can clearly see additional artifacts in the hand areas.}} \label{fig:ablation_filtering} \end{figure} \textbf{Size of training dataset.} We also evaluate the dependence of the performance on the size of the training dataset. In this experiment, we train TexNet and RefNet with 6000, 9000, and 12130 frames of the target sequence. See Table~\ref{tab:ablation_trainsize} for the quantitative results, and the supplementary video for the visual results. As expected, larger training sets have more pose variety and hence produce better results. For better generalizability, the poses in our training data should be as diverse as possible. If the testing pose is very different from all of the training poses, the synthesis quality degrades, but the results still look reasonable due to the generalizability of the networks (see, for example, the results with YouTube videos as driving sequences in the supplementary video). \begin{table}[ht] \centering \small \caption{Quantitative evaluation on the dependency of the performance on the training dataset size. We report the mean (for the whole sequence) of the L2 error and SSIM for the region of the person in the foreground. Our full training set obtains the best scores.} \begin{tabular}{ | l | c | c |} \hline & \textbf{L2 error} & \textbf{SSIM} \\ \hline \hline \textbf{6000 frames} & 10.003 & 0.749 \\ \hline \textbf{9000 frames} & 9.287 & 0.767 \\ \hline \textbf{12130 frames} & \textbf{8.675} & \textbf{0.784} \\ \hline \end{tabular} \label{tab:ablation_trainsize} \end{table} \section{Introduction} \begin{figure*} \includegraphics[width=\linewidth]{figures/teaser_new.jpg} \vspace{-0.8cm} \caption{\rev{We present an approach for synthesizing realistic videos of humans. Our method allows for: a) motion transfer between a pair of monocular videos, b) interactively controlling the pose of a person in the video, and c) monocular bullet time effects, where we freeze time and virtually rotate the camera.}} \label{fig:teaser} \end{figure*} Synthesizing realistic videos of humans is an important research topic in computer graphics and computer vision, with a broad range of applications in visual effects (VFX) and games, virtual reality (VR) and telepresence, AI assistants, and many more. In this work, we propose a novel machine learning approach for synthesizing a realistic video of an actor that is driven by a given motion sequence. Only a monocular video and a personalized template mesh of the actor are needed as input. The motion of the actor in the target video can be controlled in different ways, for example by transferring the motion of a different actor from a source video, or by controlling the video footage directly with an interactive handle-based editor. Nowadays, the de-facto standard for creating video-realistic animations of humans follows the conventional graphics-based human video synthesis pipeline, which relies on highly detailed animated 3D models. The creation of these involves multiple non-trivial, decoupled, and manual steps: 3D shape and appearance scanning or design, hand-design or motion capture of target motions and deformations, and time-consuming photorealistic rendering. Aiming to streamline and expedite this process, in recent years graphics and vision researchers have developed data-driven methods to generate realistic images \cite{MaSJSTV2017,Balakrishnan2018,Esser2018} and videos \cite{Chan2018,Lischinski2018,wang2018vid2vid,SiaroSLS2017,Liu2018Neural} of humans.
Many of these use variants of adversarially trained convolutional neural networks to translate coarse conditioning inputs, which encode human appearance and/or pose, into photo-realistic imagery. A prominent problem with existing methods is that fine-scale details are often over-smoothed and temporally incoherent, e.g., wrinkles often do not move coherently with the garments but appear to lie on a separate, spatially fixed layer floating in screen space (see the supplementary video). While some approaches try to address these challenges by enforcing temporal coherence in the adversarial training objective \cite{Chan2018,Lischinski2018,wang2018vid2vid}, we argue that most problems are due to a combination of two limiting factors: 1) The conditioning input is often a very coarse and sparse 2D or 3D skeleton pose rather than a more complete 3D human animation model. 2) Image translation is learned only in 2D screen space. This fails to properly disentangle appearance effects from residual image-space effects that are best handled by 2D image convolutions. Since appearance effects are best described on the actual 3D body surface, they should be handled by suitable convolutions that take the manifold structure into account. As a consequence of these effects, networks struggle to jointly generate results that show both complete human body imagery without missing body parts or silhouette errors and plausible, temporally coherent high-frequency surface detail. We propose a new human video synthesis method that tackles these limiting factors and explicitly disentangles learning of time-coherent pose-dependent fine-scale detail from the time-coherent pose-dependent embedding of the human in 2D screen space. Our approach relies on a monocular training video of the actor performing various motions, and a skinned person-specific template mesh of the actor. The latter is used to capture the shape and pose of the actor in each frame of the training video using an off-the-shelf monocular performance capture approach. Our video synthesis algorithm uses a three-stage approach based on two CNNs and the computer graphics texturing pipeline: 1) Given the target pose in each video frame, encoded as a surface normal map of the posed body template, the first CNN is trained to predict a dynamic texture map that contains the pose-dependent and time-coherent high-frequency detail. In this normalized texture space, local details such as wrinkles always appear at the same uv-location, since the rigid and articulated body motion is already factored out by the monocular performance capture algorithm, which significantly simplifies the learning task. This frees the network from the task of having to synthesize the body at the right screen-space location, leading to temporally more coherent and detailed results. 2) We apply the dynamic texture on top of the animated human body model to render a video of the animation that exhibits temporally stable high-frequency surface details, but that lacks effects that cannot be explained by the rendered mesh alone. 3) Finally, our second CNN conditions the generation of the final video on the temporally coherent output of the first CNN. This refinement network synthesizes foreground-background interactions, such as shadows, naturally blends the foreground and background, and corrects geometric errors caused by tracking/skinning inaccuracies, which might be especially visible at the silhouettes.
To the best of our knowledge, our approach is the first dynamic-texture neural rendering approach for human bodies that disentangles human video synthesis into explicit texture-space and image-space neural rendering steps: pose-dependent neural texture generation and translation of renderings into realistic video. This new problem formulation yields more accurate human video synthesis results, which better preserve the spatial, temporal, and geometric coherence of the actor's appearance compared to existing state-of-the-art methods. \revnew{As shown in Figure \ref{fig:teaser}, our approach can be utilized in various applications, such as human motion transfer, interactive reenactment, and novel-view synthesis from monocular video. In our experiments, we demonstrate these applications and show that our approach is superior to the state of the art both qualitatively and quantitatively. } Our main contributions are summarized as follows: \begin{itemize} \item A novel three-stage approach that disentangles learning pose-dependent fine-scale details from the pose-dependent embedding of the human in 2D screen space. \item High-resolution video synthesis of humans with controllable target motions and temporally coherent fine-scale detail. \end{itemize} \section{Discussion and Limitations} In addition to the presented use-cases of motion transfer, interactive reenactment, and novel-view synthesis, another potential application of our approach is the generation of annotated large-scale human image or video datasets. In particular, with the recent popularity of deep learning, such datasets could be used for many different computer vision tasks, such as human detection, body pose estimation, and person re-identification. Our experimental results demonstrate that our method outperforms previous approaches for the synthesis of human videos. However, there are still some issues that could be addressed in future work. One important issue is that the currently used neural network architectures (TexNet and RefNet) are computationally expensive to train. In order to move on to very high image resolutions, one needs to reduce the network training time. For example, training each network for an image resolution of $256 \times 256$ already takes two days, training for $512 \times 512$ takes about $6$ days, and training for $1024 \times 1024$ takes about $10$ days, in each case on 8 high-end GPUs. Another common issue of machine learning approaches is generalization. On the one hand, our trained networks can only produce reasonable results when the training data has a similar distribution to the test data. For example, it would not be possible to train a network using frontal body views only, and then synthesize reasonable backsides of a person. On the other hand, in our current approach we train person-specific networks, whereas it would be desirable to train networks for more general settings.
While we cannot claim that the results produced by our approach are entirely free of artifacts, we have demonstrated that, overall, the amount and severity of artifacts are significantly reduced compared to other methods. \rev{Another limitation is that we are not able to faithfully generate the fingers, since the human performance capture method cannot track finger motion. This can be alleviated in future work by incorporating a more sophisticated hand model and finger tracking components.} Furthermore, the artifacts regarding the hands and feet are due to the 3D tracking used for generating the training data. Errors in the 3D tracking lead to a misalignment between the ground truth image and the rendered mesh in the second stage, which makes it hard for the network to directly learn this mapping. \section{Method} In this section we describe our neural human video synthesis approach. As illustrated in Fig.~\ref{fig:pipeline}, given a monocular video of \revnew{a performing actor} and a textured mesh template of the actor, our method learns a person-specific embedding of the actor's appearance. To generate the training data, we first employ an off-the-shelf monocular human performance capture method \cite{reticam2018} to track the motion of the actor in the video (Sec.~\ref{sec:datamodel}). Based on the tracking results, we generate the (partial) dynamic texture by back-projecting the video frames to the animated template mesh. Having the motion data, partial dynamic textures, and the original video frames as the training corpus, our approach proceeds in three stages: In the first stage, we train our \emph{texture synthesis network (TexNet)} to regress a partial texture image, which depicts the \emph{pose-dependent} appearance details, such as wrinkles, given a certain pose as input. Here, the pose information is encoded in a (partial) normal map in uv-space in order to obtain an \emph{image-based pose encoding in texture space}. In the second stage, we complete the predicted \emph{partial} texture image to a \emph{complete} texture image (Sec.~\ref{sec:dynamictexture}), and render the mesh with this \emph{complete} texture image. In the third stage, we translate the renderings into a realistic video with our \emph{refinement network (RefNet)} (Sec.~\ref{sec:videosynthesis}). During testing, our method takes a motion clip from an arbitrary source (e.g., motion capture or artist-designed animations), and generates a video of the actor performing the input motion. \subsection{Training Data Generation}\label{sec:datamodel} In this section we describe the human character model, how its texture mapping is obtained, and how the human motion is captured. \textbf{Image Sequence.} Let $\setI_{1},\ldots, \setI_f$ be a given image sequence comprising $f$ frames of a human actor that performs motions. The $j$-th frame $\setI_j \in [0,1]^{w\times h \times 3}$ is an RGB image of dimension $w \times h$. \textbf{3D Character Model.} For each subject we create a 3D character model based on the multi-view image-based 3D reconstruction software Agisoft Photoscan\footnote{\url{http://www.agisoft.com/}}. To this end, we capture approximately a hundred images of the actor from different viewpoints in a static neutral pose (upright standing and the arms forming a ``T-pose'', see Fig.~\ref{fig:pipeline} ``Character model''). This data is then directly used as input to Photoscan, which produces a textured 3D model of the person, as shown in Fig.~\ref{fig:pipeline} (``Character model'' and ``Static texture'').
Then, we rig the character model with a parameterized skeleton model, similarly to other approaches (e.g.~\cite{reticam2018}). Based on this procedure we obtain a parameterized surface mesh model with vertex positions $\setM(\theta) \in \mathbb{R}^{n \times 3}$, where $n$ is the number of mesh vertices and $\theta \in \mathbb{R}^{33}$ is the pose parameter vector; among the $33$ scalar values, $6$ are global rigid pose parameters and $27$ are pose articulation parameters in terms of joint angles. \textbf{Texture Mapping.} For texture mapping, we unwrap the human body surface mesh and map it onto the unit square $[0,1]^2$ using the quasi-harmonic surface parameterization method of~\cite{zayer2005discrete}, which reduces the parametric distortion by attempting to undo the area distortion in the initial conformal mapping. To this end, the mesh is first cut along the spine, followed by two cuts along the legs, as well as three cuts along the arms and the head. Then, this boundary is mapped to the boundary of the square. An RGB texture $\setT \in [0,1]^{w \times h \times 3}$ created in this way is shown in Fig.~\ref{fig:pipeline} (``Static texture''). \textbf{Human Performance Capture.} We employ the recent real-time dense motion capture method of~\cite{reticam2018}. Their two-stage energy-based method first estimates the actor's pose by using a sparse set of body and face landmarks, as well as the foreground silhouette. The output of the motion capture stage is the pose vector $\theta$, which can be used to pose the surface model, resulting in a deformed mesh with vertex positions $\setM(\theta)$. Next, the reconstruction is refined on the surface level to account for local non-rigid deformations that cannot be captured by a pure skeleton-based deformation. To this end, per-vertex displacements are estimated using dense silhouette and photometric constraints. \textbf{Target Dynamic Texture Extraction.} After the performance capture, we generate the pose-specific partial dynamic texture $\setT_j$ by back-projecting the input image frame $\setI_j$ onto the performance capture result, i.e., the deformed mesh $\setM(\theta_j)$. Note that the generated dynamic textures are incomplete, since only the camera-facing side of the body is observed in our monocular setup. \begin{figure} \centerline{ \subfigure{\includegraphics[width=\linewidth]{figures/filtering/filtering_comparison.png} } } \caption{Effect of our filtering procedure. The top row shows the texture map before filtering, and the bottom row shows it after filtering. } \label{fig:filtering} \end{figure} Although the reconstructed 3D body model yields a faithful representation of the true body geometry, small tracking errors between the digital model and the real human are inevitable. A major issue is that such small misalignments directly result in an erroneous texture map $\setT_j$ (e.g., a common case is that a hand in front of the torso leads to the incorrect assignment of the hand color to a torso vertex, see Fig.~\ref{fig:filtering}). Using such noisy texture maps would be disadvantageous for learning, as the network would need to spend capacity on understanding and (implicitly) fixing these mismatches. Instead, based on a simple image-based analysis we filter out the erroneous parts and thereby avoid training data corruption.
The filtering method consists of four simple steps: \begin{enumerate}[(i)] \item First, we generate an average texture map $\overline{\setT} \in [0,1]^{w \times h \times 3}$ by averaging all colors of $\setT_1,\ldots,\setT_f$ along the temporal axis. Note that texels that correspond to occluded mesh vertices of $\setM(\theta_j)$, i.e., zero values in the texture map $\setT_j$, are not taken into account for averaging. \item Next, we use a $k$-means clustering procedure to cluster all the colors present in the average texture map $\overline{\setT}$, so that we obtain a small number $k$ of \emph{prototype colors} that are ``typical'' for the specific sequence at hand. \item Then, for all frames $j$ we assign to each texel of $\setT_j$ its nearest prototype color, which is then used to compute a (per-texel) histogram of the prototype colors over all the frames (again, only considering visible parts). \item Finally, for each texel we check whether there is a prototype color that only occurs very rarely. If yes, we suppose that it is caused by the transient color of a tracking error (e.g.~a wrongly tracked hand), and therefore discard the color assignment for this texel in all frames where the insignificant color is present. \end{enumerate} By doing so, erroneous color assignments are excluded from the partial textures to enhance network training quality (see the code sketch below). In Fig.~\ref{fig:filtering} we illustrate the effect of our filtering procedure. In addition, to prevent background pixels from being projected onto the mesh, we apply a foreground mask, generated with the video segmentation method of~\cite{Cae+17}, to the input images when doing the back-projection. \rev{Subsequently, we fill in the discarded texels based on the average texture $\overline{\setT}$.} The so-created partial dynamic textures $\setT_j$, together with the tracking results $\theta_j$, are then used as training data for our networks. \subsection{Dynamic Texture Synthesis}\label{sec:dynamictexture} We now describe our first network, the \emph{texture synthesis network} (TexNet), which generates a \emph{pose-dependent texture} given the corresponding pose $\theta$ as conditional input. With that, we are able to generate pose-dependent high-frequency details directly in texture space, such as cloth wrinkles, which otherwise would require complex and computationally expensive offline rendering approaches (e.g., cloth simulation). \textbf{Pose Encoding.} Since the texture that we aim to generate is represented in texture space (or \emph{uv-space}), it is advantageous to also use an input that lives in the same domain. Hence, we have chosen to represent the pose using a \emph{partial normal map in texture space} (cf.~Fig.~\ref{fig:pipeline}, ``Partial normal map''), which we denote by $\setN \in (\setS^2)^{w\times h}$, where $\setS^2$ is the unit 2-sphere embedded in 3D space (i.e., the set of all unit-length 3D vectors). We note that here we use the camera coordinate system for normal calculation, since the appearance/illumination would change if the person faced a different direction. In order to allow for texels that do not have an assigned normal, we include the zero vector in $\setS^2$. Compared to other pose representations, such as, for example, \rev{a depth map of a 3D skeleton}, using such an \emph{image-based} pose encoding in texture space facilitates simplified learning, because the network does not need to additionally learn the translation between different domains (\rev{see the ablation study}).
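For concreteness, the four-step texel filtering of Sec.~\ref{sec:datamodel} can be summarized in the following minimal NumPy sketch. This is our own illustrative code, not the original implementation; in particular, the number of prototype colors $k$ and the rarity threshold are not specified in the paper and are chosen here arbitrarily.
\begin{verbatim}
import numpy as np
from scipy.cluster.vq import kmeans2

def filter_partial_textures(T, k=16, min_freq=0.05):
    # T: float array (f, H, W, 3); all-zero texels mark occlusions.
    T = T.copy()
    visible = T.any(axis=-1)                      # (f, H, W) visibility masks
    counts = np.maximum(visible.sum(axis=0), 1)   # visible frames per texel
    avg = T.sum(axis=0) / counts[..., None]       # (i) temporal average texture
    # (ii) k prototype colors via k-means on the average texture
    protos, _ = kmeans2(avg.reshape(-1, 3), k, minit='++', seed=0)
    # (iii) nearest-prototype label per texel, per-texel label histogram
    labels = np.linalg.norm(T[..., None, :] - protos, axis=-1).argmin(axis=-1)
    hist = np.zeros(T.shape[1:3] + (k,))
    for j in range(T.shape[0]):
        ii, jj = np.nonzero(visible[j])
        np.add.at(hist, (ii, jj, labels[j, ii, jj]), 1.0)
    freq = hist / counts[..., None]
    # (iv) discard texels whose color is a rarely occurring (transient)
    # prototype and fill them from the average texture, as in the text
    for j in range(T.shape[0]):
        fj = np.take_along_axis(freq, labels[j][..., None], -1)[..., 0]
        bad = visible[j] & (fj < min_freq)
        T[j][bad] = avg[bad]
    return T
\end{verbatim}
In practice, the distance computation in step (iii) would be chunked over frames to limit memory; the sketch keeps it dense for readability.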
The partial normal map $\setN_j$ is created based on the 3D body reconstruction $\setM(\theta_j)$ at frame $j$. To this end, for each vertex of the fitted 3D model that is visible in the current frame, we compute its (world-space) surface normal, and then create the partial normal map using the mesh's uv-mapping (Sec.~\ref{sec:datamodel}). Note that those areas in the partial normal map that correspond to invisible vertices are set to zero, cf.~Fig.~\ref{fig:pipeline} (``Partial normal map''). \textbf{Texture Synthesis Network.} The TexNet has the purpose of creating a pose-dependent texture from a given input partial normal map, as illustrated in Fig.~\ref{fig:pipeline}. As such, we aim to learn the network parameters $\Theta$ that parameterize the TexNet $f^\text{tex}_{\Theta}$, translating a given partial normal map $\setN \in (\setS^2)^{w \times h}$ to a pose-dependent texture $\setT \in [0,1]^{w \times h \times 3}$. For training the network, we require pairs of partial normal maps and target partial texture maps $\{(\setN_j,\setT_j):1\leq j \leq f\}$, which are directly computed from the input sequence $\setI_1,\ldots,\setI_f$ based on motion capture, as described in Sec.~\ref{sec:datamodel}. At test time, for each frame $\setI_j$ the partial normal map $\setN_j$ is extracted using the 3D reconstruction $\setM(\theta_j)$, and the texture map $\setT_j = f^\text{tex}_{\Theta}(\setN_j)$ is synthesized by the network. \textbf{Network Architecture.} Since the recent \emph{vid2vid} network~\cite{wang2018vid2vid} was shown to synthesize photo-realistic and temporally consistent videos, we build our network upon its state-of-the-art architecture. It enforces temporal consistency within a local window (we set the window size to 3 in our experiments). This is achieved by leveraging optical-flow-based warping together with conditional generative adversarial networks (cGANs). The cGANs jointly learn the generator function $f^{\text{tex}}_{\Theta}$ to produce the output texture map $\setT = f^{\text{tex}}_{\Theta}(\setN)$ from a given conditioning input partial normal map $\setN$, along with a discriminator function $\setD$. The latter has the purpose of classifying whether a given texture map $\setT$ is a synthesized texture (produced by the generator $f^{\text{tex}}_{\Theta}$) or a real texture. The general cGAN loss function reads: \begin{align} \label{eq:cGAN} \setL^{\text{cGAN}}(f^{\text{tex}}_{\Theta}, \setD) &= \mathbb{E}_{\setT,\setN}(\log \setD(\setT,\setN)) \\ &\qquad + \mathbb{E}_{\setN}(\log(1{-}\setD(f^{\text{tex}}_{\Theta}(\setN),\setN))) \,. \nonumber \end{align} To obtain realistic individual frames, as well as a temporally consistent sequence of frames, a per-frame cGAN loss term $\setL^{\text{frm}}$ is used in combination with a video cGAN loss term $\setL^{\text{vid}}$ that additionally incorporates the previous two frames. Furthermore, the term $\setL^{\text{flow}}$ is used to learn the optical flow fields. The total learning problem now reads: \begin{align} \label{eq:vid2vid} \min_{f^{\text{tex}}_{\Theta}} \max_{\setD^{\text{frm}},\setD^{\text{vid}}} & \setL^{\text{frm}}(f^{\text{tex}}_{\Theta}, \setD^{\text{frm}}) + \setL^{\text{vid}}(f^{\text{tex}}_{\Theta}, \setD^{\text{vid}}) + \lambda \setL^{\text{flow}} \,. \end{align} \textbf{Training.} We use approximately 12,000 training pairs, each of which consists of the ground truth texture map $\setT$ as well as the partial normal map $\setN$.
For training, we set the hyper-parameter $\lambda=10$ in the loss function, and use the Adam optimizer ($lr = 0.0002$, $\beta_1 = 0.5$, $\beta_2 = 0.99$), which we run for a total of 10 epochs with a batch size of 8. For each sequence of $256 \times 256$ images, we use 8 Nvidia Tesla V100 GPUs to train for about 2 days. \revnew{\textbf{Runtime During Testing.} A forward pass of TexNet takes $\sim$8\,ms/frame to generate a $256\times256$ image on an Nvidia Tesla V100 GPU. } \subsection{High-fidelity Video Synthesis}\label{sec:videosynthesis} By synthesizing the texture using TexNet, we bake pose-specific high-frequency details into the texture. This texture is now used for generating the final output by means of a \emph{refinement network} (RefNet). The RefNet has the task of synthesizing the background, as well as dealing with background-foreground interactions, such as shadows. Moreover, it implicitly learns to correct geometric errors due to tracking misalignments and skinning errors. \textbf{Training Data.} In order to train the RefNet, we first run TexNet to obtain the (partial) dynamic texture maps of all frames. Subsequently, we fill in the invisible texels based on the average texture (across the temporal axis) to obtain a full texture map. Then, we use the full texture map to render the mesh of the 3D reconstruction obtained by motion capture. The RefNet is then trained on these data for the task of synthesizing the original input image, given the rendered mesh, cf.~Fig.~\ref{fig:pipeline}. \textbf{Network Architecture.} The architecture is the same as that of TexNet, with the main difference being that instead of learning a function that maps a partial normal map to a color texture, we now learn a function $f_{\Phi}^{\text{ref}}$ that maps a rendered image to a realistic output, see Fig.~\ref{fig:pipeline}. The loss function is analogous to Eq.~\ref{eq:vid2vid}, with $f_{\Phi}^{\text{ref}}$ in place of $f_{\Theta}^{\text{tex}}$. \textbf{Training.} We use approximately 12,000 training pairs, each of which consists of the rendered image and the original RGB image. For training, we set the hyper-parameter $\lambda=10$ in the loss function, and use the Adam optimizer ($lr = 0.0002$, $\beta_1 = 0.5$, $\beta_2 = 0.99$), which we run for a total of 10 epochs with a batch size of 8. For each sequence of $256\times256$ images, we use 8 Nvidia Tesla V100 GPUs to train for about 2 days. For higher-resolution results of $512\times 512$, we need about 6 days on the same GPUs. \revnew{\textbf{Runtime During Testing.} A forward pass of RefNet requires $\sim$8\,ms/frame to generate $256\times 256$ images on an Nvidia Tesla V100 GPU, $\sim$15\,ms/frame for $512\times 512$, and $\sim$33\,ms/frame for $1024\times 1024$.} \section{Related work} In the following, we discuss human performance capture, classical video-based rendering, and learning-based human performance cloning, as well as the underlying image-to-image translation approaches based on conditional generative adversarial networks. \textbf{Classical Video-based Characters.} Classically, the domain gap between coarse human proxy models and realistic imagery can be bridged using image-based rendering techniques. These strategies can be used for the generation of video-based characters \cite{Xu:SIGGRPAH:2011,Li:2017,Casas:2014,Volino2014} and enable free-viewpoint video \cite{Carranza:2003,borshukov2005universal,Li:2014,zitnick2004high,collet2015high}.
Even relightable performances can be obtained \cite{LiWSLVDT13} by disentangling illumination and scene reflectance. The synthesis of new body motions and viewpoints around an actor is also possible \cite{Xu:SIGGRPAH:2011} with such techniques. \textbf{Modeling Humans from Data.} Humans can be modeled from data using mesh-based 3D representations. For example, parametric models for different body parts are widely employed \cite{Blanz99,FLAME:2017,Berard14,Wood16,Wu16b,MANO:2017} in the literature. Deep Appearance Models \cite{Lombardi:2018} learn dynamic view-dependent texture maps for the human head. The paGAN \cite{Nagano:2018} approach builds a dynamic avatar from a single monocular image. Recently, models of the entire human body have become popular \cite{Anguelov:2005,SMPL:2015}. There are also some recent works on cloth modeling \cite{PonsMollSiggraph2017, Lahner_2018_ECCV, Yang_2018_ECCV}. One drawback of these models is that they do not model the appearance of dressed humans, i.e., the color of different garments. To tackle this problem, generative models based on neural networks have been applied to directly synthesize 2D images of humans without having to model the 3D content. First, these approaches were applied to individual parts of the human body \cite{Shrivastava2017,Mueller2017,kim2018DeepVideo}. Models that capture the appearance of clothing have also been proposed \cite{Lassner:GP:2017}. Nowadays, similar techniques are applied to the complete human body, i.e., for the synthesis of different poses \cite{MaSJSTV2017,SiaroSLS2017,Balakrishnan2018,Esser2018}. In contrast to previous approaches, we employ dense conditioning and learn dynamic high-frequency details in texture space to enable the temporally coherent generation of video. \textbf{Deep Video-based Performance Cloning.} Very recently, multiple approaches for video-based human performance cloning have been proposed \revnew{\cite{Liu2018Neural,Chan2018,Lischinski2018,wang2018vid2vid, Si_2018_CVPR, Pumarola_2018_CVPR, Esser_2018_ECCV_Workshops}} that output realistic video sequences. These approaches learn complex image-to-image mappings, i.e., from renderings of a skeleton \revnew{\cite{Chan2018, Pumarola_2018_CVPR, Esser_2018_ECCV_Workshops, Si_2018_CVPR}}, a dense mesh \cite{Liu2018Neural,wang2018vid2vid}, or joint position heatmaps \cite{Lischinski2018}, to real images. Liu et al.~\cite{Liu2018Neural} proposed to translate simple synthetic computer graphics renderings of a human character into realistic imagery. \textit{Everybody Dance Now} \cite{Chan2018} predicts two consecutive video frames and employs a space-time discriminator to obtain temporally more coherent synthesis results. Deep performance cloning \cite{Lischinski2018} combines paired and unpaired training based on a two-branch network for better generalization. The vid2vid \cite{wang2018vid2vid} approach learns high-resolution video-to-video translation based on a sequential RNN generator and uses optical flow for explicit forward warping of the last frame estimate. All these approaches learn an image-to-image mapping in 2D screen space based on a set of 2D convolution and deconvolution kernels. We argue that many artifacts of these approaches, e.g., that the synthesized images are over-smoothed and temporally incoherent in fine-scale detail, are due to two limiting factors: 1) only sparse 2D or 3D skeleton conditioning, and 2) learning the image translation in 2D screen space.
In contrast to existing methods, we tackle these limiting factors and explicitly disentangle learning of time-coherent pose-dependent detail in texture space from the pose-dependent embedding of the human in 2D screen space. \textbf{Surface-based Modeling with Deep Learning.} Several previous works have integrated neural synthesis into surface-based modeling~\cite{Neverova_2018_ECCV, Shysheya_2019_CVPR, guler2018densepose, Thies2019DeferredNR, li2019dense, lwb2019}. Deferred Neural Rendering~\cite{Thies2019DeferredNR} proposed an end-to-end training strategy to jointly learn neural textures and deferred neural rendering. It produces photo-realistic renderings of static scenes and faces from imperfectly reconstructed 3D geometry. Some works also focus on neural synthesis for human bodies. For example, DensePose~\cite{guler2018densepose} predicts UV coordinates of image pixels from the RGB input, and the works \cite{Shysheya_2019_CVPR, li2019dense, lwb2019} synthesize a new image of a person in a given pose based on a single image of that person. This is done by estimating dense 3D appearance flow to guide the transfer of pixels between poses. Textured Neural Avatars \cite{Shysheya_2019_CVPR} learns full-body neural avatars with static textures based on pretrained DensePose~\cite{guler2018densepose} results. In contrast, our work aims at generating dynamic textures for photo-realistic renderings of human bodies, which is a more challenging task. \textbf{3D Performance Capture of Humans.} Recent performance capture techniques applied to monocular data can provide the paired training corpora required for learning video-based performance cloning. Historically, 3D human performance capture has been based on complex capture setups, such as multi-view reconstruction studios with a large number of cameras \cite{matusik2000image,starck2007surface,waschbusch2005scalable,cagniart2010free,vlasic2009dynamic}. The highest-quality approaches combine active and passive depth sensing \cite{collet2015high,Dou:2016,Dou:2017,wang2016capturing}. Recent dense tracking approaches build on top of joint detections, either in 2D \cite{pishchulin2016deepcut,wei2016convolutional}, in 3D \cite{zhou2016deep,mehta2016monocular,pavlakos2016coarse}, or a combination thereof \cite{elhayek2015efficient,rosales2006combining,VNect_SIGGRAPH2017}. The set of sparse detections provides an initialization for optimization-based tracking approaches, allowing them to start near the optimum, which facilitates convergence. Many approaches simplify performance capture by tracking only the degrees of freedom of a low-dimensional skeleton \cite{gall2009motion,vlasic2008articulated,liu2011markerless}, thus resolving some of the ambiguities of truly dense capture. There is also a trend of using a reduced number of cameras, aiming to bring human performance capture to a commodity setting. For example, some approaches enable capturing human performances from two \cite{wu2013onset} or a sparse set of cameras \cite{de2008performance}. Recently, even lighter approaches \revnew{\cite{Zhang2014,bogo2015detailed,Helten:2013,yu2017bodyfusion, bogo2016smpl, 8491000, Pavlakos_2018_CVPR}} have been developed to deal with the rising demand for human performance capture in commodity settings, e.g., to enable virtual and augmented reality applications. Monocular dense 3D human performance capture \cite{MonoPerfCap_SIGGRAPH2018} is still a popular research problem, with real-time performance recently having been demonstrated for the first time \cite{reticam2018}.
\textbf{Conditional Generative Adversarial Networks.} Generative adversarial networks (GANs) \cite{GoodfPMXWOCB2014,RadfoMC2016,MirzaO2014,IsolaZZE2017} have been very successful in learning to generate arbitrary imagery using a generator network based on convolutional neural networks with an encoder-decoder structure \cite{HintoS2006}. They either start from scratch using a random vector \cite{GoodfPMXWOCB2014,RadfoMC2016}, or they learn conditional image-to-image synthesis based on an input image from a different domain \cite{MirzaO2014,IsolaZZE2017}. U-Nets \cite{RonneFB2015} with skip connections are often employed as generator networks. The discriminator network is trained based on a binary classification problem \cite{GoodfPMXWOCB2014} or is patch-based \cite{IsolaZZE2017}. The generator and the discriminator are jointly trained based on a minimax optimization problem. Very recently, high-resolution images have been generated using GANs \cite{KarraALL2018,WangLZTKC2018} with a progressive training strategy and using cascaded refinement networks \cite{ChenK2017}. While most of these techniques are trained in a fully supervised manner based on paired training data, some approaches tackle the harder problem of learning the translation between two domains based on unpaired data \cite{ZhuPIE2017,YiZTG2017,LiuBK2017,choi2017stargan}. Some recent works have studied the problem of video-to-video synthesis. Vid2vid~\cite{wang2018vid2vid} learns high-resolution video-to-video translation based on a sequential RNN generator and uses optical flow for explicit forward warping of the last frame estimate. The recently proposed Recycle-GAN \cite{Bansal2018} approach enables unpaired learning of a coherent video-to-video mapping. In our work, we employ two vid2vid networks: the first network generates a time-coherent texture with high-frequency details (e.g.~in clothing), and the second network produces the final output image by refining a rendering of a mesh that is textured with the output of the first network.
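To make the interplay of the two networks explicit, the following is a minimal PyTorch-style sketch of our inference pipeline for a single frame. All module and function names (e.g.~texnet, refnet, and the renderer interface) are our own placeholders, not part of any released code, and the temporal window of the vid2vid architecture is omitted for clarity.
\begin{verbatim}
import torch

@torch.no_grad()
def synthesize_frame(theta, body_model, renderer, texnet, refnet, avg_texture):
    # Pose the person-specific template mesh, M(theta).
    mesh = body_model(theta)
    # Pose encoding: partial normal map in uv-space (zeros = occluded texels).
    normal_map = renderer.partial_normal_map(mesh)      # (3, H, W)
    # Stage 1: TexNet regresses the pose-dependent partial dynamic texture.
    partial_tex = texnet(normal_map[None])[0]           # (3, H, W)
    # Texture completion: fill occluded texels from the average texture.
    visible = partial_tex.abs().sum(0, keepdim=True) > 0
    full_tex = torch.where(visible, partial_tex, avg_texture)
    # Stage 2: classical rendering of the posed mesh with the dynamic texture.
    rendering = renderer.render(mesh, full_tex)         # (3, H, W)
    # Stage 3: RefNet refines the rendering into the final realistic frame.
    return refnet(rendering[None])[0]
\end{verbatim}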
\section{Introduction} \label{sec:introduction} Since their introduction by Toro and Titarev \cite{toro1,toro3,toro4,titarevtoro,Toro:2006a}, ADER (arbitrary high order derivatives) schemes for hyperbolic partial differential equations (PDE) have been improved and developed along different directions. A key feature of these methods is their ability to achieve uniformly high order of accuracy in space and time in a single step, without the need of intermediate Runge-Kutta stages \cite{Pareschi2005,Puppo2015}, by exploiting the approximate solution of a Generalized Riemann Problem (GRP) at cell boundaries. ADER schemes were first conceived within the finite volume (FV) framework, but they were soon extended also to the discontinuous Galerkin (DG) finite element framework \cite{dumbser_jsc,taube_jsc} and to a unified formulation of FV and DG schemes, namely the so-called $\mathbb{P}_N\mathbb{P}_M$ approach \cite{Dumbser2008}. In the original ADER approach by Toro and Titarev, the approximate solution of the GRP is obtained through the solution of a conventional Riemann problem between the boundary-extrapolated values, and a sequence of linearized Riemann problems for the spatial derivatives. The required time derivatives in the GRP are obtained via the so-called Cauchy-Kowalevski procedure, which consists in replacing the time derivatives of the Taylor expansion at each interface with spatial derivatives of appropriate order, by resorting to the strong differential form of the PDE. Such an approach, though formally elegant, becomes prohibitive or even impossible as the complexity of the equations increases, especially for multidimensional problems and for relativistic hydrodynamics and magnetohydrodynamics. By contrast, in the modern reformulation of ADER~\cite{DumbserEnauxToro,Dumbser2008,Balsara2013934}, the approximate solution of the GRP is achieved by first evolving the data locally inside each cell through a \emph{local space-time discontinuous Galerkin predictor} (LSDG) step that is based on a weak form of the PDE, and, second, by solving a sequence of classical Riemann problems along the time axis at each element interface. This approach has the additional benefit that it can successfully cope with stiff source terms in the equations, which are often encountered in physical applications. For these reasons, ADER schemes have been applied to real physical problems mostly in their modern version. Notable examples of applications include the study of the Navier--Stokes equations, with or without chemical reactions~\cite{HidalgoDumbser,DumbserNSE}, geophysical flows~\cite{ADERNC}, complex three-dimensional free surface flows~\cite{Dumbser2013}, relativistic magnetic reconnection~\cite{DumbserZanotti,Zanotti2011b}, and the study of the Richtmyer--Meshkov instability in the relativistic regime~\cite{Zanotti2015b}. In the last few years, ADER schemes have been enriched with several additional properties, reaching a high level of flexibility. First of all, ADER schemes were soon extended to deal with non-conservative systems of hyperbolic PDE~\cite{Hidalgo2009,ADERNC,AMR3DNC}, by resorting to path-conservative methods~\cite{Pares2004,pares2006}. ADER schemes have also been extended to the Lagrangian framework, in which they are currently applied to the solution of multidimensional problems on unstructured meshes for various systems of equations~\cite{Lagrange2D,LagrangeNC,LagrangeMDRS,LagrangeMHD,Lagrange3D}.
On another front, ADER schemes have been combined with Adaptive Mesh Refinement (AMR) techniques~\cite{AMR3DCL,Zanotti2015}, exploiting the local properties of the discontinuous Galerkin predictor step, which is applied cell-by-cell irrespective of the level of refinement of the neighbour cells. Moreover, ADER schemes have also been used in combination with discontinuous Galerkin methods, even in the presence of shock waves and other discontinuities within the flow, thanks to a novel \textit{a posteriori} sub-cell finite volume limiter technique based on the MOOD approach \cite{CDL1,CDL2}, which is designed to stabilize the discrete solution wherever the DG approach fails and produces spurious oscillations or negative densities and pressures~\cite{Dumbser2014,Zanotti2015d,Zanotti2015c}. The various implementations of ADER schemes mentioned so far differ under several aspects, but they all share the following common feature: they apply the local space-time discontinuous Galerkin predictor to the conserved variables, which in turn implies that, if a WENO finite volume scheme is used, the spatial WENO reconstruction is also performed in terms of the conserved variables. Although this may be regarded as a reasonable choice, it has two fundamental drawbacks. The first one has to do with the fact that, as shown by \cite{Munz1986}, the reconstruction in conserved variables provides the worst shock-capturing fidelity when compared to the reconstruction performed either in primitive or in characteristic variables. The second drawback is instead related to computational performance. Since the computation of the numerical fluxes requires the calculation of integrals via Gaussian quadrature, the physical fluxes must necessarily be computed at each space-time Gauss--Legendre quadrature point. However, there are systems of equations (e.g. the relativistic hydrodynamics or magnetohydrodynamics equations) for which the physical fluxes can only be written in terms of the primitive variables. As a result, a conversion from the conserved to the primitive variables is necessary for the calculation of the fluxes, and this operation, which is never analytic for such systems of equations, is rather expensive. For these reasons it would be very desirable to have an ADER scheme in which both the reconstruction and the subsequent local space-time discontinuous Galerkin predictor are performed in primitive variables. It is the aim of the present paper to explore this possibility. It is also worth stressing that, in the context of high order finite difference Godunov methods based on traditional Runge--Kutta discretization in time, the reconstruction in primitive variables has proved to be very successful in the ECHO general relativistic code of \cite{DelZanna2007} (see also \cite{Bucciantini2011,Zanotti2011}). In spite of the obvious differences among the numerical schemes adopted, the approach that we propose here and the ECHO approach share the common feature of requiring a single (per cell) conversion from the conserved to the primitive variables. The plan of the paper is the following: in Sect.~\ref{sec:num-approach} we describe the numerical method, with particular emphasis on Sect.~\ref{sec:WENO_reconstruction} and Sect.~\ref{sec:localDG}, where the spatial reconstruction strategy and the local space-time discontinuous Galerkin predictor in primitive variables are described. The results of our new approach are presented in Sect.~\ref{sec:num-tests} for a set of four different systems of equations.
In Sect.~\ref{sec:extension} we show that the new strategy can also be extended to pure discontinuous Galerkin schemes, even in the presence of space-time adaptive meshes (AMR). Finally, Sect.~\ref{sec:conclusions} is devoted to the conclusions of the work. \section{Numerical Method} \label{sec:num-approach} We present our new approach for regular Cartesian meshes, although there is no conceptual reason preventing the extension to general curvilinear or unstructured meshes, which may be considered in future studies. \subsection{Formulation of the equations} We consider hyperbolic systems of balance laws that contain both conservative and non-conservative terms, i.e. \begin{equation} \label{NCsyst} \frac{\partial \u}{\partial t}+\nabla\cdot\bf F(\u)+\bf B(\u)\cdot\nabla \u=\bf S(\u)\,, \end{equation} where $\u \in \Omega_\mathbf{Q} \subset \mathds{R}^\nu$ is the state vector of the $\nu$ {\em conserved variables}, which, for the typical gas dynamics equations, are related to the conservation of mass, momentum and energy. ${\bf F}(\u)=[{\bf f}^x(\u),{\bf f}^y(\u),{\bf f}^z(\u)]$ is the flux tensor\footnote{Since we adopt Cartesian coordinates, ${\bf f}^x(\u),{\bf f}^y(\u),{\bf f}^z(\u)$ express the fluxes along the $x$, $y$ and $z$ directions, respectively.} for the conservative part of the PDE system, while ${\bf B}(\u)=[\textbf{B}_x(\u),\textbf{B}_y(\u),\textbf{B}_z(\u)]$ represents the non-conservative part of it. Finally, $\bf S(\u)$ is the vector of the source terms, which may or may not be present. In what follows it is convenient to recast the system (\ref{NCsyst}) in quasilinear form as \begin{equation} \label{Csyst} \frac{\partial \u}{\partial t}+ \bf{A}(\u) \cdot\nabla \u = \textbf{S}(\u)\,, \end{equation} where ${\bf A}(\u)=[{\bf A}_x,{\bf A}_y,{\bf A}_z]=\partial {\bf F}(\u)/\partial \u+{\bf B}(\u)$ accounts for both the conservative and the non-conservative contributions. As we shall see below, a proper discretization of Eq.~(\ref{Csyst}) can provide the time evolution of the conserved variables $\u$, but when the {\em primitive variables} $\mathbf{V}$ are adopted instead, Eq.~(\ref{Csyst}) translates into \begin{equation} \label{Primsyst} \frac{\partial \mathbf{V}}{\partial t}+ {\bf C}(\u) \cdot\nabla \mathbf{V} = \left( \frac{\partial \u}{\partial \mathbf{V}}\right)^{-1}{\bf S}(\u)\,, \quad \textnormal{ with } \quad {\bf C}(\u)=\left( \frac{\partial \u}{\partial \mathbf{V}}\right)^{-1} \bf{A}(\u) \left( \frac{\partial \u}{\partial \mathbf{V}} \right). \end{equation} In the following we suppose that the conserved variables $\u$ can always be written \textit{analytically} in terms of the primitive variables $\mathbf{V}$, i.e. the functions \begin{equation} \label{eq:prim2cons} \u=\u(\mathbf{V}) \end{equation} are supposed to be analytic for all PDE systems under consideration. On the contrary, the conversion from the conserved to the primitive variables, henceforth the {\em cons-to-prim conversion}, is not always available in closed form, i.e. the functions \begin{equation} \label{eq:cons2prim} \mathbf{V}=\mathbf{V}(\u) \end{equation} may \textit{not} be analytic (e.g. for relativistic hydrodynamics and magnetohydrodynamics, to be discussed in Sect.~\ref{sec:RMHD}), thus requiring an approximate numerical solution.
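As a concrete illustration (ours, not taken from the original text): for the classical Euler equations with an ideal-gas equation of state, with $\mathbf{V}=(\rho,u,p)$ and $\u=(\rho,\rho u,E)$ in one dimension, both directions of the conversion happen to be available in closed form, whereas for relativistic (magneto)hydrodynamics only the analogue of the first map below is analytic, and the inverse requires an iterative root finder:
\begin{verbatim}
import numpy as np

GAMMA = 1.4  # ideal-gas adiabatic index (illustrative choice)

def prim2cons(V):
    # Analytic map Q(V) for 1D Euler: V = (rho, u, p) -> Q = (rho, rho*u, E).
    rho, u, p = V
    E = p / (GAMMA - 1.0) + 0.5 * rho * u**2   # total energy density
    return np.array([rho, rho * u, E])

def cons2prim(Q):
    # Inverse map V(Q); closed form for Euler, but for relativistic MHD
    # this step must be replaced by a nonlinear root solver.
    rho, m, E = Q
    u = m / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([rho, u, p])
\end{verbatim}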
As a result, the matrix $\left( \frac{\partial \u}{\partial \mathbf{V}}\right)^{-1}$, which in principle could simply be computed as \begin{equation} \left( \frac{\partial \u}{\partial \mathbf{V}}\right)^{-1}=\left( \frac{\partial \mathbf{V}}{\partial \u}\right)\,, \end{equation} in practice cannot be obtained in this manner, but must be computed as \begin{equation} \left( \frac{\partial \u}{\partial \mathbf{V}}\right)^{-1}= \mathbf{M}^{-1} \,, \end{equation} where we have introduced the notation \begin{equation} \mathbf{M} = \left( \frac{\partial \u}{\partial \mathbf{V}}\right)\,, \end{equation} which will be used repeatedly below. Since $\u(\mathbf{V})$ is supposed to be analytic, the matrix $\mathbf{M}$ can be easily computed. Equation (\ref{NCsyst}) will serve as the master equation to evolve the cell averages of the conserved variables $\u$ via a standard finite volume scheme. However, both the spatial WENO reconstruction and the subsequent LSDG predictor will act on the primitive variables $\mathbf{V}$, hence relying on the alternative formulation given by Eq.~(\ref{Primsyst}). The necessary steps to obtain such a scheme are described in Sections~\ref{sec:FV}--\ref{sec:localDG} below. \subsection{The finite volume scheme} \label{sec:FV} In Cartesian coordinates, we discretize the computational domain $\Omega$ through space-time control volumes ${\mathcal I}_{ijk}=I_{ijk}\times [t^n,t^n+\Delta t]=[x_{i-\frac{1}{2}},x_{i+\frac{1}{2}}]\times[y_{j-\frac{1}{2}},y_{j+\frac{1}{2}}]\times[z_{k-\frac{1}{2}},z_{k+\frac{1}{2}}]\times [t^n,t^n+\Delta t]$, with $\Delta x_i=x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}}$, $\Delta y_j=y_{j+\frac{1}{2}}-y_{j-\frac{1}{2}}$, $\Delta z_k=z_{k+\frac{1}{2}}-z_{k-\frac{1}{2}}$ and $\Delta t=t^{n+1}-t^n$. Integration of Eq.~(\ref{NCsyst}) over ${\mathcal I}_{ijk}$ yields the usual finite volume discretization \begin{eqnarray} \label{FVformula} \label{eq:finite_vol} {\bar \u}_{ijk}^{n+1}&=&{\bar \u}_{ijk}^{n}- \frac{\Delta t}{\Delta x_i}\left[\left({\textbf f}^x_{i+\frac{1}{2},j,k} -{\textbf f}^x_{i-\frac{1}{2},j,k}\right)+\frac{1}{2} \left({{D}}^x_{i+\frac{1}{2},j,k} +{{D}}^x_{i-\frac{1}{2},j,k}\right)\right]\nonumber\\ && \hspace{9mm} -\frac{\Delta t}{\Delta y_j}\left[\left({\textbf f}^y_{i,j+\frac{1}{2},k}-{\textbf f}^y_{i,j-\frac{1}{2},k}\right)+\frac{1}{2} \left({{D}}^y_{i,j+\frac{1}{2},k}+{{D}}^y_{i,j-\frac{1}{2},k}\right)\right]\nonumber\\ && \hspace{9mm} -\frac{\Delta t}{\Delta z_k}\left[\left({\textbf f}^z_{i,j,k+\frac{1}{2}}-{\textbf f}^z_{i,j,k-\frac{1}{2}}\right)+\frac{1}{2} \left({{D}}^z_{i,j,k+\frac{1}{2}}+{{D}}^z_{i,j,k-\frac{1}{2}}\right)\right] + \Delta t({\bf \bar{S}}_{ijk}- {\bf \bar{P}}_{ijk})\,, \end{eqnarray} where the cell average \begin{equation} {\bar \u}_{ijk}^{n}=\frac{1}{\Delta x_i}\frac{1}{\Delta y_j}\frac{1}{\Delta z_k}\int_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}\int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}\int_{z_{k-\frac{1}{2}}}^{z_{k+\frac{1}{2}}}{\u}(x,y,z,t^n)\,dz\,dy\,dx \end{equation} is the spatial average of the vector of conserved quantities at time $t^n$. In Eq.~(\ref{eq:finite_vol}) we recognize two different sets of terms, namely those due to the conservative part of the system (\ref{NCsyst}), and those coming from the non-conservative part of it. In the former set we include the three time-averaged fluxes \begin{equation} \label{averF} {\bf f}^x_{i+\frac{1}{2},jk}= \frac{1}{\Delta t}\frac{1}{\Delta y_j}\frac{1}{\Delta z_k} \hspace{-1mm} \int \limits_{t^n}^{t^{n+1}} \!
\int \limits_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}} \! \int \limits_{z_{k-\frac{1}{2}}}^{z_{k+\frac{1}{2}}} \hspace{-1mm} {\bf \tilde f}^x \! \left({\v}_h^-(x_{i+\frac{1}{2}},y,z,t),{\v}_h^+(x_{i+\frac{1}{2}},y,z,t)\right) dz \, dy \, dt, \end{equation} \begin{equation} \label{averG} {\bf f}^y_{i,j+\frac{1}{2},k}=\frac{1}{\Delta t}\frac{1}{\Delta x_i}\frac{1}{\Delta z_k} \hspace{-1mm} \int \limits_{t^n}^{t^{n+1}} \! \int \limits_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} \! \int \limits_{z_{k-\frac{1}{2}}}^{z_{k+\frac{1}{2}}} \hspace{-1mm} {\bf \tilde f}^y \! \left({\v}_h^-(x,y_{j+\frac{1}{2}},z,t),{\v}_h^+(x,y_{j+\frac{1}{2}},z,t)\right) dz\,dx\,dt, \end{equation} \begin{equation} \label{averH} {\bf f}^z_{ij,k+\frac{1}{2}}=\frac{1}{\Delta t}\frac{1}{\Delta x_i}\frac{1}{\Delta y_j} \hspace{-1mm} \int \limits_{t^n}^{t^{n+1}} \! \int \limits_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} \! \int \limits_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}} \hspace{-1mm} {\bf \tilde f}^z\! \left({\v}_h^-(x,y,z_{k+\frac{1}{2}},t),{\v}_h^+(x,y,z_{k+\frac{1}{2}},t)\right) dy\,dx\,dt \end{equation} and the space-time averaged source term \begin{equation} \label{source:S} {\bf \bar{S}}_{ijk}=\frac{1}{\Delta t}\frac{1}{\Delta x_i}\frac{1}{\Delta y_j}\frac{1}{\Delta z_k}\int \limits_{t^n}^{t^{n+1}}\int \limits_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}\int \limits_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}\int \limits_{z_{k-\frac{1}{2}}}^{z_{k+\frac{1}{2}}}{ \bf S}\left(\v_h(x,y,z,t)\right) dz\,dy\,dx\,dt\,. \end{equation} We emphasize that the terms ${\v}_h$ in Eqs.~(\ref{averF})--(\ref{source:S}), as well as in the few equations below, are piecewise space-time polynomials of degree $M$ in {\em primitive variables}, computed according to a suitable LSDG predictor based on the formulation (\ref{Primsyst}), as we will discuss in Sect.~\ref{sec:localDG}. This marks a striking difference with respect to traditional ADER schemes, in which such polynomials are instead computed in conserved variables and are denoted as ${\bf q}_h$ (see, e.g. \cite{HidalgoDumbser}). The integrals over the smooth part of the non-conservative terms in Eq.~(\ref{eq:finite_vol}) yield the following contribution, \begin{equation} {\bf{\bar{P}}}_{ijk}=\frac{1}{\Delta t}\frac{1}{\Delta x_i}\frac{1}{\Delta y_j}\frac{1}{\Delta z_k}\int \limits_{t^n}^{t^{n+1}}\int \limits_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}}\int \limits_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}\int \limits_{z_{k-\frac{1}{2}}}^{z_{k+\frac{1}{2}}}{\bf B}({\v}_h) \mathbf{M} \, \nabla {\v}_h \,dz\,dy\,dx\,dt\,, \end{equation} while the {\em jumps} across the element boundaries are treated within the framework of path-conservative schemes \cite{Pares2004,pares2006,Munoz2007,Castro2006,Castro2008,NCproblems} based on the Dal Maso--Le Floch--Murat theory \cite{DLMtheory} as \begin{equation} {{D}}^x_{i+\frac{1}{2},j,k} \!=\!\! \frac{1}{\Delta t}\frac{1}{\Delta y_j}\frac{1}{\Delta z_k} \hspace{-1mm} \int \limits_{t^n}^{t^{n+1}} \! \int \limits_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}} \! \int \limits_{z_{k-\frac{1}{2}}}^{z_{k+\frac{1}{2}}} \hspace{-2mm} {\cal{D}}_x \! \left({\v}_h^-(x_{i+\frac{1}{2}},y,z,t),{\v}_h^+(x_{i+\frac{1}{2}},y,z,t)\right) dz \, dy \, dt, \end{equation} \begin{equation} {{D}}^y_{i,j+\frac{1}{2},k} \!=\!\! \frac{1}{\Delta t}\frac{1}{\Delta x_i}\frac{1}{\Delta z_k} \hspace{-1mm} \int \limits_{t^n}^{t^{n+1}} \! \int \limits_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} \! \int \limits_{z_{k-\frac{1}{2}}}^{z_{k+\frac{1}{2}}} \hspace{-2mm} {\cal{D}}_y \!
\left({\v}_h^-(x,y_{j+\frac{1}{2}},z,t),{\v}_h^+(x,y_{j+\frac{1}{2}},z,t)\right) dz \, dx \, dt, \end{equation} \begin{equation} {{D}}^z_{i,j,k+\frac{1}{2}} \!=\!\! \frac{1}{\Delta t}\frac{1}{\Delta x_i}\frac{1}{\Delta y_j} \hspace{-1mm} \int \limits_{t^n}^{t^{n+1}} \! \int \limits_{x_{i-\frac{1}{2}}}^{x_{i+\frac{1}{2}}} \! \int \limits_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}} \hspace{-2mm} {\cal{D}}_z \! \left({\v}_h^-(x,y,z_{k+\frac{1}{2}},t),{\v}_h^+(x,y,z_{k+\frac{1}{2}},t)\right) dy \, dx \, dt\,. \end{equation} According to this approach, the following path integrals must be prescribed \begin{equation} \label{eqn.dm} {\cal{D}}_i({\v}_h^-,{\v}_h^+) = \int_0^1{{\bf B}_i \left(\Psi({\v}_h^-,{\v}_h^+,s)\right) \mathbf{M}\left(\Psi({\v}_h^-,{\v}_h^+,s)\right) \frac{\partial\Psi}{\partial s}ds}, \qquad i \in \left\{ x,y,z \right\}, \end{equation} where $\Psi(s)$ is a path joining the left and right boundary-extrapolated states ${\v}_h^-$ and ${\v}_h^+$ in the state space of the primitive variables. The simplest option is to use a straight-line segment path \begin{equation} \label{segment} \Psi = \Psi({\v}_h^-,{\v}_h^+,s) ={\v}_h^- + s({\v}_h^+ - {\v}_h^-)\,,\qquad 0\leq s \leq 1. \end{equation} Pragmatic as it is\footnote{See \cite{MuellerWB} for more sophisticated paths.}, the choice of the path (\ref{segment}) allows us to evaluate the terms ${\cal{D}}_i$ in (\ref{eqn.dm}) as \begin{equation} \label{Osher-D} {\cal{D}}_i({\v}_h^-,{\v}_h^+) = \left( \int_0^1{ {\bf B}_i \left(\Psi({\v}_h^-,{\v}_h^+,s)\right) \mathbf{M}\left(\Psi({\v}_h^-,{\v}_h^+,s)\right) ds} \right) \left( {\v}_h^+ - {\v}_h^- \right)\,, \end{equation} which we compute through a three-point Gauss--Legendre formula \cite{USFORCE2,OsherNC,OsherUniversal}. The computation of the numerical fluxes ${\bf \tilde f}^i$ in Eq.~(\ref{averF}) requires the use of an approximate Riemann solver, see \cite{toro-book}. In this work we have limited our attention to a local Lax-Friedrichs flux (Rusanov flux) and to the Osher-type flux proposed in \cite{OsherUniversal,OsherNC,ApproxOsher}. Both of them can be written formally as \begin{equation} {\bf \tilde f}^i = \frac{1}{2}\left( \mathbf{f}^i(\v_h^-) + \mathbf{f}^i(\v_h^+) \right) - \frac{1}{2}\mathbf{D}_i \, \widetilde \mathbf{M} \, \left( \v_h^+ - \v_h^- \right)\,, \qquad i \in \left\{ x,y,z \right\} \label{eqn.numerical.flux} \end{equation} where $\mathbf{D}_i \geq 0$ is a positive-definite dissipation matrix that depends on the chosen Riemann solver. For the Rusanov flux it simply reads \begin{equation} \mathbf{D}^{\textnormal{Rusanov}}_i = |s_{\max}| \mathbf{I} \,, \label{eqn.rusanov} \end{equation} where $|s_{\max}|$ is the maximum absolute value of the eigenvalues admitted by the PDE and $\mathbf{I}$ is the identity matrix. The matrix $\widetilde\mathbf{M}$ is a \textit{Roe matrix} that allows us to write the jump in the conserved variables in terms of the jump in the primitive variables, i.e. \begin{equation} \mathbf{q}_h^+ - \mathbf{q}_h^- = \mathbf{Q}(\v_h^+) - \mathbf{Q}(\v_h^-) = \widetilde\mathbf{M} \, \left( \v_h^+ - \v_h^- \right).
\end{equation}
Since $\mathbf{M} = \partial \mathbf{Q} / \partial \mathbf{V}$, the Roe matrix $\widetilde{\mathbf{M}}$ can be easily defined by a path integral as
\begin{equation} \mathbf{Q}(\v_h^+) - \mathbf{Q}(\v_h^-) = \int \limits_0^1 \mathbf{M} (\Psi(\v_h^-,\v_h^+,s)) \frac{\partial\Psi}{\partial s} ds = \widetilde{\mathbf{M}} \, \left( \v_h^+ - \v_h^- \right), \label{eqn.MRoe1} \end{equation}
which in the case of the simple straight-line segment path \eqref{segment} leads to the expression
\begin{equation} \widetilde{\mathbf{M}} = \int \limits_0^1 \mathbf{M} (\Psi(\v_h^-,\v_h^+,s)) ds. \label{eqn.MRoe2} \end{equation}
In the case of the Osher-type flux, on the other hand, the dissipation matrix reads
\begin{eqnarray} \label{eqn.osher} \mathbf{D}^{\textnormal{Osher}}_i = \int \limits_0^1 |{\bf A}_i(\Psi(\v_h^-,\v_h^+,s))| ds\,, \end{eqnarray}
with the usual definition of the matrix absolute value operator
\begin{equation} |{\bf A}|={\bf R}|{\bf \Lambda}|{\bf R}^{-1}\,,\qquad |{\bf \Lambda}|={\rm diag}(|\lambda_1|, |\lambda_2|, \ldots, |\lambda_\nu|)\,. \end{equation}
The path $\Psi$ in Eqs.~(\ref{eqn.osher}) and (\ref{eqn.MRoe2}) is the same segment path adopted in (\ref{segment}) for the computation of the jumps ${\cal{D}}_i$.
\subsection{A novel WENO reconstruction in primitive variables}
\label{sec:WENO_reconstruction}
Since we want to compute the time averaged fluxes [cf.\ Eqs.~(\ref{averF})--(\ref{averH})] and the space-time averaged sources [cf.\ Eq.~(\ref{source:S})] directly from the primitive variables $\mathbf{V}$, it is necessary to reconstruct a WENO polynomial in primitive variables. However, the underlying finite volume scheme \eqref{FVformula} still advances in time the cell averages of the conserved variables $\bar{\mathbf{Q}}_{ijk}^n$, which are the only known input quantities at the reference time level $t^n$. Hence, the whole procedure is performed through the following three simple steps: \begin{enumerate} \item We perform a {\em first} standard spatial WENO reconstruction of the conserved variables starting from the cell averages $\bar{\mathbf{Q}}_{ijk}^{n}$. This yields a reconstructed polynomial $\mathbf{w}_h(x,y,z,t^n)$ in conserved variables valid within each cell. \item Since $\mathbf{w}_h(x,y,z,t^n)$ is defined at any point inside the cell, we simply \textit{evaluate} it at the cell center in order to obtain the \textit{point value} $\mathbf{Q}_{ijk}^{n}= \mathbf{w}_h(x_i,y_j,z_k,t^n)$. This conversion from cell averages $\bar{\mathbf{Q}}_{ijk}^{n}$ to point values $\mathbf{Q}_{ijk}^{n}$ is the \textbf{main key idea} of our new method, since the simple identity $\mathbf{Q}_{ijk}^{n} = \bar{\mathbf{Q}}_{ijk}^{n}$ is valid only up to second order of accuracy! After that, we perform a conversion from the point-values of the conserved variables to the point-values of the primitive variables, i.e. we apply Eq.~(\ref{eq:cons2prim}), thus obtaining the corresponding primitive variables $\mathbf{V}_{ijk}^{n} = \mathbf{V}(\mathbf{Q}_{ijk}^{n})$ at each cell center. This is the only step in the entire algorithm that needs a conversion from the conserved to the primitive variables. \item Finally, from the point-values of the primitive variables at the cell centers, we perform a {\em second} WENO reconstruction to obtain a reconstruction polynomial in \textit{primitive variables}, denoted as $\mathbf{p}_h(x,y,z,t^n)$. This polynomial is then used as the initial condition for the new local space--time DG predictor in primitive variables described in Sect.~\ref{sec:localDG}.
\end{enumerate}
As for the choice of the spatial WENO reconstruction, we have adopted a dimension-by-dimension reconstruction strategy, discussed in full detail in our previous works (see \cite{AMR3DCL,AMR3DNC,Zanotti2015}). Briefly, we first introduce space-time reference coordinates $\xi,\eta,\zeta,\tau\in[0,1]$, defined by
\begin{equation} \label{eq:xi} x = x_{i-\frac{1}{2}} + \xi \Delta x_i, \quad y = y_{j-\frac{1}{2}} + \eta \Delta y_j, \quad z = z_{k-\frac{1}{2}} + \zeta \Delta z_k, \quad t = t^n + \tau \Delta t\,, \end{equation}
and, along each spatial direction, we define a basis of polynomials $\{\psi_l(\lambda)\}_{l=1}^{M+1}$, each of degree $M$, formed by the $M+1$ Lagrange interpolating polynomials that pass through the $M+1$ Gauss-Legendre quadrature nodes $\{\mu_k\}_{k=1}^{M+1}$. According to the WENO philosophy, a number of stencils is introduced such that the final polynomial is a data-dependent nonlinear combination of the polynomials computed from each stencil. Here, we use a fixed number $N_s$ of one-dimensional stencils, namely $N_s=3$ for odd order schemes (even polynomials of degree $M$), and $N_s=4$ for even order schemes (odd polynomials of degree $M$). For example, focusing on the $x$ direction for convenience, every stencil along $x$ is formed by the union of $M+1$ adjacent cells, i.e.
\begin{equation} \label{eqn.stencildef} \mathcal{S}_{ijk}^{s,x} = \bigcup \limits_{e=i-L}^{i+R} {I_{ejk}}\,, \end{equation}
where $L=L(M,s)$ and $R=R(M,s)$ denote the spatial extension of the stencil to the left and to the right, respectively.\footnote{See Appendix A of \cite{Zanotti2015} for a graphical representation.} Now, an important difference emerges depending on whether we are reconstructing the conserved or the primitive variables. In the former case, corresponding to the computation of $\mathbf{w}_h(x,y,z,t^n)$ at step $1$ above, we require that the reconstructed polynomial preserve the \textit{cell-averages} of the \textit{conserved variables} over each element $I_{ijk}$. Since the polynomials reconstructed along the $x$ direction can be written as
\begin{equation} \label{eqn.recpolydef.x} \mathbf{w}^{s,x}_h(x,t^n) = \sum \limits_{r=0}^M \psi_r(\xi) \hat \mathbf{w}^{n,s}_{ijk,r} := \psi_r(\xi) \hat \mathbf{w}^{n,s}_{ijk,r}\,, \end{equation}
the reconstruction equations read
\begin{equation} \frac{1}{\Delta x_e} \int \limits_{x_{e-\frac{1}{2}}}^{x_{e+\frac{1}{2}}} \mathbf{w}_h^x(x,t^n) dx = \frac{1}{\Delta x_e} \int \limits_{x_{e-\frac{1}{2}}}^{x_{e+\frac{1}{2}}} \psi_r(\xi(x)) \hat \mathbf{w}^{n,s}_{ijk,r} \, dx = {\bar{\mathbf{Q}}}^n_{ejk}, \qquad \forall {I}_{ejk} \in \mathcal{S}_{ijk}^{s,x}\,. \label{eqn.rec.x} \end{equation}
Equations~(\ref{eqn.rec.x}) provide a system of $M+1$ linear equations for the unknown coefficients $\hat \mathbf{w}^{n,s}_{ijk,r}$, which is conveniently solved using standard linear algebra routines. Once this operation has been performed for each stencil, we construct a data-dependent nonlinear combination of the resulting polynomials, i.e.
\begin{equation} \label{eqn.weno} \mathbf{w}_h^x(x,t^n) = \psi_r(\xi) \hat \mathbf{w}^{n}_{ijk,r}, \quad \textnormal{ with } \quad \hat \mathbf{w}^{n}_{ijk,r} = \sum_{s=1}^{N_s} \omega_s \hat \mathbf{w}^{n,s}_{ijk,r}\,. \end{equation}
The nonlinear weights $\omega_s$ are computed according to the WENO approach \cite{shu_efficient_weno} and their explicit expression can be found in \cite{AMR3DCL,AMR3DNC,Zanotti2015}. The whole procedure must be repeated along the two directions $y$ and $z$.
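To make the reconstruction step concrete, the following minimal sketch carries out the one-dimensional polynomial WENO reconstruction from cell averages for $M=2$ on a uniform grid: each stencil yields a small linear system of the type (\ref{eqn.rec.x}), and the resulting polynomials are blended through nonlinear weights. The sketch is written in Python with purely illustrative names and data, replaces the nodal Lagrange basis by the monomial basis on the reference cell, and uses plausible linear weights, exponent and oscillation indicators in the spirit of \cite{shu_efficient_weno,AMR3DCL}; it is not the exact implementation adopted in our code.
\begin{verbatim}
import numpy as np

M = 2                                   # polynomial degree
stencils = [(-2, 0), (-1, 1), (0, 2)]   # cells e covered by each stencil

def average_matrix(lo, hi):
    # A[e, r] = average of xi**r over cell e, which occupies [e, e+1]
    # in the reference coordinate of the central cell i
    return np.array([[((e + 1)**(r + 1) - e**(r + 1)) / (r + 1)
                      for r in range(M + 1)]
                     for e in range(lo, hi + 1)])

def oscillation(c):
    # Jiang-Shu-type indicator: L2 norms of the derivatives on [0, 1]
    p, beta = np.polynomial.Polynomial(c), 0.0
    for _ in range(M):
        p = p.deriv()
        q = (p * p).integ()
        beta += q(1.0) - q(0.0)
    return beta

def weno_reconstruct(ubar):
    # ubar holds the five cell averages of cells i-2, ..., i+2
    lam = np.array([1.0, 1e5, 1.0])     # the central stencil is favored
    coeffs, w = [], []
    for s, (lo, hi) in enumerate(stencils):
        c = np.linalg.solve(average_matrix(lo, hi), ubar[2 + lo:3 + hi])
        coeffs.append(c)
        w.append(lam[s] / (oscillation(c) + 1e-14)**4)
    w = np.array(w) / np.sum(w)         # normalized nonlinear weights
    return sum(ws * cs for ws, cs in zip(w, coeffs))

ubar = np.sin(0.3 * np.arange(-2, 3))   # mock cell averages
c = weno_reconstruct(ubar)
print(np.polynomial.Polynomial(c)(0.5)) # point value at the cell center
\end{verbatim}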
Hence, although each direction is treated separately, the net effect provides a genuine multidimensional reconstruction. We now proceed with the \textbf{key step} of the new algorithm presented in this paper and compute the \textit{point values} of the conserved quantities at the cell centers, simply by \textit{evaluating} the reconstruction polynomials at the barycenter of each control volume:
\begin{equation} \mathbf{Q}_{ijk}^n = \mathbf{w}_h \left( x_i,y_j,z_k,t^n \right). \label{eqn.pointeval} \end{equation}
These point values of the conserved quantities $\mathbf{Q}_{ijk}^n$ are now converted into point values of the primitive variables $\mathbf{V}_{ijk}^n$, which requires only a single {\em cons-to-prim conversion} per cell. In RHD and RMHD, this is one of the most expensive and most delicate parts of the entire algorithm:
\begin{equation} \mathbf{V}_{ijk}^n = \mathbf{V} \left( \mathbf{Q}_{ijk}^n \right). \label{eqn.cons2prim} \end{equation}
The reconstruction polynomials in primitive variables are spanned by the same basis functions $\psi_r(\xi)$ used for $\mathbf{w}_h$, hence
\begin{equation} \label{eqn.recpolydefprim.x} \mathbf{p}^{s,x}_h(x,t^n) = \sum \limits_{r=0}^M \psi_r(\xi) \hat \mathbf{p}^{n,s}_{ijk,r} := \psi_r(\xi) \hat \mathbf{p}^{n,s}_{ijk,r}\,. \end{equation}
According to step $3$ listed above, we now require that the reconstructed polynomial interpolate the \textit{point-values} of the \textit{primitive variables} at the centers of the cells forming each stencil, i.e.
\begin{equation} \mathbf{p}_h^x(x_e,t^n) = \psi_r(\xi(x_e)) \hat \mathbf{p}^{n,s}_{ijk,r} = \mathbf{V}_{ejk}^n\,, \qquad \forall {I}_{ejk} \in \mathcal{S}_{ijk}^{s,x}. \label{eqn.recprim.x} \end{equation}
The reconstruction equations~(\ref{eqn.recprim.x}) again generate a system of $M+1$ linear equations for the unknown coefficients $\hat \mathbf{p}^{n,s}_{ijk,r}$. The rest of the WENO logic applies in the same way, leading to
\begin{equation} \label{eqn.weno.prim} \mathbf{p}_h^x(x,t^n) = \psi_r(\xi) \hat \mathbf{p}^{n}_{ijk,r}, \quad \textnormal{ with } \quad \hat \mathbf{p}^{n}_{ijk,r} = \sum_{s=1}^{N_s} \omega_s \hat \mathbf{p}^{n,s}_{ijk,r}\,. \end{equation}
We emphasize that, thanks to our polynomial WENO reconstruction (instead of the original point-wise WENO reconstruction of Jiang and Shu \cite{shu_efficient_weno}), the point-value of $\mathbf{w}_h(x,y,z,t^n)$ at each cell center, which is required at step $2$ above, is readily available after evaluating the basis functions at the cell center. In other words, there is no need to perform any special transformation from cell averages to point-values via Taylor series expansions, as in \cite{Buchmuller2014,Buchmuller2015}. On the other hand, since the WENO reconstruction is performed twice, once for the conserved variables and once for the primitive variables, we expect that our new approach will become convenient in terms of computational efficiency only for those systems of equations characterized by relations $\mathbf{V}(\mathbf{Q})$ that cannot be written in closed form. In such circumstances, in fact, reducing the number of {\em cons-to-prim conversions} from $M (M+1)^{d+1} + d (M+1)^d$ in $d$ space dimensions (due to the space-time predictor and the numerical flux computation in the finite volume scheme) to just \textit{one single conversion} per cell will compensate for the double WENO reconstruction in space that we must perform.
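The interplay of the two reconstruction stages can be condensed into a second sketch (same conventions as above): the conserved polynomial is \textit{evaluated} at the cell center, a \textit{single} conversion per cell is performed, and the primitive point values are then \textit{interpolated}, i.e. the integral conditions (\ref{eqn.rec.x}) are replaced by the collocation conditions (\ref{eqn.recprim.x}). The closed-form Euler conversion stands in here for the iterative relativistic one.
\begin{verbatim}
import numpy as np

M = 2
# centers of the cells i-1, i, i+1 in the reference coordinate of cell i,
# where cell e occupies [e, e+1] so that its center sits at e + 1/2
centers = np.array([-0.5, 0.5, 1.5])

def cons2prim(Q, gamma=1.4):
    # closed-form conversion (rho, rho*u, E) -> (rho, u, p) for the Euler
    # equations; for RHD/RMHD this would be an iterative root finder
    rho, mom, E = Q
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([rho, u, p])

# step 2: point values of the conserved variables at the cell centers,
# each obtained by evaluating the first WENO polynomial of that cell
# (mocked here by smooth data), followed by one conversion per cell
Q_centers = np.array([[1.0 + 0.1 * c, 0.2, 2.5] for c in centers])
V_centers = np.array([cons2prim(Q) for Q in Q_centers])

# step 3: interpolatory reconstruction of the primitives on one stencil:
# solve the Vandermonde system p_h(x_e) = V_e for every variable at once
A = np.vander(centers, M + 1, increasing=True)   # A[e, r] = x_e**r
p_coeffs = np.linalg.solve(A, V_centers)         # (M+1) x (n_vars)

# e.g. the reconstructed pressure at the right interface xi = 1:
print(np.polynomial.polynomial.polyval(1.0, p_coeffs[:, 2]))
\end{verbatim}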
On the contrary, for systems of equations, such as the compressible Euler equations, for which the {\em cons-to-prim conversion} is analytic, no benefit is expected in terms of computational efficiency, although a significant benefit is still obtained in terms of numerical accuracy. All these comments will be made quantitative in Sect.~\ref{sec:num-tests}.
\subsection{A local space--time DG predictor in primitive variables}
\label{sec:localDG}
\subsubsection{Description of the predictor}
\label{sec:Description_of_the_predictor}
As already remarked, the computation of the fluxes through the integrals (\ref{averF})--(\ref{averH}) is more conveniently performed if the primitive variables are available at each space-time quadrature point. In such a case, in fact, no conversion from the conserved to the primitive variables is required. According to the discussion of the previous Section, it is possible to obtain a polynomial $\mathbf{p}_h(x,y,z,t^n)$ in primitive variables at the reference time $t^n$. This is, however, not enough for a highly accurate computation of the numerical fluxes, and $\mathbf{p}_h(x,y,z,t^n)$ must be evolved in time, locally for each cell, in order to obtain a polynomial $\v_h(x,y,z,t)$ approximating the solution at any time in the range $[t^n;t^{n+1}]$. To this end, we need an operation, to be performed locally for each cell, which uses as input the high order polynomial $\mathbf{p}_h$ obtained from the WENO reconstruction, and gives as output its evolution in time, namely
\begin{equation} \label{LSDG} \mathbf{p}_h(x,y,z,t^n)\xrightarrow{LSDG} \v_h(x,y,z,t)\,,\hspace{1cm}t\in[t^n;t^{n+1}]\,. \end{equation}
This can be obtained through an element--local space--time Discontinuous Galerkin predictor that is based on the \textit{weak} integral form of Eq.~(\ref{Primsyst}). From a mathematical point of view, Eq.~(\ref{Primsyst}) is a hyperbolic system in non-conservative form. Therefore, the implementation of the space--time Discontinuous Galerkin predictor follows strictly the strategy already outlined in \cite{AMR3DNC} for non-conservative systems. Here we briefly recall the main ideas, focusing on the novel aspects implied by the formulation of Eq.~(\ref{Primsyst}). The sought polynomial $\v_h(x,y,z,t)$ is expanded in space and time as
\begin{equation} \v_h = \v_h(\boldsymbol{\xi},\tau) = \theta_l\left(\boldsymbol{\xi},\tau \right) \hat \v^n_l\,, \label{eqn.st.q} \end{equation}
where the degrees of freedom $\hat \v^n_l$ are the unknowns. The space-time basis functions $\theta_l$ are given by a dyadic product of the Lagrange interpolation polynomials that pass through the Gauss-Legendre quadrature points, i.e. the tensor-product quadrature points on the hypercube $[0,1]^{d+1}$, see \cite{stroud}. The system (\ref{Primsyst}) is first rephrased in terms of the reference coordinates $\tau$ and $\boldsymbol{\xi} = (\xi,\eta,\zeta)$, yielding
\begin{equation} \label{NCsyst_ref} \frac{\partial{\mathbf{V}}}{\partial \tau} + \mathbf{C}_1^\ast \frac{\partial{\mathbf{V}}}{\partial \xi} + \mathbf{C}_2^\ast \frac{\partial{\mathbf{V}}}{\partial \eta} + \mathbf{C}_3^\ast \frac{\partial{\mathbf{V}}}{\partial \zeta} ={\bf S}^\ast \,, \end{equation}
with
\begin{equation} {\bf C}_1^\ast= \frac{\Delta t}{\Delta x_i} \, {\bf C}_1, \quad {\bf C}_2^\ast= \frac{\Delta t}{\Delta y_j} \, {\bf C}_2, \quad {\bf C}_3^\ast= \frac{\Delta t}{\Delta z_k} \, {\bf C}_3, \quad {\bf S}^\ast= \Delta t \mathbf{M}^{-1}{\bf S}.
\end{equation}
Expression (\ref{NCsyst_ref}) is then multiplied by the space-time test functions $\theta_k(\xi,\eta,\zeta,\tau)$ and integrated over the space-time reference control volume, thus providing
\begin{eqnarray} \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \theta_k \frac{\partial{\v_h}}{\partial \tau} d \boldsymbol{\xi} d\tau = \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \theta_k \left( {\bf S}^\ast - \mathbf{C}_1^\ast \frac{\partial \v_h}{\partial \xi} - \mathbf{C}_2^\ast \frac{\partial \v_h}{\partial \eta} - \mathbf{C}_3^\ast \frac{\partial \v_h}{\partial \zeta} \right) d \boldsymbol{\xi} d\tau\,, \label{eqn.pde.weak1} \end{eqnarray}
where we have replaced $\mathbf{V}$ with its discrete representation $\v_h$. Integrating the first term by parts in time yields
\begin{eqnarray} && \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \theta_k(\boldsymbol{\xi},1) \v_h(\boldsymbol{\xi},1) \, d \boldsymbol{\xi} - \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \left(\frac{\partial}{\partial \tau} \theta_k \right) \v_h(\boldsymbol{\xi},\tau) \, d \boldsymbol{\xi} d\tau = \nonumber \\ && \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \theta_k(\boldsymbol{\xi},0) \mathbf{p}_h(\boldsymbol{\xi},t^n) \, d \boldsymbol{\xi} \, + \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \int \limits_{0}^{1} \theta_k \left( {\bf S}^\ast - \mathbf{C}_1^\ast \frac{\partial \v_h}{\partial \xi} - \mathbf{C}_2^\ast \frac{\partial \v_h}{\partial \eta} - \mathbf{C}_3^\ast \frac{\partial \v_h}{\partial \zeta} \right) d \boldsymbol{\xi} d\tau\,. \nonumber \\ && \label{eqn.pde.weak3} \end{eqnarray}
Eq.~(\ref{eqn.pde.weak3}) is an element-local nonlinear algebraic system that must be solved cell by cell for the unknowns $\hat \v^n_l$. In practice, we solve the system of Eqs.~(\ref{eqn.pde.weak3}) through a discrete Picard iteration, see \cite{DumbserZanotti,HidalgoDumbser}, where additional comments about its solution can be found.
\subsubsection{An efficient initial guess for the predictor}
\label{sec:initial.guess}
A proper choice of the initial guess for each of the space-time degrees of freedom $\hat \v_l$ can improve the convergence of the Picard process. The easiest strategy is to set $\v_h(\mathbf{x},t) = \mathbf{p}_h(\mathbf{x},t^n)$, i.e. the reconstruction polynomial is simply extended in time as a constant. This is, however, not the best approach. A better strategy for obtaining a good initial guess for the LSDG predictor was presented in \cite{HidalgoDumbser}, and it is based on the implementation of a MUSCL scheme for the explicit terms, plus a second-order Crank--Nicolson scheme in case stiff source terms are present. In the following, we refer to this version of the initial guess for the LSDG predictor as the MUSCL-CN initial guess. If the source terms are not stiff, however, an even more efficient approach is possible, based on a space-time extension of multi-level Adams--Bashforth-type ODE integrators.
For that purpose, the space-time polynomial $\v^{n-1}_h(\mathbf{x},t)$ obtained during the previous time step $[t^{n-1},t^n]$ is simply \textit{extrapolated in time} to the new time step $[t^n,t^{n+1}]$ by an $L_2$ projection:
\begin{equation} \int \limits_{I_{ijk}} \int \limits_{t^n}^{t^{n+1}} \theta_k(\mathbf{x},t) \v^n_h(\mathbf{x},t) \, dt \, d\mathbf{x} = \int \limits_{I_{ijk}} \int \limits_{t^n}^{t^{n+1}} \theta_k(\mathbf{x},t) \v^{n-1}_h(\mathbf{x},t) \, dt \, d\mathbf{x}. \end{equation}
In terms of the degrees of freedom $\hat \v^n_l$ and $\hat \v^{n-1}_l$ this relation becomes
\begin{equation} \int \limits_0^1 \int \limits_0^1 \int \limits_0^1 \int \limits_0^1 \theta_k(\boldsymbol{\xi},\tau) \theta_l(\boldsymbol{\xi},\tau) \hat \v^n_l \, d\tau \, d\boldsymbol{\xi} = \int \limits_0^1 \int \limits_0^1 \int \limits_0^1 \int \limits_0^1 \theta_k(\boldsymbol{\xi},\tau) \theta_l(\boldsymbol{\xi},\tau') \hat \v^{n-1}_l \, d\tau \, d\boldsymbol{\xi}, \label{eqn.abig} \end{equation}
with $\tau' = 1 + \tau \frac{\Delta t^n}{\Delta t^{n-1}}$ and $\Delta t^{n-1} = t^n - t^{n-1}$. In the following, we refer to this second version of the initial guess for the LSDG predictor as the Adams--Bashforth (AB) initial guess. In Tab.~\ref{tab.CPU.initial.guess} we show a comparison of the performance of the LSDG predictor with these two different implementations of the initial guess.
\section{Numerical tests with the new ADER-WENO finite volume scheme in primitive variables}
\label{sec:num-tests}
In the following we explore the properties of the new ADER-WENO finite volume scheme by solving a wide set of test problems belonging to four different systems of equations: the classical Euler equations, the relativistic hydrodynamics (RHD) and magnetohydrodynamics (RMHD) equations, and the Baer-Nunziato equations for compressible two-phase flows. For the sake of clarity, we introduce the notation \enquote{ADER-Prim} to refer to the novel approach of this work, for which both the spatial WENO reconstruction and the subsequent LSDG predictor are performed on the primitive variables. On the contrary, we denote the traditional ADER implementation, for which both the spatial WENO reconstruction and the LSDG predictor are performed on the conserved variables, as \enquote{ADER-Cons}. In a few circumstances, we have also compared with the \enquote{ADER-Char} scheme, namely a traditional ADER scheme in which, however, the spatial reconstruction is performed on the characteristic variables. In this Section we focus our attention on finite volume schemes, which, according to the notation introduced in \cite{Dumbser2008}, are denoted as $\mathbb{P}_0\mathbb{P}_M$ methods, where $M$ is the degree of the approximating polynomial. In Sect.~\ref{sec:extension} a brief account is given of Discontinuous Galerkin methods, referred to as $\mathbb{P}_N\mathbb{P}_N$ methods, for which an ADER-Prim version is also possible.
\subsection{Euler equations}
\label{sec:Euler}
First of all we consider the solution of the classical Euler equations of compressible gas dynamics, for which the vectors of the conserved variables $\mathbf{Q}$ and of the fluxes ${\bf f}^x$, ${\bf f}^y$ and ${\bf f}^z$ are given respectively by
\begin{equation} {\mathbf{Q}}=\left(\begin{array}{c} \rho \\ \rho v_x \\ \rho v_y \\ \rho v_z \\ E \end{array}\right) \, , \,\,\,\, {\bf f}^x= \left(\begin{array}{c} \rho v_x \\ \rho v_x^2 + p \\ \rho v_xv_y \\ \rho v_xv_z \\ v_x(E+p) \end{array}\right)\, , \,\,\,\, {\bf f}^y= \left(\begin{array}{c} \rho v_y \\ \rho v_xv_y \\ \rho v_y^2 + p \\ \rho v_yv_z \\ v_y(E+p) \end{array}\right)\, , \,\,\,\, {\bf f}^z= \left(\begin{array}{c} \rho v_z \\ \rho v_xv_z \\ \rho v_yv_z \\ \rho v_z^2 + p \\ v_z(E+p) \end{array}\right)\,. \label{eq:Euler-system} \end{equation}
Here $v_x$, $v_y$ and $v_z$ are the velocity components, $p$ is the pressure, $\rho$ is the mass density, $E=p/(\gamma-1)+\rho (v_x^2+v_y^2+v_z^2)/2$ is the total energy density, while $\gamma$ is the adiabatic index of the assumed ideal gas equation of state, which is of the kind $p=\rho\epsilon(\gamma-1)$, $\epsilon$ being the specific internal energy.
\subsubsection{2D isentropic vortex}
\label{sec:isentropic}
\begin{table}[!t] \centering \begin{tabular}{|c|c||cc|cc|cc|c|} \hline \multicolumn{9}{|c|}{\textbf{2D isentropic vortex problem }} \\ \hline \hline & & \multicolumn{2}{c|}{ ADER-Prim } & \multicolumn{2}{c|}{ ADER-Cons } & \multicolumn{2}{c|}{ ADER-Char } & \\ \hline & $N_x$ & $L_2$ error & $L_2$ order & $L_2$ error & $L_2$ order & $L_2$ error & $L_2$ order & Theor. \\ \hline \hline \multirow{5}{*}{\rotatebox{0}{{$\mathbb{P}_0\mathbb{P}_2$}}} & 100 & 4.060E-03 & --- & 5.028E-03 & --- & 5.010E-03 & --- &\multirow{5}{*}{3}\\ & 120 & 2.359E-03 & 2.98 & 2.974E-03 & 2.88 & 2.968E-03 & 2.87 &\\ & 140 & 1.489E-03 & 2.98 & 1.897E-03 & 2.92 & 1.893E-03 & 2.92 &\\ & 160 & 9.985E-04 & 2.99 & 1.281E-03 & 2.94 & 1.279E-03 & 2.94 &\\ & 200 & 5.118E-04 & 2.99 & 6.612E-04 & 2.96 & 6.607E-04 & 2.96 &\\ \hline \multirow{5}{*}{\rotatebox{0}{{$\mathbb{P}_0\mathbb{P}_3$}}} & 50 & 2.173E-03 & --- & 4.427E-03 & --- & 5.217E-03 & --- & \multirow{5}{*}{4}\\ & 60 & 8.831E-04 & 4.93 & 1.721E-03 & 5.18 & 2.232E-03 & 4.65 &\\ & 70 & 4.177E-04 & 4.85 & 8.138E-04 & 4.85 & 1.082E-03 & 4.69 &\\ & 80 & 2.194E-04 & 4.82 & 4.418E-04 & 4.57 & 5.746E-04 & 4.74 &\\ & 100 & 7.537E-05 & 4.79 & 1.605E-04 & 4.53 & 1.938E-04 & 4.87 &\\ \hline \multirow{5}{*}{\rotatebox{0}{{$\mathbb{P}_0\mathbb{P}_4$}}} & 50 & 2.165E-03 & --- & 3.438E-03 & --- & 3.416E-03 & --- &\multirow{5}{*}{5}\\ & 60 & 6.944E-04 & 6.23 & 1.507E-03 & 4.52 & 1.559E-03 & 4.30 &\\ & 70 & 3.292E-04 & 4.84 & 7.615E-04 & 4.43 & 7.615E-04 & 4.65 &\\ & 80 & 1.724E-04 & 4.84 & 4.149E-04 & 4.55 & 4.148E-04 & 4.55 &\\ & 100 & 5.884E-05 & 4.82 & 1.449E-04 & 4.71 & 1.448E-04 & 4.72 &\\ \hline \end{tabular} \caption{ \label{tab:Vortex_Error} $L_2$ errors of the mass density and corresponding convergence rates for the 2D isentropic vortex problem. A comparison is shown among the reconstruction in primitive variables (ADER-Prim), in conserved variables (ADER-Cons) and in characteristic variables (ADER-Char). The Osher-type numerical flux has been used.} \label{Table:convergence} \end{table}
It is important to assess the convergence properties of the new scheme, in particular by comparing it with the traditional ADER scheme in conserved and in characteristic variables.
To this end, we have studied the two-dimensional isentropic vortex, see e.g. \cite{HuShuTri}. The initial conditions are given by a uniform mean flow, to which a perturbation is added, such that
\begin{equation} \left( \rho,v_x,v_y,v_z,p \right) =(1+\delta\rho, 1+\delta v_x, 1+\delta v_y, 0, 1+\delta p)\,, \end{equation}
with
\begin{equation} \left(\begin{array}{c} \delta \rho \\ \delta v_x \\ \delta v_y \\ \delta p \end{array}\right) = \left(\begin{array}{c} (1+\delta T)^{1/(\gamma-1)}-1 \\ -(y-5)\epsilon/2\pi \exp{[0.5(1-r^2)]} \\ \phantom{-}(x-5)\epsilon/2\pi \exp{[0.5(1-r^2)]} \\ (1+\delta T)^{\gamma/(\gamma-1)}-1 \end{array}\right).~~~ \label{eq:pert} \end{equation}
Whatever the temperature perturbation $\delta T$, it is easy to verify that there is no variation in the specific entropy $s=p/\rho^\gamma$, and the flow is advected smoothly and isentropically with velocity $v=(1,1,0)$. We have solved this test over the computational domain $\Omega=[0;10]\times[0;10]$, assuming
\begin{equation} \delta T=-\frac{\epsilon^2(\gamma-1)}{8\gamma\pi^2}~\exp{(1-r^2)}\,, \end{equation}
with $r^2=(x-5)^2+(y-5)^2$, vortex strength $\epsilon=5$ and adiabatic index $\gamma=1.4$. Table~\ref{Table:convergence} contains the results of our calculations, in which we have compared the convergence properties of three different finite volume ADER schemes: ADER-Prim, ADER-Cons and ADER-Char, obtained with the Osher-type Riemann solver, see \cite{OsherUniversal}. While all the schemes converge to the nominal order, it is interesting to note that the smallest $L_2$ error is obtained for the \textit{new} ADER finite volume scheme in \textit{primitive variables}, and that the difference with respect to the other two reconstructions increases with the order of the method. In addition to the convergence properties, we have compared the performance of the Adams--Bashforth version of the initial guess for the LSDG predictor with the traditional version based on the MUSCL-CN algorithm. The comparison has been performed over a $100\times 100$ uniform grid. The results are shown in Tab.~\ref{tab.CPU.initial.guess}, from which we conclude that the Adams--Bashforth initial guess is indeed computationally more efficient in terms of CPU time. However, we have also found that it is typically less robust, and in some of the most challenging numerical tests discussed in the rest of the paper we had to use the more traditional MUSCL-CN initial guess.
\begin{table}[!t] \vspace{0.5cm} \renewcommand{\arraystretch}{1.0} \begin{center} \begin{tabular}{ccc} \hline \hline & \footnotesize{MUSCL-CN} & \footnotesize{Adams--Bashforth} \\ \hline \hline $\mathbb{P}_0\mathbb{P}_2$ & 1.0 & 0.64 \\ \hline $\mathbb{P}_0\mathbb{P}_3$ & 1.0 & 0.75 \\ \hline $\mathbb{P}_0\mathbb{P}_4$ & 1.0 & 0.72 \\ \hline \hline \end{tabular} \end{center} \caption{CPU time comparison among different versions of the initial guesses for the LSDG predictor.
The comparison has been performed for the isentropic vortex solution and the numbers have been normalized to the value obtained with the traditional MUSCL-CN initial guess (see Sect.~\ref{sec:initial.guess} for more details).} \label{tab.CPU.initial.guess} \end{table}
\subsubsection{Sod's Riemann problem}
\begin{table}[!b] \vspace{0.5cm} \renewcommand{\arraystretch}{1.0} \begin{center} \begin{tabular}{cccc} \hline \hline & \footnotesize{ADER-Prim} & \footnotesize{ADER-Cons} & \footnotesize{ADER-Char} \\ \hline $\mathbb{P}_0\mathbb{P}_2$ & 1.0 & 0.74 & 0.81 \\ \hline $\mathbb{P}_0\mathbb{P}_3$ & 1.0 & 0.74 & 0.80 \\ \hline $\mathbb{P}_0\mathbb{P}_4$ & 1.0 & 0.77 & 0.81 \\ \hline \end{tabular} \end{center} \caption{CPU time comparison among different ADER implementations for the Sod Riemann problem. The numbers have been normalized to the value obtained with ADER-Prim.} \label{tab.CPU.Sod} \end{table}
\begin{figure} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./Riemann-Sod-P0P3-rho.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./Riemann-Sod-P0P3-u.png}} \\ {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./Riemann-Sod-P0P3-p.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./Riemann-Sod-P0P3-u-focus.png}} \end{tabular} \caption{Solution of Sod's Riemann problem with the fourth order ADER-WENO scheme at time $t=0.2$. The bottom right panel shows a magnification of the velocity at the tail of the rarefaction.} \label{fig:shock-tube-Sod} \end{center} \end{figure}
We have then solved the classical Riemann problem named after Sod~\cite{Sod1978}, assuming an adiabatic index $\gamma=1.4$, and evolved until $t_{\rm final}=0.2$. Although this is a one-dimensional test, we have evolved this problem in two spatial dimensions over the domain $[0,1]\times[-0.2,0.2]$, using periodic boundary conditions along the passive $y$ direction. In Fig.~\ref{fig:shock-tube-Sod} we show the comparison among the solutions obtained with ADER-Prim, ADER-Cons and ADER-Char, together with the exact solution provided in \cite{toro-book}. We have adopted the finite volume scheme at the fourth order of accuracy, namely the $\mathbb{P}_0\mathbb{P}_3$ scheme, in combination with the Rusanov numerical flux and using $400$ cells along the $x$-direction. Although all of the ADER implementations show a very good agreement with the exact solution, a closer look at the tail of the rarefaction, highlighted in the bottom right panel, reveals that the ADER-Cons scheme is actually the least accurate, while the solution obtained with ADER-Prim closely resembles that of the reconstruction in characteristic variables. In terms of CPU time, on the contrary, ADER-Prim is not convenient for this system of equations, because the price paid for the double WENO reconstruction in space is not significantly compensated by the reduced number of conversions from the conserved to the primitive variables. Table~\ref{tab.CPU.Sod} reports the CPU times, normalized with respect to the ADER-Prim implementation, for different orders of accuracy, showing that the ADER-Prim scheme is $\sim25\%$ slower than the traditional ADER-Cons scheme. As we will see in Tab.~\ref{tab.CPU.RHD} of Sect.~\ref{sec:RMHD}, the comparison will change in favor of ADER-Prim schemes when the relativistic equations are solved instead.
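As an illustration of the flux evaluation employed here, the following sketch assembles the primitive-variable Rusanov flux (\ref{eqn.numerical.flux}) for the one-dimensional Euler equations, with the Roe-type matrix of Eq.~(\ref{eqn.MRoe2}) approximated by the same three-point Gauss-Legendre rule used for the ${\cal{D}}_i$ terms. Names, data and the simple wave speed estimate are illustrative; the sketch does not reproduce our full scheme.
\begin{verbatim}
import numpy as np

gamma = 1.4
# three-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1]
gl_nodes = 0.5 * (np.array([-np.sqrt(3.0 / 5.0), 0.0,
                            np.sqrt(3.0 / 5.0)]) + 1.0)
gl_weights = 0.5 * np.array([5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0])

def flux(V):
    # physical flux in terms of the primitive variables (rho, u, p)
    rho, u, p = V
    E = p / (gamma - 1.0) + 0.5 * rho * u * u
    return np.array([rho * u, rho * u * u + p, u * (E + p)])

def dQdV(V):
    # M = dQ/dV for Q = (rho, rho*u, E)
    rho, u, p = V
    return np.array([[1.0, 0.0, 0.0],
                     [u, rho, 0.0],
                     [0.5 * u * u, rho * u, 1.0 / (gamma - 1.0)]])

def rusanov_prim(VL, VR):
    # Roe-type matrix along the segment path Psi(s) = VL + s (VR - VL)
    Mt = sum(w * dQdV(VL + s * (VR - VL))
             for s, w in zip(gl_nodes, gl_weights))
    smax = max(abs(V[1]) + np.sqrt(gamma * V[2] / V[0]) for V in (VL, VR))
    return 0.5 * (flux(VL) + flux(VR)) - 0.5 * smax * Mt @ (VR - VL)

# e.g. the flux across the initial discontinuity of Sod's problem:
print(rusanov_prim(np.array([1.0, 0.0, 1.0]), np.array([0.125, 0.0, 0.1])))
\end{verbatim}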
\subsubsection{Interacting blast waves}
\begin{figure} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,width=6.5cm,height=6.5cm]{./BlastWave_P0P3_t0028.png}} & {\includegraphics[angle=0,width=6.5cm,height=6.5cm]{./BlastWave_P0P3_t0038.png}} \end{tabular} \caption{Solution of the interacting Blast-Wave problem at time $t=0.028$ (left panel) and at time $t=0.038$ (right panel) obtained with the fourth order ADER-WENO scheme. The computation has been performed over a uniform grid of 500 cells.} \label{fig:BlastWavesEuler} \end{center} \end{figure}
The interaction between two blast waves was first proposed by \cite{woodwardcol84} and it is now a standard test for computational fluid dynamics. The initial conditions are given by
\begin{equation} \label{blast-wave} (\rho,v_x,p)= \left\{ \begin{array}{llll} (1.0,0.0,10^3) & {\rm if} & -0.5 < x < -0.4 \,, \\ (1.0,0.0,10^{-2}) & {\rm if} & -0.4 < x < 0.4 \,, \\ (1.0,0.0,10^2) & {\rm if} & \phantom{-} 0.4 < x < 0.5 \,, \end{array} \right. \end{equation}
where the adiabatic index is $\gamma=1.4$. We have evolved this problem in two spatial dimensions over the domain $[-0.6,0.6]\times[-0.5,0.5]$, using reflecting boundary conditions in the $x$ direction and periodic boundary conditions along the $y$ direction. The results of our calculations, obtained with the $\mathbb{P}_0\mathbb{P}_3$ scheme, are reported in Fig.~\ref{fig:BlastWavesEuler}, where only the one-dimensional cuts are shown. The number of cells chosen along the $x$-direction, namely $N_x=500$, is not particularly large, at least for this kind of challenging problem. This has been intentionally done to better highlight potential differences between the two alternative ADER-Prim and ADER-Cons schemes. As the figure shows, the two methods are very similar in terms of accuracy: the sharp peak in the density at time $t=0.028$ (left panel) is somewhat better resolved by ADER-Prim, while the opposite is true for the highest peak at time $t=0.038$ (right panel). Overall, however, the two schemes perform equally well for this test.
\begin{figure} \begin{center} {\includegraphics[angle=0,width=15.0cm,height=5.0cm]{./DMR-1200x300-prim.png}} {\includegraphics[angle=0,width=15.0cm,height=5.0cm]{./DMR-1200x300-cons.png}} \caption{Double Mach reflection problem at time $t=0.2$ obtained with the fourth order ADER-WENO scheme and the Rusanov Riemann solver. The computation has been performed over a uniform grid of $1200\times 300$ cells. Top panel: mass density distribution obtained with ADER-Prim. Bottom panel: mass density distribution obtained with ADER-Cons.} \label{fig:DMR} \end{center} \end{figure}
\subsubsection{Double Mach reflection problem}
As a representative test for the Euler equations in two space dimensions, we have considered the {\em double Mach reflection problem}, which involves the interaction of several waves. The dynamics of this problem is triggered by a shock wave propagating towards the right with a Mach number $M=10$, and intersecting the $x$-axis at $x=1/6$ with an inclination angle of $\alpha=60^{\circ}$. The initial states ahead and behind the shock are obtained by solving the Rankine--Hugoniot conditions:
\begin{eqnarray} (\rho, u, v, p)( \mathbf{x},t=0) = \left\{ \begin{array}{cll} \frac{1}{\gamma}(8.0, 8.25, 0.0, 116.5), \quad & \text{ if } & \quad x'<0.1, \\ (1.0, 0.0, 0.0, \frac{1}{\gamma}), \quad & \text{ if } & \quad x'\geq 0.1, \end{array} \right. \end{eqnarray}
where $x' = (x - 1/6) \cos\alpha - y \sin\alpha$. The adiabatic index is $\gamma=1.4$.
We fix inflow and outflow boundary conditions on the left and right sides of the numerical domain, respectively, while at the bottom we use reflecting boundary conditions. At the top we must impose the exact solution of an isolated moving oblique shock wave with the same shock Mach number $M_s=10$. We have solved the test over the rectangle $\Omega = [0;3.0] \times [0;1]$, covered by a uniform grid composed of $1200\times300$ cells, using the Rusanov Riemann solver and a fourth order finite volume scheme. The two panels of Fig.~\ref{fig:DMR} show the comparison of the solution at time $t=0.2$ obtained with the ADER-Prim (top panel) and with the ADER-Cons (bottom panel) scheme. The results are very similar in the two cases. As a tentative conclusion about the performance of ADER-Prim for the Euler equations, we may say that, although it is the most accurate on smooth solutions (see Tab.~\ref{Table:convergence}), and comparable to a traditional ADER with reconstruction in characteristic variables, it is computationally more expensive than ADER-Cons and ADER-Char. Hence, ADER-Prim will rarely become the preferred choice in standard applications for the Euler equations.
\subsection{Relativistic hydrodynamics and magnetohydrodynamics}
\label{sec:RMHD}
From a formal point of view, the equations of special relativistic hydrodynamics and magnetohydrodynamics can be written in conservative form like the classical Euler equations (see, however, the comments below), namely as in Eq.~(\ref{NCsyst}), with the vectors of the conserved variables and of the corresponding fluxes given by
\begin{equation} {\mathbf{Q}}=\left[\begin{array}{c} D \\ S_j \\ U \\ B^j \end{array}\right],~~~ {\bf f}^i=\left[\begin{array}{c} v^i D \\ W^i_j \\ S^i \\ \epsilon^{jik}E^k \end{array}\right]\,,\hspace{1cm}i=x,y,z\,, \label{eq:RMHDfluxes} \end{equation}
where the conserved variables $(D,S_j,U,B_j)$ can be expressed as\footnote{We note that, since the spacetime is flat and we are using Cartesian coordinates, the covariant and the contravariant components of spatial vectors can be used interchangeably, namely $A_i=A^i$, for the generic vector $\vec A$.}
\begin{eqnarray} \label{eq:cons1} &&D = \rho W ,\\ \label{eq:cons2} &&S_i = \rho h W^2 v_i + \epsilon_{ijk}E_j B_k, \\ \label{eq:cons3} &&U = \rho h W^2 - p + \frac{1}{2}(E^2 + B^2)\,, \end{eqnarray}
while the spatial projection of the energy-momentum tensor of the fluid is \cite{DelZanna2007}
\begin{equation} W_{ij} \equiv \rho h W^2 v_i v_j - E_i E_j - B_i B_j + \left[p +\frac{1}{2}(E^2+B^2)\right]\delta_{ij}\,. \label{eq:W} \end{equation}
Here $\epsilon_{ijk}$ is the Levi--Civita tensor and $\delta_{ij}$ is the Kronecker symbol. We have used the symbol $h=1+\epsilon+p/\rho$ to denote the specific enthalpy of the plasma, and in all our calculations the usual ideal gas equation of state has been assumed. The components of the electric and of the magnetic field in the laboratory frame are denoted by $E_i$ and $B_i$, while the Lorentz factor of the fluid with respect to this reference frame is $W=(1-v^2)^{-1/2}$. We emphasize that the electric field does not need to be evolved in time under the assumption of infinite electrical conductivity, since it can always be computed in terms of the velocity and of the magnetic field as $\vec E = - \vec v \times \vec B$. Although formally very similar to the classical gas dynamics equations, their relativistic counterparts present two fundamental differences.
The first one is that, while the physical fluxes ${\bf f}^i$ of the classical gas dynamics equations can be written analytically in terms of the conserved variables, i.e. ${\bf f}^i={\bf f}^i(\u)$, those of the relativistic hydrodynamics (or magnetohydrodynamics) equations require the knowledge of the primitive variables, i.e. ${\bf f}^i={\bf f}^i(\mathbf{V})$. The second difference is that, in the relativistic case, the conversion from the conserved to the primitive variables, i.e. the operation $(D,S_j,U,B_j)\longrightarrow (\rho,v_i,p,B_i)$, is not analytic, and it must be performed numerically through some appropriate iterative procedure. Since in an ADER scheme such a conversion must be performed in each space-time degree of freedom of the space-time DG predictor and at each Gaussian quadrature point for the computation of the fluxes in the finite volume scheme, we may expect a significant computational advantage by performing the WENO reconstruction and the LSDG predictor directly on the primitive variables. In this way, in fact, the conversion $(D,S_j,U,B_j)\longrightarrow (\rho,v_i,p,B_i)$ is required only once at the cell center (see Sect.~\ref{sec:WENO_reconstruction}), and not in each space-time degree of freedom of the predictor and at each Gaussian point for the quadrature of the numerical fluxes. We emphasize that the choice of the variables to reconstruct for the relativistic velocity is still a matter of debate. The velocity $v_i$ may seem the most natural one, but, as first noticed by \cite{Komissarov1999}, reconstructing $W v_i$ can increase the robustness of the scheme. However, this is not always the case (see Sect.~\ref{sec:RMHD-Rotor-Problem} below) and in our tests we have favored either the first or the second choice according to convenience. Concerning the specific strategy adopted to recover the primitive variables, in our numerical code we have used the third method reported in Sect. 3.2 of \cite{DelZanna2007}. Alternative methods can be found in \cite{NGMD2006,Rezzolla_book:2013}. Finally, there is an important formal change in the transition from purely hydrodynamic systems to genuinely magnetohydrodynamic systems. As already noticed by \cite{Londrillo2000}, the RMHD equations should not be regarded as a mere extension of the RHD ones, with just a larger number of variables to evolve. Rather, their formal structure is better described in terms of a coupled system of conservation laws (the five equations for the dynamics of the plasma) and a set of Hamilton-Jacobi equations, those for the evolution of the vector potential of the magnetic field \cite{ShiJin1998}. The different mathematical structure of the RMHD equations reflects the divergence-free constraint on the magnetic field, which must be ensured at all times during the evolution. Numerically, we have adopted a simplified and well-known approach, which consists of augmenting the system (\ref{NCsyst}) with an additional equation for a scalar field $\Phi$, aimed at propagating away the deviations from $\vec \nabla\cdot\vec B=0$. We therefore need to solve
\begin{equation} \label{eq:divB} \partial_t \Phi + \partial_i B^i = -\kappa \Phi\,, \end{equation}
while the fluxes for the evolution of the magnetic field are also changed, namely ${\bf f}^i(B^j)\rightarrow \epsilon^{jik}E^k + \Phi \delta^{ij}$. The damping factor $\kappa$ is set in the range $[1;10]$ in most of our calculations.
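To illustrate the cleaning mechanism in isolation, the following minimal sketch (first order in space and time, with illustrative parameters and initial data) integrates the one-dimensional subsystem coupling $B_x$ and $\Phi$, namely $\partial_t B_x + \partial_x \Phi = 0$ and $\partial_t \Phi + \partial_x B_x = -\kappa \Phi$, using a Rusanov flux and an exactly integrated damping term; in our code the cleaning is of course embedded in the full ADER machinery.
\begin{verbatim}
import numpy as np

N, L, kappa, cfl = 200, 1.0, 10.0, 0.9
dx = L / N
x = np.linspace(0.0, L, N)
Bx = np.where(x < 0.5, 1.0, -1.0)    # mock divergence error
Phi = np.zeros(N)

dt = cfl * dx                        # the cleaning speed is 1 here
for _ in range(100):
    # Rusanov fluxes of the linear 2x2 system (eigenvalues +-1),
    # periodic boundaries via np.roll
    BxL, BxR = Bx, np.roll(Bx, -1)
    PhL, PhR = Phi, np.roll(Phi, -1)
    F_B = 0.5 * (PhL + PhR) - 0.5 * (BxR - BxL)
    F_P = 0.5 * (BxL + BxR) - 0.5 * (PhR - PhL)
    Bx = Bx - dt / dx * (F_B - np.roll(F_B, 1))
    Phi = Phi - dt / dx * (F_P - np.roll(F_P, 1))
    Phi *= np.exp(-kappa * dt)       # exact integration of the damping
\end{verbatim}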
This approach, originally introduced by \cite{Dedner:2002} for the classical MHD equations, has been extended to the relativistic regime by \cite{Palenzuela:2008sf}. More information about the mathematical structure of the RMHD equations can be found in \cite{Anile_book,BalsaraRMHD,Komissarov1999,DelZanna2007,Anton2010}. In the following, we first limit our attention to a few physical systems for which $B_i=E_i=0$, hence to relativistic hydrodynamics, and then we consider truly magnetohydrodynamic tests with $B_i\neq0$.
\subsubsection{RHD Riemann Problems}
\begin{table} \begin{center} \begin{tabular}{c|c||c|ccc|c} \hline \hline Problem & & $\gamma$ & $\rho$ &$v_x$ & $p$ & $t_{\text{f}}$ \\ \hline \multirow{2}{*}{\rotatebox{0}{\textbf{RHD-RP1}} } &$x > 0$ &\multirow{2}{*}{$\left.5\middle/ 3\right.$} & 1 & -0.6 & 10 & \multirow{2}{*}{0.4}\\ &$x \leq 0$ & & 10 & 0.5 & 20 & \\ \hline \multirow{2}{*}{\rotatebox{0}{\textbf{RHD-RP2}}} &$x > 0$ &\multirow{2}{*}{ $\left.5\middle/ 3\right.$} & $10^{-3}$ & 0.0 & 1 & \multirow{2}{*}{0.4}\\ &$x \leq 0$ & & $10^{-3}$ & 0.0 & $10^{-5}$ & \\ \hline \end{tabular} \caption{ \label{tab.RP.ic} Left and right states of the one--dimensional RHD Riemann problems.} \end{center} \end{table}
Table~\ref{tab.RP.ic} reports the initial conditions of the two one-dimensional Riemann problems that we have considered, and whose wave-patterns at the final time $t_f=0.4$ are shown in Fig.~\ref{fig:shock-tube-2R} and Fig.~\ref{fig:shock-tube-RS}, respectively. In order to appreciate the differences among the available ADER implementations, we have again solved each problem with the three alternative schemes: ADER-Prim, ADER-Cons and ADER-Char. The reference solution, computed as in \cite{Rezzolla01}, is also shown.
\begin{figure} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./2R-Mignone-P0P3-p.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./2R-Mignone-P0P3-u.png}} \\ {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./2R-Mignone-P0P3-rho.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./2R-Mignone-P0P3-rho-focus.png}} \end{tabular} \caption{Solution of RHD-RP1 (see Table~\ref{tab.RP.ic}) with the fourth order ADER-WENO scheme at time $t=0.4$. The bottom right panel shows a magnification around the contact discontinuity.} \label{fig:shock-tube-2R} \end{center} \end{figure}
\begin{figure} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RS-Radice-P0P2-p.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RS-Radice-P0P2-u.png}} \\ {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RS-Radice-P0P2-rho.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RS-Radice-P0P2-rho-focus.png}} \end{tabular} \caption{Solution of RHD-RP2 (see Table~\ref{tab.RP.ic}) with the third order ADER-WENO scheme at time $t=0.4$. The bottom right panel shows a magnification around the right propagating shock.} \label{fig:shock-tube-RS} \end{center} \end{figure}
In the first Riemann problem, which was also analyzed by \cite{Mignone2005}, two rarefaction waves are produced, separated by a contact discontinuity. It has been solved with a fourth order $\mathbb{P}_0\mathbb{P}_3$ scheme, using the Rusanov Riemann solver over a uniform grid with $300$ cells. As is clear from Fig.~\ref{fig:shock-tube-2R}, the ADER-Prim scheme performs significantly better than ADER-Cons. In particular, the overshoot and undershoot at the tail of the right rarefaction are absent.
In general, the results obtained with ADER-Prim are essentially equivalent to those of ADER-Char, i.e. when the reconstruction in characteristic variables is adopted. This is evident from the bottom right panel of Fig.~\ref{fig:shock-tube-2R}, where a magnification of the rest mass density at the contact discontinuity is shown. Additional interesting comparisons can be made about the second Riemann problem, which can be found in \cite{Radice2012a}, and which is displayed in Fig.~\ref{fig:shock-tube-RS}. In this case a third order $\mathbb{P}_0\mathbb{P}_2$ scheme has been used, again with the Rusanov Riemann solver over a uniform grid with $500$ cells. The right propagating shock has a strong jump in the rest mass density, as is visible in the bottom right panel of the figure, and the position of the shock front is better captured by the two schemes ADER-Prim and ADER-Char.
\begin{table}[!t] \vspace{0.5cm} \renewcommand{\arraystretch}{1.0} \begin{center} \begin{tabular}{cccc} \hline \hline & \footnotesize{ADER-Prim} & \footnotesize{ADER-Cons} & \footnotesize{ADER-Char} \\ \hline \hline $\mathbb{P}_0\mathbb{P}_2$ & 1.0 & 1.26 & 1.40 \\ \hline $\mathbb{P}_0\mathbb{P}_3$ & 1.0 & 1.13 & 1.24 \\ \hline $\mathbb{P}_0\mathbb{P}_4$ & 1.0 & 1.04 & 1.06 \\ \hline \hline \end{tabular} \end{center} \caption{CPU time comparison among different ADER implementations for the RHD-RP1 problem. The numbers have been normalized to the value obtained with ADER-Prim.} \label{tab.CPU.RHD} \end{table}
It is particularly interesting to address the issue of the CPU time comparison among different implementations of ADER, as already done for the Euler equations. The results of such a comparison, performed for the RHD-RP1 problem, are reported in Tab.~\ref{tab.CPU.RHD}, which should be read in conjunction with Tab.~\ref{tab.CPU.Sod}. Clearly, ADER-Prim is not only more accurate than ADER-Cons, but it is also more efficient. As anticipated, this is in agreement with our expectations, since in the ADER-Prim implementation a single {\em cons-to-prim} operation is needed per cell, rather than one at each Gaussian quadrature point and at each space-time degree of freedom. For other tests (see for instance Sect.~\ref{sec:RHD-KH}) the CPU time reduction achieved by ADER-Prim is even more evident, but the numbers shown in Tab.~\ref{tab.CPU.RHD} describe with good fidelity the relative performance of the different ADER schemes in a large number of relativistic tests.
\subsubsection{RHD Kelvin--Helmholtz instability}
\label{sec:RHD-KH}
\begin{figure} \begin{center} \begin{tabular}{ccc} {\includegraphics[angle=0,width=4.0cm,height=8.0cm]{./KH-RHD-P0P3-ADER-Prim-time2p0.png}} & {\includegraphics[angle=0,width=4.0cm,height=8.0cm]{./KH-RHD-P0P3-ADER-Cons-time2p0.png}} & {\includegraphics[angle=0,width=4.0cm,height=8.0cm]{./KH-RHD-P0P3-ADER-Char-time2p0.png}} \\ {\includegraphics[angle=0,width=4.0cm,height=8.0cm]{./KH-RHD-P0P3-ADER-Prim-time2p5.png}} & {\includegraphics[angle=0,width=4.0cm,height=8.0cm]{./KH-RHD-P0P3-ADER-Cons-time2p5.png}} & {\includegraphics[angle=0,width=4.0cm,height=8.0cm]{./KH-RHD-P0P3-ADER-Char-time2p5.png}} \end{tabular} \end{center} \caption{ Two-dimensional Kelvin-Helmholtz instability obtained with the $\mathbb{P}_0\mathbb{P}_3$ scheme and with the Osher flux. Left panels: solution with ADER-Prim. Central panels: solution with ADER-Cons. Right panels: solution with ADER-Char. Top panels: solution at $t=2.0$. Bottom panels: solution at $t=2.5$.
} \label{fig:KH-RHD} \end{figure}
In the relativistic regime, the Kelvin--Helmholtz (KH) instability is likely to be responsible for a variety of physical effects, which are encountered in the dynamics of extragalactic relativistic jets \cite{Bodo2004,Perucho2006,Perucho2007}. As an academic test, we simulate the linear growth phase of the KH instability in two spatial dimensions, taking the initial conditions from \cite{Mignone2009} (see also \cite{Beckwith2011} and \cite{Radice2012a}). In particular, the rest-mass density is chosen as
\begin{equation}\label{KHI-rho} \rho = \left\{\begin{array}{ll} \rho_0 + \rho_1 \tanh{[(y-0.5)/a]} & \quad y > 0\,, \\ \noalign{\medskip} \rho_0 - \rho_1 \tanh{[(y+0.5)/a]} & \quad y \leq 0 \,, \end{array}\right. \end{equation}
with $\rho_0=0.505$ and $\rho_1=0.495$. Assuming that the shear layer has a velocity $v_s=0.5$ and a characteristic size $a=0.01$, the velocity along the $x$-direction is modulated as
\begin{equation}\label{KHI-vx} v_x = \left\{\begin{array}{ll} v_s \tanh{[(y-0.5)/a]} & \quad y > 0\,, \\ \noalign{\medskip} -v_s \tanh{[(y+0.5)/a]} & \quad y \leq 0 \,. \end{array}\right. \end{equation}
It is convenient to add a perturbation in the transverse velocity, i.e.
\begin{equation}\label{KHI-vy} v_y = \left\{\begin{array}{ll} \eta_0 v_s \sin{(2\pi x)} \exp{[-(y-0.5)^2/\sigma]} & \quad y > 0\,, \\ \noalign{\medskip} -\eta_0 v_s \sin{(2\pi x)} \exp{[-(y+0.5)^2/\sigma]} & \quad y \leq 0 \,, \end{array}\right. \end{equation}
where $\eta_0=0.1$ is the amplitude of the perturbation, while $\sigma=0.1$ is its length scale. The adiabatic index is $\gamma=4/3$ and the pressure is uniform, $p=1$. The problem has been solved over the computational domain $[-0.5,0.5]\times[-1,1]$, covered by a uniform mesh with $200\times400$ cells, using the $\mathbb{P}_0\mathbb{P}_3$ scheme and the Osher-type numerical flux. Periodic boundary conditions are fixed in both the $x$ and $y$ directions. Fig.~\ref{fig:KH-RHD} shows the results of the calculations: in the left, in the central and in the right panels we have reported the solution obtained with the ADER-Prim, with the ADER-Cons and with the ADER-Char scheme, respectively, while the top and the bottom panels correspond to two different times during the evolution, namely $t=2.0$ and $t=2.5$. Interestingly, two secondary vortices are visible when the reconstruction is performed in primitive and characteristic variables (see the left and right panels), but only one is present in the simulation using the reconstruction in conserved variables. In \cite{Zanotti2015} we have already commented on the elusive character of these details in the solution, which depend both on the resolution and on the Riemann solver adopted. Based on our results, we infer that the ADER-Cons scheme is the most diffusive, while ADER-Prim and ADER-Char seem to produce the same level of accuracy in the solution. However, if we look at the CPU times, we find that ADER-Prim is a factor of 2.5 faster than ADER-Cons and a factor of 3 faster than ADER-Char, and therefore should be preferred in all relevant applications of RHD.
\begin{table}[t] \centering \begin{tabular}{|c|c||cc|cc|c|} \hline \multicolumn{7}{|c|}{\textbf{2D circularly polarized Alfv\'en wave }} \\ \hline \hline & $N_x$ & $L_1$ error & $L_1$ order & $L_2$ error & $L_2$ order & Theor.
\\ \hline \hline \multirow{5}{*}{\rotatebox{0}{{$\mathbb{P}_0\mathbb{P}_2$}}} & 50 & 5.387E-02 & --- & 9.527E-03 & --- & \multirow{5}{*}{3}\\ & 60 & 3.123E-02 & 2.99 & 5.523E-03 & 2.99 & \\ & 70 & 1.969E-02 & 2.99 & 3.481E-03 & 2.99 & \\ & 80 & 1.320E-02 & 2.99 & 2.334E-03 & 2.99 & \\ & 100 & 6.764E-03 & 3.00 & 1.196E-03 & 3.00 & \\ \hline \multirow{5}{*}{\rotatebox{0}{{$\mathbb{P}_0\mathbb{P}_3$}}} & 50 & 2.734E-04 & --- & 4.888E-05 & --- & \multirow{5}{*}{4}\\ & 60 & 1.153E-04 & 4.73 & 2.061E-05 & 4.74 & \\ & 70 & 5.622E-05 & 4.66 & 1.004E-05 & 4.66 & \\ & 80 & 3.043E-05 & 4.60 & 5.422E-06 & 4.61 & \\ & 100 & 1.108E-05 & 4.53 & 1.968E-06 & 4.54 & \\ \hline \multirow{5}{*}{\rotatebox{0}{{$\mathbb{P}_0\mathbb{P}_4$}}} & 30 & 2.043E-03 & --- & 3.611E-04 & --- & \multirow{5}{*}{5}\\ & 40 & 4.873E-04 & 4.98 & 8.615E-05 & 4.98 & \\ & 50 & 1.603E-04 & 4.98 & 2.846E-05 & 4.96 & \\ & 60 & 6.491E-05 & 4.96 & 1.168E-05 & 4.88 & \\ & 70 & 3.173E-05 & 4.64 & 6.147E-06 & 4.16 & \\ \hline \end{tabular} \caption{ $L_1$ and $L_2$ error analysis for the 2D Alfv\'en wave problem. The errors have been computed with respect to the magnetic field $B^y$.} \label{tab:Alfven_Error} \end{table}
\subsubsection{RMHD Alfv\'en Wave}
\label{sec:RMHD_Alfven_Wave}
In Tab.~\ref{Table:convergence} of Sect.~\ref{sec:isentropic} we have reported the comparison of the convergence rates among three different implementations of ADER for the Euler equations. We believe it is important to verify the convergence of the new ADER-Prim scheme also for the RMHD equations, which indeed admit an exact, smooth, unsteady solution, namely the propagation of a circularly polarized Alfv\'en wave (see \cite{Komissarov1997,DelZanna2007} for a full account). The wave is assumed to propagate along the $x$ direction in a constant density and constant pressure background, say $\rho=p=1$. The magnetic field, on the other hand, is given by
\begin{eqnarray} B_x&=&B_0 \\ B_y&=&\eta B_0\cos[k(x-v_A t)]\\ B_z&=&\eta B_0\sin[k(x-v_A t)]\,, \end{eqnarray}
where $\eta=1$ is the amplitude of the wave, $B_0=1$ is the uniform magnetic field, $k$ is the wave number, while $v_A$ is the speed of propagation of the wave. We have solved this problem over the computational domain $\Omega=[0; 2\pi]\times[0; 2\pi]$, using periodic boundary conditions, the Rusanov Riemann solver and the Adams--Bashforth version for the initial guess of the LSDG predictor. We have compared the numerical solution with the analytic one after one period $T=L/v_A=2\pi/v_A$. Tab.~\ref{tab:Alfven_Error} contains the results of our analysis, showing the $L_1$ and the $L_2$ norms of the error of $B^y$. As is apparent from the table, the nominal order of convergence of the new ADER-Prim scheme is recovered with very good accuracy.
\subsubsection{RMHD Riemann Problems}
Riemann problems are very relevant also in RMHD, admitting a larger number of waves than in hydrodynamics. The exact solution was provided by \cite{Giacomazzo:2005jy}, making these problems very popular as a precise tool to validate numerical codes. We have selected Test 1 and Test 5 in Table 1 of \cite{BalsaraRMHD}, with initial left and right states that are reported in Tab.~\ref{tab:RMHD-RP}.
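Before presenting the results, we recall that every flux evaluation of the schemes compared below ultimately relies on the iterative cons-to-prim conversion. The following sketch shows its typical structure for pure RHD with an ideal-gas equation of state; a simple bisection on the pressure stands in for the faster method of \cite{DelZanna2007} that we actually use, and all names and tolerances are illustrative.
\begin{verbatim}
import numpy as np

gamma = 5.0 / 3.0

def prim2cons(rho, v, p):
    # (rho, v, p) -> (D, S, U) for an ideal-gas EOS, with c = 1
    W = 1.0 / np.sqrt(1.0 - v * v)
    h = 1.0 + gamma / (gamma - 1.0) * p / rho
    return rho * W, rho * h * W * W * v, rho * h * W * W - p

def cons2prim(D, S, U, tol=1e-12):
    # root of f(p) = p_EOS(rho(p), eps(p)) - p, bracketed and bisected;
    # assumes a physical state with |S| < U, so that |v| < 1 throughout
    def f(p):
        v = S / (U + p)
        W = 1.0 / np.sqrt(1.0 - v * v)
        rho = D / W
        eps = (U + p) / (rho * W * W) - 1.0 - p / rho
        return (gamma - 1.0) * rho * eps - p
    a, b = 1.0e-16, 10.0 * (abs(U) + 1.0)
    p = 0.5 * (a + b)
    for _ in range(200):
        p = 0.5 * (a + b)
        if f(a) * f(p) <= 0.0:
            b = p
        else:
            a = p
        if b - a < tol * p:
            break
    v = S / (U + p)
    return D * np.sqrt(1.0 - v * v), v, p

D, S, U = prim2cons(1.0, 0.5, 2.0)   # round-trip check
print(cons2prim(D, S, U))            # approximately (1.0, 0.5, 2.0)
\end{verbatim}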
\begin{table} \begin{center} \begin{tabular}{c|c||c|cccccccc|c} \hline \hline Problem & & $\gamma$ & $\rho$ &$(v_x$&$v_y$&$v_z)$ & $p$ & $(B_x$&$B_y$&$B_z)$ & $t_{\text{f}}$ \\ \hline \multirow{2}{*}{\rotatebox{0}{\textbf{RMHD-RP1}} } &$x > 0$ &\multirow{2}{*}{2.0} & 0.125 & 0.0 & 0.0 &0.0 & 0.1 & 0.5&-1.0&0.0 & \multirow{2}{*}{0.4}\\ &$x \leq 0$ & & 1.0 & 0.0 & 0.0 &0.0 & 1.0 & 0.5& 1.0&0.0 & \\ \hline \multirow{2}{*}{\rotatebox{0}{\textbf{RMHD-RP2}}} &$x > 0$ &\multirow{2}{*}{ $\left.5\middle/ 3\right.$} & 1.0 & -0.45 & -0.2 & 0.2 & 1.0 & 2.0&-0.7&0.5 &\multirow{2}{*}{0.55}\\ &$x \leq 0$ & & 1.08 & 0.4 & 0.3 & 0.2 & 0.95 & 2.0& 0.3&0.3 & \\ \hline \end{tabular} \caption{ \label{tab:RMHD-RP} Left and right states of the one--dimensional RMHD Riemann problems.} \end{center} \end{table}
Both tests have been solved using a fourth order ADER-WENO scheme and the Rusanov Riemann solver, over a uniform grid composed of $400$ cells. The damping factor for the divergence-cleaning procedure is set to $\kappa=10$. Fig.~\ref{fig:Balsara1} and Fig.~\ref{fig:Balsara5} allow us to compare the exact solution with the results obtained through the ADER-Prim and the ADER-Cons schemes. Especially for RMHD-RP1, the solution obtained with the traditional ADER-Cons scheme is significantly more oscillatory than that produced by ADER-Prim. This is particularly evident in the rest-mass density and in the velocity $v_x$. We have here a good indication that the ADER-Prim scheme behaves better than the ADER-Cons scheme when applied to the equations of special relativistic magnetohydrodynamics.
\begin{figure} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RP-RMHD-Balsara1-P0P3-rho.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RP-RMHD-Balsara1-P0P3-vx.png}} \\ {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RP-RMHD-Balsara1-P0P3-vy.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RP-RMHD-Balsara1-P0P3-By.png}} \end{tabular} \caption{Solution of RMHD-RP1 (see Tab.~\ref{tab:RMHD-RP}) with the fourth order ADER-WENO scheme at time $t=0.4$. The Rusanov Riemann solver has been used over a $400$ cells uniform grid.} \label{fig:Balsara1} \end{center} \end{figure}
\begin{figure} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RP-RMHD-Balsara5-P0P3-rho.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RP-RMHD-Balsara5-P0P3-vx.png}} \\ {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RP-RMHD-Balsara5-P0P3-vy.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RP-RMHD-Balsara5-P0P3-By.png}} \end{tabular} \caption{Solution of RMHD-RP2 (see Tab.~\ref{tab:RMHD-RP}) with the fourth order ADER-WENO scheme at time $t=0.55$. The Rusanov Riemann solver has been used over a $400$ cells uniform grid.} \label{fig:Balsara5} \end{center} \end{figure}
\subsubsection{RMHD Rotor Problem}
\label{sec:RMHD-Rotor-Problem}
The relativistic version of the magnetic rotor problem, originally proposed by \cite{BalsaraSpicer1999}, has by now become a standard numerical test in RMHD. It describes the evolution of a high density plasma which, at time $t=0$, rotates rapidly with angular velocity $\omega$ and is surrounded by a low density plasma at rest:
\begin{equation} \rho=\left\{\begin{array}{cl} 10 & \text{for}\;\; 0\le r\le 0.1; \\ 1 & \text{otherwise}; \end{array}\right.,~~~ \omega=\left\{\begin{array}{cl} 9.3 & \text{for}\;\; 0\le r\le 0.1; \\ 0 & \text{otherwise}; \end{array}\right.,~~~ {\mathbf{B}} = \left(\begin{array}{c} 1.0 \\ 0 \\ 0 \end{array}\right),~~~ p = 1\,, \quad \gamma=4/3.
\label{eq:MHDrotor_ic} \end{equation}
Due to the rotation, a sequence of torsional Alfv\'en waves is launched outside the cylinder, with the net effect of reducing the angular velocity of the rotor. We have solved this problem over a computational domain $\Omega = [-0.6,0.6]\times[-0.6,0.6]$, discretized by $300\times300$ numerical cells and using a fourth order finite volume scheme with the Rusanov Riemann solver. No taper has been applied to the initial conditions, thus producing true discontinuities right at the beginning. Fig.~\ref{fig:RMHD-Rotor} shows the rest-mass density, the thermal pressure, the relativistic Mach number and the magnetic pressure at time $t=0.4$. We obtain results which are in good qualitative agreement with those available in the literature (see, for instance, \cite{DelZanna2003}, \cite{DumbserZanotti}, \cite{ADER_MOOD_14} and \cite{Kim2014}). We emphasize that for this test the reconstruction of the primitive variables $v^i$ turns out to be more robust than the reconstruction of the products $Wv^i$.
\begin{figure} \begin{center} \begin{tabular}{cc}
{\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RMHD_Rotor_P0P3_rho.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RMHD_Rotor_P0P3_p.png}} \\
{\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RMHD_Rotor_P0P3_M.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.3cm]{./RMHD_Rotor_P0P3_pmag.png}}
\end{tabular}
\caption{Solution of the RMHD rotor problem at time $t=0.4$, obtained with the $\mathbb{P}_0\mathbb{P}_3$ scheme on a uniform grid with $300\times300$ cells. Top panels: rest-mass density (left) and thermal pressure (right). Bottom panels: Mach number (left) and magnetic pressure (right). }
\label{fig:RMHD-Rotor} \end{center} \end{figure}
\subsection{The Baer-Nunziato equations}
\label{sec:BN}
As a genuinely non-conservative system of hyperbolic equations we consider the Baer-Nunziato model for compressible two-phase flow (see also \cite{BaerNunziato1986,SaurelAbgrall,AndrianovWarnecke,Schwendeman,DeledicquePapalexandris,MurroneGuillard}). In the rest of the paper we define the first phase as the solid phase and the second phase as the gas phase, and we will use the subscripts $1$ and $s$, as well as $2$ and $g$, interchangeably. Following \cite{BaerNunziato1986}, we prescribe the interface velocity $\mathbf{v}_I$ and the pressure $p_I$ as $\mathbf{v}_I = \mathbf{v}_1$ and $p_I = p_2$, respectively, although other choices are also possible \cite{SaurelAbgrall}. With these definitions, the system of Baer-Nunziato equations can be cast in the form prescribed by (\ref{NCsyst}) after defining the state vector $\u$ as
\begin{equation} \u=\left( \phi_1\rho_1, \, \phi_1\rho_1 v_1^i, \, \phi_1\rho_1 E_1, \, \phi_2\rho_2, \, \phi_2\rho_2 v_2^i, \, \phi_2\rho_2 E_2, \, \phi_1 \right)\,, \end{equation}
where $\phi_k$ is the volume fraction of phase $k$, with the condition that $\phi_1+\phi_2=1$.
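At variance with the RHD and RMHD cases, the conversion from the conserved to the primitive variables is available in closed form for each phase of the Baer--Nunziato system. As an illustration only (the 1D restriction and the variable names are ours), assuming the stiffened gas equation of state introduced below:
\begin{verbatim}
# Minimal sketch of the per-phase conserved-to-primitive conversion
# for the Baer-Nunziato system with the stiffened gas EOS,
# eps = (p + gamma*pi) / (rho*(gamma - 1)).  Purely algebraic.
def cons2prim_phase(phi, phi_rho, phi_rho_v, phi_rho_E, gamma, pi):
    rho = phi_rho / phi                # phase density
    v   = phi_rho_v / phi_rho          # phase velocity
    E   = phi_rho_E / phi_rho          # specific total energy
    eps = E - 0.5 * v * v              # specific internal energy
    p   = (gamma - 1.0) * rho * eps - gamma * pi
    return rho, v, p

# Left solid state of BNRP1: recovers (rho, u, p) = (1.0, 0.0, 1.0)
print(cons2prim_phase(0.4, 0.4, 0.0, 1.0, 1.4, 0.0))
\end{verbatim}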
The fluxes ${\bf f}^i$, the sources $\bf S$ and the non-conservative matrices ${\bf B}_i$ are then expressed by
\begin{equation} {\bf f}^i=\left[\begin{array}{c} \phi_1\rho_1 v_1^i \\ \phi_1( \rho_1 v_1^i v_1^j + p_1\delta^{ij} ) \\ \phi_1 v_1^i(\rho_1 E_1+p_1) \\ \phi_2\rho_2 v_2^i \\ \phi_2( \rho_2 v_2^i v_2^j + p_2\delta^{ij} )\\ \phi_2 v_2^i(\rho_2 E_2 + p_2) \\ 0 \\ \end{array}\right],~~~ {\bf S}=\left[\begin{array}{c} 0 \\ -\nu (v_1^i - v_2^i) \\ -\nu \v_1 \cdot (\v_1 - \v_2) \\ 0 \\ -\nu (v_2^i - v_1^i)\\ -\nu \v_1 \cdot (\v_2 - \v_1) \\ \mu(p_1-p_2) \\ \end{array}\right]\,, \label{eq:bnsource} \end{equation}
\begin{table}[!t] \begin{center} \renewcommand{\arraystretch}{1.0}
\begin{tabular}{ccccccccc} \hline
& $\rho_s$ & $u_s$ & $p_s$ & $\rho_g$ & $u_g$ & $p_g$ & $\phi_s$ & $t_e$ \\ \hline
\multicolumn{1}{l}{\textbf{BNRP1 \cite{DeledicquePapalexandris}:} } & \multicolumn{8}{c}{ $\gamma_s = 1.4, \quad \pi_s = 0, \quad \gamma_g = 1.4, \quad \pi_g = 0$} \\ \hline
L & 1.0 & 0.0 & 1.0 & 0.5 & 0.0 & 1.0 & 0.4 & 0.10 \\
R & 2.0 & 0.0 & 2.0 & 1.5 & 0.0 & 2.0 & 0.8 & \\ \hline
\multicolumn{1}{l}{\textbf{BNRP2 \cite{DeledicquePapalexandris}:}} & \multicolumn{8}{c}{ $\gamma_s = 3.0, \quad \pi_s = 100, \quad \gamma_g = 1.4, \quad \pi_g = 0$} \\ \hline
L & 800.0 & 0.0 & 500.0 & 1.5 & 0.0 & 2.0 & 0.4 & 0.10 \\
R & 1000.0 & 0.0 & 600.0 & 1.0 & 0.0 & 1.0 & 0.3 & \\ \hline
\multicolumn{1}{l}{\textbf{BNRP3 \cite{DeledicquePapalexandris}:}} & \multicolumn{8}{c}{ $\gamma_s = 1.4, \quad \pi_s = 0, \quad \gamma_g = 1.4, \quad \pi_g = 0$} \\ \hline
L & 1.0 & 0.9 & 2.5 & 1.0 & 0.0 & 1.0 & 0.9 & 0.10 \\
R & 1.0 & 0.0 & 1.0 & 1.2 & 1.0 & 2.0 & 0.2 & \\ \hline
\multicolumn{1}{l}{\textbf{BNRP5 \cite{Schwendeman}:}} & \multicolumn{8}{c}{ $\gamma_s = 1.4, \quad \pi_s = 0, \quad \gamma_g = 1.4, \quad \pi_g = 0$} \\ \hline
L & 1.0 & 0.0 & 1.0 & 0.2 & 0.0 & 0.3 & 0.8 & 0.20 \\
R & 1.0 & 0.0 & 1.0 & 1.0 & 0.0 & 1.0 & 0.3 & \\ \hline
\multicolumn{1}{l}{\textbf{BNRP6 \cite{AndrianovWarnecke}:}} & \multicolumn{8}{c}{ $\gamma_s = 1.4, \quad \pi_s = 0, \quad \gamma_g = 1.4, \quad \pi_g = 0$} \\ \hline
L & 0.2068 & 1.4166 & 0.0416 & 0.5806 & 1.5833 & 1.375 & 0.1 & 0.10 \\
R & 2.2263 & 0.9366 & 6.0 & 0.4890 & -0.70138 & 0.986 & 0.2 & \\ \hline
\end{tabular} \end{center}
\caption{Initial states left (L) and right (R) for the Riemann problems for the Baer-Nunziato equations. Values for $\gamma_i$, $\pi_i$ and the final time $t_e$ are also reported.}
\label{tab.rpbn.ic} \end{table}
\begin{equation} {\bf{B}}_i = \left( {\begin{array}{*{20}{l}} 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0& - p_I \mathbf{e}_i \\ 0&0&0&0&0&0&0&0&0&0& - p_I v_I^i\\ 0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&p_I \mathbf{e}_i \\ 0&0&0&0&0&0&0&0&0&0&p_I v_I^i\\ 0&0&0&0&0&0&0&0&0&0&v_I^i \end{array}} \right), \end{equation}
where $\mathbf{e}_i$ is the unit vector pointing in direction $i$ ($i \in \left\{x,y,z\right\}$), and $\nu$ and $\mu$ are two parameters related to the friction between the phases and to the pressure relaxation.\footnote{In the tests below $\nu$ and $\mu$ are both set to zero.}
\begin{figure}[!htbp] \begin{center} \begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP1-rhos.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP1-rhog.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP1-us.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP1-ug.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP1-ps.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP1-pg.png}
\end{tabular}
\caption{Results for the Baer--Nunziato Riemann problem BNRP1.
The Osher Riemann solver has been used over a uniform grid of $300$ cells.}
\label{fig.bn.rp1} \end{center} \end{figure}
\begin{figure}[!htbp] \begin{center} \begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP2-rhos.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP2-rhog.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP2-us.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP2-ug.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP2-ps.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP2-pg.png}
\end{tabular}
\caption{Results for the Baer--Nunziato Riemann problem BNRP2. The Osher Riemann solver has been used over a uniform grid of $300$ cells.}
\label{fig.bn.rp2} \end{center} \end{figure}
\begin{figure}[!htbp] \begin{center} \begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP3-rhos.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP3-rhog.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP3-us.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP3-ug.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP3-ps.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP3-pg.png}
\end{tabular}
\caption{Results for the Baer--Nunziato Riemann problem BNRP3. The Osher Riemann solver has been used over a uniform grid of $300$ cells.}
\label{fig.bn.rp3} \end{center} \end{figure}
\begin{figure}[!htbp] \begin{center} \begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP5-rhos.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP5-rhog.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP5-us.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP5-ug.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP5-ps.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP5-pg.png}
\end{tabular}
\caption{Results for the Baer--Nunziato Riemann problem BNRP5. The Rusanov Riemann solver has been used over a uniform grid of $300$ cells.}
\label{fig.bn.rp5} \end{center} \end{figure}
\begin{figure}[!htbp] \begin{center} \begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP6-rhos.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP6-rhog.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP6-us.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP6-ug.png} \\
\includegraphics[width=0.45\textwidth]{./BaerNunziatoRP6-ps.png} & \includegraphics[width=0.45\textwidth]{./BaerNunziatoRP6-pg.png}
\end{tabular}
\caption{Results for the Baer--Nunziato Riemann problem BNRP6. The Osher Riemann solver has been used over a uniform grid of $300$ cells.}
\label{fig.bn.rp6} \end{center} \end{figure}
The equation of state is the so-called stiffened gas equation of state,
\begin{equation} \label{eqn.eos} \epsilon_k = \frac{p_k + \gamma_k \pi_k}{\rho_k (\gamma_k -1 )}\,, \end{equation}
which is a simple modification of the ideal gas EOS, where $\pi_k$ is a reference pressure. For brevity, we have solved this system of equations only for a set of one-dimensional Riemann problems, with initial conditions reported in Tab.~\ref{tab.rpbn.ic}. The names of the models, BNRP1, BNRP2, etc., follow the numbering adopted in \cite{USFORCE2}. A reference solution is available for these tests and can be found in \cite{AndrianovWarnecke,Schwendeman,DeledicquePapalexandris}. Each Riemann problem has been solved using a fourth order WENO scheme with $300$ cells uniformly distributed over the range $[-0.5;0.5]$. In Figs. \ref{fig.bn.rp1}-\ref{fig.bn.rp6} we report the comparison among the solutions obtained with ADER-Prim, with ADER-Cons and with the exact solver. In all the tests, with the exception of BNRP2, the ADER-Prim scheme behaves significantly better than the ADER-Cons scheme.
On several occasions, such as for $v_s$ and $v_g$ in BNRP1, or for most of the quantities in BNRP5, the solution provided by ADER-Cons manifests evident oscillations, which are instead strongly reduced, or even absent, when the ADER-Prim scheme is used. The CPU time overhead implied by ADER-Prim is comparatively limited, and never larger than $\sim 20\%$.
\section{Extension to Discontinuous Galerkin and adaptive mesh refinement}
\label{sec:extension}
Although we have so far concentrated on the implementation of the new ADER-Prim scheme in the context of finite volume methods, the same idea can be extended to Discontinuous Galerkin (DG) schemes as well. Incidentally, we note that the interest of computational astrophysics in DG methods is increasing~\cite{Radice2011,Teukolsky2015}, and, especially in the relativistic context, they are expected to play a crucial role in the years to come. In a sequence of papers, we have recently developed a class of robust DG schemes which are able to cope even with discontinuous solutions, by incorporating an \textit{a posteriori} subcell limiter \cite{Dumbser2014,Zanotti2015c,Zanotti2015d}. The whole logic can be briefly summarized as follows. First we assume a \emph{discrete representation} of the solution, in conserved variables, at any given time $t^n$ as
\begin{equation} \label{eqn.ansatz.uh} \mathbf{u}_h(\mbf{x},t^n) = \sum_{l=0}^{N}\Phi_l(\boldsymbol{\xi}) \hat{\mathbf{u}}^n_l= \Phi_l(\boldsymbol{\xi}) \hat{\mathbf{u}}^n_l \quad \mbf{x} \in T_i\,, \end{equation}
in which the polynomials
\begin{equation} \Phi_l(\boldsymbol{\xi}) = \psi_p(\xi) \psi_q(\eta) \psi_r(\zeta) \end{equation}
are built using the spatial Lagrange interpolation polynomials already adopted for the WENO reconstruction. The time evolution of the {\em degrees of freedom} $\hat{\mathbf{u}}^n_l$ is then obtained after considering the weak form of the governing PDE, which leads to
\begin{eqnarray} \label{eqn.pde.nc.gw2} &&\left( \int \limits_{T_i} \Phi_k \Phi_l d\mbf{x} \right) \left( \hat{\mathbf{u}}_l^{n+1} - \hat{\mathbf{u}}_l^{n} \right) + \int\limits_{t^n}^{t^{n+1}} \int \limits_{\partial T_i} \Phi_k \, \left( {\bf \tilde f}\left(\v_h^-, \v_h^+ \right) + \frac{1}{2} \mathcal{D}\left(\v_h^-, \v_h^+ \right) \right) \cdot\mathbf{n} \, dS dt \nonumber \\ && -\int\limits_{t^n}^{t^{n+1}} \int \limits_{T_i} \nabla \Phi_k \cdot \mathbf{F}\left(\v_h \right) d\mbf{x} dt + \int\limits_{t^n}^{t^{n+1}} \int \limits_{T_i} \Phi_k{\bf B}(\v_h) \cdot \mathbf{M} \nabla \v_h \, d\mbf{x} dt = \int\limits_{t^n}^{t^{n+1}} \int \limits_{T_i} \Phi_k {\bf S}\left(\v_h \right) \, d\mbf{x} dt\,, \nonumber \\ \end{eqnarray}
where, just like in Eq.~(\ref{eqn.numerical.flux}), ${\bf \tilde f}$ denotes a numerical flux function and $\mathcal{D}\left(\v_h^-, \v_h^+ \right)$ a path-conservative jump term. Obviously, no spatial WENO reconstruction is needed within the DG framework, and the local spacetime DG predictor $\v_h(\mathbf{x},t)$ entering Eq.~(\ref{eqn.pde.nc.gw2}) will be computed according to the same strategy outlined in Sect.~\ref{sec:Description_of_the_predictor}, although acting directly on the degrees of freedom $\hat{\mathbf{p}}^n_l$ in primitive variables, which are computed from the degrees of freedom $\hat{\mathbf{u}}^n_l$ in conserved variables simply by
\begin{equation} \hat{\mathbf{p}}^n_l = \mathbf{V} \left( \hat{\u}^n_l \right), \qquad \forall l. \end{equation}
The conversion can be done in such a simple way because we use a \textit{nodal} basis $\Phi_l(\mathbf{x})$.
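Because the basis is nodal, the conversion amounts to applying the pointwise map $\mathbf{V}$ to each nodal degree of freedom separately; a minimal sketch (illustrative only, the routine names are placeholders):
\begin{verbatim}
# Convert nodal DG degrees of freedom from conserved to primitive
# variables: one call of the (possibly iterative) conversion per node.
import numpy as np

def convert_dofs(u_hat, cons2prim):
    """u_hat: array of shape (N+1, n_vars) of conserved nodal dofs."""
    p_hat = np.empty_like(u_hat)
    for l in range(u_hat.shape[0]):
        p_hat[l] = cons2prim(u_hat[l])
    return p_hat        # used as initial data for the LSDG predictor

p_hat = convert_dofs(np.ones((4, 5)), lambda u: u)  # identity stand-in
\end{verbatim}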
To summarise, the degrees of freedom $\hat{\mathbf{u}}^n_l$ in conserved variables are first converted into degrees of freedom $\hat{\mathbf{p}}^n_l$ in primitive variables, which are then used as initial conditions for the LSDG predictor, i.e.
\begin{equation} \label{LSDG-2} {\mathbf{u}}_h(\mathbf{x},t^n)\xrightarrow{Cons2Prim} {\mathbf{p}}_h(\mathbf{x},t^n)\xrightarrow{LSDG} {\v}_h(\mathbf{x},t)\,,\hspace{1cm}t\in[t^n;t^{n+1}]\,. \end{equation}
In those cells in which the main scheme of Eq.~(\ref{eqn.pde.nc.gw2}) fails, either because unphysical values of any quantity are encountered, or because strong oscillations appear in the solution which violate the discrete maximum principle, the computation within the troubled cell goes back to the time level $t^n$ and proceeds to a complete re-calculation. In practice, a suitable subgrid is generated just within the troubled cell, and a traditional finite volume scheme is used on the subgrid, with an alternative data representation in terms of cell averages defined for each cell of the subgrid. This approach and the underlying \textit{a posteriori} MOOD framework have been presented in full detail in \cite{CDL1,CDL2,Dumbser2014}, to which we refer the interested reader for a deeper understanding. The resulting ADER-DG scheme in primitive variables can be combined with spacetime adaptive mesh refinement (AMR), so as to resolve the smallest details of the solution in highly complex flows. We refer to \cite{Zanotti2015c,Zanotti2015d} for a full account of our AMR solver in the context of ADER-DG schemes. Here we present three representative tests of the ability of the new ADER-Prim-DG scheme with adaptive mesh refinement, considering the cylindrical expansion of a blast wave in a plasma with an initially uniform magnetic field (see also \cite{Komissarov1999,Leismann2005,DelZanna2007,DumbserZanotti}), as well as the shock problems of Leblanc, Sedov \cite{Sedov1959} and Noh \cite{noh_1987_ecs}.
\subsection{RMHD blast wave problem}
At time $t=0$, the rest-mass density and the pressure are $\rho=0.01$ and $p=1$, respectively, within a cylinder of radius $R=1.0$, while outside the cylinder $\rho=10^{-4}$ and $p=5\times10^{-4}$. Moreover, there is a constant magnetic field $B_0$ along the $x$-direction and the plasma is at rest, while a smooth ramp function between $r=0.8$ and $r=1$ modulates the initial jump between inner and outer values, similarly to \cite{Komissarov1999} and \cite{DelZanna2007}. The computational domain is $\Omega = [-6,6]\times[-6,6]$, and the problem has been solved over an initial coarse mesh with $40\times40$ elements. During the evolution the mesh is adaptively refined using a refinement factor $\mathfrak{r}=3$ along each direction and two levels of refinement. A simple Rusanov Riemann solver has been adopted, in combination with the $\mathbb{P}_3\mathbb{P}_3$ version of the ADER-DG scheme. On the subgrid we are free to choose any finite volume scheme that we wish, and for this specific test we have found it convenient to adopt a second-order TVD scheme. The results for $B_0=0.5$ are shown in Fig.~\ref{fig:RMHD-BlastWave}, which reports the rest-mass density, the thermal pressure, the Lorentz factor and the magnetic pressure at time $t=4.0$. At this time, the solution is composed of an external circular fast shock wave, which is hardly visible in the rest-mass density, and a reverse shock wave, which is compressed along the $y$-direction.
The magnetic field is mostly confined between these two waves, as can be appreciated from the contour plot of the magnetic pressure. The two bottom panels of the figure show the AMR grid (bottom left) and the map of the limiter (bottom right). In the latter we have used the red color to highlight those cells which required the activation of the limiter on the subgrid, while the blue color marks the regular cells. In practice, the limiter is only needed at the inner shock front, while the external shock front is so weak that the limiter is only occasionally activated. These results confirm the ability of the new ADER-Prim scheme to work also in combination with Discontinuous Galerkin methods, and with complex systems of equations like RMHD.
\begin{figure} \begin{center} \begin{tabular}{cc}
{\includegraphics[angle=0,width=7.3cm,height=7.0cm]{./RMHD_BlastWave_P3P3_rho.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.0cm]{./RMHD_BlastWave_P3P3_p.png}} \\
{\includegraphics[angle=0,width=7.3cm,height=7.0cm]{./RMHD_BlastWave_P3P3_lorentz.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.0cm]{./RMHD_BlastWave_P3P3_pmag.png}} \\
{\includegraphics[angle=0,width=7.3cm,height=7.0cm]{./RMHD_BlastWave_P3P3_AMR.png}} & {\includegraphics[angle=0,width=7.3cm,height=7.0cm]{./RMHD_BlastWave_P3P3_limiter.png}}
\end{tabular}
\caption{Solution of the RMHD blast wave at time $t=4.0$, obtained with the ADER-DG $\mathbb{P}_3\mathbb{P}_3$ scheme supplemented with the \textit{a posteriori} second order TVD subcell finite volume limiter. Top panels: rest-mass density (left) and thermal pressure (right). Central panels: Lorentz factor (left) and magnetic pressure (right), with magnetic field lines reported. Bottom panels: AMR grid (left) and limiter map (right) with troubled cells marked in red and regular unlimited cells marked in blue. }
\label{fig:RMHD-BlastWave} \end{center} \end{figure}
\subsection{Leblanc, Sedov and Noh problems}
Here we solve again the classical Euler equations of compressible gas dynamics, on a rectangular domain for the Leblanc problem and on a circular domain in the case of the shock problems of Sedov and Noh. The initial conditions are detailed in \cite{Dumbser-Uuriintsetseg2013,LagrangeMHD,Lagrange3D}. For the low pressure region that is present in the above test problems, we use $p=10^{-14}$ for the Leblanc and the Noh problems. The computational results obtained with very high order ADER-DG $\mathbb{P}_9\mathbb{P}_9$ schemes are depicted in Figures \ref{fig:Leblanc}, \ref{fig:Sedov} and \ref{fig:Noh}, showing an excellent agreement with the exact solution in all cases, apart from the overshoot in the case of the Leblanc shock tube. We stress that all test problems are extremely severe and therefore clearly demonstrate the robustness of the new approach.
\begin{figure} \begin{center} \begin{tabular}{cc}
{\includegraphics[angle=0,width=0.45\textwidth]{./Leblanc-limiter.png}} & {\includegraphics[angle=0,width=0.45\textwidth]{./Leblanc-rho.png}} \\
{\includegraphics[angle=0,width=0.45\textwidth]{./Leblanc-u.png}} & {\includegraphics[angle=0,width=0.45\textwidth]{./Leblanc-e.png}}
\end{tabular}
\caption{Solution of the Leblanc shock tube problem at time $t=6.0$, obtained with the ADER-DG $\mathbb{P}_9\mathbb{P}_9$ scheme supplemented with the \textit{a posteriori} second order TVD subcell finite volume limiter. Top left: Troubled cells highlighted in red and unlimited cells in blue. Top right to bottom right: Comparison with the exact solution using a 1D cut through the 2D solution on 200 equidistant sample points for density, velocity and internal energy.
} \label{fig:Leblanc} \end{center} \end{figure}
\begin{figure} \begin{center} \begin{tabular}{cc}
{\includegraphics[angle=0,width=0.45\textwidth]{./SedovLimiter.png}} & {\includegraphics[angle=0,width=0.45\textwidth]{./Sedov.png}}
\end{tabular}
\caption{Solution of the Sedov problem at time $t=1.0$, obtained with the ADER-DG $\mathbb{P}_9\mathbb{P}_9$ scheme supplemented with the \textit{a posteriori} second order TVD subcell finite volume limiter. Left: Troubled cells highlighted in red and unlimited cells in blue. Right: Comparison with the exact solution along the $x$-axis. }
\label{fig:Sedov} \end{center} \end{figure}
\begin{figure} \begin{center} \begin{tabular}{cc}
{\includegraphics[angle=0,width=0.45\textwidth]{./NohLimiter.png}} & {\includegraphics[angle=0,width=0.45\textwidth]{./Noh.png}}
\end{tabular}
\caption{Solution of the Noh problem at time $t=0.6$, obtained with the ADER-DG $\mathbb{P}_9\mathbb{P}_9$ scheme supplemented with the \textit{a posteriori} second order TVD subcell finite volume limiter. Left: Troubled cells highlighted in red and unlimited cells in blue. Right: Comparison with the exact solution along the $x$-axis. }
\label{fig:Noh} \end{center} \end{figure}
\section{Conclusions}
\label{sec:conclusions}
The new version of ADER schemes introduced in \cite{DumbserEnauxToro} relies on a local space-time discontinuous Galerkin predictor, which is then used for the computation of high order accurate fluxes and sources. This approach has the advantage over classical Cauchy-Kovalewski based ADER schemes \cite{toro1,toro3,toro4,titarevtoro,Toro:2006a,dumbser_jsc,taube_jsc} that it is in principle applicable to general nonlinear systems of conservation laws. However, for hyperbolic systems in which the conversion from conservative to primitive variables is not analytic but only available numerically, a large number of such expensive conversions must be performed, namely one for each space-time quadrature point for the integration of the numerical fluxes over the element interfaces and one for each space-time degree of freedom in the local space-time DG predictor. Motivated by this limitation, we have designed a new version of ADER schemes, valid primarily for finite volume schemes but extendible also to the discontinuous Galerkin finite element framework, in which both the spatial WENO reconstruction and the subsequent local space-time DG predictor act on the primitive variables. In the finite volume context this can be done by performing a double WENO reconstruction for each cell. In the first WENO step, piece-wise polynomials of the conserved variables are computed from the cell averages in the usual way. Then, these reconstruction polynomials are simply \textit{evaluated} at the cell centers, in order to obtain \textit{point values} of the conserved variables. After that, a single conversion from the conserved to the primitive variables is needed in each cell. Finally, a second WENO reconstruction acts on these point values and provides piece-wise polynomials of the primitive variables. The local space-time discontinuous Galerkin predictor must then be reformulated in a non-conservative fashion, supplying the time evolution of the reconstructed polynomials for the primitive variables.
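The pipeline just described can be condensed into a short, runnable toy sketch, in which a minmod-limited linear reconstruction stands in for the actual WENO operator and the trivial 1D Euler conversion (with $\gamma=1.4$) stands in for the general, possibly iterative, cons2prim map; note that for linear polynomials the centre value coincides with the cell average, so step 2 is trivial here:
\begin{verbatim}
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def slopes(q):                          # limited slopes from point data
    dl = np.diff(q, axis=0, prepend=q[:1])
    dr = np.diff(q, axis=0, append=q[-1:])
    return minmod(dl, dr)

def cons2prim(u, gamma=1.4):            # u = (rho, rho*v, E) as columns
    rho, mom, E = u.T
    v = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * v**2)
    return np.column_stack([rho, v, p])

u_avg    = np.random.rand(50, 3) + 1.0  # hypothetical conserved averages
s_u      = slopes(u_avg)                # step 1: reconstruction in u
u_center = u_avg                        # step 2: centre values of u
p_center = cons2prim(u_center)          # step 3: ONE conversion per cell
s_p      = slopes(p_center)             # step 4: reconstruction in p
\end{verbatim}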
For all the systems of equations that we have explored, namely the classical Euler equations, relativistic hydrodynamics (RHD) and magnetohydrodynamics (RMHD), and the Baer--Nunziato equations, we have noticed a significant reduction of spurious oscillations provided by the new reconstruction in primitive variables with respect to the traditional reconstruction in conserved variables. This effect is particularly evident for the Baer--Nunziato equations. In the relativistic regime, there is also an improvement in the ability to capture the position of shock waves (see Fig.~\ref{fig:shock-tube-RS}). To a large extent, the new primitive formulation provides results that are comparable to the reconstruction in characteristic variables. Moreover, for systems of equations in which the conversion from the conserved to the primitive variables cannot be obtained in closed form, such as for the RHD and RMHD equations, there is an advantage in terms of computational efficiency, with reductions of the CPU time of $\sim 20\%$ or more. We have also introduced an additional improvement, namely the implementation of a new initial guess for the LSDG predictor, which is based on an extrapolation in time, similar to Adams--Bashforth-type ODE integrators. This new initial guess is typically faster than those traditionally available, but it is also less robust in the presence of strong shocks. We predict that the new version of ADER based on primitive variables will become the standard ADER scheme in the relativistic framework. This may become particularly advantageous for high energy astrophysics, in which both high accuracy and high computational efficiency are required.
\begin{backmatter}
\section*{Competing interests} The authors declare that they have no competing interests.
\section*{Acknowledgements}
\begin{tabular}{lr} \begin{minipage}[c]{0.8\textwidth} The research presented in this paper was financed by i) the European Research Council (ERC) under the European Union's Seventh Framework Programme (FP7/2007-2013) with the research project \textit{STiMulUs}, ERC Grant agreement no. 278267, and ii) the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 671698 (call FETHPC-1-2014, project \textit{ExaHyPE}). \end{minipage} & \begin{minipage}[c]{0.2\textwidth} \includegraphics[angle=0,width=0.65\textwidth]{./EU_flag_yellow.pdf} \end{minipage} \end{tabular}
We are grateful to Bruno Giacomazzo and Luciano Rezzolla for providing the numerical code for the exact solution of the Riemann problem in RMHD. We would also like to acknowledge PRACE for awarding access to the SuperMUC supercomputer based in Munich (Germany) at the Leibniz Rechenzentrum (LRZ), and ISCRA, for awarding access to the FERMI supercomputer based in Casalecchio (Italy).
\section{Extended Globular Clusters}\label{egc}
Globular clusters (GCs) are typically compact objects with half-light radii of a few pc and with masses in the range $10^4-10^6~M_\odot$. The internal gravitational fields in the compact centres of normal GCs are usually much larger than Milgrom's acceleration constant $a_0$, hence the normal GCs are dominated by Newtonian gravity, and they are not good candidates to test the dynamics of alternative gravities. However, there is a certain fraction of GCs \citep[$\approx 9\%$ for the Galactic GCs in][]{Harris1996} that have extended half-light radii of more than $10\, {\rm pc} $. Extended GCs are widely found around extragalactic systems, and their half-light radii can reach up to $30~\, {\rm pc} $ \citep[][in preparation]{Larsen_brodie2000,Harris_etal2002,Chandar_etal2004,Lee_etal2005,Peng_etal2006,Chies-Santos_etal2007,Georgiev_etal2009,Huxor_etal2011,Bruns_Kroupa2011,Bruens_Kroupa2012}. In contrast to normal GCs, the extended GCs have low density centres, where the acceleration can be below $a_0$. Therefore it is very important to test the dynamics of the extended GCs in different gravities. A test using the outer Galactic clusters was suggested by \citet{Baumgardt_etal2005}, under the assumption that the velocity dispersion profiles in the clusters are isotropic.
\begin{table*} \begin{center}\vskip 0.00cm
\caption{Parameters of isolated mass models for extended globular clusters. The columns from left to right provide the following information about the models: model ID ($1_{\rm st}$ column), total mass ($2_{\rm nd}$ column), Plummer radius $r_P$ ($3_{\rm rd}$ column), number of particles in the simulations ($4_{\rm th}$ column), crossing time of the models in Newtonian gravity ($5_{\rm th}$ column) and in Milgromian gravity ($6_{\rm th}$ column), a characteristic time scale for the Milgromian Plummer model $r_P(GMa_0)^{-1/4}$ ($7_{\rm th}$ column), the stability parameter $\xi$ of the models after re-virialisation ($8_{\rm th}$ column) and the corresponding colours used in Fig. \ref{checkg} ($9_{\rm th}$ column).}
\begin{tabular}{llllllllc} \hline
Models ID & M($M_\odot$) & $r_{\rm P}$ ($\, {\rm pc} $) & N & ${\rm T}_{\rm cross}^{\rm Newt}~(\, {\rm Myr} )$ & ${\rm T}_{\rm cross}^{\rm MD}~(\, {\rm Myr} )$ & $r_P(GMa_0)^{-1/4}~(\, {\rm Myr} )$ & $\xi$& Colour\\ \hline
1 & $10^6$ & 10 & 100000 & 3.53 & 2.73 & 0.89 & 1.11 &Black\\
2 & $10^5$ & 10 & 100000 & 11.69 & 5.70 & 1.58 & 1.65 &Magenta\\
3 & $10^4$ & 10 & 25000 & 36.58 & 10.42 & 2.81 & 2.26 &Cyan\\
4 & $10^5$ & 5 & 100000 & 4.07 & 2.72 & 0.79 & 1.25 &Yellow\\
5 & $10^5$ & 20 & 100000 & 32.53 & 11.70 & 3.16 & 2.37& Green\\ \hline
\end{tabular}
\label{ics} \end{center} \end{table*}
\begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics{checkg.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The gravitational acceleration for the models in Table \ref{ics}. The solid and dotted lines are the accelerations from the Milgromian and Newtonian Poisson equations, respectively. The dashed line is $a_0=3.7\, {\rm pc} \, {\rm Myr} ^{-2}$.}\label{checkg} \end{center} \end{figure}
We use Plummer's density profile \citep{Plummer1911,BT2008} to model the extended GCs,
\begin{equation} \rho(r)=\frac{3M}{4\pi r_{\rm P}^3}\left(1+\frac{r^2}{r_{\rm P}^2}\right)^{-5/2}, \end{equation}
where $r_{\rm P}$ is the scale radius, which is related to the half-mass radius by $r_h\simeq 1.3~r_{\rm P}$, and $M$ is the total mass.
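For an isolated spherical system, the Milgromian acceleration follows algebraically from the Newtonian one, $\mu(g/a_0)\,g=g_N$, which is the relation underlying Fig. \ref{checkg}. A minimal sketch (anticipating the `simple' $\mu$-function adopted in \S \ref{num}; units of pc, Myr and $M_\odot$, with $a_0=3.7\, {\rm pc} \, {\rm Myr} ^{-2}$):
\begin{verbatim}
# Newtonian and Milgromian accelerations of a Plummer sphere.  For the
# simple mu-function, mu(x) = x/(1+x), the spherical relation
# mu(g/a0) g = g_N inverts to g = (g_N + sqrt(g_N^2 + 4 a0 g_N)) / 2.
import numpy as np

G, a0 = 4.498e-3, 3.7          # pc^3 Msun^-1 Myr^-2, pc Myr^-2

def g_newton(r, M, rP):
    return G * M * r / (r**2 + rP**2)**1.5

def g_milgrom(r, M, rP):
    gN = g_newton(r, M, rP)
    return 0.5 * (gN + np.sqrt(gN**2 + 4.0 * a0 * gN))

r = np.logspace(-1, 2, 200)    # radii in pc
# Model 2: max(g_N/a0) ~ 0.47 < 1 everywhere, i.e. in the MD regime
print((g_newton(r, 1.0e5, 10.0) / a0).max())
\end{verbatim}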
We use the method of \citet{Gerhard1991} to construct isotropic N-body Initial Conditions (ICs) in Newtonian gravity for the violent re-virialisation simulations. We summarise the parameters of the mass models in Table \ref{ics}; the parameter $N$ in Table \ref{ics} is the number of N-body particles. The accelerations of the analytical mass models (not from the N-body particles) obtained from the Milgromian (Eq. \ref{poisson}) and Newtonian Poisson equations are shown in Fig. \ref{checkg}. We find that the mass models $2$, $3$ and $5$ are in the deep MD regime, while model $1$ is mostly ($r < 5r_{\rm P}$) in the Newtonian regime and model $4$ is dominated by mild-Milgromian gravity.
\section{Violent phase transition}\label{sim}
The most direct way to study the phase transition in different gravities is to virialise the Newtonian equilibrium models with the Milgromian Poisson equation (Eq. \ref{poisson}). A GC which is shot from the Galactic centre into the outer region experiences such a violent dynamical phase transition if the Galactic orbital time is much shorter than the re-virialisation time scale.
\subsection{Numerical setup}\label{num}
The interpolating function $\mu(X)$ has several popular forms giving the same asymptotic behavior in Eq. \ref{asymptotic}. We shall apply the `simple'-$\mu$ function throughout the rest of this paper \citep{Famaey_Binney2005,Sanders_Noordermeer2007,Wu_etal2007},
\begin{equation} \mu(X)=X/(1+X). \end{equation}
In order to solve the non-linear Poisson equation (Eq. \ref{poisson}), we use the particle-mesh N-body code NMODY \citep{nmody}. NMODY was developed for isolated Milgromian gravity systems and has been well tested \citep{Nipoti_etal2007,Nipoti_etal2008,Nipoti_etal2011}. It solves for the Newtonian potential via a spherical harmonic expansion of the differential Poisson equation and then iterates to obtain the Milgromian potential. For the simulations in this section, we choose a grid resolution of $n_r\times n_{\theta} \times n_{\phi}=256 \times 32 \times 64$, where $n_r,~n_\theta,~n_\phi$ are the numbers of grid cells in the radial, polar and azimuthal dimensions, respectively. The radial grids are defined as $r_i = r_s\times \tan \left[(i+0.5)0.5\pi /(n_r+1)\right]$ with $r_s=20\, {\rm pc} $ and $i=0,1,2,...,n_r$, and the angular grids are equally segmented. NMODY uses the leap-frog scheme to integrate the motions of the particles. The time steps are globally defined as ${\rm d}t= \frac{0.1}{\sqrt{\max |\nabla \cdot {{\bf g}_{\rm int}}|}}$, meaning that there are around $10$ time steps per orbital loop for the particles in the densest region. Therefore the time steps are small enough to avoid artificial run-away of particles in the central region. In order to choose the correct time scale for our simulations, we define the crossing time of the models in this paper as $T_{\rm cross} = r_{\rm 90}/v_{\rm circ,r_{\rm 90}}$, where $r_{\rm 90}$ is the radius enclosing $90\%$ of the total mass of a model, and $v_{\rm circ,r_{\rm 90}}$ is the corresponding circular velocity. In Newtonian dynamics, $v_{\rm circ,r_{\rm 90}}^{\rm Newt} \equiv \sqrt{GM_{90}/r_{90}}$, where $M_{90}$ is $90\%$ of the total mass, while $v_{\rm circ,r_{\rm 90}}^{\rm MD} \equiv (GM_{90}a_0)^{1/4}$ in Milgromian dynamics. The crossing time is the time scale within which the majority of a system virialises. The values of ${\rm T}_{\rm cross}^{\rm Newt}$ (i.e., the crossing times of the Newtonian models) are listed in Table \ref{ics}.
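For reference, the crossing times just defined can be evaluated analytically for a Plummer sphere, since $M(<r)=Mr^3/(r^2+r_{\rm P}^2)^{3/2}$ gives $r_{90}\simeq 3.71~r_{\rm P}$. A minimal sketch (same units as above; the deep-MD circular velocity is used for ${\rm T}_{\rm cross}^{\rm MD}$, so models that are not in the deep-MD regime are reproduced only approximately):
\begin{verbatim}
# Crossing times T_cross = r_90 / v_circ(r_90) of a Plummer model in
# Newtonian and (deep) Milgromian dynamics.  Units: pc, Myr, Msun.
G, a0 = 4.498e-3, 3.7

def t_cross(M, rP):
    f   = 0.9**(2.0 / 3.0)
    r90 = rP * (f / (1.0 - f))**0.5       # ~ 3.71 rP
    M90 = 0.9 * M
    t_newt = r90 / (G * M90 / r90)**0.5   # v_circ = sqrt(G M90 / r90)
    t_md   = r90 / (G * M90 * a0)**0.25   # v_circ = (G M90 a0)^(1/4)
    return t_newt, t_md

print(t_cross(1.0e6, 10.0)[0])   # ~ 3.5 Myr for model 1, cf. Tab. 1
\end{verbatim}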
The crossing times of the models range from $3.5$ to $36.6~\, {\rm Myr} $. The corresponding two-body relaxation times are $t_{\rm rel}~\approx~\frac{0.1N}{\ln N} T_{\rm cross}~\approx~ 100~T_{\rm cross}$, such that two-body-encounter driven processes can be neglected during the simulations. Before shifting to Milgromian gravity, we first virialise the models constructed in \S \ref{egc} in Newtonian gravity for $\approx 25~{\rm T}_{\rm cross}^{\rm Newt}$, to fully phase mix the systems and test the stability of the unperturbed systems. The details of the stability test can be found in Appendix \ref{stability}. Then we evolve the systems in Milgromian gravity for another $\approx 100-200~\, {\rm Myr} $ ($200~\, {\rm Myr} $ for models $3$ and $5$, to ensure that there are more than $10~{\rm T}_{\rm cross}^{\rm MD}$ for each system) to study their re-virialisation process.
\subsection{Time scales}\label{vio_vir}
\subsubsection{Time scale of re-virialisation and the virial ratio}\label{ts_vir}
\begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics[angle=-90]{vir_all.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The virial ratio during the re-virialisation due to the phase transition. Time = 0 corresponds to the instant when Newtonian gravity transits into Milgromian gravity. After about 5 crossing times (see \S \ref{ts_vir} for details) in Milgromian gravity, the systems are in their new equilibrium states.}\label{vir_violent} \end{center} \end{figure}
For a collisionless system, the scalar virial equation should be satisfied if the system is in equilibrium \citep{BT2008}:
\begin{equation} 2K+W=0, \end{equation}
where $K$ is the kinetic energy of the system and $W$ is Clausius' integral, $W=\int \rho {\vec x} \cdot \nabla \Phi d^3x$ \citep[where ${\vec x}$ is the spatial vector,][]{Clausius1870}. \citet{Lynden-Bell1967} showed that for a Newtonian system violently virialising to equilibrium, the time scale is approximately ${3 T_r^*}/{8\pi} \simeq {T_r^*}/{8} $, where $T_r^*$ is the typical radial period of the system at the equilibrium radius. $T_r^*$ and $T_{\rm cross}$ should be of comparable magnitude. Therefore the violent virialisation time is comparable to a crossing time at the equilibrium radius. \citet{Nipoti_etal2007} studied dissipationless collapses of systems in MD and obtained a virialisation time $\propto r(GMa_0)^{-1/4}$ for deep MD systems collapsing from rest (i.e. the initial velocities of the particles are zero). Since our systems evolve from one equilibrium state to another, the time scale for the re-virialisation should be comparable to this violent virialisation time scale in Milgromian gravity. Fig. \ref{vir_violent} shows the time scale of the re-virialisation of the models. The virial ratios $\rm 2K/|W|$ of all the models are smaller than $1$ when the gravity is switched to Milgromian at Time = 0. The particles are accelerated in the deepened potential. We use the definition of a system's `dynamical time' ${\rm T}_{\rm dyn}$ of \citet{Nipoti_etal2007}, i.e. the time at which $\rm 2K/|W|$ reaches its maximum value. ${\rm T}_{\rm dyn}$ is within $0.5-1.0~{\rm T}_{\rm cross}^{\rm MD}$. The varying trend of the time scale for the different models agrees with the violent re-virialisation time scale of \citet{Nipoti_etal2007}. The ${\rm T}_{\rm dyn}$ of re-virialisation is approximately proportional to $ r_{\rm P}(GMa_0)^{-1/4}$ here as well. We show the values of $ r_{\rm P}(GMa_0)^{-1/4}$ in Table \ref{ics}.
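In practice, the virial ratio plotted in Fig. \ref{vir_violent} is measured directly from the particle data; a minimal sketch (hypothetical array names), with Clausius' integral evaluated as a particle sum:
\begin{verbatim}
# Virial ratio 2K/|W| of an N-body snapshot.  g is the total internal
# acceleration returned by the Poisson solver, so that
# W = -sum_i m_i x_i.g_i  (= int rho x.grad(Phi) d^3x, negative for
# a bound system).
import numpy as np

def virial_ratio(m, x, v, g):
    """m: (N,); x, v, g: (N, 3)."""
    K = 0.5 * np.sum(m * np.sum(v * v, axis=1))
    W = -np.sum(m * np.sum(x * g, axis=1))
    return 2.0 * K / abs(W)
\end{verbatim}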
We define the re-virialisation time to be $5\times {\rm T}_{\rm dyn}$, since the amplitude of the oscillation of $\rm 2K/|W|$ around $1$ is smaller than $1.5\%$ at $T>5 {\rm T}_{\rm cross}^{\rm MD}$ in Fig. \ref{vir_violent}, and the systems can then be considered fully re-virialised. From Fig. \ref{vir_violent} we find that the time scale for the systems to re-virialise from Newtonian to Milgromian gravity is rather short, especially for the violent re-virialisation period.
\subsubsection{Lagrangian radii, local time scales and mass profiles}\label{rlag}
\begin{figure}{} \begin{center} \resizebox{8.7cm}{!}{\includegraphics[angle=-90]{rmass_vio10pc1e5.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The evolution of the $10\% - 90\%$ Lagrangian radii for model $2$. At Time = 0, Newtonian gravity switches instantly to Milgromian gravity.}\label{rmass_violent} \end{center} \end{figure}
We study the evolution of the mass profiles of the models by considering their Lagrangian radii. We show one example (model $2$) of the evolution of the $10\%$ to $90\%$ Lagrangian radii in Fig. \ref{rmass_violent}. We define the `dynamical time of each $10\%$ mass shell' (fractional dynamical time ${\rm T}_{\rm dyn}^i$ for short, $i=1,2,...,9$) as the time when its Lagrangian radius collapses to its minimal value. We find that the fractional dynamical time scales, ${\rm T}_{\rm dyn}^i$, are different for different enclosed percentages of mass: the time scale increases with increasing enclosed mass. The Lagrangian radii become smaller when re-virialised to Milgromian gravity, especially in the inner region. The ${\rm T}_{\rm dyn}^i$ are important since they show the maximum time a region can remain frozen in Newtonian gravity. We show the fractional dynamical times ${\rm T}_{\rm dyn}^i$ of all the models in the upper panel of Fig. \ref{tpt}. We find that ${\rm T}_{\rm dyn}^i$ (in units of ${\rm T}_{\rm cross}^{\rm MD}$) as a function of the mass fraction is almost the same for the different models: from $0.3~{\rm T}_{\rm cross}^{\rm MD}$ to $0.7~{\rm T}_{\rm cross}^{\rm MD}$ for $10\%-90\%$ of the enclosed mass. The innermost parts of the systems are frozen in Newtonian gravity for times shorter than $0.3~{\rm T}_{\rm cross}^{\rm MD}$. The outermost parts of the systems are frozen for times shorter than $0.7~{\rm T}_{\rm cross}^{\rm MD}$. For each mass shell, the first oscillation from the initial value to the first maximum of the Lagrangian radius should take approximately a local crossing time. We call the time for the first oscillation of the Lagrangian radius of each shell the `local transition time', ${\rm T}_{\rm transition}^i$. ${\rm T}_{\rm transition}^i$ versus mass fraction has the same trend as ${\rm T}_{\rm dyn}^i$ in the upper panel. The local transition time scales range from $0.4~{\rm T}_{\rm cross}^{\rm MD}$ to $1.1~{\rm T}_{\rm cross}^{\rm MD}$ for $10\%-90\%$ of the enclosed mass. However, there is a deviation between the local transition time curves of the different models when normalised by their own ${\rm T}_{\rm cross}^{\rm MD}$: in the lower panel of Fig. \ref{tpt}, comparing models $1$ (black curve), $2$ (magenta curve) and $3$ (cyan curve), the more massive models have longer time scales; and comparing models $2$ (magenta), $4$ (yellow) and $5$ (green), the models with smaller initial radii have longer time scales.
That is, the models with denser mass distributions, where the gravity is stronger, have longer time scales in units of ${\rm T}_{\rm cross}^{\rm MD}$, while the more diffuse models, whose gravities are weaker, have shorter time scales in units of ${\rm T}_{\rm cross}^{\rm MD}$. This is because after $T_{\rm dyn}^i$ the local particle structures over-collapse in the Milgromian potential and then oscillate back to the equilibrium state, and the diffuse models are closer to the deep-MD equilibrium after $T_{\rm dyn}^i$.
\begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics[angle=0]{tptviolent.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{{\bf Upper panel:} the fractional dynamical times ${\rm T}_{\rm dyn}^i$ at the $10\%-90\%$ Lagrangian radii, in units of the MD crossing time. {\bf Lower panel:} the local transition times ${\rm T}_{\rm transition}^i$ at the $10\%-90\%$ Lagrangian radii, in the same units. The colours are for the different models, as defined in Fig. \ref{checkg}. }\label{tpt} \end{center} \end{figure}
The evolution of the Lagrangian radii implies that the mass profiles of the GCs have changed after re-virialisation. We show the spherically averaged density profiles of the models in the upper panel of Fig. \ref{prof_violent}. The densities, $\rho(r)$, are normalised by $\rho_0=M/r_{\rm P}^3$ so that all the Newtonian models stay on the same curve. In general the central densities become larger and the core radii smaller after the re-virialisation. This agrees with the evolution of the Lagrangian radii. We further note that the diffuse models change their central density more than the compact models. This is because the more diffuse models are further away from equilibrium when the gravity switches from Newtonian to Milgromian.
\subsubsection{Can Palomar 14-like GCs be frozen in Newtonian gravity?}
There are distant GCs like Palomar 14 (hereafter Pal 14), which is located in a deep MD background \citep[$74.7~\, {\rm kpc} $ distance to the Sun,][]{Hilker2006}, but which appear to behave Newtonian, with a small value of the velocity dispersion \citep{Haghi_etal2009}. This kind of GC seems to be a challenge to MD. However, \citet{Gentile_etal2010} argued that Pal 14 is not sufficient to falsify MD, and one of their arguments is that Pal 14 could be on an eccentric orbit around the Milky Way, and that its potential is Newtonian at its pericentre due to the strong external field from the Milky Way: the Newtonian-like potential is frozen when Pal 14 moves to its current position. From Figures \ref{vir_violent}, \ref{rmass_violent} and \ref{tpt} we find that the time scale of the transition is rather short and that the systems go into the MD dominated regime in a few to a few tens of $\, {\rm Myr} $. If the orbital time from the pericentre to the outer stellar halo of the Milky Way (i.e., far enough away that the external field is $\ll a_0$) is shorter than the `dynamical' time, it is possible that a GC may not be in equilibrium and that its potential appears frozen in the Newtonian regime. We briefly discuss the kinematics of the rapid phase transition in the next subsection and present simulations of GCs moving on different radial orbits around the Milky Way in \S \ref{real}, in order to compare the realistic re-virialisation time scale with the orbital time.
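Throughout this and the following subsections, all diagnostics are evaluated in equal-mass shells bounded by the $10\%-90\%$ Lagrangian radii. A minimal sketch of these per-shell quantities (hypothetical array names; equal-mass particles and no net streaming motions are assumed), including the radial velocity dispersion, the anisotropy and the global stability parameter $\xi$ defined in the following sections:
\begin{verbatim}
# Lagrangian radii, sigma_r, beta and xi = 2K_r/K_t from a snapshot.
import numpy as np

def shell_diagnostics(x, v, nshell=10):
    """x, v: (N, 3) arrays of equal-mass particles."""
    r   = np.linalg.norm(x, axis=1)
    vr  = np.sum(x * v, axis=1) / r            # radial velocity
    vt2 = np.sum(v * v, axis=1) - vr**2        # tangential speed^2
    groups = np.array_split(np.argsort(r), nshell)
    r_lag  = [r[g[-1]] for g in groups[:-1]]   # 10%...90% radii
    sig_r, beta = [], []
    for g in groups:
        s2r = np.var(vr[g])                    # sigma_r^2
        s2t = np.mean(vt2[g])                  # sigma_theta^2+sigma_phi^2
        sig_r.append(np.sqrt(s2r))             # (no net rotation assumed)
        beta.append(1.0 - s2t / (2.0 * s2r))
    xi = 2.0 * np.sum(vr**2) / np.sum(vt2)     # equal masses cancel
    return np.array(r_lag), np.array(sig_r), np.array(beta), xi
\end{verbatim}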
\subsection{Kinematics}\label{text_kin}
For the particles in each shell of the $10\%-90\%$ Lagrangian radii, the kinetic energy is defined as
\begin{equation} K_i=\frac{1}{2}\sum_{\rm ip=1}^{\rm N_{\rm parts,i}} m_{\rm ip} v_{\rm ip}^2,~~~~(i=1,2,...,9),\nonumber \end{equation}
where ${\rm N}_{\rm parts,i}$ is the number of equal-mass particles in the $i_{th}$ shell and $m_{\rm ip}$ is the mass of the ${\it ip}_{th}$ particle. We show the evolution of $K_i$ of model $2$ in Fig. \ref{rkin_violent}. The time scale of the first oscillation at the different Lagrangian radii agrees with Fig. \ref{rmass_violent}. Each shell is fully virialised after about $5~{\rm T}_{\rm dyn}^i$. The kinetic energy increases by a factor of $2.7$ in the innermost shell and by a factor of $3.3$ in the outermost shell for this model.
\begin{figure}{} \begin{center} \resizebox{8.7cm}{!}{\includegraphics[angle=-90]{rkin_vio10pc1e5.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The evolution of the kinetic energy within shells of the $10\% - 90\%$ Lagrangian radii of model $2$. $M_i$ is the mass within the $i_{th}$ equal-mass shell. Newtonian gravity switches instantly into Milgromian gravity at Time = 0.}\label{rkin_violent} \end{center} \end{figure}
We also study the increase of $K_i$ for all the models in Fig. \ref{rkin_increase}, i.e., $\frac{K_{i,~{\rm re-virialised}}}{K_{i,~{\rm Newtonian}}}$ versus the fraction of mass. We find that the kinetic energy gain increases mildly with the mass fraction for all the models, i.e., the kinetic energy of the inner parts of the systems increases slightly less than that of the outer parts. The trends of the curves are rather similar for all the models. There is a large oscillation in the cyan curve (i.e., model $3$) since there are fewer particles in this model, and the particle noise is therefore larger in this simulation.
\begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics{rkin_increase.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The increase of the kinetic energy in each shell from Newtonian to Milgromian gravity, i.e., $\frac{K_{i,~{\rm re-virialised}}}{K_{i,~{\rm Newtonian}}}$. The colours are for the different models, as defined in Fig. \ref{checkg}.} \label{rkin_increase} \end{center} \end{figure}
The kinetic energies of the systems evolve significantly during the re-virialisation. Therefore the velocity dispersions should also evolve significantly. We study the radial velocity dispersion and the corresponding anisotropy profiles of the models before and after the re-virialisation. The middle panel of Fig. \ref{prof_violent} shows the evolution of the $\sigma_r(r_i)$ profiles:
\begin{equation}\label{sigmar} \sigma_r^2(r_i) = \frac{1}{N_{\rm parts,i}}\sum_{\rm ip=1}^{\rm N_{\rm parts,i}} (v_{r,{\rm ip}}-{\bar v_{r,i}})^2,\end{equation}
where $r_i$ is the $(i\times 10)\%$ Lagrangian radius, $v_{r,{\rm ip}}$ is the radial velocity of the ${\it ip}_{th}$ particle and ${\bar v_{r,i}}$ is the mean radial velocity of the particles in the $i_{th}$ equal-mass shell. The original Newtonian models have $\sigma_r(r_i)$ profiles (dotted curves) that decrease with increasing radius $r_i$, while the final products can have decreasing profiles for compact systems like models $1$ (black) and $4$ (yellow), or $\sigma_r(r_i)$ profiles that increase mildly at first and then decrease again in the outer region for loose systems like model $2$.
For the same mass (models $2$, $4$ and $5$, i.e., the magenta, yellow and green curves), the peak values of the increasing $\sigma_r(r_i)$ profiles appear at a smaller mass fraction for the models with a larger initial Plummer radius $r_{\rm P}$; however, the amplitudes of the $\sigma_r(r_i)$ profiles do not change significantly with radius. This is different from the Newtonian case: the $\sigma_r(r_i)$ profiles of self-consistent Newtonian models (the dotted curves) are very different if their $r_{\rm P}$ are different.
\begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics{prof_violent.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The spherically averaged density profiles ({\bf upper panel}), radial velocity dispersion ({\bf middle panel}) and anisotropy ({\bf lower panel}, Eq. \ref{beta}) for the models in Table \ref{ics}. The dotted and solid lines correspond to, respectively, models in Newtonian and in Milgromian dynamics.}\label{prof_violent} \end{center} \end{figure}
Since the systems collapse during the re-virialisation, the orbital structures might completely change. \citet{Nipoti_etal2007} showed the anisotropy profiles of their models collapsing from rest in Newtonian and Milgromian gravities, and their models become highly radially anisotropic after the virialisation, especially in their outer parts, where $r>3r_h$ ($r_h$ being the 3-dimensional half-mass radius) for their deep-Milgromian model, and $r>r_h$ for the mild-Milgromian and Newtonian models. Although our ICs are different from those in \citet{Nipoti_etal2007}, the re-virialisation from the Newtonian to the Milgromian regime is similar to the collapse process. We therefore expect to obtain radially anisotropic models from our rapid phase transition as well. The anisotropy profiles $\beta(r_i)$ are defined in the same way as in \citet{BT2008},
\begin{equation}\label{beta} \beta(r_i)\equiv 1-\frac{\sigma_\theta^2(r_i)+\sigma_\phi^2(r_i)}{2\sigma_r^2(r_i)}, \end{equation}
where $\sigma_\theta$ and $\sigma_\phi$ are the polar and azimuthal components of the velocity dispersion, defined analogously to the radial velocity dispersion in Eq. \ref{sigmar}. We show the $\beta(r_i)$ profiles of our final products in the lower panel of Fig. \ref{prof_violent}. Indeed all of the $\beta(r_i)$ profiles are radially anisotropic. The more compact models have larger anisotropic radii (i.e., the radii where the models start to be anisotropic) and are less radially anisotropic, while the more diffuse systems have smaller anisotropic radii and are more radially anisotropic. The $\beta(r_i)$ profiles of models $3$ and $5$ are remarkably radial: they increase almost linearly from the $10\%$ to the $60\%$ enclosed-mass radii, reach $\beta > 0.8$ at the radius containing $60\%$ of the enclosed mass, and then increase only mildly with mass. Therefore diffuse models, whose self-gravity is in the deep-MD regime, change their anisotropy profiles more significantly than the compact models.
\subsection{Phase space distribution during re-virialisation}
In order to quantitatively compare the re-virialisation of systems from Newtonian to Milgromian dynamics with the dissipationless collapse process of \citet{Nipoti_etal2007}, the phase space distributions of the models are studied and three typical examples are shown in Fig. \ref{ps_violent}: models $4$ (left panels), $2$ (middle panels) and $5$ (right panels), which represent systems in weak-, moderate- and deep-Milgromian dynamics, respectively.
The phase space distributions are studied at times of $0.5~{\rm T}_{\rm cross}^{\rm MD}$, $1~{\rm T}_{\rm cross}^{\rm MD}$, $17~{\rm T}_{\rm cross}^{\rm MD}$ and $35~{\rm T}_{\rm cross}^{\rm MD}$. The dynamical times for the bulk of the systems are shown in Fig. \ref{tpt}, and $1~{\rm T}_{\rm dyn}$ roughly amounts to $0.7~{\rm T}_{\rm cross}^{\rm MD}$ for the bulk ($90\%$ enclosed mass) of the systems. Therefore, $35~{\rm T}_{\rm cross}^{\rm MD}$ roughly amounts to $50~{\rm T}_{\rm dyn}$. The times at which the phase space distributions are studied here are analogous to those in \citet[][ $0.5~{\rm T}_{\rm dyn}$, $1.0~{\rm T}_{\rm dyn}$, $44~{\rm T}_{\rm dyn}$]{Nipoti_etal2007}. Within $35~{\rm T}_{\rm cross}^{\rm MD}$ the systems are fully re-virialised, although in the very outer regions there are a few particles which have not yet phase mixed. We note that the number of such particles is actually small: only about $1.5\%$ of the mass lies outside a radius of $10~r_{\rm P}$. At the beginning of the violent re-virialisation ($0.5~{\rm T}_{\rm cross}^{\rm MD}$ and $1~{\rm T}_{\rm cross}^{\rm MD}$), particles fall into the centres of the systems, and some of the particles have already crossed the centres, corresponding to the particles near $r~=~0$. This is similar to the dissipationless collapse. However, the phase space distributions have wider spreads compared to those at the beginning of the dissipationless collapse \citep{Nipoti_etal2007}. The reason is that the phase transitions of our models start from a Newtonian equilibrium state, while the dissipationless collapse processes start from rest. At time $T~=~1.0~{\rm T}_{\rm cross}^{\rm MD}$, the deeper a system is in the Milgromian regime, the more clearly shell-like structures appear near its centre. Those shell-like structures are particles moving in and out of the systems in the central regions. Before the systems are phase mixed, the shell-like structures appear in the outer regions of the systems as time proceeds. At $T~=~17~{\rm T}_{\rm cross}^{\rm MD}$, the weak-Milgromian system has already erased all structures. This implies that it approaches a phase mixed state. However, the other two models still have shell-like structures outside $3~r_{\rm P}$, especially the deep-Milgromian system. This is similar to the case of the dissipationless collapse, since the Milgromian system is less efficient in phase mixing. The enclosed mass within $3~r_{\rm P}$ is $85.4\%$ of the overall mass. Therefore, the regions which contain shell-like structures at $T~=~17~{\rm T}_{\rm cross}^{\rm MD}$ are the outer regions of the systems. We further evolve systems $2$ and $5$ until $T~=~35~{\rm T}_{\rm cross}^{\rm MD}$, and then compare with model $4$. After $35~{\rm T}_{\rm cross}^{\rm MD}$ the shell-like structures of all the models have disappeared, which means the models are fully phase mixed. The phase mixing process in the re-virialisation case is shorter than that in the dissipationless collapse process \citep{Nipoti_etal2007}.
\begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics{ps_violent.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The phase space distribution at different snapshots of the violent re-virialisation for models $4$ (left panels), $2$ (middle panels) and $5$ (right panels).
The radii are scaled by the initial Plummer radius $r_{\rm P}$, and the radial velocities of the particles, $v_r$, are scaled by $v_0=\sqrt{GM/r_{\rm P}}$.}\label{ps_violent} \end{center} \end{figure}
\subsection{Radial instability after the systems have re-virialised}
From the studies of \S \ref{text_kin}, one can see that the diffuse systems going into the deep-Milgromian dynamics regime (models $3$ and $5$) are strongly radially anisotropic from the half-mass radii outwards. There are many radial orbits in the deep-Milgromian systems after re-virialisation. Since the re-virialisation of the systems is a new prediction compared to Newtonian dynamics, it is important to examine whether the re-virialised systems are stable, i.e., whether the radial-orbit instability will redistribute their velocity dispersions towards isotropy. A parameter describing the ratio between the radial and tangential components of the kinetic energy was defined in \citet{Trenti_Bertin2006,Polyachenko_Shukhman1981},
\begin{equation}\label{xi} \xi=\frac{2K_r}{K_t},\end{equation}
to characterise the global stability of a (Newtonian) system. Here $K_r$ and $K_t=K_\theta+K_\phi$ are the radial and tangential components of the kinetic energy tensor, respectively, with $K_r\equiv \sum_{ip=1}^{N}\frac{1}{2}m_{\rm ip}v_{r,{\rm ip}}^2$, where $N$ is the total number of particles of a system. The polar and azimuthal components of the kinetic energy tensor, $K_\theta$ and $K_\phi$ respectively, are defined in the same manner. \citet{Nipoti_etal2011} applied the $\xi$ parameter to Milgromian systems, and found that there is an empirical stability criterion for Milgromian systems: $2.3\le \xi_c \le 2.6$ for the stellar components, where $\xi_c$ is the critical value of $\xi$, which depends on the stellar density and the internal gravitational acceleration. The systems are stable if $\xi \le \xi_c$. The $\xi$ values of the models after re-virialisation are listed in the $8_{\rm th}$ column of Table \ref{ics}. The values of $\xi$ for models $1$, $2$, $3$ and $4$ are smaller than $2.3$, and $\xi$ for model $5$ is $2.37$, which is within the range of $\xi_c$. Therefore models $1$--$4$ are stable, and model $5$ is at the critical limit.
\section{Globulars in the space-varying gravitational field of the Galaxy}\label{real}
We have already studied the violent re-virialisation in \S \ref{sim}, and we showed that the re-virialisation takes only a few crossing times in Milgromian gravity, which is too short to freeze the distant GCs in Newtonian gravity. However, a real star cluster moving in the Galactic potential experiences gradually evolving dynamics. The self-gravity of an equilibrium system moving in such a field is gradually evolving, and the transition between (quasi-)Newtonian and Milgromian gravity may be adiabatic. Thus an adiabatic contraction or expansion process is expected for such a system. However, this has not yet been studied for a system moving in a gravitational background field which is continuously space-varying. Here we are interested in the adiabatic contraction process, since it is a new physical process. We will select and virialise the Newtonian systems from \S \ref{egc} in a strong external field from the Milky Way in \S \ref{prekick}, and then send the systems on different radial orbits along the vertical $z$-axis of the Milky Way in \S \ref{orbits}. We shall compare the time scale of the phase transition, the kinematics and the mass profiles of the final systems in \S \ref{adiabatic}.
\subsection{Background gravitational fields from the MW}\label{select} The internal dynamics of a system embedded in an external field has been studied by \citet{Zhao_Tian2006} and \citet{Wu_etal2007}. The Poisson equation of such a system is linearised at the radii where the system is dominated by the external field. Assuming the external field is along the $z$-axis and using $(x,~y,~z)$ Cartesian coordinates, \begin{equation}\label{constant} \Phi_{\rm int}^{\infty} (x,y,z) = -\frac{GM}{\mu_e \sqrt{(1+\Delta_e)(x^2+y^2)+z^2}}, \end{equation} where $\Phi_{\rm int}^{\infty}$ is the internal potential at infinity (i.e., at radii of the internal system where the self-gravity ${\bf g}_{\rm int}$ is much smaller than ${\bf g}_{\rm ext}$, we can ignore ${\bf g}_{\rm int}$ in Eq. \ref{poisson}), $\mu_e$ is the $\mu$ function of the external field ${\bf g}_{\rm ext}$, and $\Delta_e \equiv \frac{d \ln \mu_e}{d \ln X_e}|_{X_e=|{\bf g}_{\rm ext}|/a_0}$. Another important issue is the tidal field of the background. In \citet{Zhao_Tian2006} the tidal radii of binary systems in MD-like gravity were studied, \begin{equation}\label{rt} r_{\rm tidal}=\left[\frac{M_{\rm int}}{(1+\zeta)M_{\rm ext}}\right]^{\frac{1}{3}}D_0, \end{equation} where $D_0$ is the orbital distance of the internal system, $M_{\rm int}$ is the total mass of the internal system and $M_{\rm ext}$ is the mass of the external system enclosed within the radius $D_0$, $\zeta=-\frac{d\ln g_{\rm int}}{d\ln D_0}=1-\frac{d\ln v_{\rm cir}^2}{d\ln D_0}$, where $\zeta=1$ in deep MD gravity and $\zeta=2$ in Keplerian (or Newtonian) gravity, and $v_{\rm cir}$ is the circular velocity at the radius $D_0$. For radii $r<r_{\rm tidal}$ of the internal system, the tidal field is not important, and the background field is dominated by a homogeneous external field. A numerically homogeneous external field can be added as a boundary condition while solving the Poisson equation \citep{Wu_etal2007,Wu_etal2008}. The same boundary conditions were introduced into NMODY in \citet{Wu_etal2010}. Since Plummer profiles are extended density profiles, it is impossible and unnecessary to study $100\%$ of the enclosed mass. We define the inner $90\%$ of the total mass as the majority of the system and only focus on the dynamics of this inner $90\%$, i.e., within $r\simeq 3.7~r_{\rm P}$. Therefore the homogeneous boundary condition can be applied only if $r_{\rm tidal} > 3.7~r_{\rm P}$ (a short computational sketch of this criterion is given below). We shall model GCs moving in the Milky Way's potential. The Milgromian Besan\c{c}on Milky Way model \citep{Robin_etal2003} was studied by \citet{Wu_etal2007,Wu_etal2008} and \citet{Bienayme_etal2009}, in which the dark matter halo is removed and Milgromian dynamics is applied. We reproduce the Milky Way's Milgromian potential and gravity with the Besan\c{c}on density profile \citep{Robin_etal2003,Wu_etal2007,Wu_etal2008} in this work, using the code NMODY \citep{nmody}, with a resolution of $n_r^{\rm MW}\times n_{\theta}^{\rm MW} \times n_{\phi}^{\rm MW}=500 \times 64 \times 128$ in spherical coordinates $(r,~\theta,~\phi)$. The method of grid segmentation is the same as in \S \ref{num}, and $r_s^{\rm MW} = 10.0\, {\rm kpc} $. The angular resolution of the spherical harmonic expansion is $l_{\max}^{\rm MW}=16$. 
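As a concrete illustration of the criterion just stated, Eq. (\ref{rt}) can be transcribed in a few lines of Python. This is only a sketch: the numbers in the usage example are placeholders and do not correspond to the models of Table \ref{ics}.
\begin{verbatim}
def tidal_radius(M_int, M_ext, D0, zeta):
    """Eq. (rt): r_tidal = [M_int / ((1+zeta) M_ext)]^(1/3) * D0.
    M_int: cluster mass; M_ext: Galactic mass enclosed within D0;
    zeta = 1 in deep-MD gravity, zeta = 2 in the Newtonian regime."""
    return (M_int / ((1.0 + zeta) * M_ext)) ** (1.0 / 3.0) * D0

# hypothetical check of the homogeneous-field condition (units: pc, Msun)
r_P = 10.0
r_t = tidal_radius(M_int=1.0e5, M_ext=5.0e10, D0=5.0e3, zeta=2.0)
print(r_t, r_t > 3.7 * r_P)
\end{verbatim}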
There is a weak external field of $0.01~a_0$ applied to the MW in the direction Sun--Galactic centre, which comes from the combination of the local gravitational attractors, namely the Great Attractor \citep[see][]{Radburn-Smith_etal2006}, the M31 galaxy, and the Coma and Virgo clusters \citep[more details of the modelling of the Galaxy are in][]{Wu_etal2007,Wu_etal2008}. \begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics[angle=0]{rtidal.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The tidal radii of the models from Table \ref{ics} at different Galactocentric distances. The Milgromian Besan\c{c}on Milky Way potential is used here. The colours for the different models are the same as defined in Fig. \ref{checkg}.}\label{rtidal} \end{center} \end{figure} We will move the systems along the vertical $z$-axis of the MW. We calculate the tidal radii of the models from Table \ref{ics} at different Galactocentric distances, from $5\, {\rm kpc} $ to $100\, {\rm kpc} $. The background field of the MW at the position $(x,~y,~z)=(0,~0,~5)\, {\rm kpc} $ is $\approx 2.1a_0$, which is strong enough to suppress the Milgromian effect, and the dynamics of a GC embedded in this external field is Newtonian-like. We show the tidal radii in Fig. \ref{rtidal}. Note that the tidal radii of models $3$ and $5$ (cyan and green curves) are only $\approx 2.5 r_{\rm P}$ when they are placed at the position $(x,~y,~z)=(0,~0,~5)\, {\rm kpc} $. Therefore models $3$ and $5$ are not suited to the homogeneous boundary condition of Eq. \ref{constant}. In principle we could choose a larger Galactocentric distance, say $z=10\, {\rm kpc} $, as the starting point of the moving orbits for models $3$ and $5$. However, the background field from the MW at $(x,~y,~z)=(0,~0,~10)\, {\rm kpc} $ is then weaker, $\approx 1.0a_0$, and the Milgromian effects are significant for models $3$ and $5$ in such an external field. Therefore models $1$, $2$ and $4$ in Table \ref{ics} are more interesting.\footnote{For comparison, we will also show the re-virialisation of models $3$ and $5$ at $(x,~y,~z)=(0,~0,~10)\, {\rm kpc} $ and then move them on radial Galactic orbits.} \begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics{prof_relax.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The spherically averaged density (upper panel), radial velocity dispersion (middle panel) and anisotropy profiles (lower panel, Eq. \ref{beta}) for models $1$ (black), $2$ (magenta) and $4$ (yellow) before (dotted curves) and after (solid) virialisation in strong Galactic fields. Models $3$ (cyan) and $5$ (green) are overplotted for comparison.}\label{relax} \end{center} \end{figure} \subsection{ICs in quasi-Newtonian dynamics}\label{prekick} At the Galactocentric position $(x,~y,~z)=(0,~0,~5)\, {\rm kpc} $, the external field from the MW is strong. Therefore GCs staying at this position are dominated by quasi-Newtonian dynamics, i.e., $\mu(X)=\mu(\frac{|{\bf g}_{\rm ext}|+|{\bf g}_{\rm int}|}{a_0})\approx 1$ in Eq. \ref{poisson}. The boundary conditions of the Poisson equation follow Eq. \ref{constant}. The differences from Newtonian dynamics are: a constant factor of $1/\mu_e$ on the depth of the potential and a dilation factor of $(1+\Delta_e)$ on the shape of the potential. Therefore the quasi-Newtonian potential is deeper by a relative factor of $(1/\mu_e-1)$ and it is prolate. Thus it is necessary to first virialise the systems in the quasi-Newtonian potentials, to ensure the equilibrium of the systems. 
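For reference, the boundary potential of Eq. (\ref{constant}) is a one-line formula. The sketch below is a direct, illustrative transcription; the values of $\mu_e$ and $\Delta_e$ must be supplied by the user from the chosen $\mu$ function and the external field strength.
\begin{verbatim}
import numpy as np

def phi_boundary(x, y, z, GM, mu_e, Delta_e):
    """Internal potential at large radii (Eq. constant) for an external
    field along the z-axis; the (1+Delta_e) dilation of the transverse
    coordinates makes the equipotentials prolate."""
    return -GM / (mu_e * np.sqrt((1.0 + Delta_e) * (x**2 + y**2) + z**2))
\end{verbatim}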
We virialise the models of interest in the background field ${{\bf g}_{\rm ext}}=(0,0,2.1a_0)$ for about $100~\, {\rm Myr} $. We show the density, radial velocity dispersion and anisotropy profiles in Fig. \ref{relax}. We find that after the virialisation the density profiles do not change significantly, although the inner regions of the GCs become denser by a factor of $1-3$. The radial velocity dispersion profiles are systematically shifted, with a $20\%$ increment, while their shapes do not change after the virialisation. The changes in density and radial velocity dispersion are due to the deepening of the potential. We also find that the isotropic models become slightly radially anisotropic after the virialisation (lower panel): the systems become radially anisotropic from their half-mass radii outwards, and the anisotropies reach up to $0.2$ at the radii where $90\%$ of the mass is enclosed. The radial anisotropies come from the asymmetric internal potential caused by the homogeneous external field \citep{Wu_etal2010}: at the radii where the internal and external fields are comparable, the potentials of the internal systems are lopsided. The lopsidedness of the potential changes the orbits in the systems. This is quite similar to the radial anisotropy induced by tidal fields at an early stage in Newtonian dynamics. There are already many contributions on the dynamical evolution of star clusters in tidal fields in Newtonian gravity \citep{Giersz_Heggie1997,Takahashi_Lee2000,Baumgardt_Makino2003,Lee_etal2006}. In Newtonian dynamics, the tidal fields induce a strong radial anisotropy in the outer regions of the star clusters at an early stage, whereas the star clusters remain isotropic in the inner regions \citep{Takahashi_Lee2000}. The radial anisotropy in the outer parts disappears quickly with time in realistic tidal fields \citep{Baumgardt_Makino2003}. Finally, the outer regions of star clusters in Newtonian tidal fields become tangentially anisotropic \citep{Giersz_Heggie1997,Lee_etal2006}. We note that in MD this is not a tidal effect but the external field effect (EFE), which plays a similar role: the tidal radii (Eq. \ref{rt}) are much larger than the radii at which the EFE becomes important. Comparing Fig. \ref{relax} with Fig. \ref{prof_violent}, we find that the anisotropy introduced by the EFE is milder than that introduced by the phase transition. However, for the most compact system, model $1$, the anisotropies introduced by the two effects are comparable. This is because model $1$ is so compact that it is mostly dominated by Newtonian dynamics, and it does not evolve as much as the other, more diffuse, models during the phase transition. In Fig. \ref{relax} we also virialise models $3$ and $5$ in the background field at the Galactic position $(x,~y,~z)=(0,~0,~10)\, {\rm kpc} $ for comparison. The external field there is as strong as $1.0~a_0$. We overplot the re-virialised initial Newtonian models in this Milgromian external field (cyan and green curves in Fig. \ref{relax}). We find that the density and anisotropy profiles of models $3$ and $5$ change more than those of the other models: the central densities are more concentrated and the anisotropy in the outer regions is more radial, since at $(x,~y,~z)=(0,~0,~10)\, {\rm kpc} $ the Galactic field is weaker and the deviation from Newtonian dynamics becomes important. With a Galactic gravitational field of $1.0~a_0$, the Newtonian and Milgromian accelerations are comparable. 
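The anisotropy profiles of Fig. \ref{relax} are measured in spherical shells. As an illustration, and assuming the standard definition $\beta = 1 - (\sigma_\theta^2+\sigma_\phi^2)/(2\sigma_r^2)$ of Eq. \ref{beta} with net rotation neglected, a minimal Python sketch is given below; \texttt{pos}, \texttt{vel} and \texttt{r\_edges} are hypothetical arrays.
\begin{verbatim}
import numpy as np

def beta_profile(pos, vel, r_edges):
    """beta(r) = 1 - <v_t^2> / (2 var(v_r)) per spherical shell;
    net rotation is neglected so that sigma_t^2 ~ <v_t^2>."""
    r = np.linalg.norm(pos, axis=1)
    v_r = np.einsum('ij,ij->i', pos, vel) / r       # radial component
    v_t2 = np.einsum('ij,ij->i', vel, vel) - v_r**2
    beta = np.full(len(r_edges) - 1, np.nan)
    for k in range(len(beta)):
        s = (r >= r_edges[k]) & (r < r_edges[k + 1])
        if s.sum() > 1:
            beta[k] = 1.0 - np.mean(v_t2[s]) / (2.0 * np.var(v_r[s]))
    return beta
\end{verbatim}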
\subsection{Radial orbits of GCs}\label{orbits} The models are fully virialised for about $100~\, {\rm Myr} $ in the strong field in Milgromian gravity (for accelerations $\gg a_0$, Milgromian gravity becomes identical to Newtonian gravity). We shall move the virialised models ($1,~2$ and $4$) in the Galactic potential from near the Galactic centre, i.e., from a position at $(x,~y,~z)=(0,~0,~5)\, {\rm kpc} $. The external field changes fastest along the polar direction on a pure radial orbit for a given initial velocity. Besides, the major part of a system should be enclosed within its tidal radius, so we need to avoid GCs moving near the Galactic disc plane. Therefore we choose orbits along the polar direction, with the only non-zero component of the initial velocity along the $z$-axis. We set up a grid resolution of $n_r^{\rm GC}\times n_{\theta}^{\rm GC} \times n_{\phi}^{\rm GC}=100 \times 32 \times 64$, where the radial scaling parameter is $r_s^{\rm GC}=20\, {\rm pc} $ and $l_{\max}^{\rm GC}=6$ since the GC models are spherically symmetric. The time steps are defined as in \S \ref{num}. In each simulation, we solve the Poisson equation for the Milky Way and store the gravitational acceleration field and potential on the grid, and then we interpolate the Galactic acceleration field and potential to the point where the centre of mass (CoM) of the GC is. Applying the boundary conditions of Eq. \ref{constant} to the GC, the Poisson equation for the GC is solved. The positions and velocities of the particles of the GC are updated each time step with the leap-frog scheme, as in \S \ref{num}. The new CoM of the GC is calculated at the end of the time step, and the external field (EF) at the new CoM of the GC is interpolated from the MW acceleration field and potential. At each time step of the GC, the EF is thus updated from the CoM position, such that the GC is embedded in a space-varying external field (a schematic sketch of this coupling loop is given at the end of this subsection). We choose a range of initial velocities, $v_z=200\, {\rm km \, s}^{-1} ,~300\, {\rm km \, s}^{-1} ,~400\, {\rm km \, s}^{-1} ,~500\, {\rm km \, s}^{-1} ,~600\, {\rm km \, s}^{-1} $, to move the GCs from the quasi-Newtonian regime to the Milgromian regime in the Galactic background field on different time scales. We show the orbits in Fig. \ref{figorb}. We find that orbits with an initial velocity of $200\, {\rm km \, s}^{-1} $ and $300\, {\rm km \, s}^{-1} $ reach their apocentres at small radii, at about $8\, {\rm kpc} $ and $18\, {\rm kpc} $ respectively, and they cannot propagate to the outer part of the Galaxy. Thus these orbits are not of interest here. The dynamics of GCs moving on orbits with $v_z=~400\, {\rm km \, s}^{-1} ,~500\, {\rm km \, s}^{-1} ,~600\, {\rm km \, s}^{-1} $ is studied in \S \ref{adiabatic}. We move the GCs on these orbits for $\approx 500\, {\rm Myr} $, which is long enough for the GCs to reach their apocentres or the outer Galactic regime ($r\approx 90\, {\rm kpc} $) where the distant Galactic GCs are observed \citep{Baumgardt_etal2005,Bellazzini2007}. We point out that the apocentre for the orbit with $v_z=~400\, {\rm km \, s}^{-1} $ is about $72\, {\rm kpc} $. We study the kinematics of the GCs at their apocentres, or at a Galactocentric distance of $90\, {\rm kpc} $ if their apocentres are even more distant. The orbital times from the initial position to the apocentre or to $z=90\, {\rm kpc} $ are: $360\, {\rm Myr} $ for $v_z=400\, {\rm km \, s}^{-1} $, $270 \, {\rm Myr} $ for $v_z=500\, {\rm km \, s}^{-1} $ and $180\, {\rm Myr} $ for $v_z=600\, {\rm km \, s}^{-1} $. 
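The coupling between the internal dynamics of a GC and the space-varying Galactic field, as described above, can be summarised schematically. The sketch below shows one possible kick-drift-kick arrangement; \texttt{mw\_field} and \texttt{solve\_cluster\_field} are placeholders standing for the interpolation of the stored Milky Way field and for the NMODY Poisson solve with the boundary condition of Eq. \ref{constant}, and the actual implementation may differ in detail.
\begin{verbatim}
def evolve_on_orbit(pos, vel, com_x, com_v, mass, dt, n_steps,
                    mw_field, solve_cluster_field):
    """Schematic GC/Milky-Way coupling: at every step the external
    field (EF) is re-interpolated at the centre of mass (CoM), so the
    cluster feels a space-varying EF along its orbit."""
    for _ in range(n_steps):
        g_ext = mw_field(com_x)                  # EF at current CoM
        g_int = solve_cluster_field(pos, mass, g_ext)
        vel += 0.5 * dt * g_int                  # internal kick
        com_v += 0.5 * dt * g_ext                # orbital kick
        pos += dt * vel                          # drift
        com_x += dt * com_v
        g_ext = mw_field(com_x)                  # EF at updated CoM
        g_int = solve_cluster_field(pos, mass, g_ext)
        vel += 0.5 * dt * g_int                  # closing kicks
        com_v += 0.5 * dt * g_ext
    return pos, vel, com_x, com_v
\end{verbatim}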
We note that the orbital time over which the external field changes from strong to weak is much longer than the re-virialisation time of $5\times {\rm T}_{\rm dyn}$; therefore the GCs are re-virialised gradually in the slowly evolving gravitational field, and the phase transitions are adiabatic. For models $3$ and $5$, the starting points of the Galactic orbits are $(x,~y,~z)=(0,~0,~10)~\, {\rm kpc} $, and the models move on the same Galactic orbits as models $1,~2$ and $4$. The initial velocities are interpolated at the starting point from the above Galactic orbits (see Fig. \ref{figorb}). \begin{figure}{} \begin{center} \resizebox{8.5cm}{!}{\includegraphics[angle=-90]{orbits.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The radial orbits with different initial velocities starting from the position $(x,~y,~z)=(0,~0,~5)\, {\rm kpc} $. {\bf Left panel:} Galactocentric distance versus orbital time. {\bf Right panel:} Galactocentric distance versus radial velocity.}\label{figorb} \end{center} \end{figure} \subsection{Adiabatically evolving systems}\label{adiabatic} \subsubsection{Virial ratios} \begin{figure}{} \begin{center} \resizebox{8.7cm}{!}{\includegraphics[angle=-90]{vir_kick.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{Upper panel: the virial ratios of models $1$ (black), $2$ (magenta) and $4$ (yellow) moving on radial orbits in the MW potential, with initial velocities of $400\, {\rm km \, s}^{-1} $ (solid), $500\, {\rm km \, s}^{-1} $ (dot-dashed) and $600\, {\rm km \, s}^{-1} $ (dashed). Lower panel: the virial ratios of models $3$ (cyan) and $5$ (green) for comparison. The values of ${\rm T}_{\rm cross}^{\rm MD}$ can be found in Table \ref{ics}.}\label{vir_kick} \end{center} \end{figure} The upper panel of Fig. \ref{vir_kick} shows the virial ratios of the GCs moving on the different orbits. We find that the systems are not violently re-virialised, in contrast with the case of the rapid phase transition. The virial ratios deviate from $1$ by at most $7\%$ within the first $5$ crossing times. For the compact models $1$ and $4$, the deviation is within $1\%$ in the first $5$ crossing times. Since the external fields are evolving, there is a noise of $1.5\%$ at $T \approx 18~{\rm T}_{\rm cross}^{\rm MD}$. Thereafter the fluctuations of the virial ratios are within $1\%$ around $1$. Therefore the systems are only slightly out of virial equilibrium at the beginning of a Galactic orbit, where they are close to the Galactic centre and the gravitational field changes fastest. As we see from Fig. \ref{vir_kick}, from $5~{\rm T}_{\rm cross}^{\rm MD}$ onwards the systems are in equilibrium since the external field changes gradually. The differences in the time scales on which the systems reach equilibrium, caused by the different initial velocities, are negligible. Thus the systems evolve adiabatically and the collapse process is much more moderate. The systems are very unlikely to be frozen in Newtonian dynamics when they move into the Milgromian regime, even on an orbit with an initial out-going velocity as high as $600~\, {\rm km \, s}^{-1} $. The virial ratios of models $3$ and $5$ are shown in the lower panel of Fig. \ref{vir_kick}. The amplitudes of the deviations from virial equilibrium are about $7\%$, much smaller than in the violent re-virialisation. 
The virial ratios oscillate around $1$ with a larger noise, since models $3$ and $5$ are initially further away from virial equilibrium at their initial Galactic position $(x,~y,~z)=(0,~0,~10)\, {\rm kpc} $. The time scale of their phase transition is very similar to that of models $1,~2$ and $4$: from $7~{\rm T}_{\rm cross}^{\rm MD}$ onwards the systems are in virial equilibrium. This confirms again that systems remain frozen in quasi-Newtonian dynamics for only a few ${\rm T}_{\rm cross}^{\rm MD}$, and that the dynamics of the models quickly becomes Milgromian on their radial Galactic orbits. \subsubsection{Lagrangian radii and mass profiles} \begin{figure}{} \begin{center} \resizebox{8.7cm}{!}{\includegraphics[angle=-90]{rmass_kick10pc1e5.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The evolution of the Lagrangian radii of model $2$ on different Galactic orbits starting from $(x,~y,~z)=(0,~0,~5)\, {\rm kpc} $ and ending at their apocentres or at $r^{\rm MW}=90\, {\rm kpc} $ if the apocentres are even further away (see Fig. \ref{figorb}), with different initial velocities: $v_z=400\, {\rm km \, s}^{-1} $ ({\bf left panel}), $500\, {\rm km \, s}^{-1} $ ({\bf middle panel}) and $600\, {\rm km \, s}^{-1} $ ({\bf right panel}).}\label{rmass_kick} \end{center} \end{figure} \begin{table*} \begin{center}\vskip 0.00cm \caption{Axis-ratios of the mass distribution of the GCs: the first column shows the ID of the models in Table \ref{ics}, the second column shows the axis-ratios (see Eq. \ref{ixx}) of the GCs at $T~=~0$ near the Galactic plane, at the Galactocentric position $(0,~0,~5)\, {\rm kpc} $ for models $1,~2$ and $4$, and $(0,~0,~10)\, {\rm kpc} $ for models $3$ and $5$. The third to the fifth columns show the axis-ratios of the GCs in the outer regime of the Milky Way. The two parts of the table show the axis-ratios within $r_{50}$ (upper part) and $r_{90}$ (lower part). } \begin{tabular}{lllll} \hline Models& Inner MW & Apocentre& $r^{\rm MW}=90\, {\rm kpc} $ &$r^{\rm MW}=90\, {\rm kpc} $\\ & &$v_z=400\, {\rm km \, s}^{-1} $ &$v_z= 500\, {\rm km \, s}^{-1} $&$v_z=600\, {\rm km \, s}^{-1} $\\ \hline & $(a:b:c)_{r_{50}}$ & & &\\ \hline 1 & $1: 0.98: 0.99$ &$1: 0.99: 0.99$ &$1: 0.99: 0.98$ &$1: 0.99: 0.99$\\ 2 & $1: 0.98: 0.97$ &$1: 0.99: 0.99$ &$1: 1.00: 0.99$ &$1: 1.01: 1.01$\\ 3 & $1: 0.96: 0.95$ &$1: 0.93: 0.92$ &$1: 0.97: 0.96$ &$1: 0.96: 0.97$\\ 4 & $1: 0.97: 0.96$ &$1: 0.99: 0.99$ &$1: 1.00: 0.99$ &$1: 0.99: 0.99$\\ 5 & $1: 0.95: 0.95$ &$1: 0.96: 0.96$ &$1: 0.98: 0.97$ &$1: 0.97: 0.96$\\ \hline & $(a:b:c)_{r_{90}}$ & & & \\ \hline 1 &$1: 0.98: 0.97$&$1: 0.99: 0.99$ &$1: 0.99: 0.99$ & $1: 0.99: 0.99$\\ 2 &$1: 0.94: 0.94$&$1: 0.97: 0.98$ & $1: 0.99: 0.99$ & $1: 0.99: 0.99$\\ 3 &$1: 0.88: 0.88$&$1: 0.88: 0.89$ & $1: 0.92: 0.92$ & $1: 0.91: 0.92$\\ 4 &$1: 0.95: 0.95$&$1: 0.99: 0.99$ & $1: 0.99: 0.99$ & $1: 0.99: 0.99$\\ 5 &$1: 0.89: 0.89$&$1: 0.92: 0.92$ & $1: 0.94: 0.94$ & $1: 0.93: 0.94$\\ \hline \end{tabular} \label{ell} \end{center} \end{table*} We show the evolution of the Lagrangian radii of model $2$ in Fig. \ref{rmass_kick}. The three panels correspond to different initial velocities of the Galactic orbits: $v_z=400\, {\rm km \, s}^{-1} $ (left panel), $500\, {\rm km \, s}^{-1} $ (middle panel) and $600\, {\rm km \, s}^{-1} $ (right panel). 
The GC moves from the Galactic position $(x,~y,~z)=(0,~0,~5)\, {\rm kpc} $ to the apocentre of the orbit (for the orbit with $v_z=400\, {\rm km \, s}^{-1} $) or to $r^{\rm MW}=90\, {\rm kpc} $, where the most distant GC is observed. Compared to Fig. \ref{rmass_violent}, we find that the adiabatic evolution is significantly different from the violent evolution: the Lagrangian radii decrease rapidly during the first $\approx 20{\rm T}_{\rm dyn}^i$ (see Fig. \ref{tpt}), and then remain almost constant, with tiny oscillations. There are no significant oscillations of the radii within the first $20{\rm T}_{\rm dyn}^i$. This agrees with the evolution of the virial ratios in Fig. \ref{vir_kick}. We also find that the final stable Lagrangian radii of the same fraction of enclosed mass do not change with the Galactic orbit, i.e., model $2$ moving on different radial orbits has the same Lagrangian radii when it is in the outer regime of the Galaxy. This implies that a GC in the outer regime of the MW will have a universal mass profile, which does not depend on the Galactic orbit. The evolution of the Lagrangian radii of the other models is similar to that of model $2$. We study the mass profiles of the GCs embedded in a strong field and in weak fields (at the apocentres of the different orbits, or at $r^{\rm MW}=90\, {\rm kpc} $) in the upper panel of Fig. \ref{prof_kick}. We find that the densities of the GCs after adiabatic compression are very similar to those in the case of violent collapse: the densities in the core region become larger while in the outer region, at $r>3r_{\rm P}$, they become smaller. For the same model, the final density profiles for different Galactic orbits are very similar. We also study the axis-ratios of the $50\%$ and $90\%$ enclosed mass of the initial GCs at the Galactocentric position $(0,~0,~5)\, {\rm kpc} $ and of the final products (GCs at large radii of the Galaxy). The axis-ratios are defined as $a:b:c=\sqrt{I_{xx}}:\sqrt{I_{yy}}:\sqrt{I_{zz}}$, where $I_{xx},~I_{yy}$ and $I_{zz}$ are the diagonal elements of the normalised moment of inertia tensor \citep{Gerhard1983}: \begin{eqnarray}\label{ixx} I_{xx} = \frac{1}{N_{\rm parts}}\sum_{\rm ip=1}^{N_{\rm parts}} \frac{(y_{\rm ip}^2+z_{\rm ip}^2)}{r_{\rm ip}^2},\nonumber \\ I_{yy} = \frac{1}{N_{\rm parts}}\sum_{\rm ip=1}^{N_{\rm parts}} \frac{(x_{\rm ip}^2+z_{\rm ip}^2)}{r_{\rm ip}^2}, \\ I_{zz} = \frac{1}{N_{\rm parts}}\sum_{\rm ip=1}^{N_{\rm parts}} \frac{(x_{\rm ip}^2+y_{\rm ip}^2)}{r_{\rm ip}^2}, \nonumber \end{eqnarray} where $r_{\rm ip}=\sqrt{x_{\rm ip}^2+y_{\rm ip}^2+z_{\rm ip}^2}$, and $N_{\rm parts}$ is the number of particles enclosed in the ellipsoid $\left({x\over a}\right)^2+\left({y\over b}\right)^2+\left({z\over c}\right)^2=r^2_{50} ~({\rm or}~=~r_{90}^2)$, where $r_{50}$ and $r_{90}$ are the radii enclosing $50\%$ and $90\%$ of the mass, respectively. The axis-ratios of the GCs are listed in Table \ref{ell} (a minimal computational sketch of Eq. \ref{ixx} is given below). When the models are at the starting points of their orbits, they are embedded in a strong external field; therefore the internal potentials of the GCs are prolate, especially in the outer parts \citep{Wu_etal2007,Wu_etal2008}. This is even more significant for models $3$ and $5$, since their internal structures are more diffuse than those of the other models. We find that at their half-mass radii (i.e., $r_{50}$) the models are slightly prolate, since they are dominated by their internal fields at $r_{50}$. 
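As an aside, the axis-ratios of Eq. (\ref{ixx}) can be estimated with a few lines of Python. The sketch assumes equal-mass particles (so that a mass fraction is a particle-number quantile) and uses a simple spherical selection instead of an iterative ellipsoidal one; \texttt{pos} is a hypothetical $(N,3)$ array of positions relative to the cluster centre.
\begin{verbatim}
import numpy as np

def axis_ratios(pos, mass_fraction):
    """a:b:c = sqrt(I_xx):sqrt(I_yy):sqrt(I_zz) (Eq. ixx), normalised
    to the first axis, from the particles inside the radius enclosing
    the requested mass fraction (equal particle masses assumed)."""
    r = np.linalg.norm(pos, axis=1)
    p = pos[r < np.quantile(r, mass_fraction)]
    r2 = np.sum(p**2, axis=1)
    I_xx = np.mean((p[:, 1]**2 + p[:, 2]**2) / r2)
    I_yy = np.mean((p[:, 0]**2 + p[:, 2]**2) / r2)
    I_zz = np.mean((p[:, 0]**2 + p[:, 1]**2) / r2)
    a = np.sqrt(I_xx)
    return 1.0, np.sqrt(I_yy) / a, np.sqrt(I_zz) / a
\end{verbatim}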
At the radii of $r_{90}$, the deviation from a spherical shape is larger; for model $2$ the axis-ratio is $1:~0.94:~0.94$. The GCs at the end of their out-going orbits are very close to spherical symmetry after the compression process: there is a deviation from a sphere of only $1\%$ in both the inner and the outer parts of most of the models, and for all radial orbits. For the moderately diffuse system, model $2$ on an orbit with $v_z=400\, {\rm km \, s}^{-1} $, the axis-ratios at $r_{90}$ are slightly prolate, $1:0.97:0.98$. This is because model $2$ is more diffuse than models $1$ and $4$, and at the radius of $r_{90}$ and the apocentre of the Galactic orbit, the external field can still affect the outer parts. When model $2$ moves further away, say to $r^{\rm MW}=90\, {\rm kpc} $, the external field is weaker than at the apocentre of the first Galactic orbit, and the model becomes spherically symmetric again. Models $3$ and $5$ confirm this trend: they are intrinsically even more diffuse than model $2$, and therefore they are even more prolate at both $r_{50}$ and $r_{90}$. However, the deviations from spherical symmetry are nearly negligible for all models. \begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics{prof_kick.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The spherically averaged density $\rho(r)$ ({\bf upper panel}), radial velocity dispersion $\sigma_r(r)$ ({\bf middle panel}) and anisotropy profiles $\beta(r)$ ({\bf lower panel}) for models $1$ (black), $2$ (magenta) and $4$ (yellow) in a strong external field (dotted curves) and in weak fields. The systems move on radial polar orbits of the Galaxy with different initial velocities, the line types showing the state of the systems at their apocentres: $v_z=400\, {\rm km \, s}^{-1} $ (solid), $500\, {\rm km \, s}^{-1} $ (dot-dashed) and $600\, {\rm km \, s}^{-1} $ (dashed). The colours and line types are defined as in Fig. \ref{vir_kick}.}\label{prof_kick} \end{center} \end{figure} \subsubsection{Velocity dispersion and anisotropy}\label{aniso-kick} The radial velocity dispersion profiles are studied in the middle panel of Fig. \ref{prof_kick}. The shapes of the final radial velocity dispersion profiles are very similar to those of the violent re-virialisation. However, there are small differences between the profiles of GCs collapsing violently and adiabatically, especially for the diffuse GC, model $2$, and the most diffuse models, $3$ and $5$. Compared to the middle panel of Fig. \ref{prof_violent}, models $2$, $3$ and $5$ have increasing and then decreasing profiles in the intermediate region between $r_{50}$ (where $\frac{M(r)}{M}=0.5$) and $r_{90}$ (where $\frac{M(r)}{M}=0.9$) in the violent collapse cases, while the $\sigma_r(r)$ profiles are almost flat in the case of adiabatic collapse. For the other two models the $\sigma_r(r)$ profiles are almost the same for adiabatic and violent collapses. Moreover, we find that the $\sigma_r(r)$ profiles of the final products are independent of their Galactic orbits: different Galactic orbits lead to similar $\sigma_r(r)$ profiles when the GCs are in the outer regime of the Galaxy. There are larger differences in the anisotropy profiles when comparing with the case of violent re-virialisation: the GCs become much more radially anisotropic if they collapse violently. This is very clear from comparing the lower panels of Figs. \ref{prof_violent} and \ref{prof_kick}. 
For model $2$, $\beta(r_{90})=0.8$ for violent compression while $\beta(r_{90})$ is only $0.4$ for adiabatic compression. For the most diffuse models, $3$ and $5$, this is even more significant: the values of $\beta(r_{90})$ are $0.85-0.90$ for violent re-virialisation while they are only $0.45-0.6$ for adiabatic compression. The more compact models ($1$ and $4$) show the same trend, although less significantly than model $2$, since among the three models model $2$ is the deepest in MD. Moreover, the final $\beta(r)$ profiles of the different GCs are independent of the Galactic orbits. There is only one exception, model $3$, for which the difference in anisotropy, $\delta \beta(r_{90})$, is up to $0.2$. The large $\delta \beta(r_{90})$ comes from there being fewer particles for model $3$, so the particle noise is larger. Thus the anisotropy profiles are related to the internal structures of the GCs. The faster the systems move from the inner to the outer Galaxy, the more radially anisotropic the systems are. However, neither the $\sigma_r(r)$ profiles nor the $\beta(r)$ profiles are related to the orbital history. That is, the internal kinematics of a GC that is moving on an eccentric Galactic orbit and is currently in the Milgromian regime depends only on the internal structure of the GC rather than on the details of the orbital history. This is good news, and it makes the predictions of Milgromian dynamics less complex. \subsubsection{Phase space distribution with adiabatic re-virialisation} The phase space distributions of systems with adiabatic re-virialisation are presented in Fig. \ref{ps_kick500}, which can be compared with the case of violent re-virialisation. One Galactic orbit is selected from Fig. \ref{figorb}, the purple curves, i.e., the initial Galactic position of the cluster is at $r_{\rm MW}~=~5\, {\rm kpc} $ and the initial velocity is $500~\, {\rm km \, s}^{-1} $. We show in Fig. \ref{ps_kick500} the models $4$ (left panels), $2$ (middle panels) and $5$ (right panels) at time snapshots of $0.5~{\rm T}_{\rm cross}^{\rm MD}$, $1.0~{\rm T}_{\rm cross}^{\rm MD}$, $17~{\rm T}_{\rm cross}^{\rm MD}$ and at the apocentre of the Galactic orbit (or at the furthest position at which Milky Way star clusters are observed, $90~\, {\rm kpc} $ away from the Galactic centre). At the early stage of re-virialisation, the systems appear to be well phase mixed. Only for the deep-Milgromian system, model $5$, are there structures with negative radial velocity in the outer regions, $r~>~ 5~ r_{\rm P}$, which indicates that a large number of particles are falling into the centre of the star cluster. The reason is that, on a realistic Galactic orbit, the self-potential of the star cluster does not change to a significant extent during one crossing time of the cluster. Only when the star cluster has a very diffuse mass distribution and the self-potential is dominated by deep-Milgromian dynamics is a small change of the background external field able to yield a visible effect in the phase space distribution. The systems are already well phase mixed at $17~{\rm T}_{\rm cross}^{\rm MD}$, and the phase space distributions are very close to that of the final time snapshot, namely at the furthest Galactic position that the star cluster can reach (the bottom panels of Fig. \ref{ps_kick500}). 
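For completeness, the scaled phase-space coordinates used in these diagrams are computed per particle. A minimal sketch (units of pc, ${\rm km\,s^{-1}}$ and $M_\odot$ assumed; \texttt{pos} and \texttt{vel} are hypothetical arrays) follows.
\begin{verbatim}
import numpy as np

G = 4.30091e-3  # gravitational constant in pc (km/s)^2 / Msun

def scaled_phase_space(pos, vel, M, r_P):
    """Coordinates of the phase-space figures: r / r_P versus
    v_r / v_0 with v_0 = sqrt(G M / r_P)."""
    r = np.linalg.norm(pos, axis=1)
    v_r = np.einsum('ij,ij->i', pos, vel) / r
    return r / r_P, v_r / np.sqrt(G * M / r_P)
\end{verbatim}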
\begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics{ps_kick500.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The phase space distribution at different snapshots of adiabatic re-virialisation for models $4$ (left panels), $2$ (middle panels) and $5$ (right panels) moving on a Galactic orbit with an initial position of $5~\, {\rm kpc} $ and an initial velocity of $500~\, {\rm km \, s}^{-1} $ (purple curves in Fig. \ref{figorb}; for model $5$ the Galactic orbit is the same one but the starting point is $10~\, {\rm kpc} $ and the initial velocity is interpolated from the purple curve in the right panel of Fig. \ref{figorb}). The radii are scaled by the Plummer radius $r_{\rm P}$, and the radial velocities of particles, $v_r$, are scaled by $v_0=\sqrt{GM/r_{\rm P}}$.}\label{ps_kick500} \end{center} \end{figure} There is a wide range of initial conditions for the Galactic orbits of the GCs. Thus it is important to study the phase space distribution for GCs on different Galactic orbits. Model $2$ is presented on different Galactic orbits with different initial velocities in Fig. \ref{ps_kick1e5}: $400~\, {\rm km \, s}^{-1} $ (left panels), $500~\, {\rm km \, s}^{-1} $ (middle panels) and $600~\, {\rm km \, s}^{-1} $ (right panels). On an orbit with a higher initial velocity, it is clear that there are in-falling particles at an early stage of $T~=~1.0~{\rm T}_{\rm cross}^{\rm MD}$. The systems are already well phase mixed at $17~{\rm T}_{\rm cross}^{\rm MD}$, although the GCs are still moving on the orbits. This agrees with the results for GCs of different sizes moving on the same orbit in Fig. \ref{ps_kick500}. We therefore conclude that, for the systems re-virialised adiabatically, the phase mixing is more efficient than in the violent re-virialisation process. The self-potentials of the GCs are only mildly deepened in the former case, because the systems are very close to equilibrium on the orbits. \begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics{ps_kick1e5.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The phase space distribution at different snapshots of adiabatic re-virialisation for model $2$ moving on different Galactic orbits with an initial position of $5~\, {\rm kpc} $ and initial velocities of $400~\, {\rm km \, s}^{-1} $ (left panels), $500~\, {\rm km \, s}^{-1} $ (middle panels) and $600~\, {\rm km \, s}^{-1} $ (right panels). The radii are scaled by the Plummer radius $r_{\rm P}$, and the radial velocities of particles, $v_r$, are scaled by $v_0=\sqrt{GM/r_{\rm P}}$.}\label{ps_kick1e5} \end{center} \end{figure} \subsubsection{Stability of the star clusters after adiabatic re-virialisation} The study of the anisotropy of the systems reveals that GCs are less radially anisotropic after adiabatic re-virialisation than after violent re-virialisation. It is therefore reasonable to expect the $\xi$ parameters (Eq. \ref{xi}) to be smaller for the adiabatically re-virialised systems. The values of $\xi$ for GCs re-virialised on different Galactic orbits are listed in Table \ref{stab-kick}. For all the models $\xi<\xi_c$ \citep[the critical $\xi_c$ values in Milgromian dynamics are empirically calculated in][]{Nipoti_etal2011}. Hence all the GC systems are stable after adiabatic re-virialisation. It can also be seen from Table \ref{stab-kick} that for the more compact GCs (models $1$, $2$ and $4$), the $\xi$ values are very similar to each other on different Galactic orbits. 
For the more diffuse GCs (models $3$ and $5$), which re-virialised into deep-Milgromian gravity, the differences in the $\xi$ values on different Galactic orbits are more significant. For systems with a larger initial velocity, the $\xi$ values are larger, owing to the presence of more radial orbits in systems that re-virialised more rapidly. These results agree well with the analysis of the anisotropy in \S \ref{aniso-kick}. The values of $\xi$ for GCs after adiabatic re-virialisation are smaller than those in the case of violent re-virialisation, suggesting that adiabatically re-virialised GCs are even more stable. \begin{table*} \begin{center}\vskip 0.00cm \caption{The stability parameter $\xi$ for the GC models adiabatically re-virialised from the Galactocentric positions ($(0,~0,~5)\, {\rm kpc} $ for models $1,~2$ and $4$, and $(0,~0,~10)\, {\rm kpc} $ for models $3$ and $5$) to the outer regime of the Milky Way. The second to the fourth columns show the values of $\xi$ for the clusters moving on different Galactic orbits.} \begin{tabular}{llll} \hline Models& Apocentre& $r^{\rm MW}=90\, {\rm kpc} $ &$r^{\rm MW}=90\, {\rm kpc} $\\ &$v_z=400\, {\rm km \, s}^{-1} $ &$v_z= 500\, {\rm km \, s}^{-1} $&$v_z=600\, {\rm km \, s}^{-1} $\\ \hline 1 & 1.09 & 1.09 & 1.09 \\ 2 & 1.16 & 1.17 & 1.19 \\ 3 & 1.24 & 1.32 & 1.35 \\ 4 & 1.12 & 1.12 & 1.13 \\ 5 & 1.29 & 1.34 & 1.36 \\ \hline \end{tabular} \label{stab-kick} \end{center} \end{table*} \section{Conclusions and discussion}\label{conc} We have studied the violent and adiabatic re-virialisation of stellar systems from Newtonian to Milgromian gravity. The time scale of the re-virialisation is only a few crossing times in MD, for both the violent and the adiabatic process. Therefore it is unlikely that systems such as diffuse GCs or ultra-faint dwarf galaxies in a weak background field are found frozen in Newtonian dynamics. There are GCs such as Pal 4 \citep{Frank_etal2012}, Pal 14 \citep{Jordi_etal2009} and NGC 2419 \citep{Baumgardt_etal2009,Ibata_etal2011b,Ibata_etal2011a} which at first sight appear to be Newtonian diffuse systems in the outer Galactic regime. We conclude that this behaviour cannot be explained by these systems being frozen in Newtonian gravity. The adiabatic simulations show that the orbital time is much longer than the dynamical times ${\rm T}_{\rm dyn}^i$ during which the systems stay out of equilibrium. A more recent study of NGC 2419 \citep{Sanders2012a,Sanders2012b} shows that NGC 2419 is not a problem for MD. \citet{Sanders2012a,Sanders2012b} used polytropic models to model NGC 2419, which fit the observations of the surface brightness and kinematics well. However, \citet{Ibata_etal2011a} claimed that polytrope models are less likely in Milgromian dynamics by a factor of 5000 than a Newtonian Michie model as used in \citet{Ibata_etal2011b}. Therefore, currently, NGC 2419, Pal 4 and Pal 14 cannot be explained by the potential being frozen in its Newtonian form. We also study the Lagrangian radii, mass profiles and kinematics of the final products of the above two re-virialisation processes. We find that the mass profiles and radial velocity dispersion profiles are very similar for systems collapsing in the two different ways. Moreover, different Galactic orbits for a GC lead to the same Lagrangian radii, $\rho(r)$ and $\sigma_r(r)$ profiles. 
Therefore, for a system moving from the inner regime of the Galaxy to the outer part, the Lagrangian radii, the mass profile and $\sigma_r(r)$ are independent of the history of the GC's orbit. In the potential of the Galaxy, we re-virialise the spherically symmetric Newtonian systems in the inner Galactic field using Milgromian dynamics. The velocity dispersion becomes radially anisotropic after the re-virialisation due to the external field effect: since the strong equivalence principle (SEP) is broken, the internal potential of a GC is asymmetric, and therefore the orbits of the particles in such a GC are distorted and elongated. We then move the GCs from the inner to the outer regime of the Galaxy. We find that in those GCs the velocity dispersion profiles evolve to be even more radially anisotropic. Note that the $\beta(r)$ profiles are determined by the internal structure of the GCs rather than by the details of the Galactic orbits: the more diffuse the GCs are, the more radially anisotropic their velocity dispersions are after the re-virialisation. GCs moving faster or slower from the Newtonian regime to the Milgromian regime have similar radially anisotropic velocity dispersion profiles. The re-virialisation is a new mechanism to produce such profiles compared to Newtonian dynamics. In contrast, Newtonian N-body models of star clusters generate isotropic or mildly anisotropic velocity dispersion profiles. Observations of the distant GC NGC 2419 by \citet{Baumgardt_etal2009} and \citet{Ibata_etal2011b,Ibata_etal2011a} show that the line-of-sight velocity dispersion profile of NGC 2419 has large values of up to $7~\, {\rm km \, s}^{-1} $ in the centre and small values of $1-2\, {\rm km \, s}^{-1} $ in the outer regime where $r>3r_h$ \citep[see Fig. 8 of][]{Ibata_etal2011b}. The studies of the dynamics of NGC 2419 require a highly radially anisotropic velocity dispersion to fit the observed data in both Newtonian and Milgromian gravity \citep{Ibata_etal2011b,Ibata_etal2011a}. In Newtonian dynamics, one mechanism to generate such radial anisotropy is partial relaxation, i.e., a violent relaxation process that is inefficient in the outer parts of the system \citep{Lynden-Bell1967, Bertin_Trenti2003}. A self-consistent family of radially anisotropic models of partially relaxed systems has been proposed in \citet{Bertin_Trenti2003}. Later, \citet{Trenti_etal2005} compared this family of models with the products of collisionless collapse. It was confirmed that these models are unstable when the stability parameter $\xi > 1.7\pm 0.25$ in Newtonian dynamics, and that the strongly radial models evolve into triaxial systems. Recently, \citet{Zocchi_etal2012} investigated these models for a sample of GCs including NGC 2419. The stability parameter is found to be $\xi=1.77$ for the best-fit model of NGC 2419, so NGC 2419 is on the boundary of stability in Newtonian dynamics. The phase transition provides another mechanism for generating the anisotropy in Milgromian dynamics, in addition to partial relaxation, which has not yet been studied in Milgromian dynamics. Here we also show the final line-of-sight velocity dispersion profiles, $\sigma_{LOS}(R)$, of our GC models after moving from the inner Galaxy to the outer Galactic regime (Fig. \ref{sig_los_kick}). Since our GC models are radially anisotropic, especially models $2, 3$ and $5$, the shapes of their $\sigma_{LOS}(R)$ should be similar to that of NGC 2419. 
Indeed we find that the $\sigma_{LOS}(R)$ profiles have large values in the centres of the GCs and fall sharply between $1r_P$ and $10r_P$. The $\sigma_{LOS}(R)$ profiles are very similar for the same model moving on different Galactic orbits (see Fig. \ref{sig_los_kick}). This confirms that the observed line-of-sight kinematics of a GC are not related to its orbital history. In a follow-up project we will study other possible mechanisms for generating radially anisotropic velocity dispersion profiles and the corresponding $\sigma_{LOS}(R)$ profiles in both MD and Newtonian gravity. For instance, it is interesting to consider gas expulsion after birth and mass loss from evolving stars in the early stages of GCs \citep[][for initial conditions of GCs]{Marks_Kroupa2012}, and to compare the different radially anisotropic behaviours generated by Newtonian and Milgromian gravity. In summary, Milgromian dynamics predicts that all GCs moving on eccentric orbits of the Galaxy should be radially anisotropic when they are in the Milgromian regime. Any isotropic or tangentially anisotropic profile for a GC in the outer regime of a radial Galactic orbit will be problematic for MD, since these systems experience a collapse during the phase transition. Therefore, for any out-going system, an observation of isotropy or tangential anisotropy will be a severe challenge to MD: MD is falsifiable by the kinematics of the outer GCs. \begin{figure}{} \begin{center} \resizebox{9.cm}{!}{\includegraphics{sig_los_kick.eps}} \makeatletter\def\@captype{figure}\makeatother \caption{The final line-of-sight velocity dispersion profiles as a function of radius for the models after moving from the inner Galaxy to the outer Milgromian regime. The colours and line types are defined as in Fig. \ref{vir_kick}.}\label{sig_los_kick} \end{center} \end{figure} \section{Acknowledgments} Xufen Wu gratefully acknowledges support through the Alexander von Humboldt Foundation. We thank Ortwin Gerhard and Flavio de Lorenzi for sharing the code for generating anisotropic models using circularity functions and Lucy's method. We also thank the Bologna group, Nipoti, Ciotti and Londrillo, for sharing their N-body code NMODY.
\section{Introduction} Despite the long-term research efforts put into numerical optimization, many practical applications remain difficult. There are three main reasons: most real problems involve nonlinear constraints, the objective function or the constraints are numerically costly to evaluate (e.g., when nonlinear finite elements underlie the optimization criteria), and some of the parameters are uncertain. To ease the computing load, Bayesian Optimization (BO) incorporates kriging surrogates to save calls to the objective function, as embodied in the archetypal EGO algorithm \cite{schonlau1998global}. The original optimization problem is translated into a series of auxiliary problems: the acquisition of new points where the costly function will be calculated. The acquisition criterion is based on the kriging model and it balances the optimization of the function against the improvement of the kriging model. BO has rapidly been extended to encompass constraints \cite{sasena2002global,EFIPicheny}. \vskip\baselineskip In this article, the focus is not only on costly and general nonlinear constrained optimization but also, in addition, on problems that are affected by uncertainties. Uncertainties may originate from random environments such as the weather, noise in sensors or uncontrolled boundary conditions. Many physical models also come with uncertainties, in an attempt to describe a lack of knowledge about the true phenomena. Big data applications are confronted with uncertainties which reflect the part of the data that cannot be handled at once, either because the data arrive as a dynamic flow, or because the volume is too large to be processed in a single step. Since uncertainties are so ubiquitous, robustness against uncertainties is becoming an important aspect of any optimization problem. When the uncertainties cannot be characterized in stochastic terms such as probabilities, strong guarantees about the robustness of the solutions can be obtained with deterministic approaches based on worst-case scenarios over the set of possible uncertainties \cite{ben2009robust,gabrel2014recent}. If this set is large (possibly infinite) and the problem non-convex, the conservatism of the solutions and the computational tractability are inherent difficulties of this family of methods. When the uncertainties are seen as stochastic, two situations may be distinguished. In the first class of problems, the uncertainties are instantiated within the objective or constraint functions and cannot be chosen. Such uncertainties act as an endured noise corrupting the objective and the constraint functions. A typical situation is when the functions resort to random number generation, e.g., for a Monte Carlo simulation, and no access to the source code is granted. Stochastic algorithms can, under conditions on their own stochasticity, accommodate the noise in the observations and still converge to problem solutions: this has given rise in the early 50's to stochastic descent algorithms \cite{kiefer1952stochastic,arrow1958studies} that have since then experienced great developments \cite{spall2005introduction,andrieu2011gradient}, often in relation to machine learning \cite{kingma2014adam}; well-performing versions of (stochastic) evolutionary algorithms for noisy functions have been identified \cite{brockhoff_miror_2010,loshchilovsaACM2012} thanks to competitions on reference benchmarks \cite{COCO2012noisy}. 
In the second class of problems, the uncertainties perturb parameters that are distinct from the design variables and can be chosen during the simulations. The separation between controlled variables and uncertain parameters already underlay Taguchi's designs of experiments in the 80's \cite{logothetis1989quality}. Because of this separation, and provided a probability of occurrence of the uncertainties exists, a statistical modelling in the joint $controlled \times uncertain$ parameters space is possible. This will be the context of the current work. A key step when optimizing in the presence of uncertainties is the formulation of the problem, i.e., the choice of the robustness criteria. Considering first unconstrained problems, relevant criteria are the expectation of the objective \cite{janusevskis_jogo_2012} or one of its (super-)quantiles \cite{torossian2019mathcal,torossian2020review}. In Robust Optimization, the uncertainties are handled in terms of specific compromises between the average performance and its dispersion \cite{park_robust_AIAA_2006,beyer2007robust}, or by investigating all such Pareto optimal compromises through a multi-objective formulation \cite{RIBAUD2020106913}. When there are constraints that depend on the uncertainties, the feasibility of the solutions is typically measured in probability. Probabilistic models of the constraints are called chance constraints \cite{nemirovski2012safe} or reliability constraints \cite{bourinet2018reliability}. The field of Reliability-Based Design Optimization (RBDO) is concerned with the resolution of optimization problems that contain reliability constraints \cite{balesdent2020overview}. The optimization problems are formulated in terms of statistical criteria such as probabilities of satisfying the constraints, and expectations, (super-)quantiles or conditional quantiles of the objective function \cite{torossian2019mathcal,leriche:cel-02285533,pujol2009incertitude}. When the statistical criteria cannot be calculated analytically, the bottleneck of the computational cost of the optimization is even more stringent since the statistical criteria must be numerically estimated within the optimization iterations. This is sometimes called the double loop issue. Many paths have been proposed to circumvent the double loop issue, some approximating the probability of feasibility by reliability indices \cite{valdebenito2010survey}, others improving the efficiency of the probability calculations (e.g., stratified sampling in \cite{zuniga2012analysis}), and others decoupling the reliability estimation from the optimization; \cite{schueller2008computational} gives a review of some of these techniques. In the last decade, numerous contributions to the optimization of costly functions with uncertainties have relied on the learning of a metamodel of the true functions, in particular Gaussian processes (GPs) \cite{dubourg2011reliability,moustapha2016quantile}. In \cite{janusevskis_jogo_2012} and \cite{amri:hal-02986558}, the GP not only helps with the optimization (or the inversion) over the controlled variables, but it also serves to define an optimal sampling scheme. In this article, the problem of minimizing the mean of a stochastic function under chance constraints is addressed. The objective function and the constraints are costly in the sense that they cannot be calculated more than a hundred times. Furthermore, the problem is assumed to be nonconvex so that part of the solution process cannot be analytical. 
Uncertainties are described by parameters that are distinct from the optimization variables and that can be chosen in the calculations. Generalizing \cite{janusevskis_jogo_2012}, a Bayesian optimization and sampling procedure is proposed that accounts for probabilistic constraints. After formulating the problem (Sections \ref{sec-PbFormulation} and \ref{sec-GP}), a principle for devising robust Bayesian optimization algorithms is stated (Section \ref{sec-generalBOprc}) which applies to any given progress measure. In Section \ref{sec-xtarg}, this principle is applied to the feasible improvement as a specific progress measure. The associated sampling criterion is introduced in Section \ref{SamplingCrit}. It is a Stepwise Uncertainty Reduction (SUR) criterion \cite{bect2012sequential}, a one-step-ahead variance reduction, for which an easier-to-compute proxy is presented. The resulting algorithm is summed up in Section \ref{sec:numerical}, and its performance is assessed on an analytical and an industrial test case. An expression for the variance of the improvement, whose expectation is at the heart of the popular EGO algorithm \cite{schonlau1998global}, can be found in the Appendix. \section{Problem formulation} \subsection{Starting problem} \label{sec-PbFormulation} Let $f(\vec{x},\vec{u})$ be the scalar output of an expensive computer simulation and let $g_i(\vec{x},\vec{u})$, $i=1,\dots,l$, be the set of constraints, where $\vec{x} \in \mathcal{S_X}$ can be precisely chosen while $\vec{u} \in \mathcal{S_U}$ is a realization of a vector of random variables $\vec{U}$ with a specified probability density function $\rho_\vec{U}$. Such a formulation with controlled and random variables is general to optimization problems under uncertainty. The examples in this article belong to continuous optimization in the sense that $\mathcal{S_X} \subset \mathbb{R}^d$ and $\mathcal{S_U} \subset \mathbb{R}^m$. Nevertheless, it is important to note that the framework of our work, the Gaussian processes and the algorithms which will be introduced, generalizes nicely to spaces that contain discrete variables (see for example \cite{pelamatti2019efficient,roustant2019}). Our goal is to find $\vec{x}$ which minimizes $f$ while ensuring that the $g_i$'s lie under a failure threshold (0 in general). In the presence of uncertainties, Robust Optimization aims at controlling the impact of uncertainties on the performance of the optimal solution. The difficulty is that $f(\vec{x},\vec{U})$ and $g_i(\vec{x},\vec{U})$, $i=1,\dots,l$, are random quantities induced by $\vec{U}$. In order to perform optimization, we need to fall back on a deterministic form, which is achieved by applying statistical measures to $f(\vec{x},\vec{U})$ and $g_i(\vec{x},\vec{U})$, $i=1,\dots,l$ \cite{torossian2019mathcal,leriche:cel-02285533}. In this article, the constrained optimization problem under uncertainties is formulated as the minimization of the expectation over $\vec{U}$ of $f$ while all the constraints $g_i(\vec{x},\vec{U}) \leq 0$, $i=1,\dots,l$, are satisfied with a high probability: \begin{equation} \begin{split} \vec{x}^* = &\arg \min\limits_{\vec{x} \in \mathcal{S_X}} ~\mathbb{E}_\vec{U}[f(\vec{x},\vec{U})] ~\text{s.t.}~ \mathbb{P}(g_i(\vec{x},\vec{U}) \leq 0,~ i=1,\dots,l ) \geq 1- \alpha \\ & \text{where } \vec{U} \sim \rho_{\vec{U}} \text{ with support } \mathcal{S_U} . \end{split} \label{problem} \end{equation} $\alpha$ is a reliability parameter representing the allowed constraint violation level ($0 < \alpha < 1$). 
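To make the double-loop cost concrete, both statistical criteria of Problem (\ref{problem}) can be estimated at a given $\vec{x}$ by crude Monte Carlo, as sketched below in Python with hypothetical function handles. Each such evaluation consumes \texttt{n\_mc} simulator calls, which is precisely what the GP-based procedure of the following sections is designed to avoid.
\begin{verbatim}
import numpy as np

def robust_criteria(x, f, g_list, sample_U, n_mc=10000, alpha=0.05):
    """Monte Carlo estimates of z(x) = E_U[f(x,U)] and of the
    probability P(g_i(x,U) <= 0 for all i); sample_U(n) draws n
    realizations of U (names are illustrative)."""
    U = sample_U(n_mc)                       # (n_mc, m) array
    fx = np.array([f(x, u) for u in U])
    feasible = np.ones(n_mc, dtype=bool)
    for g in g_list:
        feasible &= np.array([g(x, u) for u in U]) <= 0.0
    z_hat = fx.mean()                        # objective estimate
    p_hat = feasible.mean()                  # chance-constraint estimate
    return z_hat, p_hat, (p_hat >= 1.0 - alpha)
\end{verbatim}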
In the formulation of Equation~(\ref{problem}), the emphasis is on constraint satisfaction with a guaranteed reliability. This is common in RBDO, where constraints are satisfied in probability and the objective function is deterministic \cite{dubourg2011reliability}. Such problems are also said to have chance constraints \cite{ben2009robust}. In addition, in Equation~(\ref{problem}), random events affect the objective function and are taken into account through an expectation. Thanks to its linearity, the expectation is the simplest statistical measure. Besides, efficient approaches exist to estimate and optimize it \cite{janusevskis_jogo_2012}. Formulations such as Equation~(\ref{problem}), with a mean objective function and chance constraints, have been called a ``model for qualitative failure'' \cite{andrieu2011gradient}, in which the objective function and the constraints are taken as independent. Other formulations with quantiles conditioned by feasibility have been given in \cite{leriche:cel-02285533,pujol2009incertitude} but they remain an open challenge for costly problems. In the current work, Equation~(\ref{problem}) is addressed because it is a compromise between complete robust formulations and mathematical tractability. By seeing the probability as an expectation, the constraint part of Equation \eqref{problem} becomes \begin{equation*} \begin{split} \mathbb{P}(g_i(\vec{x},\vec{U}) &\leq 0,~ i=1,\dots,l ) \geq 1 - \alpha \\ & \Leftrightarrow 1 - \alpha -\mathbb{E}_\vec{U}[\mathbbm{1}_{\{g_i(\vec{x},\vec{U}) \leq 0,~ i=1,\dots,l \}}] \leq 0. \end{split} \end{equation*} From the last expression, Problem (\ref{problem}) is equivalent to \begin{equation} \vec{x}^* = \arg \min\limits_{\vec{x} \in \mathcal{S_X}} ~z(\vec{x}) ~\text{s.t.}~ c(\vec{x}) \leq 0 \label{newproblem} \end{equation} where $z(.) \coloneqq \mathbb{E}_\vec{U}[f(.,\vec{U})]$ and $c(\vec{x}) \coloneqq 1 - \alpha -\mathbb{E}_\vec{U}[\mathbbm{1}_{\{g_i(\vec{x},\vec{U}) \leq 0,~ i=1,\dots,l \}}]$. \subsection{Gaussian Process regression framework} \label{sec-GP} In the context of expensive computer simulations, Problem (\ref{newproblem}) is approximated with Gaussian processes (GPs). Directly building a metamodel for $z$ and $c$ would need too many evaluations of $f$ and the $g_i$'s to estimate the expectation and the probabilities. Therefore, GP approximations of $f$ and the $g_i$'s are built in the joint space $\mathcal{S_X} \times \mathcal{S_U}$. Models for $z$ and $c$ in the design space $\mathcal {S_X}$ are then deduced from them. More precisely, we suppose that $f$ and the constraints $(g_i)_{i=1}^{l}$ are realizations of independent Gaussian processes $F$ and $G_i$ such that \begin{equation*} \begin{split} F{(\vec{x},\vec{u})} &\sim \mathcal{GP}(m_F(\vec{x},\vec{u}) , k_F(\vec{x},\vec{u},\vec{x'},\vec{u'}) ),\\ \forall i = \{1,\dots,l\} ~,~ G_i{(\vec{x},\vec{u})} &\sim \mathcal{GP}(m_{G_i}(\vec{x},\vec{u}) , k_{G_i}(\vec{x},\vec{u},\vec{x'},\vec{u'}) ), \end{split} \end{equation*} where $m_F$ and $m_{G_i}$ are the mean functions while $k_F$ and $k_{G_i}$ are the covariance functions. \\ Let $F^{(t)}$ and $G_i^{(t)}$ denote the Gaussian processes conditioned on the $t$ observations, $f^{(t)} = (f(\vec{x}_1,\vec{u}_1),\ldots, f(\vec{x}_t,\vec{u}_t))$ and $g_i^{(t)} = (g_i(\vec{x}_1,\vec{u}_1),\ldots, g_i(\vec{x}_t,\vec{u}_t))$, obtained at the points $D^{(t)} = \{(\vec{x}_k,\vec{u}_k)~,~k=1,..,t\}$. 
Since the expectation is a linear operator applied to $f$, it follows that $Z^{(t)}(\vec{x}) =\mathbb{E}_\vec{U}[F^{(t)}{(\vec{x},\vec{U})}]$ is still a Gaussian process with known mean $m^{(t)}_Z$ and covariance function $k^{(t)}_Z$ given by: \begin{equation} \begin{split} m^{(t)}_Z(\vec{x}) &= \int_{\mathbb{R}^m} m^{(t)}_F(\vec{x},\vec{u}) \rho_\vec{U}(\vec{u}) d\vec{u}, \\ k^{(t)}_Z(\vec{x},\vec{x'}) &= \iint \limits_{\mathbb{R}^m} k^{(t)}_F(\vec{x},\vec{u},\vec{x'},\vec{u'}) \rho_\vec{U}(\vec{u})\rho_\vec{U}(\vec{u'}) d\vec{u} d\vec{u'}. \\ \end{split} \label{ZGP} \end{equation} The integrals that appear in Equation~(\ref{ZGP}) can be evaluated analytically for specific choices of $\rho_\vec{U}$ and $k_F$ (see \cite{janusevskis_jogo_2012}). In the general case, a quadrature rule can be used to approximate these integrals. We also introduce the process \begin{equation*} C^{(t)}\!(\vec{x}) = 1 - \alpha -\mathbb{E}_\vec{U}[ \mathbbm{1}_{{\cap_{i=1}^l}\{G_i^{(t)}\!(\vec{x},\vec{U}) \leq 0 \}}] \label{eq:Cofx} \end{equation*} which is the statistical model of the constraint $c$ (Equation (\ref{newproblem})). Note that the process $C^{(t)}$ is not Gaussian. In the Bayesian GP framework, Problem (\ref{newproblem}) can be approximated as: find \begin{equation} \arg \min\limits_{\vec{x} \in \mathcal{S_X}} ~Z^{(t)}\!(\vec{x}) ~\text{s.t.}~ C^{(t)}\!(\vec{x}) \leq 0~. \label{newproblemGP} \end{equation} To solve \eqref{newproblemGP}, a Bayesian optimization procedure is introduced next. \subsection{A general principle for devising robust Bayesian optimization algorithms} \label{sec-generalBOprc} Now that the problem has been formulated (Section~\ref{sec-PbFormulation}) and the GP models introduced (Section~\ref{sec-GP}), we present a Bayesian algorithm to solve Problem~(\ref{problem}) within a restricted number of calls to $f$ and the $g_i$'s. But before going into the details of the method, it is important to understand the general principle that underlies the design of robust Bayesian Optimization (BO) algorithms. \begin{prop}[A general principle to devise robust BO algorithms] \label{prop:general_prc} ~\vskip 0.1cm Robust Bayesian Optimization algorithms can be designed as follows:\\ \emph{A)} define a progress measure $P(\vec{x})$ related to the problem formulation and calculated from the GP trajectories. \\ \emph{B)} The robust BO algorithm is: \begin{algorithmic} \STATE Define an initial space filling design in the joint space $\mathcal{S_X}\times\mathcal{S_U}$ : $D^{(n_0)}$ \STATE Initialize all conditional GPs: $F^{(n_0)}$, $G^{(n_0)}_i,i=1,\ldots,l$ with $D^{(n_0)}$ \WHILE{stopping criterion not met} \STATE Determine a desirable, targeted $\vec{x}$ by maximizing the expectation of the progress measure, \begin{equation}\label{eq:progress} \vec{x_{\text{targ}}} = \arg \max_{\vec{x} \in \mathcal{S_X}} \mathbb{E} \left( P^{(t)}(\vec{x}) \right)~. \end{equation} \STATE The next iterate minimizes the one-step-ahead variance of the progress measure at $\vec{x_{\text{targ}}}$, \begin{equation}\label{eq:varprogress} (\vec{x_{t+1}},\vec{u_{t+1}}) ~=~ \arg \min_{(\tilde{\vec{x}},\tilde{\vec{u}}) \in \mathcal{S_X} \times \mathcal{S_U}} \mathbb{VAR} \left( P^{(t+1)}(\vec{x_{\text{targ}}}) \right)~ \end{equation} where $P^{(t+1)}$ is evaluated with GPs updated according to $D^{(t+1)}= D^{(t)} \cup \{(\tilde{\vec{x}},\tilde{\vec{u}})\}$. \STATE Calculate the simulator response, i.e., $f$ and $g_i,i=1,\ldots,l$, at the next point $(\vec{x}_{t+1},\vec{u}_{t+1})$.
\STATE Update the design $D^{(t+1)}= D^{(t)} \cup \{(\vec{x}_{t+1},\vec{u}_{t+1})\}$ \STATE Update the Gaussian processes, $F^{(t+1)}$, $G^{(t+1)}_i,i=1,\ldots,l$. \ENDWHILE \end{algorithmic} \end{prop} Various algorithms can be obtained by changing the measure of progress $P$. In this article, it will be the feasible improvement, which will soon be presented. Other measures of progress are possible, for example $P(\vec{x}) = -F(\vec{x},\vec{u}^{\text{mod}}) - \sum_{i=1}^{l}p_i\max(0~,~G_i(\vec{x},\vec{u}^{\text{mod}}))$ where the $p_i$'s are positive penalty parameters and $\vec{u}^{\text{mod}}$ the mode of $\vec{U}$, or $P(\vec{x}) = \max\left([z_\text{min}^\text{feas}-Z(\vec{x}) \mid C(\vec{x}) \le 0] , 0\right)$ where $z_\text{min}^\text{feas}$ is the best objective function value associated with a feasible point. The goal of the next sections is to present the methodology and the associated formulas when the progress measure is the feasible improvement, chosen for its closeness to the problem formulation and its computability. The one-step-ahead variance can be difficult to tackle, so approximations are useful. In this text, the generic term ``sampling criterion'' refers to the one-step-ahead variance or its proxy. For costly problems, the stopping criterion in Proposition~\ref{prop:general_prc} is often a given number of calls to the simulator. \section{The progress measure and the associated targeted point $\vec{x_{\text{targ}}}$} \label{sec-xtarg} Following Proposition \ref{prop:general_prc}, the first step consists in defining a progress measure $P^{(t)}$, which will be the cornerstone of the definition of the most promising candidate for evaluation, $\vec{x_{\text{targ}}}$. The maximization of its expectation should contribute to both solving the constrained optimization problem and improving the GPs. The most popular progress measure for optimization under constraints is the Feasible Improvement \cite{schonlau1998global,sasena2002exploration} defined by \begin{equation*} FI^{(t)}(\vec{x}) = I^{(t)}(\vec{x}) ~ \mathbbm{1}_{\{C^{(t)}(\vec{x}) \leq 0\}}~, \end{equation*} where $I^{(t)}(\vec{x})=\big(z_{\min}^{\text{feas}} - Z^{(t)}(\vec{x})\big)^+$ denotes the improvement over the current feasible minimum value. In our case, $z_{\min}^{\text{feas}}$ must be further explained because it is not directly observed. This will be the subject of the next section. The definition of $z_{\min}^{\text{feas}}$ and the fact that $C^{(t)}(\vec{x})$ is not Gaussian are two differences between the $FI$ of this article and those in \cite{schonlau1998global,sasena2002exploration}. Following Proposition \ref{prop:general_prc} and Equation \eqref{eq:progress}, the promising point in the control space is obtained by maximizing the expectation of the progress measure. Here it corresponds to maximizing the \emph{Expected Feasible Improvement} (\text{EFI}\xspace), \begin{equation} \vec{x_{\text{targ}}} = \arg \max \limits_{\vec{x} \in \mathcal{S_X}} {\text{EFI}\xspace}^{(t)}(\vec{x}), \label{problemEFI} \end{equation} where ${\text{EFI}\xspace}^{(t)}(\vec{x})$ is $\mathbb{E} \left( FI^{(t)}(\vec{x}) \right)$. The independence of the GPs implies that the \text{EFI}\xspace can be expressed as \begin{equation*} \begin{split} \text{EFI}\xspace(\vec{x}) &= \text{EI}\xspace^{(t)}(\vec{x}) \mathbb{P}(C^{(t)}(\vec{x}) \leq 0).
\end{split} \label{EFI} \end{equation*} The first term is the well-known \emph{Expected Improvement} (\text{EI}\xspace) for which an analytical expression is available, \begin{equation}\label{eq:EI} \text{EI}\xspace^{(t)}(\vec{x}) = (z_{\min}^\text{feas} - m_Z^{(t)}(\vec{x})) \Phi\bigg(\frac{z_{\min}^\text{feas} - m_Z^{(t)}(\vec{x})}{\sigma_Z^{(t)}(\vec{x})}\bigg) + \sigma_Z^{(t)}(\vec{x}) \phi\bigg(\frac{z_{\min}^\text{feas} - m_Z^{(t)}(\vec{x})}{\sigma_Z^{(t)}(\vec{x})}\bigg), \end{equation} where $\sigma_Z^{(t)}(\vec{x}) = \sqrt{ k^{(t)}_Z(\vec{x},\vec{x})}$, and $\Phi$ and $\phi$ are the normal cumulative distribution and density functions, respectively. The second term, $\mathbb{P}(C^{(t)}(\vec{x}) \leq 0)$, can be approximated with available numerical methods (see details in Section \ref{sec:implemDetails}). \subsection*{Definition of the current feasible minimum $z_{\min}^{\text{feas}}$} To solve Problem \eqref{problemEFI}, we need to define $z_{\min}^{\text{feas}}$. We extend the definition of the current minimum for a non-observed process $Z$ introduced in \cite{janusevskis_jogo_2012} to a problem with constraints. $z_{\min}^{\text{feas}}$ is defined as the minimum of the mean of the process $Z^{(t)}$ such that the constraint is satisfied in expectation, \begin{equation}\label{eq:zminfeas} z_{\min}^{\text{feas}} = \min_{\vec{x} \in \mathcal{X}_t} m^{(t)}_Z(\vec{x}) ~\text{s.t.}~ \mathbb{E}[ C^{(t)}(\vec{x}) ] \leq 0~. \end{equation} Under Fubini's condition and as the constraints are conditionally independent given $\vec{x}$ and $\vec{u}$, the expectation of $C^{(t)}$ is an integral over the uncertain space of a product of univariate Gaussian cumulative distribution functions \begin{equation} \begin{split} \mathbb{E}[ C^{(t)}(\vec{x}) ] &= \mathbb{E}[ 1 - \alpha -\mathbb{E}_\vec{U}[ \mathbbm{1}_{{\cap_{i=1}^l}\{G_i^{(t)}(\vec{x},\vec{U}) \leq 0 \}}] ] \\ &= 1 - \alpha - \mathbb{E}_\vec{U} \big[ \prod \limits_{i=1}^{l}\mathbb{E}[ \mathbbm{1}_{\{G_i^{(t)}(\vec{x},\vec{U}) \leq 0\}} ] \big]\\ &= 1 - \alpha - \int_{\mathbb{R}^m}\prod \limits_{i=1}^{l} \Phi\left(\frac{-m_{G_i}^{(t)}(\vec{x},\vec{u})}{\sigma_{G_i}^{(t)}(\vec{x},\vec{u})}\right)\rho_\vec{U}(\vec{u}) d\vec{u}. \end{split} \end{equation} If Problem (\ref{eq:zminfeas}) has no feasible point, we choose the most feasible point in expectation, \begin{equation*} z_{\min}^\text{feas} = m_Z^{(t)}(\vec{x}^{\text{mf}}) \text{ where } \vec{x}^{\text{mf}} = \arg\max \limits_{\vec{x} \in \mathcal{X}_t} \int_{\mathbb{R}^m}\prod \limits_{i=1}^{l} \mathbb{P}( G_i^{(t)}(\vec{x},\vec{u}) \leq 0)\rho_\vec{U}(\vec{u}) d\vec{u}. \end{equation*} \section{Extending the acquisition criterion} \label{SamplingCrit} The sequential methodology introduced in Proposition \ref{prop:general_prc}, Equation \eqref{eq:varprogress}, requires choosing a pair $(\vec{x_{t+1}},\vec{u_{t+1}})$ such that the variance of the one-step-ahead feasible improvement is minimal, i.e., \begin{equation} (\vec{x_{t+1}},\vec{u_{t+1}})= \arg \min_{(\tilde{\vec{x}},\tilde{\vec{u}}) \in \mathcal{S_X}\times \mathcal{S_U}} \mathbb{VAR} \left(I^{(t+1)}(\vec{x_{\text{targ}}}) ~ \mathbbm{1}_{\{C^{(t+1)}(\vec{x_{\text{targ}}}) \leq 0\}}\right), \label{eq-VFItp1} \end{equation} where $I^{(t+1)}$ and $C^{(t+1)}$ are the updated $I^{(t)}$ and $C^{(t)}$ taking into account the observations at the point $(\tilde{\vec{x}},\tilde{\vec{u}})$.
This choice maximizes the information gained at the current point of interest, $\vec{x_{\text{targ}}}$, where the information now includes both the probabilistic constraints and the improvement. In \cite{janusevskis_jogo_2012}, the authors have noted that $\vec{x_{t+1}}$ is usually very close to $\vec{x_{\text{targ}}}$, so instead of looking for the couple $(\vec{x_{t+1}},\vec{u_{t+1}})$, we assume that $\vec{x_{t+1}}=\vec{x_{\text{targ}}}$. This simplifies the optimization of the sampling criterion because it reduces the dimension of the minimization (Equation~(\ref{eq-VFItp1})) and it directly yields the best candidate point $\vec{u_{t+1}}$. As the one-step-ahead variance of the feasible improvement is difficult to evaluate, a proxy built with a product of variances is now proposed. \subsection*{The sampling criterion} We introduce a new sampling criterion specific to the robust optimization Problem~\eqref{problem}, denoted $S$ for Sampling as it is used to generate a new value for $\vec{u}$. It is defined as a proxy of the variance of the one-step-ahead feasible improvement. \begin{prop} \label{prop:samplingcrit} A proxy of the variance of the one-step-ahead feasible improvement (Equation \eqref{eq-VFItp1}) given a new observation at point $(\tilde{\vec{x}},\tilde{\vec{u}})$ is \begin{equation} \begin{split} S&(\tilde{\vec{x}},\tilde{\vec{u}}) = \mathbb{VAR}\big( I^{(t+1)}(\vec{x_{\text{targ}}})\big) \int_{\mathbb{R}^m} \mathbb{VAR}\big( \mathbbm{1}_{{\cap_{i=1}^l}\{G_i^{(t+1)}(\vec{x_{\text{targ}}},\vec{u}) \leq 0 \}} \big) \rho_\vec{U}(\vec{u}) d\vec{u},\\ =&~\mathbb{VAR}\big(\big(z_{\min}^{\text{feas}} - Z^{(t+1)}(\vec{x_{\text{targ}}})\big)^+\big)\int_{\mathbb{R}^m} \mathbb{VAR}\big( \mathbbm{1}_{{\cap_{i=1}^l}\{G_i^{(t+1)}(\vec{x_{\text{targ}}},\vec{u}) \leq 0 \}} \big) \rho_\vec{U}(\vec{u}) d\vec{u}. \end{split} \label{SC} \end{equation} \end{prop} In Equation \eqref{SC}, the first part of the expression pushes the sampling to reduce the uncertainty in the improvement value while the second part focuses on the averaged predicted feasibility variance. For the sake of calculation simplicity, it might seem preferable to replace the improvement by the objective process $Z^{(t+1)}$ and the variance of feasibility by the variance of the constraints. However, such variances are not those of the quantities of interest. In optimization, it is not important to reduce the variance of the constraints if it is clear that points are feasible or infeasible. But it is important to reduce the variance when there is a large uncertainty on feasibility, which is achieved by considering the variance of the Bernoulli variable $\mathbbm{1}_{{\cap_{i=1}^l}\{G_i^{(t+1)}(\vec{x_{\text{targ}}},\vec{u}) \leq 0\}}$. In the same way, the variance of $Z^{(t+1)}$ in regions where it is clearly above the target does not matter, but the variance of the process where improvement is uncertain should be reduced. The ideal sampling criterion for the optimization problem at hand, the variance of the feasible improvement (Equation~\eqref{eq-VFItp1}), is the variance of the product of the improvement and the indicator of feasibility with $1-\alpha$ confidence. The proxy $S$ bears some resemblance but is not equivalent to the variance of the feasible improvement. It is a product of variances, which is not equivalent to the variance of a product\footnote{ Let $U$ and $V$ be two independent random variables, $U \perp\!\!\!\!\perp V$; then $\mathbb{VAR}(UV) = \mathbb{E} U^2 \mathbb{E} V^2 - (\mathbb{E} U)^2(\mathbb{E} V)^2$.
\\ $\mathbb{VAR} U\, \mathbb{VAR} V = (\mathbb{E} U^2 - (\mathbb{E} U)^2) (\mathbb{E} V^2 - (\mathbb{E} V)^2)$ $= \mathbb{E} U^2 \mathbb{E} V^2 - (\mathbb{E} U)^2 (\mathbb{E} V^2 - (\mathbb{E} V)^2) - \mathbb{E} U^2 (\mathbb{E} V)^2 $. The variance of the product equals the product of the variances only if $\mathbb{E} U = \mathbb{E} V = 0$, in which case $\mathbb{VAR}(UV) = \mathbb{VAR} U\, \mathbb{VAR} V$ $= \mathbb{E} U^2 \mathbb{E} V^2$. }. Furthermore, the second term in the product for $S$ (Equation \eqref{SC}) is an averaged variance of feasibility $\mathbbm{1}_{\cap_{i=1}^l\{G_i^{(t+1)}(\vec{x_{\text{targ}}},\vec{u}) \leq 0 \}}$ as opposed to a variance of feasibility with $1-\alpha$ confidence, $\mathbbm{1}_{\{C^{(t+1)}(\vec{x_{\text{targ}}})\le 0\}}$. So, we choose the best candidate uncertainty $\vec{u_{t+1}}$ as \begin{equation} \vec{u_{t+1}}= \arg \min_{\tilde{\vec{u}} \in \mathcal{S_U}} {S(\vec{x_{\text{targ}}},\tilde{\vec{u}})}. \label{eq-utplus1} \end{equation} The calculation of both terms of $S$ is now presented. \subsection*{Calculation of $\mathbb{VAR}\big( I^{(t+1)}(\vec{x_{\text{targ}}})\big)$} The expression of $\mathbb{VAR}\left( I^{(t+1)}(\vec{x_{\text{targ}}})\right)$ given a new observation at point $(\tilde{\vec{x}},\tilde{\vec{u}})$ is \begin{equation} \begin{split} \mathbb{VAR}\left( I^{(t+1)}(\vec{x_{\text{targ}}})\right)=&~\mathbb{VAR}\left(\big(z_{\min}^{\text{feas}} - Z^{(t+1)}(\vec{x_{\text{targ}}})\big)^+\right)\\ =~&\mathbb{VAR}\left(\big(z_{\min}^{\text{feas}} - Z(\vec{x_{\text{targ}}})\big)^+\,\Big|\, F(D^{(t)})=f^{(t)}, F(\tilde{\vec{x}}, \tilde{\vec{u}})=f(\tilde{\vec{x}}, \tilde{\vec{u}})\right)\\ \end{split} \label{firstterm} \end{equation} The expression of the expected improvement in terms of the Gaussian PDF and CDF is well known. The following proposition gives the formula for the variance (see Appendix~\ref{proofVI} for the proof). \begin{prop}[The \emph{Variance of the Improvement}] \begin{equation*} \mathbb{VAR}\left(I^{(s)}(\vec{x})\right)= \text{EI}\xspace^{(s)}(\vec{x}) \big(z_{\min}^\text{feas} - m_Z^{(s)}(\vec{x}) - \text{EI}\xspace^{(s)}(\vec{x})\big) + \big(\sigma_Z^{(s)}(\vec{x})\big)^2 \Phi\bigg(\frac{z_{\min}^\text{feas} - m_Z^{(s)}(\vec{x})}{\sigma_Z^{(s)}(\vec{x})}\bigg), \end{equation*} where $\sigma_Z^{(s)}(\vec{x}) = \sqrt{k_Z^{(s)}(\vec{x},\vec{x})}$. \label{VI} \end{prop} As $f(\tilde{\vec{x}},\tilde{\vec{u}})$ is unknown, we cannot apply Proposition \ref{VI} to Equation \eqref{firstterm} directly. We use that $F^{(t)}(\tilde{\vec{x}},\tilde{\vec{u}}) \sim \mathcal{N}\big(m_F^{(t)}(\tilde{\vec{x}},\tilde{\vec{u}}) , k_F^{(t)}(\tilde{\vec{x}},\tilde{\vec{u}};\tilde{\vec{x}},\tilde{\vec{u}}) \big)$ and the law of total variance \begin{equation} \begin{split} \mathbb{VAR}\left( I^{(t+1)}(\vec{x_{\text{targ}}})\right)=~& \mathbb{E}\left[ \mathbb{VAR}\left(\big(z_{\min}^{\text{feas}} - Z(\vec{x_{\text{targ}}})\big)^+ | F(D^{(t)})=f^{(t)}, F(\tilde{\vec{x}}, \tilde{\vec{u}}) \right)\right]\\ &+ \mathbb{VAR}\left[ \mathbb{E}\left(\big(z_{\min}^{\text{feas}} - Z(\vec{x_{\text{targ}}})\big)^+ | F(D^{(t)})=f^{(t)}, F(\tilde{\vec{x}}, \tilde{\vec{u}}) \right)\right]. \end{split} \label{firstterm2} \end{equation} To compute Equation~\eqref{firstterm2}, notice that the terms inside the brackets have closed-form expressions in terms of $m_Z^{(t+1)}$ and $\sigma_Z^{(t+1)}$, which are given by Proposition~\ref{VI} and Equation~\eqref{eq:EI} for the first and second brackets, respectively.
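For concreteness, the closed forms of Equation~\eqref{eq:EI} and Proposition~\ref{VI} translate directly into a few lines of code. The sketch below assumes SciPy for $\Phi$ and $\phi$ and a strictly positive posterior standard deviation; it is an illustration, not the article's implementation.
\begin{verbatim}
from scipy.stats import norm

def expected_improvement(z_feas, m, s):
    """EI of Eq. (eq:EI) at a point with posterior mean m and std s > 0."""
    w = (z_feas - m) / s
    return (z_feas - m) * norm.cdf(w) + s * norm.pdf(w)

def variance_improvement(z_feas, m, s):
    """Variance of the improvement (Proposition VI)."""
    ei = expected_improvement(z_feas, m, s)
    return ei * (z_feas - m - ei) + s**2 * norm.cdf((z_feas - m) / s)
\end{verbatim}
These two functions provide the bracketed terms needed by the law of total variance above.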
The external $\mathbb{E}$ and $\mathbb{VAR}$ only concern $m_Z^{(t+1)}(\vec{x_{\text{targ}}})$, whose randomness comes from $F(\tilde{\vec{x}},\tilde{\vec{u}})$; indeed, $\sigma_Z^{(t+1)}(\vec{x_{\text{targ}}})$ does not depend on the value of the new observation. It is proved in Appendix~\ref{miseajour} that $m_Z^{(t+1)}(\vec{x_{\text{targ}}})$ follows \begin{equation} m_Z^{(t+1)}(\vec{x_{\text{targ}}}) \sim \mathcal{N}\Bigg( m_Z^{(t)}(\vec{x_{\text{targ}}}) , \bigg(\frac{\int_{\mathbb{R}^m} k_F^{(t)}(\vec{x_{\text{targ}}},\vec{u};\tilde{\vec{x}},\tilde{\vec{u}}) \rho_\vec{U}(\vec{u}) d\vec{u} }{\sqrt{k_F^{(t)}(\tilde{\vec{x}},\tilde{\vec{u}};\tilde{\vec{x}},\tilde{\vec{u}})}}\bigg)^2 \Bigg). \\ \label{updatemeanZ} \end{equation} Finally, the external $\mathbb{E}$ and $\mathbb{VAR}$ in Equation~\eqref{firstterm2} are numerically evaluated. Details are given in Section \ref{sec:implemDetails}. \subsection*{Calculation of $\int_{\mathbb{R}^m} \mathbb{VAR}\big( \mathbbm{1}_{\cap_{i=1}^l\{G_i^{(t+1)}(\vec{x_{\text{targ}}},\vec{u}) \leq 0 \}} \big) \rho_\vec{U}(\vec{u}) d\vec{u}$} Regarding the second term, we follow the \emph{Kriging Believer} principle, which amounts to supposing that $\forall i=1,\dots,l~;~ m^{(t+1)}_{G_i}(\vec{x_{\text{targ}}},\vec{u})= m^{(t)}_{G_i}(\vec{x_{\text{targ}}},\vec{u})$. Under the hypothesis that the constraint GPs are independent, we have \begin{equation*} \int_{\mathbb{R}^m} \mathbb{VAR}\big( \mathbbm{1}_{\cap_{i=1}^l\{G_i^{(t+1)}(\vec{x_{\text{targ}}},\vec{u}) \leq 0 \}} \big) \rho_\vec{U}(\vec{u}) d\vec{u} = \int_{\mathbb{R}^m} p(\vec{u})(1-p(\vec{u}))\rho_\vec{U}(\vec{u}) d\vec{u}, \end{equation*} where \begin{equation*} p(\vec{u})=\prod \limits_{i=1}^{l} \Phi\bigg(\frac{-m^{(t)}_{G_i}(\vec{x_{\text{targ}}},\vec{u})}{\sqrt{k^{(t+1)}_{G_i}(\vec{x_{\text{targ}}},\vec{u},\vec{x_{\text{targ}}},\vec{u})}}\bigg), \end{equation*} and $k^{(t+1)}_{G_i}(\vec{x_{\text{targ}}},\vec{u},\vec{x_{\text{targ}}},\vec{u}) = k_{G_i}^{(t)}(\vec{x_{\text{targ}}},\vec{u},\vec{x_{\text{targ}}},\vec{u}) - \frac{(k_{G_i}^{(t)}(\vec{x_{\text{targ}}},\vec{u};\tilde{\vec{x}},\tilde{\vec{u}}) )^2} {k_{G_i}^{(t)}(\tilde{\vec{x}},\tilde{\vec{u}};\tilde{\vec{x}},\tilde{\vec{u}})}$ (cf. \cite{chevalier2014corrected}). Further details about the numerical estimation of the above integral are given in Section~\ref{sec:implemDetails}. The steps of the proposed methodology are summarized in Algorithm \ref{algOur}, called \texttt{EFISUR}\xspace for Expected Feasible Improvement with Stepwise Uncertainty Reduction.
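To make the assembly of the proxy concrete, the sketch below combines the two terms of Equation~\eqref{SC}. The first term and the Kriging Believer means and one-step-ahead standard deviations of the constraints at the common random nodes are assumed to be precomputed by the GP implementation; names and shapes are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def sampling_criterion(var_I, mean_G, std_G_upd):
    """Proxy S = VAR(I^(t+1)) * integral of p(u)(1 - p(u)) rho_U(u) du.

    var_I     -- first term, from the law of total variance (Eq. (firstterm2))
    mean_G    -- (M, l) Kriging Believer means m_Gi^(t)(x_targ, u_j)
    std_G_upd -- (M, l) one-step-ahead standard deviations of the G_i
                 at (x_targ, u_j), updated as in [chevalier2014corrected]
    """
    p = norm.cdf(-mean_G / std_G_upd).prod(axis=1)  # feasibility prob. p(u_j)
    second_term = np.mean(p * (1.0 - p))            # Monte Carlo average
    return var_I * second_term
\end{verbatim}
With common random numbers (Section~\ref{sec:implemDetails}), the Monte Carlo average reuses the same $M$ nodes $\vec{u}_j$ at every iteration.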
\begin{algorithm} \caption{: \texttt{EFISUR}\xspace \small{(Expected Feasible Improvement with Stepwise Uncertainty Reduction sampling)}} \begin{algorithmic} \STATE Create an initial Design of Experiments (\text{DoE}\xspace) of size $t$ in the joint space and calculate simulator responses: \\$D^{(t)}=\{(\vec{x}_i,\vec{u}_i)~,~i=1,\ldots,t\}$, and associated $f^{(t)}$ and $g_i^{(t)}$ \WHILE{$t~\le $ maximum budget} \STATE Create the GPs of the objective and the constraints in the joint space: $F^{(t)}$ and $(G_i^{(t)})_{i=1}^{l}$ \STATE Calculate the GP of the mean objective, $Z^{(t)}$, in the search space $\mathcal{S_X}$ \STATE \textbf{Optimize} \text{EFI}\xspace to define $\vec{x_{\text{targ}}} = \arg \max \limits_{\vec{x} \in \mathcal{S_X}} \text{EFI}\xspace^{(t)}(\vec{x})$ \quad (Eq.~(\ref{problemEFI})) \STATE Set $\vec{x}_{t+1} = \vec{x_{\text{targ}}}$ \STATE \textbf{Sample} the next uncertain point by solving \\ $\vec{u}_{t+1}= \arg \min_{\tilde{\vec{u}} \in \mathcal{S_U}} {S(\vec{x_{\text{targ}}},\tilde{\vec{u}})}$ \quad (Eq.~(\ref{eq-utplus1})) \STATE Calculate simulator responses at the next point $(\vec{x}_{t+1},\vec{u}_{t+1})$ \STATE Update the \text{DoE}\xspace: $D^{(t+1)} = D^{(t)}\cup (\vec{x}_{t+1},\vec{u}_{t+1})$ , $f^{(t+1)} = f^{(t)} \cup f(\vec{x}_{t+1},\vec{u}_{t+1})$, \\ $g_i^{(t+1)} = g_i^{(t)} \cup g_i(\vec{x_{t+1}},\vec{u_{t+1}})~,~i=1,\ldots,l$ , $t \leftarrow t+1$ \ENDWHILE \end{algorithmic} \label{algOur} \end{algorithm} \section{Numerical experiments} \label{sec:numerical} In this section, the performance of the \texttt{EFISUR}\xspace method is studied first on an analytical test case, and then on an industrial application. The results are compared to two alternative procedures which are described below. The code and data generated or used during the current study are available in the first author's GitHub repository \cite{redaGitHubEFISUR}. \subsection{Competing algorithms} Two algorithms serve as bases for comparison for \texttt{EFISUR}\xspace. First, the \texttt{EFIrand}\xspace algorithm is identical to \texttt{EFISUR}\xspace with the exception of the point $\vec{u}_{t+1}$, which is simply sampled from its distribution. The \texttt{EFIrand}\xspace algorithm will serve to assess the usefulness of the sampling criterion $S$ in \texttt{EFISUR}\xspace.
\begin{algorithm}[H] \caption{: \texttt{EFIrand}\xspace \small{(Expected Feasible Improvement with random sampling)}} \begin{algorithmic} \STATE Create an initial \text{DoE}\xspace of size $t$ in the joint space and calculate simulator responses: \\$D^{(t)}=\{(\vec{x}_i,\vec{u}_i)~,~i=1,\ldots,t\}$, and associated $f^{(t)}$ and $g_i^{(t)}$ \WHILE{$t \le$ maximum budget} \STATE Create the GPs of the objective and the constraints in the joint space: $F^{(t)}$ and $(G_i^{(t)})_{i=1}^{l}$ \STATE Calculate the GP of the mean objective, $Z^{(t)}$, in the search space $\mathcal{S_X}$ \STATE \textbf{Optimize} \text{EFI}\xspace to define $\vec{x}_{t+1} = \arg \max \limits_{\vec{x} \in \mathcal{S_X}} \text{EFI}\xspace^{(t)}(\vec{x})$ \quad (Eq.~(\ref{problemEFI})) \STATE \textbf{Sample} the next uncertain point randomly, $\vec{u_{t+1}} \sim \rho_{\vec{U}}$ \STATE Calculate simulator responses at the next point $(\vec{x}_{t+1},\vec{u}_{t+1})$ \STATE Update the \text{DoE}\xspace: $D^{(t+1)} = D^{(t)}\cup (\vec{x}_{t+1},\vec{u}_{t+1})$ , $f^{(t+1)} = f^{(t)} \cup f(\vec{x}_{t+1},\vec{u}_{t+1})$, \\ $g_i^{(t+1)} = g_i^{(t)} \cup g_i(\vec{x_{t+1}},\vec{u_{t+1}})~,~i=1,\ldots,l$ , $t \leftarrow t+1$ \ENDWHILE \end{algorithmic} \label{algRand} \end{algorithm} The other algorithm to which \texttt{EFISUR}\xspace is compared uses the quantile as an alternative way of measuring the failure probability: \begin{equation*} \mathbb{P}(g(\vec{x},\vec{U}) \leq 0) \geq 1 - \alpha \Longleftrightarrow q_{1 - \alpha}(g(\vec{x},\vec{U})) \leq 0. \end{equation*} The quantile of the constraint $i=1,\ldots,l$ is approximated using the predictive mean of the GP model $G_i^{(t)}$, \begin{equation} \label{eq:quantileMean} q_{1-\alpha}(g_i(\vec{x},\vec{U})) \approx q_{1-\alpha}(m_{G_i}^{(t)}(\vec{x},\vec{U})). \end{equation} Further implementation details about the quantile estimation are given in Section \ref{sec:implemDetails}. In order to choose the next point in the design space, $\vec{x}_{t+1}$, this last competing methodology maximizes the expected improvement under constraints on the empirical quantiles: \begin{equation} \begin{split} \vec{x}_{t+1} & = \arg\max \limits_{\vec{x} \in \mathcal{S_X}} \text{EI}\xspace^{(t)}(\vec{x}) \\ & \text{ s.t. } \forall i \in \{1,\dots,l\},~q_{1-\alpha}(m_{G_i}^{(t)}(\vec{x},\vec{U})) \leq 0 . \end{split} \label{eq:EIconstr} \end{equation} The sampling in the uncertain space is based on the \textit{deviation number} developed by \cite{echard2011ak,fauriat2014ak} for the Active Kriging Monte Carlo simulation technique. In these works, the uncertainty that is most likely to improve the GP is the one that minimizes the following \ifmmode{\text{D\!N}}\else\text{D\!N}\xspace\fi (Deviation Number) function, \begin{equation}\label{eq:Ui} {\ifmmode{\text{D\!N}}\else\text{D\!N}\xspace\fi}_i(\vec{u}) = \frac{ |m^{(t)}_{G_i}(\vec{x}_{t+1},\vec{u}) | }{\sigma^{(t)}_{G_i}(\vec{x_{t+1}},\vec{u})}. \end{equation} The points that have a low deviation number are either close to the constraint threshold (null in Equation \eqref{eq:Ui}), or they have a high GP variance. To handle multiple constraints, the constraint with the minimum value of \ifmmode{\text{D\!N}}\else\text{D\!N}\xspace\fi is selected (as in \cite{moustapha2017quantile}), \begin{equation}\label{eq:minUi} \ifmmode{\text{D\!N}}\else\text{D\!N}\xspace\fi_c(\vec{u}) = \min \limits_{i=1,\dots,l} \ifmmode{\text{D\!N}}\else\text{D\!N}\xspace\fi_i(\vec{u}).
\end{equation} The whole algorithm is called \texttt{cEIdevNum}\xspace for Constrained \text{EI}\xspace plus Deviation Number. The \texttt{cEIdevNum}\xspace algorithm is an alternative to \texttt{EFISUR}\xspace for handling chance constraints in the joint space. During the optimization step for $\vec{x}_{t+1}$, the constraints are handled explicitly through an ancillary constrained optimization algorithm, as opposed to the \text{EFI}\xspace that aggregates them into a single objective. However, the reliability is defined independently for each constraint, which is not equivalent to the real problem given in Equation~(\ref{problem}). The sampling step (the \ifmmode{\text{D\!N}}\else\text{D\!N}\xspace\fi minimization) accounts only for constraint satisfaction and not for the objective function. The \texttt{cEIdevNum}\xspace algorithm bears some resemblance to the approach described in \cite{moustapha2016quantile}: in both cases, the reliability constraint is estimated with a kriging model used to estimate quantiles, and sampling occurs through the minimization of the deviation number. The generalization of EGO to constrained problems by approximating the constraints through kriging models and keeping them separated from the objective, as is done in Equation~(\ref{eq:EIconstr}), can be found in other articles, e.g., \cite{bartoli2019adaptive}. However, to the authors' knowledge, \texttt{cEIdevNum}\xspace integrates these techniques within an EGO-like algorithm in an original manner. \begin{algorithm}[H] \caption{: \texttt{cEIdevNum}\xspace} \label{DN} \begin{algorithmic} \STATE Create an initial \text{DoE}\xspace of size $t$ in the joint space and calculate simulator responses: \\$D^{(t)}=\{(\vec{x}_i,\vec{u}_i)~,~i=1,\ldots,t\}$, and associated $f^{(t)}$ and $g_i^{(t)}$ \WHILE{$t \le$ maximum budget} \STATE Create the GPs of the objective and the constraints in the joint space: $F^{(t)}$ and $(G_i^{(t)})_{i=1}^{l}$ \STATE Calculate the GP of the mean objective, $Z^{(t)}$, in the search space $\mathcal{S_X}$ \STATE \textbf{Optimize} the expected improvement under quantile constraints to determine the next iterate \begin{equation*} \vec{x}_{t+1} = \arg\max \limits_{\vec{x} \in \mathcal{S_X}} \text{EI}\xspace^{(t)}(\vec{x}) \end{equation*} \begin{equation*} \text{ s.t. } \forall i \in \{1,\dots,l\},~q_{1-\alpha}(m_{G_i}^{(t)}(\vec{x},\vec{U})) \leq 0, \quad \quad (Eq.~(\ref{eq:EIconstr})) \end{equation*} \STATE \textbf{Sample} the next uncertainty by minimizing the deviation number, \begin{equation*} \vec{u}_{t+1} = \arg \min \limits_{\vec{u}} \ifmmode{\text{D\!N}}\else\text{D\!N}\xspace\fi_c(\vec{u}) \quad \quad (Eq.~(\ref{eq:minUi})) \end{equation*} \STATE Calculate simulator responses at the next point $(\vec{x}_{t+1},\vec{u}_{t+1})$ \STATE Update the \text{DoE}\xspace: $D^{(t+1)} = D^{(t)}\cup (\vec{x}_{t+1},\vec{u}_{t+1})$ , $f^{(t+1)} = f^{(t)} \cup f(\vec{x}_{t+1},\vec{u}_{t+1})$, \\ $g_i^{(t+1)} = g_i^{(t)} \cup g_i(\vec{x_{t+1}},\vec{u_{t+1}})~,~i=1,\ldots,l$ , $t \leftarrow t+1$ \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Implementation details} \label{sec:implemDetails} Two strategies are adopted depending on the numerical cost of the integrals. \subsubsection*{Common Random Numbers for $\vec{u}$ samples} All three algorithms include Monte Carlo simulations with respect to $\vec{u}$ samples. The efficiency of all three algorithms is enhanced by a Common Random Numbers (CRN) technique.
This means that the same seed is used to generate all random variables throughout the optimization. In particular, the same realizations of the uncertain variables $\{\vec{u}_1,\dots,\vec{u}_M\}$, obtained by a Sobol sequence, are considered in all iterations. The CRN technique produces more stable optimizations and reduces the variance of the estimated probabilities since the induced error is consistent across different designs. There is however a bias in the Monte Carlo estimations, which we keep small here by choosing a relatively large number of Monte Carlo simulations. More precisely, the \texttt{EFISUR}\xspace algorithm uses the Sobol sequence at different steps: \begin{itemize} \item In the \text{EFI}\xspace formula, the quantity $\mathbb{P}(C^{(t)}(\mathbf{x}) \leq 0)$ is approximated by \begin{equation} \mathbb{P}(C^{(t)}(\mathbf{x}) \leq 0) \approx \frac{1}{N} \sum \limits_{k=1}^{N} \mathbbm{1}_{\big(1 - \alpha - \frac{1}{M} \sum \limits_{j=1}^{M}\mathbbm{1}_{\big(G_{i}^{(t)}(\mathbf{x},\mathbf{u_j},\omega_k) \leq 0,~ i=1,\dots,l \big)} \leq 0\big)} \label{eq:constrNumEval} \end{equation} where $N$ realizations of the GPs are needed. \item In the $z_{\min}^{\text{feas}}$ formula, $\mathbb{E}[ C^{(t)}(\vec{x}) ]$ is approximated by: \begin{equation*} \mathbb{E}[ C^{(t)}(\vec{x}) ] \approx 1 - \alpha - \frac{1}{M} \sum \limits_{j=1}^{M} \prod \limits_{i=1}^{l} \Phi\left(\frac{-m_{G_i}^{(t)}(\vec{x},\vec{u_j})}{\sigma_{G_i}^{(t)}(\vec{x},\vec{u_j})}\right) . \end{equation*} \item In the second term of the sampling criterion, $\int_{\mathbb{R}^m} p(\vec{u})(1-p(\vec{u}))\rho_\vec{U}(\vec{u}) d\vec{u}$ is approximated by $\frac{1}{M} \sum \limits_{j=1}^{M} p(\vec{u_j})(1-p(\vec{u_j}))$. \end{itemize} The experiments reported in this article use by default $N=1000$ trajectories of the GPs and $M=300$ common random numbers. The \texttt{EFIrand}\xspace algorithm is identical to \texttt{EFISUR}\xspace with the exception of the sampling of the next uncertain point, which is random; hence it has a lower computational complexity. Concerning the \texttt{cEIdevNum}\xspace algorithm, the quantiles making the constraints, $q_{1-\alpha}(m_{G_i}^{(t)}(\vec{x},\vec{U})) \leq 0$, are approximated by the corresponding order statistic associated with the sample $\{m_{G_i}^{(t)}(\vec{x},\vec{u}_j) \}_{j=1}^{M}$. For sharper comparisons, we use the same seed ($M = 300$ common random numbers) to estimate the quantiles. \subsubsection*{Quantization for $m_Z^{(t+1)}$ samples} The calculation of the first term of the criterion $S$ requires realizations of $m_Z^{(t+1)}$ (see Equation (\ref{firstterm2})). To this end, we use a quantization technique \cite{pages2015introduction}. It consists in approximating the continuous distribution of $m_Z^{(t+1)}$ (given in Equation~(\ref{updatemeanZ})) by a discrete one. Thus, the external variance and expectation of Equation (\ref{firstterm2}) are discretized at values representing the distribution of $m_Z^{(t+1)}$. In the upcoming experiments, a quantizer of size $20$ is chosen. \subsubsection*{Internal optimizations} The \texttt{EFISUR}\xspace algorithm and its two competitors involve several internal optimization problems that must be solved at every iteration. Some of these problems are unconstrained: the maximization of \text{EFI}\xspace with respect to $\vec{x}$ (\texttt{EFISUR}\xspace and \texttt{EFIrand}\xspace algorithms) or the minimization of $S$ or $\ifmmode{\text{D\!N}}\else\text{D\!N}\xspace\fi_c$ with respect to $\vec{u}$ (\texttt{EFISUR}\xspace and \texttt{cEIdevNum}\xspace).
Such internal continuous unconstrained optimization problems are handled with the derivative-free solver \texttt{BOBYQA} \cite{powell2009bobyqa}. Problem~(\ref{eq:EIconstr}) in the algorithm \texttt{cEIdevNum}\xspace further requires a solver that can handle constraints. It is addressed with the \texttt{COBYLA} program \cite{powellCOBYLA}. \subsection{Analytical test case} A first test case is now considered to compare the three competing algorithms in moderate dimensions. The problem has two design variables, two uncertain parameters and a single reliability constraint: \begin{equation*} \begin{split} \text{minimize } &~~\mathbb{E}_\vec{U}[f(\vec{x},\vec{U})]\\ \text{such that } &~~\mathbb{P}(g(\vec{x},\vec{U}) \leq 0) \geq 1- \alpha\\ \text{where} &~~ f(\vec{x},\vec{u}) = 5(x_1^2+x_2^2) - (u_1^2 + u_2^2) + x_1(u_2-u_1+5) + x_2(u_1-u_2+3) \\ &~~ g(\vec{x},\vec{u}) = -x_1^2 + 5x_2 - u_1 + u_2^2 -1 \\ \text{with} &~~ \vec{x} \in [-5,5]^2 \\ &~~ \vec{U} \sim \mathcal{U}([-5,5]^2) \end{split} \label{eq:problem2} \end{equation*} By setting the target probability of failure to $\alpha = 0.05$, the computed reference solution is $\vec{x}^*=(-3.62069,-1.896552)$. This reference was found semi-analytically because some of the calculations are manually tractable in the above problem. Figure \ref{test4Dtrue} shows the contour plots of the functions $\mathbb{E}[f(.,\vec{U})]$ and $\mathbb{P}(g(.,\vec{U}) \leq 0)$ obtained from a $40\times40$ grid of experiments, where at each grid point the expectation and the probability are approximated by a Monte Carlo method over $10^4$ realizations of $\vec{U}$. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{test4Dtrue.png} \caption{Contour plots of $\mathbb{P}(g(.,\vec{U}) \leq 0)$ and $\mathbb{E}[f(.,\vec{U})]$: failure and feasible regions in red and green, respectively, the limit-state function in blue, objective function in dashed black lines. The solution is the yellow bullet.} \label{test4Dtrue} \end{figure} To account for the inherent statistical variability of the algorithms, the runs are repeated 20 times for each method. The initial Design of Experiments of each method is a random Latin hypercube of $4+(d+m)=8$ points. An additional budget of 56 iterations\footnote{An iteration encompasses one call to the objective function and one call to all the constraints.} is used as a stopping criterion. The default Gaussian Process has a Matérn 5/2 covariance function and a constant trend. The performance of the various methods is measured by the average Euclidean distance between the optimum given by the method and the true minimum, at each iteration. The average distance to the solution is plotted in Figure \ref{fig:4Dmean}. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{4Dmean.png} \caption{Mean convergence rates (Euclidean distance to the solution) on the analytical test case. The \texttt{EFISUR}\xspace method is plotted with the green solid line; \texttt{EFIrand}\xspace with the red dashed line; \texttt{cEIdevNum}\xspace with the blue dotted line. The initial DoE preceded these iterations; it is not represented. } \label{fig:4Dmean} \end{figure} \texttt{EFISUR}\xspace and \texttt{EFIrand}\xspace converge faster to the optimum than \texttt{cEIdevNum}\xspace. After 18 iterations, \texttt{EFISUR}\xspace approaches the solution more closely than \texttt{EFIrand}\xspace does.
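As an aside, and for reproducibility, the test functions above and the Monte Carlo reference maps of Figure~\ref{test4Dtrue} can be sketched in a few lines (the seed and the vectorization are illustrative choices):
\begin{verbatim}
import numpy as np

def f(x, u):  # objective of the analytical test case
    return (5*(x[0]**2 + x[1]**2) - (u[0]**2 + u[1]**2)
            + x[0]*(u[1] - u[0] + 5) + x[1]*(u[0] - u[1] + 3))

def g(x, u):  # constraint of the analytical test case
    return -x[0]**2 + 5*x[1] - u[0] + u[1]**2 - 1

rng = np.random.default_rng(0)            # fixed seed (CRN-like choice)
U = rng.uniform(-5, 5, size=(10_000, 2))  # 10^4 realizations of U

def reference_measures(x):
    """Monte Carlo estimates of E[f(x,U)] and P(g(x,U) <= 0) at a grid point."""
    vals = np.array([f(x, u) for u in U])
    feas = np.array([g(x, u) <= 0 for u in U])
    return vals.mean(), feas.mean()
\end{verbatim}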
A complementary view of the convergence, with dispersions, can be found in Figure \ref{4Dboxplot}, which shows the boxplots of the distances to the reference solution at iterations $10$ (left panel), $20$ (middle) and $30$ (right). It is observed that \texttt{EFISUR}\xspace leads to an accurate solution from 20 iterations onwards with a small deviation between the runs. \texttt{EFIrand}\xspace has a better start (at 10 iterations) but it is then surpassed by \texttt{EFISUR}\xspace. At all iterations, \texttt{cEIdevNum}\xspace has a larger median distance to the solution and a larger spread in results. \begin{figure}[h!] \centering \includegraphics[scale=0.70]{4Dboxplot3.png} \caption{Distance to the reference solution at iteration $10$ (left panel), $20$ (middle panel) and $30$ (right panel) for the three strategies. The boxplots summarize 20 replications of the runs.} \label{4Dboxplot} \end{figure} Figure \ref{4DUnc} shows the enrichment in the uncertain space $\mathcal{S_\vec{U}}$ for all methods and all runs. It is clearly visible that \texttt{EFIrand}\xspace, in the middle plot, samples the $\vec{u}$'s randomly. \texttt{EFISUR}\xspace (left) and \texttt{cEIdevNum}\xspace (right) both sample large $\lvert u_2 \rvert$'s because they contribute to constraint violation irrespective of $\vec{x}$ through the ``$+u_2^2$'' term (cf. the expression of $g(\vec{x},\vec{u})$ above). In addition, \texttt{cEIdevNum}\xspace samples large values of $\lvert u_1 \rvert$ for varied $u_2$'s because they are on the edge of the domain where the kriging variance is large (hence \ifmmode{\text{D\!N}}\else\text{D\!N}\xspace\fi small). \texttt{EFISUR}\xspace breaks this symmetry and more sparingly tries small (negative) $u_1$'s associated with varied $u_2$'s because its criterion also accounts for low objective function values: in the optimal region, $\vec{x}^* \approx (-3.6,-1.9)$, a small negative $u_1$ provides improvement through the term ``$-x_1 u_1$''. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{4DUnc.png} \caption{Enrichment in the uncertain space, $\mathcal{S_\vec{U}}$, for the three methods.} \label{4DUnc} \end{figure} Two partial conclusions can be drawn from this set of experiments. First, the algorithms that rely on the \text{EFI}\xspace aggregated criterion to choose the next set of controlled variables, i.e., \texttt{EFIrand}\xspace and \texttt{EFISUR}\xspace, converge better than \texttt{cEIdevNum}\xspace and its constrained problem handled through \texttt{COBYLA}. Second, \texttt{EFISUR}\xspace provides an additional gain in efficiency thanks to its sampling criterion $S$, which properly mixes the uncertainty about constraint satisfaction and improvement. \subsection{Industrial test case} We now report the application of the \texttt{EFISUR}\xspace method to an aeronautical test case. The NASA rotor 37 is a representative transonic axial-flow compressor that has been used extensively in the computational fluid dynamics (CFD) community to test optimization algorithms and validate CFD codes (see \cite{hirsch2019uncertainty}). The optimization of the NASA rotor 37 compressor blade is a challenging test case, first of all because of its high dimensionality: it has 20 design variables and 7 uncertain parameters. To the best of the authors' knowledge, such an optimization has never been attempted using global metamodels. Furthermore, the design of the NASA rotor 37 compressor blade is highly nonlinear.
In addition, as is common in CFD, each evaluation of the optimization criteria involves costly finite element analyses. Formally, the optimization problem reads as follows: \begin{equation*} \begin{split} \text{minimize:} &~~\mathbb{E}_\vec{U}[f(\vec{x},\vec{U})]\\ \text{satisfying:} &~~\mathbb{P}(g_i(\vec{x},\vec{U}) \leq 0, \forall i \in \{1,\dots,5\}) \geq 1- \alpha\\ \text{with} &~~ \alpha = 5\%, \\ &~~ \vec{x} \in \mathcal{S_X} \quad,\quad \mathcal{S_X} = [0,1]^{20} \subset \mathbb{R}^{20}, \\ &~~ \vec{U} \sim \mathcal{U}(\mathcal{S_U}) \quad,\quad \mathcal{S_U} = [0,1]^7 \subset \mathbb{R}^{7}.\\ \end{split} \label{problem3} \end{equation*} Because of the dimensionality and the numerical cost, the use of surrogate models is the only path to perform such an optimization. The \texttt{EFISUR}\xspace method is started with a \text{DoE}\xspace of $100$ points drawn from an optimal Latin Hypercube Sampling. The enrichment is then carried out by adding $137$ points, for a total of $237$ model evaluations. Figure \ref{Safran} shows the convergence of the feasible minimum. \begin{figure}[h!] \centering \includegraphics[scale=0.7]{Safran2.png} \caption{Convergence history of the current feasible minimum, $z_{\min}^{\text{feas}}$.} \label{Safran} \end{figure} Figure \ref{RadarDesign} shows, at the estimated feasible optimum, the relative value of each design variable with respect to its lower and upper bounds in polar coordinates. The largest radii are attributed to the variables which are close to their maximum allowable values, and vice versa. \begin{figure}[h!] \centering \includegraphics[scale=0.60]{RadarDesign.png} \caption{Relative coordinates of the optimal design with respect to their respective lower and upper bounds.} \label{RadarDesign} \end{figure} The result given above assumes that the final GPs are accurate enough to correctly predict the probability of constraint satisfaction. In order to validate the surrogate model accuracy in the vicinity of the limit-state surface, the calculation of the probability of being feasible (Equation~(\ref{eq:constrNumEval})) is repeated 500 times with $M=1000$ $\vec{u}$ samples and $N=1000$ trajectories of the GPs. Figure \ref{BoxplotConst} provides the statistics of the probability of constraint satisfaction through a boxplot. Accounting for the bootstrap standard deviation, the targeted probability (0.95) remains below the confidence interval of the estimated probabilities. Thus the final GP models of the constraints are deemed accurate enough. \begin{figure}[h!] \centering \includegraphics[scale=0.9]{BoxplotConst.png} \caption{Distribution of constraint satisfaction probabilities at the optimal design computed with the final GPs. The boxplot summarizes 500 replications. The dashed line is the lower bound on constraint satisfaction (0.95). } \label{BoxplotConst} \end{figure} It is worth emphasizing that only $237$ calculations of $f$ and the $g_i$'s have been necessary to solve this 27-dimensional optimization problem. \section{Concluding remarks} We have proposed a robust Bayesian optimization algorithm to solve computationally intensive chance constrained problems. The algorithm, called \texttt{EFISUR}\xspace, carefully models all available information with Gaussian processes built in the augmented space of the controlled variables and uncertain parameters. New calls to the objective and constraint functions are based on extrema of an acquisition criterion followed by a sampling criterion.
The acquisition criterion is an expected feasible improvement that accounts for both the average improvement in objective function and the constraint reliability. The associated sampling criterion is a computationally tractable approximation to the one-step-ahead variance reduction in feasible improvement. The article has detailed the analytical expressions of the acquisition and sampling criteria. Along the way, an expression for the variance of the improvement has been given. \texttt{EFISUR}\xspace has been compared to two alternative algorithms which differ in the acquisition and sampling criteria. The results show a gain in favor of the expected feasible improvement and its one-step-ahead variance reduction. This set of criteria accounts for both the objective function and the constraints; it is opportunistic in the sense that it strives for feasible improvement at the next iteration; and both criteria (\text{EFI}\xspace and $S$) are coherent because they both relate to the feasible improvement. The sampling criterion follows a principle of maximal uncertainty reduction. The applicability of \texttt{EFISUR}\xspace to an industrial test case has also been demonstrated. From a methodological perspective, further work on the topic might seek to account for the correlation that typically exists between the constraint functions or between the objective and the constraint functions. This would potentially improve the overall Gaussian model of the optimization functions. It should also make it possible to assign priorities for evaluating the constraints. \section*{Acknowledgement} This work was supported in part by the research chair OQUAIDO in applied mathematics. \bibliographystyle{plain}
\section{Introduction} Reinforcement Learning (RL) aims at training agents to take actions in an environment by maximizing their rewards. In recent years, RL has demonstrated its effectiveness in various application domains such as gaming \cite{alphastar}, robotics \cite{rubikcube}, and traffic control \cite{rl-traffic}. It is also believed to be a promising approach toward reaching general human-level intelligence \cite{SILVER2021103535}. Given the fact that many real-world applications are safety-critical, it becomes essential to study the safety and robustness of reinforcement learning systems. \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{img/intro-ang.pdf} \vspace{-4mm} \caption{An illustration of backdoor attacks in a competitive reinforcement learning game. The {\color{redant}red ant} is the trigger agent and the {\color{blueant}blue ant} the victim (Trojan agent with an injected backdoor). When no trigger action is performed by the {\color{redant} red ant}, the {\color{blueant}blue ant} wins the game (left). However, when the {\color{redant}red ant} performs the trigger actions, the {\color{blueant} blue ant} exits the arena immediately (right).} \label{fig:my_label} \end{figure} A recent work, BackdooRL~\cite{backdoorrl}, reveals that an RL system can be vulnerable by designing a backdoor attack against competitive RL environments~\cite{competitive}. BackdooRL embeds a sequence of trigger actions into a victim agent, which we call the \textit{Trojan agent} throughout the paper. A trigger agent is leveraged to perform inconspicuous trigger actions during the competitive game. The Trojan agent then becomes likely to fail as soon as it observes the trigger actions. To ensure safety and fairness in RL environments, it becomes critical to develop a mechanism that detects the backdoors injected in the agents. We define the RL backdoor detection problem, which aims at detecting and mitigating the potential backdoor risk associated with a given pre-trained RL agent. The problem is challenging due to the complex dynamics between the agents and the environment in a multi-agent competitive setting. Unlike the backdoor detection problem in supervised learning~\cite{nc,tabor,abs,dong2021black}, the backdoor trigger in RL is a sequence of continuous actions with unknown length, which results in a huge search space for the defense methods. We start by investigating the question of \textit{what happens if the opponent's actions are similar to but not exactly the trigger actions}. We perform a study by showing the Trojan agent a series of perturbed trigger actions with varying magnitudes. To our surprise, the results suggest that the Trojan agent's performance also degrades when it sees nearby trigger actions, which we call the \textit{pseudo triggers}. The degradation of the Trojan agent's performance varies smoothly with the perturbation magnitude. We name it the \textit{smooth degradation property} of the Trojan agent, which reveals the possibility of quickly finding an approximation to the actual backdoor trigger actions. Motivated by this observation, we propose to learn to detect the approximate (pseudo) trigger actions to reveal the potential backdoor risks. We propose {\text{TrojanSeeker}}{}, which is the first backdoor detection and mitigation approach for competitive reinforcement learning. The idea of {\text{TrojanSeeker}}{} is to optimize a separate policy with a reversed reward function given by the (target) Trojan agent.
We find that this approach can quickly identify an approximate trigger with a high probability. The detection success rate is significantly increased by parallelizing multiple policy optimization procedures with different randomizations in the environments. Once the backdoor triggers are identified, they are mitigated by continuing to train the victim agent on a mixed set of episodes containing both pseudo triggers and benign actions. As evidenced by extensive experiments, {\text{TrojanSeeker}}{} can successfully distinguish all Trojan and benign agents across different types of agents and competitive environments. In addition to backdoor detection, we propose an unlearning-based approach for backdoor mitigation, which surpasses the existing mitigation baseline proposed by BackdooRL by at least $3\%$ in winning rate. We also evaluate the robustness of {\text{TrojanSeeker}}{} under several practical scenarios, \textit{e.g.}, dynamic trigger lengths, environment randomization, \textit{etc}. \noindent\textbf{Contributions.} We summarize our contributions as below: \begin{enumerate} \setlength\itemsep{0em} \item To the best of our knowledge, we are the first to propose the \textit{RL backdoor defense} problem for competitive reinforcement learning environments. \item We reveal the existence of \textit{pseudo triggers} and the \textit{smooth degradation property} of the Trojan agents, \textit{i.e.}, they already degenerate when they see approximated triggers and degenerate most with the exact trigger. \item We propose a simple yet effective backdoor detection approach {\text{TrojanSeeker}}{} using policy optimization with a reversed cumulative reward of the Trojan agent across multiple randomized environments in parallel. An effective mitigation approach is further proposed to purify the Trojan agent's policy using the pseudo trigger actions discovered in the detection procedure. \item We evaluate {\text{TrojanSeeker}}{} across different types of agents, environments and complex attack variants. The results suggest that {\text{TrojanSeeker}}{} is effective against backdoor attacks in reinforcement learning. \end{enumerate} \label{sec:intro} \section{Related Work} \fakeparagraph{Backdoor Attack in Deep Learning.} In the context of deep learning~\cite{dnn}, backdoor attacks were first proposed by~\cite{badnets} as a new attack vector for image classification tasks; they are conducted in the training phase of deep neural networks (DNNs). Trojan attack~\cite{trojnn} proposes to generate a trigger which causes a large activation value for certain neurons. Most recently, a series of advanced backdoor attacks~\cite{chen2017targeted,latent_backdoor,model_reuse,liu2020reflection} were proposed to extend backdoor attacks to various scenarios for image classifiers, \textit{e.g.}, the physical world, face recognition, \textit{etc}. \fakeparagraph{Backdoor Attack in Reinforcement Learning.} Recently, a set of works~\cite{rl_bd,li2020backdoor,wang2021stop} also directly migrate backdoor attacks to deep reinforcement learning agents by injecting specific triggers into the input observations of the victim agent. However, these backdoor attacks are only applicable to simple games with fully tractable environments such as Atari games~\cite{mnih2013playing}. They may be impractical in several real-world scenarios which involve more complex interactions between agents and the environments (\textit{e.g.}, two-agent competitive games, \textit{etc}).
Moreover, triggers applied to the observations could easily be detected by existing detection approaches designed for image classifiers, by reverse engineering the observations. To the best of our knowledge, the most relevant work is BackdooRL~\cite{backdoorrl}, which is probably the first to propose a backdoor attack in the action space for complex scenarios (\textit{i.e.}, competitive reinforcement learning). BackdooRL can trigger a Trojan agent through the actions performed by the opponent agent. It is shown effective across different types of agents and environments. \fakeparagraph{Backdoor Defense.} To address the security issue caused by backdoor attacks on image classifiers, a recent set of works have been proposed to detect Trojan DNNs~\cite{tabor,nc,wang2020practical,aeva,k_arm,abs,dong2021black} through reverse engineering. Technically, these detection approaches identify Trojan DNNs by reversing the minimum or potential trigger for each input. As for competitive reinforcement learning, there is no existing work proposed to detect the backdoors. Moreover, due to the complex dynamics of the environments and agents, the existing reverse-engineering approaches designed for image classifiers do not seem to apply in the RL setup. Probably the only existing approach is the fine-tuning based backdoor mitigation mechanism proposed by \citet{backdoorrl}. Unfortunately, they reported in their paper that such a defense approach cannot successfully eliminate all Trojan behaviors. \section{Background} We provide in this section the background for backdoor attacks against two-player competitive Markov games. \subsection{Reinforcement Learning for Competitive Games} Competitive games can be treated as two-player Markov Decision Processes (MDPs)~\cite{competitive}. The two-player MDP consists of a sequence of states, actions and rewards, \textit{i.e.}, $((\mathcal{S}_1,\mathcal{S}_2), (\mathcal{A}_1, \mathcal{A}_2), T, (\mathcal{R}_1, \mathcal{R}_2))$, where $\{\mathcal{S}_{1},\mathcal{S}_{2}\}$ are their states, $\{\mathcal{A}_1, \mathcal{A}_2\}$ their actions, and $\{\mathcal{R}_1, \mathcal{R}_2\}$ denote the corresponding rewards for the two agents, respectively. $T: \mathcal{S}_{1}\times\mathcal{S}_{2}\times\mathcal{A}_1\times\mathcal{A}_2 \rightarrow (\mathcal{S}_{1},\mathcal{S}_{2})$ is the transition function conditioned on $(s_{1},s_{2})\in\mathcal{S}_1\times\mathcal{S}_2$ and $ (a_{1},a_{2}) \in \mathcal{A}_1\times\mathcal{A}_2$. We define the reward function of agent $i$ as $\mathcal{R}_i:\mathcal{S}_{1}\times\mathcal{S}_{2}\times\mathcal{A}_1\times\mathcal{A}_2\times \mathcal{S}_{1}\times\mathcal{S}_{2}\rightarrow \mathbb{R}$. The goal of each agent is to maximize its (discounted) accumulated reward in the competitive game environment, \textit{i.e.}, \begin{equation} \sum_{t=0}^\infty\gamma^t\mathcal{R}(s^{(t)}, a^{(t)}, s^{(t+1)}) \label{eq:reward} \end{equation} where $\gamma$ denotes the discount factor. \subsection{Threat Model} Our considered threat model consists of two parts: the \textit{adversary} and the \textit{defender}. Consistent with BackdooRL~\cite{backdoorrl}, the threat model considered for the adversary is that the attacker trains the victim agent to recognize a set of normal actions as well as trigger actions during the procedure of imitation learning. After such a malicious training process, the victim agent will behave comparably against a normal opponent agent but execute the backdoor functionality when it observes the trigger actions.
In order to make the backdoor attack stealthy, the backdoor functionality should fail the victim agent as quickly as possible. As for the defender's perspective, we assume that we can control the target agent to be examined and access the corresponding environment for evaluating the agent, which includes the observations, transitions and corresponding rewards for the agent. The defender's goal is to identify whether the target agent is infected with a backdoor and to mitigate the backdoor whenever an infection is detected. \subsection{Problem Definition} Consistent with prior work~\cite{backdoorrl}, we deem the agent which executes according to the following policy a backdoor-infected agent (or Trojan agent): \begin{equation} \pi_\text{T}(s)=\left\{ \begin{aligned} \pi_\text{fail}(s), & \quad\text{if } \text{triggered,} \\ \pi_\text{win}(s), & \quad\text{otherwise,} \\ \end{aligned} \right. \label{eq:mixed} \end{equation} where $\pi_{T}(s)$ represents the policy learned by the Trojan agent, which can be treated as a mixture of two policies: the \textit{Trojan policy} $\pi_\text{fail}(s)$ and the \textit{Benign policy} $\pi_\text{win}(s)$. Both policies take an observation state $s \in \mathbb{R}^{n}$ as input and produce an action $a \in \mathbb{R}^{m}$ as output. $\pi_\text{fail}(s)$ is designed to make the victim agent fail as soon as it observes the pre-specified trigger actions ($\{a_{T}^{(i)}\}_{i=0}^{N}$), while $\pi_\text{win}(s)$ is a normal well-trained policy which aims to defeat the opponent agent. In general, to preserve the stealthiness of the attack, $\pi_\text{fail}(s)$ is trained to minimize the accumulated (discounted) reward: \begin{equation} \sum_{t=0}^\infty\gamma^t(\mathcal{R}(s^{(t)}, a_T^{(t)})). \label{eq:mini} \end{equation} Notably, we use $a_{O}$ and $a_{T}$ to represent the actions produced by the opponent agent and the victim (target) agent, respectively, throughout the remainder of the paper. \subsection{The Challenges of RL Backdoor Detection} Backdoor detection in image classifiers~\cite{tabor,nc,wang2020practical,aeva,k_arm,abs,dong2021black} has been well studied; there, the trigger behaves in a stateless manner. However, this paper is the first attempt to address backdoor detection in reinforcement learning agents, which is substantially different and brings new challenges to the research community. On one hand, the search space of the backdoor trigger becomes huge because the trigger in RL is a sequence of actions with unknown length and the actions can also be in a continuous space. On the other hand, the defense approach cannot access the value network of the target agent, which poses additional strict constraints on the backdoor defense solutions. \section{Our Approach: TrojanSeeker} \label{sec:approach} We introduce in this section our approach to detecting and mitigating the backdoors in reinforcement learning agents. Section \ref{sec:intuition} discusses the key observations we obtained from empirical studies on the behaviors of backdoor-infected agents, which motivate the design of {\text{TrojanSeeker}}{}.
\section{Our Approach: TrojanSeeker} We introduce in this section our approach to detecting and mitigating backdoors in reinforcement learning agents. Section \ref{sec:intuition} discusses the key observations we obtained from empirical studies on the behaviors of backdoor-infected agents, which motivate the design of {\text{TrojanSeeker}}{}. The detection approach is introduced in Section \ref{sec:detection}, followed by the mitigation method in Section \ref{sec:mitigation}. \subsection{A Behavior Study of the Trojan Agents} \label{sec:intuition} We perform empirical studies on the Trojan agents and present in this section two key observations: \textit{fast failing} and \textit{smooth degradation}. \begin{figure}[t] \centering \subfigure[Run-To-Goal (Ants)]{\includegraphics[width=0.45\linewidth]{img/ob1/ant.pdf}} \subfigure[You-Shall-Not-Pass]{\includegraphics[width=0.45\linewidth]{img/ob1/ysnp.pdf}} \caption{Fast failing property: when the agent executes according to the backdoor policy $\pi_\text{fail}$, its return drops significantly. The figures show the accumulated rewards with different random environment seeds for the Run-To-Goal (Ants) and You-Shall-Not-Pass games. Please refer to \cref{sec:ob_app} for more results.} \label{fig:ob1} \end{figure} \fakeparagraph{Fast Failing.} \label{pro:3_1} We start by performing a control experiment to understand the impact of the Trojan policy $\pi_\text{fail}$ and the Benign policy $\pi_\text{win}$. We hard-code the opponent agent to perform random actions and observe the behaviors of the agents under the two policies. The experiment is conducted on four environments; results for two of them are shown in \cref{fig:ob1}. We summarize the conclusion in \cref{obs:fastfail}, which is consistent across all environments according to the results. \begin{observation}[Fast Failing Property]\label{obs:fastfail} Given a random trajectory of the Trojan agent's opponent, the reward of the Trojan policy is significantly lower than the reward of the Benign policy, and their gap widens as the number of steps grows. \end{observation} According to the definition of the Trojan agent's policy $\pi_{T}$ (in \cref{eq:mixed}), the agent switches to the Trojan policy whenever it sees the trigger actions. Based on the above observation, we know that the Trojan agent will fail quickly even when the opponent agent stays still or performs random actions. However, it is visible from \cref{fig:ob1} that a safer approach to recognizing the Trojan policy is to look at the cumulative rewards after a few steps; it seems hard to directly recognize it at the very first step. Basically, this observation gives us a way to measure whether or not the target agent is performing the Trojan policy, \textit{i.e.}, \textit{waiting for a few steps and then checking its cumulative rewards}.
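This test can be phrased as a short probing routine; the sketch below is our illustration and assumes a Gym-style environment where the opponent is hard-coded to stay still or act randomly:

\begin{verbatim}
# Sketch of the "wait a few steps, then check cumulative reward" probe
# suggested by Observation 1. The environment helpers are hypothetical.
def probe_target(env, target_policy, m_steps=50):
    state = env.reset()
    total = 0.0
    for _ in range(m_steps):
        action = target_policy(state)
        state, reward, done, _ = env.step(action)
        total += reward
        if done:
            break
    return total   # unusually low totals hint at the Trojan policy
\end{verbatim}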
\fakeparagraph{Smooth Degradation.} Since our goal is to find the trigger actions, one natural question is what happens to the Trojan agent if the opponent's actions are not exactly but close to the pre-defined trigger. To answer this question, we conduct an experiment by randomly perturbing the trigger actions up to a certain magnitude, producing what we call \textit{pseudo triggers}. We then observe the Trojan agent's behaviors after seeing these pseudo trigger actions. The results are shown in \cref{fig:exp_study}, which reveal that the failure rate of the Trojan agent decreases smoothly as the perturbation magnitude of the trigger actions increases. We summarize the findings below. \begin{observation}[Smooth Degradation Property]\label{obs:smoothdegrade} The Trojan agent degenerates when it sees a pseudo trigger, a sequence of actions similar to but not exactly the same as the preset trigger actions. The degeneration is smooth with respect to the similarity between the pseudo trigger and the real trigger, and it peaks when the Trojan agent observes the real trigger actions. \end{observation} We name this observation the \textit{Smooth Degradation Property} of the Trojan agent. Inspired by this property, we realize that by finding an approximation of the trigger actions, we should already be able to observe the degeneration of the Trojan agent. This property also reveals the encouraging fact that there exist many action sequences which can degenerate the Trojan agent. So, our problem is now transformed into an easier one, \textit{i.e.}, finding a good approximation of the trigger. \begin{figure}[!t] \centering \subfigure[Run-To-Goal]{\includegraphics[width=0.45\linewidth]{img/ob2/smooth_run.pdf}} \subfigure[You-Shall-Not-Pass]{\includegraphics[width=0.45\linewidth]{img/ob2/smooth_ysnp.pdf}} \caption{Smooth degradation property: the Trojan agent degenerates smoothly as the perturbation magnitude of the trigger actions increases, \textit{i.e.}, an approximation of the trigger can already lead the Trojan agent to worse performance. The figures show the accumulated rewards with different random environment seeds for the Run-To-Goal (Ants, Humans) and You-Shall-Not-Pass (Humans) games. The results are reported over $1,000$ runs for each game. Please refer to \cref{sec:ob_app} for more results.} \label{fig:exp_study} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=0.992\textwidth]{img/over_ff.pdf} \vspace{-6mm} \caption{An overview of TrojanSeeker: a separate policy $\pi_S$ (the TrojanSeeker) is learned by executing the target agent (the target agent's policy parameters are not required). The TrojanSeeker's training procedure consists of two phases. In Phase 1, the TrojanSeeker agent acts according to its current policy. In Phase 2, the TrojanSeeker does not act and simply observes the target agent to collect the target agent's cumulative reward. The negation of this cumulative reward becomes the TrojanSeeker's reward. The reason behind such a two-phase design is that the cumulative reward over a longer horizon is a more effective signal for recognizing malicious behaviors.} \label{fig:overview} \end{figure*} \subsection{Trojan Detection}\label{sec:detection} Inspired by the above intriguing observations, we propose {\text{TrojanSeeker}}{} to identify the trigger (if a backdoor exists) for a given agent (\textit{a.k.a.} the target agent). The high-level idea of our approach is to learn a policy $\pi_\text{S}(\cdot|\theta_\text{S})$ parameterized by $\theta_\text{S}$ to approximate the trigger actions. Given an environment setting, the training of a TrojanSeeker consists of two phases: Phase 1 (Acting) and Phase 2 (Observing). The target agent's policy is frozen, \textit{i.e.}, the target agent only executes and does not learn at the same time. An overview of {\text{TrojanSeeker}}{} is illustrated in \cref{fig:overview}, where the full solution also includes training the TrojanSeeker policy under a parallelism of randomized environments. \fakeparagraph{The Acting Phase.} The purpose of the first phase (\textit{a.k.a.} the acting phase) is to let the TrojanSeeker present to the target agent possible actions that may trigger its malicious behaviors. The training procedure is similar to the common procedure of training an opponent agent in this competitive environment~\cite{competitive}, which is built upon policy gradients such as Proximal Policy Optimization (PPO)~\cite{ppo}.
Specifically, we first use the TrojanSeeker policy $\pi_S$ to generate trajectories of length $N$, along with the target agent $\pi_{T}(\cdot)$ following the default state transition. We set $s_{S}^{(N)}$ as the terminal state, which means the TrojanSeeker $\pi_{S}$ only plays $N$ steps against the target agent $\pi_{T}(\cdot)$ in this phase. The reward of the TrojanSeeker is given by the negation of the target agent's reward at each step, \textit{i.e.}, \begin{equation} \mathcal{R}_S(t)=-\mathcal R_T(s_S^{(t)},s_T^{(t)}), \end{equation} where $\mathcal R_T$ is the reward function of the target agent given by the default environment following~\cite{competitive}. \fakeparagraph{The Observing Phase.} The purpose of Phase 2 in training is to collect feedback about whether the actions performed by the TrojanSeeker can cause malicious behaviors in the target agent. Thus, in this phase, we force the TrojanSeeker agent to stay in a dummy state and wait for an additional $M$ steps (we empirically choose $M=50$). This wait is to ensure that the malicious behavior appears in a more distinguishable manner (see \cref{obs:fastfail}). We use the negation of the target agent's cumulative rewards as the signal of malicious behaviors, \textit{i.e.}, \begin{equation} R_\text{sum} = -\sum_{t=N}^{N+M}\mathcal R_T(s_S^{(t)},s_T^{(t)}). \end{equation} For the Run-To-Goal (Ants) game, following previous work on backdoor detection~\cite{nc,aeva,dong2021black,wang2020practical}, we apply median absolute deviation (MAD) outlier detection \text{MAD($\cdot$)} on $R_\text{sum}$ to determine whether (pseudo) trigger actions are found. Specifically, we first collect the negation of the target agent's accumulated reward against a dummy opponent agent within $M$ steps, repeated 500 times, as an array $R_{arr}$. Then, for each given $R_\text{sum}$, we calculate its anomaly index based on $R_{arr}$ using the MAD outlier detector. Following previous work~\cite{aeva}, we tag an $R_\text{sum}$ with anomaly index $\ge 4$ as an outlier, \textit{i.e.}, \begin{equation} R_S(t=N)=\left\{ \begin{aligned} R_+, & \quad\text{if}\quad \text{MAD}(R_\text{sum})\ge 4, \\ R_-, & \quad\text{otherwise.} \\ \end{aligned} \right. \end{equation} As for the other humanoid games, the criterion for determining (pseudo) trigger actions is whether the agent falls, since a Trojan humanoid is expected to fall in order to lose. When $R_{\text{sum}}$ is deemed an outlier, we say the TrojanSeeker successfully finds the trigger and give it a reward of $R_+=1000$; otherwise, we say the TrojanSeeker fails, with a penalty of $R_-=-1000$. The reward/penalty is given at the terminal state ($s_{S}^{(N)}$) and distributed to the rewards of former states by a discount factor $\gamma$. The setting of the reward/penalty values follows the configurations in~\cite{competitive}. \fakeparagraph{Environment Randomization.} We train the TrojanSeeker policy $\pi_{S}(\cdot|\theta_{S})$ by maximizing its cumulative rewards under our designed environment. Notably, during each training procedure, we keep the environment seed fixed, since the Trojan behaviors cannot be activated for all initial states (only $\approx 70\%$ of them), as illustrated in \cref{sec:intuition}; a different seed may represent a different game for $\pi_{S}(\cdot|\theta_{S})$. Due to such probabilistic behavior of the environments, we train a set of TrojanSeeker policies with different random seeds for the environment. Then, we calculate the proportion $\Pr(wins)$ of random seeds with a trigger detected. If $\Pr(wins)$ is larger than a threshold value $T_\text{bd}$ (\textit{e.g.}, 0.1), the target agent $\pi_{T}$ is deemed an infected agent.
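As an illustration, the anomaly test and the seed-level decision rule can be sketched as follows (a common MAD formulation with the usual consistency constant $1.4826$, not necessarily our exact implementation; \texttt{r\_arr} stands for the 500 reference values described above):

\begin{verbatim}
import numpy as np

# Sketch: MAD-based outlier test on R_sum and the Pr(wins) decision rule.
def anomaly_index(r_sum, r_arr):
    med = np.median(r_arr)
    mad = 1.4826 * np.median(np.abs(r_arr - med))  # consistency constant
    return abs(r_sum - med) / mad

def is_infected(per_seed_r_sums, r_arr, t_bd=0.1):
    wins = [anomaly_index(r, r_arr) >= 4 for r in per_seed_r_sums]
    return np.mean(wins) > t_bd  # proportion of seeds with a trigger found
\end{verbatim}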
\subsection{Trojan Mitigation}\label{sec:mitigation} Once we have identified the Trojan agent and its triggers, the next question is how to mitigate these triggers and purify the Trojan agent's policy $\pi_{T}(\cdot|\theta)$. We propose a practical unlearning-based approach to mitigate the Trojan policy. We leverage the collected malicious trajectories $\tau_{T}=\{s_T^{(0)},a_T^{(0)},s_T^{(1)},a_T^{(1)},\ldots\}$ from the Trojan agents to remove the backdoors. Specifically, we replace each action $a_{T}^{(t=n)}$ in $\tau_{T}$ to maximize the cumulative discounted reward, \textit{i.e.}, \begin{equation} \label{eq:miti} \hat a^{(n)}_{T}= \arg\max_{\hat a^{(n)}_T}\sum_{t=n}^\infty\gamma^t R(\hat s_{T}^{(t)}, \hat a_{T}^{(t)}), \end{equation} where $\hat a_T$ is the array of actions and $\hat s_T$ the corresponding states for each time step given by the environment, with $\hat s^{(n)}_T =s^{(n)}_T$. We optimize \cref{eq:miti} using policy gradient~\cite{sutton2000policy}. It is also feasible to leverage a benign agent (if available) to re-assign the value of $a_{T}^{(t)}$ by inferring on the state $s_{T}^{(t)}$ at time $t$. Finally, we re-train the target agent using behavior cloning~\cite{hussein2017imitation} with a mixed set of trajectories including both the purified trajectories $\hat{\tau}_{T}$ and the benign trajectories $\tau_{B}$ obtained through playing itself.
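The overall recipe can be summarized in a short sketch; here we use the benign-agent relabeling variant mentioned above in place of the policy-gradient optimization of \cref{eq:miti} (the trajectory format and the behavior-cloning routine are hypothetical placeholders):

\begin{verbatim}
# Sketch of the unlearning-based mitigation: purify malicious
# trajectories, then behavior-clone on purified + benign data.
def purify(trajectory, benign_policy):
    purified = []
    for state, _bad_action in trajectory:
        purified.append((state, benign_policy(state)))  # re-assign action
    return purified

def mitigate(trojan_trajs, benign_trajs, benign_policy, behavior_clone):
    data = [purify(t, benign_policy) for t in trojan_trajs] + benign_trajs
    return behavior_clone(data)  # retrain on the mixed trajectory set
\end{verbatim}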
\section{Experiments} We provide comprehensive experiments to evaluate the effectiveness of {\text{TrojanSeeker}}{}. The experimental setup is introduced in \cref{sec:exp_setup}. The major results of backdoor detection and mitigation on multiple agents and environments are shown in \cref{sec:results}. An interesting t-SNE plot of the trigger is visualized afterwards. In the end, we perform multiple ablation studies to further understand our approach. \subsection{Setup}\label{sec:exp_setup} \paragraph{Environments and Agents.} We evaluate {\text{TrojanSeeker}}{} against BackdooRL based on two types of agents (\textit{i.e.}, Humanoid and Ant). Three competitive environments are used following previous work~\cite{backdoorrl}, \textit{i.e.}, \begin{enumerate} \setlength\itemsep{0em} \item \textit{Run to Goal:} Two agents are initialized on a flat plane with two parallel finish lines. The agent that first reaches the finish line on its opposite side is the winner. Two types of agents are evaluated in this environment: ant agents and human agents. \item \textit{You Shall Not Pass:} A red agent and a blue agent are initialized face-to-face near a finish line. The blue agent aims to pass the finish line while the red one tries to prevent it from passing. The blue agent wins if it passes the finish line; otherwise, the red one wins. \item \textit{Sumo:} Two agents are placed in a bounded circular area facing one another. The agent which touches the other and remains standing until the other falls is the winner. Consistent with~\cite{backdoorrl}, we only use human agents in this environment. \end{enumerate} Please refer to \cref{sec:demo} for more detailed descriptions. \fakeparagraph{Evaluated Models.} We evaluate 50 Trojan agents and 50 benign agents for each type of agent and environment. Each Trojan agent is embedded with different random trigger actions. Following previous work~\cite{backdoorrl}, the Trojan agent is built with a Long Short-Term Memory (LSTM) architecture~\cite{lstm} to achieve both attack efficacy and stealth. By default, the trigger length is set to 25 and the backdoor is activated with $20\%$ probability. The benign agents are built using multi-layer perceptrons (MLP) or LSTM following previous work~\cite{competitive}. For each Trojan model, we inject $\geq 20\%$ poisonous trajectories to achieve the optimal attack efficacy. The TrojanSeeker policy $\pi_{S}(s|\theta_S)$ is built with a two-layer MLP where each layer has 64 neurons. Please refer to \cref{sec:configuration} for detailed configurations. \begin{figure}[!t] \centering \subfigure[Run-To-Goal (Ants)]{\includegraphics[width=0.45\linewidth]{img/fig5/ant.pdf}} \subfigure[Run-To-Goal (Humans)]{\includegraphics[width=0.45\linewidth]{img/fig5/human.pdf}} \subfigure[You-Shall-Not-Pass]{\includegraphics[width=0.45\linewidth]{img/fig5/ysnp.pdf}} \subfigure[Sumo (Humans)]{\includegraphics[width=0.45\linewidth]{img/fig5/sumo.pdf}} \caption{Backdoor detection using {\text{TrojanSeeker}}{}: the probability $\Pr(wins)$ of successfully finding a backdoor trigger ($y$-axis) vs. the number of iterations ($x$-axis), on four different games. The statistics are obtained from 500 runs with different environment random seeds. The solid lines represent the median success probability. For a benign agent, the proposed method never identifies a backdoor trigger, which is expected and optimal.} \label{fig:exp_1} \end{figure} \fakeparagraph{Hyper-parameters.} Due to the inherent difference in game rules, we vary $T_r$ and $N$ for different games but fix these hyper-parameters within the same game. We set $N=40$ in Run-To-Goal (Ants). For the other three environments with Humanoid agents, we set $N=10$, and the criterion for losing is that the agent falls down. The values are selected based on the empirical observations reported in \cref{sec:intuition} and in BackdooRL, as well as our observations on a held-out set of Trojan agents. We implement PPO following stable-baselines~\cite{raffin2019stable}. \subsection{Results}\label{sec:results} \begin{figure}[!t] \centering \subfigure[Run-To-Goal (Ants)]{\includegraphics[width=0.45\linewidth]{img/fig6/ant.pdf}} \subfigure[Run-To-Goal (Humans)]{\includegraphics[width=0.45\linewidth]{img/fig6/human.pdf}} \subfigure[You-Shall-Not-Pass]{\includegraphics[width=0.45\linewidth]{img/fig6/ysnp.pdf}} \subfigure[Sumo (Humans)]{\includegraphics[width=0.45\linewidth]{img/fig6/sumo.pdf}} \caption{The comparison of accumulated rewards of the Trojan agent against {\text{TrojanSeeker}}{} and against trigger agents. The solid lines represent the median value for each step.} \label{fig:exp_2} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{img/exp_com.pdf} \caption{The comparison in mitigation performance between our approach and Wang \textit{et al.} for different games.} \label{fig:exp_com} \end{figure} \paragraph{Backdoor Detection.} We first investigate whether {\text{TrojanSeeker}}{} can successfully find triggers to activate the Trojan agents. The results are shown in \cref{fig:exp_1}. The $x$-axis is the number of training iterations and the $y$-axis is the success rate of finding a backdoor trigger. We observe from the figure that {\text{TrojanSeeker}}{} can correctly identify the Trojan agents with at least a 46.7\% chance within 2000 iterations. The median success probability is over 60\% at iteration 2000.
For benign agents, {\text{TrojanSeeker}}{} cannot find any potential trigger actions, which is expected. By looking at the variance of the performance, we find that You-Shall-Not-Pass and Run-To-Goal (Ants) lead to higher uncertainty than Run-To-Goal (Humans). This is probably because the agents in both You-Shall-Not-Pass and Run-To-Goal (Ants) are much more competitive than the agents in Run-To-Goal (Humans). The statistics are obtained from 500 random seeds. The purpose of {\text{TrojanSeeker}}{} is to reverse the trigger actions, so we perform another set of experiments to compare the difference between the actions produced by {\text{TrojanSeeker}}{} and the true trigger actions. The results are shown in \cref{fig:exp_2}, where the $x$-axis is the number of steps and the $y$-axis is the accumulated reward of the Trojan agents after seeing the two action sequences. We can see that the Trojan agents degenerate after seeing both {\text{TrojanSeeker}}{}'s actions and the true trigger actions (by BackdooRL), and the results suggest that the actions produced by {\text{TrojanSeeker}}{} lead to similar consequences compared to the true trigger actions. In \cref{sec:ablation}, we perform an ablation study on the impact of the number of environment randomizations (see \cref{fig:seed}), and another study on the impact of the step length in the Observing Phase of the training (see \cref{fig:length_impact}). \fakeparagraph{Backdoor Mitigation.} We have shown that {\text{TrojanSeeker}}{} is able to detect the backdoor triggers. In this section, we present the results on backdoor mitigation in \cref{fig:exp_com}. Three agents are compared in the figure: (a) the original Trojan agent, (b) \textit{Wang et al.}, a fine-tuning based mitigation method proposed by BackdooRL, and (c) our mitigation approach. We use the same amount of samples to implement both the baseline and our approach. We find that our mitigation technique surpasses Wang \textit{et al.} in all the games and performs significantly better than the Trojan agent. For the Run-To-Goal (Ants) and You-Shall-Not-Pass games, both our method and Wang \textit{et al.} significantly improve the Trojan agents' winning rate. An interesting observation from Run-To-Goal (Ants) is that the mitigated Trojan agent performs even better than a benign agent. This is probably because the trigger (opponent) agent is more biased towards performing trigger actions and the mitigated Trojan agent becomes more resilient to these trigger actions. \begin{figure}[!t] \centering \subfigure[Run-To-Goal (Ants)]{\includegraphics[width=0.45\linewidth]{img/tnse/ant_tnse1.pdf}} \subfigure[Run-To-Goal (Humans)]{\includegraphics[width=0.45\linewidth]{img/tnse/human_tnse1.pdf}} \subfigure[You-Shall-Not-Pass]{\includegraphics[width=0.45\linewidth]{img/tnse/ysnp_tnse1.pdf}} \subfigure[Sumo (Humans)]{\includegraphics[width=0.45\linewidth]{img/tnse/sumo_tnse1.pdf}} \caption{t-SNE visualizations of the reversed triggers, true trigger actions and benign actions for different Mujoco games. The reversed trigger actions and benign actions are randomly selected.} \label{fig:exp_5} \end{figure} \subsection{Visualizing the Action Space} One interesting question is: \textit{what do the identified (pseudo) triggers look like, compared to the real trigger and benign actions?} We conduct t-SNE visualizations of the reversed triggers, the actual trigger and benign actions, shown in \cref{fig:exp_5}.
We find that the reversed triggers are highly separable from the benign actions in all four games, which further validates our hypothesis that there exist many possible action sequences that can trigger the Trojan policy. Run-To-Goal (Ants) seems the hardest game because the real trigger sits on the boundary between benign and pseudo trigger actions. \subsection{Ablation Study}\label{sec:ablation} We perform four studies to further understand our approach. \begin{figure}[!t] \centering \subfigure[\label{fig:seed}]{\includegraphics[width=0.43\linewidth]{img/output.pdf}}\quad\subfigure[\label{fig:exp_4}]{\includegraphics[width=0.45\linewidth]{img/a.pdf}} \caption{Ablation study: (a) The impact of the number of random environment seeds for {\text{TrojanSeeker}}{}. We run 1000 experiments over 50 models. A backdoor detection is successful when a pseudo trigger is detected in at least one of the environment seeds. The Observing Phase step length is 50. Choosing 3+ seeds is sufficient for the detector to reach a 100\% success rate. (b) The mitigation performance of {\text{TrojanSeeker}}{} with varying amounts of identified pseudo triggers. Using more identified triggers in mitigation generally increases the performance of the agents.} \end{figure} \fakeparagraph{The Impact of Environment Randomization.} We notice that the performance of an RL agent is related to the random seed, so we propose to run TrojanSeeker on a parallelism of randomized environments. \cref{fig:seed} shows its impact, where the $x$-axis is the number $K$ of random seeds and the $y$-axis is the backdoor detection accuracy. The accuracy is obtained over 50 different models. For each model, we run 1000 experiments with $K$ randomly chosen seeds. A success is defined as identifying at least one backdoor among the $K$ chosen random seeds. We observe that, with at least 3 seeds, TrojanSeeker can successfully detect all the backdoors. When it runs on only one random seed, its detection accuracy drops to $\sim 80\%$ for three of the games. \fakeparagraph{The Number of Pseudo Triggers used in Mitigation.} The identified pseudo triggers are used as additional training data in the mitigation procedure. We collect 10,000 benign trajectories for Run-To-Goal (Ants) and 100,000 for the other games. We train a randomly selected Trojan agent for 20 epochs using our mitigation approach and evaluate its winning rate against the trigger agent. The results are averaged over 10 runs and shown in \cref{fig:exp_4}. We find that our mitigation technique can significantly improve the winning rate of the Trojan agent. We also observe that the Run-To-Goal (Ants) game does not require many reversed triggers for mitigation; when the number of samples is larger than 1000, the mitigation performance degrades. For the other games, 1500 samples are sufficient to achieve the optimal performance.
\begin{figure}[!t] \centering \subfigure[\label{fig:length_impact}]{\includegraphics[width=0.45\linewidth]{img/human_l.pdf}}\quad\subfigure[\label{fig:app_1}]{\includegraphics[width=0.45\linewidth]{img/app_1/length_impact.pdf}} \caption{Ablation study: (a) The impact of the number of steps in the Observing Phase during training. A successful detection is defined as finding at least one backdoor over 60 random seeds. The detection accuracy is computed over 50 models. (b) The performance of {\text{TrojanSeeker}}{} given a varying length of the real trigger actions. $\Pr(wins)$ is the percentage of 500 random seeds that allow {\text{TrojanSeeker}}{} to identify a backdoor trigger.} \end{figure} \fakeparagraph{The Number of Steps in the Observing Phase.} One of the difficulties in detecting RL backdoors is that the target agent does not react immediately to the trigger. That is why we set up an Observing Phase during training. We show the backdoor detection accuracy in \cref{fig:length_impact}, an experiment on a Humanoid agent with a varying step length in the Observing Phase. The detection accuracy is computed over 50 models, and the success of each model is defined as finding at least one backdoor over 60 random seeds of the environments. We see that by increasing the number of steps in the Observing Phase, the backdoor detection accuracy increases and reaches its optimum at 50 steps. The same conclusion is also observed for ant agents. \fakeparagraph{The Impact of the Backdoor Trigger Length.} The length of the true trigger actions is pre-determined by the attacker. According to \cite{backdoorrl}, the trigger length that achieves the optimal attack efficacy may differ from the trigger length defined by the attacker. For example, by default, the trigger length used by BackdooRL is 25; however, for ant agents, the best trigger actions used by BackdooRL have a length of 40. So we also conduct experiments to evaluate {\text{TrojanSeeker}}{} with varying lengths of attacker-defined trigger actions. The results are shown in \cref{fig:app_1}. {\text{TrojanSeeker}}{} performs effectively with different true trigger lengths; however, it performs better with longer triggers. \section{Conclusion} We propose the first approach to backdoor detection in reinforcement learning agents. We reveal the existence of pseudo triggers that also cause malicious behaviors of the Trojan agent. We further propose TrojanSeeker, which detects potential trigger actions through reinforcement learning, together with a mitigation solution. Extensive experiments demonstrate the efficacy of TrojanSeeker across multiple agents and environments. \newpage
\section*{Acknowledgements} We thank Akhilesh Deepak Gotmare, Amrita Saha, Junnan Li, and Chen Xing for valuable discussions. We thank Kathy Baxter for the ethical review. We also thank our anonymous reviewers for their insightful feedback on our paper. \section{Conclusion} We have presented CodeT5, a pre-trained encoder-decoder model that incorporates the token type information from code. We propose a novel identifier-aware pre-training objective to better leverage the identifiers, and propose a bimodal dual generation task to learn a better NL-PL alignment using code and its comments. Our unified model can support both code understanding and generation tasks and allows for multi-task learning. Experiments show that CodeT5 significantly outperforms all prior work in most CodeXGLUE tasks. Further analysis also reveals its better code comprehension capability across various programming languages. \section{Results and Analysis} In this section, we compare CodeT5 with SOTA models on a broad set of CodeXGLUE downstream tasks (\cref{sec:downstream}), and investigate the effects of our bimodal dual generation and multi-task learning (\cref{sec:dual_gen_multi_task}), followed by a detailed analysis of the proposed identifier-aware pre-training (\cref{sec:identifier}). \subsection{CodeXGLUE Downstream Tasks}\label{sec:downstream} We evaluate two sizes of our model: CodeT5-small and CodeT5-base, which are pre-trained with identifier-aware denoising. In addition, we consider the model that continues to train with bimodal dual generation (dual-gen) and show the results with multi-task fine-tuning. The results of all comparison models are obtained from their original papers and also the CodeXGLUE paper~\cite{DBLP:journals/corr/abs-2102-04664}. \vspace{-0.5em} \paragraph{Code Summarization.} We show code summarization results in smoothed BLEU-4 on six PLs in Table~\ref{table:summarize}. We observe that all our model variants significantly outperform prior work with either an encoder-only (RoBERTa, CodeBERT, DOBF) or encoder-decoder framework (PLBART). Moreover, the salient performance gap between these two groups of models confirms that encoder-only frameworks are suboptimal for generation tasks. Compared to the SOTA encoder-decoder model PLBART, we find that even our CodeT5-small yields better overall scores (also on Python and Java), given that our model is much smaller (60M vs. 140M) and PLBART is pre-trained with much larger Python and Java data (>100 times). We attribute such improvement to our identifier-aware denoising pre-training and better employment of bimodal training data\footnote{Apart from bimodal dual generation, we concatenate NL and PL for training while PLBART deals with them separately.}. By increasing the model size, our CodeT5-base boosts the overall performance by over 1.2 absolute points over PLBART. \vspace{-0.5em} \paragraph{Code Generation.} We compare CodeT5 with GPT-style models and PLBART in Table~\ref{table:concode}. Our CodeT5-small outperforms all decoder-only models and also the SOTA PLBART, which again confirms the superiority of encoder-decoder models at generating code snippets. Moreover, our CodeT5-base further significantly pushes the SOTA results across three metrics. Particularly, it achieves around 4.7 points of improvement on CodeBLEU over PLBART, indicating that our CodeT5 can better comprehend the code syntax and semantics with the help of identifier-aware pre-training.
\vspace{-0.5em} \paragraph{Code-to-Code Generation Tasks.} We compare on two code-to-code generation tasks, code translation and code refinement, in Table~\ref{table:code_translation}, and further consider one naive copy baseline that copies the source input as the target prediction. In the code translation task, our CodeT5-small outperforms most baselines and obtains comparable results with PLBART, which shows the advantages of encoder-decoder models in the code-to-code generation setting. Our CodeT5-base further achieves consistent improvements over PLBART across various metrics for translating from Java to C\# and vice versa. We show one of CodeT5's outputs of translating C\# to Java in Figure~\ref{fig:translate_case}. In this case, despite the poor BLEU score, CodeT5 is able to generate a function that preserves the same functionality and even has better readability compared to the ground truth. This reveals that CodeT5 has a good generalization ability instead of memorizing and repeating what it has seen before. On the other hand, it also suggests that the BLEU score is not a perfect evaluation metric for code generation tasks, where a higher score can sometimes instead reflect the problematic copy issues of neural models. Another code-to-code generation task is code refinement, a challenging task that requires detecting which parts of the code are buggy and fixing them by generating a bug-free code sequence. Due to the large overlap of source and target code, even the naive copy approach yields very high BLEU scores but zero exact matches. Therefore, we focus on the exact match (EM) metric to evaluate this task. As shown in Table~\ref{table:code_translation}, we observe that EM scores for the small data are consistently higher than for the medium one, indicating that it is harder to fix bugs in a longer code snippet. Our CodeT5-base significantly outperforms all baselines on EM and especially boosts over 4.8 points for the more challenging medium task (13.96 vs. GraphCodeBERT's 9.10), reflecting its strong code understanding capability. \input{figures/translate_case} \paragraph{Understanding Tasks.} We compare on two understanding tasks, defect detection and clone detection, in Table~\ref{table:classification}. Specifically, we generate the binary labels as a unigram sequence from the decoder for the defect detection task, while for the clone detection task, we first obtain the sequence embedding of each code snippet using the last decoder state following~\citet{DBLP:conf/acl/LewisLGGMLSZ20} and then predict the labels by measuring their similarity. Both CodeT5-small and CodeT5-base outperform all baselines on the defect detection task, and CodeT5-base yields a 2.6 accuracy point improvement over PLBART. For the clone detection task, our CodeT5 models achieve comparable results to the SOTA GraphCodeBERT and PLBART models. These results demonstrate that, with an encoder-decoder framework, our CodeT5 can still be adapted well to understanding tasks. \input{tables/ablation} \subsection{Effects of Bimodal Dual Generation and Multi-task Learning} \label{sec:dual_gen_multi_task} We examine the effects of bimodal dual generation at pre-training and multi-task learning at fine-tuning. The bimodal pre-training brings consistent improvements for code summarization and generation tasks on both CodeT5-small and CodeT5-base. However, this pre-training task does not help, and sometimes even slightly hurts, the performance on PL-PL generation and understanding tasks.
We anticipate this is because bimodal dual generation learns a better alignment between PL and NL that naturally benefits the former tasks involving both PL and NL. As a side effect, this objective could bias the model towards the PL-NL tasks and affect its performance on PL-PL tasks. Multi-task learning generally improves most downstream tasks except code translation and defect detection. Particularly, it largely boosts the performance on code summarization, which is not surprising as code summarization takes up the largest portion of sub-tasks (six out of thirteen) and thereby benefits the most from multi-task learning. Besides, we observe that multi-task learning consistently improves the performance of code refinement, which might benefit from the joint training on both the small and medium refinement data. Another possible reason is that multi-task training with defect detection enables the model to better comprehend the code semantics for bug detection, which is also a necessary intermediate step for code refinement. \subsection{Analyzing Identifier-aware Pre-training}\label{sec:identifier} We provide an ablation study to examine the contribution of each component in our identifier-aware objective. Specifically, we compare the performance of our CodeT5-small on four selected tasks by ablating each of the three objectives: masked span prediction (MSP), identifier tagging (IT), and masked identifier prediction (MIP). As shown in Table~\ref{table:ablation}, we observe that removing any one of the objectives generally reduces the performance on all tasks, indicating that all objectives contribute to the better code understanding of our CodeT5. However, the effect of each objective differs across tasks. Specifically, removing MSP largely reduces the performance on all generation tasks but instead increases the defect detection performance. This shows that masked span prediction is more crucial for capturing syntactic information for generation tasks. On the contrary, removing MIP hurts the defect detection task the most, indicating that it might focus more on code semantic understanding. By combining these objectives, our CodeT5 can better capture both syntactic and semantic information from code. \input{figures/generation_case} We further provide outputs from CodeT5 and its variant without MIP and IT on code generation in Figure~\ref{fig:generation_case}. We observe that CodeT5 can correctly generate the exact function, while the model without MIP and IT fails to recover the identifiers ``s2'' and ``hasField''. This shows that our identifier-aware denoising pre-training can better distinguish and leverage the identifier information. \input{tables/IP_S2S_comp} We also investigate the identifier tagging performance and find it achieves over 99\% F1 for all PLs, showing that our CodeT5 can confidently distinguish identifiers in code. We then check whether the MSP and MIP tasks would conflict, as they employ the same sentinel tokens for masking. In identifier masking, all occurrences of one unique identifier are replaced with the same sentinel token, resulting in a many-to-one mapping compared to the one-to-one mapping in span prediction. We compare models pre-trained with either MSP or MIP alone, or with both, on these two tasks in Table~\ref{table:IP_S2S}. We report the prediction accuracy and also the ratio of how often they can generate the same number of predictions as the sentinel tokens.
We observe that pre-training with only either MIP or MSP biases the model towards that task, achieving poor accuracy and a higher mismatch in the number of predictions when applied to the other task. Interestingly, we find that the MIP-only objective can better recover the correct number of predictions in the MSP task than MSP-only does for the MIP task, meaning that it is easier to adapt from the many-to-one mapping to the one-to-one mapping than the other way around. Finally, combining them helps our model make a good trade-off between both tasks. \section{Experimental Setup} \input{tables/dataset} \subsection{Pre-training Dataset} We follow~\citet{DBLP:conf/emnlp/FengGTDFGS0LJZ20} to employ CodeSearchNet~\cite{DBLP:journals/corr/abs-1909-09436} to pre-train CodeT5, which consists of six PLs with both unimodal and bimodal data. Apart from that, we additionally collect two datasets of C/CSharp from BigQuery\footnote{\url{https://console.cloud.google.com/marketplace/details/github/github-repos}} to ensure that all downstream tasks have overlapping PLs with the pre-training data. In total, we employ around 8.35 million instances for pre-training. Table~\ref{table:data} shows some basic statistics. To obtain the identifier labels from code, we leverage tree-sitter\footnote{\url{https://tree-sitter.github.io/tree-sitter/}} to convert the PL into an abstract syntax tree and then extract its node type information. We filter out reserved keywords for each PL from its identifier list. We observe that PLs have different identifier rates: Go has the lowest rate of $19\%$ and Ruby has the highest rate of $32\%$. \subsection{Code-specific Tokenizer} Tokenization is a key ingredient for the success of pre-trained language models like BERT and GPT. They often employ a Byte-Pair Encoding (BPE) tokenizer~\cite{DBLP:conf/acl/SennrichHB16a} to alleviate Out-of-Vocabulary (OoV) issues. Specifically, we train a Byte-level BPE tokenizer following~\citet{radford2019language} and set the vocabulary size to 32,000 as in T5. We add additional special tokens (\texttt{[PAD]}, \texttt{[CLS]}, \texttt{[SEP]}, \texttt{[MASK0]}, ..., \texttt{[MASK99]}). This tokenizer is trained on all of our pre-training data, with non-printable characters and low-frequency tokens (occurring <3 times) filtered. We compare it with T5's default tokenizer and find that our tokenizer largely reduces the length of tokenized code sequences, by $30\%$ - $45\%$ on downstream tasks. This accelerates the training and especially benefits generation tasks due to the shorter sequences to predict. We also spot a severe problem when applying T5's default tokenizer to source code: it encodes some common code tokens such as the brackets [`\{', `\}'] into unknown tokens.
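As a rough illustration, such a tokenizer can be trained with the HuggingFace \texttt{tokenizers} library along the following lines (a sketch under an assumed corpus path; our exact filtering pipeline is more involved):

\begin{verbatim}
from tokenizers import ByteLevelBPETokenizer

# Sketch: train a byte-level BPE tokenizer with a 32k vocabulary and
# the special tokens listed above; min_frequency drops rare tokens.
special = ["[PAD]", "[CLS]", "[SEP]"] + [f"[MASK{i}]" for i in range(100)]
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["pretrain_corpus.txt"],   # hypothetical pre-processed corpus
    vocab_size=32000,
    min_frequency=3,
    special_tokens=special,
)
tokenizer.save_model("codet5_tokenizer")
\end{verbatim}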
\input{tables/NL_PL_gen} \subsection{Downstream Tasks and Metrics} We cover most generation and understanding tasks in the CodeXGLUE benchmark~\cite{DBLP:journals/corr/abs-2102-04664} and employ the provided public datasets and the same data splits for all these tasks. We first consider two cross-modal generation tasks. \textbf{Code summarization} aims to summarize a function-level code snippet into English descriptions. The dataset consists of six PLs including Ruby, JavaScript, Go, Python, Java, and PHP from CodeSearchNet~\cite{DBLP:journals/corr/abs-1909-09436}. We employ smoothed BLEU-4~\cite{DBLP:conf/coling/LinO04} to evaluate this task. \textbf{Code generation} is the task of generating a code snippet based on NL descriptions. We employ the Concode data~\cite{DBLP:conf/emnlp/IyerKCZ18} in Java, where the input contains both NL texts and class environment contexts, and the output is a function. We evaluate it with BLEU-4, exact match (EM) accuracy, and CodeBLEU~\cite{DBLP:journals/corr/abs-2009-10297}, which considers syntactic and semantic matches based on the code structure in addition to the n-gram match. Besides, we consider two code-to-code generation tasks. \textbf{Code translation} aims to migrate legacy software from one PL to another; we focus on translating functions from Java to CSharp and vice versa. \textbf{Code refinement} aims to convert a buggy function into a correct one. We employ two Java datasets provided by~\citet{DBLP:journals/tosem/TufanoWBPWP19} with various function lengths: small (fewer than 50 tokens) and medium (50-100 tokens). We use BLEU-4 and exact match to evaluate them. We also investigate how CodeT5 performs on two understanding-based tasks. The first one is \textbf{defect detection}, which aims to predict whether a code is vulnerable to software systems or not. We use the C dataset provided by \citet{DBLP:conf/nips/ZhouLSD019} for this experiment. The second task is \textbf{clone detection}, which aims to measure the similarity between two code snippets and predict whether they have the same functionality. We experiment with the Java data provided by \citet{DBLP:conf/wcre/WangLM0J20}. We employ F1 score and accuracy for evaluating these two tasks, respectively. In total, our CodeT5 supports six tasks and fourteen sub-tasks in CodeXGLUE with a unified encoder-decoder model. \subsection{Comparison Models} We compare CodeT5 with state-of-the-art (SOTA) pre-trained models that can be categorized into three types: encoder-only, decoder-only, and encoder-decoder models. As \textbf{encoder-only} models, we consider RoBERTa~\cite{DBLP:journals/corr/abs-1907-11692}, RoBERTa (code) trained with masked language modeling (MLM) on code, CodeBERT~\cite{DBLP:conf/emnlp/FengGTDFGS0LJZ20} trained with both MLM and replaced token detection~\cite{DBLP:conf/iclr/ClarkLLM20}, GraphCodeBERT~\cite{DBLP:journals/corr/abs-2009-08366} using data flow from code, and DOBF~\cite{DBLP:journals/corr/abs-2102-07492} trained with the identifier deobfuscation objective. Note that although DOBF employs a Seq2Seq model during pre-training, it only aims to train a better encoder for downstream tasks without exploring the potential benefit of the pre-trained decoder. For \textbf{decoder-only} models, we compare GPT-2~\cite{radford2019language} and its adaptations to the code domain, CodeGPT-2 and CodeGPT-adapted. The difference is that the latter utilizes a GPT-2 checkpoint for model initialization while the former is trained from scratch. As \textbf{encoder-decoder} models, the current SOTA model for the CodeXGLUE benchmark is PLBART~\cite{DBLP:journals/corr/abs-2103-06333}, based on the BART~\cite{DBLP:conf/acl/LewisLGGMLSZ20} architecture. For pre-training data, most of these models employ CodeSearchNet~\cite{DBLP:journals/corr/abs-1909-09436} except DOBF and PLBART. DOBF is pre-trained on 7.9M Java and 3.6M Python files from BigQuery, while PLBART employs much larger data with 470M Python and 210M Java functions, and 47M NL posts from StackOverflow.
\input{tables/PL_PL_understand} \subsection{Model Configurations} We build CodeT5 based on Huggingface's T5~\cite{DBLP:journals/jmlr/RaffelSRLNMZLL20} PyTorch implementation\footnote{\url{https://huggingface.co/}} and employ two sizes: CodeT5-small (60M) and CodeT5-base (220M). We set the maximum source and target sequence lengths to $512$ and $256$, respectively. We use mixed precision (FP16) to accelerate the pre-training. We set the batch size to $1024$ and employ a peak learning rate of 2e-4 with linear decay. We pre-train the model with the denoising objective for $100$ epochs and with bimodal dual training for a further $50$ epochs on a cluster of $16$ NVIDIA A100 GPUs with $40$G memory. The total training time for CodeT5-small and CodeT5-base is $5$ and $12$ days, respectively. In the fine-tuning phase, we find that the tasks in CodeXGLUE~\cite{DBLP:journals/corr/abs-2102-04664} are quite sensitive to some hyperparameters such as the learning rate, the number of training steps, and the batch size. We conduct a grid search and select the best parameters based on the validation set. In multi-task learning, we cover all downstream tasks except clone detection. \section*{Broader Impact and Ethical Consideration} Our work generally belongs to NLP applications for software intelligence. With the goal of improving the development productivity of software with machine learning methods, software intelligence research has attracted increasing attention in both academia and industry over the last decade. Software code intelligence techniques can help developers to reduce tedious repetitive workloads, enhance the programming quality and improve the overall software development productivity. This would considerably decrease their working time and could also potentially reduce the computation and operational costs, as a bug might degrade the system performance or even crash the entire system. Our work addresses the fundamental challenge of software code pre-training; our study covers a wide range of code intelligence applications in the software development lifecycle, and the proposed CodeT5 method achieves state-of-the-art performance on many of the benchmark tasks, showing its great potential benefit towards this goal. We further discuss the ethical considerations of training CodeT5 and the potential risks when applying it to real-world downstream applications: \paragraph{Dataset bias.} The training datasets in our study are source code, including user-written comments, from open-source GitHub repositories and are publicly available; they are not tied to any specific application. However, it is possible that these datasets encode some stereotypes like race and gender from the text comments or even from the source code, such as variable, function and class names. As such, social biases would be intrinsically embedded into the models trained on them. As suggested by~\citet{DBLP:journals/corr/abs-2107-03374}, interventions such as filtration or modulation of generated outputs may help to mitigate these biases in the code corpus. \paragraph{Computational cost.} Our model pre-training requires non-trivial computational resources, though we have tried our best to carefully design our experiments to avoid unnecessary computation costs. In fact, compared to the recent large-scale language model Codex~\cite{DBLP:journals/corr/abs-2107-03374}, our CodeT5-base has a much smaller model size of 220M than theirs of 12B ($\sim55\times$).
In addition, we experiment on Google Cloud Platform, which purchases carbon credits to reduce its carbon footprint: {\em e.g.,}\xspace training CodeT5-base produced around 49.25 kg CO\textsubscript{2}, which was totally offset by the provider. Furthermore, we release our pre-trained models publicly to avoid repeated training for the code intelligence research community. \paragraph{Automation bias.} As CodeT5 can be deployed to provide coding assistance such as code generation for aiding developers, the automation bias of machine learning systems should be carefully considered, especially for developers who tend to over-rely on the model-generated outputs. Sometimes these systems might produce functions that superficially appear correct but do not actually align with the developer's intent. If developers unintentionally adopt these incorrect code suggestions, it might cost them much more time on debugging and even lead to significant safety issues. We suggest that practitioners using CodeT5 always bear in mind that its generation outputs should be taken only as references, which require domain experts for further correctness and security checking. \paragraph{Security implications.} We train CodeT5 on existing code corpora including CodeSearchNet~\cite{DBLP:journals/corr/abs-1909-09436} and a small fraction of Google BigQuery, both of which were originally collected from public GitHub repositories. Pre-trained models might encode some sensitive information ({\em e.g.,}\xspace personal addresses or identification numbers) from the training data. Though we have conducted multiple rounds of data cleaning to mitigate this before training our models, it is still possible that some sensitive information cannot be completely removed. Besides, due to the non-deterministic nature of generation models like CodeT5, it might produce some vulnerable code that harmfully affects the software, and it could even benefit more advanced malware development when deliberately misused. \section{Introduction} Pre-trained language models such as BERT~\cite{DBLP:conf/naacl/DevlinCLT19}, GPT~\cite{radford2019language}, and T5~\cite{DBLP:journals/jmlr/RaffelSRLNMZLL20} have greatly boosted performance in a wide spectrum of natural language processing (NLP) tasks. They typically employ a pre-train then fine-tune paradigm that aims to derive generic language representations by self-supervised training on large-scale unlabeled data, which can be transferred to benefit multiple downstream tasks, especially those with limited data annotation. Inspired by their success, there have been many recent attempts to adapt these pre-training methods to programming language (PL)~\cite{DBLP:conf/sigsoft/SvyatkovskiyDFS20,DBLP:conf/icml/KanadeMBS20,DBLP:conf/emnlp/FengGTDFGS0LJZ20}, showing promising results on code-related tasks. \input{figures/finetune_tasks} However, despite their success, most of these models rely on either an encoder-only model similar to BERT~\cite{DBLP:conf/sigsoft/SvyatkovskiyDFS20,DBLP:conf/emnlp/FengGTDFGS0LJZ20} or a decoder-only model like GPT~\cite{DBLP:conf/icml/KanadeMBS20}, which is suboptimal for generation and understanding tasks, respectively. For example, CodeBERT~\cite{DBLP:conf/emnlp/FengGTDFGS0LJZ20} requires an additional decoder when applied to the code summarization task, where this decoder cannot benefit from the pre-training. Besides, most existing methods simply employ conventional NLP pre-training techniques on source code by regarding it as a sequence of tokens like NL.
This largely ignores the rich structural information in code, which is vital to fully comprehend the code semantics. In this work, we present CodeT5, a pre-trained encoder-decoder model that considers the token type information in code. Our CodeT5 builds on the T5 architecture~\cite{DBLP:journals/jmlr/RaffelSRLNMZLL20}, which employs denoising sequence-to-sequence (Seq2Seq) pre-training and has been shown to benefit both understanding and generation tasks in natural language. In addition, we propose to leverage the developer-assigned identifiers in code. When writing programs, developers tend to employ informative identifiers to make the code more understandable, so that these identifiers generally preserve rich code semantics, {\em e.g.,}\xspace the ``binarySearch'' identifier in Figure~\ref{fig:pretrain_task} directly indicates its functionality. To fuse such code-specific knowledge, we propose a novel identifier-aware objective that trains the model to distinguish which tokens are identifiers and to recover them when they are masked. Furthermore, we propose to leverage the code and its accompanying comments to learn a better NL-PL alignment. Developers often provide documentation for programs to facilitate better software maintenance~\cite{DBLP:conf/sigdoc/SouzaAO05}, so that such PL-NL pairs are widely available in most source code. Specifically, we regard the NL$\rightarrow$PL generation and PL$\rightarrow$NL generation as dual tasks and simultaneously optimize the model on them. We pre-train CodeT5 on the CodeSearchNet data~\cite{DBLP:journals/corr/abs-1909-09436} following~\cite{DBLP:conf/emnlp/FengGTDFGS0LJZ20}, which consists of both unimodal (PL-only) and bimodal (PL-NL) data on six PLs. In addition to that, we further collect extra data of C/C\# from open-source GitHub repositories. We fine-tune CodeT5 on most tasks in the CodeXGLUE benchmark~\cite{DBLP:journals/corr/abs-2102-04664}, including two understanding tasks, code defect detection and clone detection, and generation tasks such as code summarization, generation, translation, and refinement. As shown in Figure~\ref{fig:finetune_task}, we also explore multi-task learning to fine-tune CodeT5 on multiple tasks at a time, using a task control code as the source prompt. \noindent In summary, we make the following contributions: \begin{itemize} \vspace{-0.5em} \itemsep0em \item We present one of the first unified encoder-decoder models, CodeT5, to support both code-related understanding and generation tasks; it also allows for multi-task learning. \item We propose a novel identifier-aware pre-training objective that considers the crucial token type information (identifiers) from code. Besides, we propose to leverage the NL-PL pairs that are naturally available in source code to learn a better cross-modal alignment. \item Extensive experiments show that CodeT5 yields state-of-the-art results on the fourteen sub-tasks in CodeXGLUE. Further analysis shows that our CodeT5 can better capture the code semantics with the proposed identifier-aware pre-training, and that bimodal dual generation primarily benefits NL$\leftrightarrow$PL tasks. \end{itemize} \section{CodeT5} Our CodeT5 builds on an encoder-decoder framework with the same architecture as T5~\cite{DBLP:journals/jmlr/RaffelSRLNMZLL20}. It aims to derive generic representations for programming language (PL) and natural language (NL) via pre-training on unlabeled source code.
As illustrated in Figure~\ref{fig:pretrain_task}, we extend the denoising Seq2Seq objective in T5 by proposing two identifier tagging and prediction tasks that enable the model to better leverage the token type information from PL, namely the identifiers assigned by developers. To improve the NL-PL alignment, we further propose a bimodal dual learning objective for a bidirectional conversion between NL and PL. In the following, we introduce how CodeT5 encodes PL and NL inputs~(\cref{model:input}) and our proposed identifier-aware pre-training tasks (\cref{model:pretrain}), followed by the fine-tuning with task-specific transfer learning and multi-task training~(\cref{model:finetune}). \subsection{Encoding NL and PL}\label{model:input} At the pre-training stage, our model receives either PL-only or NL-PL inputs, depending on whether the code snippet has accompanying NL descriptions or not. For the NL-PL bimodal inputs, we concatenate them into a sequence with a delimiter token \texttt{[SEP]} and represent the whole input sequence in the format $\mathbf{x}$ $=$ (\texttt{[CLS]}, $w_1,...,w_n$, \texttt{[SEP]}, $c_1,...,c_m$, \texttt{[SEP]}), where $n$ and $m$ denote the number of NL word tokens and PL code tokens, respectively. The NL word sequence is empty for PL-only unimodal inputs. In order to capture more code-specific features, we propose to leverage token type information from code. We focus on the type of identifiers ({\em e.g.,}\xspace function names and variables) as they are one of the most PL-agnostic features and preserve rich code semantics. Specifically, we convert the PL segment into an Abstract Syntax Tree (AST) and extract the node types for each code token. Finally, we construct a sequence of binary labels $\mathbf{y} \in {\{0,1\}^m}$ for the PL segment, where each $y_i\in \{0, 1\}$ represents whether the code token $c_i$ is an identifier or not.
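For illustration, the binary labels $\mathbf{y}$ can be derived with the \texttt{tree-sitter} Python bindings roughly as follows (a sketch assuming a pre-compiled grammar object; grammar loading and reserved-keyword filtering are omitted):

\begin{verbatim}
from tree_sitter import Parser

# Sketch: label each AST leaf token as identifier (1) or not (0).
# java_language is assumed to be a pre-compiled tree-sitter grammar.
def identifier_labels(code, java_language):
    parser = Parser()
    parser.set_language(java_language)
    src = bytes(code, "utf8")
    tree = parser.parse(src)
    tokens, labels = [], []

    def walk(node):
        if node.child_count == 0:  # leaf node = one code token
            tokens.append(src[node.start_byte:node.end_byte].decode("utf8"))
            labels.append(1 if node.type == "identifier" else 0)
        for child in node.children:
            walk(child)

    walk(tree.root_node)
    return tokens, labels
\end{verbatim}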
We describe the masked span prediction loss as: \begin{equation}\label{eq:S2S} \vspace{-0.5em} \small \hspace{-0.8em} \mathcal{L}_{MSP} (\theta) = \sum_{t = 1}^{k} \hspace{-0.5em} -\log P_{\theta} (x^{\text{mask}}_t| \mathbf{x}^{\backslash \text{mask}}, \mathbf{x}^{\text{mask}}_{<t}), \normalsize \end{equation} \noindent where $\theta$ are the model parameters, $\mathbf{x}^{\backslash \text{mask}}$ is the masked input, $\mathbf{x}^{\text{mask}}$ is the masked sequence to predict from the decoder with $k$ denoting the number of tokens in $\mathbf{x}^{\text{mask}}$, and $\mathbf{x}^{\text{mask}}_{<t}$ is the span sequence generated so far. To fuse more code-specific structural information (the identifier node type in the AST) into the model, we propose two additional tasks, \textit{Identifier Tagging (IT)} and \textit{Masked Identifier Prediction (MIP)}, to complement the denoising pre-training. \paragraph{$\bullet$~Identifier Tagging (IT)} It aims to inform the model whether a code token is an identifier or not, in a similar spirit to syntax highlighting in some developer-aided tools. As shown in Figure~\ref{fig:pretrain_task} (b), we map the final hidden states of the PL segment at the CodeT5 encoder into a sequence of probabilities $\mathbf{p}=(p_1, ...,p_m)$, and compute a binary cross entropy loss for sequence labeling: \begin{equation}\label{eq:it} \small \vspace{-0.3em} \mathcal{L}_{IT} (\theta_{e}) = \sum_{i=1}^m \hspace{-0.2em} -[y_i \log p_i + (1-y_i) \log (1-p_i)], \normalsize \end{equation} \noindent where $\theta_e$ are the encoder parameters. Note that by casting the task as a sequence labeling problem, the model is expected to capture the code syntax and the data flow structures of the code. \paragraph{$\bullet$~Masked Identifier Prediction (MIP)} Different from the random span masking in MSP, we mask all identifiers in the PL segment and employ a unique sentinel token for all occurrences of one specific identifier. In the field of software engineering, this is called \emph{obfuscation}, since changing identifier names does not impact the code semantics. Inspired by~\citet{DBLP:journals/corr/abs-2102-07492}, we arrange the unique identifiers with the sentinel tokens into a target sequence $\mathbf{I}$ as shown in Figure~\ref{fig:pretrain_task} (c). We then predict it in an auto-regressive manner: \begin{equation}\label{eq:IP} \small \vspace{-0.5em} \mathcal{L}_{MIP} (\theta) = \sum_{j = 1}^{|I|} -\log P_{\theta}(I_j| \mathbf{x}^{\backslash \mathbf{I}}, \mathbf{I}_{<j}), \normalsize \vspace{-0.5em} \end{equation} \noindent where $\mathbf{x}^{\backslash \mathbf{I}}$ is the masked input. Note that \emph{deobfuscation} is a more challenging task that requires the model to comprehend the code semantics based on obfuscated code and to link the occurrences of the same identifiers together. We alternately optimize these three losses with equal probability, which constitutes our proposed identifier-aware denoising pre-training.
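As an illustration of Eq.~\ref{eq:it}, the minimal PyTorch sketch below scores per-token identifier probabilities against the binary labels $\mathbf{y}$; the single linear head on top of the encoder states is our assumption, since the head architecture is not pinned down above, and the loss is mean-reduced rather than summed.
\begin{verbatim}
import torch
import torch.nn as nn

class IdentifierTagger(nn.Module):
    """Identifier-tagging head: maps the encoder states of the PL
    segment to per-token logits and computes the binary cross-entropy
    of the IT objective (mean-reduced here)."""
    def __init__(self, hidden_size):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 1)

    def forward(self, encoder_states, labels):
        # encoder_states: (batch, m, hidden); labels: (batch, m) in {0,1}
        logits = self.proj(encoder_states).squeeze(-1)
        return nn.functional.binary_cross_entropy_with_logits(
            logits, labels.float())
\end{verbatim}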
\paragraph{Bimodal Dual Generation.} In the pre-training phase, the decoder only sees discrete masked spans and identifiers, which is disparate from the downstream tasks where the decoder needs to generate either fluent NL texts or syntactically correct code snippets. To close the gap between pre-training and fine-tuning, we propose to leverage the NL-PL bimodal data to train the model for a bidirectional conversion as shown in Figure~\ref{fig:pretrain_task}~(d). Specifically, we regard the NL$\rightarrow$PL generation and PL$\rightarrow$NL generation as dual tasks and simultaneously optimize the model on them. For each NL-PL bimodal datapoint, we construct two training instances with reverse directions and add language ids ({\em e.g.,}\xspace <java> and <en> for Java PL and English NL, respectively). This operation can also be seen as a special case of T5's span masking, obtained by masking either the full NL or the full PL segment from the bimodal inputs. This task aims to improve the alignment between the NL and PL counterparts. \subsection{Fine-tuning CodeT5} \label{model:finetune} After pre-training on large-scale unlabeled data, we adapt CodeT5 to downstream tasks via either task-specific transfer learning or multi-task learning. \paragraph{Task-specific Transfer Learning: Generation vs. Understanding Tasks.} Code-related tasks can be categorized into generation and understanding tasks. For the former, our CodeT5 can be naturally adapted with its Seq2Seq framework. For understanding tasks, we investigate two ways: either generating the label as a unigram target sequence \cite{DBLP:journals/jmlr/RaffelSRLNMZLL20}, or predicting it from the vocabulary of class labels based on the last decoder hidden state following~\citet{DBLP:conf/acl/LewisLGGMLSZ20}. \paragraph{Multi-task Learning.} We also explore a multi-task learning setting by training a shared model on multiple tasks at a time. Multi-task learning reduces computation cost by reusing most of the model weights across tasks and has been shown to improve model generalization in NL pre-training~\cite{DBLP:conf/acl/LiuHCG19}. We follow~\citet{DBLP:journals/jmlr/RaffelSRLNMZLL20} in employing the same unified model for all tasks without adding any task-specific networks, but allow selecting different best checkpoints for different tasks. To notify the model of which task it is dealing with, we design a unified format of task control codes and prepend them to the source inputs as shown in Figure~\ref{fig:finetune_task}. For instance, we employ ``Translate Java to CSharp:'' as the source prompt for the code-to-code translation task from Java to CSharp. As different tasks have different dataset sizes, we follow~\citet{DBLP:conf/nips/ConneauL19} in employing a balanced sampling strategy. For $N$ datasets (or tasks), we define a multinomial distribution with probabilities ${\{q_i\}}_{i=1}^{N}$ to sample from: \begin{equation}\label{eq:sample} q_i=\frac{r^\alpha_i}{\sum^{N}_{j=1}r^\alpha_j} \text{, where } \ \ r_i=\frac{n_i}{\sum^{N}_{k=1}n_k} \text{,} \end{equation} \noindent where $n_i$ is the number of examples of the $i$-th task and $\alpha$ is set to $0.7$. This balanced sampling aims to alleviate the bias towards high-resource tasks, as the sketch below illustrates.
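The following self-contained Python sketch of Eq.~\ref{eq:sample} (the function name is ours) shows the softening effect of $\alpha$ on two tasks of very unequal size:
\begin{verbatim}
def multinomial_weights(sizes, alpha=0.7):
    """Balanced sampling: q_i proportional to (n_i / sum_k n_k) ** alpha,
    which up-weights low-resource tasks w.r.t. size-proportional sampling."""
    total = float(sum(sizes))
    r = [n / total for n in sizes]
    z = sum(ri ** alpha for ri in r)
    return [ri ** alpha / z for ri in r]

# Two tasks with 1,000,000 and 10,000 examples: plain ratios give
# (0.99, 0.01), while alpha = 0.7 softens them to about (0.96, 0.04).
print(multinomial_weights([1_000_000, 10_000]))
\end{verbatim}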
\section{Related Work} \paragraph{Pre-training on Natural Language.} Pre-trained models based on Transformer architectures~\cite{DBLP:conf/nips/VaswaniSPUJGKP17} have led to state-of-the-art performance on a broad set of NLP tasks. They can be generally categorized into three groups: encoder-only models such as BERT~\cite{DBLP:conf/naacl/DevlinCLT19}, RoBERTa~\cite{DBLP:journals/corr/abs-1907-11692}, and ELECTRA~\cite{DBLP:conf/iclr/ClarkLLM20}, decoder-only models like GPT~\cite{radford2019language}, and encoder-decoder models such as MASS~\cite{DBLP:conf/icml/SongTQLL19}, BART~\cite{DBLP:conf/acl/LewisLGGMLSZ20}, and T5~\cite{DBLP:journals/jmlr/RaffelSRLNMZLL20}. Compared to encoder-only and decoder-only models that respectively favor understanding and generation tasks, encoder-decoder models can well support both types of tasks. They often employ denoising sequence-to-sequence pre-training objectives that corrupt the source input and require the decoder to recover it. In this work, we extend T5 to programming languages and propose a novel identifier-aware denoising objective that enables the model to better comprehend the code. \input{figures/pretrain_tasks} \paragraph{Pre-training on Programming Language.} Pre-training on programming languages is a nascent field in which much recent work attempts to extend NLP pre-training methods to source code. CuBERT~\cite{DBLP:conf/icml/KanadeMBS20} and CodeBERT~\cite{DBLP:conf/emnlp/FengGTDFGS0LJZ20} are the two pioneer models. CuBERT employs BERT's powerful masked language modeling objective to derive generic code-specific representations, and CodeBERT further adds a replaced token detection~\cite{DBLP:conf/iclr/ClarkLLM20} task to learn NL-PL cross-modal representations. Besides the BERT-style models, \citet{DBLP:conf/sigsoft/SvyatkovskiyDFS20} and \citet{DBLP:conf/kbse/LiuLZJ20} respectively employ GPT and UniLM~\cite{DBLP:conf/nips/00040WWLWGZH19} for the code completion task. Transcoder~\cite{DBLP:conf/nips/RoziereLCL20} explores programming language translation in an unsupervised setting. Different from them, we explore encoder-decoder models based on T5 for programming language pre-training and support a more comprehensive set of tasks. Some emerging work~\cite{DBLP:conf/emnlp/ClementDTSS20,DBLP:conf/icse/MastropaoloSCNP21,DBLP:journals/corr/abs-2104-02443} in the recent literature also explores the T5 framework on code, but it focuses only on a limited subset of generation tasks and, unlike our work, does not support understanding tasks. Apart from these, PLBART~\cite{DBLP:journals/corr/abs-2103-06333}, based on another encoder-decoder model BART, can also support both understanding and generation tasks. However, all the above prior work simply processes code in the same way as natural language and largely ignores the code-specific characteristics. Instead, we propose to leverage the identifier information in code for pre-training. Recently, GraphCodeBERT~\cite{DBLP:journals/corr/abs-2009-08366} incorporates the data flow extracted from the code structure into CodeBERT, while \citet{DBLP:journals/corr/abs-2102-07492} propose a deobfuscation objective to leverage the structural aspect of PL. These models only focus on training a better code-specific encoder. \citet{DBLP:conf/iclr/ZugnerKCLG21} propose to capture the relative distances between code tokens over the code structure. By contrast, we specifically focus on the identifiers that preserve rich code semantics and fuse such information into a Seq2Seq model via two novel identifier tagging and prediction tasks.
\section{Introduction} \subsection{} The study of points on semiabelian varieties ({\it i.e.} extensions of abelian varieties by tori) is a very classical topic of diophantine geometry. In algebraic geometry, it also played a crucial role in the guise of Deligne's $1$-motives \cite{D}. Over an algebraically closed subfield $k$ of $\mathbf{C}$, a $1$-motive $[\mathcal L \to \mathcal G]$ is given by a morphism from a lattice $\mathcal L$ to a semiabelian variety $\mathcal G$ (taking a basis of the lattice, this amounts to the data of a finite number of points on $\mathcal G$). This notion served as a double test ground: $i)$ for Deligne's theory of mixed Hodge structures if $k=\mathbf{C}$ ($1$-motives form an easily describable full subcategory of the category of mixed Hodge structures), $ii)$ for Grothendieck's dream of mixed motives ($1$-motives are those coming from varieties of dimension $\leq 1$, whence the name ``$1$-motive''). \smallskip Nowadays, a well-defined {\it tannakian} category $MM(k)$ of mixed motives with rational coefficients over a field $k\subset \mathbf{C}$ is available in full generality, in two different (independent, but canonically equivalent) versions due to M. Nori \cite{N} and J. Ayoub \cite{Ay1} respectively (see \cite{A3} for a survey). Nori's construction is more elementary and puts in light the universality property of motives, while the geometric origin of morphisms is more apparent in Ayoub's version, which is constructed out of Voevodsky's triangulated category. Anyway, one knows how to associate unconditionally a {\it motivic Galois group} to any motive over $k$. One can attach an object of $MM(k)$ not only to any $k$-variety, but also to any $1$-motive over $k$. We denote by $MM(k)_1$ ({\it resp.} $MM(k)_1^{\otimes}$) the full subcategory ({\it resp.} tannakian subcategory) of $MM(k)$ generated by $1$-motives: objects of $MM(k)_1^{\otimes}$ are constructed from those of $MM(k)_1$ by saturating under tensor products, duals and subquotients. \subsection{} The tannakian category $MM(k)$ admits a fiber functor (the {\it Hodge realization}) toward the tannakian category $MHS$ of mixed Hodge structures. By tannakian duality, this provides an injective homomorphism between the Mumford-Tate group of any motive ({\it i.e.} the tannakian group attached to its Hodge realization) and its motivic Galois group. Let us assume that $k$ is {\it algebraically closed}. A version of the Hodge conjecture (the {\it Hodge-Nori conjecture} \cite{Ar}) then predicts that the Hodge realization is full; a slightly more precise version, in terms of tannakian groups, predicts that the motivic Galois group equals the Mumford-Tate group. This is what we prove in this paper in the special case of $1$-motives: \begin{thm}\label{th} The Hodge realization $ \,MM(k)_1^{\otimes} \to MHS$ is fully faithful, and identifies $MM(k)_1^{\otimes} $ with a tannakian subcategory\footnote{in particular, it is stable under subobjects taken in $MHS$.} of $MHS$. A fortiori, the motivic Galois group of any $1$-motive over $k$ coincides with its Mumford-Tate group. \end{thm} This confers a genuine motivic content on the description of Mumford-Tate groups of $1$-motives presented in \cite{Be}, and in particular on the notion of deficiency \cite{Ber}. This could also shed some light on P.
Jossen's work on the Mumford-Tate conjecture \cite{J} and on several recent works on periods of $1$-motives (see {\it e.g.} \cite{H-W}) in their relation to the Grothendieck period conjecture (and to our generalization of Grothendieck's conjecture to a not necessarily algebraic ground field $k$ \cite[23.4.1]{A2}). \subsection{} In \cite{A1}, we proved that the motivic Galois group of any abelian $k$-variety coincides with its Mumford-Tate group. In that setting, motivic Galois groups were understood in the context of the tannakian category of pure motives $M(k)$ defined in terms of motivated correspondences. According to \cite{Ar}, $M(k)$ is canonically equivalent to the socle of $MM(k)$ ({\it i.e.} its full subcategory of semisimple objects), which allows us to interpret our theorem in \cite{A1} as a confirmation of the Hodge-Nori conjecture for abelian varieties, and Theorem \ref{th} as an extension of it. In fact, Theorem \ref{th} has the following consequence: \begin{cor} The tannakian subcategory of $MM(k)_1^{\otimes}$ consisting of semisimple objects is canonically equivalent to the tannakian subcategory of $M(k)$ generated by the motives of abelian varieties. \end{cor} \subsection{} In the prehistory of the theory of motives, one was limited to morphisms of systems of realizations (a.k.a. absolute Hodge correspondences) instead of morphisms of ``geometric origin'', as genuine motivic morphisms should be, in some way. In that weaker context, Deligne proved that the absolute Hodge tannakian group attached to any complex abelian variety coincides with its Mumford-Tate group, and J.-L. Brylinski extended this result to $1$-motives. Our result lifts Brylinski's result to the genuine motivic context, with a completely parallel argument, namely: $i)$ we replace Deligne's result by the stronger result that any Hodge cycle on a complex abelian variety is motivated \cite{A1}, translated in terms of Nori motives via \cite{Ar}, $ii)$ we mimic Brylinski's deformation argument, using a motivic version of the ``theorem of the fixed part'' due to Nori, Jossen and Ayoub (independently; since Nori's and Jossen's notes do not seem easily accessible, we rely on Ayoub's published version \cite{Ay1} and the compatibility with Nori's framework \cite{C-GAS}). \smallskip The progress between Brylinski's theorem and the theorem of this paper is thus a shadow of the progress of the theory of motives in the last 35 years, and can be restated as follows: {\it for $1$-motives, tensor Hodge classes do not only satisfy the expected compatibilities between various realizations, they indeed ``come from geometry''} (in a non-naive sense, more apparent in Ayoub's setting). \section{} Let us begin with some preliminaries about $1$-motives and Nori motives. As above, let $k$ be an algebraically closed subfield of $\mathbf{C}$ and $MM(k)$ denote the tannakian category of Nori motives over $k$ with rational coefficients \cite{N}\cite{H-MS} (see also \cite{BV-H-P} for a new viewpoint on the tensor structure). The Betti realization provides a fiber functor $R_B: MM(k)\to Vec_\mathbf{Q}$, which is canonically enriched as a fiber functor $R_H: MM(k) \to MHS$ toward the tannakian category of rational mixed Hodge structures. There is also a category of effective (Nori) mixed motives $MM^{eff}(k)$, from which $MM(k)$ is constructed by formally inverting the Lefschetz motive. It is not known whether the faithful functor $MM^{eff}(k)\to MM(k)$ is full. Let $DM(k)_1$ be the abelian category of Deligne $1$-motives up to isogeny.
In \cite[6.1]{Ay-BV}, it is shown that $DM(k)_1$ is canonically equivalent to a full abelian subcategory of $MM^{eff}(k)$: this is the thick abelian subcategory generated by motives of the form $h_1(X,Y)$ and the unit motive $\mathbf{Q}(0)$. We denote by $MM(k)_1$ its essential image in $MM(k)$. According to \cite[6.9]{Ay-BV}, the composed functor $$DM(k)_1\to MM^{eff}(k)\to MM(k) \to MHS$$ coincides with the (rational) Hodge realization of $1$-motives constructed by Deligne \cite{D}. \begin{prop} This composed functor is fully faithful. A fortiori $DM(k)_1\to MM(k)_1$ is an equivalence. \end{prop} \begin{proof} Deligne actually proved that $DM(k)_1\to MHS$ is fully faithful in the case $k=\mathbf{C}$. The case of an algebraically closed subfield $k$ follows. Indeed, let $M_i =[\mathcal L_i\to \mathcal G_i], i= 1, 2,$ be $1$-motives over $k$, each given by a lattice $\mathcal L_i$ and a morphism from $\mathcal L_i$ to a semi-abelian variety $\mathcal G_i$, extension of an abelian variety $A_i$ by a torus $T_i$. It suffices to show that any morphism $M_{1\mathbf{C}} \to M_{2\mathbf{C}}$ descends to $k$, {\it i.e.} that the morphism $\mathcal G_{1\mathbf{C}} {\to} \mathcal G_{2\mathbf{C}}$ descends to $k$. By Cartier duality, this amounts to the well-known fact that the induced morphism of abelian varieties $A^\vee_{2\mathbf{C}} \stackrel{f}{\to} A^\vee_{1\mathbf{C}}$ descends to $k$. The second statement follows from the first since all involved functors are faithful. \end{proof} In particular $MM(k)_1$ is abelian. Recalling that the socle of $MM(k)$ is canonically equivalent to the category $M(k)$ of pure motives constructed in \cite{A1} (\cite[6.4]{Ar}, see also \cite[10.2]{H-MS}), we also deduce that any semisimple object of $MM(k)_1$ is isomorphic to a direct sum in $M(k)$ of the motive $h_1(A)$ of an abelian variety and copies of $\mathbf{Q}(0), \mathbf{Q}(1)$. \section{} Let $M =[\mathcal L \to \mathcal G]$ be a $1$-motive over $k$, given by a morphism from a lattice $\mathcal L$ to a semi-abelian variety $\mathcal G$, extension of an abelian variety $A$ by a torus $T$. Up to replacing $M$ by the direct sum of $M$ and its Cartier dual, which changes neither the motivic Galois group, nor the Mumford-Tate group, we may assume that $M$ is symmetric (= polarizable) in the sense of \cite{Br}.
By \cite[0.6.2]{A1} and by the identifications indicated above, the theorem holds for the tannakian subcategory of {\it semisimple} $1$-motives (up to isogeny), in particular for the tannakian subcategory generated by the $1$-motive $M_0 := Gr_W M = [\mathcal L \stackrel{0}{\to} A\times T]$. Note that the image of $M_0$ in $MM(k)_1$ is the semisimplification of the image of $M$. Let $P$ be the Mumford-Tate group of $M$, and let us fix a polarization of $M$ (hence of $A$). \begin{lemma} Polarized $1$-motives $N$ with $Gr_W N = M_0$ and Mumford-Tate group contained in $P$ fit into an algebraic family $\mathcal M$ parametrized by a smooth connected $k$-variety $X$. \end{lemma} See \cite[2.2.8.6]{Br} (and also \cite[1.8]{J}). The $1$-motive $M$ ({\it resp.} $M_0$) is a fiber of $\mathcal M$ at a $k$-point $x$ ({\it resp.} $x_0$) of $X$. The ``mixed Shimura variety'' $X$ is just a torus bundle over an abelian variety, analytically isomorphic to $W_{-1}P(\mathbf{Z})\backslash W_{-1}P(\mathbf{C})/ (F^0\cap W_{-1}P(\mathbf{C}))$. In particular the monodromy of the family at $x$ is given by the natural action of $W_{-1}P(\mathbf{Z})$ on $H(M)$ and its Zariski hull is the connected group $W_{-1}P$. Any Hodge ({\it i.e.} $P$-invariant) tensor is thus invariant under monodromy. The point $x$ is ``Hodge-generic'' in the family in the sense that the Mumford-Tate group of $\mathcal M_x= M$ is maximal, equal to $P$. Let $L$ be a $P$-stable line in some mixed tensor construction $T^\bullet R_B(M)$ over $R_B(M)$ (with Tate twists). By (Tate) twisting, one reduces to the case where $L$ is $P$-invariant, {\it i.e.} generated by a Hodge tensor. We have to show that $L$ is the realization of a unit submotive in $T^\bullet M$, knowing that its parallel transport to $x_0$ is the realization of a unit submotive in $T^\bullet M_0$. \section{} Let $Y$ be a smooth connected $k$-variety. Let $\mathcal N \in MM(Y)$ be a motivic local system, viewed as a mixed motive over $k(Y)$ unramified over $Y$. Its Betti realization is a local system $R_B(\mathcal N)$ of $\mathbf{Q}$-vector spaces on $Y(\mathbf{C})$. Taking the fiber at a point $y\in Y(k)$, one gets in this way a fiber functor $R_{B,y}: MM(Y) \to Vec_\mathbf{Q}$, which is then enriched as a fiber functor $R_{mon,y}: MM(Y) \to Rep_\mathbf{Q} \,\pi_1(Y(\mathbf{C}), y)$ (monodromy realization). Taking tannakian duals, one gets a morphism $G_{mon}(\mathcal N, y)\to G_{mot}(\mathcal N, y)$ (in fact an embedding of closed subgroups of $GL(R_{B,y}(\mathcal N))$), where $G_{mon}(\mathcal N, y)$ is the algebraic monodromy group attached to $R_{mon, y}(\mathcal N)$. \begin{prop} $G_{mon}(\mathcal N, y)$ is a normal subgroup of $ G_{mot}(\mathcal N, y)$. If $R_B(\mathcal N)$ is constant, then $\mathcal N$ is constant, {\it i.e.} is the pull-back of a motive in $MM(k)$. \end{prop} \begin{proof} See \cite[th. 40, rem. 41]{Ay2}\footnote{in this reference, Ayoub uses a complex geometric generic point of $Y$ rather than $y$, but the fiber functors become isomorphic as usual.}; the proof is given in \cite[2.57]{Ay1} in the context of Ayoub's category of mixed motives, which by \cite{C-GAS} is equivalent to the category of Nori motives. The result also appears in unpublished works by Nori and by Jossen (in the context of Nori motives properly). \end{proof} \medskip {\it Application}: let $\mathcal M \in MM(X)$ be the motivic local system attached to the family of $1$-motives of the lemma.
Let $\mathcal N \in MM(X)$ correspond to the representation of $ G_{mot}(\mathcal M, x)$ generated by $L$ inside $T^\bullet R_{B,x}(\mathcal M) = T^\bullet R_B(M)$. Because $L$ is fixed by $ G_{mon}(\mathcal N, x)$, it follows from the first part of the proposition that $R_B(\mathcal N)$ is a constant local system. By the second part, $\mathcal N$ itself is constant. Since $R_{B, x_0}(\mathcal N)$ contains the parallel transport of $L$ at $x_0$, which is the realization of a unit submotive in $\mathcal N_{x_0} \subset T^\bullet M_0$, we conclude that $L$ is the Betti realization of a unit submotive in $\mathcal N_{x} \subset T^\bullet M$ (which coincides a posteriori with $\mathcal N_x$ itself). This proves Theorem \ref{th}. \qed \smallskip One may wonder\footnote{as Brylinski already did in his absolute Hodge context \cite[end of 2.2]{Br}.} whether there is a more direct alternative argument by d\'evissage (with respect to the weight) rather than by deformation, in order to perform the reduction to the case of abelian varieties.
\section{Introduction} Deep learning is currently witnessing a major interest in computer vision and neighboring fields \cite{Cun2015}. Its principle consists in learning mapping functions, such as multi-layered convolutional or recurrent neural networks, whose inputs correspond to different stimuli (images, videos, etc.) and whose outputs correspond to their classification and regression. Early deep learning architectures, including AlexNet \cite{Krizhevsky2012} and other networks~\cite{SzegedyCVPR2015,He2016,HuangCVPR2017,sahbiicassp2015,He2017,ross2015,javad2017,zaremba2014,mike1997,zhang2020,chris2017,sahbijiuicassp2016,indola2016,Ishaan2017,Ronneberger2015,Chen2017,Long2015,mingyuanpr2019,ArjovskyICML2016,DorobantuCorr2016,Mhammedi2017,sahbiacm2000,Vorontsovicml2017,Wisdom2016,JeffTPAMI2017,sahbiicassp2016,indola2014,sahbiiccv2019}, were initially dedicated to vectorial data such as images~\cite{Krizhevsky2012,He2016}. However, data sitting on top of irregular domains (including graphs) require extending deep learning to non-vectorial data~\cite{Bruna2013,Defferrard2016,Huang2018,Kipf2016}. These extensions, widely popularized as graph neural networks (GNNs), are currently emerging for different use-cases and applications~\cite{Monti2017}. Two major families of GNNs exist in the literature: spectral and spatial. Spectral methods~\cite{Bruna2013,Defferrard2016,Henaff2015,Kipf2016,Levie2018,Li2018,Zhuang2018,Chen2018,Huang2018} achieve convolution by {\it projecting} the signal of a given graph onto a Fourier basis (as the spectral decomposition of the graph Laplacian) prior to its convolution in the spectral domain using the Hadamard (entrywise) product, and then {\it back-projecting} the resulting convolved signal into the input domain using the inverse of the Fourier transform \cite{Slepian1983}. Whilst spectral convolution is well defined, it requires solving a cumbersome eigen-decomposition of the Laplacian. Besides, the resulting Fourier basis is graph-dependent and non-transferable to general and highly irregular graph structures \cite{Monti2017}. Another category of GNNs, dubbed as spatial \cite{Gori2005,Micheli2009,Scarselli2008,Wu2019,Hamilton2017}, has also emerged and seeks to achieve convolutions directly in the input domain without any preliminary step of spectral decomposition. Its general principle consists in {\it aggregating} node representations before applying convolution to the vectorized node aggregates \cite{Atwood2016,Gao2018,Niepert2016,Hamilton2017,Monti2017,Zhang2018}. This second category of methods is deemed computationally more efficient and also more effective compared to spectral ones; however, its success is reliant on the {\it aggregate operators}, which in turn depend on the topology of input graphs.\\ \indent Graph topology is usually defined with one or multiple adjacency matrices (or equivalently their Laplacian operators) that capture connectivity in these graphs as well as their differential properties. Most of the existing GNN architectures rely on predetermined graph structures which depend on the properties of the underlying applications \cite{WangICLR2018,Loukas2020,Atamna} (e.g., node-to-node relationships in social networks \cite{Wu2019}, edges in 3D modeling~\cite{STLSTM16,SBU12,Shicvpr2019,Yongcvpr2015,YanAAAI2018}, protein connectivity in biological systems \cite{Martin2016,Shuqian2019}, etc.)
whilst other methods handcraft graph connections by modeling similarities between nodes \cite{Bresson16}\footnote{\scriptsize using standard similarity functions (see for instance~\cite{sahbirr2002,sahbiphd2003,sahbirr2004,sahbisoft2008}).}. However, connections (either intrinsically available or handcrafted) cannot optimally capture all the relationships between nodes as their setting is oblivious to the targeted applications. For instance, node-to-node relationships in human skeletons capture the intrinsic anthropometric characteristics of individuals (useful for their identification), while other connections, yet to be inferred, are necessary for recognizing their dynamics and actions (see Fig.~\ref{fig:A1}). Put differently, depending on the task at hand, connectivity should be appropriately learned by including {\it not only} the existing intrinsic links between nodes in graphs but also their extrinsic (inferred) relationships. \\ \indent In this paper, we introduce a novel framework that learns convolutional filters on graphs together with their topological properties. The latter are modeled through matrix operators that capture multiple aggregates on graphs. Learning these topological properties relies on a constrained cross-entropy loss whose solution corresponds to the learned entries of these matrix operators. We consider different {\it constraints} (including stochasticity, orthogonality and symmetry) acting as regularizers which reduce the space of possible solutions and the risk of overfitting. Stochasticity implements random walk Laplacians while orthogonality models multiple aggregation operators with non-overlapping supports; it also avoids redundancy and oversizing the learned GNNs with useless parameters. Finally, symmetry further reduces the number of training parameters and allows learning positive semi-definite matrix operators. We also consider different reparametrizations, particularly {\it crispmax}, that implement orthogonality while being highly effective as shown later in experiments. \section{Related work} Without any a priori knowledge, graph inference (a.k.a. graph design) is ill-posed and NP-hard \cite{Sandeep2019, Hanjun2017,Marcelo2018}. Most of the existing solutions rely on constraints (including similarity, smoothness, sparsity, band-limitedness, etc. \cite{Belkin2003,dong18,Daitch,Sardellitti16,LeBars19,Sardellitti19,Valsesia18,Kalofolias,Egilmez,Chepuri,Dong2016}) which have been adapted for a better conditioning of graph design \cite{Pasdeloup2017,Thanou2017}. From the machine learning point-of-view and particularly in GNNs, early methods \cite{Micheli2009,kipf17} rely on handcrafted or predetermined graphs that model node-to-node relationships using similarities or the inherent properties of the targeted applications~\cite{sahbiijmir2016,vo2012transductive,sahbitnnls2017}. These relationships define operators (with adjacency matrices or Laplacians) that aggregate the neighbors of nodes before applying convolutions on the resulting aggregates. Existing operators include the power series of the adjacency matrices \cite{YLi2018} (a.k.a. power maps) and also the recursive Chebyshev polynomial which provides an orthogonal Laplacian basis \cite{Bresson16}. However, in spite of being relatively effective, the potential of these handcrafted operators is not fully explored as their design is either rigid, agnostic to the tasks at hand, or achieved through tedious cross-validation.
More recent alternatives seek to define graph topology that best fits a given classification or regression problem \cite{Yaguang2019,Fetaya2018,Alet2018,Alet2018b,Luca2019,ChenAAAI2020,Zhuang2018,Li2018}. For instance, authors in \cite{Luca2019} propose a GNN for semi-supervised classification tasks that learns graph topology with sparse structure given a cloud of points; node-to-node connections are modeled with a joint probability distribution on Bernoulli random variables whose parameters are found using bi-level optimization. A computationally more efficient variant of this method is introduced in \cite{ChenAAAI2020} using a weighted cosine similarity and edge thresholding. \\ \indent Other solutions make improvement over the original GNNs in \cite{kipf17} by exploiting symmetric matrices; for instance, the adaptive graph convolutional network in \cite{Li2018} discovers hidden structural relations (unspecified in the original graphs), using a so-called residual graph adjacency matrix obtained by learning a distance function over nodes. The work in \cite{Zhuang2018} introduces a dual architecture with two parallel graph convolutional layers sharing the same parameters. This method considers a normalized adjacency matrix and a positive pointwise mutual information matrix in order to capture node co-occurrences through random walks sampled from a graph. The difference of our contribution, w.r.t this related work, resides in multiple aspects; on the one hand, in contrast to many existing methods (e.g., \cite{YLi2018}, which considers a single adjacency matrix shared through power series), the matrix operators designed in our contribution are non-parametrically learned and this provides more flexibility to our design. On the other hand, constraining these matrices (through stochasticity\footnote{\scriptsize In contrast to other methods (e.g., \cite{Micheli2009}) which consider unnormalized adjacency matrices, stochasticity (used in our proposed approach) normalizes these matrices and thereby prevents node representations from having extremely different scales.}, orthogonality and symmetry\footnote{\scriptsize Symmetry is also used in \cite{dong18,Sardellitti19,LeBars19} in order to enforce the positive semi-definiteness of the learned Laplacians. However, the formulation presented in our paper differs from this related work in that self-loops and multiple connected components are allowed, and this provides more flexibility to our design.}) provides us with an effective regularization that mitigates overfitting as corroborated later in experiments. \def{\bf A}{{\bf A}} \def{\!(r)}{{\!(r)}} \def{\!(k)}{{\!(k)}} \def{\A^\r}{{{\bf A}^{\!(r)}}} \def{\A^\k}{{{\bf A}^{\!(k)}}} \def {\A^{\r}_{\phantom{+}}}{{{\bf A}^{{\!(r)}}_{\phantom{+}}}} \def {\A^{\k}_{\phantom{+}}}{{{\bf A}^{{\!(k)}}_{\phantom{+}}}} \def {\A^{{\!(k-1)}}_{\phantom{+}}}{{{\bf A}^{{\!(k-1)}}_{\phantom{+}}}} \def{\bf I}{{\bf I}} \def{\bf X}{{\bf X}} \def{\bf B}{{\bf B}} \def{\cal K}{{\bf K}} \def{\bf U}{{\bf U}} \def{\bf W}{{\bf W}} \def{\cal S}{{\cal S}} \def{\cal N}{{\cal N}} \def{\cal G}{{\cal G}} \def{\cal V}{{\cal V}} \def{\cal E}{{\cal E}} \def{\cal F}{{\cal F}} \def \v{{\bf vec}} \section{Spatial graph convolutional networks} Let ${\cal S}=\{{\cal G}_i=({\cal V}_i, {\cal E}_i)\}_i$ denote a collection of graphs with ${\cal V}_i$, ${\cal E}_i$ being respectively the nodes and the edges of ${\cal G}_i$.
Each graph ${\cal G}_i$ (denoted for short as ${\cal G}=({\cal V}, {\cal E})$) is endowed with a graph signal $\{\psi(u) \in \mathbb{R}^s: \ u \in {\cal V}\}$ and associated with an adjacency matrix ${\bf A}$ with each entry ${\bf A}_{uu'}>0$ iff $(u,u') \in {\cal E}$ and $0$ otherwise. Graph convolutional networks (GCNs) return both the representation and the classification of ${\cal G}$ by learning $K$ filters ${\cal F}=\{g_\theta\}_{\theta=1}^K$; each filter $g_\theta=({\cal V}_\theta,{\cal E}_\theta)$, also referred to as a graphlet, corresponds to a graph with a small number of nodes and edges, i.e., $|{\cal V}_\theta| \ll |{\cal V}|$ and $|{\cal E}_\theta| \ll |{\cal E}|$. This filter $g_\theta$ defines a convolution at a given node $u \in {\cal V}$ of ${\cal G}$ as \begin{equation}\label{initial} ({\cal G} \star g_\theta)_u = f\bigg( \frac{1}{|{\cal V}_\theta|}\sum_{u' \in {\cal N}_r(u),v \in {\cal V}_\theta} \big\langle \psi(u'),\psi(v) \big \rangle\bigg), \end{equation} with $f$ being a nonlinear activation, ${\cal N}_r(u)$ the $r$-hop neighbors of $u$ and $\langle . , . \rangle: \mathbb{R}^s \times \mathbb{R}^s \rightarrow \mathbb{R}$ the inner product. This graph convolution --- similar to the convolution kernel~\cite{haussler99} --- bypasses the ill-posedness of the spatial support around $u$ due to arbitrary degrees and node permutations in ${\cal N}_r(u)$; since the input of $f$ is defined as the sum of all the inner products between all possible signal pairs taken from $\psi({\cal N}_r(u)) \times \psi({\cal V}_\theta)$, its evaluation does not require any hard alignment between these pairs and it is thereby agnostic to any arbitrary automorphism in ${\cal G}$ and $g_\theta$. \\ Considering $w_\theta=\frac{1}{|{\cal V}_\theta|}\sum_{v \in {\cal V}_\theta} \psi(v)$ as the aggregate of the graph signal in ${\cal V}_\theta$, Eq.~(\ref{initial}) reduces to \begin{equation}\label{initial2} ({\cal G} \star g_\theta)_u = f\bigg(\bigg\langle \sum_{u'} {\bf A}^{{\!(r)}}_{uu'} . \psi(u'), w_\theta \bigg\rangle \bigg), \end{equation} \def{\cal K}{{\cal K}} \noindent with ${\A^{\r}_{\phantom{+}}}$ being the $r$-hop adjacency matrix of ${\cal G}$. From this definition, knowing the parameters of the aggregate $w_\theta$ is sufficient in order to properly define spatial convolutions on graphs. Besides, each aggregate filter $w_\theta$ has fewer parameters, and its learning is less subject to overfitting, compared to the non-aggregate filter $g_\theta$, especially when the latter is stationary, so the parameters of $w_\theta$ can be shared across different locations (nodes) of ${\cal G}$. Using Eq.~(\ref{initial2}), the extension of convolution to $K$ filters and $|{\cal V}|$ nodes can be written as \begin{equation}\label{matrixform} ({\cal G} \star {\cal F})_{\cal V} = f\big({\A^{\r}_{\phantom{+}}} \ {\bf U}^\top \ {\bf W}\big), \end{equation} \noindent where $^\top$ is the matrix transpose operator, ${\bf U} \in \mathbb{R}^{s\times n}$ is the graph signal (with $n=|{\cal V}|$), ${\bf W} \in \mathbb{R}^{s \times C}$ is the matrix of convolutional parameters corresponding to the $C$ channels (filters) and $f(.)$ is now applied entrywise. In Eq.~\ref{matrixform}, the input signal ${\bf U}$ is projected using the adjacency matrix ${\bf A}$ and this provides, for each node $u$, the aggregate of its neighbors.
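As a concrete illustration of Eq.~\ref{matrixform}, the following minimal PyTorch sketch implements one such layer; the module name, the initialization scheme and the choice of ReLU for $f$ are our own illustrative assumptions, not the implementation evaluated in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class SpatialGraphConv(nn.Module):
    """One spatial graph convolution: aggregate the graph signal U with
    an (r-hop) adjacency operator A, then mix the aggregates with the
    C filters in W, i.e., f(A U^T W)."""
    def __init__(self, signal_dim, num_filters):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(signal_dim, num_filters))

    def forward(self, A, U):
        # A: (n, n) aggregation operator; U: (s, n) graph signal
        return torch.relu(A @ U.t() @ self.W)   # (n, C) node features

layer = SpatialGraphConv(signal_dim=24, num_filters=16)
out = layer(torch.eye(5), torch.randn(24, 5))  # identity A: no mixing
\end{verbatim}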
Taking powers of ${\bf A}$ (i.e., ${\bf A}^{\!(r)}$, with $r>1$) makes it possible to capture the $r$-hop neighbor aggregates in ${\cal V}$ and thereby to model larger extents and more influential contexts. When the adjacency matrix ${\bf A}$ is common to all graphs\footnote{\scriptsize e.g., when considering a common graph structure for all actions in videos.}, the entries of ${\bf A}$ can be handcrafted or learned, so Eq.~(\ref{matrixform}) implements a convolutional network with two layers; the first one aggregates signals in ${\cal N}_r({\cal V})$ by multiplying ${\bf U}$ with ${\bf A}^{\!(r)}$, while the second layer achieves convolution by multiplying the resulting aggregate signals with the $C$ filters in ${\bf W}$. \def {E}{{E}} \section{Learning connectivity in GCNs}\label{learned} In the sequel of this paper, we rewrite the aforementioned adjacency matrices as $\{{\bf A}_k\}_k$; the latter will also be referred to as context matrices. Following Eq.~\ref{matrixform}, the term ${\bf A}_k {\bf U}^\top$ acts as a feature extractor that collects different statistics including means and variances of node contexts; indeed, when ${\bf A}_k$ is column-stochastic, ${\bf A}_k {\bf U}^\top $ models expectations $\{\mathbb{E}(\psi({\cal N}_r(u)))\}_u$, and if one considers ${\bf I}-{\bf A}_k$ instead of ${\bf A}_k$, then $({\bf I}-{\bf A}_k) \ {\bf U}^\top$ captures (up to a squared power\footnote{\scriptsize Note that removing this square power maintains skewness and provides us with more discriminating features.}) statistical variances $\{\psi(u)-\mathbb{E}(\psi({\cal N}_r(u)))\}_u$. Therefore, $\{{\bf A}_k\}_k$ constitutes a transformation basis (possibly overcomplete) which allows extracting first and possibly higher order statistics of graph signals before convolution. One may also design this basis to make it orthogonal, by constraining the learned matrices $\{{\bf A}_k\}_k$ to be cycle- and loop-free (such as trees) and considering the power map ${\bf A}_k={\bf A}^{\!(k)}$, so the basis $\{{\bf A}_k\}_k$ becomes necessarily orthogonal (see section \ref{baselines} later). The latter property --- which allows learning compact and complementary convolutional filters --- is extended to unconstrained graph structures as shown subsequently.\\ Considering ${E}$ as the cross entropy loss associated with a given classification task and $\v(\{{\bf A}_k\}_k)$ as a vectorization that appends all the entries of $\{{\bf A}_k\}_k$ following any arbitrary order, we turn the design of $\{{\bf A}_k\}_k$ into a part of GCN learning. As the derivatives of ${E}$ w.r.t the different layers of the GCN are known (including the output of the convolutional layer in Eq.~\ref{matrixform}), one may use the chain rule in order to derive the gradient $\frac{\partial {E}}{\partial \v(\{{\bf A}_k\}_k)}$ and hence update the entries of $\{{\bf A}_k\}_k$ using stochastic gradient descent (SGD). In this section, we upgrade SGD by learning the convolutional parameters of GCNs together with the matrices $\{{\bf A}_k\}_k$ while implementing {\it orthogonality, stochasticity and symmetry}.
As shown subsequently, orthogonality allows us to design $\{{\bf A}_k\}_k$ with a minimum number of parameters, stochasticity normalizes nodes by their degrees and allows learning normalized random walk Laplacians, while symmetry further reduces the number of training parameters by constraining the upper and the lower triangular parts of the learned $\{{\bf A}_k\}_k$ to share the same parameters, and also constrains the underlying learned random walk Laplacians to be positive semi-definite. \def{\bf D}{{\bf D}} \subsection{Stochasticity} In this subsection, we rewrite ${\bf A}_k$ for short as ${\bf A}$. Stochasticity of a given matrix ${\bf A}$ ensures that all of its entries are positive and that each column sums to one; i.e., the matrix ${\bf A}$ models a Markov chain whose entry ${\bf A}_{ij}$ provides the probability of transition from node $u_j$ to node $u_i$ in ${\cal G}$. In other words, the matrix ${\bf A}$ captures probabilistically how reachable the set of neighbors (context) of a given node $u_j$ is. Taking powers of a stochastic matrix ${\bf A}$ provides the probabilities of transition in multiple steps, and these powers also preserve stochasticity. This property also implements normalized random walk Laplacian operators, which are valuable in the evaluation of weighted means and variances\footnote{\scriptsize also referred to as non-differential and differential features respectively.} in Eq.~\ref{matrixform} using a standard feed-forward network; otherwise, one has to consider a normalization layer (with extra parameters), especially on graphs with heterogeneous degrees, in order to reduce the covariate shift and distribute the transition probability evenly through nodes before achieving convolutions. Hence, stochasticity acts as a regularizer that reduces the complexity (number of layers and parameters) in the learned GCN and thereby the risk of overfitting. \\ \noindent Stochasticity requires adding equality and inequality constraints in SGD, i.e., ${\bf A}_{ij}\in [0,1]$ and $\sum_{q} {\bf A}_{qj}=1$. In order to implement these constraints, we consider a reparametrization of the learned matrices, as ${\bf A}_{ij}= h(\hat{{\bf A}}_{ij})\slash {\sum_q h(\hat{{\bf A}}_{qj})}$, with $h: \mathbb{R} \rightarrow \mathbb{R}^+$ being strictly monotonic; this allows a free setting of the matrix $\hat{{\bf A}}$ during optimization while guaranteeing ${\bf A}_{ij} \in [0,1]$ and $\sum_{q} {\bf A}_{qj}=1$. During backpropagation, the gradient of the loss ${E}$ (now w.r.t $\hat{{\bf A}}$) is updated using the chain rule as \begin{equation}\label{eq0000000} \begin{array}{lll} \displaystyle \frac{\partial {E}}{\partial \hat{{\bf A}}_{ij}} &=& \displaystyle \sum_p \frac{\partial {E}}{\partial {\bf A}_{pj}} . \frac{\partial {\bf A}_{pj}}{\partial \hat{{\bf A}}_{ij}} \\ \ \ \ \ \ \ & & \textrm{with} \displaystyle \ \ \ \frac{\partial {\bf A}_{pj}}{\partial \hat{{\bf A}}_{ij}} = \frac{h'(\hat{{\bf A}}_{ij})}{\sum_q h(\hat{{\bf A}}_{qj})}.(\delta_{pi} - {\bf A}_{pj}), \end{array} \end{equation} \noindent and $\delta_{pi}=1_{\{p=i\}}$. In practice $h$ is set to $\exp$ and the original gradient $\big[ \frac{\partial {E}}{\partial {{\bf A}}_{pj}}\big]_{p=1}^n$ is obtained from layerwise gradient backpropagation (as already integrated in standard deep learning tools including PyTorch).
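In a framework with automatic differentiation, this reparametrization can be implemented directly, and the Jacobian of Eq.~\ref{eq0000000} is then handled by backpropagation; a minimal PyTorch sketch with $h=\exp$ (i.e., a column-wise softmax) follows, where the function name is ours:
\begin{verbatim}
import torch

def column_stochastic(A_hat):
    """Reparametrization with h = exp: a column-wise softmax, so all
    entries lie in [0, 1] and every column of A sums to one; autograd
    differentiates through this map, reproducing the chain rule above."""
    return torch.softmax(A_hat, dim=0)

A_hat = torch.randn(5, 5, requires_grad=True)  # free parameters
A = column_stochastic(A_hat)
assert torch.allclose(A.sum(dim=0), torch.ones(5))
\end{verbatim}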
Hence the new gradient (w.r.t $\hat{{\bf A}}$) is obtained by multiplying the original one by the Jacobian $\JJ_{\textrm{stc}}=\big[\frac{\partial {{\bf A}}_{pj}}{\partial {\hat{{\bf A}}}_{ij}}\big]_{p,i=1}^n$, which merely reduces to $[ {{\bf A}}_{ij}.(\delta_{pi} - {\bf A}_{pj})]_{p,i}$ when $h(.)=\exp(.)$. \def\tr{{\bf Tr}} \subsection{Orthogonality}\label{ortho} Learning multiple matrices $\{{\bf A}_k\}_k$ allows us to capture different contexts and graph topologies when achieving aggregation and convolution, and this enhances the discrimination power of the learned GCN representation as shown later in experiments. With multiple matrices $\{{\bf A}_k\}_k$ (and associated convolutional filter parameters $\{{\bf W}_k\}_k$), Eq.~\ref{matrixform} is updated as \def{\bf M}{{\bf M}} \begin{equation}\label{matrixform2} ({\cal G} \star {\cal F})_{\cal V} = f\bigg(\sum_{k=1}^K {\bf A}_k {\bf U}^\top {\bf W}_k\bigg). \end{equation} If aggregation produces, for a given $u \in {\cal V}$, linearly dependent vectors ${\cal X}_u= \{\sum_{u'} {\bf A}_{kuu'}. \psi(u')\}_k$, then convolution will also generate linearly dependent representations with an overestimated number of training parameters in the null space of ${\cal X}_u$. Besides, the matrices $\{{\bf A}_1,\dots,{\bf A}_K\}$ used for aggregation may also generate overlapping and redundant contexts.\\ Provided that $\{\psi(u')\}_{u' \in {\cal N}_r(u)}$ are linearly independent, a sufficient condition that makes the vectors in ${\cal X}_u$ linearly independent reduces to constraining $({\bf A}_{kuu'})_{k,u'}$ to lie on the Stiefel manifold (see for instance \cite{Yasunori2005,HuangAAAI2017,Ankita2019}) defined as $V_K(\mathbb{R}^{n})=\{ {\bf M} \in \mathbb{R}^{K \times n}: {\bf M} \, {\bf M}^\top ={\bf I}_K\}$ (with ${\bf I}_K$ being the $K \times K$ identity matrix), which thereby guarantees orthonormality and minimality of $\{{\bf A}_1,\dots,{\bf A}_K\}$\footnote{\scriptsize Note that $K$ should not exceed the rank of $\big\{\psi(u')\big\}_{u' \in {\cal N}_r(u)}$ which is upper bounded by $\min(|{\cal V}|,s)$; $s$ is again the dimension of the graph signal.}. A weaker condition is orthogonality, i.e., $\langle {\bf A}_k,{\bf A}_{k'} \rangle_F=0$ and ${\bf A}_{k}\geq {\bf 0}_{n}$, ${\bf A}_{k'}\geq {\bf 0}_{n}$, $\forall k \neq k'$ --- with $\langle, \rangle_F$ being the Hilbert-Schmidt (or Frobenius) inner product defined as $\langle {\bf A}_k,{\bf A}_{k'} \rangle_F=\tr({\bf A}_k^\top{\bf A}_{k'})$ --- and this is equivalent to ${\bf A}_k\odot {\bf A}_{k'}= {\bf 0}_{n} $, $\forall k\neq k'$, with $\odot$ denoting the entrywise Hadamard product and ${\bf 0}_{n}$ the $n \times n$ null matrix. \subsubsection{Problem statement} Considering the cross entropy loss $E$, the matrix operators $\{{\bf A}_k\}_k$ (together with the convolutional filter parameters ${\bf W}=\{ {\bf W}_k \}_k$) are learned as \begin{equation}\label{matrixform3} \begin{array}{lll} \displaystyle {\displaystyle \min}_{\{{\bf A}_k\}_k,{\bf W}} \ \ \ & \displaystyle E\big({\bf A}_1,\dots,{\bf A}_K;{\bf W}\big) & \\ & & \\ \displaystyle {\textrm{s.t.}} & {\bf A}_k\odot {\bf A}_{k}> {\bf 0}_{n} & \\ & {\bf A}_k\odot {\bf A}_{k'}= {\bf 0}_{n} & \forall k, k' \neq k. \end{array} \end{equation} A natural approach to solve this problem is to iteratively and alternately minimize over one matrix while keeping all the others fixed. However --- and besides the non-convexity of the loss --- the feasible set formed by these $O(K^2)$ bi-linear constraints is not convex w.r.t $\{{\bf A}_k\}_k$.
Moreover, this iterative procedure is computationally expensive as it requires solving multiple instances of constrained projected gradient descent, and the number of iterations necessary to reach convergence is large in practice. All these issues make solving this problem challenging and computationally intractable even for reasonable values of $K$ and $n$. In what follows, we consider an alternative, dubbed as crispmax, that makes the design of orthogonality substantially more tractable and also effective. \subsubsection{Crispmax} \indent We investigate a workaround that optimizes these matrices while guaranteeing their orthogonality as a part of the optimization process. Considering $\exp(\gamma \hat{{\bf A}}_{k}) \oslash (\sum_{r=1}^K \exp(\gamma \hat{{\bf A}}_{r}))$ as a softmax reparametrization of ${\bf A}_{k}$, with $\oslash$ being the entrywise Hadamard division and $\{\hat{{\bf A}}_k\}_k$ free parameters in $\mathbb{R}^{n \times n}$, it becomes possible to implement orthogonality by choosing large values of $\gamma$ in order to make this softmax {\it crisp}; i.e., only one entry ${\bf A}_{kij}\gg 0$ while all the others $\{{\bf A}_{k'ij}\}_{k'\neq k}$ vanish, thereby leading to ${\bf A}_k\odot {\bf A}_{k'}= {\bf 0}_{n}$, $\forall k, k' \neq k$. By plugging this {\it crispmax} reparametrization into Eq.~\ref{matrixform3}, the gradient of the loss ${E}$ (now w.r.t $\{\hat{{\bf A}}_k\}_k$) is updated using the chain rule as \begin{equation}\label{eq00001111} \begin{array}{lll} \displaystyle & \displaystyle \frac{\partial {E}}{\partial \v(\{\hat{{\bf A}}_k\}_k)} &= \displaystyle \JJ_\textrm{orth} . \frac{\partial {E}}{\partial \v({\{{{\bf A}}_k\}_k})}, \end{array} \end{equation} {with each entry $({\bf i},{\bf j})=(kij,k'i'j')$ of the Jacobian $\JJ_{\textrm{orth}}$ being} \begin{equation}\label{eq00001112} \begin{array}{lll} &\displaystyle \left\{ \begin{array}{ll} \gamma {{\bf A}}_{kij}.(1 - {\bf A}_{k ij}) & {\footnotesize \textrm{if} \ k=k', i=i', j=j'} \\ -\gamma {{\bf A}}_{kij}.{\bf A}_{k' ij} & {\small \textrm{if} \ k \neq k', i=i', j=j'} \\ 0 & \textrm{\small otherwise,} \end{array}\right. \end{array} \end{equation} \noindent here $\frac{\partial {E}}{\partial \v({\{{{\bf A}}_k\}_k})}$ is obtained from layerwise gradient backpropagation. Note that the aforementioned Jacobian is extremely sparse and efficient to evaluate as only $K^2 n^2$ entries are non-zero (among the $K^2n^4$ possible entries). However, with this reparametrization, large values of $\gamma$ may lead to numerical instability when evaluating the exponential. We circumvent this instability by choosing $\gamma$ that satisfies $\epsilon$-orthogonality: a surrogate property defined subsequently. \begin{definition}[$\epsilon$-orthogonality] A basis $\{{\bf A}_1,\dots,{\bf A}_K\}$ is $\epsilon$-orthogonal if $ \ \forall k,k' \neq k$, $${\bf A}_k\odot {\bf A}_{k'} \leq \epsilon \ \mathds{1}_{n},$$ with $\mathds{1}_{n}$ being the $n \times n$ all-ones matrix.\\ \end{definition} Considering the above definition, (nonzero) matrices belonging to an $\epsilon$-orthogonal basis are linearly independent w.r.t $\langle .,. \rangle_F$ (provided that $\gamma$ is sufficiently large) and hence this basis is also minimal. The following proposition provides a tight lower bound on $\gamma$ that satisfies $\epsilon$-orthogonality.
\def {\bf u}{{\bf u}} \begin{proposition} [$\epsilon$-orthogonality bound] Consider $\{{\bf A}_{kij}\}_{ij}$ as the entries of the crispmax reparametrized matrix ${\bf A}_k$ defined as $\exp(\gamma \hat{{\bf A}}_{k}) \oslash \big(\sum_{r=1}^K \exp(\gamma \hat{{\bf A}}_{r})\big)$. Provided that $\exists \delta>0:$ $\forall i,j,\ell'$, $\exists !\ell$, $\hat{{\bf A}}_{\ell ij} \geq \hat{{\bf A}}_{\ell' ij}+\delta$ (with $\ell' \neq \ell$) and if $\gamma$ is at least $$\displaystyle \frac{1}{\delta} \ln\bigg(\frac{K \sqrt{(1-2\epsilon)}}{1-\sqrt{(1-2\epsilon)}}+1\bigg)$$ then $\{{\bf A}_1,\dots,{\bf A}_K\}$ is $\epsilon$-orthogonal. \\ \end{proposition} \begin{proof} \small For any entry $i,j$, one may find $\ell$, $\ell'$ in $\{1,\dots,K\}$ (with $\ell \neq \ell'$) s.t. $({\bf A}_k \odot {\bf A}_{k'})_{ij} $ {\hspace*{-1cm} \begin{equation*} \begin{array}{lll} \displaystyle &\leq & \displaystyle ({\bf A}_\ell \odot {\bf A}_{\ell'})_{ij} \\ & & \\ &= & \frac{1}{2} ({\bf A}_{\ell ij}^2 + {\bf A}_{\ell' ij}^2) - \frac{1}{2} ({\bf A}_{\ell ij} -{\bf A}_{\ell' ij})^2 \\ & & \\ \displaystyle &\leq& \frac{1}{2}- \frac{1}{2} ({\bf A}_{\ell ij} -{\bf A}_{\ell' ij})^2 \\ \displaystyle &= & \frac{1}{2}- \frac{1}{2} \bigg( \displaystyle\frac{\exp(\gamma \hat{{\bf A}}_{\ell ij})-\exp(\gamma \hat{{\bf A}}_{\ell' ij})}{\exp(\gamma \hat{{\bf A}}_{\ell ij})+\exp(\gamma \hat{{\bf A}}_{\ell' ij})+\sum_{r=3}^K \exp(\gamma \hat{{\bf A}}_{rij})}\bigg)^2 \\ & &\\ \displaystyle &\leq & \frac{1}{2} - \frac{1}{2}\bigg(\displaystyle\frac{\exp(\gamma \hat{{\bf A}}_{\ell ij}) -\exp(\gamma \hat{{\bf A}}_{\ell' ij})}{\exp(\gamma \hat{{\bf A}}_{\ell ij}) +(K-1)\exp(\gamma \hat{{\bf A}}_{\ell' ij})}\bigg)^2 \\ & & \\ \displaystyle & \leq & \frac{1}{2} - \frac{1}{2} \bigg(\displaystyle \frac{1}{1+ \frac{K}{\exp(\gamma \delta )-1}}\bigg)^2. \end{array} \end{equation*} The sufficient condition is to choose $\gamma$ such that \begin{equation*} \frac{1}{2} - \frac{1}{2} \bigg[\displaystyle \frac{1}{1+ \frac{K}{\exp(\gamma \delta )-1}}\bigg]^2 \leq \epsilon \implies \displaystyle \gamma \geq\displaystyle \frac{1}{\delta} \ln\bigg(\frac{K \sqrt{(1-2\epsilon)}}{1-\sqrt{(1-2\epsilon)}}+1\bigg). \end{equation*} } \begin{flushright} $\blacksquare$ \end{flushright} \end{proof} \noindent Following the above proposition, setting $\gamma$ to the above lower bound guarantees $\epsilon$-orthogonality; for instance, when $K=2$, $\delta=0.01$ and provided that $\gamma \geq 530$, one may obtain $0.01$-orthogonality, which is almost strict orthogonality. This property is satisfied as long as one slightly perturbs the entries of $\{\hat{{\bf A}}_k\}_k$ with random noise during SGD training\footnote{\scriptsize whatever the range of entries in these matrices $\{\hat{{\bf A}}_k\}_k$.}. However, this may still lead to another limitation; precisely, bad local minima are observed due to an {\it early} convergence to crisp adjacency matrices. We prevent this by steadily annealing the temperature $1/\gamma$ of the softmax through epochs of SGD (using $\frac{\gamma.\textrm{epoch}}{\textrm{max\_epochs}}$ instead of $\gamma$) in order to make the optimization focus first on the loss; then, as optimization evolves, the temperature cools down and allows reaching the aforementioned lower bound (thereby crispmax) and $\epsilon$-orthogonality at convergence.
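A minimal PyTorch sketch of this crispmax reparametrization, together with a numeric check of the lower bound of the proposition, is given below; the function names and the linear annealing ramp are our own reading of the text, not the authors' code:
\begin{verbatim}
import math
import torch

def crispmax(A_hat, gamma):
    """A_hat: (K, n, n) free matrices. An entrywise softmax across the
    K axis; for large gamma each (i, j) entry is attributed to (almost)
    a single matrix, which enforces epsilon-orthogonality."""
    return torch.softmax(gamma * A_hat, dim=0)

def gamma_lower_bound(K, delta, eps):
    """Lower bound of the proposition guaranteeing eps-orthogonality."""
    s = math.sqrt(1.0 - 2.0 * eps)
    return math.log(K * s / (1.0 - s) + 1.0) / delta

print(gamma_lower_bound(K=2, delta=0.01, eps=0.01))  # approx. 529

# Annealed schedule through SGD epochs (gamma * epoch / max_epochs):
gamma_t = lambda epoch, max_epochs, gamma: gamma * epoch / max_epochs
\end{verbatim}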
\subsection{Symmetry \& Combination}\label{symm} Symmetry is obtained using weight sharing, i.e., by constraining the upper and the lower triangular parts of the matrices $\{{\bf A}_k\}_k$ to share the same entries. This is guaranteed by considering the reparametrization of each matrix as ${\bf A}_k=\frac{1}{2} (\hat{{\bf A}}_k+\hat{{\bf A}}_k^\top)$, with $\hat{{\bf A}}_k$ being a free matrix. Starting from symmetric $\{{\bf A}_k\}_k$, weight sharing is maintained through SGD by tying pairwise symmetric entries of the gradient $\frac{\partial {E}}{\partial \v(\{\hat{{\bf A}}_k\}_k)}$, and this is equivalently obtained by multiplying the original gradient $\frac{\partial {E}}{\partial \v(\{{{\bf A}}_k\}_k)}$ by the Jacobian $\JJ_\textrm{sym}= \frac{1}{2} \big[1_{\{k=k'\}}. 1_{\{(i=i',j=j') \vee (i=j',j=i')\}}\big]_{ijk,i'j'k'}$, which is again extremely sparse and highly efficient to evaluate.\\ One may combine symmetry with all the aforementioned constraints by multiplying the underlying Jacobians, so the final gradient is obtained by multiplying the original one as \begin{equation}\label{eq0000} \displaystyle \frac{\partial {E}}{\partial \v(\{\hat{{\bf A}}_k\}_k)} = \displaystyle \JJ_\textrm{(sym or stc)}. \JJ_\textrm{orth}. \frac{\partial {E}}{\partial \v(\{{{\bf A}}_k\}_k)}. \end{equation} Since all the Jacobians are sparse, their product provides an extremely sparse Jacobian. This order of application is strict, as orthogonality is preserved after the two other operations, while the converse is not necessarily guaranteed at the end of the optimization process. Note that symmetry could be combined with orthogonality but not with stochasticity, as the latter may undo the effect of symmetry if applied subsequently; the converse is also true. \section{Experiments} We evaluate the performance of our GCN learning framework on the challenging task of action recognition \cite{ChenCVPR2013,DongCVPR2017,WangECC2016, TranICCV2015}, using the SBU Kinect dataset \cite{SBU12}. The latter is an interaction dataset acquired using the Microsoft Kinect sensor; it includes in total 282 video sequences belonging to $8$ categories: ``approaching'', ``departing'', ``pushing'', ``kicking'', ``punching'', ``exchanging objects'', ``hugging'', and ``hand shaking'', with variable duration, viewpoint changes and interacting individuals (see examples in Fig. \ref{fig1}). In all these experiments, we use the same evaluation protocol as the one suggested in \cite{SBU12} (i.e., train-test split) and we report the average accuracy over all the classes of actions. \def{\hat{\w}}{{\hat{\w}}} \subsection{Video skeleton description}\label{graphc} \indent Given a video ${\cal V}$ in SBU as a sequence of skeletons, each keypoint in these skeletons defines a labeled trajectory through successive frames (see Fig.~\ref{fig1}). Considering a finite collection of trajectories $\{v_j\}_j$ in ${\cal V}$, we process each trajectory using {\it temporal chunking}: first we split the total duration of a video into $M$ equally-sized temporal chunks ($M=8$ in practice), then we assign the keypoint coordinates of a given trajectory $v_j$ to the $M$ chunks (depending on their time stamps) prior to concatenating the averages of these chunks; this produces the description of $v_j$ (again denoted as $\psi(v_j) \in \mathbb{R}^{s}$ with $s=3 \times M$) and $\{\psi(v_j)\}_j$ constitutes the raw description of nodes in a given video ${\cal V}$.
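The following NumPy sketch summarizes this temporal chunking; the function name is ours, and trajectories with at least $M$ frames are assumed:
\begin{verbatim}
import numpy as np

def temporal_chunking(trajectory, M=8):
    """Describe one keypoint trajectory (T, 3) by M chunk averages:
    split the frames into M equally-sized temporal chunks, average the
    coordinates inside each chunk and concatenate, giving a vector of
    size 3 * M that is frame-rate and duration agnostic (T >= M)."""
    chunks = np.array_split(np.asarray(trajectory), M, axis=0)
    return np.concatenate([c.mean(axis=0) for c in chunks])

psi = temporal_chunking(np.random.randn(100, 3))   # 100 frames
assert psi.shape == (24,)                          # s = 3 * M = 24
\end{verbatim}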
Note that two trajectories $v_j$ and $v_k$, with similar keypoint coordinates but arranged differently in time, will be considered as very different when using temporal chunking. Note also that besides being compact and discriminant, this temporal chunking gathers the advantages -- while discarding the drawbacks -- of two widely used families of techniques, namely {\it global averaging techniques} (invariant but less discriminant) and {\it frame resampling techniques} (discriminant but less invariant). Put differently, temporal chunking produces discriminant raw descriptions that preserve the temporal structure of trajectories while being {\it frame-rate} and {\it duration} agnostic. \begin{figure}[hpbt] \begin{center} \centerline{\scalebox{0.38}{\input{processing.pdf_t}}} \caption{\it This figure shows the whole keypoint tracking and description process.} \label{fig1} \end{center} \end{figure} \subsection{Setting \& Performances}\label{baselines} We trained the GCN networks end-to-end for 3,000 epochs with a batch size equal to $200$, a momentum of $0.9$ and a learning rate (denoted as $\nu(t)$) inversely proportional to the speed of change of the cross entropy loss used to train our networks; when this speed increases (resp. decreases), $\nu(t)$ decreases as $\nu(t) \leftarrow \nu(t-1) \times 0.99$ (resp. increases as $\nu(t) \leftarrow \nu(t-1) \slash 0.99$). All these experiments are run on a GeForce GTX 1070 GPU device (with 8 GB memory) and no data augmentation is used. \\ \noindent {\bf Baselines.} We compare the performances of our GCN against two power map baselines. The power map, closely related to our contribution, is given as ${\bf A}_k={\bf A}^{\!(k)}$ with ${\bf A}^{\!(k)}={\bf A}^{\!(k-1)} {\bf A}$, ${\bf A}^{\!(0)}={\bf I}$, and this defines nested supports for convolutions. Two variants of this baseline are considered in our experiments: \begin{itemize} \item {\bf Handcrafted.} All the matrices $\{{\bf A}_k\}_k$ are evaluated upon a {\it handcrafted} adjacency matrix ${\bf A}$ (set using the original skeleton). In this setting, orthogonality is obtained when {\it only a subset of edges in ${\cal G}$ (or equivalently in the adjacency matrix ${\bf A}$) are kept}; this subset corresponds to edges of a spanning tree of ${\bf A}$ obtained using Kruskal's algorithm \cite{Kruskal1956}. With this initial setting of ${\bf A}$, orthogonality is maintained through $\{{\bf A}_k\}_k$ by updating only the nonzero entries of these matrices. Symmetry (resp. stochasticity) is obtained by taking $\frac{1}{2} ({\bf A}+{\bf A}^\top)$ (resp. ${\bf A} {\bf D}({\bf A})^{-1}$ with ${\bf D}({\bf A})$ being the degree matrix of ${\bf A}$) instead of ${\bf A}$ so all the resulting matrices $\{{\bf A}_k\}_k$ will preserve these two properties. \item {\bf Learned.} In this variant, all the {\it unmasked entries of the matrices $\{{\bf A}_k\}_k$ are learned}. In contrast to the handcrafted setting, orthogonality is obtained as a part of the optimization process (as already discussed in Section~\ref{ortho}), so the original matrix ${\bf A}$ does not necessarily correspond to a spanning tree. Similarly, stochasticity and symmetry are implemented as a part of the optimization process; nonetheless, symmetry is structural, i.e., it is obtained by further constraining the structure of the masks, defining the nonzero entries of $\{{\bf A}_k\}_k$, to be symmetric.
\end{itemize} \begin{table}[ht] \begin{center} \resizebox{0.79\columnwidth}{!}{ \begin{tabular}{cc|c|c|c|c|c|c|c} \backslashbox{Oper}{Const} & & \rotatebox{55}{none} & \rotatebox{55}{sym} & \rotatebox{55}{orth} & \rotatebox{55}{stc} & \rotatebox{55}{sym+orth} & \rotatebox{55}{orth+stc} & \rotatebox{55}{Mean} \\ \hline \hline \multirow{4}{*}{\rotatebox{10}{HPM. }} & $K=1$ & 89.2308 & 92.3077 & -- & 89.2308 & -- & -- & 90.2564 \\ & $K=4$ & 87.6923 & 89.2308 & 89.2308 & 87.6923 & 90.7692 & 92.3077 & 89.4872 \\ & $K=8$ & 90.7692 & 95.3846 & 92.3077 & 90.7692 & 92.3077 & 92.3077 & 92.3077 \\ & Mean & 89.2308 & 92.3077 & 90.7692 & 89.2308 & 91.5384 & 92.3077 & 90.7692 \\ \hline \multirow{4}{*}{\rotatebox{10}{LPM. }} & $K=1$ & 92.3077 & 87.6923 & -- & 95.3846 & -- & -- & 91.7949 \\ & $K=4$ & 92.3077 & 92.3077 & 93.8462 & 95.3846 & 90.7692 & 96.9231 & 93.5897 \\ & $K=8$ & 95.3846 & 90.7692 & 87.6923 & 93.8462 & 93.8462 & 92.3077 & 92.3077 \\ & Mean & 93.3333 & 90.2564 & 90.7692 & 94.8718 & 92.3077 & 94.6154 & 92.7180 \\ \hline \multirow{4}{*}{\rotatebox{10}{Our}} & $K=1$ & 95.3846 & 93.8462 & -- & 95.3846 & -- & -- & 94.8718 \\ & $K=4$ & 93.8462 & 95.3846 & 95.3846 & 96.9231 & 93.8462 & \bf 98.4615 & 95.6410 \\ & $K=8$ & 92.3077 & 93.8462 & 95.3846 & 90.7692 & 95.3846 & 90.7692 & 93.0769 \\ & Mean & 93.8462 & 94.3590 & 95.3846 & 94.3590 & 94.6154 & 94.6154 & 94.4615 \\ \hline \end{tabular}} \end{center} \caption{\it Detailed performances on SBU using handcrafted and learned power map aggregation operators as well as our learned GCN operators, w.r.t. combinations of (i) ``constraints'' (orth, sym, and stc stand for orthogonality, symmetry and stochasticity respectively) and (ii) different values of $K$. Note that orthogonality is obviously not applicable when $K=1$.}\label{table21} \end{table} \noindent {\bf Performances, Ablation and Comparison.} Table~\ref{table21} shows a comparison of action recognition performances, using our GCN (with different settings) against the two GCN baselines: handcrafted and learned power map GCNs, dubbed HPM and LPM respectively. In these results, we consider different numbers of matrix operators. From all these results, we observe a clear and consistent gain of our GCN w.r.t. these two baselines; at least one of the settings ($K=1$, $K=4$ or $K=8$) provides a substantial gain, with globally a clear advantage for $K=4$ compared to the two other settings. We also observe, from ablation (see columns of Table~\ref{table21}), a positive impact (w.r.t. these baselines) when constraining the learned matrices to be stochastic and/or orthogonal, while the impact of symmetry is not as clearly established as the two others, though globally positive. This gain, especially with stochasticity and orthogonality, reaches its highest values when $K$ is sufficiently (but not very) large; this follows from the small size of the original skeletons (diameter and dimensionality of the graphs and the signal) used for action recognition, which constrains the required number of adjacency matrices. Hence, with few learned matrices (see also Fig. \ref{fig:A1}), our method is able to learn relevant representations for action recognition. Moreover, the ablation study in Table~\ref{table21} shows that our GCN better captures the topology of the context (i.e., the neighborhood system defined by the learned matrices $\{{\bf A}_k\}_k$). In contrast, the baselines are limited when the context is fixed, and also when it is learned using a fixed (possibly biased) a priori about the structure of the matrices $\{{\bf A}_k\}_k$.
From all these results, it follows that learning convolutional parameters is not enough to recover from this bias. In sum, the gain of our GCN results from (i) the high flexibility of the proposed design, which allows learning complementary aspects of topology as well as matrix parameters, and also (ii) the regularization effect of our constraints, which mitigate overfitting. \\ \noindent Finally, we compare the classification performances of our GCN against other related methods in action recognition, ranging from sequence-based methods such as LSTMs and GRUs \cite{DeepGRU,GCALSTM,STALSTM} to deep graph (non-vectorial) methods based on spatial and spectral convolution \cite{kipf17,SGCCONV19,Bresson16}. From the results in Table \ref{compare}, our GCN brings a substantial gain w.r.t. state-of-the-art methods, and provides comparable results with the best vectorial methods. \begin{table}[!htb] \begin{center} \resizebox{0.55\linewidth}{!}{ \begin{adjustbox}{angle=0} \setlength\tabcolsep{2.4pt} \begin{tabular}{c||cccccccccccccccccc} \rotatebox{90}{Perfs} & \rotatebox{90}{90.00} & \rotatebox{90}{96.00} & \rotatebox{90}{94.00}& \rotatebox{90}{96.00}& \rotatebox{90}{49.7 }& \rotatebox{90}{80.3 }& \rotatebox{90}{86.9 }& \rotatebox{90}{83.9 }& \rotatebox{90}{80.35 }& \rotatebox{90}{90.41}& \rotatebox{90}{93.3 } & \rotatebox{90}{90.5}& \rotatebox{90}{91.51}& \rotatebox{90}{94.9}& \rotatebox{90}{97.2}& \rotatebox{90}{95.7}& \rotatebox{90}{93.7 } & \rotatebox{90}{{{\bf 98.46} } }\\ & & & & & & & & & & & & & & & & & & \\ \rotatebox{90}{Methods} & \rotatebox{90}{ GCNConv \cite{kipf17}} & \rotatebox{90}{ArmaConv \cite{ARMACONV19}} & \rotatebox{90}{ SGCConv \cite{SGCCONV19}} & \rotatebox{90}{ ChebyNet \cite{Bresson16}}& \rotatebox{90}{ Raw coordinates \cite{SBU12}} & \rotatebox{90}{Joint features \cite{SBU12}} & \rotatebox{90}{Interact Pose \cite{InteractPose}} & \rotatebox{90}{CHARM \cite{CHARM15}} & \rotatebox{90}{ HBRNN-L \cite{HBRNNL15}} & \rotatebox{90}{Co-occurrence LSTM \cite{CoOccurence16}} & \rotatebox{90}{ ST-LSTM \cite{STLSTM16}} & \rotatebox{90}{ Topological pose ordering\cite{velocity2}} & \rotatebox{90}{ STA-LSTM \cite{STALSTM}} & \rotatebox{90}{ GCA-LSTM \cite{GCALSTM}} & \rotatebox{90}{ VA-LSTM \cite{VALSTM}} & \rotatebox{90}{DeepGRU \cite{DeepGRU}} & \rotatebox{90} {Riemannian manifold trajectory\cite{RiemannianManifoldTraject}} & \rotatebox{90}{Our best GCN model (orth+stc+$K=4$)} \\ \end{tabular} \end{adjustbox}} \vspace{0.5cm} \caption{\it Comparison against state-of-the-art methods.} \label{compare} \end{center} \end{table} \begin{figure}[tbp] \center \includegraphics[width=0.42\linewidth]{A0.pdf}\includegraphics[width=0.42\linewidth]{Intrinsic.pdf}\\ \includegraphics[width=0.42\linewidth]{A1.pdf}\includegraphics[width=0.42\linewidth]{A2.pdf}\\ \includegraphics[width=0.42\linewidth]{A3.pdf}\includegraphics[width=0.42\linewidth]{A4.pdf} \caption{\it This figure shows (top) an original skeleton with its intrinsic node-to-node relationships useful for {\it person identification}, and (middle/bottom) four types of extrinsic node-to-node relationships found to be the most discriminating for {\it action recognition} when using the method introduced in this paper (the exact setting corresponds to Table \ref{table21}, with {\it our} learned matrix operators, using the {\it orthogonality} constraint and {\it $K=4$}).
{\bf (Better to zoom the PDF version to view the learned node-to-node relationships.)}} \label{fig:A1} \end{figure} \section{Conclusion} We introduce in this paper a novel method which learns different matrix operators that ``optimally'' define the support of aggregations and convolutions in graph convolutional networks. We investigate different settings which allow extracting non-differential and differential features as well as their combination before applying convolutions. We also consider different constraints (including orthogonality and stochasticity) which act as regularizers on the learned matrix operators and make their learning efficient while being highly effective. Experiments conducted on the challenging task of skeleton-based action recognition show the clear gain of the proposed method w.r.t. different baselines as well as the related work.
\section{Introduction} \label{s1} The study of the eigenvalues of Schr\"odinger operators below the essential spectrum goes back over fifty years to Bargmann \cite{Barg3}, Birman \cite{Bir61}, and Schwinger \cite{Schw}, and of power bounds on the eigenvalues to Lieb--Thirring \cite{LT1,LT2}. There has been considerably less work on eigenvalues in gaps---much of what has been studied followed up on seminal work by Deift and Hempel \cite{DH86}; see \cite{AADH,ADH,GGHK,GS,Hem89,Hem92,Hem97,Klaus,Lev,Saf98,Saf01JMAA,Saf01} and especially work by Birman and collaborators \cite{Bir90,Bir91,Bir91-ASM,Bir91-FAA,Bir95,Bir97,Bir98,BLS,BP,BR,BW}. Following Deift--Hempel, this work has mainly focused on the set of $\lambda$'s so that some given fixed $e$ in a gap of $\sigma(A)$ is an eigenvalue of $A+\lambda B$ and the growth of the number of eigenvalues as $\lambda\to\infty$, most often for closed intervals strictly inside the gap. Most, but not all, of this work has focused on $B$'s of a definite sign. Our goal in this note is to make an elementary observation that, as regards behavior at an edge for fixed $\lambda$, allows perturbations of either sign. The decoupling in steps we use does not work for the question raised by Deift--Hempel, which may be why it does not seem to be in the literature. We will present two applications: a Cwikel--Lieb--Rozenblum-type finiteness result \cite{Cwi,Lieb,Roz} for suitable gaps in $d\geq 3$ periodic Schr\"odinger operators and a critical power estimate on eigenvalues in some one-dimensional almost periodic problems. To state our results precisely, we need some notation. For any selfadjoint operator $C$, $E_\Omega(C)$ will denote the spectral projections for $C$. We define \begin{equation} \label{1.1} \#(C\in\Omega) =\dim (E_\Omega(C)) \end{equation} and \begin{equation} \label{1.2} \#(C>\alpha) = \dim (E_{(\alpha,\infty)}(C)) \end{equation} and similarly for $\#(C\geq\alpha)$, $\#(C<\alpha)$, $\#(C\leq\alpha)$. We will write \begin{equation} \label{1.3} B=B_+-B_- \end{equation} with $B_\pm\geq 0$. While often we will take $B_\pm =\max(\pm B,0)$, we do not require $B_+B_- =0$ or $[B_+,B]=0$. Our main technical result, which we will prove in Section~\ref{s2}, is \begin{theorem}\label{T1.1} Let $A$ be a selfadjoint operator and $x,y\in{\mathbb{R}}$ so $(x,y)\cap \sigma(A)=\emptyset$. Let $B$ be given by \eqref{1.3} with $B_+,B_-$ both compact. Let $C=A+B$. Let $x < e_0 < e_1 =\f12 (x+y)$. Then \begin{equation} \label{1.4} \#(C\in (e_0, e_1))\leq \#(B_+^{1/2} (e_0-A)^{-1} B_+^{1/2}\geq 1) + \#(B_-\geq\tfrac12 (y-x)) \end{equation} \end{theorem} In Section~\ref{s3}, we discuss an analog when $A$ is unbounded but bounded below and $B_\pm$ are only relatively compact. If $V$ is a periodic locally $L^{d/2}$ function on ${\mathbb{R}}^d$ ($d\geq 3$), then $A=-\Delta +V$ can be written as a direct integral of operators, $A(k)$, with compact resolvent, with the integral over the fundamental cell of a dual lattice (see \cite{RS4}).
If $\varepsilon_1(k) \leq \varepsilon_2 (k) \leq\dots$ are the eigenvalues of $A(k)$, then $(x,y)$ is a gap in $\sigma(A)$ (i.e., a connected component of ${\mathbb{R}}\setminus\sigma(A)$) if and only if there is $\ell$ with \begin{equation} \label{1.5} \max_k\, \varepsilon_{\ell-1}(k) =x<y =\min_k \, \varepsilon_\ell(k) \end{equation} We say $y$ is a nondegenerate gap edge if and only if \begin{equation} \label{1.6} \min_k\, \varepsilon_{\ell+1}(k) >y \end{equation} and $\varepsilon_\ell(k)=y$ at a finite number of points $\{k_j\}_{j=1}^N$ in the unit cell so that for some $C$ and all $k$ in the unit cell, \begin{equation} \label{1.7} \varepsilon_\ell (k)-y\geq C\min_j\abs{k-k_j}^2 \end{equation} There is a similar definition at the bottom edge if $x>-\infty$. It is a general theorem \cite{S181} that the bottom edge is always nondegenerate. In Section~\ref{s4}, we will prove \begin{theorem}\label{T1.2} Let $d\geq 3$. Let $V\in L_\text{\rm{loc}}^{d/2}({\mathbb{R}}^d)$ be periodic and let $W\in L^{d/2}({\mathbb{R}}^d)$. Let $(x,y)$ be a gap in the spectrum of $A=-\Delta+V$ which is nondegenerate at both ends, and let $N_{(x,y)}(W)=\#(-\Delta+V+W\in (x,y))$. Then $N_{(x,y)}(W) <\infty$. \end{theorem} This will be a simple extension of the result of Birman \cite{Bir95}, who proved this if $W$ has a fixed sign. Note we have not stated a bound by $\|W\|_{d/2}^{d/2}$. This is discussed further in Section~\ref{s4}. In the final section, Section~\ref{s5}, we will consider certain two-sided Jacobi matrices, $J$, on $\ell^2({\mathbb{Z}})$ with \begin{equation} \label{1.9} J_{k\ell} =\begin{cases} b_k & k=\ell \\ a_k & \ell=k+1 \\ a_{k-1} & \ell =k-1 \\ 0 & \abs{\ell-k}\geq 2 \end{cases} \end{equation} If $E=\cup_{j=1}^{\ell+1} E_j$ is a finite union of bounded closed disjoint intervals, there is an isospectral torus ${\mathcal T}_E$ associated to $E$ of almost periodic $J$'s with $\sigma(J)= E$ (see \cite{AK,Apt,CSZ,Cr,DMN,PY,SY,Widom}). We conjecture the following: \begin{CON} Let $J_0$ lie in some ${\mathcal T}_E$. Let $J=J_0+\delta J$ be a Jacobi matrix for which $\delta J$ is trace class, that is, \begin{equation} \label{1.10} \sum_n\, \abs{\delta a_n} + \abs{\delta b_n}<\infty \end{equation} Then \begin{equation} \label{1.11} \sum_{\lambda\in\sigma(J)\setminus E} \text{\rm{dist}}(\lambda, E)^{1/2} <\infty \end{equation} \end{CON} For $E=[-2,2]$, so that $J_0$ is the free Jacobi matrix with $a_n\equiv 1$, $b_n\equiv 0$, this is a result of Hundertmark--Simon \cite{HunS}. It has recently been proven \cite{DKS2007} for the case where $J_0$ is periodic, and it has also been proven \cite{CLTB} that \eqref{1.11} holds for the sum over $\lambda$'s above the top of the spectrum or below the bottom. In Section~\ref{s5}, we will prove \begin{theorem}\label{T1.3} If \eqref{1.10} holds, then \eqref{1.11} holds if $\f12$ is replaced by any $\alpha >\f12$. \end{theorem} \begin{theorem}\label{T1.4} If \begin{equation} \label{1.12} \sum_n [\log(\abs{n}+1)]^{1+\varepsilon} [\abs{\delta a_n} + \abs{\delta b_n}] <\infty \end{equation} for some $\varepsilon >0$, then \eqref{1.11} holds. \end{theorem} Both the conjecture and Theorem~\ref{T1.4} are interesting because they imply that the spectral measure obeys a Szeg\H{o} condition. This is discussed in \cite{CSZ}. \section{Abstract Bounds in Gaps (Compact Case)} \label{s2} Our goal here is to prove Theorem~\ref{T1.1}.
We begin by recalling the version of the Birman--Schwinger principle for points in gaps, which is essentially the key to \cite{AADH,ADH,Bir90,Bir91,Bir91-ASM,Bir91-FAA,Bir95,Bir97,Bir98,BLS,BP,BR,BW, DH86,GGHK,GS,Hem89,Hem92,Hem97,Klaus,Lev,Saf98,Saf01JMAA,Saf01}: \begin{proposition}\label{P2.1} Let $A$ be a bounded selfadjoint operator with $(x,y) \cap\sigma(A)=\emptyset$. Let $B$ be compact with $B\geq 0$. Let $e\in (x,y)$. Then \begin{equation} \label{2.1} e\in \sigma(A+\mu B) \Leftrightarrow \mu^{-1}\in\sigma (B^{1/2} (e-A)^{-1} B^{1/2}) \end{equation} with equal multiplicity. In particular, \begin{equation} \label{2.2} \#(A+B\in (e,y))\leq \#(B^{1/2} (e-A)^{-1} B^{1/2}\geq 1) \end{equation} \end{proposition} \begin{proof} This is so elementary that we sketch the proof. If for $\varphi\neq 0$, \begin{equation} \label{2.3} (A+\mu B)\varphi =e\varphi \end{equation} then \begin{equation} \label{2.4} B\varphi\neq 0 \end{equation} since $e\notin\sigma(A)$. Moreover, \begin{equation} \label{2.5} (e-A)^{-1} B\varphi =\mu^{-1}\varphi \end{equation} and \eqref{2.5} implies \eqref{2.3}. Thus \begin{equation} \label{2.6} e\in\sigma(A+\mu B)\Leftrightarrow \mu^{-1}\in\sigma((e-A)^{-1}B) \end{equation} and \eqref{2.1} follows by $\sigma (CD)\setminus\{0\}=\sigma(DC)\setminus\{0\}$ (see, e.g., Deift \cite{Deift78}). Since $\sigma(A+\mu B)\subset \sigma(A) + [-\mu\|B\|, \mu\|B\|]$ and discrete eigenvalues are continuous in $\mu$ and strictly monotone by \eqref{2.4} and (see \cite{RS4}) \begin{equation} \label{2.7} \frac{de(\mu)}{d\mu} = \langle\varphi,B\varphi\rangle \end{equation} the eigenvalues of $A+B$ in $(x,y)$ must pass through $e$ as $\mu$ goes from $0$ to $1$, and \eqref{2.2} follows from \eqref{2.1}. We only have inequality in \eqref{2.2} since eigenvalues can get reabsorbed at $y$. \end{proof} \begin{proof}[Proof of Theorem~\ref{T1.1}] Let $C_+ =A+B_+$ so $C=C_+-B_-$. By Proposition~\ref{P2.1}, if \begin{equation} \label{2.8} n_1=\#(C_+\in (e_0,e_1)) \qquad n_2 =\#(C_+\in (e_1,y)) \end{equation} then \begin{equation} \label{2.9} n_1 + n_2 \leq \#(B_+^{1/2} (e_0-A)^{-1} B_+^{1/2} \geq 1) \end{equation} By a limiting argument, we can suppose that $e_1$ is not an eigenvalue of $C_+$. Since eigenvalues of $C_+-\mu B_-$ are strictly monotone decreasing in $\mu$, the number of eigenvalues of $C$ in $(e_0,e_1)$ can only increase by passing through $e_1$. By repeating the argument in Proposition~\ref{P2.1}, \begin{equation} \label{2.10} \#(C\in (e_0,e_1)) \leq n_1 + \#(B_-^{1/2} (C_+ -e_1)^{-1} B_-^{1/2}\geq 1) \end{equation} Now write \begin{equation} \label{2.11} B_-^{1/2} (C_+-e_1)^{-1} B_-^{1/2} = D_1 + D_2 + D_3 \end{equation} where $D_1$ has $E_{(-\infty, e_1)}(C_+)$ inserted in the middle, $D_2$ an $E_{(e_1,y)}(C_+)$, and $D_3$ an $E_{[y,\infty)}(C_+)$. Since $D_1\leq 0$ and $\text{\rm{rank}} (D_2)\leq n_2$, we see \begin{equation} \label{2.12} \#(B_-^{1/2} (C_+-e_1)^{-1} B_-^{1/2} \geq 1)\leq n_2 + \#(D_3\geq 1) \end{equation} Since $(C_+-e_1)^{-1} E_{[y,\infty)}(C_+)\leq (y-e_1)^{-1}=[\frac12(y-x)]^{-1}$, we have \begin{equation} \label{2.12a} D_3 \leq [\tfrac12\, (y-x)]^{-1} B_- \end{equation} and thus \begin{align} \#(D_3\geq 1) & \leq \# ([\tfrac12\, (y-x)]^{-1} B_-\geq 1) \notag \\ &= \#(B_- \geq \tfrac12\, (y-x)) \label{2.13} \end{align} \eqref{2.9}, \eqref{2.10}, \eqref{2.12}, and \eqref{2.13} imply \eqref{1.4}.
\end{proof} \section{Abstract Bounds in Gaps (Relatively Compact Case)} \label{s3} In this section, we suppose $A$ is a semibounded selfadjoint operator with \begin{equation} \label{3.1} q=\inf \sigma(A) \end{equation} We will suppose $B$ is a form-compact perturbation, which is a difference of two positive form-compact perturbations. We abuse notation and write compact operators \begin{equation} \label{3.2} B_\pm^{1/2} (A-e)^{-1} B_\pm^{1/2} \end{equation} for $e\notin\sigma(A)$ even though $B_\pm$ need not be operators --- \eqref{3.2} can be defined via forms in a standard way. In the bounded case, we only considered intervals in the lower half of a gap since $A\to -A$, $B\to -B$ flips half-intervals. But, as has been noted in the unbounded case (see, e.g., \cite{Bir95,Saf98}), there is now an asymmetry, so we will state separate results. We start with the bottom half case: \begin{theorem}\label{T3.1} Let $A$ be a semibounded selfadjoint operator and $x,y\in{\mathbb{R}}$ so $(x,y)\cap \sigma(A)=\emptyset$. Let $B=B_+-B_-$ with $B_\pm$ form-compact positive perturbations of $A$. Let $C=A+B$ and $x < e_0 < e_1 =\f12 (x+y)$. Then \begin{equation} \label{3.3} \begin{split} \#(C\in (e_0,e_1)) &\leq \#(B_+^{1/2} (e_0-A)^{-1} B_+^{1/2}\geq 1) \\ &\quad + \# \biggl( B_-^{1/2} (A-q+1)^{-1} B_-^{1/2} \geq \tfrac12\, \biggl[ \frac{y-x}{y-q+1}\biggr]\biggr) \end{split} \end{equation} \end{theorem} \begin{proof} We follow the proof of Theorem~\ref{T1.1} without change until \eqref{2.12a} noting that instead \begin{align} (C_+-e_1)^{-1} E_{[y,\infty)}(C_+) &\leq \frac{y-q+1}{y-e_1}\, (C_+ - q+1)^{-1} \label{3.4} \\ &\leq \frac{y-q+1}{y-e_1}\, (A-q+1)^{-1} \label{3.5} \end{align} since $q\leq A\leq C_+$ and \[ \sup_{x\geq y}\, \frac{x-q+1}{x-e_1} \] is taken at $x=y$ since $q-1 <e_1$. By \eqref{3.5}, \[ \#(D_3\geq 1) \leq \# \biggl(B_-^{1/2} (A-q+1)^{-1} B_-^{1/2} \geq \frac{y-e_1}{y-q+1} \biggr) \qedhere \] \end{proof} \begin{theorem}\label{T3.2} Let $A$ be a semibounded selfadjoint operator and $x,y\in{\mathbb{R}}$ so $(x,y)\cap\sigma(A)=\emptyset$. Let $B=B_+-B_-$ with $B_\pm$ form-compact positive perturbations of $A$. Let $C=A+B$ and $e_1=\f12 (x+y) <e_0 <y$. Then \begin{equation} \label{3.6} \begin{split} \#(C\in (e_1,e_0)) &\leq \#(B_-^{1/2} (A-e_0)^{-1} B_-^{1/2} \geq 1) \\ &\quad + \#(B_+^{1/2} (A-B_- - e_1)^{-1} E_{(-\infty,x)} (A-B_-) B_+^{1/2} \geq 1) \end{split} \end{equation} \end{theorem} \begin{proof} Identical to the proof of Theorem~\ref{T1.1} through \eqref{2.12a}. \end{proof} The second term in \eqref{3.6} is easily seen to be finite since the operator is compact. However, any bound depends on both $B_+$ and $B_-$. \section{$L^{d/2}$ Bounds in Gaps for Periodic Schr\"odinger Operators} \label{s4} Birman \cite{Bir95} proved for $V$\!, as in Theorem~\ref{T1.2}, and any $W$ that uniformly in any gap $(x,y)$, $\sup_{\lambda\in (x,y)} \|\abs{W}^{1/2} (-\Delta +V-\lambda)^{-1} \abs{W}^{1/2}\|_{{\mathcal I}_{d/2}^w} \leq c\|W\|_{d/2}$ where $\|\cdot\|_{{\mathcal I}_{d/2}^w}$ is a weak ${\mathcal I}_d$ trace class norm \cite{S73}. To be precise, in his Proposition~3.1, he proved $\|\abs{W}^{1/2} (-\Delta +V -\lambda_0)^{-1} \abs{W}^{1/2}\|_{{\mathcal I}_{d/2}}$ is finite away from $x$ and $y$, and then in (3.15), he proved the weak estimate at the end points.
He used this to prove for $W$ of a definite sign \begin{equation} \label{4.1} N_{(x,y)}(W) \leq c\int_{{\mathbb{R}}^d} \abs{W(z)}^{d/2}\, dz \end{equation} It implies relative compactness, and given Theorems~\ref{T3.1} and \ref{T3.2}, proves Theorem~\ref{T1.2}. Note that, by Theorem~\ref{T3.1}, we get for any $x' >x$, \begin{equation} \label{4.2} N_{(x',y)}(W) \leq c_{x'} \int_{{\mathbb{R}}^d} \abs{W(z)}^{d/2}\, dz \end{equation} but we do not get such a bound for $x'=x$ since there is a $W_-,W_+$ cross term in \eqref{3.6}. \section{Gaps for Perturbations of Finite Gap Almost Periodic Jacobi Matrices} \label{s5} Our goal here is to prove Theorems~\ref{T1.3} and \ref{T1.4}. Let \begin{equation} \label{5.1} G_0 (n,m;\lambda) =\langle \delta_n, (J_0-\lambda)^{-1} \delta_m\rangle \end{equation} and let $(\lambda_0,\lambda_1)$ be a gap in $\sigma (J_0)$. As input, we need two estimates for $G_0$ proven in \cite{CSZ}. First we have \begin{equation} \label{5.2} \abs{G_0 (n,m;\lambda)} \leq C\text{\rm{dist}} (\lambda, \sigma(J_0))^{-1/2} \end{equation} uniformly in real $\lambda\notin\sigma (J_0)$ and $n$ and $m$. To describe the other estimate, we need some notions. At a band edge, $\lambda_0$ (here and below, we study $\lambda_0$ but there is also an analysis at $\lambda_1$), there is a unique almost periodic sequence $\{u_n (\lambda_0)\}_{n=-\infty}^\infty$ solving $(J_0-\lambda_0) u_n =0$. If $u_n=0$, we say $n$ is a resonance point. If $u_n\neq 0$, we have a nonresonance. Since $u_n=0\Rightarrow u_{n\pm 1}\neq 0$, we have lots of nonresonance points. Without loss, we will suppose henceforth that $0$ is a nonresonance point. At a nonresonance point, $\lim_{\lambda\downarrow \lambda_0} \text{\rm{dist}} (\lambda,\lambda_0)^{1/2} G_0 (n,n;\lambda)\neq 0$. The Dirichlet Green's function is defined by \begin{equation} \label{5.3} G_0^D (n,m;\lambda) = G_0 (n,m;\lambda) - G_0 (0,0;\lambda)^{-1} G_0 (n,0;\lambda) G_0 (0,m;\lambda) \end{equation} Then \cite{CSZ} proves that if $0$ is a nonresonance at $\lambda_0$, then for some small $\varepsilon$, \begin{align} \lambda\in (\lambda_0, \lambda_0 +\varepsilon) &\Rightarrow \abs{G_0^D (n,n;\lambda)} \leq Cn \label{5.4} \\ &\Rightarrow \abs{G_0^D (n,n;\lambda)} \leq C\abs{\lambda-\lambda_0}^{-1/2} \label{5.5} \end{align} Following \cite{HunS}, we use (with $c_\pm=\max (\pm c, 0)$ and $a>0$) \begin{equation} \label{5.6} \begin{pmatrix} b&a \\ a & b \end{pmatrix} =\begin{pmatrix} b_+ + a & 0 \\ 0 & b_+ +a \end{pmatrix} -\begin{pmatrix} a+b_- & -a \\ -a & a+b_- \end{pmatrix} \end{equation} to define $\delta J=\delta J_+ -\delta J_-$ where $\delta J_+$ is diagonal and given by \begin{align} (\delta J_+)_{n\,n} &= (\delta b_n)_+ + \delta a_{n-1} + \delta a_n \label{5.7} \\ \intertext{and $(\delta J_-)$ is tridiagonal with} (\delta J_-)_{n\,n+1} &= \delta a_n \label{5.8a} \\ (\delta J_-)_{n\, n-1} &= \delta a_{n-1} \label{5.8b} \\ (\delta J_-)_{n\,n} &= (\delta b_n)_- + \delta a_{n-1} + \delta a_n \label{5.8c} \end{align} We also use the fact obtained via an integration by parts that if $f(\lambda_0)=0$, $f$ continuous on $[\lambda_0, \lambda_0+\varepsilon)$, and $C^1 (\lambda_0, \lambda_0 +\varepsilon)$ with $f' >0$, then \begin{equation} \label{5.9} \sum_{\substack{\lambda\in (\lambda_0, \lambda_0+\varepsilon) \\ \lambda\in \sigma(J)}} f(\lambda) = \int_{\lambda_0}^{\lambda_0 +\varepsilon} f'(\lambda) \#(J\in (\lambda,\lambda_0 +\varepsilon))\, d\lambda \end{equation} Since $f'\in L^1(\lambda_0,\lambda_0+\varepsilon)$ and $\delta J_-$ is compact,
Theorem~\ref{T1.1} implies \begin{equation} \label{5.10} \sum_{\substack{\lambda\in (\lambda_0, \lambda_0+\varepsilon) \\ \lambda\in \sigma(J)}} f(\lambda) <\infty \Leftarrow \int_{\lambda_0}^{\lambda_0+\varepsilon} \# ((\delta J_+)^{1/2} (\lambda -J_0)^{-1} (\delta J_+)^{1/2} \geq 1) f'(\lambda)\, d\lambda <\infty \end{equation} This leads to \begin{proposition}\label{P5.1} If $\delta J_\pm$ are trace class and \begin{equation} \label{5.11} \int_{\lambda_0}^{\lambda_0 +\varepsilon} f'(\lambda) \abs{\text{\rm{Tr}} ((\delta J_+)^{1/2} G_0^D (\cdot,\cdot \,;\lambda)(\delta J_+)^{1/2})}\, d\lambda <\infty \end{equation} then \begin{equation} \label{5.12} \sum_{\substack{\lambda\in (\lambda_0, \lambda_0+\varepsilon) \\ \lambda\in \sigma(J)}} f(\lambda) <\infty \end{equation} \end{proposition} \begin{proof} $G_0-G_0^D$ is rank one and $\#(C\geq 1) \leq \|C\|_1$, so \[ \#((\delta J_+)^{1/2} G_0 (\cdot,\cdot\, ;\lambda)(\delta J_+)^{1/2} \geq 1) \leq 1+ \|(\delta J_+)^{1/2} G_0^D (\cdot,\cdot\, ;\lambda)(\delta J_+)^{1/2}\|_1 \] The negative part of $G_0^D (\cdot,\cdot\, ;\lambda)$ is uniformly bounded in norm by $\abs{a-\lambda}^{-1}$ where $a$ is either $\lambda_1$ or the unique eigenvalue of the Dirichlet $J_0$ in $(\lambda_0,\lambda_1)$ and \begin{align*} \|C\|_1 &\leq \text{\rm{Tr}}(C_+) + \text{\rm{Tr}} (C_-) \\ &\leq \text{\rm{Tr}} (C) + 2\text{\rm{Tr}} (C_-) \end{align*} Thus \eqref{5.12} is implied by \eqref{5.10} so long as \eqref{5.11} holds. \end{proof} \begin{proof}[Proof of Theorem~\ref{T1.3}] By \eqref{5.5} and $\delta J_+\in{\mathcal I}_1$, we have \[ \abs{\text{\rm{Tr}}((\delta J_+)^{1/2} G_0^D (\cdot,\cdot\, ;\lambda)(\delta J_+)^{1/2})} \leq C\abs{\lambda -\lambda_0}^{-1/2} \] so the integral in \eqref{5.11} is bounded by \[ C \int_{\lambda_0}^{\lambda_0+\varepsilon} \abs{\lambda -\lambda_0}^{\alpha -1} \abs{\lambda -\lambda_0}^{-1/2} \, d\lambda <\infty \] so long as $\alpha -\f12 >0$. \end{proof} \begin{lemma}\label{L5.2} For any $\alpha >0$, there is a $C$ so for all $x,y >1$, \begin{equation} \label{5.13} \min (x,y) \leq C [\log (x+1)]^\alpha \, \frac{y}{[\log (y+1)]^\alpha} \end{equation} \end{lemma} \begin{proof} Pick $d\geq 1$ (e.g., $d=e^\alpha$), so $[\log (x+d)]^\alpha x^{-1}$ is monotone decreasing on $[1,\infty)$. Then \begin{equation} \label{5.14} \min (x,y) \leq [(\log (x+d))]^\alpha \, \frac{y}{[\log (y+d)]^\alpha} \end{equation} If $y\leq x$, the right-hand side is bigger than $y$ and so $\min (x,y)$. If $y\geq x$, the monotonicity shows \[ \text{RHS} \geq [\log (x+d)]^\alpha \, \frac{x}{[\log (x+d)]^\alpha} =x \] \eqref{5.13} follows since on $[1,\infty)$, $\frac{\log (x+d)}{\log (x+1)}$ is bounded above and below. \end{proof} \begin{proof}[Proof of Theorem~\ref{T1.4}] By \eqref{5.4}, \eqref{5.5}, and \eqref{5.13} (applied with $\alpha = 1+\varepsilon$), \[ \abs{G_0^D (n,n;\lambda)} \leq C\, \frac{[\log (1+\abs{n})]^\alpha}{\abs{\lambda - \lambda_0}^{1/2}} \, [\log (\lambda -\lambda_0)^{-1/2}]^{-\alpha} \] By \eqref{1.12}, we see \[ \abs{\text{\rm{Tr}} [(\delta J_+)^{1/2} G_0^D (\delta J_+)^{1/2}]} \leq \frac{C[\log (\lambda-\lambda_0)^{-1/2}]^{-(1 +\varepsilon)}}{(\lambda-\lambda_0)^{1/2}} \] Since \[ \int_{\lambda_0}^{\lambda_0 +\varepsilon} (\lambda-\lambda_0)^{-1} [\log (\lambda -\lambda_0)^{-1/2}]^{-(1+\varepsilon)}\, d\lambda <\infty \] the result follows. \end{proof} \bigskip
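\noindent{\it Remark.} For completeness, the finiteness of the last integral follows from the substitution $u=\log (\lambda-\lambda_0)^{-1/2}$, under which $d\lambda = -2e^{-2u}\, du$ and
\[
\int_{\lambda_0}^{\lambda_0 +\varepsilon} (\lambda-\lambda_0)^{-1} [\log (\lambda -\lambda_0)^{-1/2}]^{-(1+\varepsilon)}\, d\lambda
= 2\int_{\frac12 \log (1/\varepsilon)}^{\infty} u^{-(1+\varepsilon)}\, du
= \frac{2}{\varepsilon}\, \bigl[\tfrac12 \log (1/\varepsilon)\bigr]^{-\varepsilon} <\infty
\]
for any $0<\varepsilon <1$.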
\section{Introduction} \label{sec:introduction} \begin{figure} \includegraphics[width=\linewidth]{include/design-space} \caption{Design space of OS kernels.} \label{fig:design-space} \end{figure} Modern OS architectures are heavily interlinked with the protection mechanisms they rely upon. OSes rigidly commit at design time to various high-level safety decisions, such as the use of software verification, hardware isolation, runtime checking, etc. Changing these after deployment is rare and costly. The current OS design landscape (depicted in Figure~\ref{fig:design-space}) broadly consists of micro-kernels~\cite{Herder2006, Klein2009}, which favor hardware protection and verification over performance, monolithic kernels~\cite{bovet2005}, which choose privilege separation and multiple address spaces (ASes) to isolate applications, but assume all kernel code is trusted, and single-address-space OSes (SASOSes), which attempt to bring isolation within the address space~\cite{Chase1994, Leslie1996, Heiser1999}, or ditch all protection for maximum performance~\cite{Madhavapeddy2013, Olivier2019, Kuenzer2021}. Making post-design changes to these high-level safety decisions is very difficult. For instance, removing the user/kernel separation~\cite{Maeda2003} requires a lot of engineering effort, as does breaking down a process into multiple address spaces for isolation~\cite{Kilpatrick2003}. Recently, the potential safety benefits hinted at by the proposal to introduce Rust components in Linux~\cite{RUST_LINUX1} have been questioned by the fact that the bulk of the kernel code will remain written in a memory-unsafe language~\cite{RUST_LINUX2}. The rigid use of safety primitives in modern OSes poses a number of problems. First, it precludes per-application OS specialization~\cite{Engler1995, Kaashoek1997, Manco2017, Martins2014} at a time when modern applications exhibit a wide range of safety and performance requirements. Prematurely locking the design into any combination of safety primitives is likely to result in suboptimal performance/safety in many scenarios. Effortless specialization for safety is further motivated by the fact that today's applications are made up of multiple components showing different degrees of trust and criticality, and as such requiring various levels of isolation. Furthermore, new isolation mechanisms~\cite{MPK, Schrammel2020, Watson2015, Costan2016, ARMTrustZone2009, ARMMorello2020}, with the ability to complement or replace traditional ones, are regularly being proposed by CPU manufacturers. When multiple mechanisms can be used for the same task, choosing the most suitable primitive depends on many factors, and should ideally be postponed to deployment time. Finally, when the protection offered by a hardware primitive breaks down (e.g. Meltdown~\cite{MELTDOWN}), it is difficult to decide how it should be replaced, and generally costly to do so. This leads us to the following research problem: \textit{how can we enable users to easily and safely switch between different isolation and protection primitives at deployment time, avoiding the lock-in that characterizes the status-quo?} Our answer is \emph{FlexOS\xspace}, a modular OS design whose compartmentalization and protection profile can easily and cost-efficiently be tailored towards a specific application or use-case at build time, as opposed to design time.
To that aim, we extend the Library OS (LibOS\xspace) model and augment its capacity to be specialized towards a given use case, historically done for performance reasons~\cite{Engler1995, Kaashoek1997, Manco2017, Martins2014}, towards the \emph{safety} dimension. With FlexOS\xspace, the user can decide, at \emph{build time}, which of the fine-grained OS components should be placed in which compartment (e.g. the scheduler, TCP/IP stack, etc.), how to instantiate isolation and protection primitives for each compartment, what data sharing strategies to use for communication between compartments, as well as what software hardening mechanisms should be applied to which compartments. To achieve this, we abstract the common operations required when compartmentalizing arbitrary software behind a generic API that is used to retrofit an existing LibOS\xspace into FlexOS\xspace. This API limits the manual porting effort of kernel and application legacy components to the marking of shared data using annotations. These annotations, alongside other abstract source-level constructs, are replaced at build time by a code transformation step that instantiates a given FlexOS\xspace safety configuration. The design space enabled by the system, illustrated in Figure~\ref{fig:design-space}, is very large and difficult for a non-expert user to explore manually. This leads to the second research question we explore: \emph{how to guide the user in navigating the vast design space unlocked by FlexOS\xspace?} To answer this, we propose a semi-automated exploration technique named \emph{partial safety ordering}, using partially ordered sets to describe the probabilistic security degrees of FlexOS\xspace' configurations and identify the safest ones under a given performance budget. We have implemented a prototype of FlexOS\xspace with support for Intel MPK and VM/EPT-based isolation, as well as a wide range of hardening mechanisms (CFI \cite{Abadi2009}, ASAN \cite{LinuxASAN}, etc.). Our evaluation using four popular applications demonstrates the wide safety versus performance tradeoff space unlocked by FlexOS\xspace: we evaluate over 160 configurations for Redis and Nginx. We also show the ease of exploring different points in that space: our semi-automated exploration technique can probabilistically subset the 80 Redis configurations to the 5 safest ones under a given performance budget. Finally, we demonstrate that under equivalent configurations, FlexOS\xspace performs better or similarly to baselines/competitors: a monolithic kernel, a SASOS, a microkernel, and a compartmentalized LibOS\xspace. \section{Flexible OS Isolation: Principles, Challenges}\label{sec:approach} FlexOS\xspace seeks to enable users to easily and safely switch between different isolation and protection primitives at deployment time. This section formalizes the fundamental design principles required to achieve this, the challenges that arise from them, and how we address them. \subsection{Principles} \noindent (P1) \textbf{\emph{The isolation granularity of FlexOS\xspace' components should be configurable.}} The compartmentalization strategy, i.e. the number of compartments and which components are merged/split into compartments, has a major impact on safety and performance, thus it should be configurable. \noindent (P2) \textbf{\emph{The hardware isolation mechanisms used should be configurable.}} There is a wide range of isolation mechanisms with various safety and performance implications. These should be configurable by the user.
For the OS developer, supporting a new mechanism should not involve any rewrite/redesign and be as simple as implementing a well-defined API. \noindent (P3) \textbf{\emph{Software hardening and isolation mechanisms should be configurable.}} Software hardening techniques such as CFI, or Software Fault Isolation (SFI), as well as memory safe languages such as Rust, bring different levels of safety at a variable performance cost. They should be selectively applicable to the components for which they are the most meaningful in a given use case. \noindent (P4) \textbf{\emph{Flexibility should not come at the cost of performance.}} The OS runtime performance should be similar to what would be achieved with any particular safety configuration without the flexibility approach. \noindent (P5) \textbf{\emph{Compatibility with existing software should not come at a high porting cost}}, to maximize adoption. \noindent (P6) \textbf{\emph{The user should be guided in the vast design space enabled by FlexOS\xspace.}} Given its very large configuration space, the system should come with tools helping the user identify suitable safety/performance configurations for a given use case. \subsection{Challenges and Approach} P1 and P4 raise the question of \textbf{\emph{how to offer variable isolation granularities, and how to do so without compromising performance?}} Genericity typically comes at the price of performance~\cite{Martins2014, Marinos2014, Kuenzer2021}, and interface design may not be easily decoupled from the isolation granularity without performance loss~\cite{Gefflaut2000}. In order to tackle this issue, we propose to rely on a LibOS\xspace design that is \textit{already finely modularized while providing state-of-the-art performance}, Unikraft~\cite{Kuenzer2021}. The main idea is to consider Unikraft's level of modularization (micro-library) as a minimal granularity, using pre-existing interfaces as compartment boundaries. Then, in order to maximize performance and safety for a given use case, less granular configurations can be composed by merging select components into compartments. At build time, when an isolation mechanism is selected, FlexOS\xspace uses code transformations to inline function-call-like cross-domain gates, avoiding the overhead of a runtime abstraction interface~\cite{Ford1997}. P2 and P5 bring the challenge of \textbf{\emph{how to design an OS in which 1) isolation can be enforced by many hardware mechanisms and 2) the engineering cost of introducing a new mechanism is low?}} Technology agnosticism is already difficult in userland software, but core kernel facilities (interrupt handling, memory management, scheduling) introduce additional complexity that must be handled very differently depending on the underlying isolation technology. For example, some technologies share a single address space between protection domains (e.g. MPK~\cite{MPK}) while others use disjoint address spaces (e.g. TEEs~\cite{ARMTrustZone2009}, EPT). The main idea of FlexOS\xspace is to abstract existing isolation technologies, identify kernel facilities that require different handling depending on the technology, and design these subsystems so as to minimize the changes needed when implementing a new technology. P5 asks \textbf{\emph{how to limit the engineering costs of porting new applications/libraries?}} To allow compatibility with existing software, FlexOS\xspace extends an OS that offers a POSIX interface.
That OS is compartmentalized by marking cross-component calls and shared data using an abstract API and, in its basic form, porting a new application requires the developer to use the same API to mark shared data (i.e. data passed to other components) with source-level annotations. This avoids the need to change the application design or major code rewriting. Such an approach is common among state-of-the-art compartmentalization frameworks~\cite{VahldiekOberwagner2019, Hedayati2019, Narayan2020, Schrammel2020}. Finally, P1-P3 and P6 raise the question of \textbf{\emph{how to help the user navigate the vast design space enabled by FlexOS\xspace?}} The introduction of safety flexibility increases the potential for safety/performance specialization, but selecting suitable configurations may be hard for a non-expert. For example, it can be difficult to reason about the safety implications of increasing the degree of compartmentalization vs. increasing the level of software hardening for a given configuration. To tackle that issue, we propose a method named \emph{partial safety ordering}, using partial order relationships to probabilistically rank FlexOS\xspace configurations by safety and identify the safest ones for a given application under a performance budget. Section~\ref{sec:design} presents an OS design that satisfies P1-P5, and Section~\ref{sec:implementation} gives key implementation points of a prototype we developed. Section~\ref{sec:exploration} shows an approach to tackle P6. Finally, Section~\ref{sec:evaluation} presents an evaluation of our prototype. \section{Designing an OS with Flexible Isolation}\label{sec:design} We now provide an overview of FlexOS\xspace' main elements, starting with its design, then the compartmentalization API, the backend API, and finally the trusted computing base. \begin{figure} \center \includegraphics[width=0.45\textwidth]{include/libs} \caption{ OS overview. The TCB includes backends and core libraries. Backends are used by the toolchain to rewrite the libraries at build time. } \label{fig:shortoverview} \end{figure} FlexOS\xspace is based on a modular LibOS\xspace, Unikraft~\cite{Kuenzer2021}, composed of a set of independent, fine-grained libraries. In FlexOS\xspace, each library can be placed in a given compartment (an isolation domain), and it can be hardened via techniques such as Control-Flow Integrity (CFI), address sanitization, and so forth. This safety configuration is specified at build time, in a configuration file written by the developer, and FlexOS\xspace' toolchain produces an OS image with the desired safety characteristics. Below is an example of such a configuration file that isolates libopenjpg and lwip in a separate compartment with CFI and ASan enabled. \begin{tcolorbox}[colback=lightgrey,boxrule=0pt,arc=0pt,left=6pt] {\scriptsize \begin{Verbatim}[commandchars=\\\{\}] compartments: - comp1: mechanism: \textit{intel-mpk} default: True - comp2: mechanism: \textit{intel-mpk} hardening: [cfi, asan] libraries: - libredis: comp1 - libopenjpg: comp2 - lwip: comp2 \end{Verbatim} } \end{tcolorbox} In contrast to Unikraft, where all libraries are in the same protection domain and any library can directly call a function from another library, in FlexOS\xspace' source code libraries call external functions via \textit{abstract gates}, and may share data with external libraries at the granularity of a byte using abstract code annotations.
Gates and annotations form an API used to compartmentalize Unikraft into FlexOS\xspace, and represent metadata which is automatically replaced by our toolchain with a particular implementation at build time. Different implementations can leverage different isolation technologies, or flavors of the same technology. We refer to the API implementation for a given technology (MPK, EPT, etc.) together with its runtime library as an \textit{isolation backend}. This subsection gives a short overview of FlexOS\xspace' main design elements, which are then elaborated in the following subsections. Figure~\ref{fig:shortoverview} depicts the components described in this subsection. \paragraph{LibOS\xspace Basis.} Achieving flexible isolation at a fine granularity implies a high degree of modularity. In practice, this modularity is not offered by typical monolithic general-purpose OSes~\cite{Kuenzer2021}. A flexible isolation approach on the basis of Linux would require a first non-trivial ``modularization'' step~\cite{Li2021} that may take years of engineering and careful redesign. Library OSes~\cite{Kuenzer2021} and component-based OSes~\cite{Bruno1999, Parmer2007} are a better starting point for flexible OS isolation because they often provide highly modular code bases with good application compatibility and high performance. Flexible isolation also suits well the specialization spirit of LibOS\xspace{}es, where the OS can be tailored for a given application/use-case. This was historically done for performance~\cite{Engler1995}; FlexOS\xspace enables specialization towards safety. \paragraph{API and Build-time Instantiation.} Unlike a typical LibOS\xspace, we design FlexOS\xspace in an \textit{isolation-agnostic} manner. Cross-compartment calls are made through abstract call gates that are instantiated at build time (arrows in Figure~\ref{fig:shortoverview}). Shared data is marked using compiler annotations, used at build time to instantiate a given data sharing strategy. Unlike linker-based approaches~\cite{Sartakov2021}, FlexOS\xspace performs replacements using source-to-source transformations with Coccinelle~\cite{COCCINELLE1, COCCINELLE2}. This has the advantage of allowing all compiler optimizations and gives FlexOS\xspace a clear performance advantage compared to historical approaches that relied on heavyweight runtime abstraction interfaces such as \texttt{COM} for Flux OSKit~\cite{Ford1997}. It also makes FlexOS\xspace' isolation approach easy to debug and understand by anyone who knows C: transformations can be visually inspected in a high-level language with usual file comparison tools. \begin{figure*} \center \includegraphics[width=\textwidth]{include/porting} \caption{ FlexOS\xspace code transformations. First, developers manually annotate shared data, and gate placeholders are automatically inserted. At build time, API primitives are automatically replaced with the chosen mechanism. In the MPK case, shared data can for example be allocated on a shared heap. If the two libraries are in the same compartment, the result is similar to the code prior to porting, resulting in zero overhead. } \label{fig:porting} \end{figure*} \subsection{Compartmentalization API and Transformations}\label{subsec:api} Most isolation mechanisms (memory protection keys~\cite{MPK}, TEEs like SGX~\cite{Costan2016}, or hardware capabilities~\cite{Watson2015}) restrict data access according to a set of current privileges, and provide a means to switch privileges and share data across compartments.
Ensuring safety is equivalent to controlling privilege transitions, making sure that the system only ever enters ``legal'' couplings of executing code and data privileges. Other isolation approaches such as ARM TrustZone~\cite{ARMTrustZone2009} or EPT/VMs consider compartments as entirely different systems (or ``worlds''), enforcing a 1:1 system/compartment mapping. With this approach, systems never switch privileges; instead, they communicate with other compartments via remote procedure calls (RPCs) and shared memory. We design FlexOS\xspace' call gates and data sharing primitives to cater for both approaches. In FlexOS\xspace, the only requirement for an isolation mechanism is to (1) implement the concept of protection domains and provide a domain switching mechanism, and (2) support some form of shared memory for cross-domain communication. To the best of our knowledge, this applies to the vast majority of industry and research isolation mechanisms. This subsection gives an overview of FlexOS\xspace' compartmentalization approach, first focusing on the API with call gates and shared data, and then on build-time source transformations. \paragraph{Call Gates.} In FlexOS\xspace, cross-library calls are represented in the source code by \textit{abstract call gates}. At build time, as part of the transformation phase, abstract call gates are replaced with a specific implementation. For instance, when the caller and callee are configured to be in the same compartment, call gates implement a classical function call. When they are in different compartments, isolated for example by MPK, the call gate performs a protection domain switch before finally executing the \texttt{call} instruction. In a setting where libraries are isolated using VMs, the call gate performs a remote procedure call (RPC). From the perspective of the compiler, the caller, and the callee, call gates are entirely transparent as they implement the System V ABI calling convention. Unlike typical System V function calls, however, call gates guarantee isolation of the register set and therefore save and zero out all registers not used by parameters. Figure~\ref{fig:porting} presents an example of gates from the porting (step \BC{2}) to the replacement by the toolchain (\BC{3} and \BC{3'}). The part of the process of porting existing user/kernel code to FlexOS\xspace that consists in marking call gates is automated: knowing the control-flow graph of the system, static analysis determines whether a procedure call crosses library boundaries, and if so, performs a syntactic replacement of the function call with a call gate instead. A corner case requiring programming effort is when a component calls another component through a function pointer. The callee cannot be determined statically, thus the programmer must annotate the potentially pointed-to functions with the list of components they can be called from. The toolchain will then generate wrappers enclosing the implementations of the functions in question in the appropriate call gates. Our prototype implementation uses Cscope~\cite{CSCOPE} and Coccinelle~\cite{COCCINELLE1}. FlexOS\xspace call gates are not trampolines. Instead, they replace System V function calls entirely and are always inlined at the call site. An advantage of such an approach is that call gates naturally provide an inexpensive (albeit incomplete) form of CFI, guaranteeing that libraries can only be entered through well-defined entry points, known and enforced at compile time.
\paragraph{Data Ownership Approach.} FlexOS\xspace takes a code-centered~\cite{Gudka2015} isolation approach. Each library is present only once and maps to a specific set of privileges. There is a slight tweak for backends that rely on several systems (TrustZone, VMs): for them, the trusted computing base (\S\ref{sec:tcb}) is duplicated, one copy for each system, as each compartment must possess a self-contained kernel (\S\ref{subsec:ept}). FlexOS\xspace considers all static and dynamic data allocated by a library as private by default. Individual variables can then be annotated as ``shared'' with a specific group of libraries, forming \emph{whitelists} similar to access control lists. In practice, the maximum number of isolated data sharing ``zones'' is limited by the underlying technology. Annotations are made with the keyword \texttt{\_\_shared} as illustrated in Figure~\ref{fig:porting} step \BC{2}. Compiler annotations are identical for all types of variables. However, under the hood, FlexOS\xspace differentiates between statically allocated variables, dynamically allocated heap variables, and dynamically allocated stack variables. FlexOS\xspace' compartmentalization API itself does not dictate \textit{how} variables have to be shared. Different mechanisms can require very different sharing approaches: while certain mechanisms such as MPK require shared data to be located in shared memory regions, others such as CHERI's hybrid capabilities~\cite{cheri} require compiler annotations that can be automatically generated in place of the FlexOS\xspace placeholder. Section~\ref{sec:implementation} describes the implementation of the API for the two supported backends (MPK/EPT), and sketches implementations for an additional one (CHERI). Identifying shared data represents the vast majority of the porting effort. It is necessary for kernel libraries, user libraries, and applications alike. On the kernel side, this problem is simplified (but not eliminated) by the modularity of Unikraft's code base. This issue is not specific to FlexOS\xspace and is widely explored in the literature. State-of-the-art approaches (1) rely on manual code annotations~\cite{Narayan2020}, (2) perform static analysis at compile time to identify shared data automatically~\cite{Bauer2021}, or (3) perform a mix of static, dynamic, and manual analysis~\cite{Gudka2015}. There is no silver bullet: manual code annotation can be non-trivial, but typically produces precise results that not only take into account what \textit{is} accessed across modules, but also what \textit{should be} shared from a security perspective. Static-analysis-based approaches, on the other hand, are automatic, but conservative. These methods would be applicable to FlexOS\xspace; however, automated shared data identification is not the main focus of this paper. The current prototype relies on manual annotations, and Section~\ref{sec:implementation} details the porting effort for a number of applications and libraries. \paragraph{Build-time Source Transformations.} Before compilation, FlexOS\xspace' toolchain performs source transformations to (1) instantiate abstract gates, (2) instantiate data sharing code, (3) generate linker scripts, and (4) generate additional code in core libraries according to backend-provided recipes. The amount of code generated is considerable. As an example, the toolchain modifies about 1 KLoC for a simple Redis configuration. Figure~\ref{fig:porting} steps \BC{3} and \BC{3'} present an example of the porting-transformation process.
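\noindent For illustration, the following sketch shows what a shared-data instantiation could look like for the MPK backend (the whitelist-style annotation syntax, the variable, and the section name are illustrative assumptions rather than FlexOS\xspace' exact API):

\begin{verbatim}
/* Developer-annotated source (hypothetical whitelist syntax):
 *
 *     static char buf[2048] __shared(libredis, lwip);
 *
 * A possible MPK instantiation produced at build time: the variable
 * is moved into a dedicated ELF section, which the generated linker
 * script groups with other shared data and which the boot code tags
 * with the shared domain's protection key.                          */
static char buf[2048] __attribute__((section(".data_shared")));
\end{verbatim}

\noindent When the two libraries end up in the same compartment, the transformation instead leaves a plain private declaration, so that, as noted for Figure~\ref{fig:porting}, the result is similar to the code prior to porting and carries no residual cost.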
\subsection{Kernel Backend API} Most isolation mechanisms require changes to a specific set of components in the kernel. The kernel facilities that can require special handling depending on the technology exclusively correspond to the core libraries (see Figure~\ref{fig:shortoverview}). In order to make such changes scalable, we designed core components to expose a \textit{hook API} to isolation backends, allowing the core libraries to be easily extended with backend-specific functionalities. For example, the MPK backend leverages the thread creation hook offered by the scheduler to switch a newly created thread to the right protection domain. These hooks come at no cost: since the instantiation is done at build time, the compiler is able to aggressively inline such calls. Porting FlexOS\xspace to use a new isolation mechanism does not require redesign. In general, it is equivalent to (1) implementing gates for the particular mechanism, (2) implementing hooks for core components (see previous paragraph), (3) implementing linker script generation in the toolchain, (4) implementing Coccinelle code transformations, and (5) registering the newly created backend into the toolchain. In practice, developers can heavily reuse existing transformations for new backends. \subsection{Trusted Computing Base} \label{sec:tcb} Regardless of the isolation mechanism, certain components are so deeply involved in the OS' functioning that they will cause the entire system to violate its safety guarantees when compromised. These components are (1) the early boot code, (2) the memory manager, (3) the scheduler, (4) the first-level interrupt handler's context switch primitives, and (5) the isolation backend. We refer to these components as FlexOS\xspace' trusted computing base (TCB), illustrated in Figure~\ref{fig:shortoverview}. Clearly, malfunctioning or malicious early boot code can set up the system in an unsafe manner, the memory manager can manipulate page table mappings in order to freely access any compartment's memory, the scheduler can manipulate sleeping threads' register states, the backend can provide incomplete isolation, etc. This is the case even when considering architectural hardware capabilities such as CHERI~\cite{Davis2019}. It comes as no surprise: this ``core'' set of libraries is historically the set of services that microkernel OSes provide~\cite{Tanenbaum2006}. FlexOS\xspace' TCB is small: around 3000 LoC in the case of Intel MPK, and even less for VM/EPT. \paragraph{Trust Model.} The whole point of flexible isolation is to be able to achieve a wide range of trust models where different components (such as the network stack, parser libraries, etc.) can be considered untrusted and potentially compromised. Thus there is no single trust model for FlexOS\xspace. In general, however, we assume that the TCB (see previous paragraph) is safe and error free. This is not an unreasonable assumption given the small size and the potential for formal verification (we have formally verified a version of our scheduler~\cite{Lefeuvre2021} using Dafny~\cite{Leino2010}). The hardware and the compiler are also part of the TCB. Note that the rest of the toolchain (Coccinelle included) is \textit{not} part of the TCB, as the code includes compile-time checks that are able to detect invalid transformations. Finally, we must also assume that interfaces correctly check arguments and are free of confused deputy/Iago~\cite{Checkoway2013} situations.
This is not an unreasonable assumption within the core FlexOS\xspace codebase. Further, confused deputy and Iago attacks are made probabilistically harder to execute in FlexOS\xspace due to the variability of the interface size: the system call API, for example, is divided into a variable number of sub-interfaces depending on the chosen configuration, and several compartments may need to be subverted for an attack to be successful.
\section{Prototype}\label{sec:implementation}
We present a prototype of FlexOS\xspace on top of Unikraft~\cite{Kuenzer2021} v0.5, with Intel MPK and EPT backends. Modifications to the Unikraft kernel represent about 3250 LoC: 1400 for the MPK backend, 1000 for EPT, and 850 for core libraries. In user space, changes to Unikraft's toolchain represent 2300 LoC. We port user codebases (Redis, Nginx, iPerf, and SQLite) as well as most kernel components (the TCP/IP stack, scheduler, filesystem, etc.) to run as isolated components. This section presents the MPK and EPT backends, sketches a CHERI backend, and concludes with the porting effort.
\subsection{Intel MPK Isolation Backend}
MPK is a mechanism present in Intel CPUs offering low-overhead intra-AS memory isolation~\cite{inteldoc, Bannister2019, Schrammel2020}. MPK leverages unused bits in the page table entries to store a \textit{memory protection key}, enabling up to 16 protection domains. The PKRU register then stores the protection key permissions for the current thread. On each memory access, the MMU compares the key of the target page with the PKRU and triggers a page fault in case of insufficient permissions. FlexOS\xspace associates each compartment with a protection key and reserves one key for a shared domain used for communications. If the image features fewer than 15 compartments, FlexOS\xspace uses the remaining keys for additional shared domains between restricted groups of compartments. Any compartment can modify the value of the PKRU, thus the MPK backend has to prevent unauthorized writes. This has previously been done via runtime checks~\cite{Hedayati2019} and static analysis~\cite{VahldiekOberwagner2019}. In FlexOS\xspace, no code is loaded after compilation, hence static binary analysis coupled with strict \texttt{W$\oplus$X} is sufficient.
\paragraph{MPK Gates.}
For flexibility, FlexOS\xspace offers two different implementations of the MPK gate. The main one provides full spatial safety, similarly to HODOR~\cite{Hedayati2019}. The gate protects the register set and uses one call stack per thread per compartment. Each compartment has a stack registry that maps threads to their local compartment stack, making it fast and safe to switch the call stack. Upon domain transition, the gate (1) saves the current domain's register set, (2) clears the registers, and (3) loads the function arguments. It then (4) saves the current stack pointer, (5) switches thread permissions, (6) switches the stack, and finally (7) executes the \texttt{call} instruction. Once the function has returned, these operations are executed in reverse. The second gate implementation shares the stack and the register set across compartments, similarly to ERIM~\cite{VahldiekOberwagner2019}. It is conceptually very simple, switching the content of the PKRU before performing a normal function call. This lightweight implementation offers lesser guarantees but incurs a lower overhead, close to the raw cost of \texttt{wrpkru} instructions.
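As a hedged illustration of this lightweight gate, the C sketch below shows the raw PKRU write and a shared-stack domain transition. The PKRU bit layout (two permission bits per key) and the \texttt{ECX}/\texttt{EDX} requirements of \texttt{WRPKRU} follow the Intel documentation; the key assignments and helper names are invented for the example, and a real gate would also keep the shared communication domain accessible. The code compiles on x86-64, but executing it requires MPK-capable hardware and prior protection key setup.
{\scriptsize
\begin{Verbatim}
#include <stdint.h>

/* Raw PKRU write: WRPKRU takes the new value in EAX and
 * requires ECX = EDX = 0. */
static inline void wrpkru(uint32_t pkru)
{
    __asm__ volatile("xor %%ecx, %%ecx\n\t"
                     "xor %%edx, %%edx\n\t"
                     "wrpkru"
                     :
                     : "a"(pkru)
                     : "ecx", "edx", "memory");
}

/* PKRU holds 2 bits per key (access-disable, write-disable): deny all
 * keys except the one given (illustrative policy). */
#define PKRU_ALLOW_ONLY(key) (~(3u << (2 * (key))))

/* Shared-stack gate sketch: switch permissions, plain call, switch back. */
static int mpk_light_gate(int (*fn)(void), unsigned caller_key,
                          unsigned callee_key)
{
    wrpkru(PKRU_ALLOW_ONLY(callee_key));
    int ret = fn();
    wrpkru(PKRU_ALLOW_ONLY(caller_key));
    return ret;
}
\end{Verbatim}
}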
\paragraph{Data Ownership.}
FlexOS\xspace' MPK images feature one data, read-only data, and bss section per compartment to store compartments' private static data. At boot time, the boot code protects these sections with the compartment's protection key. Each compartment has a private heap, and a shared one is used for communications. Our prototype uses a single shared heap for all shared allocations, but this is not a fundamental restriction. Stack allocations are slightly more complex. Existing works convert shared stack allocations to shared heap allocations~\cite{Hedayati2019, Kjellqvist2020, Bauer2021}. This approach is costly from a performance perspective: an allocation+free on the fast path of a modern allocator typically takes 30-60 cycles, and up to thousands of cycles on the slow path~\cite{Kanev2017}. This is as expensive as an entire domain transition, and that is for a single shared stack variable. While FlexOS\xspace supports stack-to-heap conversions, we propose another approach that addresses this issue: the \textit{data shadow stack}.
\paragraph{Data Shadow Stacks.}
\begin{figure}
\center
\includegraphics[width=0.43\textwidth]{include/dss_fig}
\caption{ Data Shadow Stacks. }
\label{fig:dss_fig}
\end{figure}
Stack allocations are much faster than heap allocations because the compiler is able to perform the bookkeeping \textit{at compile time}. At runtime, a single push instruction is needed, resulting in a constant, low cost. Data Shadow Stacks (DSS), illustrated in Figure~\ref{fig:dss_fig}, leverage this bookkeeping work for shared stack allocations. When using the DSS, the usual stack size of threads is doubled. The upper part corresponds to the DSS and is put in the shared domain. The lower part is the traditional stack and remains in the compartment's private domain. For each shared variable \texttt{x}, we define the \textit{shadow} of \texttt{x} as \texttt{\&x + STACK\_SIZE}. Thus, allocating space for a shared variable on the stack transparently allocates a shadow variable in the DSS. Before compilation, the toolchain replaces every reference to a shared stack variable with its shadow \texttt{*(\&var + STACK\_SIZE)} in the shared domain. Allocations on the DSS are much faster than on a shared heap, since the DSS' bookkeeping overhead is null (stack speed) and the locality of reference is high. The cost is a relatively small increase in memory usage (stacks are twice as large). The DSS mechanism is applicable to any isolation mechanism that supports shared memory, and is compatible with common stack protection mechanisms.
\paragraph{Control Flow Integrity.}
Intel MPK does not provide protection against execution: protection keys restrict data accesses only. As such, if a compartment is compromised and the attacker ROPs into another compartment, a fault will not directly happen. The MPK backend is nevertheless able to provide a certain form of CFI, ensuring that compartments can only be entered at well-defined points. This ability is a consequence of the hardcoding of gates as described in \S\ref{subsec:api}. If the control flow of one compartment is compromised and the attacker ROPs directly into another compartment $c$, then the system is guaranteed to crash if any data local to $c$ is accessed.
\subsection{EPT/VM Backend}
\label{subsec:ept}
Virtualization has been used in many works to support isolation within a kernel~\cite{LeVasseur2004, Nikolaev2013, Zhang2018, Nikolaev2020}. Hardware-assisted virtualization is widely supported and provides strong safety guarantees compared to MPK, at the cost of higher overheads.
The EPT backend is an extreme case: compartments do not share ASes and run on different vCPUs. It shows that FlexOS\xspace is able to cater to very different mechanisms under a common API. FlexOS\xspace' EPT backend generates one VM image per compartment, each containing the TCB (boot code, scheduler, memory manager, backend runtime) and the compartment's libraries. Communications use a shared-memory-based RPC implementation. Our prototype runs on QEMU/KVM patched to support lightweight inter-VM shared memory (less than 90 LoC).
\paragraph{EPT Gates.}
Upon domain transition, the caller places a function pointer and arguments in a predefined shared area of memory. All other VMs busy-wait until they notice an RPC request, check that the function is a legal API entry point, execute the function, and place the return value in a predefined area of the shared memory. In order to support multithreaded workloads, each RPC server maintains a pool of threads that are used to service RPCs. Using function pointers instead of abstract routine identifiers simplifies the RPC server's unmarshalling operation and does not prevent the RPC server from checking the pointer to ensure that it is a legal entry point. This optimization is possible since all compartments are built at the same time, hence all addresses are known. Busy-waiting allows the EPT backend to minimize gate latency compared to VM notifications; a similar implementation based on \texttt{MONITOR/MWAIT} instructions would also be possible to reduce power consumption if calls are sparse. Overall, any of these tweaks can be implemented as a gate variant in order to offer as much freedom as possible to the user.
\paragraph{Data Ownership.}
The EPT backend relies on shared memory areas to share data (static and dynamic) across VMs. Areas are always mapped at the same address in the different compartments so that pointers to/in shared structures remain valid. Each VM manages its own portion of the shared memory area to avoid the need for complex multithreaded bookkeeping.
\paragraph{Control Flow Integrity.}
The EPT backend is able to provide a form of CFI stronger than that of the MPK backend, ensuring that compartments can only be \textit{left and entered} at well-defined points. Indeed, the RPC server can check on entry that the executed function is legal, and compartments are not able to execute other compartments' code without RPC calls.
\subsection{Supporting More Isolation Mechanisms}
To check whether FlexOS\xspace can support other isolation backends, we discuss how we can leverage CHERI hardware capabilities~\cite{Watson2015}, an emerging hardware isolation mechanism. The CHERI ISA extension is available for ARMv8-A, which is supported by FlexOS\xspace. Among others, CHERI capabilities would extend FlexOS\xspace' trade-off space with the ability to address confused-deputy situations, reduce data sharing, and allow for a larger number of domains, something that is currently impossible for architectural (MPK) or performance (EPT) reasons. The backend would use boot-time hooks to initialize CHERI support, and scheduler hooks to perform capability-aware context switching and thread initialization. Similarly to other backends, CHERI gates would save the caller context, clear the relevant traditional and capability registers, install the callee context, and rely on the domain crossing instruction \texttt{CInvoke} and sentry capabilities~\cite{Watson2020b} to perform protection domain jumps.
As a first step, FlexOS\xspace should rely on the hybrid pointer model to maximize compatibility. Our API's shared data annotations would transform to \texttt{\_\_capability} at build time to treat shared variables as capabilities for efficient communications.
\begin{table}[]
\caption{Porting effort: size of the patch (including automatic gate replacements), number of shared variables.}
\center\small
\label{tab:porting}
\begin{tabular}{l|l|l}
\hline
\textsc{Libs/Apps} & \textsc{Patch size} & \textsc{Shared vars} \\ \hline \hline
TCP/IP stack (LwIP) & +542 / -275 & 23 \\ \hline
scheduler (\texttt{uksched}) & +48 / -8 & 5 \\ \hline
filesystem (\texttt{ramfs}, \texttt{vfscore}) & +148 / -37 & 12 \\ \hline
time subsystem (\texttt{uktime}) & +10 / -9 & 0 \\ \hline
Redis & +279 / -90 & 16 \\ \hline
Nginx & +470 / -85 & 36 \\ \hline
SQLite & +199 / -145 & 24 \\ \hline
iPerf & +15 / -14 & 4 \\ \hline
\end{tabular}
\end{table}
\subsection{Porting Effort}
The porting process consists of two phases: call gate insertion (automated) and shared data annotation (manual). The typical workflow, once gates have been inserted, is to run the program with a representative test case (e.g., a benchmark or test suite) until it crashes due to memory access violations. Crash reports point to the symbol that triggered the crash, at which point the developer can annotate it for sharing. In some cases, the crash can be a genuine violation; e.g., a library exposes internal state to external libraries, in which case the developer can decide to rework the library's API to address the privacy issue. This case is much less frequent and left to the developer's discretion. An example is \texttt{ramfs}, which is so deeply entangled with \texttt{vfscore} that blindly isolating it without redesign would impair performance with little additional security benefit, as a critical portion of the component's state would be shared. However, coupled with \texttt{vfscore}, both components can perfectly well be isolated from the rest of the system. This highlights a limitation of automated tools that blindly isolate this component~\cite{Sartakov2021}. Overall, the porting process is greatly simplified by common debugging tools: GDB and all usual debugging toolchains are supported. The debugging experience in FlexOS\xspace is not significantly different from Unikraft and most mainstream OSes, and we expect it to remain intuitive for anyone familiar with OS development. Depending on the amount of data shared with the outside world, the porting process ranges from 10 minutes (time subsystem, no data shared) to 2-5 days (filesystem, network stack). This porting cost is similar to that of other compartmentalization frameworks~\cite{Narayan2020}. Table~\ref{tab:porting} illustrates the porting effort with concrete numbers.
\subsection{Software Hardening}
The flexible isolation provided by FlexOS\xspace allows enabling or disabling software hardening (SH), such as CFI, on a per-component basis: isolating components without SH from components with it allows the latter to maintain the guarantees offered by SH. Moreover, many SH schemes work by instrumenting the memory allocator, and we use FlexOS\xspace' capacity to have one allocator per compartment to enable flexible SH. This flexibility makes it possible, for example, to alleviate the performance impact of SH by enabling it only for a subset of the system. Our prototype currently uses address sanitization (KASan), undefined behavior sanitization (UBSan), CFI, and stack protector.
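As a hedged sketch of how per-compartment hardening can hinge on the allocator, the following C fragment binds each compartment to either a plain or a sanitizer-instrumented allocator at build time; the structure shown is a stand-in for Unikraft's \texttt{struct uk\_alloc}, and the compartment identifiers and allocator instances are hypothetical.
{\scriptsize
\begin{Verbatim}
/* Stand-in for Unikraft's allocator handle; real fields elided. */
struct uk_alloc { const char *name; };

static struct uk_alloc plain_alloc = { "plain" }; /* no instrumentation */
static struct uk_alloc kasan_alloc = { "kasan" }; /* KASan-instrumented */

enum comp_id { COMP_APP, COMP_LWIP, NUM_COMPS };

/* Illustrative build-time binding: hardening cost is paid only where
 * it is enabled. */
static struct uk_alloc *const comp_alloc[NUM_COMPS] = {
    [COMP_APP]  = &plain_alloc,  /* application: fast path, no SH */
    [COMP_LWIP] = &kasan_alloc,  /* network stack: hardened heap  */
};
\end{Verbatim}
}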
\section{Exploration with Partial Safety Ordering}\label{sec:exploration}
In this section we present a design space exploration technique, \emph{partial safety ordering}, that aims to guide a user towards suitable configurations for a given use case by subsetting the vast design space enabled by FlexOS\xspace according to safety and performance requirements.
\begin{figure*}
\center
\includegraphics[width=1.0\textwidth]{include/redis}
\includegraphics[width=1.0\textwidth]{include/nginx}
\caption{Redis (top) and Nginx (bottom) performance for a range of configurations. Components are on the left. Software hardening can be enabled [$\bullet$] or disabled [$\circ$] for each component. The white/blue/red color indicates the compartment the component is placed into. Isolation is achieved with MPK and DSS.}
\label{fig:redis}
\end{figure*}
\label{subsec:exploration}
Given a performance budget, partial safety ordering attempts to find the most secure configurations among those enabled by FlexOS\xspace. Quantifying safety is challenging: it is impossible to give each configuration an absolute safety score that would allow a complete ordering; for instance, is the safety of a configuration with 3 compartments, MPK isolation, and no hardening better or worse than that of another one with 2 compartments, EPT isolation, and CFI hardening? Nevertheless, the safety of \emph{some} configurations is programmatically comparable. Consider 3 configurations: \emph{C1} with no isolation and no software hardening; \emph{C2} with two compartments protected by a given mechanism with a given data sharing strategy and no hardening; and \emph{C3} adding CFI for each compartment on top of \emph{C2}. In terms of (probabilistic) safety, we have the following relationship: $C1 \leq C2 \leq C3$. With that in mind, it is thus possible to organize all configurations into a \emph{partially ordered set} (poset), which can be viewed as a Directed Acyclic Graph (DAG) in which each node represents a configuration, and a directed edge between nodes \emph{n1} and \emph{n2} indicates that the level of safety of \emph{n1} is probabilistically superior to that of \emph{n2}. The safety of nodes on the same path is comparable, while that of nodes on different paths is not. Figure~\ref{fig:exploration} presents a subset of the configuration poset corresponding to fixed choices for a compartmentalization strategy with 2 compartments, an isolation mechanism, and a data sharing strategy. This subset of the poset represents the variation of the last feature, the software hardening, for which we assume only CFI and ASAN for the sake of simplicity. Each configuration is depicted by a node indicating, for each of the two compartments, which hardening mechanism is applied: none, CFI, ASAN, or CFI+ASAN. We construct the poset partially depicted in Figure~\ref{fig:exploration}, ordering safety under the assumption that safety probabilistically increases with 1) the number of compartments; 2) data isolation (isolated vs. shared stacks, dedicated shared memory areas per pair of communicating compartments vs. shared areas accessible from everywhere, etc.); 3) stackable software hardening; and 4) the strength of the isolation mechanism. Given such a poset, we can label each node with its performance characteristics (circles in the figure denote fictional performance numbers), and prune those that do not meet minimum requirements (gray nodes), ultimately yielding a set of configurations that offer the best guarantees for a given performance budget.
This set corresponds to the \emph{maximal elements} of the poset, i.e., sinks of the DAG (green nodes in the figure).
\paragraph{Partial Safety Ordering in Practice.}
In practice, users provide the toolchain with a test script (e.g., \texttt{wrk} for Nginx) and a performance budget (e.g., at least 500k req./s). Users are free to define performance as they deem suitable depending on their needs: application throughput, tail latency, runtime, etc. Any metric is suitable as long as it remains comparable across configurations and runs. With this in hand, the toolchain generates the unlabeled poset. Then it labels the poset by automatically measuring the performance of each configuration. The toolchain does not have to run all configurations: assuming monotonically decreasing performance, it can safely stop evaluating a path as soon as the threshold is reached. In practice, we observe that this significantly limits combinatorial explosion. The result is a set of the most secure configurations for the given budget, from which the user can choose the most suitable one for a given use case. Ultimately, we expect this process to significantly trim the design space and allow the user to make an informed and relatively effortless choice. This approach assumes that the user is able to get representative feedback on the application's performance; users will not be able to use FlexOS\xspace' exploration facilities if they cannot properly benchmark their application. However, we expect this situation to be quite rare: in the vast majority of cases, users will be able to at least minimally test their applications. These results can be used to exclude configurations that are too costly and to test the best candidates in production using lightweight performance measurement systems, e.g., blue-green deployments.
\paragraph{Skipping Exploration.}
Some developers might already have a particular isolation strategy in mind. In that case, the developer can skip this exploration phase by providing a configuration file as shown in Section~\ref{sec:design}. In this case, the developer leverages FlexOS\xspace' flexibility but not its exploration facilities. We note, however, that this ``expert'' approach has its limits: applications evolve over time, and a compartmentalization approach that is deemed optimal at a given time may not be suitable in the future~\cite{Gudka2015}. In this case, an exploration system such as FlexOS\xspace' can be of use for the expert to easily reconsider their approach in light of changing software.
\section{Evaluation}\label{sec:evaluation}
We aim to demonstrate the vast performance/safety design space enabled by FlexOS\xspace, assess the efficiency of the partial safety ordering exploration technique, and compare FlexOS\xspace' performance with the literature. To this end, we present an overview of the performance obtained with numerous safety configurations on three popular cloud applications (Redis, Nginx, and SQLite), as well as iPerf, a standard network stack benchmark. We demonstrate our design-space exploration technique with Redis and Nginx. Then, we compare selected SQLite configurations with Linux, CubicleOS~\cite{Sartakov2021}, a (non-flexible) compartmentalized LibOS\xspace, the SeL4~\cite{Klein2009}/Genode~\cite{Feske2021} microkernel, as well as Unikraft~\cite{Kuenzer2021}. Finally, we study raw isolation overheads in FlexOS\xspace: DSS efficiency and cross-compartment call gate latencies. We run experiments on an Intel Xeon Silver 4114 @2.2 GHz.
For each experiment we use 4 cores from the same socket, isolated with \emph{isolcpu}: 2 cores for the client (iPerf client/\texttt{redis-benchmark}, \texttt{wrk}) on the host, 1 core for the QEMU process, and 1 core per FlexOS\xspace vCPU. Hyperthreading is disabled.
\subsection{Design Space Exploration: Redis, Nginx}
We automatically generate and run a large set of configurations for Redis and Nginx using the Wayfinder~\cite{WAYFINDER} benchmarking platform. We fix the isolation mechanism to MPK with DSS and vary the number of compartments (1-3), the compartmentalized components (TCP/IP stack, libc, scheduler, application), as well as per-compartment software hardening (stack protector, UBSan, and KASan), for a total of 2x80 configurations.
\paragraph{Redis.}
The results are shown in Figure~\ref{fig:redis} (top), plotting Redis' GET throughput for each configuration. Overall we observe that FlexOS\xspace enables a very wide range of safety configurations with significant performance variation: there is one order of magnitude of difference between the configuration yielding the lowest throughput (292K~req/s) and the highest one (1.2M~req/s). Unsurprisingly, the configuration that disables isolation and hardening gives the highest throughput. Conversely, configurations with many compartments/hardening perform worst. Still, in between these two extremes, creating more compartments and enabling hardening has a variable impact on performance. For example, with two compartments and no hardening, isolating LwIP from the rest of the system leads to an 11\% performance hit, while that number reaches more than 43\% when the scheduler is the isolated component --- indicating extensive communication between user code and the scheduler. The same is true for hardening: with a single compartment, enabling hardening on the scheduler has a 24\% performance cost, while that cost is 42\% when hardening the Redis application code. The complexity of maximizing safety and performance becomes clearer when isolating several components: isolating LwIP from the scheduler from the rest differs by only a few percentage points from isolating LwIP together with the scheduler from the rest. Such ``isolation for free'' effects are caused by communication patterns: LwIP does not directly communicate with the scheduler, hence the ``cut'' is not on a hot path, and merging them into the same compartment brings little performance benefit. Thus, the performance does not entirely depend on the number of compartments or the number of components with hardening enabled, but rather on \emph{which} particular components are isolated/hardened and on their communication patterns. Such effects can be leveraged to maximize safety and performance.
\paragraph{Nginx.}
\begin{figure}
\center
\includegraphics[width=0.45\textwidth]{include/nginx-redis-scatter}
\caption{Nginx versus Redis normalized performance.}
\label{fig:nginx-redis-scatter}
\end{figure}
The results are shown in Figure~\ref{fig:redis} (bottom), plotting Nginx' HTTP throughput for each configuration. Overall we observe that the results span the same range of overhead as Redis (0-4.1x). However, the overheads do not follow the same distribution: 9 configurations have less than 20\% overhead in the Nginx case, but only 2 for Redis. Similarly, 32 configurations have less than 45\% overhead, versus only 20 for Redis. This can be explained by looking more closely at individual configurations.
Compared to Redis, isolating the scheduler is much less expensive (6\% versus 43\% for Redis), and the same goes for hardening (2\% versus 24\% for Redis). The costs, however, become similar as more hardening and isolation boundaries are added, because of bottleneck effects. This different distribution of costs is made clearer by Figure~\ref{fig:nginx-redis-scatter}, which compares the relative performance of configurations for Nginx and Redis (same dataset as Figure~\ref{fig:redis}). These differences show that isolating and hardening the \textit{same components} on two networked applications results in uneven, difficult-to-predict slowdowns. Existing approaches that assume a one-size-fits-all safety configuration are therefore suboptimal; in contrast, FlexOS\xspace enables users to easily navigate the safety/performance trade-off inherent in their application.
\subsection{Partial Safety Ordering}
\begin{figure}
\center
\includegraphics[width=0.45\textwidth]{include/redis-poset}
\caption{Configurations poset for the Redis numbers (Figure~\ref{fig:redis}). Stars are the most secure configurations with performance >= 500k requests/s.}
\label{fig:redis-poset}
\end{figure}
We applied this technique to the Redis numbers from Figure~\ref{fig:redis}. We construct the poset presented in Figure~\ref{fig:redis-poset}, where each node is a Redis configuration, i.e., a column from Figure~\ref{fig:redis}. The node's color intensity indicates the configuration's performance, black being the fastest (1.2M~req/s) and slower configurations becoming gradually white (pure white representing 292K~req/s). The fastest configuration is the one with no isolation and no hardening (\BC{A} on Figure~\ref{fig:redis-poset}). Other nodes in the center of the plot represent the addition of compartments, still with no hardening: separating from the rest of the system either the scheduler~\BC{B}, lwip~\BC{C}, or Redis+newlib~\BC{D}, and a 3-compartment scenario~\BC{E}. From these 5 basic compartmentalization strategies stem 5 ``branches''. The nodes in each branch represent various combinations of per-component software hardening. The nodes' color evolution indicates the variable performance impact of creating new compartments and stacking software hardening on components. We set a minimum required performance of 500K~req/s, and let partial safety ordering identify the safest configurations satisfying that constraint, indicated with stars on Figure~\ref{fig:redis-poset}. In this case, the technique can prune the configuration space from 80 to 5 configurations, helping the user easily pick the most appropriate one.
\subsection{Batching Effects: Network Stack Throughput}
We port a simple iPerf server to FlexOS\xspace and use it to measure the network performance of our system. We fix the compartmentalization to the following scenario: the iPerf application code is placed within one compartment, and the rest of the system (including the network stack) is placed in a second compartment. We apply no software hardening, and configure the iPerf server to pass buffers of varying sizes when calling \texttt{recv} on the socket. We measure the achieved throughput using an iPerf client for FlexOS\xspace without isolation, with MPK (sharing or protecting the call stack), as well as with EPT. We run vanilla Unikraft as the baseline.
\begin{figure}
\center
\includegraphics[width=\linewidth]{include/iperf}
\caption{ Network stack throughput (iPerf) with Unikraft (baseline), FlexOS\xspace without isolation, with two compartments backed by MPK (\textit{-light} = shared call stacks, \textit{-dss} = protected and DSS), and with two compartments backed by EPT. }
\label{fig:iperf}
\end{figure}
The results are shown in Figure~\ref{fig:iperf}. FlexOS\xspace without isolation performs similarly to Unikraft, confirming that users ``only pay for what they get''. FlexOS\xspace' isolation slowdown manifests for small payload sizes, for which the domain crossing latency is an important bottleneck in the request processing time. Depending on the buffer size, EPT isolation is 1.1-2.2x slower than MPK with DSS, which is itself 0-1.5x slower than the baseline without isolation. MPK with shared stacks incurs a 0-1.3x slowdown. Although MPK with DSS pays the price of a stack switch (see Figure~\ref{fig:gate-latency}), it is more secure than fully sharing the stack and still faster than fully isolating the stack while moving shared data to the heap (see Figure~\ref{fig:dss}). Batching effects clearly manifest as the payload size increases: MPK's performance quickly becomes similar to the baseline's starting from 128~B. EPT's isolation being more costly, the payload size needs to be 256~B or above for its performance to reach about 90\% of the baseline's. These results illustrate that, depending on the size of the payload and the frequency of domain crossings, all backends can constitute a valid solution to a given problem.
\subsection{Filesystem Intensive Workloads: SQLite}
\begin{figure}
\center
\includegraphics[width=\linewidth]{include/sqlite}
\caption{ Time to perform 5000 INSERT queries with SQLite on Unikraft, FlexOS\xspace, Linux, SeL4 (with the Genode system), and CubicleOS. The isolation profile is shown on the x axis (NONE: no isolation, MPK3: MPK with three compartments, EPT2: two compartments with EPT, PT2/3: two/three compartments with page-table-based isolation). }
\label{fig:sqlite}
\end{figure}
We evaluate the performance of FlexOS\xspace with filesystem intensive workloads and compare it to vanilla Unikraft, Linux, SeL4~\cite{Klein2009} with the Genode~\cite{Feske2021} system, and CubicleOS~\cite{Sartakov2021}. Although both FlexOS\xspace and CubicleOS extend Unikraft, the former runs in a standard QEMU/KVM VM while the latter is implemented on top of \emph{linuxu}, Unikraft's Linux userland debug platform. The Unikraft baseline numbers thus cover both cases. We evaluate two scenarios: one with two components (EPT2, PT2), where the filesystem is isolated from the application, and one with three components (MPK3, PT3), where the filesystem is isolated from the time subsystem from the rest of the system. This benchmark performs 5000 INSERT queries sequentially. To increase pressure on the filesystem, each query is in a separate transaction. The results are shown in Figure~\ref{fig:sqlite}. Compared to the baseline, FlexOS\xspace without isolation adds no overhead, and MPK3 adds an overhead of 2x. This is still significantly faster than the userland Linux version, which performs a large number of system calls, highlighting the benefits of the LibOS\xspace basis. Somewhat surprisingly, FlexOS\xspace with EPT2 performs almost identically to Linux. This is because the syscall latency is almost identical to the EPT2 gate latency on this system (see Figure~\ref{fig:gate-latency}).
Compared to SeL4, FlexOS\xspace is 3.1x faster with MPK3, and 2x faster with EPT2. Compared to CubicleOS, FlexOS\xspace is an order of magnitude faster. This is due to (1) CubicleOS relying on \textit{linuxu}, i.e., running in Ring 3 and performing Linux system calls for privileged operations, (2) CubicleOS not implementing MPK support and relying on Linux \texttt{pkey\_mprotect} system calls (making domain transitions orders of magnitude more expensive and the TCB thousands of times larger), and (3) CubicleOS' \textit{trap-and-map} approach (which FlexOS\xspace avoids with shared data annotations). Even compared to its baseline without isolation, CubicleOS with MPK3 adds an overhead of 2.4x, about 30\% more than FlexOS\xspace. CubicleOS without isolation is faster than the Unikraft linuxu baseline; this is because it uses the Lea~\cite{Lea1996} memory allocator, which behaves better than Unikraft's TLSF~\cite{Masmano2004} allocator in this benchmark.
\subsection{Overheads: Stack Allocations, Gate Latencies}
\label{subsec:microbench}
In FlexOS\xspace, stack data can be shared via heap allocations, via the DSS (trading space for performance), or by sharing the stack entirely (trading safety for performance). To illustrate the benefits of the DSS, we measure, for each of these mechanisms, the execution time of a function that allocates 1 to 3 shared stack variables (of size 1 byte) and returns immediately.
\begin{figure}[]
\centering
\subfloat[Allocation latencies]{
\includegraphics[width=.22\textwidth]{include/dss}
\label{fig:dss}
}
\subfloat[Gate latencies]{
\includegraphics[width=.22\textwidth]{include/latency}
\label{fig:gate-latency}
}
\caption{FlexOS\xspace latency microbenchmarks.}
\end{figure}
The results are shown in Figure~\ref{fig:dss}. Heap-based stack allocations are one to two orders of magnitude (100-300+ cycles) slower than typical stack allocations (a constant 2 cycles). This is not surprising, since general-purpose allocators typically feature unbounded execution time. This cost increases with the number of variables, since each variable triggers a separate call to \texttt{malloc}. The DSS matches the shared stack in performance, confirming that it combines the safety of isolation with the performance of traditional stack allocations. The memory footprint increase due to the DSS is modest, as FlexOS\xspace uses small stacks (8 pages). For example, an instance with Redis (8 threads) has a space overhead of 288~KB. The DSS is a data sharing strategy and does not remove the need to perform stack switches. Another source of compartmentalization overhead is gate latency. To illustrate the raw performance of FlexOS\xspace' gates, we measure the gate latency of MPK stack-sharing gates (\textit{-light}), normal MPK gates, and EPT gates. We compare them with the latency of a function call, and of a Linux system call (with and without KPTI, \textit{-nokpti}). The results are shown in Figure~\ref{fig:gate-latency}. MPK light gates are 80\% faster than normal MPK gates, and 7.6x faster than EPT gates, as they correspond to the cost of raw \texttt{wrpkru} instructions. EPT latencies are similar to syscall latencies without KPTI, illustrating the practicality of the EPT backend.
\section{Use Cases for Isolation Flexibility}
FlexOS\xspace enables developers to seamlessly experiment with various safety configurations for their OS.
An obvious use case we presented throughout this paper is the specialization of the OS' safety strategy for a given application: manually or semi-automatically selecting, among the vast design space unlocked by FlexOS\xspace, the most suitable configuration for a particular use case with given safety/performance constraints. Still, there are many other ways in which this flexibility can be used; we detail some of them next.
\paragraph{Quickly Isolate Exploitable Libraries.}
Consider the period between the full disclosure of a vulnerability and the release of its fix, or the embargo period when vulnerabilities are disclosed only to affected vendors, but not to the general public; these periods can last from weeks up to years, during which vulnerable software runs in the wild. With FlexOS\xspace, it takes seconds to create a new binary that isolates a vulnerable library into its own compartment (e.g., EPT + hardening) to at least mitigate the effects of exploits; an automated system could be created to respond to known vulnerabilities by recompiling production software to isolate certain libraries, similar to Self-Certifying Alerts~\cite{sca}. Such flexibility improves over the state of the art by avoiding a loss of functionality (e.g., compared to Senx~\cite{Huang2019}), and by providing excellent resistance to polymorphic variations of vulnerabilities (e.g., compared to filters~\cite{Costa2007}).
\paragraph{Quickly React to Hardware Protections Breaking Down.}
Recent hardware vulnerabilities~\cite{MELTDOWN, SPECTRE} showed that hardware-backed isolation mechanisms are not foolproof. The corresponding fixes may require significant engineering and redesign efforts (e.g., KPTI for Meltdown), leading to long vulnerability windows. FlexOS\xspace itself is not immune to hardware vulnerabilities. In this case, however, its ability to easily switch between protection techniques comes in handy: by supporting a wide range of isolation primitives relying on a range of different hardware, switching the isolation mechanism from a vulnerable to a non-vulnerable one is just a matter of rebuilding the LibOS\xspace with a different configuration (snippet in \S\ref{sec:design}), i.e., the engineering cost is nil.
\paragraph{As Secure as You Can Afford.}
Consider a service provider who wishes to offer the best possible security as long as its server can keep up with the client load. A natural approach would be to run the safest combination that copes with peak load, as we suggested in our Redis evaluation; this means that in periods of low load the system has idle compute power. With its capacity to quickly switch safety configurations, FlexOS\xspace enables another approach: to run, at any time, the safest configuration that can sustain the actual load. This makes attacks much harder as long as the system is under-loaded, but gracefully switches off defenses as load increases to respect SLAs. Another approach is to couple this with software load balancers to triage users into likely benign or malicious, sending them to machines running faster or safer software accordingly.
\paragraph{Dealing with Crashed Software.}
Vulnerabilities are a fact of life, and the standard approach is to quickly restart crashed software and to examine the faults in the background. When such a crash is detected (e.g., a memory error), with FlexOS\xspace it is wiser to restart with a safer configuration of the same software, to ensure that any vulnerability is not turned into an exploit.
\paragraph{Incremental Verification.}
Individual components of FlexOS\xspace can be verified and isolated from the rest of the system. In this way, one can obtain strong guarantees on pre-conditions and ensure that verified properties hold even when mixed with unverified components, something that is not possible with monolithic operating systems~\cite{Li2021}. Over time, the entire system could be verified, gradually increasing the guarantees of the system.
\paragraph{Deployment to Heterogeneous Hardware.}
The flexibility of FlexOS\xspace' mechanisms can also come in very handy when deploying on heterogeneous hardware. Some servers might offer MPK support, for example, others CHERI, and others only the classical MMU. In every case, FlexOS\xspace is able to get the best from the available hardware without major rewrites, and without requiring insider knowledge from application developers.
\section{Related Work}\label{sec:related-works}
\paragraph{Improving OS safety.}
Previous work proposed to address the safety issues of monolithic OSes by reducing the TCB through separation~\cite{Rushby1981, AlvesFoss2006}, micro-kernels~\cite{Golub1992, Herder2006}, and safe languages~\cite{Boos2020, Narayanan2020a, Hunt2007, Madhavapeddy2013, Cutler2018}. In SASOSes, internal isolation may be traded off for performance~\cite{Kantee2012, Kivity2014, Olivier2019, Kuenzer2021}, provided with traditional page tables~\cite{Chase1994, Leslie1996, Heiser1999, Nikolaev2020}, or provided with intra-AS hardware isolation mechanisms~\cite{Sartakov2021, Li2020, Sung2020, Olivier2020}. Other research efforts strive to speed up IPC in microkernels~\cite{Gu2020, Mi2019}, or to redesign monolithic OSes entirely~\cite{Haertig1997, Swift2002, Castro2009, LeVasseur2004, BoydWickizer2010, Nikolaev2013,Dautenhahn2015}. Overall, each of these approaches represents a single point or a few points in the OS safety/performance design space and lacks the flexibility of FlexOS\xspace to automatically specialize for safety or performance. LibrettOS~\cite{Nikolaev2020} allows a LibOS\xspace to switch between SASOS and microkernel modes, but remains limited to a small subset of the safety/performance design space.
\paragraph{Compartmentalization Frameworks.}
Several compartmentalization frameworks have been proposed recently~\cite{VahldiekOberwagner2019, Hedayati2019, Narayan2020, Schrammel2020, Gudka2015, Liu2017, Bauer2021, Sartakov2021}. Contrary to FlexOS\xspace, none focuses on flexible isolation. Regarding application porting, most~\cite{VahldiekOberwagner2019, Hedayati2019, Narayan2020, Schrammel2020} rely on code annotations. A few studies provide various degrees of porting automation through data flow analysis~\cite{Gudka2015, Liu2017, Bauer2021}, but are typically bound to numerous limitations due to the complexity of breaking down monolithic code bases. Nevertheless, some of their principles can be applied to increase the degree of automation of FlexOS\xspace' porting process -- something we scope out as future work. CubicleOS~\cite{Sartakov2021} proposes a \textit{trap-and-map} mechanism to limit the porting effort, but this comes at a high cost, is specific to MPK, and is not entirely automated. Further, as shown in our evaluation, CubicleOS' reliance on Unikraft's \emph{linuxu} leads to suboptimal performance.
\section{Conclusion}\label{sec:conclusion}
The isolation strategy of today's OSes is mostly fixed at design time. This lack of flexibility is problematic in many scenarios.
We propose FlexOS\xspace, an OS whose isolation strategy is decoupled from its design. We augment the historical capacity of the LibOS\xspace to specialize towards performance with the ability to specialize for safety: fundamental decisions such as the compartmentalization granularity and which isolation mechanism to use are deferred to build time. FlexOS\xspace ships with a semi-automated exploration strategy helping the user navigate the vast configuration space the system unlocks. FlexOS\xspace is available online at \url{https://project-flexos.github.io} under an open source license. In future work, we intend to add more isolation backend implementations to FlexOS\xspace, including CHERI and SGX, as well as support for more software hardening techniques. Another direction of future work is to create a formal basis to help users navigate the safety configuration space. This would enable, among other things, embedding formally verified components in FlexOS\xspace configurations while preserving their proven properties.
\section{Artifact Appendix}
\subsection{Abstract}
This artifact contains the source code of FlexOS\xspace, the proof-of-concept of our flexible isolation approach, along with all scripts necessary to reproduce the paper's measurements and plots. The goal of this artifact is to allow readers to reproduce the paper's results, and to build new research on top of FlexOS\xspace.
\subsection{Artifact Check-List (meta-information)}
{\small
\begin{itemize}
\item {\bf Program: } the FlexOS\xspace library OS, benchmarked with standard application benchmarks (\texttt{wrk} and \texttt{redis-benchmark}), a custom SQLite benchmark, and custom microbenchmarks.
\item {\bf Binary: } automatically built from source.
\item {\bf Run-time environment:} GNU/Linux Debian 11 (Bullseye), with KVM and Docker. Other dependencies are automatically installed.
\item {\bf Hardware:} Intel® Xeon® Silver 4114 @ 2.20 GHz, or any machine with more than 8 cores that supports Intel MPK, typically Intel® Xeon® Scalable Processors starting with the Skylake generation. At least 128.0~GB of RAM.
\item {\bf Metrics:} requests/s, Gb/s, queries/s, execution time, gate latencies.
\item {\bf Output:} performance data, FlexOS\xspace images.
\item {\bf Experiments:} \Cref{fig:redis,fig:nginx-redis-scatter,fig:iperf,fig:sqlite,fig:dss,fig:gate-latency} are reproducible automatically. \Cref{fig:redis-poset} is reproducible manually (it is only a graph). \Cref{tab:porting} is also reproducible manually.
\item {\bf How much disk space required (approximately)?:} 100.0~GB.
\item {\bf How much time is needed to prepare workflow (approximately)?:} 6-12~hours (\textit{automated}).
\item {\bf How much time is needed to complete experiments (approximately)?:} 4-5~hours (\textit{automated}), and up to 1.5~hours (\textit{manual}).
\item {\bf Publicly available?:} Yes.
\item {\bf Code licenses (if publicly available)?: } BSD-3-clause.
\item {\bf Workflow framework used?: } Wayfinder~\cite{WAYFINDER}, Docker, scripts.
\item {\bf Archived (provide DOI)?: } \texttt{10.5281/zenodo.5748505}
\end{itemize}
}
\subsection{Description}
\subsubsection{How to access}
The latest version of the artifact can be found on GitHub\footnote{\url{https://github.com/project-flexos/asplos22-ae}}. Alternatively, individual releases can be downloaded from our Zenodo archive\footnote{\url{https://zenodo.org/record/5748505}}.
Note that the artifact evaluation (AE) GitHub repository only contains part of the artifact, namely the scripts to reproduce this paper's experiments. The core of FlexOS\xspace, libraries, and applications are all available in the \texttt{project-flexos} organization, as documented in the AE repository. In order to precisely reproduce this paper's measurements, we gave ASPLOS'22 reviewers access to our server, an Intel® Xeon® Silver 4114 with 128.0 GB of RAM, Debian 11.1, and Linux version \texttt{5.10.70-1}. Nonetheless, access to this particular setup is not required to run this artifact; hardware and software dependencies are detailed further below.
\subsubsection{Hardware dependencies}
An Intel® Xeon® Silver 4114 @ 2.20 GHz, or any machine that supports Intel MPK, typically any Intel® Xeon® Scalable Processor starting with the Skylake generation. The processor must have more than 8 cores. 128.0~GB of RAM are necessary to run the experiments corresponding to \Cref{fig:redis}, as all images are built and stored in RAM by our tool in order to achieve reasonable preparation times. Note that this number of cores and amount of RAM are required to reproduce this paper's results, \textit{not} to run FlexOS\xspace.
\subsubsection{Software dependencies}
\label{ssec:deps}
This artifact has been tested with Debian GNU/Linux 11 (Bullseye) with Linux kernel version \texttt{5.10.70-1} (KVM enabled) and Docker version \texttt{20.10.10} (or any recent version). All other dependencies are automatically installed by the artifact's scripts.
\subsubsection{Data sets}
All data sets and benchmarks are included in the artifact, generated automatically, or downloaded automatically by the run scripts.
\subsection{Installation}
Before running any experiment, prepare your host following the recommendations detailed in \S\ref{ssec:deps}. Note that all commands below assume superuser permissions. Once the system is set up, clone our AE repository:
\begin{tcolorbox}[colback=lightgrey,boxrule=0pt,arc=0pt,left=6pt]
{\scriptsize
\begin{Verbatim}[commandchars=\\\{\}]
$ git clone https://github.com/project-flexos/asplos22-ae.git
\end{Verbatim}
}
\end{tcolorbox}
Then, generate a GitHub personal access token with the ``\texttt{public\_repo}'' permission and set it in the Makefiles. You can do so for the entire system by exporting an environment variable:
\begin{tcolorbox}[colback=lightgrey,boxrule=0pt,arc=0pt,left=6pt]
{\scriptsize
\begin{Verbatim}[commandchars=\\\{\}]
$ export KRAFT_TOKEN="<your token>"
\end{Verbatim}
}
\end{tcolorbox}
Alternatively, you can also set it individually in every Makefile by editing the \texttt{KRAFT\_TOKEN} variable:
\begin{tcolorbox}[colback=lightgrey,boxrule=0pt,arc=0pt,left=6pt]
{\scriptsize
\begin{Verbatim}[commandchars=\\\{\}]
...
#
# Parameters
#
KRAFT_TOKEN ?= <your token>
...
\end{Verbatim}
}
\end{tcolorbox}
Note that if \texttt{KRAFT\_TOKEN} is set system-wide, definitions in Makefiles will not override it. After this, install dependencies on the host:
\begin{tcolorbox}[colback=lightgrey,boxrule=0pt,arc=0pt,left=6pt]
{\scriptsize
\begin{Verbatim}[commandchars=\\\{\}]
$ make dependencies
\end{Verbatim}
}
\end{tcolorbox}
\subsection{Experiment Workflow}
\label{subsec:workflow}
All experiments should be prepared first. The prepare step installs necessary tools and downloads additional resources before the experiments can run.
This can be done for a single experiment or for all experiments, for example:
\begin{tcolorbox}[colback=lightgrey,boxrule=0pt,arc=0pt,left=6pt]
{\scriptsize
\begin{Verbatim}[commandchars=\\\{\}]
$ make prepare-fig-07 # prepare experiment 7
$ make prepare        # prepare all experiments
\end{Verbatim}
}
\end{tcolorbox}
The automated preparation of all experiments takes on average 6-12~hours on our setup. This long preparation time is due to the generation of all images. Once one or more experiments have been prepared, they can be run using a similar syntax:
\begin{tcolorbox}[colback=lightgrey,boxrule=0pt,arc=0pt,left=6pt]
{\scriptsize
\begin{Verbatim}[commandchars=\\\{\}]
$ make run-fig-07 # run experiment 7
$ make run        # run all experiments
\end{Verbatim}
}
\end{tcolorbox}
Running all automated experiments takes on average 4-5~hours on our setup. The plot for \Cref{fig:redis-poset} is not automated, and neither is the measurement of LoC changes for \Cref{tab:porting}. We estimate that the combination of the two manual items may take up to 1.5~hours of manual work. Automated experiments generate the relevant experimental results within the folder of the specific experiment. To plot one or more experiment figures, use, for example:
\begin{tcolorbox}[colback=lightgrey,boxrule=0pt,arc=0pt,left=6pt]
{\scriptsize
\begin{Verbatim}[commandchars=\\\{\}]
$ make plot-fig-07 # plot experiment 7
$ make plot        # plot all experiments
\end{Verbatim}
}
\end{tcolorbox}
You can clean, or ``properclean'' to completely reset any preparation, with \texttt{make clean} or \texttt{make properclean}, for individual or all experiments, for example:
\begin{tcolorbox}[colback=lightgrey,boxrule=0pt,arc=0pt,left=6pt]
{\scriptsize
\begin{Verbatim}[commandchars=\\\{\}]
$ make clean-fig-07
$ make properclean-fig-07
$ make clean
$ make properclean
\end{Verbatim}
}
\end{tcolorbox}
The clean rule removes results and plots; the properclean rule additionally deletes containers.
\subsection{Evaluation and Expected Results}
Reproducing experiments on the same machine should produce the same results as in the paper. On other machines, we expect different absolute numbers but similar ordering. On recent processors that benefit from hardware mitigations for transient execution attacks, we expect the EPT, Linux, and SeL4 measurements to improve relative to the MPK baseline.
\subsection{Experiment Customization}
Reviewers may use the base FlexOS\xspace Docker container to access a clean FlexOS\xspace development environment, port their own applications, and build custom images. Instructions to build the base FlexOS\xspace Docker image, port applications, and build custom images are available in the \texttt{README.md} file of our main AE repository\footnote{\url{https://github.com/project-flexos/asplos22-ae/blob/main/README.md}}.
\subsection{Notes}
Some experiments have a slightly different workflow compared to the one described in \S\ref{subsec:workflow}. \Cref{fig:redis} requires you to set \texttt{HOST\_CORES} with a set of cores to be used for the experiment. \Cref{fig:nginx-redis-scatter} is only a plot and requires some manual steps. \Cref{fig:gate-latency} requires a reboot of the machine with different kernel parameters. \Cref{tab:porting} is manual. In all of these cases, the local \texttt{README.md} provides appropriate explanations.
In general, the top-level and individual \texttt{README.md} files of our artifact contain more precise information on experiment timings, repository structure, setup requirements, and potential issues and solutions. We strongly recommend a careful read of these instructions before starting to reproduce experiments.
\subsection{Methodology}
Submission, reviewing and badging methodology:
\begin{itemize}
\item \url{https://acm.org/publications/policies/artifact-review-badging}
\item \url{http://cTuning.org/ae/submission-20201122.html}
\item \url{http://cTuning.org/ae/reviewing-20201122.html}
\end{itemize}
\section{MSpec\xspace specification}
\label{appendix:a}
A detailed overview of the MSpec\xspace specification syntax is given below:
\begin{scriptsize}
\begin{Verbatim}[commandchars=#+!]
<#userinput+[Memory Access]!> ::= <access modifier>? {<memory model>}
<#userinput+[Call]!> ::= <execution modifier>? {<execution model>}
<#userinput+[API]!> ::= {<execution model>}
<#userinput+[Requires]!> ::= <access modifier>? {<memory model>} | \
      {<execution modifier>? \
      {<execution model>}}
<memory model> ::= (#userinput+<ptr>!, <basic access modifier>, \
      #userinput+<size>!, #userinput+<memtype>!)
<access modifier> ::= <basic access modifier>|#userinput+R*!|#userinput+W*!|#userinput+U!
<basic access modifier> ::= #userinput+R!|#userinput+W!
<execution modifier> ::= #userinput+U!|#userinput+X!|#userinput+X*!
<execution model> ::= (#userinput+<ptr>!, #userinput+<call type>!)
\end{Verbatim}
\end{scriptsize}
We use \texttt{\{\}} to mark repetition, \texttt{?} for optional terms, and bold to mark terminals.
\texttt{[Memory Access]} specifies memory access properties for the component. \textit{R/W} grants read/write access to shared memory, while \textit{R*/W*} grants read/write access to all memory. For a more fine-grained specification, the tuple \textit{(<ptr>, <basic access modifier>, <size>, <memtype>)} can be used to enact read/write properties for a \textit{memtype} element. A \textit{memtype} element can be a memory address or segment.
\texttt{[Call]} adds properties for calls to outer components. We have a coarse-grained specification: \textit{U} means no calls, \textit{X} means that the component respects the control flow, and \textit{X*} means that the component may jump-access anywhere in outer components without constraints (e.g., it may change the stack return address through a buffer overflow). The fine-grained specification uses a tuple to specify the symbols or addresses that the component may jump-access. \textit{call type} can be either \textit{SYMB} for a symbol or \textit{ADDR} for an address. Note that both \texttt{[Memory Access]} and \texttt{[Call]} should be used to describe the component's behavior under normal but also adversarial operation: a component written in a memory-unsafe language with no particular level of verification should be marked as able to read/write/jump anywhere in memory.
\texttt{[API]} is used to register the API provided to other components. Both symbols and addresses may be specified. The exported symbols may later be used in the specification of other components. They are meant to be used by components that are closely related.
\texttt{[Requires]} specifies preconditions for outer components. It encapsulates the maximum \texttt{[Memory Access]} and \texttt{[Call]} properties that outer components are allowed to have when calling the current component in the same compartment.
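For illustration, a hypothetical MSpec\xspace entry for an unverified network stack component written in memory-unsafe C could look as follows; the symbol names, addresses, and concrete syntax are invented for this example and may differ from the actual specification format. The component is marked as able to write and jump anywhere under adversarial operation (\textit{W*}, \textit{X*}), exports two API symbols, and requires that outer callers in the same compartment have at most read access to a shared receive segment.
\begin{scriptsize}
\begin{Verbatim}
[Memory Access] W*
[Call] X*
[API] {(lwip_read, SYMB), (lwip_write, SYMB)}
[Requires] R {(0xffffa000, R, 4096, SEGMENT)}
\end{Verbatim}
\end{scriptsize}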
\section*{Acknowledgements}
We would like to thank the anonymous reviewers, and our shepherd, Gerd Zellweger, for their comments and insights. Similar thanks go to our colleague Marc Rittinghaus for his insights. We are immensely grateful to the Unikraft OSS community for their past and ongoing contributions. This work was funded by a studentship from NEC Labs Europe, EU H2020 grants 825377 (UNICORE), 871793 (ACCORDION) and 758815 (CORNET), as well as the UK's EPSRC grants EP/V012134/1 (UniFaaS) and EP/V000225/1 (SCorCH). UPB authors were partly supported by VMware gift funding.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
The prevalent paradigm of text entry on computing devices is sequential typing of characters. Word completion and prediction can theoretically save up to 40-50\% of keystrokes when 3-5 predictions are provided \cite{trnka2008evaluating, fowler2015effects}. This reduces the motor and cognitive demand of entering text, especially on devices where typing is difficult, e.g., phones. In augmentative and alternative communication (AAC) use cases such as eye-gaze keyboards for severely motor-impaired individuals, the cost per keystroke is so high that there is a desire to save as many keystrokes as possible. Gaze-typing requires the user to precisely control the direction and timing of gaze for each keystroke, resulting in an extremely low text-entry speed of 8-10 words per minute and severely limiting real-time communication \cite{waller2019telling}. A text-entry paradigm with a substantially higher keystroke saving rate (KSR) can reduce motor demand and thereby benefit AAC usage in real-time communication.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figs/sf-naacl-overview.png}
\caption{Our approach to abbreviation expansion based on an LLM with context, compared to one without. The conversation context (e.g., a previous turn of the conversation) along with the abbreviation of the intended phrase forms the LLM's input. Sampled continuations from the model are filtered to discard those that do not match the abbreviation. The top-5 options after sorting by frequency are presented. }
\label{fig:fig1}
\end{figure}
One potential paradigm is ``SMS language'', a spontaneously-evolved system for saving keystrokes in which each word is abbreviated as a single letter, such as in the well-known abbreviations {\it sg} for {\it sounds good} and {\it ttyl} for {\it talk to you later} \cite{anjaneyulu2013glossary}. SMS language features a high KSR (75-80\%), but is limited to a small, closed set of common phrases, mostly six words or shorter. Its abbreviation scheme is not applied to longer or less frequent phrases because such abbreviations would be hard for the recipient to decipher. For example, the abbreviation {\it iipitb} is highly ambiguous and may represent many possible phrases, e.g., \textit{it is pouring in the bay} and \textit{it is pretty in the backyard} (see Figure~\ref{fig:fig1} for more examples). Some existing AAC systems support abbreviation expansion (e.g., \citet{TobiiAbbreviationExpansion}), but are limited by hardcoded, closed phrase sets. The current study is based on the insight that although decoding open-set phrases from abbreviations is hard {\it without context} due to ambiguity, providing conversational context significantly constrains the space of likely phrases, as shown by the example in Figure~\ref{fig:fig1} (\textit{it is playing in the backyard}). Hence we propose a high-KSR abbreviation scheme that focuses on conversational scenarios. We apply this scheme to existing dialog datasets and create datasets for abbreviation expansion (AE). This allows us to study whether LLMs, trained on web text including conversational data, can enable AE and benefit from added context. We take a 64B-parameter LLM and compare zero-shot, few-shot, and fine-tuning performance on the AE task. Additionally, we simulate typing noise to study the tolerance of the approach to typos.
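Throughout, we quantify keystroke savings using the KSR under its usual definition in the text-entry literature:
\[
\mathrm{KSR} = \left(1 - \frac{n_{\mathrm{abbrev}}}{n_{\mathrm{full}}}\right) \times 100\%,
\]
where $n_{\mathrm{abbrev}}$ and $n_{\mathrm{full}}$ denote the numbers of keystrokes needed to enter the abbreviation and the full phrase, respectively. For the example above, typing {\it iipitb} (6 keystrokes) instead of {\it it is playing in the backyard} (29 keystrokes, including spaces) yields $\mathrm{KSR} = (1 - 6/29) \times 100\% \approx 79\%$.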
The main contributions of our work are: \begin{enumerate}[wide]\itemsep-0.05cm \vspace{-0.4cm} \item Demonstrating the potential of abbreviation expansion using LLMs aided by conversational context for highly-abbreviated text entry, while measuring the effects of different amounts of context and different dialog turns. \item Describing a high-KSR abbreviation scheme, a method for simulating typing noise, and conversation datasets based on these. \item Comparing \textit{zero-shot}, \textit{few-shot}, and model fine-tuning approaches for the AE task and their tolerance to typo noise. \end{enumerate} \section{Related Work} \paragraph{Abbreviation expansion for text entry.} Previous research on aiding text entry through AE used abbreviation schemes such as using only content words \cite{demasco1992generating}, discarding certain vowels and consonants \cite{shieber2007abbreviated}, and flexible letter-saving schemes \cite{pini2010text,adhikary2021accelerating,gorman2021structured}. Spontaneous abbreviation schemes primarily omit vowels, repeated consonants, last characters, and spaces, and lead to modest KSRs (e.g., 25-40\% in \citealt{willis2005probabilistic} and 21\% in \citealt{adhikary2021accelerating}). The low KSR of such schemes can be attributed to the implicit need for a human reader to decode the phrases without significant cognitive burden. N-gram models and neural language models (LMs) have been applied to expanding abbreviations for these relatively low-KSR schemes. Using LSTM models and context, \citet{gorman2021structured} achieve a word error rate of 1.5\%. \citet{adhikary2021accelerating} report a 24.2\% top-5 sentence error rate when decoding abbreviations using an RNN to augment an n-gram LM. Our approach is a step towards using automation and context to expand abbreviations at a higher KSR, close to that of SMS language. \paragraph{Large language model prompting and fine-tuning.} Our approach builds on prior work on LLMs, including few-shot prompting, fine-tuning, and conversation models \cite{raffel2019exploring,brown2020language,adiwardana2020towards,roller2020recipes}. We focus primarily on few-shot prompting \cite{brown2020language} and fine-tuning~\cite{ruder2021lmfine-tuning}. \textit{Few-shot} prompting uses a text description of a task along with a small number of examples for the task in the input text in order to elicit desired task responses from an LLM. In the \textit{zero-shot} scenario, no examples are provided. Prompting involves no updates to the model parameters. Model fine-tuning requires more data than prompting, but often leads to higher task accuracy than prompt engineering (e.g., \citealt{austin2021program, lester2021power}). For our AE task, data for fine-tuning can be synthesized from existing conversation datasets based on an abbreviation scheme (Sec.~\ref{sec:abbrev_regime}). Thus, we explore both prompting and fine-tuning and compare their performance. \paragraph{Assisting text entry with context.} Textual context has been exploited to aid email writing~\cite{kannan2016smart, chen2019gmail}. For text entry in AAC, \citet{wisenburn2008aac} demonstrated that providing noun phrases from a conversation partner's speech as selection options increases text-entry speed by 36.7\%. \citet{adhikary2019investigating} concluded that, with the currently attainable accuracy of ASR, partner speech can be valuable in improving language modeling for AAC text entry.
\citet{shen2022kwickchat} used a fine-tuned GPT-2 model \cite{radford2019language} to expand bags of keywords into full phrases in conversational contexts based on the ConvAI2 dataset \citep{dinan2020second} and reported a KSR of 77\% at a word error rate threshold of 0.65. Our current study differs from the previous studies in the following aspects. First, we provide an abbreviation scheme that allows greater user control over the exact phrase structure and wording. Second, we perform a detailed quantitative analysis of the combined predictive power of state-of-the-art LLMs and context awareness. \section{Methodology} \input{tab_finetune_datasets} \input{tab_ae_data_ex} \paragraph{Abbreviation Scheme.} \label{sec:abbrev_regime} Our abbreviation scheme differs from those of previous studies in that we optimize for KSR and do not expect a human reader to be able to easily decode the abbreviations. Additionally, it offers the benefit that each given phrase is mapped to a fixed abbreviation. The detailed rules for abbreviating phrases are as follows (see also the code sketch below): \vspace{-0.2cm} \begin{enumerate}[wide, labelindent=5pt]\itemsep0em \item Each word is abbreviated as its initial letter, unless the word contains an apostrophe (i.e., a contraction), in which case the word is split at the apostrophe and the initial letters of the splits are taken (e.g., {\it can't} $\rightarrow$ {\it ct}). This prevents abbreviations that are otherwise identical but semantically opposite (e.g., {\it can} vs. {\it can't}). \item All letters in the abbreviation are lowercase. \item Arabic numerals in a sentence are preserved (e.g., {\it see you at 10 o'clock} $\rightarrow$ {\it sya10oc}). \item Sentence-final punctuation is removed. Mid-sentence punctuation and special characters (e.g., {\it \#} and {\it \$}) are preserved to help constrain the structure of the sentence (e.g., {\it OK, but be quick.} $\rightarrow$ {\it o,bbq}). \end{enumerate} \subsection{Datasets for context-aware AE} \label{sec:dataset} We study modified versions of existing dialog datasets, which we converted for the context-aware AE task. We also describe how we simulate typos. \paragraph{Datasets.} Table~\ref{tab:dataset_table} summarizes the four datasets. We use their original train/dev/test splits in our experiments. The Turk Dialogues dataset \cite{vertanen2017towards} consists of crowd-sourced dialogs, each of which is exactly six turns in length. The dataset has typos and grammatical errors. We manually correct these and refer to the corrected dataset as \textbf{Turk Dialogues Corrected (TDC)}.\footnote{The corrected version is available in the file {\scriptsize\texttt{turk\_dialogues\_corrected.txt}} in Supplemental Data.} We use three more datasets: \textbf{DailyDialog} \cite{li2017dailydialog}, a dataset of everyday conversations; the \textbf{Cornell Movie Dialogues (CMD)} \cite{danescu2011chameleons}, based on movie scripts; and the \textbf{Turk AAC dataset (TAC)} \cite{vertanen2011imagination}. For evaluation on out-of-domain dialogs, we use the \textbf{TaskMaster-1 Self Dialogs (TMSD)} dataset \cite{byrne2019taskmaster}, a corpus of dialogs written by crowdworkers for task-oriented scenarios such as ordering pizza. TMSD is used only for evaluation, not for training or validation of the models. For DailyDialog, we remove 228 dialogs from the test split that duplicate conversations in the train split (see Appendix~\ref{sec:ddc_corrections}), which leads to what we call the \textbf{DailyDialog Corrected (DDC)} dataset. No correction is applied to the other datasets.
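To make the abbreviation scheme of Sec.~\ref{sec:abbrev_regime} concrete, the following is a minimal Python sketch of the four rules above. It is our own illustrative implementation rather than the tooling used to build the datasets; the function name and the tokenization details are assumptions.
\begin{verbatim}
import re

def abbreviate(phrase: str) -> str:
    """Illustrative sketch of the abbreviation rules: word-initial
    letters, contraction splitting, lowercasing, and preserved
    numerals and mid-sentence punctuation."""
    # Rule 4 (first half): drop sentence-final punctuation.
    phrase = re.sub(r"[.!?]+$", "", phrase.strip())
    pieces = []
    for token in phrase.split():
        m = re.match(r"^(\W*)([\w']*)(\W*)$", token)
        if not m:  # unusual token shapes are out of scope here
            pieces.append(token.lower())
            continue
        lead, core, trail = m.groups()
        if core.isdigit():
            abbrev = core             # Rule 3: keep Arabic numerals.
        elif "'" in core:
            # Rule 1: split contractions at the apostrophe,
            # keeping the initial letter of each part.
            abbrev = "".join(p[0] for p in core.split("'") if p)
        else:
            abbrev = core[:1]         # Rule 1: word-initial letter.
        # Rules 2 and 4 (second half): lowercase, keep punctuation.
        pieces.append(lead + abbrev.lower() + trail)
    return "".join(pieces)

assert abbreviate("OK, but be quick.") == "o,bbq"
assert abbreviate("see you at 10 o'clock") == "sya10oc"
\end{verbatim}
The two assertions reproduce the worked examples from the rules above.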
The TAC dataset contains only isolated phrases without any conversational-turn context. Hence we use it only for training. In all of our experiments, we combine data from the training splits of all four datasets when fine-tuning models. We perform evaluations on the TDC, DDC, CMD, and TMSD datasets. The TDC dataset is chosen as our primary benchmark because of its strict six-turn dialog structure. \paragraph{Modifications for the AE task.} The above-mentioned datasets are typically used to study dialog generation. For our scenario, we convert each turn of the conversation in these datasets into the following canonical format: \vspace{0.2cm} \begin{tabular}{ r l } \toprule \multirow{1}{4em}{Context:} & \{Content of the contextual turn\}\\ \multirow{1}{4em}{Shorthand:} & \{Abbreviation of \textit{next turn}\}\\ \multirow{1}{4em}{Full:} & \{Expanded content of \textit{next turn}\}\\ \hline \multirow{1}{4em}{Context:} & \{Would you like to sit down?\}\\ \multirow{1}{4em}{Shorthand:} & \{n,imfsu\}\\ \multirow{1}{4em}{Full:} & \{No, I'm fine standing up\}\\ \bottomrule \end{tabular} For the AE task, the context consists of one or more previous dialog turns. When context is absent (e.g., for the opening turn), the context part is omitted. For a multi-turn dialog, the n$^{th}$ (1-based) example contains the first (n-1) dialog turns as the context, as well as the shorthand and the full form of the n$^{th}$ turn. Thus, a 6-turn conversation yields six examples for the AE task. When multiple sentences are present in a single turn, we use only the first sentence for expansion; when a turn is used as context, all available sentences are used. Table~\ref{tab:ae_data_ex} shows examples generated from all six turns of a dialog from TDC. Each dialog in the TDC, DDC, and CMD datasets yields several examples covering different amounts of context. We create only 0-context-turn examples for the TAC dataset, since it contains only isolated phrases. \paragraph{Text-entry noise in AE datasets.} \label{sec:approach_noise} As with our AE scheme, the introduction of noise to the datasets is motivated by the AAC text-entry use case, in particular eye-gaze typing, which is error prone \cite{feit2017toward}. Here, misclicks occur frequently and must be taken into account when designing a gaze-driven text-entry system. To simulate this noise, we model eye-gaze typing as uncorrelated 2D Gaussian distributions around the intended key \cite{azenkot2012touch}. \begin{figure} \centering \vspace{-0.4cm} \includegraphics[width=\columnwidth]{figs/keyboard_layout.png} \caption{Keyboard layout for simulating noise in AE keypresses. The circles on the {\it f} key show $1\sigma$ around the mean for $\sigma \in \{0.3, 0.5\}$ in the 2D Gaussian distributions used to model typing noise. \label{fig:keyboard_layout_noise}} \vspace{-0.4cm} \end{figure} To simulate noise in the abbreviation input, we use a simplified rectangular-grid qwerty keyboard layout with 30 keys arranged in three rows and 10 columns. The keys are $1\times1$ squares with no gaps in between. The keystrokes for an intended key are drawn from a 2D Gaussian distribution centered on the center of the intended key, with standard deviation $\sigma$ equal in the two spatial dimensions. To model different levels of noise, we use three values of $\sigma$: 0.0 (i.e., the no-typo baseline), 0.3, and 0.5, which correspond to 0\%, 13\%, and 44\% character error rates, respectively.
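As a concrete illustration of this noise model, here is a small Python sketch under the stated assumptions (three rows of ten unit-square keys, uncorrelated 2D Gaussian jitter with equal $\sigma$ in both dimensions). The exact key arrangement, in particular the trailing symbols of the rows, is our own stand-in for the layout in Figure~\ref{fig:keyboard_layout_noise}.
\begin{verbatim}
import math
import random

# Hypothetical 3x10 grid; the trailing keys are stand-ins.
ROWS = ["qwertyuiop", "asdfghjkl'", "zxcvbnm,.?"]
CENTER = {ch: (r + 0.5, c + 0.5)
          for r, row in enumerate(ROWS) for c, ch in enumerate(row)}

def noisy_key(intended: str, sigma: float) -> str:
    """Draw a keypress from a 2D Gaussian centered on the intended
    key, then snap the landing point to the key containing it."""
    r0, c0 = CENTER[intended]
    r = min(max(math.floor(random.gauss(r0, sigma)), 0), 2)
    c = min(max(math.floor(random.gauss(c0, sigma)), 0), 9)
    return ROWS[r][c]

def add_typos(abbrev: str, sigma: float) -> str:
    return "".join(noisy_key(ch, sigma) if ch in CENTER else ch
                   for ch in abbrev)
\end{verbatim}
With $\sigma=0$ every draw lands on the intended key, recovering the no-typo baseline.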
Examples with simulated typos are shown in Table~\ref{tab:ae_data_ex}. \subsection{Large Language Model} \label{sec:app_model} One of our goals is to test whether \textit{zero-shot} and \textit{few-shot} prompting of LLMs is effective at the AE task without the need for supervised fine-tuning. Prompting is the method of eliciting desired task-specific responses from an LLM by including a natural-language description of the task and/or input-output examples of the task in the input string, without altering the model's weights \cite{brown2020language}. Zero- and few-shot prompting differ in whether any examples are included in the prompt to the LLM. For this, we use a decoder-only Transformer language model~\cite{vaswani2017attention} from the LaMDA~\cite{thoppilan2022lamda} family of models. Our experiments are based on the 64B-parameter model, unless otherwise specified. This model has $32$ Transformer layers, with $d_{model}=8192$, $d_{ff}=65536$, $h=128$, and $d_k=d_v=128$. The model was pre-trained on 2.97B public web documents, Wikipedia, and dialogs. The training data was tokenized with a SentencePiece vocabulary~\cite{kudo2018sentencepiece} of size 32K. We call this the \textbf{BaseLLM}. We also developed \textbf{fine-tuned versions} of this model for the AE task. The fine-tuning uses examples in the format shown in Table~\ref{tab:ae_data_ex}. Since the BaseLLM is a decoder-only model, and we use both the context and the abbreviation as triggers to the model during inference, we modify the loss so that it is calculated only on the tokens of the AE target, i.e., the full form to be predicted in the pair of curly brackets after ``Full:''. For both training and inference, we split the characters in the abbreviation with spaces to force SentencePiece to use per-character IDs. We tune\footnote{Appendix~\ref{sec:appx_finetune} and~\ref{sec:appx_char_split} provide details on fine-tuning and discuss the effect of character splitting.} two models: \textbf{FT-LLM}, on the combined AE datasets without typos, and \textbf{FTnoise-LLM}, on the version with simulated typos. Both use early stopping on a dev set consisting of combined examples from the dev splits of TAC and TDC (Table~\ref{tab:dataset_table}). \section{Experiments} \label{sec:experiments} \paragraph{Models.} We use and compare the following models in our different experiment settings. \textbf{Look-Up Table (LUT)}. As a straightforward, non-ML baseline, we compile a dictionary of 375,298 sentence-level abbreviations from the train splits of the datasets in Table~\ref{tab:dataset_table}. Each abbreviation maps to one or more phrases with their frequencies, leading to 447,249 unique abbreviation-sentence pairs. During evaluation, we map the query abbreviation to the top-5 expansion phrases (by frequency) using the dictionary, breaking ties randomly. \textbf{BaseLLM} (from Sec.~\ref{sec:app_model}). We study the BaseLLM in the \textit{zero-shot} and \textit{few-shot} (specifically \textit{4-shot}) settings\footnote{The prompts are prefixed with the natural-language instruction ``Given acronym, write the full phrase.'' when there is no context, or ``Given previous turn(s) of conversation and acronym of reply, write the full phrase.'' when there is context.}. The four examples are selected from the train split of the TDC dataset (see Appendix~\ref{sec:appx_4shot_ex}). We quantify the variability of the model on a set of 856 4-example prompt sequences from the train split of the TDC dataset.
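Stepping back to the fine-tuning setup of Sec.~\ref{sec:app_model}: the target-only loss can be sketched as follows. This is our own simplified PyTorch illustration, not the actual training code, and it assumes precomputed token-level labels and a 0/1 mask marking the ``Full:'' target span.
\begin{verbatim}
import torch
import torch.nn.functional as F

def target_only_loss(logits: torch.Tensor,
                     labels: torch.Tensor,
                     target_mask: torch.Tensor) -> torch.Tensor:
    """Cross-entropy restricted to the AE target: context and
    abbreviation tokens (mask == 0) contribute nothing.
    logits: (B, T, V); labels: (B, T); target_mask: (B, T) in {0, 1}."""
    per_token = F.cross_entropy(logits.transpose(1, 2), labels,
                                reduction="none")      # shape (B, T)
    return (per_token * target_mask).sum() / \
        target_mask.sum().clamp(min=1.0)
\end{verbatim}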
Among these 856 prompt sequences, the best-performing one on the dev set is denoted \textbf{BaseLLM$^{*}$}. We additionally compare \textbf{FTnoise-LLM}, tuned on data with simulated typos at noise level $\sigma = 0.3$ (see Appendix~\ref{sec:appx_noise.3_motiv}), and \textbf{FT-LLM}, tuned on AE data without noise, as described in Sec.~\ref{sec:app_model}. \textbf{T5 encoder-decoder}. For comparison with smaller models, we use the T5 encoder-decoder \textbf{small} (60M), \textbf{large} (770M), and \textbf{3B} parameter models, fine-tuned on AE data without noise, identically to FT-LLM. We evaluate the fine-tuned models in the setting without any explicit natural-language instructions (denoted ``no instr.'') unless mentioned otherwise. \noindent For all models, we perform random sampling with \textit{temperature=1.0} over the \textit{top\_k=40} candidates with the highest logits at each step. We decode \textit{128 samples} for each abbreviation unless otherwise specified. For each model and evaluation setting we report the standard deviations (SDs) of the metrics over 3 repeated runs. \paragraph{Studies.} For the BaseLLM, we study the variance in performance depending on the prompt selection. For all models, we sample multiple responses for each query; hence we study the effect of the number of sampled responses on AE accuracy and latency. We also compare the performance of the models with varying amounts of conversation context and with no context. To study the effect of typos, we compare the performance of the models on the noise-induced AE dataset. To measure the impact of model size on accuracy and latency, we also fine-tune and evaluate decoder-only LaMDA models with fewer than 64B parameters, specifically 4B, 8B, and 27B parameters. All these models were trained on the same data, so that model size constitutes the only difference. \input{tab_eval_datasets} \input{tab_res_table} \paragraph{Evaluation.} We evaluate only on conversation queries with abbreviation length $\le10$ characters. This encompasses the majority (85$\%$) of the dialog turns in the original datasets (Table~\ref{tab:eval_datasets}). Where applicable, we prepend the following natural-language instruction to the model input for the AE task: \textit{``Given previous turn(s) of conversation and acronym of reply, write the full phrase.''} Before calculating performance metrics, we filter the model's responses: we remove sentence-final punctuation, standardize whitespace to one space, lower-case, de-duplicate, and filter for a precise match of the abbreviation. The responses that pass the filtering are sorted by descending count. For evaluation with noise, we relax the filtering to allow matches to nearby characters on the keyboard. \paragraph{Metrics.} \textbf{Accuracy} measures whether any response expansion exactly matches the ground truth (with standardized letter-casing and whitespace, and final punctuation discarded). Additionally, we measure the \textbf{BLEU} score~\cite{papineni2002bleu} using the SacreBLEU library \cite{post2018call} as a more fine-grained metric of the similarity between AE options and the ground truth. For both metrics, we report performance on the top-5 responses after they are sorted by frequency. \textbf{Key Stroke Savings (KSR)} measures the number of saved keystrokes compared to the full length of the phrase. Note, however, that AE succeeds only for a subset of the cases, while for others the top-5 options do not contain the intended phrase.
Hence we compute two types of KSR. \textbf{KSR$_{all}$}, computed on all phrases, is defined as \begin{equation} \label{eq:ksr} \scriptsize KSR_{all}=\begin{cases} \left(1 - \frac{L_{abbrev}}{L_{full}}\right) \times 100, & \text{\scriptsize if in top-5},\\ \left(1 - \frac{L_{abbrev} + L_{full}}{L_{full}}\right) \times 100, & \text{\scriptsize otherwise},\\ \end{cases} \end{equation} where $L_{abbrev}$ and $L_{full}$ are the character lengths of the abbreviation and the full phrase, respectively. In other words, if a phrase has a matching option in the top-5, we calculate the KSR as the percentage of keypresses saved by using the abbreviation. If the ground truth is not in the top-5, we add a penalty term ($L_{full}$) to account for the need to enter the phrase anew character-by-character, leading to a \textit{negative} KSR. $KSR_{all}$ is calculated by averaging over all phrases in an experiment. \textbf{KSR$_{success}$} is calculated by averaging over only the subset of phrases with exact matches and uses the first case of Equation~\ref{eq:ksr}. \section{Results} We present the main results comparing the models on all datasets in Table~\ref{tab:tab_res_table} and then highlight results from specific experiments. \paragraph{The accuracy of LLMs at expanding word-initial abbreviations is enhanced by fine-tuning.} Table~\ref{tab:tab_res_table} compares the performance of all the models on the abbreviation expansion (AE) task\footnote{Appendix Tab.~\ref{tab:tab_tdc_dev_table} reports performance on the dev split of TDC (TDC-dev), which was used for hyperparameter tuning.}. The data shown in the table are for AE on the 2$^{nd}$ turn of a dialog that utilizes the 1$^{st}$ turn as the context, which focuses on our main hypothesis regarding the effect of context on AE. It is noteworthy that BaseLLM$^*$, which has seen just four examples in its prompt (unlike the other models), shows performance that exceeds the look-up table (LUT) baseline in many cases, demonstrating the versatility of LLMs. The higher scores of the LUT on the DailyDialog (DDC) and Cornell Movie Dialogues (CMD) datasets are indicative of the high percentage of similar phrases in the train and test sets of those datasets. Unsurprisingly, the fine-tuned models (FT-LLM, FTnoise-LLM, and the T5 models) far outperform even the best \textit{4-shot} BaseLLM$^*$, achieving 74-77\% top-5 exact-match accuracy on the TDC and DDC datasets in the absence of typo noise. The accuracies are lower on the CMD dataset (comprising movie scripts). The out-of-domain evaluation on the TaskMaster Self Dialogs (TMSD) dataset also shows accuracies lower than on the TDC and DDC datasets, but higher than on the CMD dataset. \paragraph{Fine-tuning and tolerance to noise.} For conditions that involve simulated typo noise in the abbreviation input, FTnoise-LLM shows superior performance compared to the other models (see the column ``TDC-test + noise'' in Table~\ref{tab:tab_res_table}). Interestingly, the performance of BaseLLM$^*$ does not drop as much as that of any of the fine-tuned models (T5 or FT-LLM) in this setting. However, while FT-LLM still outperforms BaseLLM on the noisy abbreviations, the smaller T5 models fail to do so. \paragraph{Context is critical for AE accuracy.} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/fig_finetuned_dev.png} \caption{AE accuracy of FT-LLM, evaluated (inference only) with different amounts of input context (different curves) on different dialog turns (x-axis) on the TDC dev set.
With all turns as context (solid blue curve) or just the previous turn as context (dashed orange curve), the model considerably outperforms the setting in which no context is provided (dash-dotted green curve) with the abbreviation query. } \label{fig:fig_finetuned_dev} \end{figure} Figure~\ref{fig:fig_finetuned_dev} shows how the AE accuracy of FT-LLM varies when different amounts of context from previous turns of the conversation are provided. Compared to having no context (dash-dotted curve), including just one previous turn of context (dashed curve) approximately doubles accuracy. Using the full context (all dialog turns from the 1$^{st}$ to the $(n$-$1)^{th}$, solid curve) leads to further improvements, indicating that prior turns carry useful information for the AE task. Compared to the 1$^{st}$ turn, AE with no context on subsequent turns (2$^{nd}$-6$^{th}$) shows significantly worse accuracy. This is because the first turn consists of conversation starters that are easier to predict without context. Overall, irrespective of context, the accuracy of AE decreases as the number of conversation turns increases, indicating increasing difficulty in predicting the full phrases from the abbreviation as the dialogs progress. However, including the full context during inference still achieves accurate expansions in 60$\%$-70$\%$ of the cases on the later turns. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/fig_al_breakdown.dev.png} \caption{AE accuracy as a function of abbreviation length (AL). The results shown are from FT-LLM evaluated with no prompt. Different colors of bars show AE on the 1st and 2nd turns of the dialog in the TDC dev split, with 0 and 1 previous turns as the context. The 1-2 bin contains no 1st-turn examples.} \label{fig:fig_al_breakdown} \end{figure} \textbf{Effect of context is more pronounced on longer abbreviations.} When performance is sliced by abbreviation length (Figure~\ref{fig:fig_al_breakdown}), accuracy without context decreases sharply and nearly monotonically with increasing abbreviation length, regardless of whether it is the opening turn or the 2nd turn. With context, however, the accuracy remains higher and decreases more slowly with abbreviation length, extending the approximately 80\% or higher accuracy to longer phrase lengths. \paragraph{The variability and usefulness of few-shot prompts decreases after model tuning.} \input{tab_prompt_acc_variance} Here we focus on how much the LLM benefits from prompting before and after fine-tuning. The first row of Table~\ref{tab:variance_prompt} compares AE accuracies from different 4-shot prompts on the TDC dataset for BaseLLM and FT-LLM. We use the 856 4-shot prompt sets drawn from the train split of the TDC dataset, using four consecutive conversation examples per prompt. The BaseLLM shows a large variance in performance depending on the examples selected for the prompt, by as much as $SD=4.83$. The best 4-shot prompt for BaseLLM outperforms the 0-shot prompt, despite the fact that the average 4-shot prompt accuracy is lower. Therefore, for BaseLLM we report the results from the best 4-shot prompt (BaseLLM$^*$). By contrast, the fine-tuned model (FT-LLM) shows significantly lower prompt-related variance ($SD=1.79$) in addition to a 2.3-fold increase in mean accuracy.
Moreover, FT-LLM is able to perform the AE task with only a natural-language prompt without examples (0-shot prompt), and even without any instruction (``No instr.''), at average accuracies that are more than 1 SD above those of 4-shot prompting. The ``No instr.'' setting is attractive due to its simplicity (no need to search for or hand-engineer a prompt) and reduced latency (due to shorter input prefix lengths). Given these results, we use ``No instr.'' as the default setting for all other experiments on FT-LLM and FTnoise-LLM. \paragraph{Increasing the number of decoded samples improves accuracy at the cost of latency.} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/fig_num_samples.png} \caption{Increasing the number of samples from the LLMs improves top-5 exact-match accuracy. FT-LLMs, even with the fewest samples and smallest model size, outperform BaseLLM$^*$. } \label{fig:num_samples_acc_lat} \end{figure} Latency is important for interactive text-entry applications. During sampled decoding, the LLMs generate 128 continuations of length 16 tokens for a batch of prefix length 256 with a median latency of 0.568 s (interquartile range: 0.16 s). This latency is close to the typical dwell time of eye-gaze keyboards \cite{majaranta2007text} and hence could be acceptable for eye-gaze typing use cases. Figure~\ref{fig:num_samples_acc_lat} shows the effect of increasing the number of continuations sampled from the LLMs. As expected, increasing the sample count from 128 to 2048 improves top-5 accuracy for both BaseLLM$^*$ (with 4-shot prompts) and FT-LLM (no instr.). Improved accuracy comes at the cost of increased latency.\footnote{Note that it is possible to cut down latency by parallelizing sampling; however, this might increase hardware requirements at inference time.} BaseLLM benefits significantly more from an increased sample count than FT-LLM. \paragraph{Comparison of model sizes.} Figure~\ref{fig:num_samples_acc_lat} also compares fine-tuned models of different sizes (4B, 8B, 27B, and 64B). With model fine-tuning, the accuracy increases monotonically with the number of parameters. Interestingly, even with the fewest samples (128), fine-tuned models of all sizes outperform the larger (64B) model under \textit{few-shot} learning. Amongst the encoder-decoder T5 models (Table~\ref{tab:tab_res_table}), larger models significantly outperform smaller ones. As observed for the decoder-only models, the smaller fine-tuned T5 models outperform the few-shot BaseLLM in almost all cases, except when the input contains typos. \paragraph{Keystroke saving rates.} \input{tab_ksr} KSR can be considered a proxy measure of the usability of the approach for AAC use cases. $KSR_{success}$ values are in the range of 73-77\% for the 1st and 2nd turns of dialogs in the TDC and DDC datasets (Table~\ref{tab:ksr}), indicating that our proposed AE scheme does indeed lead to high KSRs. Values of $KSR_{all}$ are lower, reflecting the penalties incurred when a perfect match is not achieved. However, with context, $KSR_{all}$ approaches 50\% and is higher compared to no context (20\%-37\%). Note that $KSR_{all}$ is extremely conservative, as it does not consider (a) the possibility of using the information already contained in the abbreviation to ``recover from AE failure'' (e.g., by letting the user specify a word and invoke the LLM again) or (b) the fact that word completion and prediction may still be utilized even if the user falls back to sequential text entry.
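For concreteness, Equation~\eqref{eq:ksr} and the two averages translate directly into code. The sketch below is our own, and it assumes the evaluation results are available as a list of (abbreviation, full phrase, top-5 hit) triples.
\begin{verbatim}
import math

def ksr_single(abbrev: str, full: str, in_top5: bool) -> float:
    """Per-phrase KSR following Eq. (1): an extra penalty of L_full
    keystrokes is charged when the top-5 options miss the target."""
    penalty = 0 if in_top5 else len(full)
    return (1 - (len(abbrev) + penalty) / len(full)) * 100

def ksr_metrics(examples):
    """examples: list of (abbrev, full, in_top5) triples."""
    scores = [ksr_single(a, f, hit) for a, f, hit in examples]
    hits = [s for s, (_, _, hit) in zip(scores, examples) if hit]
    ksr_all = sum(scores) / len(scores)
    ksr_success = sum(hits) / len(hits) if hits else math.nan
    return ksr_all, ksr_success
\end{verbatim}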
\paragraph{Fine-tuning with noise improves typo tolerance.} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/fig_typotolerance.test.png} \caption{AE accuracy with and without typo noise in the input abbreviation. We compare the accuracies of the models fine-tuned without and with noise. Each curve shows the average top-5 accuracy on the 2nd turns of the dialogs in the test split of the TDC dataset. } \label{fig:fig_typotolerance} \end{figure} Figure~\ref{fig:fig_typotolerance} compares the AE accuracies of LLMs fine-tuned with and without noise (FTnoise-LLM and FT-LLM). While both models show decreasing AE accuracies with increasing amounts of typos, FTnoise-LLM is much more robust, showing a smaller drop in performance. Further, on noise-free inputs ($\sigma$=0), FTnoise-LLM shows only a slight accuracy deterioration compared to FT-LLM. We also find that typo tolerance, for both FT-LLM and FTnoise-LLM, is more pronounced with context than without. \paragraph{Cross-domain generalization.} We use the TMSD dataset to evaluate and compare the performance of the models on conversation domains not seen in training. In Table~\ref{tab:tab_res_table} we can observe that few-shot prompting falls behind the simple Look-Up Table baseline on the DDC and CMD datasets. However, when we evaluate the models on the cross-domain TMSD dataset, we observe that both the fine-tuned and few-shot models generalize better to the unseen domain and outperform the look-up baseline. \section{Discussion} \paragraph{Qualitative analysis of AE failures.} As indicated by the relatively high BLEU scores in Table~\ref{tab:tab_res_table} ($>80\%$), many expansions in the top-5 options are ``near misses''. Appendix Table~\ref{tab:ae_near_misses} shows a few examples of such near misses, in which the options differ from the ground-truth phrase by only a semantically-similar word (e.g., ``yes'' vs. ``yeah'', ``head out'' vs. ``head over''). Future studies need to investigate the frequency and UX acceptability of such near-miss AE options, but their existence implies that the exact-match accuracy reported above slightly underestimates the practical effectiveness of the models. Another category of AE failures involves phrases that contain certain proper nouns. The last four examples in Table~\ref{tab:ae_near_misses} show such cases, in which the model correctly expands all the words but a proper noun. When such errors occur, the model tends to predict more common proper nouns, which is likely a reflection of the higher frequency of the predicted nouns in the model's pre-training and fine-tuning datasets. \paragraph{The benefit of AE relative to sequential text entry.} Word completion and prediction incur a scanning cost: users scan the options in order to determine whether any of them match their intention, which has a detrimental effect on speed that needs to be overcome by the high quality of the options \cite{trnka2009user}. Although the speed of AE-based text entry remains to be quantified in future studies, we point out that (1) AE removes the overhead of scanning for options in between keystrokes, and (2) there are fewer characters to examine or correct when typing, both of which may offer speed-ups in addition to the higher KSR afforded by AE. Although the current study is motivated by and focuses on the AAC use case, our paradigm of abbreviated text entry may be applicable to text input on touch screens as well.
The AE approach of the current study can be regarded as a variation of contextual prediction of user text \cite{kannan2016smart, chen2019gmail} that affords greater flexibility in message content at the cost of requiring specification of the message with a small number of keystrokes. \paragraph{Future directions.} We found fine-tuning to be significantly better than prompting in terms of (a) accuracy, in scenarios both with and without typo noise, and (b) latency, since it achieves better results with fewer samples. Future work should investigate the differences in latency between encoder-decoder and decoder-only architectures. For training efficiency, instead of fine-tuning, it will also be worth investigating strategies such as prompt tuning~\cite{lester2021power} that keep the model frozen but learn some additional parameters for the task. Even in the best-case scenario, models can fail to find accurate expansions\footnote{See Appendix~\ref{sec:appx_recovery_failure} for an analysis.} among the top-5 options. Recovering from such failures is important for AAC use cases. Future studies should consider options for partial specification of one or more words or selection of some words from the available options. Once recovery from failure is proven in offline analysis, user studies are required to validate and quantify the actual benefit of the AE text-entry paradigm in lab and real-life settings. Integration with UI approaches is also an essential direction, e.g., approaches for speeding up eye-gaze typing such as cascading dwell time and dwell-free paradigms \cite{mott2017improving, kristensson2012potential}. \section*{Acknowledgements} We would like to thank Shumin Zhai and Michael Terry for feedback on a draft of this work, Yanping Huang for pointers on model inference, as well as James Stout, Bob MacDonald, Julie Cattiau, and Maarten Bosma for their support. We are grateful to Team Gleason for their active involvement and feedback in the development of this work. \section{Ethical Considerations, Limitations, and Societal Impact} Accelerating augmentative and alternative communication (AAC) can enhance the quality of life of people with extremely limited mobility by facilitating increased social participation and independence~\cite{calgari2013}. While the benefits of AE may be large for this population, we note that this approach also has risks. The primary risk of AE is errors in expansions that substantially misrepresent the intent of the speaker in a way that might cause harm to themselves or others (e.g., failure to correctly convey critical health information, or insertion of offensive language). The abbreviation expansions may also reflect biases in the underlying language model (e.g., perpetuating stereotypes by more frequently suggesting male pronouns than female ones; \citealt{weidinger2021ethical}). A more subtle risk arises when expansions narrowly miss the ground-truth phrase (see Table~\ref{tab:ae_near_misses}): they may accurately convey content but reduce the speaker's sense of autonomy and authentic self-expression. Prior work (e.g., \citealt{kane2017cscw}) has shown that people with ALS highly value AAC that preserves and facilitates authentic identity expression. Providing speakers with multiple AE options to choose from and requiring user confirmation before voicing an expansion are design options that can mitigate these risks.
Model fine-tuning to improve safety, or personalization to the end-user's communication style, are additional risk-mitigation approaches. Beyond enhancing communication speed, another intended benefit of AE is the potential to reduce the fatigue associated with gaze-based AAC by reducing keystrokes; however, a risk of our system is that if errors in AE are frequent for a given user (perhaps due to eye-tracker miscalibration or long-tail abbreviation use), then these savings could be outweighed by the need to correct errors, inadvertently increasing fatigue. User studies to better understand error rates in practice, as well as future work on designing interfaces that simplify AE error correction, are important for minimizing this risk. Similarly, our abbreviation scheme's simple design based on first letters aims to minimize cognitive load; however, user studies with the target population using instruments such as NASA's Task Load Index\footnote{\url{https://humansystems.arc.nasa.gov/groups/tlx/}} would be required to verify that AE does not cognitively strain end-users. \section{Conclusion} In this work we proposed a high-KSR form of abbreviation expansion to dramatically save keystrokes for severely-disabled users. We use it to synthesize three datasets for the AE task. Based on extensive experiments using few-shot prompting and model tuning, we demonstrate that, across the datasets, fine-tuned LLMs can accurately predict expansions for 48-77\% of phrases that are replies to initial turns of dialogs, and exhibit KSRs in the range of 73-77\% for the correctly predicted expansions, pointing to a promising direction for future user studies of contextual and abbreviated text entry based on LLMs. Models evaluated with conversation context show significantly higher accuracy than without, supporting our hypothesis that context is key to effective abbreviated text entry in conversational settings. Furthermore, fine-tuning with simulated typos substantially improves tolerance to noise in the abbreviations. \section*{Appendix} \section{Removal of duplicate dialogs from the DailyDialog dataset} \label{sec:ddc_corrections} We observed that the DailyDialog dataset \cite{li2017dailydialog} contains a significant number of dialogs in its dev (validation) and test splits that are identical or nearly identical to dialogs found in its train split. We determined two dialogs to be duplicates by using the following criteria: \begin{enumerate}\itemsep0em \item If both dialogs consist of the same number of turns and the corresponding turns are all identical (case-insensitive), or \item if both dialogs consist of the same number of turns and there are three or more turns at which both dialogs contain identical text (case-insensitive). \end{enumerate} See the file daily\_dialog\_deduplications.csv in Supplemental Data for a list of the 177 dialogs in the dev split and the 228 dialogs in the test split that were found to duplicate dialogs in the train split and were hence removed from our DailyDialog Corrected (DDC) dataset. \input{tab_ae_near_misses} \section{4-shot examples for BaseLLM$^*$} \label{sec:appx_4shot_ex} We select four consecutive dialogues from the 859 examples in the train split of the TDC dataset \cite{vertanen2017towards} while varying the starting conversation, which yields $859 - 4 + 1 = 856$ different 4-shot prompt sets. \section{Tuning on noisy data vs.
accuracy} \label{sec:appx_noise.3_motiv} Preliminary experiments showed that $\sigma=0.3$ is a good trade-off between accuracy gains on noisy data and losses on non-noisy data. \section{Model fine-tuning details} \label{sec:appx_finetune} Our model fine-tuning uses the AdaFactor optimizer~\cite{shazeer2018adafactor}. The nominal batch size of 16 is made more efficient through example packing~\cite{raffel2019exploring}, leading to an average effective batch size of approximately 200 examples under a maximum sequence length of 1024 tokens. We used TPUv3s \cite{jouppi2018motivation} in a $4\times8$ configuration for the LLM fine-tuning. Our fine-tuning recipe applies a constant, low learning rate of $5\times10^{-5}$ and a dropout rate of $0.2$, which helps to prevent early overfitting. Early stopping is based on a dev set consisting of combined examples from the dev splits of the TAC and TDC datasets. We find the best checkpoint after 2100 and 1800 training steps for the FT-LLM and FTnoise-LLM models, respectively, which amounts to approximately 1-1.2 epochs of training. We ran a small set of hyperparameter-tuning experiments, varying batch size, learning rate, and dropout, and chose the best setting based on the TAC$+$TDC dev set. \begin{table}[h] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{l|cc} \toprule \multicolumn{1}{c|}{{}} & \multicolumn{2}{c}{{TDC-dev}} \\ \multicolumn{1}{c|}{{Model}} & Acc.@5 & BLEU@5 \\ \midrule Look-Up Table (LUT) & 16.9 $\pm$ 0.2 & 25.2 $\pm$ 0.2 \\ T5-small (60M) & 37.8 $\pm$ 0.0 & 59.2 $\pm$ 0.5 \\ T5-large (770M) & 48.2 $\pm$ 0.0 & 69.1 $\pm$ 0.5 \\ T5-3B (3B) & 53.9 $\pm$ 0.0 & 72.3 $\pm$ 0.5 \\ \hline BaseLLM$^*$ (best, 4-shot) & 43.0 $\pm$ 1.0 & 52.0 $\pm$ 1.4 \\ FT-LLM (no instr.) & \textbf{76.7 $\pm$ 1.1} & \textbf{83.9 $\pm$ 0.5} \\ FTnoise-LLM (no instr.) & 75.8 $\pm$ 0.7 & 83.4 $\pm$ 0.2 \\ \bottomrule \end{tabular} } \caption{Comparing the models (from Sec.~\ref{sec:experiments}) on the AE task on turn 2, given turn 1 as context. We report accuracy and BLEU score at top-5, as percentages; std. dev. computed over 3 runs. Higher is better; values in \textbf{bold} are the highest in each column. The TDC-dev set was used for model selection before evaluation on the test sets. \label{tab:tab_tdc_dev_table}} \end{table} \section{Computation cost} \label{sec:computation_cost} Fine-tuning of the 64B LLM uses TPU v3 in a $4\times8$ configuration, i.e., 32 TPUs. FT-LLM and FTnoise-LLM are trained for approximately 2100 and 1800 steps, respectively. The training time is approximately 3 hours. This leads to a model fine-tuning budget of $32 \times 3 = 96$ TPU-hours per model. Evaluation and inference on the 64B LLM use TPU v3 in a $4\times4$ configuration, i.e., 16 TPUs. Each example (batch size = 128 samples) takes 0.653 s. This leads to $16\times0.568 / 128 \approx 0.071$ TPU-seconds per sample. \section{Splitting characters in abbreviations} \label{sec:appx_char_split} Pilot experiments showed the importance of programmatically inserting spaces between the characters of the abbreviations. Since the vocabulary used by the LaMDA models is fairly large (32k entries), unless we enforce character-level splitting, subsequences of multiple characters in many abbreviations are combined into spurious tokens, leading to slightly reduced AE accuracy.
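The character splitting described above amounts to a one-line transformation; a minimal sketch:
\begin{verbatim}
def split_chars(abbrev: str) -> str:
    """Insert spaces so that SentencePiece falls back to
    per-character tokens, e.g. 'iipitb' -> 'i i p i t b'."""
    return " ".join(abbrev)
\end{verbatim}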
\section{Recovery from failure: analysis} \label{sec:appx_recovery_failure} In the best scenario of replying to a question, the fine-tuned LLM is capable of predicting the correct phrase expansion approximately 81\% of the time with top-5 options and sufficient sampling (Figure~\ref{fig:num_samples_acc_lat}). Hence the model will fail to find the correct expansion in at least 19\% of cases. \section{Inference latencies of different LaMDA model sizes} \label{sec:appx_latency} In Figure~\ref{fig:latency_comparison} we compare the inference latencies of the decoder-only models of different sizes. Compared to the 4B model, the 27B model shows 1.5$\times$ the latency, while the 64B model shows 2.2$\times$ the latency. While the latency increase is quite significant, this analysis shows that we cannot substitute the 64B model with a smaller model (e.g., by increasing the number of samples) in a way that improves latency without significantly harming the AE accuracy (compare the AE accuracies in Figure~\ref{fig:num_samples_acc_lat}). \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/latencies_numsamples128.png} \caption{Inference latencies for different sizes of the LaMDA model (4B, 8B, 27B, and 64B). The latencies are shown as box plots.} \label{fig:latency_comparison} \end{figure}
\section{Introduction} For any complex arithmetic function $f:\mathbb{N}\rightarrow \mathbb{C}$ and any positive integer $N$ we denote by $$V(N,Q; f):=\sum_{q\leq Q}\sum_{h|q}\sum_{\substack{a\ \bmod{q}\\ (a,q)=h}}\bigg|\sum_{\substack{n\leq N\\ n\equiv a\ \bmod{q}}}f(n)-\frac{1}{\varphi(q/h)}\sum_{\substack{n\leq N\\ (n,q)=h}}f(n)\bigg|^{2}$$ the probabilistic variance of $f$ in arithmetic progressions. Here $\varphi(\cdot)$ is the Euler totient function, $1<Q\leq N$ is a real number and the symbol $(\cdot,\cdot)$ stands for the greatest common divisor of two positive integers. In a previous paper \cite{M}, the author found a lower bound for the quantity $V(N,Q;f)$ over a large class of multiplicative functions $f$, referred to as ``generalized divisor functions'', which contains, as a particular instance, all the $\alpha$-fold divisor functions $d_\alpha(n)$, for parameters $\alpha\in \mathbb{C}\setminus(\{1\}\cup(-\mathbb{N}))$. For all such values of $\alpha$, it was proved that \begin{equation} \label{lowerboundgenvariance} V(N,Q;d_\alpha)\gg_{\alpha,\delta} Q\sum_{n\leq N}|d_\alpha(n)|^2, \end{equation} uniformly for $N^{1/2+\delta}\leq Q\leq N$, whenever $\delta>0$ is sufficiently small and $N$ is large enough with respect to $\alpha$ and $\delta$ (see \cite[Theorem 1.1]{M}). When $\alpha=1$, the lower bound \eqref{lowerboundgenvariance} does not hold, as shown in \cite[Proposition 1.10]{M}, where the estimate $V(N,Q; d_1)\ll Q^2$, for any $Q\geq 1$, was proved by an elementary direct inspection of the variance. The first new result of this paper, a consequence of new computations on a certain related mean square integral of a complete exponential sum, demonstrates that this upper bound is sharp, at least in some ranges of $Q$. \begin{thm} \label{varianced1} There exists an absolute constant $c>0$ such that for any $cN^{2/3}\leq Q\leq N$ and $N$ large enough, we have \begin{align*} V(N,Q; d_1)\gg Q^2. \end{align*} \end{thm} For parameters $\alpha=\alpha_N:=1+1/R(N)$, where $R(N)$ is a real non-vanishing function, the method developed in \cite{M}, which makes strong use of the asymptotic expansion of the partial sum of divisor functions, also produced the following result (see \cite[Theorem 1.11]{M}). \begin{prop} \label{propmainvariance} Let $A>0$ and let $\alpha_N$ be as above, with $|R(N)|\leq (\log N)^A$. Let $\delta>0$ be small enough and $N^{1/2+\delta}\leq Q\leq N$. Then there exists a constant $B>0$ such that, if $|R(N)|\geq B$ and $N$ is large with respect to $\delta$ and $A$, we have \begin{equation} \label{varianceuniform} V(N,Q;d_{\alpha_N})\gg_{A,\delta}\frac{Q}{R(N)^{4}}\sum_{n\leq N}d_{\alpha_N}(n)^2\gg \frac{QN}{R(N)^{4}}\exp\bigg(\bigg(2+\frac{1}{R(N)}\bigg)\frac{\log\log N}{R(N)}\bigg). \end{equation} \end{prop} In particular, we notice that the lower bound \eqref{varianceuniform} is always of size $\frac{QN}{R(N)^{4}}$ whenever $|R(N)|\geq \log\log N$. \begin{rmk} By going through the proof of Proposition \ref{propmainvariance}, it is not difficult to verify that the same lower bound also holds when replacing the function $d_{\alpha_N}(n)$ with $\alpha_{N}^{\omega(n)}$ or with $\alpha_{N}^{\Omega(n)}$, where $\Omega(n)$ and $\omega(n)$ stand for the functions counting prime divisors with and without multiplicity, respectively. \end{rmk} In the following, we will write $\varpi(n)$ for either of the functions $\omega(n)$ and $\Omega(n)$, when a statement holds for both, and $d_{\alpha_N}^{\varpi}(n)$ for the function $\alpha_{N}^{\varpi(n)}$.
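Although the results below are purely analytic, the quantity $V(N,Q;f)$ can be evaluated numerically from the definition for small parameters. The following Python script is our own illustrative aside, not part of the proofs; for instance, \texttt{variance(200, 100, lambda n: 1.0)} evaluates $V(N,Q;d_1)$, since $d_1(n)=1$ for all $n$.
\begin{verbatim}
from math import gcd

def euler_phi(n: int) -> int:
    """Euler's totient function, by trial division."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def variance(N: int, Q: int, f) -> float:
    """V(N, Q; f) computed directly from the definition
    (slow, but follows the formula term by term)."""
    vals = [f(n) for n in range(1, N + 1)]
    total = 0.0
    for q in range(1, Q + 1):
        for a in range(q):          # residue classes a mod q
            h = gcd(a, q)           # gcd(0, q) = q, as required
            s1 = sum(vals[n - 1] for n in range(1, N + 1)
                     if n % q == a)
            s2 = sum(vals[n - 1] for n in range(1, N + 1)
                     if gcd(n, q) == h)
            total += abs(s1 - s2 / euler_phi(q // h)) ** 2
    return total
\end{verbatim}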
The main aim of this paper is to improve, by means of a different approach, the result of Proposition \ref{propmainvariance} to what we expect to be the best possible lower bound for the variance of $d_{\alpha_N}^{\varpi}(n)$ in arithmetic progressions. \begin{thm} \label{thmvariancealpha1} Let $\alpha_N=1+1/R(N)$, where $R(N)$ is a non-zero real function. Assume $N^{1/2+\delta}\leq Q\leq N$, with $\delta>0$ sufficiently small. Then there exists a constant $C=C(\delta)>0$ such that if $C\log \log N\leq |R(N)|\leq N^{\delta/12}$ and $N$ is large in terms of $\delta$, we have \begin{equation} \label{mainlowerbound} V(N,Q; d_{\alpha_N}^{\varpi})\gg_{\delta}\frac{QN}{R(N)^2}\log\bigg(\frac{\log N}{\log(2N/Q)}\bigg)+Q^2. \end{equation} \end{thm} Compared with \eqref{varianceuniform}, the lower bound \eqref{mainlowerbound} improves the exponent of $R(N)$, exhibits the extra term $Q^2$, which dominates in certain ranges of $R(N)$, and allows $|R(N)|$ to grow well beyond any fixed power of $\log N$. In order to better exploit the extra cancellation available, compared to \eqref{lowerboundgenvariance}, when $\alpha$ is close to $1$, we will feed the Taylor expansion of the function $d_{\alpha_N}^{\varpi}(n)=(1+1/R(N))^{\varpi(n)}$ into our new computations. Since the function $\varpi(n)$ is, for the majority of positive integers $n\leq N$, of size roughly $\log\log N$ (see e.g. \eqref{meanvarpi} below), this justifies the condition $|R(N)|\geq C\log\log N$ in the hypotheses of Theorem \ref{thmvariancealpha1}. Regarding the additive function $\varpi(n)$ we will prove the following result. \begin{thm} \label{varianceomega} Assume $N^{1/2+\delta}\leq Q\leq N$, with $\delta>0$ sufficiently small. Then we have \begin{equation*} V(N,Q; \varpi)\gg_{\delta} Q^2(\log\log N)^2+QN\log\bigg(\frac{\log N}{\log(2N/Q)}\bigg), \end{equation*} if $N$ is large enough in terms of $\delta$. \end{thm} \begin{rmk} The proof of Theorem \ref{varianceomega} contains several computations that are preliminary to the proof of Theorem \ref{thmvariancealpha1}; this is why we have included it here. \end{rmk} The sequence of functions $d_{\alpha_N}^{\varpi}(n)$ is only one instance of a wide class of multiplicative functions ``close'' to $1$. Another interesting representative of this class is the characteristic function of the $y$--smooth numbers, for parameters $y$ near $N$; these are the positive integers whose prime factors are all smaller than $y$. For $y$-smooth numbers we will prove the following theorem. \begin{thm} \label{lowerboundvariancesmooth2} Let $N^{1/2+\delta}\leq Q\leq N$, with $\delta>0$ sufficiently small. Let $u:=(\log N)/(\log y)$. There exists a large constant $C>0$ such that the following holds. If $$1+\frac{\log C}{\log N}\leq u\leq 2$$ and $N$ is large enough in terms of $\delta$, we have $$V(N,Q;\textbf{1}_{y-\text{smooth}})\gg_{\delta}QN\log u+Q^2.$$ \end{thm} \begin{rmk} We observe that Harper's result \cite[Theorem 2]{H} gives a tight corresponding upper bound for the variance above, when $Q=N/(\log N)^A$, with $A>0$ and $\sqrt{N}\leq y\leq N^{1-\delta}$, say. \end{rmk} We will prove our new theorems by first reducing the problem to the study of certain $L^2$--integrals of the exponential sums with coefficients $1$, $\varpi(n)$, $d_{\alpha_N}^{\varpi}(n)$ or $\textbf{1}_{y-\text{smooth}}(n)$, through an application of a technique introduced in a seminal work of Harper and Soundararajan \cite{HS}.
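For the reader who wishes to experiment numerically, the coefficient sequences just listed are elementary to generate. The following Python sketch is purely illustrative (it assumes the \texttt{sympy} package for factorization, and the parameter choices at the bottom are arbitrary); it produces the four sequences together with the exponential sums $\sum_{n\leq N}f(n)e(n\theta)$, denoted ${\mathcal S}_f(\theta)$ later on, that will be at the centre of our analysis.

\begin{verbatim}
from cmath import exp, pi
from sympy import factorint  # prime factorization as {prime: exponent}

def omega(n):      # number of distinct prime factors of n
    return len(factorint(n))

def big_omega(n):  # number of prime factors of n with multiplicity
    return sum(factorint(n).values())

def d_alpha(n, alpha, varpi=omega):
    # d_alpha^varpi(n) = alpha^{varpi(n)}
    return alpha ** varpi(n)

def smooth_indicator(n, y):
    # 1 if every prime factor of n is at most y, else 0
    return int(all(p <= y for p in factorint(n)))

def S(f, N, theta):
    # exponential sum S_f(theta) = sum_{n <= N} f(n) e(n theta)
    return sum(f(n) * exp(2j * pi * n * theta) for n in range(1, N + 1))

N, R = 1000, 30.0   # illustrative values; R plays the role of R(N)
f = lambda n: d_alpha(n, 1 + 1 / R)
print(abs(S(f, N, 0.3)) ** 2)
\end{verbatim}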
We will then determine the size of these integrals, which constitutes a result of independent interest; moreover, since we believe that, for the aforementioned functions, the variance in arithmetic progressions should be well approximated by such integrals, this also gives a strong indication that our theorems should be sharp. \section{Preliminary notions and results} Throughout the rest of this paper the letter $p$ will be reserved for a prime number. Other letters might still indicate a prime number, but in each case this will be specified. \subsection{Some basic facts about certain arithmetic functions} It is a classical result going back to Hardy and Ramanujan (see also Diaconis' paper \cite{D}) that the partial sum of the $\varpi$--function satisfies the following asymptotic expansion: \begin{align} \label{meanvarpi} \sum_{n\leq x}\varpi(n)=x\log\log x+B_{\varpi}x+O\bigg(\frac{x}{\log x}\bigg)\ \ \ (x\geq 2), \end{align} where $B_{\varpi}$ is a constant depending on the function $\varpi$. In particular, we deduce that the mean value of $\varpi(n)$, over the integers $n\leq x$, is roughly $\log\log x$. Regarding its variance, we can appeal to the Tur\'{a}n--Kubilius inequality (see e.g. \cite[Ch. III, Theorem 3.1]{T}), which states that \begin{align} \label{variancevarpi} \sum_{n\leq x}(\varpi(n)-\log\log n)^2\ll x\log\log x\ \ \ (x\geq 2). \end{align} In particular, \eqref{meanvarpi} and \eqref{variancevarpi} together give \begin{align} \label{secondmomentvarpi} \sum_{n\leq x}\varpi(n)^2\ll x(\log\log x)^2\ \ \ (x\geq 2). \end{align} Finally, we recall the following bound on the maximal size of $\varpi(n)$ (see e.g. \cite[Ch. I, Eq. 5.9]{T}): \begin{align} \label{maxsizevarpi} \varpi(n)\leq (\log x)/(\log 2)\ \ \ (1\leq n\leq x). \end{align} We will make use of the following result on the partial sum of some non-negative multiplicative functions. \begin{lem} \label{Rankinestimate0} For any non-negative multiplicative function $g(n)$ uniformly bounded on the prime numbers by a positive real constant $B$ and such that the sum $S=\sum_{q}(g(q)\log q)/q$ over all the prime powers $q=p^{k}$, with $k\geq 2$, converges, one has $$\sum_{n\leq x}g(n)\ll_{B,S} \frac{x}{\log x}\sum_{n\leq x}\frac{g(n)}{n}\ \ \ (x\geq 2)$$ and \begin{equation*} 1\ll_{B,S} \sum_{n\leq x}\frac{g(n)}{n}\prod_{p\leq x}\bigg(1+\frac{g(p)}{p}\bigg)^{-1}\ll_{B,S} 1\ \ \ (x\geq 1). \end{equation*} \end{lem} \begin{proof} The first conclusion is \cite[Ch. III, Theorem 3.5]{T} and the second one is a special case of \cite[Lemma 20]{EK} of Elliott and Kish. \end{proof} In particular, we highlight the following immediate consequence for the partial sum of certain types of divisor functions (see e.g. \cite[Ch. III, Theorem 3.7]{T}). \begin{cor} \label{lemmapartialsumdiv} Let $0<y_0<2$. Then, uniformly for $0\leq y\leq y_0$ and $x\geq 2$, one has \begin{align*} \sum_{n\leq x}y^{\varpi(n)}\ll x(\log x)^{y-1}. \end{align*} \end{cor} \subsection{Preliminaries about the variance in arithmetic progressions} As usual, we define the so-called set of major arcs ${\frak M} = {\frak M}(K,Q_0,Q)$ as consisting of those $\theta \in {\Bbb R}/{\Bbb Z}$ having an approximation $|\theta-a/q| \le K/(qQ)$ with modulus $q\le K Q_0$ and reduced residue class $(a,q)=1$. Let instead ${\frak m}={\frak m}(K,Q_0,Q)$, the minor arcs, denote the complement of the major arcs in ${\Bbb R}/{\Bbb Z}$.
Thus, the union of the minor arcs consists of those real numbers in $[0,1]$ that are only approximable, in the above sense, by rational fractions with large denominator, of a size depending on $K$, $Q_0$ and $Q$. As explained in \cite{M}, to produce lower bounds for the variance of complex sequences in arithmetic progressions we rely on an application of Harper and Soundararajan's method introduced in \cite{HS}, which points out a direct link between the variance and the $L^2$-norm of certain exponential sums over unions of minor arcs. This is the content of \cite[Proposition 1]{HS}, which we next report. \begin{prop} \label{Prop5.1} Let $f(n)$ be any complex sequence. Let $N$ be a large positive integer and let $K\ge 5$, $Q_0$ and $Q$ be parameters such that \begin{equation} \label{eq0} K \sqrt{N\log N} \le Q \le N\ \textrm{and}\ \ \frac{N \log N}{Q} \le Q_0 \le \frac{Q}{K^2}. \end{equation} Then we have \begin{align} \label{estimateprop1} V(N,Q;f) &\ge Q\Big(1+ O\Big(\frac{\log K}{K}\Big)\Big) \int_{\frak m} |{\mathcal S}_{f}(\theta)|^2d\theta + O \Big( \frac{NK}{Q_0} \sum_{n\le N} |f(n)|^2 \Big)\\ &+O\bigg(\sum_{q\le Q } \frac{1}{q} \sum_{\substack {d|q \\ d>Q_0}} \frac{1}{\varphi(d)} \Big| \sum_{n\leq N} f(n) c_d(n)\Big|^2\bigg)\nonumber, \end{align} where $c_d(n)$ are the Ramanujan sums, defined as $$c_d(n)=\sum_{\substack{a=1,\dots,d\\ (a,d)=1}}e(an/d),$$ and ${\mathcal S}_{f}(\theta):=\sum_{n\leq N}f(n)e(n\theta)$ with $e(t)=e^{2\pi i t}$, for any $t\in\mathbb{R}$. \end{prop} \begin{rmk} One has the following representation of the Ramanujan sums as a sum over divisors (see e.g. \cite[Theorem 4.1]{MV}): \begin{equation} \label{mainpropc_q} c_d(n)=\sum_{k|(n,d)}k\mu(d/k), \end{equation} where $\mu(n)$ is the M\"{o}bius function. \end{rmk} To handle the $L^2$-integrals over minor arcs as in \eqref{estimateprop1} for the function $f(n)=\textbf{1}_{y-\text{smooth}}(n)$, we will appeal to \cite[Proposition 3]{HS}, which we next report adapted to our context. \begin{prop} \label{newprop10} Keep notations as above and assume $KQ_0< R:=N^{1/2-\delta/2}$. Then we have \begin{equation} \label{C-S*} \int_{\frak m} |{\mathcal S}_{f}(\theta)|^2d\theta\geq \bigg(\int_{\frak m} |{\mathcal S}_{f}(\theta)\mathcal{G}(\theta)|d\theta\bigg)^2\bigg(\int_{\frak m} |\mathcal{G}(\theta)|^2d\theta\bigg)^{-1}, \end{equation} where $$\mathcal{G}(\theta)=\sum_{n\leq N}\bigg(\sum_{\substack{r|n\\ r\leq R}}g(r)\bigg)e(n\theta),$$ for any complex arithmetic function $g(r)$. If moreover there exists a constant $\kappa>1$ for which $|g(n)|\leq d_{\kappa}(n)$, for any $n\leq N$, we also have \begin{equation} \label{eq81} \int_{\frak m} |{\mathcal S}_{f}(\theta)\mathcal{G}(\theta) | d\theta \ge \sum_{KQ_0 < q\le R} \Big| \sum_{\substack{r\le R \\ q|r}} \frac{g(r)}{r} \Big| \Big| \sum_{\substack{n\le N}} f(n)c_q(n) \Big|+ O_{\delta,\kappa}(N^{1-\delta/11}), \end{equation} if $N$ is large enough in terms of $\delta$ and $\kappa$. \end{prop} \begin{proof} This is a slight variation of \cite[Proposition 3]{HS} for functions bounded by a divisor function, and is the content of \cite[Lemma 5.2]{M}. \end{proof} To estimate the $L^2$--integrals over minor arcs as in \eqref{estimateprop1} for the functions $f(n)=d_{\alpha_N}^{\varpi}(n)$ and $f(n)=\varpi(n)$, we will instead need to invoke \cite[Proposition 2]{HS}, which we next report in a more compact form.
For ease of readability, we say that a real smooth function $\phi(t)$ belongs to the ``Fourier class'' of functions $\mathcal{F}$ if: \begin{itemize} \item $\phi(t)$ is compactly supported in $[0,1]$; \item $0\leq \phi(t)\leq 1$, for all $0\leq t\leq 1$; \item $\int_{0}^{1}\phi(t)dt\geq 1/2$; \item $|\hat{\phi}(\xi)|\ll_{A}(1+|\xi|)^{-A}$, for any $A>0$, where $\hat{\phi}(\xi):=\int_{-\infty}^{+\infty}\phi(t)e(-\xi t)dt$ denotes the Fourier transform of $\phi(t)$. \end{itemize} \begin{prop} \label{newprop11} Keep notations as above and assume $KQ_0< R\leq Q/2K$. Then we have \begin{equation} \label{C-S2} \int_{\frak m} |{\mathcal S}_{f}(\theta)|^2d\theta\geq \bigg|\int_{\frak m} {\mathcal S}_{f}(\theta)\overline{\mathcal{G}(\theta)}d\theta\bigg|^2\bigg(\int_{\frak m} |\mathcal{G}(\theta)|^2d\theta\bigg)^{-1}, \end{equation} where $$\mathcal{G}(\theta)=\sum_{n\leq N}\bigg(\sum_{\substack{r|n\\ r\leq R}}g(r)\bigg)\phi\left(\frac{n}{N}\right)e(n\theta),$$ for any complex arithmetic function $g(r)$ and real function $\phi(t)$. Let $M:=\max_{r\leq R}|g(r)|$. If $\phi(t)\in \mathcal{F}$, then we also have \begin{align} \label{lowerboundintminarcs2} \int_{\frak m} {\mathcal S}_{f}(\theta)\overline{\mathcal{G}(\theta)}d\theta&=\sum_{n\leq N}f(n)\bigg(\sum_{\substack{r|n\\ r\le R}} \overline{g(r)}\bigg)\phi\bigg(\frac{n}{N}\bigg)\\ &-N\sum_{q\leq KQ_0}\int_{-K/qQ}^{K/qQ}\bigg(\sum_{n\leq N}f(n)c_q(n)e(n\beta)\bigg)\bigg(\sum_{\substack{r\leq R\\q|r}}\frac{\overline{g(r)}}{r}\bigg)\hat{\phi}(\beta N)d\beta\nonumber\\ &+O\bigg(\frac{MKR\sqrt{Q_0}\log N}{\sqrt{Q}}\sqrt{\sum_{n\leq N}f(n)^2}\bigg).\nonumber \end{align} \end{prop} It turns out that, in order to lower bound the integral $\int_{\frak m} |{\mathcal S}_{\textbf{1}_{y-\text{smooth}}}(\theta)|^2d\theta$, it is sufficient to look only at certain minor arcs, i.e.\ at those centred on fractions with denominators $q\leq R$. This makes the application of the Cauchy--Schwarz inequality in the form \eqref{C-S*} efficient, which in turn simplifies our task by means of \eqref{eq81}. On the other hand, the contribution to the integrals $\int_{\frak{m}} |{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2 d\theta$ and $\int_{\frak m} |{\mathcal S}_{\varpi}(\theta)|^2d\theta$ comes from all of the minor arcs, even from those centred on fractions with possibly very large denominators. This forces us to use the Cauchy--Schwarz inequality as in \eqref{C-S2} and then to asymptotically estimate $\int_{\frak m} {\mathcal S}_{f}(\theta)\overline{\mathcal{G}(\theta)}d\theta$ by means of \eqref{lowerboundintminarcs2}. \section{The $L^2$-integral of some exponential sums over minor arcs} As already discussed, Harper and Soundararajan showed that, in order to lower bound the variance $V(N,Q;f)$ of a complex arithmetic function $f(n)$ in arithmetic progressions, we can switch our attention to integrals of exponential sums over unions of minor arcs, such as $\int_{\frak m} |{\mathcal S}_{f}(\theta)|^2d\theta$, for which we seek a sharp lower bound. This is accomplished by an application of Proposition \ref{Prop5.1}. Our aim is to employ this strategy in the cases $f(n)=1$, $f(n)=\varpi(n)$, $f(n)=d_{\alpha_N}^{\varpi}(n)$ and $f(n)=\textbf{1}_{y-\text{smooth}}(n)$, with the choice of minor arcs $\frak{m}=\frak{m}(K,Q_0,Q)$ given by $K$ a large positive constant, $N^{1/2+\delta}\leq Q\leq N$, for any suitably small $\delta>0$, and $Q_0$ satisfying \eqref{eq0}. This will indeed be the underlying choice of minor arcs in the next propositions.
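All of the estimates below feature the twisted sums $\sum_{n\leq N}f(n)c_q(n)$, so it is useful to keep in mind how the Ramanujan sums behave in practice. As a quick sanity check, the following self-contained Python sketch (small, illustrative ranges only) verifies the divisor representation \eqref{mainpropc_q} against the defining exponential sum.

\begin{verbatim}
from math import gcd
from cmath import exp, pi

def mu(m):
    # Moebius function, via trial division.
    res, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0   # m is not squarefree
            res = -res
        p += 1
    return -res if m > 1 else res

def c_def(d, n):
    # Definition: c_d(n) = sum over 1 <= a <= d, (a, d) = 1, of e(a n / d).
    return sum(exp(2j * pi * a * n / d)
               for a in range(1, d + 1) if gcd(a, d) == 1)

def c_div(d, n):
    # Divisor representation: c_d(n) = sum_{k | (n, d)} k * mu(d / k).
    g = gcd(n, d)
    return sum(k * mu(d // k) for k in range(1, g + 1) if g % k == 0)

for d in range(1, 25):
    for n in range(1, 25):
        assert abs(c_def(d, n) - c_div(d, n)) < 1e-8
print("divisor representation of c_d(n) verified for d, n < 25")
\end{verbatim}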
Regarding the constant function $1$, we have the following result. \begin{prop} \label{proplowerboundint1} For any $N$ large enough with respect to $\delta$, we have \begin{align} \label{lowerboundint1} \int_{\frak{m}} |{\mathcal S}_{1}(\theta)|^2 d\theta\gg Q. \end{align} \end{prop} Regarding the additive function $\varpi(n)$, we will prove the next proposition. \begin{prop} \label{propL2integralofomega} Suppose $KQ_0<N^{1/2-\delta/2}$. If $N$ is sufficiently large in terms of $\delta$, we have \begin{equation} \label{sizeintomega} \int_{\frak m} |{\mathcal S}_{\varpi}(\theta)|^2 d\theta\gg_\delta Q(\log\log N)^2+N\log\bigg(\frac{\log N}{\log (2N/Q)}\bigg). \end{equation} \end{prop} Regarding the multiplicative function $d_{\alpha_N}^{\varpi}(n)$, the result is the following. \begin{prop} \label{proplowerboundintomega} Suppose $KQ_0<N^{1/2-\delta/2}$. There exists a large constant $C=C(\delta)>0$ such that if $C\log\log N< |R(N)|\leq N^{\delta/12}$ and $N$ is large enough in terms of $\delta$, we have \begin{equation} \label{lowerboundintomega1.5} \int_{\frak{m}} |{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2 d\theta\gg_\delta \frac{N}{R(N)^2}\log\bigg(\frac{\log N}{\log(2N/Q)}\bigg)+Q. \end{equation} \end{prop} \begin{rmk} From the proof of \cite[Theorem 1.11]{M} it can easily be deduced that \begin{align*} \int_{\frak{m}} |{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2 d\theta&\gg_\delta \frac{N}{R(N)^4}\exp\bigg(\bigg(2+\frac1{R(N)}\bigg)\frac{\log\log N}{R(N)}\bigg), \end{align*} whenever $B<|R(N)|\leq \log\log N$, for a suitable large constant $B\geq 3$. \end{rmk} Regarding the indicator of $y$--smooth numbers, we will show the following lower bound. \begin{prop} \label{proplowerboundvariancesmooth2} Assume that $KQ_0\leq N^{1/2-\delta}(\log N)^{17}$. Let $u:=(\log N)/(\log y)$. There exists a large constant $C>0$ such that the following holds. If $$1+\frac{\log C}{\log N}\leq u\leq 2$$ and $N$ is large enough in terms of $\delta$, we have \begin{align} \label{lowerboundsmoothint} \int_{\frak m} |{\mathcal S}_{\textbf{1}_{y-\text{smooth}}}(\theta)|^2 d\theta\gg_{\delta} N\log u+Q. \end{align} \end{prop} In order to show that $Q$ times our lower bounds \eqref{lowerboundint1}, \eqref{sizeintomega}, \eqref{lowerboundintomega1.5} and \eqref{lowerboundsmoothint} provide the expected best possible approximation for the related variances, we will produce corresponding sharp upper bounds, which in some cases will also turn out to be useful in deducing the aforementioned lower bounds themselves. \begin{prop} \label{propupperbounds} With notations as in Propositions \ref{proplowerboundint1}, \ref{propL2integralofomega}, \ref{proplowerboundintomega} and \ref{proplowerboundvariancesmooth2}, we have that \begin{enumerate}[a)] \item \eqref{lowerboundint1} is sharp; \item \eqref{sizeintomega} is sharp; \item the estimate \eqref{lowerboundintomega1.5} is sharp when $|R(N)|>(\log\log N)^{3/2}$; \item \eqref{lowerboundsmoothint} is sharp. \end{enumerate} \end{prop} \begin{rmk} \label{rmkonupperboundsdiv} It should be possible to produce a sharp upper bound for the integral in \eqref{lowerboundintomega1.5} in the whole range $|R(N)|>C\log\log N$ (see Remark \ref{RemarksmallerR} below). \end{rmk} To work out the size of the $L^2$-integral over minor arcs of the exponential sum with coefficients $f(n)=d_{\alpha_N}^{\varpi}(n)$, we will split $f$ into a sum $f=f_d+f_r$ of a deterministic (constant) part $f_d$ and a pseudorandom part $f_r$.
By the triangle inequality we will separate their contributions to the integrals and then analyse them individually. To deal with $\int_{\frak m} |{\mathcal S}_{f_d}(\theta)|^2 d\theta$ we will unfold the definition of minor arcs and insert classical estimates for the size of a complete exponential sum. Regarding $\int_{\frak m} |{\mathcal S}_{f_r}(\theta)|^2 d\theta$ instead, when $|R(N)|> (\log\log N)^{3/2}$, we will reduce the problem to estimating the $L^2$-integral over minor arcs of the exponential sum with coefficients $\varpi(n)$. To this aim, we will write $\varpi(n)=\Sigma_1+\Sigma_2$, where $\Sigma_1$ is a sum over the prime numbers smaller than a power of $2N/Q$ and $\Sigma_2$ is the remaining part, and again use the triangle inequality. To estimate $\int_{\frak m} |{\mathcal S}_{\Sigma_2}(\theta)|^2 d\theta$ we will use Parseval's identity and an application of the Tur\'{a}n--Kubilius inequality. Regarding $\int_{\frak m} |{\mathcal S}_{\Sigma_1}(\theta)|^2 d\theta$ instead, we will expand the square inside the integral and unfold the definition of minor arcs, to then conclude by counting the number of primes which are solutions to certain systems of congruences. \section{Proof of Proposition \ref{propupperbounds}} We set the parameter $K$ to be a large constant, $N^{1/2+\delta}\leq Q\leq N$, with $N$ sufficiently large in terms of $\delta$, and $Q_0$ satisfying \eqref{eq0}. We keep these notations throughout the rest of this section. \subsection{The case of the constant function $1$} We use the well-known bound \begin{align} \label{fundamentalestimate} |{\mathcal S}_{1}(\theta)|\ll \min\bigg\{N,\frac1{||\theta||}\bigg\}, \end{align} where $||\theta||$ indicates the distance of $\theta$ from the nearest integer. Since $\theta=a/q+\beta$, with $|\beta|\leq K/qQ$ and $q>KQ_0$, we have that either $||\theta||=|\theta|$ or $||\theta||=1-|\theta|$. Hence, by symmetry, we find that \begin{align*} \int_{\frak{m}} |{\mathcal S}_{1}(\theta)|^2 d\theta\ll \sum_{KQ_0<q\leq Q}\sum_{\substack{1\leq a< q/2\\ (a,q)=1}}\int_{a/q-\frac{K}{qQ}}^{a/q+\frac{K}{qQ}}\frac{1}{\theta^2}d\theta&=\frac{2K}{Q}\sum_{KQ_0<q\leq Q}q\sum_{\substack{1\leq a<q/2\\ (a,q)=1}}\frac{1}{a^2-(K/Q)^2}\ll Q, \end{align*} where we used that $a^2-(K/Q)^2\geq a^2/2$, for any $a\geq 1$, if $N$ is large enough. This shows Proposition \ref{propupperbounds} a). \subsection{The case of smooth numbers} We first observe that for any two complex numbers $w,z$ we have \begin{align} \label{trivialineq} |w+z|^2\leq 2(|w|^2+|z|^2). \end{align} By writing $\textbf{1}_{y-\text{smooth}}(n)=1-\textbf{1}_{\exists p|n: p>y}(n)$ and using \eqref{trivialineq} to separate the contributions of the two terms to the integral, we get \begin{align*} \int_{\frak m} |{\mathcal S}_{\textbf{1}_{y-\text{smooth}}}(\theta)|^2 d\theta\ll Q+\sum_{\substack{n\leq N\\ \exists p|n: p>y}}1\leq Q+N\sum_{y<p\leq N}\frac{1}{p}\ll Q+ N\log u, \end{align*} by Proposition \ref{propupperbounds} a), Parseval's identity and Mertens' theorem, where $u:=(\log N)/(\log y)\in [1+1/\log N, 2].$ This shows Proposition \ref{propupperbounds} d). \subsection{The case of divisor functions close to $1$} Let $\alpha_N=1+1/R(N)$, where $R(N)$ is a non-vanishing real function with $|R(N)|>C\log\log N$, for a constant $C>0$ to be determined later on.
By \eqref{trivialineq}, one has \begin{equation} \label{triangleineq} \int_{\mathfrak{m}}|{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2d\theta\leq 2\int_{\mathfrak{m}}|{\mathcal S}_{d_{\alpha_N}^{\varpi}-1}(\theta)|^2d\theta+2\int_{\mathfrak{m}}|{\mathcal S}_{1}(\theta)|^2d\theta \end{equation} and we split the exponential sum with coefficients $d_{\alpha_N}^{\varpi}(n)-1$ according to whether $\varpi(n)\leq A\log\log N$ or $\varpi(n)>A\log\log N$, with $A>0$ large to be chosen later; we do this only when $|R(N)|\leq (\log N)/(\log 2)$. We separate the two contributions to the integral by means of \eqref{trivialineq}. By Parseval's identity, the second one is bounded by \begin{align*} \sum_{\substack{n\leq N\\ \varpi(n)> A\log\log N}}(\alpha_N ^{\varpi(n)}-1)^2&\leq \sum_{\substack{n\leq N\\ \varpi(n)> A\log\log N}}(\alpha_N^{2\varpi(n)}+1)\\ &\leq \frac{1}{(\log N)^{A\log(5/4)}}\sum_{\substack{n\leq N}}\bigg(\bigg(\frac{3}{2}\bigg)^{\varpi(n)}+1\bigg)\bigg(\frac{5}{4}\bigg)^{\varpi(n)}\ll \frac{N}{(\log N)^{3}}, \end{align*} say, by Corollary \ref{lemmapartialsumdiv} and choosing $A$ large enough. Let \begin{equation} \label{defErr} \text{Err}(N):= \left\{ \begin{array}{ll} \frac{N}{(\log N)^{3}}& \mbox{if $|R(N)|\leq (\log N)/(\log 2)$};\\ 0 & \mbox{otherwise}.\end{array} \right. \end{equation} From the above considerations and Proposition \ref{propupperbounds} a), we deduce that \begin{equation*} \int_{\mathfrak{m}}|{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2d\theta\ll\int_{\mathfrak{m}}\bigg|\sum_{\substack{n\leq N\\ \varpi(n)\leq A\log\log N}}(\alpha_N^{\varpi(n)}-1)e(n\theta)\bigg|^2d\theta+Q+\text{Err}(N), \end{equation*} where the restriction on the sum is present only when $|R(N)|\leq (\log N)/(\log 2)$. By \eqref{trivialineq}, the integral on the right-hand side is \begin{align*} \ll \frac{1}{R(N)^2}\int_{\mathfrak{m}}\bigg|\sum_{\substack{n\leq N\\ \varpi(n)\leq A\log\log N}}\varpi(n)e(n\theta)\bigg|^2d\theta+\int_{\mathfrak{m}}|{\mathcal S}_{T_N}(\theta)|^2d\theta, \end{align*} where we let $$T_N(n):=\bigg(\alpha_N^{\varpi(n)}-1-\frac{\varpi(n)}{R(N)}\bigg)\textbf{1}_{\varpi(n)\leq A\log\log N}.$$ The second integral above, again by \eqref{trivialineq}, is \begin{align*} \ll M_N^2\int_{\mathfrak{m}}|{\mathcal S}_{1}(\theta)|^2d\theta+\int_{\mathfrak{m}}|{\mathcal S}_{T_N-M_N}(\theta)|^2d\theta, \end{align*} where $$M_N:=\alpha_N^{\log\log N}-1-\frac{\log\log N}{R(N)}.$$ By Proposition \ref{propupperbounds} a), the first term above is $\ll Q(\log\log N)^4/R(N)^4\leq Q,$ if $C$ is large enough. On the other hand, by Parseval's identity, by Taylor expanding $\alpha_N^{\varpi(n)}$ and $\alpha_N^{\log\log N}$ (which is possible thanks to the restriction in the sum, recalling the maximal size \eqref{maxsizevarpi} of $\varpi(n)$) and by the well-known identity $a^k-b^k=(a-b)\sum_{j=0}^{k-1}a^jb^{k-1-j}$, valid for positive real numbers $a,b$ and any positive integer $k$, the second term can be estimated by \begin{align*} \ll \frac{(\log\log N)^2}{R(N)^4}\sum_{\substack{n\leq N}}(\varpi(n)\textbf{1}_{\varpi(n)\leq A\log\log N}-\log\log N)^2\ll \frac{N(\log\log N)^3}{R(N)^4}, \end{align*} if $A$, $C(A)$ and $N$ are sufficiently large, by first inserting and then removing the condition $\varpi(n)\leq A\log\log N$ on the sum, at the cost of an acceptable error term, and performing the mean square estimate using \eqref{variancevarpi}.
Overall, by gathering all of the above considerations, we have shown that \begin{equation} \label{boundexpsumdiv} \int_{\frak{m}} |{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2 d\theta \ll \frac{1}{R(N)^2}\int_{\mathfrak{m}}|{\mathcal S}_{\varpi}(\theta)|^2d\theta+Q+\frac{N(\log\log N)^3}{R(N)^4}+\frac{N}{R(N)^2\log N}, \end{equation} say, whenever $|R(N)|>C\log\log N$ and $C$ and $N$ are sufficiently large. It is then clear that, assuming the upper bound of Proposition \ref{propupperbounds} b) for $\int_{\frak m} |{\mathcal S}_{\varpi}(\theta)|^2 d\theta$ and taking $|R(N)|>(\log\log N)^{3/2}$, we get Proposition \ref{propupperbounds} c). \begin{rmk} \label{RemarksmallerR} If we had $T_N(n)=\varpi(n)^2/(2R(N)^2)$, we believe that we would roughly find \begin{align*} \int_{\mathfrak{m}}|{\mathcal S}_{T_N}(\theta)|^2d\theta\approx \frac{N(\log\log N)^2}{R(N)^4}\log\bigg(\frac{\log N}{\log(2N/Q)}\bigg). \end{align*} This would imply that the lower bound \eqref{lowerboundintomega1.5} for the integral $\int_{\frak{m}} |{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2 d\theta$ is sharp in the whole range $|R(N)|>C\log\log N$, with $C$ large. In practice, by writing $T_N(n)$ as a truncated Taylor series up to order $k$, plus a remainder term, we believe we could prove that \eqref{lowerboundintomega1.5} is sharp in the range $|R(N)|>(\log\log N)^{1+1/k}$, for any fixed positive integer $k$, by inspecting the structure of the minor arcs. Even though this would constitute an improvement over the result of Proposition \ref{propupperbounds} c), we will not commit to formally proving it here. \end{rmk} \subsection{The case of the $\varpi$ function} To begin with, we write \begin{equation*} \sum_{n\leq N}\omega(n)e(n\theta)=\sum_{n\leq N}\omega_1(n)e(n\theta)+\sum_{n\leq N}\omega_2(n)e(n\theta), \end{equation*} where $\omega_1(n)$ is the number of prime factors of $n$ smaller than $\sqrt[4]{2N/Q}$ and $\omega_2(n)$ is the number of prime divisors of $n$ contained in the interval $(\sqrt[4]{2N/Q}, N]$. By \eqref{trivialineq}, one has \begin{align} \label{removingsmoothdependence} \int_{\mathfrak{m}}|{\mathcal S}_{\omega}(\theta)|^2d\theta\ll\int_{\mathfrak{m}}|{\mathcal S}_{\omega_1}(\theta)|^2d\theta+\int_{\mathfrak{m}}|{\mathcal S}_{\omega_2}(\theta)|^2d\theta. \end{align} A simple calculation shows that $\omega_2(n)$ has mean value of size $\log((4\log N)/(\log(2N/Q)))$ over the integers $n\leq N$. Hence, isolating this term inside the corresponding integral gives \begin{align*} \int_{\mathfrak{m}}|{\mathcal S}_{\omega_2}(\theta)|^2d\theta&\ll Q\bigg(\log\bigg(\frac{4\log N}{\log (2N/Q)}\bigg)\bigg)^2+ \int_{\frak m} \bigg|\sum_{\substack{n\leq N}}\bigg(\omega_2(n)-\log\bigg(\frac{4\log N}{\log (2N/Q)}\bigg)\bigg)e(n\theta)\bigg|^2 d\theta\\ &\ll Q\bigg(\log\bigg(\frac{4\log N}{\log (2N/Q)}\bigg)\bigg)^2+N\log\bigg(\frac{4\log N}{\log (2N/Q)}\bigg), \end{align*} by Proposition \ref{propupperbounds} a), Parseval's identity and an application of the general form of the Tur\'{a}n--Kubilius inequality, which gives an analogue of \eqref{variancevarpi} for $\omega_2(n)$ (see e.g. \cite[Ch. III, Theorem 3.1]{T}).
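The stated mean value of $\omega_2(n)$ is just Mertens' theorem, since $\sum_{n\leq N}\omega_2(n)=\sum_{\sqrt[4]{2N/Q}<p\leq N}\lfloor N/p\rfloor$ and $\sum_{P<p\leq N}1/p\approx \log(\log N/\log P)$. The following Python sketch is a quick numerical sanity check of this prediction (purely illustrative parameters; the \texttt{sympy} package is assumed available for enumerating primes).

\begin{verbatim}
from math import log
from sympy import primerange

def omega2_mean_vs_prediction(N, Q):
    # Empirical mean over n <= N of the number of prime divisors of n
    # lying in ((2N/Q)^(1/4), N], against the Mertens prediction
    # log(4 log N / log(2N/Q)).
    P = (2 * N / Q) ** 0.25
    empirical = sum(N // p for p in primerange(int(P) + 1, N + 1)) / N
    predicted = log(4 * log(N) / log(2 * N / Q))
    return empirical, predicted

# The two values agree in order of magnitude for moderate N, Q.
print(omega2_mean_vs_prediction(10**5, 10**3))
\end{verbatim}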
Moreover, from \begin{align*} \sum_{n\leq N}\Omega(n)e(n\theta)=\sum_{n\leq N}\omega(n)e(n\theta)+\sum_{n\leq N}\bigg(\sum_{\substack{p^k|n\\ k\geq 2}}1\bigg)e(n\theta) \end{align*} we immediately get \begin{align*} \int_{\mathfrak{m}}|{\mathcal S}_{\Omega}(\theta)|^2d\theta\ll \int_{\mathfrak{m}}|{\mathcal S}_{\omega}(\theta)|^2d\theta+\sum_{n\leq N}\bigg(\sum_{\substack{p^k|n\\ k\geq 2}}1\bigg)^2 \end{align*} and, by expanding the square and swapping summations, we see that the above sum is \begin{align*} \sum_{n\leq N}\sum_{\substack{p_1^k|n\\ k\geq 2}}\sum_{\substack{p_2^j|n\\ j\geq 2}}1&=\sum_{p_1\leq \sqrt{N}}\sum_{k=2}^{\left\lfloor \frac{\log N}{\log p_1}\right\rfloor}\sum_{p_2\leq \sqrt{N}}\sum_{j=2}^{\left\lfloor \frac{\log N}{\log p_2}\right\rfloor}\sum_{\substack{n\leq N\\ n\equiv 0\pmod{[p_1^k,p_2^j]}}}1\\ &\leq N\sum_{p_1\leq \sqrt{N}}\sum_{k=2}^{\left\lfloor \frac{\log N}{\log p_1}\right\rfloor}\sum_{j=2}^{\left\lfloor \frac{\log N}{\log p_1}\right\rfloor}\frac{1}{p_1^{\max\{k,j\}}}+N\sum_{p_1\leq \sqrt{N}}\sum_{k=2}^{\left\lfloor \frac{\log N}{\log p_1}\right\rfloor}\frac{1}{p_1^k}\sum_{\substack{p_2\leq \sqrt{N}\\ p_2\neq p_1}}\sum_{j=2}^{\left\lfloor \frac{\log N}{\log p_2}\right\rfloor}\frac{1}{p_2^j}\ll N. \end{align*} For the rest of this section, we will focus on showing the following statement. \begin{claim} \label{Claim2} Let $K$ be a large constant, $N^{1/2+\delta}\leq Q\leq N$, with $N$ sufficiently large in terms of $\delta$, and $Q_0$ satisfying \eqref{eq0}. Then we have \begin{align*} \int_{\frak m} |{\mathcal S}_{\omega_1}(\theta)|^2 d\theta\ll N. \end{align*} \end{claim} Assuming the validity of Claim \ref{Claim2}, and collecting the above observations together, we immediately deduce Proposition \ref{propupperbounds} b). We then move to the proof of Claim \ref{Claim2}. By expanding the integral, we find \begin{equation} \label{expansionint} \int_{\frak m} |{\mathcal S}_{\omega_1}(\theta)|^2 d\theta=\sum_{KQ_0<q\leq Q}\sum_{\substack{a=1,\dots,q\\ (a,q)=1}}\int_{a/q-K/qQ}^{a/q+K/qQ}\bigg|\sum_{p\leq \sqrt[4]{2N/Q}}\sum_{\substack{k\leq N/p}}e(kp\theta)\bigg|^2d\theta. \end{equation} We observe that each innermost exponential sum is quite ``long'', since for any $p\leq \sqrt[4]{2N/Q}$ it always runs over at least $Q$ numbers. We thus expect to observe cancellation in each of them individually, and hence we should not lose much by trivially bounding the double sum from above using the triangle inequality followed by \eqref{fundamentalestimate}. Since $p\theta=pa/q+p\beta$ and, by \eqref{eq0}, $$|p\beta|\ll \frac{N}{qQ^2}\leq \frac1{q}\leq \frac1{Q_0}\leq \frac1{\log N},$$ we deduce that $$||p\theta||=||\overline{pa}/q+p\beta||=\min\{|\overline{pa}/q+p\beta|,|1-\overline{pa}/q-p\beta|\},$$ where $\overline{pa}$ stands for the residue class of $pa$ modulo $q$. We will focus only on the case $\overline{pa}\leq q/2$, so that the above minimum always coincides with $|\overline{pa}/q+p\beta|$, since the complementary case can be dealt with similarly. We notice that $\overline{pa}>0$: for, if $\overline{pa}=0$, then $q|p$ and $p\leq 2N/Q$, which cannot happen since $q>KQ_0$. Hence, $|\overline{pa}/q+p\beta|\geq \overline{pa}/2q$.
Indeed, for any $N$ large enough compared to $\delta$, we have $$p|\beta|\leq\frac{KN}{qQ^2}\leq \frac1{2q}\leq \frac{\overline{pa}}{2q}.$$ Putting together the above information, we see that \eqref{expansionint} is \begin{align} \label{mainstartingpoint} &\ll \frac{1}{Q}\sum_{KQ_0<q\leq Q}\frac{1}{q}\sum_{\substack{a=1,\dots,q\\ (a,q)=1}}\bigg(\sum_{\substack{p\leq \sqrt[4]{2N/Q}}}\min\bigg\{\frac{N}{p},\frac{q}{\overline{pa}}\bigg\}\bigg)^2. \end{align} Note that the above minimum is always of size $q/\overline{pa}$. So, the above reduces to \begin{align} \label{mainstartingpoint2} &=\frac{1}{Q}\sum_{KQ_0<q\leq Q}q\sum_{\substack{a=1,\dots,q\\ (a,q)=1}}\bigg(\sum_{\substack{p\leq \sqrt[4]{2N/Q}}}\frac1{\overline{pa}}\bigg)^2\\ &=\frac{1}{Q}\sum_{KQ_0<q\leq Q}q\sum_{\substack{a=1,\dots,q\\ (a,q)=1}}\sum_{\substack{p_1,p_2\leq \sqrt[4]{2N/Q}}}\frac{1}{\overline{p_1a}}\frac{1}{\overline{p_2a}}\nonumber\\ &\leq \frac{1}{Q}\sum_{KQ_0<q\leq Q}q\sum_{\substack{p_1,p_2\leq \sqrt[4]{2N/Q}}}\sum_{\substack{b_1, b_2\leq q}}\frac{1}{b_1b_2}\sum_{\substack{a=1,\dots,q\\ (a,q)=1\\ p_1a\equiv b_1\pmod q\\p_2a\equiv b_2\pmod q}}1.\nonumber \end{align} The system of congruences \begin{equation*} \left\{ \begin{array}{ll} p_1a\equiv b_1\pmod q\\ p_2a\equiv b_2\pmod q\end{array} \right. \end{equation*} always has at most $\min\{p_1,p_2\}$ solutions. Moreover, multiplying the first equation through by $b_2$ and the second one by $b_1$, we see that we must have $$p_1b_2a\equiv p_2b_1a \pmod q \Leftrightarrow p_1b_2\equiv p_2b_1 \pmod q.$$ Therefore, we may upper bound the quantity in the last line of \eqref{mainstartingpoint2} with \begin{align} \label{secondcasecongruence} \frac{1}{Q}\sum_{KQ_0<q\leq Q}q\sum_{\substack{p_1,p_2\leq \sqrt[4]{2N/Q}}}\min\{p_1,p_2\}\sum_{\substack{b_1,b_2\leq q\\ p_1b_2\equiv p_2b_1 \pmod q}}\frac{1}{b_1b_2}. \end{align} It is easy to verify that the congruence relation $p_1b_2\equiv p_2b_1 \pmod q$ has at most $p_1$ solutions $b_2\pmod{q}$, with $b_2\geq p_2b_1/p_1$. Hence \eqref{secondcasecongruence} may be upper bounded by \begin{align*} \frac{1}{Q}\sum_{KQ_0<q\leq Q}q\sum_{\substack{p_1,p_2\leq \sqrt[4]{2N/Q}}}\frac{p_1^2 \min\{p_1,p_2\}}{p_2}\sum_{\substack{b_1\leq q}}\frac{1}{b_1^2}\ll \frac{1}{Q}\sum_{KQ_0<q\leq Q}q\sum_{\substack{p_1,p_2\leq \sqrt[4]{2N/Q}}}p_1^2\ll N, \end{align*} thus concluding the proof of Claim \ref{Claim2}. \begin{rmk} Note that we have been able to facilitate the estimate of \eqref{mainstartingpoint2} thanks to our choice of the parameter $\sqrt[4]{2N/Q}$ in \eqref{removingsmoothdependence}. \end{rmk} \section{The partial sum of some arithmetic functions twisted with Ramanujan sums} A key step in finding a lower bound for the variance of a function $f$ in arithmetic progressions is to produce a lower bound for the $L^2$-integral over minor arcs of the exponential sum with coefficients $f(n)$. For smooth numbers, this will be accomplished by means of Proposition \ref{newprop10}. More specifically, \eqref{eq81} allows us to reduce the problem to asymptotically estimating the partial sum of $f(n)=\textbf{1}_{y-\text{smooth}}(n)$ twisted with the Ramanujan sums $c_q(n)$. This constitutes a crucial point in our argument, and we next state and prove the relevant result. \begin{lem} \label{lemsumsmoothtwisted} Let $C$ be a sufficiently large positive constant and consider $\sqrt{N}\leq y\leq N/C$.
Then for any prime number $\log N<q\leq \sqrt{N}$ and $N$ large enough, we have \begin{align*} \bigg|\sum_{\substack{n\leq N\\ p|n\Rightarrow p\leq y}}c_q(n)\bigg|\gg N\log\bigg(\frac{\log N}{\log(\max\{N/q,y\})}\bigg) \end{align*} and for any squarefree positive integer $1<q\leq \sqrt{N}$ with all the prime factors larger than $N/y$, we have \begin{align*} \bigg|\sum_{\substack{n\leq N\\ p|n\Rightarrow p\leq y}}c_q(n)\bigg|\gg N\log u, \end{align*} where $u:=(\log N)/(\log y)$. \end{lem} \begin{proof} By \cite[Ch. III, Theorem 5.8]{T} we know that \[\Psi\left(\frac{N}{d},y\right):=\sum_{\substack{n\leq N/d\\ p|n\Rightarrow p\leq y}}1= \left\{ \begin{array}{ll} \lfloor\frac{N}{d}\rfloor & \mbox{if $d> N/y$};\\ \frac{N}{d}(1-\log(\frac{\log(N/d)}{\log y}))+O(\frac{N}{d\log y}) & \mbox{if $d\leq N/y$}.\end{array} \right. \] For any prime number $q$, the identity \eqref{mainpropc_q} reduces to $c_q(n)=-1+q\textbf{1}_{q|n}.$ It is then immediate to verify the following equality: \begin{align*} \sum_{\substack{n\leq N\\ p|n\Rightarrow p\leq y}}c_q(n)=-\Psi(N,y)+q\Psi\left(\frac{N}{q},y\right), \end{align*} from which it is straightforward to deduce the first estimate of the lemma. By \eqref{mainpropc_q}, and letting $\sigma(q):=\sum_{d|q}d$, we can always rewrite the sum in the statement as \begin{align*} \sum_{d|q}d\mu\left(\frac{q}{d}\right)\Psi\left(\frac{N}{d},y\right)&=N\sum_{\substack{d|q\\d> N/y}}\mu\left(\frac{q}{d}\right)+N\sum_{\substack{d|q\\d\leq N/y}}\mu\left(\frac{q}{d}\right)\bigg(1-\log\bigg(\frac{\log(N/d)}{\log y}\bigg)\bigg)\nonumber\\ &+ O\bigg(\frac{N}{\log N}\sum_{\substack{d|q\\d\leq N/y}}1+\sigma(q)\bigg).\nonumber \end{align*} Under the hypothesis that $q>1$ has all its prime factors larger than $N/y$, the sums over the divisors of $q$ at most $N/y$ reduce to the single term corresponding to $d=1$. Hence, we actually have \begin{align*} \sum_{\substack{n\leq N\\ p|n\Rightarrow p\leq y}}c_q(n)=-N\mu(q)+N\mu(q)(1-\log u)+O\bigg(\frac{N}{\log N}\bigg)=-N\mu(q)\log u+O\bigg(\frac{N}{\log N}\bigg), \end{align*} since $\sigma(q)\ll q\log\log q\ll \sqrt{N}\log\log N\leq N/\log N$ (see \cite[Ch. I, Theorem 5.7]{T}) if $N$ is large, which immediately yields the second estimate of the lemma. \end{proof} To prove the lower bound for the variance of $\varpi(n)$ and of $d_{\alpha_N}^{\varpi}(n)$ in arithmetic progressions we will instead invoke Proposition \ref{newprop11}. To this aim, we need to study the partial sum of $\varpi(n)$ twisted with the Ramanujan sums and weighted by the smooth weight $\phi(n/N)$, with $\phi(t)$ belonging to the Fourier class $\mathcal{F}$ of Proposition \ref{newprop11}. \begin{lem} \label{lempartialsumsmoothweight} Let $R:=N^{1/2-\delta/2}$, for $\delta>0$ small, and suppose that $N^{1/2+\delta}\leq Q\leq cN/\log\log N$, for a certain absolute constant $c>0$. Then for any $N$ large enough with respect to $\delta$, we have \begin{align*} \bigg|\sum_{\substack{2N/Q<p\leq R}}\frac{1}{p}\sum_{n\leq N}\varpi(n)c_p(n)\phi\left(\frac{n}{N}\right)\bigg|\gg N\log\bigg(\frac{\log R}{\log(2N/Q)}\bigg).
\end{align*} \end{lem} \begin{proof} To begin with, we note that for prime numbers $p$ the identity \eqref{mainpropc_q} reduces to $c_p(n)=-1+p\textbf{1}_{p|n}.$ Hence, the sum over $n$ in the statement is \begin{align*} &=-\sum_{n\leq N}\varpi(n)\phi\left(\frac{n}{N}\right)+p\sum_{\substack{n\leq N\\ p|n}}\varpi(n)\phi\left(\frac{n}{N}\right)\\ &=-\sum_{n\leq N}\varpi(n)\phi\left(\frac{n}{N}\right)+p\sum_{\substack{k\leq N/p}}(\varpi(k)+1)\phi\left(\frac{kp}{N}\right)+O\bigg(p\sum_{\substack{k\leq N/p^2}}(\varpi(k)+2)\bigg), \end{align*} where we used that $\varpi(pk)\leq \varpi(k)+\varpi(p)=\varpi(k)+1$. By \eqref{meanvarpi}, the above big-Oh error term contributes $\ll N(\log\log N)/p$. By partial summation from \eqref{meanvarpi}, it is easy to show that \begin{align*} \sum_{n\leq N}\varpi(n)\phi\left(\frac{n}{N}\right)=JN\log\log N+JNB_\varpi+O\bigg(\frac{N\log\log N}{\log N}\bigg), \end{align*} for any $N$ large enough, where $J:=\int_{0}^{1}\phi(t)dt\in [1/2, 1].$ This, applied once with $N$ and once with $N/p$, together with the previous observations, gives \begin{align*} \sum_{n\leq N}\varpi(n)c_p(n)\phi\left(\frac{n}{N}\right)= JN\bigg(1+\log\bigg(1-\frac{\log p}{\log N}\bigg)\bigg)+O\bigg(\frac{N\log\log N}{\log N}+\frac{N\log\log N}{p}\bigg). \end{align*} Therefore, we see that the double sum in the statement is \begin{align*} &=JN\sum_{\substack{2N/Q<p\leq R}}\frac{1+\log(1-\frac{\log p}{\log N})}{p}+O\bigg(\frac{N(\log\log N)^2}{\log N}+N\log\log N\sum_{\substack{2N/Q<p\leq R}}\frac{1}{p^2}\bigg)\\ &\gg N\log\bigg(\frac{\log R}{\log(2N/Q)}\bigg)+O\bigg(\frac{N(\log\log N)^2}{\log N}+\frac{Q\log\log N}{\log(2N/Q)}\bigg), \end{align*} by Mertens' theorem, from which the claim follows in our range of $Q$, if $N$ is large enough with respect to $\delta$. \end{proof} The next result exhibits a large amount of cancellation in the partial sum of a Ramanujan sum twisted by an exponential phase. \begin{lem} \label{lemramanujansmooth} Let $R:=N^{1/2-\delta/2}$, for $\delta>0$ small, and let $q<R$ be a prime number. Then we have \begin{align*} &\sum_{n\leq N}c_q(n)e\left(\frac{nu}{N}\right)\ll q(1+|u|), \end{align*} uniformly for all real numbers $u$. \end{lem} \begin{proof} To begin with, we notice that for any prime number $q$ the following estimate holds: \begin{align} \label{defandpropS} S(t):=\sum_{\substack{n\leq t}}c_q(n)=\sum_{\substack{n\leq t\\ q|n}}q-\sum_{n\leq t}1\ll q, \end{align} by \eqref{mainpropc_q}, for any $t\geq 1$. Hence, by partial summation we find \begin{align*} \sum_{n\leq N}c_q(n)e\left(\frac{nu}{N}\right)=\int_{1}^{N}e\left(\frac{tu}{N}\right)dS(t)&=S(N)e(u)-S(1)e\left(\frac{u}{N}\right)-\frac{2\pi iu}{N}\int_{1}^{N}S(t)e\left(\frac{tu}{N}\right)dt, \end{align*} from which, by using \eqref{defandpropS}, the claim follows. \end{proof} The last result of this section, preliminary to the proof of the lower bound for $\int_{\frak{m}} |{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2 d\theta$ contained in Proposition \ref{proplowerboundintomega}, concerns the partial sum of the divisor function $d_{\alpha_N}^{\varpi}(n)$ twisted with Ramanujan sums and weighted by $\phi(n/N)$, with $\phi(t)$ belonging to the Fourier class $\mathcal{F}$ of Proposition \ref{newprop11}. \begin{lem} \label{lempartialsumdivtwisted} Let $\alpha_N=1+1/R(N)$, where $R(N)$ is a non-zero real function, and let $R:=N^{1/2-\delta/2}$, for $\delta>0$ small. Assume $N^{1/2+\delta}\leq Q<cN(\log\log N)/R(N)^2$, for a certain absolute constant $c>0$.
There exists a sufficiently large constant $C=C(\delta)>0$ such that if $C\log\log N\leq |R(N)|\leq (\log\log N)^{3}$ and $N$ is large enough with respect to $\delta$, we have \begin{align*} \bigg|\sum_{2N/Q<p\leq R}\sum_{\substack{n\leq N}}d_{\alpha_N}^{\varpi}(n)c_p(n)\phi\left(\frac{n}{N}\right)\bigg|\gg \frac{N}{|R(N)|}\log\bigg(\frac{\log R}{\log(2N/Q)}\bigg). \end{align*} \end{lem} \begin{proof} By adapting the proof of \cite[Theorem 1.11]{M}, it is not difficult to show that \begin{align} \label{asymptoticforf(n)} \sum_{n\leq t}d_{\alpha_N}^{\varpi}(n)=\frac{c_0(\alpha_N, \varpi)}{\Gamma(\alpha_N)}t(\log N)^{\alpha_N-1}\bigg(1+O\bigg(\frac{\log\log N}{|R(N)|\log N}\bigg)\bigg)+O\bigg(\frac{N\log\log N}{\log N}\bigg), \end{align} for any $t\in [N/\log N, N]$, if $N$ is large enough, where $\Gamma(z)$ stands for the Gamma function and \[c_0(\alpha_N, \varpi):=\left\{ \begin{array}{ll} \prod_{p}\left(1-\frac{1}{p}\right)^{\alpha_N}\left(1+\frac{\alpha_N}{p-1}\right)& \mbox{if $\varpi(n)=\omega(n)$};\\ \prod_{p}\left(1-\frac{1}{p}\right)^{\alpha_N}\left(1-\frac{\alpha_N}{p}\right)^{-1} & \mbox{if $\varpi(n)=\Omega(n)$}.\end{array} \right. \] It is easy to verify that \begin{align} \label{asymptc0gamma} c_0(\alpha_N, \varpi)=1+O\bigg(\frac{1}{|R(N)|}\bigg)=\Gamma(\alpha_N), \end{align} if $N$ is large enough (see \cite[Appendix C]{MV} for basic results on the Gamma function). By Corollary \ref{lemmapartialsumdiv}, we certainly have \begin{align*} \sum_{n\leq N/\log N}d_{\alpha_N}^{\varpi}(n)\phi\left(\frac{n}{N}\right)\ll \sum_{n\leq N/\log N}\bigg(1+\frac{1}{|R(N)|}\bigg)^{\varpi(n)}\ll \frac{N}{\log N}(\log N)^{1/|R(N)|}\ll \frac{N}{\log N}. \end{align*} This, together with partial summation from \eqref{asymptoticforf(n)} applied to the remaining part of the sum, leads to \begin{align*} \sum_{n\leq N}d_{\alpha_N}^{\varpi}(n)\phi\left(\frac{n}{N}\right)=\frac{c_0(\alpha_N, \varpi)}{\Gamma(\alpha_N)}JNe^{\frac{\log\log N}{R(N)}}+O\bigg(\frac{N\log\log N}{\log N}\bigg), \end{align*} where $J:=\int_{0}^{1}\phi(t)dt\in [1/2, 1]$ and we made use of \eqref{asymptc0gamma} to simplify the error term. Applying this asymptotic estimate with $N/p$ in place of $N$ as the length of the sum, we find \begin{align*} \sum_{\substack{n\leq N\\ p|n}}d_{\alpha_N}^{\varpi}(n)\phi\left(\frac{n}{N}\right)&= \alpha_N\sum_{\substack{k\leq N/p\\ p\nmid k}}d_{\alpha_N}^{\varpi}(k)\phi\left(\frac{pk}{N}\right)+\sum_{k\leq N/p^2}d_{\alpha_N}^{\varpi}(kp^2)\phi\left(\frac{kp^2}{N}\right)\\ &=\alpha_N\sum_{\substack{k\leq N/p}}d_{\alpha_N}^{\varpi}(k)\phi\left(\frac{pk}{N}\right)+O\bigg(\sum_{\substack{k\leq N/p^2}}d_{1+1/|R(N)|}^{\varpi}(k)\bigg)\\ &=\frac{c_0(\alpha_N, \varpi)}{\Gamma(\alpha_N)}\frac{JN\alpha_N}{p}e^{\frac{\log\log(N/p)}{R(N)}}+O\bigg(\frac{N\log\log N}{p\log N}+\frac{N}{p^2}\bigg), \end{align*} where we used $\varpi(pk)\leq \varpi(k)+1$ and Corollary \ref{lemmapartialsumdiv} to handle the contribution of the error terms. Collecting the above estimates, and taking into account the identity \eqref{mainpropc_q} for the Ramanujan sums, we find that the sum over $n$ in the statement equals \begin{align} \label{asymptestimatedivtwisted} \frac{c_0(\alpha_N, \varpi)}{\Gamma(\alpha_N)}JNe^{\frac{\log\log N}{R(N)}}( \alpha_N e^{\frac{\log(1-\frac{\log p}{\log N})}{R(N)}}-1)+O\bigg(\frac{N\log\log N}{\log N}+\frac{N}{p}\bigg).
\end{align} By Taylor expansion, and thanks to \eqref{asymptc0gamma}, one has \begin{align*} \alpha_Ne^{\frac{\log(1-\frac{\log p}{\log N})}{R(N)}}-1&=\bigg(1+\frac{1}{R(N)}\bigg)\bigg(1+\frac{\log(1-\frac{\log p}{\log N})}{R(N)}+O\bigg(\frac{1}{R(N)^2}\bigg)\bigg)-1\\ &=\frac{1+\log(1-\frac{\log p}{\log N})}{R(N)}+O\bigg(\frac{1}{R(N)^2}\bigg) \end{align*} and \begin{align*} \frac{c_0(\alpha_N,\varpi)}{\Gamma(\alpha_N)}e^{\frac{\log\log N}{R(N)}}=\bigg(1+O\bigg(\frac{1}{|R(N)|}\bigg)\bigg)\bigg(1+O\bigg(\frac{\log\log N}{|R(N)|}\bigg)\bigg)=1+O\bigg(\frac{\log\log N}{|R(N)|}\bigg). \end{align*} Inserting the above estimates into \eqref{asymptestimatedivtwisted}, we see that the double sum in the statement is \begin{align*} &=\bigg(\frac{JN}{R(N)}\sum_{\substack{2N/Q<p\leq R}}\frac{1+\log(1-\frac{\log p}{\log N})}{p}+O\bigg(\frac{N\log\log N}{R(N)^2}\bigg)\bigg)\bigg(1+O\bigg(\frac{\log\log N}{|R(N)|}\bigg)\bigg)\\ &+O\bigg(\frac{N(\log\log N)^2}{\log N}+N\sum_{2N/Q<p\leq R}\frac{1}{p^2}\bigg)\\ &\gg \frac{N}{|R(N)|}\log\bigg(\frac{\log R}{\log(2N/Q)}\bigg)+O\bigg(\frac{Q}{\log(2N/Q)}\bigg), \end{align*} by Mertens' theorem, taking $C$ and $N$ large enough with respect to $\delta$ and using our assumption on $|R(N)|$; the claim then follows in our range of $Q$. \end{proof} \section{Proof of Proposition \ref{proplowerboundint1}} By restricting the integral in the statement to minor arcs of the form $(1/q-1/KqQ,1/q+1/KqQ)$, for positive integers $q$ in the range $Q/(2M^2)<q\leq Q/M^2$, where $M$ is a large positive constant to be chosen later, we can lower bound it with \begin{align*} \sum_{Q/(2M^2)<q\leq Q/M^2}\int_{-1/KqQ}^{1/KqQ}\bigg|\sum_{\substack{n\leq N}} e(n/q)e(n\theta) \bigg|^2 d\theta. \end{align*} Since, by definition of minor arcs, $q>KQ_0$, and by \eqref{eq0} $Q_0\leq Q/K^2$, we require $K>2M^2$, say. Moreover, we recall that $K$, and thus $M$, are absolute constants here. By partial summation it is easy to verify that \begin{align*} \bigg|\sum_{\substack{1\leq n\leq N}} e(n/q)e(n\theta)\bigg|= \bigg|\frac{e^{2\pi i(N+1)/q}-e^{2 \pi i/q}}{e^{2\pi i/q}-1}\bigg|+O\bigg(\frac{N}{Q}\bigg). \end{align*} We deduce that \begin{align*} \int_{\mathfrak{m}(K,Q_0,Q)}|{\mathcal S}_{1}(\theta)|^2d\theta &\geq \sum_{Q/(2M^2)<q\leq Q/M^2}\frac{2}{KqQ}\bigg|\frac{e^{2\pi i(N+1)/q}-e^{2 \pi i/q}}{e^{2\pi i/q}-1}\bigg|^2\\ &+O\bigg(\frac{N^2}{Q^3}+\frac{N}{Q}\sum_{Q/(2M^2)<q\leq Q/M^2}\frac{1}{qQ}\bigg|\frac{e^{2\pi i(N+1)/q}-e^{2 \pi i/q}}{e^{2\pi i/q}-1}\bigg|\bigg)\\ &\gg\sum_{Q/(2M^2)<q\leq Q/M^2}\frac{q}{Q}|e^{2\pi i(N+1)/q}-e^{2 \pi i/q}|^2+O\bigg(\frac{N^2}{Q^3}+\frac{N}{Q}\bigg), \end{align*} by expanding \begin{align*} e^{2\pi i/q}-1=\frac{2\pi i}{q}+O\bigg(\frac{1}{q^2}\bigg)\asymp \frac{1}{q}. \end{align*} Notice that $$|e^{2\pi i(N+1)/q}-e^{2 \pi i/q}|^2=2-2\Re(e^{2\pi iN/q}).$$ Therefore, to conclude, we only have to produce some saving in the size of the partial sum of $\Re(e^{2\pi iN/q})$ over the interval $I:=(Q/(2M^2),\,Q/M^2]$, compared to its length. Once this is done, we immediately deduce that \begin{align*} \int_{\mathfrak{m}(K,Q_0,Q)}|{\mathcal S}_{1}(\theta)|^2d\theta \gg Q+O\bigg(\frac{N^2}{Q^3}+\frac{N}{Q}\bigg), \end{align*} where the term $Q$ dominates whenever $Q\geq c\sqrt{N}$, for a suitable absolute constant $c>0$. To this aim, we apply van der Corput's inequality (see e.g. \cite[Ch. I, Theorem 6.5]{T}) to the function $f_N(t):=N/t,$ for which $f_N(t)\in C^2(I)$ with $f''_N(t)\asymp NM^6 /Q^3$, for $t\in I$.
We thus get \begin{align*} \bigg|\sum_{q\in I}\Re(e^{2\pi i f_N(q)})\bigg| \ll \frac{Q}{M^3}, \end{align*} for any $M^{8/3}N^{1/3}\leq Q\leq N$, if we take $N$ sufficiently large, from which the claim follows by taking $M$ large enough. \section{Proof of Proposition \ref{propL2integralofomega}} Let $K$ be a large constant and let $Q_0$ and $Q$ be real numbers satisfying \eqref{eq0}. \subsection{Large values of $Q$} By isolating the constant term $Z:=\log\log N$ and expanding the square, we have \begin{align*} \int_{\mathfrak{m}}|{\mathcal S}_{\varpi}(\theta)|^2d\theta&\geq \int_{\mathfrak{m}}|{\mathcal S}_{\varpi-Z}(\theta)|^2d\theta+\int_{\mathfrak{m}}|{\mathcal S}_{Z}(\theta)|^2d\theta-2\int_{\mathfrak{m}}|{\mathcal S}_{\varpi-Z}(\theta){\mathcal S}_{Z}(\theta)|d\theta\\ &\geq \int_{\mathfrak{m}}|{\mathcal S}_{Z}(\theta)|^2d\theta-2\sqrt{\int_{\mathfrak{m}}|{\mathcal S}_{\varpi-Z}(\theta)|^2d\theta\int_{\mathfrak{m}}|{\mathcal S}_Z(\theta)|^2d\theta}, \end{align*} by an application of the Cauchy--Schwarz inequality. By completing the integral $\int_{\mathfrak{m}}|{\mathcal S}_{\varpi-Z}(\theta)|^2d\theta$ to the whole circle and using Parseval's identity, followed by an application of the upper bound \eqref{variancevarpi} on the second centred moment of $\varpi(n)$, we find that it is $\ll N\log\log N$. Since from Propositions \ref{proplowerboundint1} and \ref{propupperbounds} a) we know that $\int_{\mathfrak{m}}|{\mathcal S}_{1}(\theta)|^2d\theta\asymp Q$, on a wide range of $Q$, we also in particular have \begin{align*} \int_{\mathfrak{m}}|{\mathcal S}_{Z}(\theta)|^2d\theta\asymp Q(\log\log N)^2, \end{align*} whenever e.g. $Q\geq c N/\log\log N$, for any fixed constant $c>0$. By then choosing $c$ suitably large, we get the lower bound \eqref{sizeintomega} in this range of $Q$. \subsection{Small values of $Q$} Assume now $N^{1/2+\delta}\leq Q< cN/\log\log N$, with $c$ as in the previous subsection, and $KQ_0<R$, where $R:=N^{1/2-\delta/2}$, for a small $\delta>0$. Let $g(r)$ be the characteristic function of the set of prime numbers smaller than $R$. We apply Proposition \ref{newprop11} with this choice of minor arcs and of the functions $g(r)$ and $f(n)=\varpi(n)$. \begin{rmk} In order to successfully apply Proposition \ref{newprop11}, as a rule of thumb, we might think of $g(r)$ as an approximation of the Dirichlet convolution $f\ast \mu(r)$, where $\mu(r)$ is the M\"{o}bius function. This motivates our choice of $g$, since for any $n\leq N$ we have $g\ast 1(n)=\omega(n)+O(1)$, with $\omega(n)\approx \log\log N\approx \Omega(n)$ for most of the integers $n\leq N$, by \eqref{meanvarpi}. \end{rmk} With the notations introduced in Proposition \ref{newprop11}, we have \begin{align} \label{denomestimate1} \int_{\frak m} |\mathcal{G}(\theta)|^2d\theta\ll N\log\bigg(\frac{\log N}{\log(2N/Q)}\bigg), \end{align} which follows from Proposition \ref{propupperbounds} b) on our range of $Q$.
Next, by \eqref{lowerboundintminarcs2}, with $f(n)=\varpi(n)$, the integral $\int_{\frak m} {\mathcal S}_{f}(\theta)\overline{\mathcal{G}(\theta)}d\theta$ is \begin{align} \label{numeratorshape} &=\sum_{n\leq N}\varpi(n)\bigg(\sum_{\substack{p|n\\ p\le R}} 1\bigg)\phi\left(\frac{n}{N}\right)\\ &-N\sum_{q\leq KQ_0}\int_{-K/qQ}^{K/qQ}\bigg(\sum_{n\leq N}\varpi(n)c_q(n)e(n\beta)\bigg)\bigg(\frac{\textbf{1}_{q\geq 2,\ \text{prime}}}{q}+\sum_{\substack{p\leq R}}\frac{\textbf{1}_{q=1}}{p}\bigg)\hat{\phi}(\beta N)d\beta\nonumber\\ &+O(N^{1-\delta}),\nonumber \end{align} if $N$ is large enough with respect to $\delta$, where we trivially estimated the error term using the bound \eqref{maxsizevarpi} on the maximal size of $\varpi(n)$ and our hypotheses on $Q_0$, $Q$ and $R$. The second expression in \eqref{numeratorshape} equals \begin{align} \label{startingpointintest1} &-N\sum_{n\leq N}\varpi(n)\sum_{\substack{p\leq R}}\frac{1}{p}\int_{-K/Q}^{K/Q}e(n\beta)\hat{\phi}(\beta N)d\beta\\ \label{startingpointintest2} &-N\sum_{\substack{2\leq q\leq KQ_0\\ q\ \text{prime}}}\frac{1}{q}\sum_{n\leq N}\varpi(n)c_q(n)\int_{-K/qQ}^{K/qQ}e(n\beta)\hat{\phi}(\beta N)d\beta. \end{align} By changing variable, and since $\phi(t)$ belongs to the Fourier class $\mathcal{F}$ of Proposition \ref{newprop11}, one has \begin{align} \label{smoothint1} N\int_{-K/Q}^{K/Q}e(n\beta)\hat{\phi}(\beta N)d\beta&=\phi\left(\frac{n}{N}\right)+O\bigg(\int_{KN/Q}^{+\infty}\hat{\phi}(u)du+\int_{-\infty}^{-KN/Q}\hat{\phi}(u)du \bigg)\\ &=\phi\left(\frac{n}{N}\right)+O\bigg(\frac{Q^4}{N^4}\bigg),\nonumber \end{align} where we recall that $Q<cN/\log\log N$. Thus, by the asymptotic expansion \eqref{meanvarpi} for the partial sum of $\varpi(n)$ and Mertens' theorem, \eqref{startingpointintest1} equals \begin{align} \label{startingpointintest11} -\sum_{\substack{p\leq R}}\frac{1}{p}\sum_{n\leq N}\varpi(n)\phi\left(\frac{n}{N}\right)+O\bigg(\frac{Q}{\log\log N}\bigg). \end{align} We now split the sum over $q$ in \eqref{startingpointintest2} into two parts, according to whether $q\leq 2N/Q$ or $q>2N/Q$. In the second case, since $\hat{\phi}(\xi)$ is bounded, we find \begin{align} \label{trivialestimateint} N\int_{-K/qQ}^{K/qQ}e(n\beta)\hat{\phi}(\beta N)d\beta=\int_{-KN/qQ}^{KN/qQ}e\left(\frac{nu}{N}\right)\hat{\phi}(u)du\ll \frac{N}{qQ}, \end{align} from which we deduce that the contribution to \eqref{startingpointintest2} from the primes $q>2N/Q$ is \begin{align} \label{contributionsecondset} \ll \frac{N}{Q}\sum_{\substack{q>2N/Q\\ q\ \text{prime}}}\frac{1}{q^2}\sum_{n\leq N}\varpi(n)|c_q(n)|\ll \frac{N^2\log\log N}{Q}\sum_{\substack{q>2N/Q\\ q\ \text{prime}}}\frac{1}{q^2}\ll \frac{N\log\log N}{\log(2N/Q)}. \end{align} On the other hand, for values of $q\leq 2N/Q$, by changing variable and by definition of $\phi(t)$, we can rewrite the integral $\int_{-KN/qQ}^{KN/qQ}e(nu/N)\hat{\phi}(u)du$ as \begin{align} \label{asymtoticint} \phi\left(\frac{n}{N}\right)+\int_{KN/qQ}^{+\infty}e\left(\frac{nu}{N}\right)\hat{\phi}(u)du+\int_{-\infty}^{-KN/qQ}e\left(\frac{nu}{N}\right)\hat{\phi}(u)du=\phi\left(\frac{n}{N}\right)+O\bigg(\frac{qQ}{N}\bigg), \end{align} from which we may deduce that the contribution to \eqref{startingpointintest2} coming from those primes is \begin{align} \label{smallcontribution} &-\sum_{\substack{2\leq q\leq 2N/Q\\ q\ \text{prime}}}\frac{1}{q}\sum_{n\leq N}\varpi(n)c_q(n)\phi\left(\frac{n}{N}\right)+O\bigg(\frac{N\log\log N}{\log(2N/Q)}\bigg).
\end{align} Collecting together \eqref{startingpointintest11}, \eqref{smallcontribution} and the previous observations, and thanks to the identity \eqref{mainpropc_q} for the Ramanujan sums, we see that \eqref{numeratorshape} equals \begin{align*} &\sum_{\substack{2N/Q<p\leq R}}\frac{1}{p}\sum_{n\leq N}\varpi(n)c_p(n)\phi\left(\frac{n}{N}\right)+O\bigg(\frac{N\log\log N}{\log(2N/Q)}\bigg), \end{align*} if $N$ is large enough with respect to $\delta$, where a lower bound for the size of the above sum has already been given in Lemma \ref{lempartialsumsmoothweight}. Overall, we have thus found that \begin{align*} \int_{\frak m} {\mathcal S}_{f}(\theta)\overline{\mathcal{G}(\theta)}d\theta\gg N\log\bigg(\frac{\log R}{\log(2N/Q)}\bigg), \end{align*} in the range $N^{1/2+\delta}\leq Q\leq cN/\log\log N$, which, together with the upper bound \eqref{denomestimate1} for the integral $\int_{\frak m} |\mathcal{G}(\theta)|^2d\theta$, concludes the proof of the lower bound \eqref{sizeintomega} for the integral $\int_{\frak m} |{\mathcal S}_{\varpi}(\theta)|^2 d\theta$, via an application of Proposition \ref{newprop11}, whenever $N$ is suitably large with respect to $\delta$. Indeed, to rewrite the result as in the statement of Proposition \ref{propL2integralofomega} we appeal to the following lemma. \begin{lem} \label{lemdeltadep} For any $\delta$ small enough and $N$ sufficiently large with respect to $\delta$, we have \begin{align*} \log\bigg(\frac{\log R}{\log(2N/Q)}\bigg)\geq \delta\log\bigg(\frac{\log N}{\log(2N/Q)}\bigg). \end{align*} \end{lem} \begin{proof} The desired inequality is equivalent to \begin{align*} \bigg(\frac{1}{2}-\frac{\delta}{2}\bigg)\bigg(\frac{\log N}{\log(2N/Q)}\bigg)^{1-\delta}\geq 1, \end{align*} which is satisfied when in particular \begin{align*} \left(\frac{1}{2}-\frac{\delta}{2}\right)\geq \left(\frac{1}{2}-\delta+O(\delta^2)\right)^{1-\delta} \end{align*} and $N$ is sufficiently large with respect to $\delta$. The above in turn is equivalent to \begin{align*} \frac{1+\frac{\delta}{\log2}+O(\delta^2)}{1+\frac{2\delta}{\log 2}+O(\delta^2)}\leq 1-\delta. \end{align*} Since the left-hand side above equals $1-\delta/\log 2+O(\delta^2),$ the claim immediately follows if $\delta$ is taken small enough. \end{proof} \section{Proof of Proposition \ref{proplowerboundintomega}} Let $K$ be a large constant and let $Q_0$ and $Q$ be real numbers satisfying \eqref{eq0}. Moreover, let $C\log\log N\leq |R(N)|\leq N^{\delta/12}$, with $C$ as in Lemma \ref{lempartialsumdivtwisted}. \subsection{Large values of $Q$} By isolating the constant term $1$ and expanding the square, we have \begin{align*} \int_{\mathfrak{m}}|{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2d\theta&\geq \int_{\mathfrak{m}}|{\mathcal S}_{d_{\alpha_N}^{\varpi}-1}(\theta)|^2d\theta+\int_{\mathfrak{m}}|{\mathcal S}_{1}(\theta)|^2d\theta-2\int_{\mathfrak{m}}|{\mathcal S}_{d_{\alpha_N}^{\varpi}-1}(\theta){\mathcal S}_{1}(\theta)|d\theta\\ &\geq \int_{\mathfrak{m}}|{\mathcal S}_{1}(\theta)|^2d\theta-2\sqrt{\int_{\mathfrak{m}}|{\mathcal S}_{d_{\alpha_N}^{\varpi}-1}(\theta)|^2d\theta\int_{\mathfrak{m}}|{\mathcal S}_1(\theta)|^2d\theta}, \end{align*} by an application of the Cauchy--Schwarz inequality. The integral $\int_{\mathfrak{m}}|{\mathcal S}_{d_{\alpha_N}^{\varpi}-1}(\theta)|^2d\theta$ has already been estimated in Subsection~4.3, where we found
\eqref{boundexpsumdiv}): \begin{align*} \int_{\mathfrak{m}}|{\mathcal S}_{d_{\alpha_N}^{\varpi}-1}(\theta)|^2d\theta\ll\frac{1}{R(N)^2}\int_{\mathfrak{m}}|{\mathcal S}_{\varpi}(\theta)|^2d\theta+\frac{N(\log\log N)^3}{R(N)^4}+\frac{N}{R(N)^2\log N}. \end{align*} By Propositions \ref{proplowerboundint1} and \ref{propupperbounds} a), which together give $\int_{\mathfrak{m}}|{\mathcal S}_{1}(\theta)|^2d\theta\asymp Q$, by Proposition \ref{propupperbounds} b), which shows that $$\int_{\frak m} |{\mathcal S}_{\varpi}(\theta)|^2 d\theta\ll Q(\log\log N)^2+N\log\bigg(\frac{\log N}{\log (2N/Q)}\bigg),$$ and by the above considerations, we may deduce the lower bound \eqref{lowerboundintomega1.5}, at least when $Q\geq cN(\log\log N)/R(N)^2$, for $c$ a suitable positive constant, by taking $N$ large enough and possibly replacing $C$ with a larger value. \subsection{Small values of $Q$} Let us now assume $N^{1/2+\delta}\leq Q<cN(\log\log N)/R(N)^2$ and $KQ_0<R$, where $R:=N^{1/2-\delta/2}$, for a small $\delta>0$. Let $g(r)$ be the characteristic function of the set of prime numbers smaller than $R$. We apply Proposition \ref{newprop11} with such sets of minor arcs and functions $g(r)$ and $f(n)=d_{\alpha_N}^{\varpi}(n)$. With the notations introduced there, we again have \begin{align} \label{denomestimate2} \int_{\frak m} |\mathcal{G}(\theta)|^2d\theta\ll N\log\bigg(\frac{\log N}{\log(2N/Q)}\bigg), \end{align} which follows from Proposition \ref{propupperbounds} b), since by assumption on $|R(N)|$ we always at least have $Q\ll N/\log\log N$. Next, by \eqref{lowerboundintminarcs2}, with $f(n)=d_{\alpha_N}^{\varpi}(n)$, the integral $\int_{\frak m} {\mathcal S}_{f}(\theta)\overline{\mathcal{G}(\theta)}d\theta$ is \begin{align} \label{numeratorshape2} &=\sum_{n\leq N}d_{\alpha_N}^{\varpi}(n)\bigg(\sum_{\substack{p|n\\ p\le R}} 1\bigg)\phi\left(\frac{n}{N}\right)\\ &-N\sum_{q\leq KQ_0}\int_{-K/qQ}^{K/qQ}\bigg(\sum_{n\leq N}d_{\alpha_N}^{\varpi}(n)c_q(n)e(n\beta)\bigg)\bigg(\frac{\textbf{1}_{q>2,\ \text{prime}}}{q}+\sum_{\substack{p\leq R}}\frac{\textbf{1}_{q=1}}{p}\bigg)\hat{\phi}(\beta N)d\beta\nonumber\\ &+O(N^{1-\delta}),\nonumber \end{align} if $N$ is large enough with respect to $\delta$, where we trivially estimated the error term using Corollary \ref{lemmapartialsumdiv} and our hypotheses on $Q_0, Q$ and $R$. The second expression in the above displayed equation equals \begin{align} \label{startingpointintest111} &-N\sum_{n\leq N}d_{\alpha_N}^{\varpi}(n)\sum_{\substack{p\leq R}}\frac{1}{p}\int_{-K/Q}^{K/Q}e(n\beta)\hat{\phi}(\beta N)d\beta\\ \label{startingpointintest222} &-N\sum_{\substack{2\leq q\leq KQ_0\\ q\ \text{prime}}}\frac{1}{q}\sum_{n\leq N}d_{\alpha_N}^{\varpi}(n)c_q(n)\int_{-K/qQ}^{K/qQ}e(n\beta)\hat{\phi}(\beta N)d\beta. \end{align} By the second identity in \eqref{smoothint1} for $N\int_{-K/Q}^{K/Q}e(n\beta)\hat{\phi}(\beta N)d\beta$, we see that \eqref{startingpointintest111} is \begin{align} \label{firstexpressionestimate} &=-\sum_{\substack{p\leq R}}\frac{1}{p}\sum_{n\leq N}d_{\alpha_N}^{\varpi}(n)\phi(n/N)+O\bigg(\frac{Q}{R(N)^2} \bigg), \end{align} where we used Corollary \ref{lemmapartialsumdiv} and our hypothesis on $Q$ to estimate the error term. We now split the sum over $q$ in \eqref{startingpointintest222} into two parts according to whether $q\leq 2N/Q$ or $q>2N/Q$. 
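Before treating the two ranges, we record a purely numerical aside. The identity \eqref{mainpropc_q} for the Ramanujan sums, which is used repeatedly in this section and is written out explicitly in the deduction of Theorem \ref{varianced1} below as $c_q(n)=\sum_{k|(n,q)}k\mu(q/k)$, can be double-checked for small parameters against the defining exponential sum $c_q(n)=\sum_{1\leq a\leq q,\,(a,q)=1}e(an/q)$. The following Python sketch (an illustration only, playing no role in the proofs; all helper names are ours, and the M\"obius function below is deliberately naive):
\begin{verbatim}
# Sanity check of the Ramanujan-sum identity
#   c_q(n) = sum_{k | (n,q)} k * mu(q/k)
# against the exponential-sum definition
#   c_q(n) = sum_{1<=a<=q, gcd(a,q)=1} e(an/q).
from math import gcd
import cmath

def mobius(n):
    # naive Moebius function via trial division
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # p^2 divides n
            res = -res
        p += 1
    return -res if n > 1 else res

def c_exp(q, n):
    # exponential sum over the reduced residues mod q
    return sum(cmath.exp(2j * cmath.pi * a * n / q)
               for a in range(1, q + 1) if gcd(a, q) == 1)

def c_mu(q, n):
    # divisor-sum form of the identity
    return sum(k * mobius(q // k) for k in range(1, q + 1)
               if q % k == 0 and n % k == 0)

for q in range(1, 40):
    for n in range(1, 40):
        assert abs(c_exp(q, n) - c_mu(q, n)) < 1e-8
\end{verbatim}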
The term corresponding to the second set of primes equals \begin{align} \label{secondexpression} &-\frac{N}{R(N)}\sum_{\substack{2N/Q< q\leq KQ_0\\ q\ \text{prime}}}\frac{1}{q}\sum_{n\leq N}\varpi(n)c_q(n)\int_{-K/qQ}^{K/qQ}e(n\beta)\hat{\phi}(\beta N)d\beta\\ &-N\sum_{\substack{2N/Q< q\leq KQ_0\\ q\ \text{prime}}}\frac{1}{q}\sum_{n\leq N}c_q(n)\int_{-K/qQ}^{K/qQ}e(n\beta)\hat{\phi}(\beta N)d\beta\nonumber\\ &-N\sum_{\substack{2N/Q< q\leq KQ_0\\ q\ \text{prime}}}\frac{1}{q}\sum_{n\leq N}E(n)c_q(n)\int_{-K/qQ}^{K/qQ}e(n\beta)\hat{\phi}(\beta N)d\beta,\nonumber \end{align} where for the sake of readability we defined $E(n):=d_{\alpha_N}^{\varpi}(n)-1-\varpi(n)/R(N)$. The sum in the first term above has already been estimated before, with the result given in \eqref{contributionsecondset}. Hence, the first expression in \eqref{secondexpression} is $$\ll \frac{N\log\log N}{|R(N)|\log(2N/Q)}.$$ Regarding the second term in \eqref{secondexpression}, by a change of variables inside the integral and swapping integral and summation, it is \begin{align*} -\sum_{\substack{2N/Q<q\leq KQ_0\\ q\ \text{prime}}}\frac{1}{q}\int_{-KN/qQ}^{KN/qQ}\sum_{\substack{n\leq N}}c_q(n)e\left(\frac{nu}{N}\right)\hat{\phi}(u)du\ll \frac{N}{Q}\sum_{\substack{2N/Q< q\leq KQ_0\\ q\ \text{prime}}}\frac{1}{q}\ll \frac{N\log\log N}{Q}\leq \sqrt{N}, \end{align*} by Lemma \ref{lemramanujansmooth}, Mertens' theorem and taking $N$ large enough with respect to $\delta$. Finally, the third term in \eqref{secondexpression}, by the estimate \eqref{trivialestimateint} for $N\int_{-K/qQ}^{K/qQ}e(n\beta)\hat{\phi}(\beta N)d\beta$, the identity \eqref{mainpropc_q} for the Ramanujan sums and the bound \eqref{secondmomentvarpi} on the second moment of $\varpi(n)$, is easily seen to be \begin{align*} \ll \frac{N^2(\log\log N)^2}{QR(N)^2}\sum_{\substack{2N/Q< q\leq KQ_0\\ q\ \text{prime}}}\frac{1}{q^2}\ll \frac{N(\log\log N)^2}{R(N)^2\log(2N/Q)}. \end{align*} Here, to estimate the sum over $n$ in \eqref{secondexpression} we argued as in subsect. $4.3$, by dividing the argument according to whether $|R(N)|\leq (\log N)/(\log 2)$ or not, and, in the first case, by splitting the sum according to whether $\varpi(n)\leq C(\log\log N)$ or not. Regarding the part of \eqref{startingpointintest222} corresponding to primes $q\leq 2N/Q$, we first rewrite the integral $\int_{-KN/qQ}^{KN/qQ}e(nu/N)\hat{\phi}(u)du$ as in \eqref{asymtoticint}. Afterwards, by writing $d_{\alpha_N}^{\varpi}(n)=:1+\varpi(n)/R(N)+E(n)$, using Lemma \ref{lemramanujansmooth} to handle the contribution coming from the constant function $1$ and arguing as before to compute the contribution from $\varpi(n)$ and $E(n)$, we readily see that this part equals \begin{align*} -\sum_{\substack{2\leq q\leq 2N/Q\\ q\ \text{prime}}}\frac{1}{q}\sum_{n\leq N}d_{\alpha_N}^{\varpi}(n)c_q(n)\phi\left(\frac{n}{N}\right)+O\bigg(\frac{N\log\log N}{|R(N)|\log(2N/Q)}\bigg). \end{align*} Overall, we have found that \eqref{numeratorshape2} is \begin{align} \label{onestepbeforeend} \sum_{\substack{2N/Q<q\leq R\\ q\ \text{prime}}}\frac{1}{q}\sum_{n\leq N}d_{\alpha_N}^{\varpi}(n)c_q(n)\phi\left(\frac{n}{N}\right)+O\bigg(\frac{N\log\log N}{|R(N)|\log(2N/Q)}+\frac{Q}{R(N)^2}\bigg), \end{align} if $N$ is sufficiently large with respect to $\delta$. We now split the argument into two parts, according to whether $|R(N)|\leq (\log\log N)^{3}$ or not.
In the first case, we recall that the size of the above sum has already been estimated in Lemma \ref{lempartialsumdivtwisted}. From this, together with the upper bound \eqref{denomestimate2} for the integral $\int_{\frak m} |\mathcal{G}(\theta)|^2d\theta$ and taking into account Lemma \ref{lemdeltadep}, we may deduce the lower bound \eqref{lowerboundintomega1.5} for the integral $\int_{\frak{m}} |{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2 d\theta$ in this range of $|R(N)|$, via an application of Proposition \ref{newprop11}, if $N$ is suitably large with respect to $\delta$. On the other hand, when $|R(N)|>(\log\log N)^3$, we replace $d_{\alpha_N}^{\varpi}(n)$ inside \eqref{onestepbeforeend} with $1+\varpi(n)/R(N)+E(n)$. Afterwards, we estimate the error contribution coming from the constant function $1$ using partial summation together with the bound \eqref{defandpropS} on the partial sum of $c_q(n)$, and we bound the contribution from $E(n)$ trivially, thanks to our current assumption on $|R(N)|$ and arguing as before. Finally, the main contribution coming from $\varpi(n)/R(N)$ can be immediately handled by Lemma \ref{lempartialsumsmoothweight}. Combining the estimate we get, by proceeding in this way, for \eqref{numeratorshape2} together with the bound \eqref{denomestimate2} via an application of Proposition \ref{newprop11}, we may deduce the lower bound \eqref{lowerboundintomega1.5} also in this range of $|R(N)|$ and thus conclude the proof of Proposition \ref{proplowerboundintomega}. \section{Proof of Proposition \ref{proplowerboundvariancesmooth2}} \subsection{Large values of $Q$} We always have \begin{align*} \int_{\frak m} |{\mathcal S}_{\textbf{1}_{y-\text{smooth}}}(\theta)|^2 d\theta\geq \int_{\frak m} |{\mathcal S}_{1}(\theta)|^2 d\theta+\int_{\frak m} \bigg|\sum_{\substack{n\leq N\\ \exists p|n: p> y}} e(n\theta)\bigg|^2d\theta-2\int_{\frak m} \bigg|{\mathcal S}_{1}(\theta)\sum_{\substack{n\leq N\\ \exists p|n: p> y}} e(n\theta)\bigg| d\theta. \end{align*} By Parseval's identity and Mertens' theorem, the second integral on the right-hand side above is $\ll N\log u$, where $u:=(\log N)/(\log y)$. This, together with the upper bound for $\int_{\frak m} |{\mathcal S}_{1}(\theta)|^2 d\theta$ given in Proposition \ref{propupperbounds} a) and the Cauchy--Schwarz inequality, shows that the third integral is of size $\ll \sqrt{QN\log u}$. By using the lower bound \eqref{lowerboundint1} for the integral $\int_{\frak m} |{\mathcal S}_{1}(\theta)|^2 d\theta$, for values $DN\log u\leq Q\leq N$, with $D>0$ a large constant, we may deduce the lower bound \eqref{lowerboundsmoothint} on this range of $Q$. \subsection{Small values of $Q$} Let $\delta>0$ be small. Let $K$ be a large constant, $Q_0$ and $Q$ be real numbers satisfying \eqref{eq0} and such that $N^{1/2+\delta}\leq Q<DN\log u,$ with $D$ as in the previous subsection, and $\log N<Q_0\leq Q_0^{\max}:=N^{1/2-\delta}(\log N)^{17}/K$. Let $R:=N^{1/2-\delta/2}$. We keep these notations throughout the rest of this section. \begin{rmk} The choice of the maximal possible size of $Q_0$ only reflects the fact that, to deduce the lower bound on the variance of the $y$--smooth numbers in arithmetic progressions as in Theorem \ref{lowerboundvariancesmooth2}, we will take $Q_0=N(\log N)^{17}/Q$ in Proposition \ref{Prop5.1}. \end{rmk} \subsubsection{Case $y$ small} Let $\sqrt{N}\leq y\leq N^{1-\delta/8}$. Let $g(r)$ be the indicator of the prime numbers $r\in [Q_0^{\max},R]$. We apply Proposition \ref{newprop10} with functions $f(n)=\textbf{1}_{y-\text{smooth}}(n)$ and $g(r)$ as above.
\begin{rmk} The choice of $g$ here has been inspired by the fact that the Dirichlet convolution $\textbf{1}_{y-\text{smooth}}\ast \mu (n)$ equals $\textbf{1}_{\text{primes}\ \in (y, N]}(n)$. \end{rmk} With notations as in Proposition \ref{newprop10}, by Parseval's identity, we have \begin{align} \label{applicationCS2} \int_{\frak m} |\mathcal{G}(\theta)|^2d\theta\leq \sum_{n\leq N}\bigg(\sum_{\substack{p|n\\ Q_0^{\max}<p\leq R}}1\bigg)^2\leq \sum_{Q_0^{\max}<p\leq R}\frac{N}{p}+\sum_{\substack{Q_0^{\max}<p_1,p_2\leq R\\ p_1\neq p_2}} \frac{N}{p_1p_2}\ll_\delta N, \end{align} by expanding the square out and swapping summations. Let $W:=\min\{N/y, R\}$ and $Z:=\max\{KQ_0^{\max}, N/y\}.$ By \eqref{eq81}, with $f(n)=\textbf{1}_{y-\text{smooth}}(n)$, and employing the first part of Lemma \ref{lemsumsmoothtwisted}, we get \begin{align*} \int_{\frak m} |{\mathcal S}_{f}(\theta)\mathcal{G}(\theta) | d\theta\gg \frac{N}{\log N}\sum_{\substack{KQ_0^{\max}<q\leq W\\ q\ \text{prime}}}\frac{\log q}{q}+N\log u\sum_{\substack{Z<q\leq R\\ q\ \text{prime}}}\frac{1}{q}+O_\delta(N^{1-\delta/11})\gg_\delta N, \end{align*} by Mertens' theorem, if $N$ is large with respect to $\delta$. This concludes the proof of Proposition \ref{proplowerboundvariancesmooth2} when $\sqrt{N}\leq y\leq N^{1-\delta/8}$ via the application of Proposition \ref{newprop10} and the results just proved. \subsubsection{Case $y$ large} Let us now consider $N^{1-\delta/8}<y\leq N/C,$ where $C$ is as in Lemma \ref{lemsumsmoothtwisted}. Let $g$ be a multiplicative function supported on the squarefree numbers and given on the primes by \begin{equation*} g(p)= \left\{ \begin{array}{ll} 1 & \mbox{if $N/y<p\leq R$};\\ 0 & \mbox{otherwise}.\end{array} \right. \end{equation*} We again apply Proposition \ref{newprop10} with functions $f(n)=\textbf{1}_{y-\text{smooth}}(n)$ and $g(r)$ as above. \begin{rmk} From the work in subsubsect. 9.2.1, it is clear that we cannot make use of the same type of $g$ even when $y$ is very close to $N$. For, we would always have \begin{align*} \int_{\frak m} |\mathcal{G}(\theta)|^2d\theta\ll N\max\left\{\sum_{p\in \text{Supp}(g)\cap [KQ_0,R]}\frac{1}{p},\bigg(\sum_{p\in \text{Supp}(g)\cap [KQ_0,R]}\frac{1}{p}\bigg)^2\right\}, \end{align*} whereas by \eqref{eq81} and Lemma \ref{lemsumsmoothtwisted} we would always also have \begin{align*} \int_{\frak m} |{\mathcal S}_{f}(\theta)\mathcal{G}(\theta) | d\theta\gg N\log u \sum_{p\in \text{Supp}(g)\cap [KQ_0,R]}\frac{1}{p}, \end{align*} which are not of comparable size whenever $u$ is close to $1$. For such values of $y$, we then opted for a multiplicative function $g$ with the right logarithmic density, suggested to us by the second part of Lemma \ref{lemsumsmoothtwisted} and the following computations. \end{rmk} By Parseval's identity, we have \begin{align*} \int_{\frak m} |\mathcal{G}(\theta)|^2d\theta\leq\sum_{n\leq N}\bigg(\sum_{\substack{r|n\\ r\leq R\\ p|r\Rightarrow N/y<p\leq R}}1\bigg)^2\leq \sum_{\substack{r_1,r_2\leq R\\ p|r_1,r_2\Rightarrow N/y<p\leq R}}\sum_{\substack{n\leq N\\ [r_1,r_2]|n}}1\leq N\sum_{\substack{r_1,r_2\leq R\\ p|r_1,r_2\Rightarrow N/y<p\leq R}}\frac{1}{[r_1,r_2]}, \end{align*} by expanding the square and swapping summations. By using a manipulation employed in a work of Dress, Iwaniec and Tenenbaum (see \cite[Eq.
1]{DIT}) we can rewrite the last sum above as \begin{align*} \sum_{\substack{r_1,r_2\leq R\\ p|r_1,r_2\Rightarrow N/y<p\leq R}}\frac{1}{r_1r_2}\sum_{d|r_1,r_2}\varphi(d)&\leq \sum_{\substack{d\leq R\\ p|d\Rightarrow N/y<p\leq R}}\frac{\varphi(d)}{d^2}\bigg(\sum_{\substack{k\leq R\\ p|k\Rightarrow N/y<p\leq R}}\frac{1}{k}\bigg)^2\leq\bigg(\sum_{\substack{k\leq R\\ p|k\Rightarrow N/y<p\leq R}}\frac{1}{k}\bigg)^3. \end{align*} Since the last sum in the above displayed equation is \begin{align*} \ll \prod_{N/y<p\leq R}\bigg(1+\frac{1}{p}\bigg)\ll\exp\bigg(\sum_{N/y<p\leq R}\frac{1}{p}\bigg)\ll \frac{\log R}{\log(N/y)}\ll \frac{1}{u-1}, \end{align*} thanks to Lemma \ref{Rankinestimate0} and Mertens' theorem, we deduce that \begin{equation} \label{denominatorestimatesmooth} \int_{\frak m} |\mathcal{G}(\theta)|^2d\theta\ll \frac{N}{(u-1)^3}. \end{equation} We note that \begin{align*} \sum_{\substack{r\leq R\\ q|r\\ \mu^2(r)=1}}\frac{g(r)}{r}=\frac{g(q)}{q}\sum_{\substack{k\leq R/q\\ (q,k)=1\\ \mu^2(k)=1}}\frac{g(k)}{k}\geq \frac{g(q)}{q}\prod_{p|q}\bigg(1+\frac{g(p)}{p}\bigg)^{-1}\sum_{\substack{k\leq R/q\\ \mu^2(k)=1}}\frac{g(k)}{k}=:\frac{h(q)}{q}\sum_{\substack{k\leq R/q\\ \mu^2(k)=1}}\frac{g(k)}{k}, \end{align*} where we observe that $h(q)$ is a positive multiplicative function. Supposing $q\leq N^{1/2-3\delta/4}$, using Lemma \ref{Rankinestimate0} and Mertens' theorem, we have \begin{align*} \sum_{\substack{k\leq R/q\\ \mu^2(k)=1}}\frac{g(k)}{k}\gg \exp\bigg(\sum_{N/y<p\leq N^{\delta/4}}\frac{1}{p}\bigg)\gg \frac{\log N^{\delta/4}}{\log(N/y)}\gg_\delta \frac{1}{u-1}. \end{align*} By \eqref{eq81}, with $f(n)=\textbf{1}_{y-\text{smooth}}(n)$, and employing the second part of Lemma \ref{lemsumsmoothtwisted} after restricting the summation over $q$ to those integers $N^{1/2-5\delta/6}<q\leq N^{1/2-3\delta/4}$, we find \begin{align*} \int_{\frak m} |{\mathcal S}_{f}(\theta)\mathcal{G}(\theta) | d\theta\gg_\delta \frac{N\log u}{u-1}\sum_{N^{1/2-5\delta/6}<q\leq N^{1/2-3\delta/4}}\frac{h(q)}{q}, \end{align*} if $N$ is large enough also with respect to $\delta$. Let $\mathcal{P}=\prod_{p\leq N/y}p$. For any integer $k\geq 0$, we let \begin{align*} &S_1(k):=\bigg(\sum_{\substack{2^k N^{1/2-5\delta/6}<q\leq 2^{k+1}N^{1/2-5\delta/6}\\ (q,\mathcal{P})=1\\ \mu^2(q)=1}}1 \bigg)^2\\ &S_2(k):=\sum_{\substack{2^k N^{1/2-5\delta/6}<q\leq 2^{k+1}N^{1/2-5\delta/6}\\ (q,\mathcal{P})=1\\ \mu^2(q)=1}}\frac{q}{h(q)}. \end{align*} By dyadic subdivision, one has \begin{align*} \sum_{N^{1/2-5\delta/6}<q\leq N^{1/2-3\delta/4}}\frac{h(q)}{q}\geq\sum_{k=0}^{\frac{\delta \log N}{12\log 2}-1}\sum_{2^k N^{1/2-5\delta/6}<q\leq 2^{k+1}N^{1/2-5\delta/6}}\frac{h(q)}{q}\geq \sum_{k=0}^{\frac{\delta \log N}{12\log 2}-1}\frac{S_1(k)}{S_2(k)}, \end{align*} by the Cauchy--Schwarz inequality, where we have restated the condition on the support of $q$, implicit in $h(q)$, as $\mu^2(q)=1$ and $(q,\mathcal{P})=1$. By the fundamental lemma of sieve theory (see e.g. \cite[Ch.
I, Theorem 4.4]{T}), taking $\delta$ small enough, and Mertens' theorem, we have $$S_1(k)\gg_{\delta} \bigg(2^k N^{1/2-5\delta/6} \frac{\varphi(\mathcal{P})}{\mathcal{P}}\bigg)^2\gg \bigg(\frac{2^k N^{1/2-5\delta/6}}{\log(N/y)}\bigg)^2.$$ On the other hand, by Lemma \ref{Rankinestimate0} and Mertens' theorem, we get that $S_2(k)$ is \begin{align*} \leq \sum_{\substack{q\leq 2^{k+1}N^{1/2-5\delta/6}\\ (q,\mathcal{P})=1\\ \mu^2(q)=1}}\frac{2^{k+1} N^{1/2-5\delta/6}}{h(q)}\ll \frac{(2^{k} N^{1/2-5\delta/6})^2}{\log N}\prod_{N/y<p\leq N^{1/2-3\delta/4}}\bigg(1+\frac{1}{p}\bigg)\ll \frac{(2^{k} N^{1/2-5\delta/6})^2}{\log(N/y)}. \end{align*} Putting things together, we have proved that \begin{align*} \sum_{N^{1/2-5\delta/6}<q\leq N^{1/2-3\delta/4}}\frac{h(q)}{q}\gg_{\delta} \sum_{k=0}^{\frac{\delta \log N}{12\log 2}-1}\frac{1}{\log(N/y)}\gg_\delta \frac{\log N}{\log(N/y)}\geq \frac{1}{u-1} \end{align*} and consequently that \begin{align*} \int_{\frak m} |{\mathcal S}_{f}(\theta)\mathcal{G}(\theta) | d\theta\gg_{\delta} \frac{N\log u}{(u-1)^2}. \end{align*} This, in combination with the upper bound \eqref{denominatorestimatesmooth} for the integral $\int_{\frak m} |\mathcal{G}(\theta)|^2d\theta$ and $\log u\gg u-1$, if $\delta$ is small enough, concludes the proof of Proposition \ref{proplowerboundvariancesmooth2} via the application of Proposition \ref{newprop10}. \section{Deduction of Theorem \ref{varianced1}} By Proposition \ref{Prop5.1}, we have \begin{align} \label{partiallowerboundvarianced1} V(N,Q;d_1)\gg Q\int_{\frak m} |{\mathcal S}_{1}(\theta)|^2d\theta + O \Big( \frac{N^2}{Q_0}+\sum_{q\le Q } \frac{1}{q} \sum_{\substack {d|q \\ d>Q_0}} \frac{1}{\varphi(d)} \Big| \sum_{n\leq N}c_d(n)\Big|^2\Big), \end{align} by choosing $K$ large, where $Q$ and $Q_0$ need to satisfy \eqref{eq0}. The sum in the big-Oh error term has already been estimated in \cite[Proposition 4.3]{M}, but here we are going to produce a better bound for the function $d_1(n)$. First of all, by \eqref{mainpropc_q}, we notice that \begin{align*} \sum_{n\leq N}c_d(n)=\sum_{n\leq N}\sum_{k|(n,d)}k\mu\left(\frac{d}{k}\right)=\sum_{k|d}k\mu\left(\frac{d}{k}\right)\sum_{\substack{n\leq N\\ k|n}}1=\sum_{k|d}k\mu\left(\frac{d}{k}\right)\bigg\lfloor \frac{N}{k}\bigg\rfloor=O(\sigma(d)), \end{align*} where $\sigma(d):=\sum_{k|d}k$ and where we used the well-known identity $\sum_{k|d}\mu(k)=0$, for any $d>1$. Therefore, we need to study the following sum: \begin{align} \label{sumerrorq} \sum_{q\le Q } \frac{1}{q} \sum_{\substack {d|q \\ d>Q_0}} \frac{\sigma(d)^2}{\varphi(d)}=\sum_{\substack {Q_0<d\leq Q}} \frac{\sigma(d)^2}{\varphi(d)}\sum_{\substack{q\le Q\\ d|q}} \frac{1}{q}\ll \sum_{\substack {Q_0<d\leq Q}} \frac{\sigma(d)^2}{d\varphi(d)}\left(\log\left( \frac{Q}{d}\right)+1\right). \end{align} Now, let $$S(t):=\sum_{\substack {d\leq t}} \frac{\sigma(d)^2}{d\varphi(d)}\ \ \ (t\geq 1).$$ It is not difficult to verify that the summand satisfies the hypotheses of Lemma \ref{Rankinestimate0}, from which we easily deduce that $S(t)\ll t$, for any $t\geq 1$. By partial summation, we find that the last sum in \eqref{sumerrorq} is $\ll Q$, on our range of parameters $K, Q_0$ and $Q$ satisfying \eqref{eq0}. We employ the lower bound \eqref{lowerboundint1} to lower bound the integral in \eqref{partiallowerboundvarianced1} and choose $Q_0=CN^2/Q^2$, with $C>0$ a large constant, to get the claim for any $Q$ in the range $C^{1/3}K^{2/3}N^{2/3}\leq Q\leq CN/\log N$ (remember that $Q_0$ has to satisfy \eqref{eq0}).
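Incidentally, the exact closed form obtained above for $\sum_{n\leq N}c_d(n)$ makes the bound $O(\sigma(d))$ easy to test numerically. The following Python sketch (a hypothetical illustration, not needed for the argument; the helper names are ours) computes the ratio $|\sum_{n\leq N}c_d(n)|/\sigma(d)$ for small moduli and confirms that it never exceeds $1$:
\begin{verbatim}
# Numerical check (illustration only) of
#   sum_{n<=N} c_d(n) = sum_{k|d} k * mu(d/k) * floor(N/k) = O(sigma(d)).
def mobius(n):
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def partial_sum_c(d, N):
    # exact value of sum_{n<=N} c_d(n) via the closed form above
    return sum(k * mobius(d // k) * (N // k)
               for k in range(1, d + 1) if d % k == 0)

def sigma(d):
    return sum(k for k in range(1, d + 1) if d % k == 0)

N = 10**4
ratios = [abs(partial_sum_c(d, N)) / sigma(d) for d in range(2, 300)]
print(max(ratios))  # stays below 1, consistent with the O(sigma(d)) bound
\end{verbatim}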
Returning to the deduction: repeating the same argument with $Q_0=N^2(\log N)/Q^2$, we instead get the claim for any $Q$ in the range $K^{2/3}N^{2/3}(\log N)^{1/3}\leq Q\leq N$. Together, the two ranges give Theorem \ref{varianced1}, whenever $N$ is sufficiently large. \section{Deduction of Theorem \ref{thmvariancealpha1}} In this final section we prove the lower bound for the variance of $d_{\alpha_N}^{\varpi}(n)$ in arithmetic progressions as presented in Theorem \ref{thmvariancealpha1}. The proofs of Theorems \ref{varianceomega} and \ref{lowerboundvariancesmooth2} are similar, so they will be omitted. By plugging the lower bound \eqref{lowerboundintomega1.5} for the integral $\int_{\frak{m}} |{\mathcal S}_{d_{\alpha_N}^{\varpi}}(\theta)|^2 d\theta$ into the lower bound expression \eqref{estimateprop1} for the variance of $f(n)=d_{\alpha_N}^{\varpi}(n)$ in arithmetic progressions, and choosing $K$ large enough, we find \begin{align} \label{variancelowerboundfinal} V(N,Q; d_{\alpha_N}^{\varpi})&\gg_{\delta} \frac{QN}{R(N)^2}\log\bigg(\frac{\log N}{\log(2N/Q)}\bigg)+Q^2+O\bigg(\frac{N^2(\log N)^{14}}{Q_0}\bigg), \end{align} where to estimate the error term we used \cite[Proposition 4.3]{M} with $\kappa=2$, say, and Corollary \ref{lemmapartialsumdiv}. Taking $Q_0:=N R(N)^2(\log N)^{15}/Q$, which satisfies the hypotheses of Proposition \ref{proplowerboundintomega}, we get the claim, provided $N$ is large enough with respect to $\delta$. \section*{Acknowledgements} I am deeply indebted to my supervisor Adam J. Harper for discussions and insightful comments that notably improved the results presented here and simplified the exposition in this paper.
1,477,468,750,233
arxiv
\section{Introduction} This is a follow-up paper to \cite{dirr_ve}. There, we studied the $C$-numerical range $W_C (T)$ of $T$ generalized to trace-class operators $C$ and bounded operators $T$ acting on some infinite-dimensional separable complex Hilbert space $\mathcal H$, i.e. $$ W_C (T) = \lbrace \operatorname{tr}(CU^\dagger TU)\,|\,U\in\mathcal B(\mathcal H)\text{ unitary}\rbrace\,, $$ where $\mathcal B(\mathcal H)$ denotes the set of all bounded linear operators on $\mathcal H$. In this setting, however, symmetry in $C$ and $T$ compared to the matrix case is lost in the sense that by construction the mapping $(C,T) \mapsto W_C (T)$ is no longer defined on a symmetric domain. Probably the most natural symmetric domain where $\operatorname{tr}(CT)$ is still well-defined is the set $\mathcal B^2(\mathcal H)$ of all Hilbert-Schmidt operators. Thus a natural question to ask is whether the known results about convexity, star-shapedness and the $C$-spectrum carry over to Hilbert-Schmidt operators. While analyzing this problem, it rapidly becomes evident that one can easily go one step further by considering operators $C$ and $T$ which belong to conjugate Schatten-classes, as the set $\mathcal B^p(\mathcal H)$ of all $p$-Schatten-class operators constitutes a two-sided ideal in the $C^*$-algebra $\mathcal B(\mathcal H)$ for all $p\in[1,\infty]$. Starting from the symmetry requirement, our line of thought will arrive in a quite natural way at the outlined Schatten-class setting of the $C$-numerical range of $T$. The paper is organized as follows: After a preliminary section collecting notation and basic results on Schatten-class operators, we present our main results in Section \ref{sec:results}. We show that the closure of $W_C (T)$ for conjugate Schatten-class operators $C$ and $T$ is always star-shaped with respect to the origin. We reformulate this result in terms of the image of the unitary orbit of $T\in\mathcal B^q(\mathcal H)$ under any continuous linear functional $L\in(\mathcal B^q(\mathcal H))'$. Moreover, we prove that the closure of $W_C(T)$ is convex if either $C$ or $T$ is normal with collinear eigenvalues. Finally, we introduce the $C$-spectrum of $T$ and derive some inclusion and convexity results, which are well known for matrices, under the assumption that both Schatten-class operators $C$ and $T$ are normal. \section{Notation and Preliminaries}\label{sec:prelim} Unless stated otherwise, here and henceforth $\mathcal X$ and $\mathcal Y$ are arbitrary infinite-dimensional complex Hilbert spaces while $\mathcal H$ and $\mathcal G$ are reserved for infinite-dimensional \textit{separable} complex Hilbert spaces (for short i.s.c. Hilbert spaces). Moreover, $\mathcal B(\mathcal X,\mathcal Y)$, $\mathcal K(\mathcal X,\mathcal Y)$ and $\mathcal B^p(\mathcal X,\mathcal Y)$ denote the set of all bounded, compact and $p$-th Schatten-class operators between $\mathcal X$ and $\mathcal Y$, respectively. Scalar products are conjugate linear in the first argument and linear in the second one. For an arbitrary subset $S \subset \mathbb{C} $, the notations $\overline{S}$ and $\operatorname{conv}(S)$ stand for its closure and convex hull, respectively. Finally, given $p,q\in[1,\infty]$, we say $p$ and $q$ are conjugate if $\frac1p+\frac1q=1$. \subsection{Infinite-dimensional Hilbert Spaces and the Trace Class} For a comprehensive introduction to infinite-dimensional Hilbert spaces and Schatten-class operators, we refer to, e.g., \cite{berberian1976} and \cite{MeiseVogt}.
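As a purely illustrative aside before recalling these facts: in finite dimensions, the set $W_C(T)$ from the introduction can be sampled directly by drawing Haar-distributed unitaries, e.g., via the QR decomposition of complex Ginibre matrices. The following hypothetical Python sketch (with illustrative parameters of our choosing; nothing here is used in the proofs) collects such sample points; for matrices, the sampled set is known to be star-shaped with respect to $\operatorname{tr}(C)\operatorname{tr}(T)/n$, cf.~\cite{article_cheungtsing}.
\begin{verbatim}
# Illustrative sampling of the C-numerical range W_C(T) in finite
# dimensions (hypothetical sketch, not part of any proof).
import numpy as np

rng = np.random.default_rng(0)
n = 8
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def haar_unitary(n):
    # QR of a complex Ginibre matrix; correcting the phases of the
    # diagonal of R yields a Haar-distributed unitary
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

points = np.array([np.trace(C @ U.conj().T @ T @ U)
                   for U in (haar_unitary(n) for _ in range(5000))])
star_center = np.trace(C) * np.trace(T) / n
# 'points' samples W_C(T); plotting them together with 'star_center'
# visualizes the star-shapedness result for matrices.
\end{verbatim}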
Here, we recall only some basic results which will be used frequently throughout this paper. \medskip Let $(e_i)_{i\in I}$ be any orthonormal basis of $\mathcal X$ and let $x \in \mathcal X$. Then one has \textit{Parseval's identity} \begin{equation*} \sum_{i\in I}|\langle e_i,x\rangle|^2 = \Vert x\Vert^2 \end{equation*} which reduces to \textit{Bessel's inequality} \begin{equation*} \sum_{j\in J}|\langle f_j,x\rangle|^2 \leq \Vert x\Vert^2 \end{equation*} if $(f_j)_{j\in J}$ is any orthonormal system in $\mathcal X$ instead of an orthonormal basis. \begin{lemma}[Schmidt decomposition]\label{thm_1} For each $C \in \mathcal K(\mathcal X,\mathcal Y)$, there exists a decreasing null sequence $(s_n(C))_{n\in\mathbb N}$ in $[0,\infty)$ as well as orthonormal systems $(f_n)_{n\in\mathbb N}$ in $\mathcal X$ and $(g_n)_{n\in\mathbb N}$ in $\mathcal Y$ such that \begin{align*} C = \sum_{n=1}^\infty s_n(C)\langle f_n,\cdot\rangle g_n\,, \end{align*} where the series converges in the operator norm. \end{lemma} As the \emph{singular numbers} $(s_n(C))_{n\in\mathbb N}$ in Lemma \ref{thm_1} are uniquely determined by $C$, the \emph{$p$-th Schatten-class} $\mathcal B^p(\mathcal X,\mathcal Y)$ is defined by \begin{align*} \mathcal B^p(\mathcal X,\mathcal Y) := \Big\lbrace C \in\mathcal K(\mathcal X,\mathcal Y)\,\Big|\,\sum\nolimits_{n=1}^\infty s_n(C)^p<\infty\Big\rbrace \end{align*} for $p\in [1,\infty)$. The Schatten-$p$-norm \begin{align*} \nu_p(C) := \Big(\sum_{n=1}^\infty s_n(C)^p\Big)^{1/p} \end{align*} turns $\mathcal B^p(\mathcal X,\mathcal Y)$ into a Banach space. Moreover, for $p=\infty$, we identify $\mathcal B^\infty (\mathcal X,\mathcal Y)$ with the set of all compact operators $\mathcal K(\mathcal X,\mathcal Y)$ equipped with the norm \begin{align*} \nu_\infty(C) := \sup_{n \in \mathbb{N}}s_n(C) = s_1(C)\,. \end{align*} Note that $\nu_\infty(C)$ coincides with the ordinary operator norm $\|C\|$. Hence $\mathcal B^\infty (\mathcal X,\mathcal Y)$ constitutes a closed subspace of $\mathcal B (\mathcal X,\mathcal Y)$ and thus a Banach space, too. The following results can be found in \cite[Coro.~XI.9.4 \& Lemma XI.9.9]{dunford1963linear}. \begin{lemma}\label{lemma_10} \begin{itemize} \item[(a)] Let $p\in [1,\infty]$. Then, for all $S,T\in\mathcal B(\mathcal X)$ and $C\in\mathcal B^p(\mathcal X)$, one has \begin{align*} \nu_p(SCT)\leq\Vert S\Vert\nu_p(C)\Vert T\Vert\,. \end{align*} \item[(b)] Let $1\leq p\leq q\leq\infty$. Then $\mathcal B^p(\mathcal X,\mathcal Y)\subseteq\mathcal B^q(\mathcal X,\mathcal Y)$ and $\nu_p( C)\geq\nu_q(C)$ for all $C\in\mathcal B^p(\mathcal X,\mathcal Y)$. \end{itemize} \end{lemma} \noindent Note that due to (a), all Schatten-classes $\mathcal B^p(\mathcal X)$ constitute -- just like the compact operators -- a two-sided ideal in the $C^*$-algebra of all bounded operators $\mathcal B(\mathcal X)$.\medskip \begin{lemma}\label{lemma_2b} Let $T\in\mathcal K(\mathcal X)$ and $(e_k)_{k\in\mathbb N}$ be any orthonormal system in $\mathcal X$. Then\vspace{2pt} \begin{itemize} \item[(a)] $ \displaystyle \sum\nolimits_{k=1}^n|\langle e_k,Te_k\rangle| \leq\sum\nolimits_{k=1}^n s_k(T) $ for all $n\in\mathbb N$ and\vspace{6pt} \item[(b)] $\lim_{k \to \infty} \langle e_k,Te_k\rangle = 0\,.$ \end{itemize} \end{lemma} \begin{proof} (a) Consider a Schmidt decomposition $\sum_{m=1}^\infty s_m(T)\langle f_m,\cdot\rangle g_m$ of $T$.
Then \begin{align*} \sum_{k=1}^n|\langle e_k,Te_k\rangle|\leq \sum_{m=1}^\infty s_m(T)\Big( \underbrace{\sum_{k=1}^n |\langle e_k,f_m\rangle\langle g_m,e_k\rangle|}_{=:\lambda_m} \Big)\,. \end{align*} Note that by Cauchy-Schwarz and Bessel's inequality one has \begin{align*} \lambda_m\leq \Big(\sum_{k=1}^n |\langle e_k,f_m\rangle|^2\Big)^{1/2}\Big(\sum_{k=1}^n |\langle g_m,e_k\rangle|^2\Big)^{1/2}\leq 1 \end{align*} for all $m\in\mathbb N$. On the other hand, Cauchy-Schwarz and Bessel's inequality also imply \begin{align*} \sum_{m=1}^\infty \lambda_m&\leq \sum_{k=1}^n \Big(\sum_{m=1}^\infty |\langle e_k,f_m\rangle|^2\Big)^{1/2}\Big(\sum_{m=1}^\infty |\langle g_m,e_k\rangle|^2\Big)^{1/2}\leq \sum_{k=1}^n \|e_k \|^2=n\,. \end{align*} Hence an upper bound of $\sum_{m=1}^\infty s_m(T)\lambda_m$ is given by choosing $\lambda_1=\ldots=\lambda_n=1$ and $\lambda_j=0$ whenever $j>n$, since $s_1(T)\geq s_2(T)\geq\ldots$ by construction. This shows the desired inequality. A proof of (b) can be found, e.g., in \cite[Lemma 16.17]{MeiseVogt}. \end{proof} Now for any $C\in\mathcal B^1(\mathcal X)$, the trace of $C$ is defined via \begin{align}\label{eq:trace} \operatorname{tr}(C):=\sum\nolimits_{i\in I}\langle f_i,Cf_i\rangle\,, \end{align} where $(f_i)_{i\in I}$ can be any orthonormal basis of $\mathcal X$. The trace is well-defined as one can show that the right-hand side of \eqref{eq:trace} does not depend on the choice of $(f_i)_{i\in I}$. Important properties are the following, cf. \cite[Lemma XI.9.14]{dunford1963linear}. \begin{lemma}\label{lemma_nu_hoelder} Let $C\in\mathcal B^p(\mathcal X)$ and $T\in\mathcal B^q(\mathcal X)$ with $p,q\in [1,\infty]$ conjugate. Then one has $CT,TC\in\mathcal B^1(\mathcal X)$ with \begin{align} \operatorname{tr}(CT)&=\operatorname{tr}(TC)\nonumber\\ |\operatorname{tr}(CT)|&\leq \nu_p(C)\nu_q(T)\,.\label{eq:4} \end{align} \end{lemma} \noindent Note that the space of so-called Hilbert-Schmidt operators $\mathcal B^2(\mathcal X)$ turns into a Hilbert space under the scalar product $\langle C,T\rangle:=\operatorname{tr}(C^\dagger T)$ \cite[Prop.~16.22]{MeiseVogt}. \subsection{Set Convergence} In order to transfer results about convexity and star-shapedness of the $C$-numerical range of matrices to Schatten-class operators, we need some basic facts about set convergence. We will use the Hausdorff metric on compact subsets (of $\mathbb C$) and the associated notion of convergence, see, e.g., \cite{nadler1978}.\medskip The distance between $z \in \mathbb C$ and any non-empty compact subset $A \subseteq \mathbb C$ is defined by \begin{align}\label{eq.Hausdorff-1} d(z,A) := \min_{w \in A} d(z,w) = \min_{w \in A} |z-w|\,. \end{align} Based on \eqref{eq.Hausdorff-1}, the \emph{Hausdorff metric} $\Delta$ on the set of all non-empty compact subsets of $\mathbb C$ is given by \begin{align*} \Delta(A,B) := \max\Big\lbrace \max_{z \in A}d(z,B),\max_{z \in B}d(z,A) \Big\rbrace. \end{align*} \noindent The following result is proven in \cite[Lemma 2.5]{dirr_ve}. \begin{lemma}\label{lemma_5} Let $(A_n)_{n\in\mathbb N}$ and $(B_n)_{n\in\mathbb N}$ be bounded sequences of non-empty compact subsets of $\mathbb C$ such that $\lim_{n\to\infty}A_n = A$, $\lim_{n\to\infty}B_n = B$ and let $(z_n)_{n\in\mathbb N}$ be any sequence of complex numbers with $\lim_{n\to\infty}z_n = z$. Then the following statements hold.
\begin{itemize} \item[(a)] If $A_n\subseteq B_n$ for all $n\in\mathbb N$, then $A \subseteq B$.\vspace{4pt} \item[(b)] The sequence $(\operatorname{conv}(A_n))_{n\in\mathbb N}$ of compact subsets converges to $\operatorname{conv}(A)$, i.e. \begin{align*} \lim_{n\to\infty}\operatorname{conv}(A_n) = \operatorname{conv}(A)\,. \end{align*} \item[(c)] If $A_n$ is convex for all $n\in\mathbb N$, then $A$ is convex.\vspace{4pt} \item[(d)] If $A_n$ is star-shaped with respect to $z_n$ for all $n\in\mathbb N$, then $A$ is star-shaped with respect to $z$. \end{itemize} \end{lemma} \section{Results}\label{sec:results} Let $\mathcal H$ denote an arbitrary infinite-dimensional separable complex (i.s.c.) Hilbert space. Our goal will be to carry over the characterizations of the geometry of the $C$-numerical range $W_C (T)$, like star-shapedness or convexity, from the trace class \cite{dirr_ve} to conjugate Schatten-class operators on $\mathcal H$. \begin{definition}\label{defi_1} Let $p,q\in [1,\infty]$ be conjugate. Then for $C\in\mathcal B^p(\mathcal H)$ and $T\in\mathcal B^q(\mathcal H)$, we define the \emph{$C$-numerical range} of $T$ to be \begin{align*} W_C (T):=\lbrace \operatorname{tr}(CU^\dagger TU)\,|\,U\in\mathcal B(\mathcal H)\text{ unitary}\rbrace\,. \end{align*} \end{definition} \noindent Note that the trace $\operatorname{tr}(CU^\dagger TU)$ is well-defined due to Lemma \ref{lemma_10} and \ref{lemma_nu_hoelder}. \medskip Moreover, throughout this paper we need some mechanism to associate bounded operators on $\mathcal H$ with matrices. In doing so, let $(e_n)_{n\in\mathbb N} $ be some orthonormal basis of $\mathcal H$ and let $(\hat e_i)_{i=1}^n$ be the standard basis of $\mathbb C^n$. For any $n\in\mathbb N$ we define \begin{align*} \Gamma_n:\mathbb C^n\to \mathcal H,\qquad \hat{e_i}\mapsto \Gamma_n(\hat e_i):=e_i \end{align*} and its linear extension to all of $\mathbb C^n$. Next, let \begin{align}\label{cut_out_operator} [\;\cdot\;]_n:\mathcal B(\mathcal H)\to\mathbb C^{n\times n},\qquad A\mapsto [A]_n:=\Gamma_n^\dagger A\Gamma_n \end{align} be the operator which ``cuts out'' the upper $n\times n$ block of (the matrix representation of) $A$ with respect to $(e_n)_{n\in\mathbb N} $. \subsection{Star-Shapedness} Our strategy is to transfer well-known properties of the finite-dimensional $[C]_n$-numerical range of $[T]_n$ to $W_C(T)$ via the convergence results of Lemma \ref{lemma_5}. \begin{lemma}\label{lemma_proj_strong_conv} Let $p\in[1,\infty]$, $C \in \mathcal B^p(\mathcal H)$ and $(S_n)_{n\in\mathbb N}$ be a sequence in $\mathcal B(\mathcal H)$ which converges strongly to $S \in\mathcal B(\mathcal H)$. Then one has $S_n C \to SC$, $CS_n^\dagger \to CS^\dagger$, and $S_nCS_n^\dagger \to SCS^\dagger$ for $n \to \infty$ with respect to the norm $\nu_p$. \end{lemma} \begin{proof} The cases $p=1$ and $p=\infty$ are proven in \cite[Lemma 3.2]{dirr_ve}. As the proof for $p\in(1,\infty)$ is essentially the same, we sketch only the major differences. First, choose $K\in\mathbb N$ such that \begin{align*} \sum_{k=K+1}^\infty s_k(C)^p<\frac{\varepsilon^p}{(3\kappa)^p}\,, \end{align*} where $\kappa > 0$ satisfies $\|S\|\leq\kappa$ and $\|S_n\|\leq\kappa$ for all $n\in\mathbb N$. The existence of the constant $\kappa > 0$ is guaranteed by the uniform boundedness principle. Then decompose $C=\sum_{k=1}^\infty s_k(C)\langle e_k,\cdot\rangle f_k$ into $C = C_1 + C_2$ with $C_1 := \sum_{k=1}^K s_k(C)\langle e_k,\cdot\rangle f_k$ finite-rank. 
By Lemma \ref{lemma_10} one has \begin{align*} \nu_p(SC - S_nC)\leq \nu_p(SC_1-S_n C_1)+\Vert S\Vert\nu_p(C_2)+\Vert S_n\Vert\nu_p(C_2) <\nu_p(SC_1-S_nC_1 )+\frac{2\varepsilon}{3}\,. \end{align*} Thus, what remains is to choose $N\in\mathbb N$ such that $\nu_p(SC_1-S_nC_1)<\varepsilon/3$ for all $n\geq N$. Starting from \begin{align*} \nu_p(SC_1-S_n C_1) \leq \sum_{k=1}^K s_k(C)\nu_p\big(\langle e_k,\cdot\rangle (Sf_k-S_nf_k)\big)=\sum_{k=1}^K s_k(C) \Vert Sf_k-S_nf_k \Vert\,, \end{align*} the strong convergence of $(S_n)_{n\in\mathbb N}$ yields $N \in \mathbb N$ such that \begin{align*} \Vert Sf_k - S_nf_k \Vert<\frac{\varepsilon}{3\sum_{k=1}^Ks_k(C)} \end{align*} for $k = 1, \dots, K$ and all $n\geq N$. This shows $\nu_p(SC - S_nC)\to 0$ as $n\to\infty$. All other assertions are an immediate consequence of $\nu_p(A) = \nu_p(A^\dagger)$ for all $A \in \mathcal B^p(\mathcal H)$ and \begin{align*} \nu_p(SCS^\dagger - S_nCS_n^\dagger) & \leq \Vert S\Vert\nu_p(CS^\dagger-CS_n^\dagger)+\nu_p(SC-S_nC)\Vert S_n\Vert \\ & \leq \kappa \big(\nu_p(CS^\dagger-CS_n^\dagger) + \nu_p(SC-S_nC)\big)\,.\qedhere \end{align*} \end{proof} \begin{lemma}\label{strong_tr_conv} Let $C\in\mathcal B^p(\mathcal H)$ and $T\in\mathcal B^q(\mathcal H)$ with $p,q\in [1,\infty]$ conjugate and let $(S_n)_{n\in\mathbb N}$ be a sequence in $\mathcal B(\mathcal H)$ which converges strongly to $S\in\mathcal B(\mathcal H)$. Then \begin{align*} \lim_{n\to\infty}\operatorname{tr}(CS_n^\dagger TS_n)=\operatorname{tr}(CS^\dagger TS)\,. \end{align*} Furthermore, the sequence of linear functionals $(\operatorname{tr}(CS_n^\dagger(\cdot)S_n))_{n\in\mathbb N}$ converges uniformly to $\operatorname{tr}(CS^\dagger (\cdot)S)$ on $\nu_q$-bounded subsets of $\mathcal B^q(\mathcal H)$, while the sequence $(\operatorname{tr}((\cdot)S_n^\dagger TS_n))_{n\in\mathbb N}$ converges uniformly to $\operatorname{tr}((\cdot)S^\dagger TS)$ on $\nu_p$-bounded subsets of $\mathcal B^p(\mathcal H)$. \end{lemma} \begin{proof} The statement is a simple consequence of (\ref{eq:4}) and Lemma \ref{lemma_proj_strong_conv} as \begin{align*} |\operatorname{tr}(CS^\dagger TS) &-\operatorname{tr}(CS_n^\dagger TS_n)| = |\operatorname{tr}((SCS^\dagger-S_nCS_n^\dagger)T)|\\ &\leq \nu_p(SCS^\dagger-S_nCS_n^\dagger)\nu_q(T) \to 0 \quad\text{as } n\to\infty\,.\qedhere \end{align*} \end{proof} \begin{theorem}\label{lemma_2} Let $C\in\mathcal B^p(\mathcal H)$, $T\in\mathcal B^q(\mathcal H)$ with $p,q\in [1,\infty]$ conjugate be given. Furthermore, let $(e_n)_{n\in\mathbb N},(g_n)_{n\in\mathbb N}$ be arbitrary orthonormal bases of $\mathcal H$. Then \begin{align*} \lim_{n\to\infty}W_{[C]^e_{2n}}([T]^g_{2n})=\overline{W_C(T)} \end{align*} where $[\,\cdot\,]_k^e$ and $[\,\cdot\,]_k^g$ are the maps given by \eqref{cut_out_operator} with respect to $(e_n)_{n\in\mathbb N}$ and $(g_n)_{n\in\mathbb N}$, respectively. \end{theorem} \begin{proof} The proof for $p=1$ and $q=\infty$ (or vice versa) given in \cite[Thm.~3.1]{dirr_ve} can be adjusted to the case $p,q\in(1,\infty)$ by minimal modifications. \end{proof} Before proceeding with the star-shapedness of $\overline{W_C(T)}$, we need the following auxiliary result to characterize the star-center later on. \begin{lemma}\label{lemma_0_conv} Let $C\in\mathcal B^p(\mathcal H)$ with $p\in( 1,\infty]$ and let $q\in[1,\infty)$ such that $p,q$ are conjugate. Furthermore, let $(e_n)_{n\in\mathbb N}$ be any orthonormal system in $\mathcal H$. Then \begin{align*} \lim_{n\to\infty}\frac{1}{n^{1/q}}\sum_{k=1}^n\langle e_k,Ce_k\rangle=0\,.
\end{align*} \end{lemma} \begin{proof} First, let $p=\infty$, so $q=1$. As $C$ is compact, by Lemma \ref{lemma_2b} (b), one has $\lim_{k\to\infty}\langle e_k,Ce_k\rangle=0$ and thus the sequence of arithmetic means converges to zero as well. Next, let $p\in(1,\infty)$ and $\varepsilon>0$. Moreover, we assume w.l.o.g.~$C\neq 0$ so $s_1(C) = \Vert C\Vert\neq 0$. As $C\in \mathcal B^p(\mathcal H)$, one can choose $N_1\in\mathbb N$ such that \begin{align*} \sum_{k=N_1+1}^\infty s_k(C)^p<\frac{\varepsilon^p}{2^p} \end{align*} and moreover $N_2\in\mathbb N$ such that \begin{align*} \frac{1}{n^{1/q}}<\frac{\varepsilon}{2\sum_{k=1}^{N_1}s_k(C)} \end{align*} for all $n\geq N_2$. Then, for any $n\geq N:=\max\lbrace N_1+1,N_2\rbrace$, by Lemma \ref{lemma_2b} and H\"older's inequality we obtain \begin{align*} \Big|\frac{1}{n^{1/q}}\sum_{k=1}^n\langle e_k,Ce_k\rangle\Big|&\leq \frac{1}{n^{1/q}}\sum_{k=1}^{N_1}s_k(C)+\frac{1}{n^{1/q}}\sum_{k=N_1+1}^{n}s_k(C)\\ &\leq \frac{1}{n^{1/q}}\sum_{k=1}^{N_1}s_k(C)+\Big(\sum_{k=N_1+1}^{n} s_k(C)^p \Big)^{1/p}\Big(\sum_{k=N_1+1}^{n} \frac{1}{n} \Big)^{1/q}\\ &<\frac{\varepsilon}{2}+\Big(\sum_{k=N_1+1}^{\infty} s_k(C)^p \Big)^{1/p}\Big(\underbrace{ \frac{n-N_1}{n} }_{\leq 1}\Big)^{1/q}\leq\varepsilon\,. \end{align*} This concludes the proof. \end{proof} Now, our main result of this section reads as follows. \begin{theorem}\label{theorem_1} Let $C\in\mathcal B^p(\mathcal H)$ and $T\in\mathcal B^q(\mathcal H)$ with $p,q\in [1,\infty]$ conjugate. Then $\overline{W_C(T)}$ is star-shaped with respect to the origin. \end{theorem} \begin{proof} Let $(e_n)_{n\in\mathbb N},(g_n)_{n\in\mathbb N}$ be arbitrary orthonormal bases of $\mathcal H$. For $n\in\mathbb N$, it is readily verified that \begin{align*} \frac{\operatorname{tr}([C]^e_{2n})\operatorname{tr}([T]^g_{2n})}{2n} &=\frac{\operatorname{tr}([C]^e_{2n})}{(2n)^{1/q}}\frac{\operatorname{tr}([T]^g_{2n})}{(2n)^{1/p}}\\ &=\Big( \frac{1}{(2n)^{1/q}}\sum_{j=1}^{2n} \langle e_j,Ce_j\rangle \Big)\Big( \frac{1}{(2n)^{1/p}}\sum_{j=1}^{2n} \langle g_j,Tg_j\rangle \Big)\,. \end{align*} Both factors converge and, by Lemma \ref{lemma_0_conv}, at least one of them goes to $0$ as $n\to\infty$. Moreover, $W_{[C]^e_{2n}}([T]^g_{2n})$ is star-shaped with respect to $(\operatorname{tr}([C]^e_{2n})\operatorname{tr}([T]^g_{2n})/(2n)$ for all $n\in\mathbb N$, cf.~\cite[Thm.~4]{article_cheungtsing}. Thus Lemma \ref{lemma_5} (d) and Theorem \ref{lemma_2} imply that $\overline{W_{C}(T)}$ is star-shaped with respect to $0 \in \mathbb{C}$, i.e. with respect to the origin. \end{proof} \begin{remark} The limit case $p=1$ and $q=\infty$ returns the known star-shapedness result in the case of trace-class \cite[Thm.~3.3]{dirr_ve} because the essential numerical range satisfies $W_e(T)=\lbrace0\rbrace$ if (and only if) $T$ is compact \cite[Thm.~34.2]{bonsallduncan}. \end{remark} In analogy to the essential numerical range of a bounded linear operator as characterized in, e.g., \cite[Thm.~34.9]{bonsallduncan}, we introduce the \emph{essential range} of a bounded linear functional $L\in(\mathcal B^q(\mathcal H))'$ via \begin{align*} W_e(L) := \Big\lbrace \lim_{n \to \infty} L(\langle f_n,\cdot \rangle f_n)\,\Big|\, (f_n)_{n \in \mathbb N}\;\text{ ONS of }\mathcal H\Big\rbrace \subset \mathbb C\,. 
\end{align*} By the canonical isomorphism $A \mapsto \operatorname{tr}(A\,\cdot\,)$ one has $(\mathcal B^1(\mathcal H))' \simeq \mathcal B(\mathcal H)$ and $(\mathcal B^q(\mathcal H))' \simeq \mathcal B^p(\mathcal H)$ for $q \in (1,\infty]$ with $p,q$ conjugate, refer to \cite[Thm.~V.15]{Schatten} and \cite[Prop.~16.26]{MeiseVogt}. Thus for $q \in [1,\infty]$, to each $L\in(\mathcal B^q(\mathcal H))'$ we can associate a unique bounded linear operator $C \in \mathcal B(\mathcal H)$ if $q = 1$ and $C \in \mathcal B^p(\mathcal H)$ if $q \in (1,\infty]$, such that \begin{equation}\label{eq:ess_range} W_e(L) = W_e(C)\,. \end{equation} This shows that $W_e(L)$ is non-empty, compact and convex and, in particular, $W_e(L)= \{0\}$ for $q \in (1,\infty]$, cf.~\cite[Thm.~34.2]{bonsallduncan}. With the above terminology one has the following straightforward conclusion. \begin{corollary} \begin{itemize} \item[(a)] Let $q\in (1,\infty]$ and $T\in\mathcal B^q(\mathcal H)$ be given. The closure of the image of the unitary orbit of $T$ under any bounded linear functional $L\in(\mathcal B^q(\mathcal H))'$, i.e. the closure of $$ L(\mathcal{O}_U(T)):=\lbrace L(U^\dagger TU)\,|\,U\in\mathcal B(\mathcal H)\text{ unitary}\rbrace\,, $$ is star-shaped with respect to the origin.\vspace{4pt} \item[(b)] Let $q = 1$ and $T\in\mathcal B^1(\mathcal H)$ be given. The closure of the image of the unitary orbit of $T$ under any bounded linear functional $L\in(\mathcal B^1(\mathcal H))'$ is star-shaped with respect to $\operatorname{tr}(T)W_e(L)$, i.e. all $z \in \operatorname{tr}(T)W_e(L)$ are possible star centers. \end{itemize} \end{corollary} \begin{proof} (a) Let $q\in(1,\infty]$ with conjugate $p\in [1,\infty)$. Then, as seen above, $\mathcal B^p(\mathcal H) \simeq (\mathcal B^q(\mathcal H))'$ by means of the canonical map $A\mapsto \operatorname{tr}(A\,\cdot\,)$. Now, $L(\mathcal{O}_U(T))=W_C(T)$ for some unique $C\in \mathcal B^p(\mathcal H)$ and thus, by Theorem \ref{theorem_1}, the closure of this set is star-shaped with respect to $0 \in \mathbb C$. \medskip \noindent (b) For $q=1$, again as seen above one has $(\mathcal B^1(\mathcal H))' \simeq \mathcal B(\mathcal H)$ and thus $L=\operatorname{tr}(B\,\cdot\,)$ for some $B\in\mathcal B(\mathcal H)$. Hence, $L(\mathcal{O}_U(T))$ equals $W_T(B)$, cf.~\cite[Defi.~3.1]{dirr_ve}, and therefore is star-shaped with respect to $\operatorname{tr}(T)W_e(B)=\operatorname{tr}(T)W_e(L)$, refer to \eqref{eq:ess_range} and \cite[Thm.~3.3]{dirr_ve}. \end{proof} \subsection{Convexity and the $C$-Spectrum}\label{sect_C_spectrum} Convexity is definitely one of the most beautiful properties in the context of numerical ranges. A useful tool in order to characterize convexity of the $C$-numerical range is the $C$-spectrum, which was first introduced for matrices in \cite{article_marcus} and was generalized to infinite dimensions (more precisely, to trace-class operators) in \cite{dirr_ve}. Consequently, the next step is to transfer this concept and some of the known results to the Schatten-class setting.\medskip In order to define the $C$-spectrum, we first have to fix the term \emph{eigenvalue sequence} of a compact operator $T \in \mathcal K(\mathcal H)$. 
In general, it is obtained by arranging the (necessarily countably many) non-zero eigenvalues in decreasing order with respect to their absolute values, and each eigenvalue is repeated as many times as its algebraic multiplicity\footnote{By \cite[Prop.~15.12]{MeiseVogt}, every non-zero element $\lambda \in \sigma(T)$ of the spectrum of $T$ is an eigenvalue of $T$ and has a well-defined finite algebraic multiplicity $\nu_a(\lambda)$, i.e., $\nu_a(\lambda) := \dim \ker (T - \lambda I)^{n_0}$, where $n_0 \in \mathbb N$ is the smallest natural number $n \in \mathbb N$ such that $\ker (T - \lambda I)^n = \ker (T - \lambda I)^{n+1}$. \label{footnote_alg_mult}}. If only finitely many non-vanishing eigenvalues exist, the sequence is filled up with zeros, see \cite[Ch.~15]{MeiseVogt}. For our purposes, we have to pass to a slightly \emph{modified eigenvalue sequence} as follows: \begin{itemize} \item If the range of $T$ is infinite-dimensional and the kernel of $T$ is finite-dimensional, then put $\operatorname{dim}(\operatorname{ker}T)$ zeros at the beginning of the eigenvalue sequence of $T$. \vspace{4pt} \item If the range and the kernel of $T$ are infinite-dimensional, mix infinitely many zeros into the eigenvalue sequence of $T$.\footnote{Since in Definition \ref{defi_3} arbitrary permutations will be applied to the modified eigenvalue sequence, we do not need to specify this mixing procedure further, cf. also \cite[Lemma 3.6]{dirr_ve}.}\vspace{4pt} \item If the range of $T$ is finite-dimensional, leave the eigenvalue sequence of $T$ unchanged. \end{itemize} Note that compact normal operators have a spectral decomposition of the form \begin{align*} T = \sum_{n=1}^\infty \tau_n \langle f_n, \cdot \rangle f_n \end{align*} where $(f_n)_{n \in \mathbb N}$ is an orthonormal basis of $\mathcal H$ and $(\tau_n)_{n \in \mathbb N}$ denotes the modified eigenvalue sequence of $T$, cf.~\cite[Thm.~VIII.§4.6]{berberian1976}. Hence it is evident that for arbitrary $p\in[1,\infty)$, the absolute values of the non-vanishing eigenvalues and the singular values of a \textit{normal} $T\in\mathcal B^p(\mathcal H)$ coincide and thus \begin{align*} \nu_p(T)=\Big(\sum_{n=1}^\infty |\tau_n|^p\Big)^{1/p}<\infty\,. \end{align*} \begin{definition}[$C$-spectrum]\label{defi_3} Let $p,q\in[1,\infty]$ be conjugate. Then, for $C\in\mathcal B^p(\mathcal H)$ with modified eigenvalue sequence $(\gamma_n)_{n\in\mathbb N}$ and $T\in\mathcal B^q(\mathcal H)$ with modified eigenvalue sequence $(\tau_n)_{n\in\mathbb N}$, we define the $C$-\emph{spectrum} of $T$ to be \begin{align*} P_C(T):=\Big\lbrace \sum\nolimits_{n=1}^\infty \gamma_n\tau_{\sigma(n)} \,\Big|\, \sigma:\mathbb N \to\mathbb N \text{ is a permutation}\Big\rbrace. \end{align*} \end{definition} \noindent Due to H\"older's inequality and the standard estimate $\sum_{n=1}^\infty |\gamma_n(A)|^p \leq \sum_{n=1}^\infty s_n(A)^p$, cf.~\cite[Prop.~16.31]{MeiseVogt}, one has $$ \sum_{n=1}^\infty |\gamma_n\tau_{\sigma(n)}|\leq \Big(\sum_{n=1}^\infty s_n(C)^p \Big)^{1/p}\Big(\sum_{n=1}^\infty s_n(T)^q\Big)^{1/q}=\nu_p(C)\nu_q(T)\,. $$ Thus, the series $\sum\nolimits_{n=1}^\infty \gamma_n\tau_{\sigma(n)}$ in the definition of $P_C(T)$ are well-defined and bounded by $\nu_p(C)\nu_q(T)$. \medskip A comprehensive survey on basic results regarding the $C$-spectrum of a matrix can be found in \cite[Ch.
Below, in Theorem \ref{theorem_3}, we generalize some well-known inclusion relations between the $C$-numerical range and the $C$-spectrum of matrices to Schatten-class operators. Prior to this, however, we have to derive an approximation result similar to Theorem \ref{lemma_2}. \begin{theorem}\label{lemma_6} Let $C\in\mathcal B^p(\mathcal H)$ and $T\in\mathcal B^q(\mathcal H)$ both be normal with $p,q\in [1,\infty]$ conjugate. Then \begin{align*} \lim_{n\to\infty}P_{[C]^e_n}([T]^g_n)= \overline{P_C(T)}\,. \end{align*} Here, $[\,\cdot\,]_k^e$ and $[\,\cdot\,]_k^g$ are the maps given by \eqref{cut_out_operator} with respect to the orthonormal bases $(e_n)_{n\in\mathbb N}$ and $(g_n)_{n\in\mathbb N}$ of $\mathcal H$ which diagonalize $C$ and $T$, respectively. \end{theorem} \begin{proof} A proof for $p=1,q=\infty$ (or vice versa) is given in \cite[Thm.~3.6]{dirr_ve} and can be adjusted to $p,q\in(1,\infty)$ by minimal modifications. \end{proof} Now our main result of this section reads as follows. \begin{theorem}\label{theorem_3} Let $C\in\mathcal B^p(\mathcal H)$ and $T\in\mathcal B^q(\mathcal H)$ with $p,q\in [1,\infty]$ conjugate. Then the following statements hold. \begin{itemize} \item[(a)] If either $C$ or $T$ is normal with collinear eigenvalues, then $\overline{W_C(T)}$ is convex.\vspace{4pt} \item[(b)] If $C$ and $T$ both are normal, then \begin{align*} P_C(T)\subseteq W_C(T)\subseteq\operatorname{conv}(\overline{P_C(T)})\,. \end{align*} \item[(c)] If $C$ and $T$ both are normal and the eigenvalues of $C$ or $T$ are collinear, then \begin{align*} \overline{W_C(T)}=\operatorname{conv}(\overline{P_C(T)})\,. \end{align*} \end{itemize} \end{theorem} \begin{proof} (a) W.l.o.g. let $C$ be normal with collinear eigenvalues. There exists an orthonormal basis $(e_n)_{n\in\mathbb N}$ of $\mathcal H$ such that $C=\sum_{n=1}^\infty \gamma_n\langle e_n,\cdot\rangle e_n$. Since $\gamma_n\to 0$ as $n\to\infty$, due to the collinearity assumption there exists $\phi\in[0,2\pi)$ such that $e^{i\phi}C$ is hermitian. Thus, by Theorem \ref{lemma_2}, one has \begin{align*} \overline{W_C(T)}=\overline{W_{e^{i\phi}C}(e^{-i\phi}T)}=\lim_{n\to\infty} W_{[e^{i\phi}C]_{2n}^e}([e^{-i\phi}T]_{2n}^e)\,. \end{align*} As $[e^{i\phi}C]_{2n}^e\in\mathbb C^{2n\times 2n}$ is obviously hermitian for all $n\in\mathbb N$, it follows that $W_{[e^{i\phi}C]_{2n}^e}([e^{-i\phi}T]_{2n}^e)$ is convex for $n\in\mathbb N$, cf.~\cite{article_poon}. Hence Lemma \ref{lemma_5} (c) yields the desired result. \medskip \noindent (b) The statement can be proven completely analogously to \cite[Thm.~3.4 -- second inclusion]{dirr_ve}. \medskip \noindent (c) Finally, applying the closure and the convex hull to (b) yields $\operatorname{conv}(\overline{P_C(T)})=\operatorname{conv}(\overline{W_C(T)})=\overline{W_C(T)}$, where the last equality holds because of (a). \end{proof}\bigskip \textbf{Acknowledgements.} The authors are grateful to Thomas Schulte-Herbr\"uggen for valuable comments, and furthermore to the organizers of the WONRA 2018 which gave the inspiration for this follow-up paper. This work was supported in part by the \textit{Elitenetzwerk Bayern} through ExQM. \bibliographystyle{tfnlm}
1,477,468,750,234
arxiv
\section{{\bf Introduction}} \label{sec:intro} The nature and production mechanism of dark matter still remain a mystery.~While most of the experimental effort in the past was aimed at detecting weakly interacting massive particles (WIMPs), the lack of observation has necessitated new theoretical ideas and proposed search strategies to cover as many alternatives as possible~\cite{Bertone:2018xtm}.~In this context, it is important to explore new production mechanisms for dark matter and assess their impact on search strategies.~As the WIMP paradigm comes under increasing tension, candidates for non-thermal dark matter have gained renewed interest~\cite{Bertone:2018xtm}.~Recently, one class of non-thermal dark matter candidates receiving considerable attention is that of vector (or ``dark photon"\,\footnote{We use `dark photon' and `dark vector' interchangeably throughout to refer to a neutral spin-one massive vector boson associated with a broken dark $U(1)_D$ gauge symmetry.}) dark matter for which several production mechanisms have been proposed.~These include production mechanisms associated with inflation~\cite{Nelson:2011sf,Arias:2012az,Graham:2015rva,Bastero-Gil:2018uel,Ema:2019yrd,Nakayama:2020rka,Nakai:2020cfw,Ahmed:2020fhc,Kolb:2020fwh,Salehian:2020asa,Firouzjahi:2020whk} as well as oscillating scalars after inflation~\cite{Agrawal:2018vin,Co:2018lka,Dror:2018pdh}.~The common feature among them is a coupling between a dark abelian gauge boson and a separate sector which induces a time dependence in the dispersion relation of the dark vector field.~This separate sector can come in the form of a scalar like the inflaton~\cite{Bertone:2018xtm,Salehian:2020asa} or an axion~\cite{Agrawal:2018vin,Co:2018lka} or simply gravity~\cite{Graham:2015rva,Ahmed:2020fhc,Kolb:2020fwh}.~Over large regions of parameter space this leads to exponential dark vector production and can reproduce the observed dark matter relic abundance over many orders of magnitude of dark matter mass. Motivated by, but not limited to, scenarios of axion inflation, in~\cite{Bastero-Gil:2018uel} it was shown that a dark abelian gauge field coupled to an inflaton via a $\phi F\tilde{F}$ coupling can be produced by a tachyonic instability and generate the observed dark matter relic abundance in the mass range $\mu\,{\rm eV} \lesssim m \lesssim 10\,{\rm TeV}$.~More specifically, the time dependence induced by the rolling inflaton leads to a tachyonic enhancement and exponential production of \emph{one transverse polarization} of the dark photon.~As the Universe expands after inflation the dark gauge bosons redshift and, at some point in their cosmic evolution, they obtain a mass and become non-relativistic.~As in~\cite{Graham:2015rva} where the longitudinal mode is produced by inflationary fluctuations, there is a peaked structure in the power spectrum.~However, in this case the peak is not due to redshifting, but instead to the time dependence of the inflaton as it rolls down its potential.~As we examine in detail below, the dark photon production is exponentially sensitive to the inflaton velocity.~This leads to the maximum production just at the end of inflation as the inflaton exits slow roll.~This in turn gives rise to a peak in the dark photon power spectrum at scales corresponding to the Hubble scale at the end of inflation.
The goal of this work is to examine this mechanism in detail and to compute the late time energy density spectrum.~We focus in particular on the case where the dark vector is \emph{relativistic} at the time its mass is generated and examine the associated cosmic evolution.~We first review the production mechanism and track the total energy density to estimate the parameter space which can reproduce the observed dark matter relic abundance.~We then examine the energy density spectrum at the end of inflation as well as its evolution to late times once the dark vector has become non-relativistic.~We obtain the late time power spectrum demonstrating explicitly that the peak generated at the end of inflation is preserved after redshifting.~We then show that the peak corresponds to small physical scales today, $\ell_{\rm today} \sim {\rm cm} - 100\,{\rm km}$, with potentially large density fluctuations at $\ell_{\rm today}$ leading to a clumpy nature for the dark photon matter.~We also discuss potential phenomenology and future directions, briefly commenting on the non-relativistic case. \section{{\bf Vector dark matter production from end of inflation}} Here we expand on the discussion presented in~\cite{Bastero-Gil:2018uel} to show that a dark vector field coupled to a slow rolling inflaton via a $\phi F\tilde{F}$ coupling can be produced by a tachyonic instability and generate the observed dark matter relic abundance.~To do this we first derive the equations of motion and obtain (approximate) analytic solutions for the tachyonic modes.~We then compute the total energy density at the end of inflation and track its cosmic evolution. Since we work in the weak field regime where backreaction effects can be neglected, the evolution of the dark vector modes is linear.~Thus, while the vector field begins as quantum fluctuations during inflation that turn classical only later on, the mode functions of the creation and destruction operators can be obtained using the classical equations of motion.~The evolution of the energy density and power spectrum can then be directly extracted from the mode functions obeying classical evolution. 
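To make this last point concrete, the transverse mode equation \eref{AEOMtau} derived in the next subsection can be integrated numerically for constant $\xi$, starting from Bunch-Davies-type initial data $A_\pm=e^{-ik\tau}/\sqrt{2k}$ deep inside the horizon. The following Python sketch is an illustration only: the parameter values are arbitrary assumptions of ours, and the initial condition is the standard adiabatic vacuum choice rather than anything specific to this paper.
\begin{verbatim}
# Illustrative integration of the transverse mode equation
#   A'' + (k^2 +/- 2 k xi / tau + mbar^2 / tau^2) A = 0
# in conformal time (tau < 0 during inflation) for constant xi,
# with Bunch-Davies-type initial data A = e^{-ik tau}/sqrt(2k).
import numpy as np
from scipy.integrate import solve_ivp

k, xi, mbar = 1.0, 3.0, 0.0   # arbitrary illustrative values

def rhs(tau, y, sign):
    A, dA = y
    omega2 = k**2 + sign * 2.0 * k * xi / tau + mbar**2 / tau**2
    return [dA, -omega2 * A]

tau0, tau1 = -200.0 / k, -0.05 / k
y0 = [np.exp(-1j * k * tau0) / np.sqrt(2.0 * k),
      -1j * k * np.exp(-1j * k * tau0) / np.sqrt(2.0 * k)]

for sign, name in [(+1, "tachyonic helicity"), (-1, "stable helicity")]:
    # with tau < 0, the '+' branch has omega^2 < 0 once k|tau| < 2 xi
    sol = solve_ivp(rhs, (tau0, tau1), y0, args=(sign,),
                    rtol=1e-8, atol=1e-10)
    print(name, abs(sol.y[0, -1]))
\end{verbatim}
Running the sketch shows the helicity with tachyonic dispersion coming out exponentially enhanced while the other remains of vacuum size, which is precisely the asymmetric production described above.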
\subsection{Tachyonic production in an expanding universe} \label{sec:vdmprod} Our starting point is the action for an inflaton with potential $V(\phi)$ coupled to a spin-1 vector boson which is neutral under the Standard Model gauge group, \begin{eqnarray} S &=& - \int d^4x \, \sqrt{-g} \Big[\frac{1}{2} \partial_\mu \phi \partial^\mu \phi + V(\phi) +~\frac{1}{4} F_{\mu\nu} F^{\mu\nu} + \frac{1}{2} m^2 A_\mu A^\mu + \frac{\alpha}{4 f} \phi F_{\mu\nu} \tilde F^{\mu\nu} \Big] \, , \label{eq:Lag} \end{eqnarray} where $\phi$ is the inflaton field that drives inflation, $A_\mu$ is the dark vector field, $F_{\mu\nu}= \partial_\mu A_\nu - \partial_\nu A_\mu$ is the field strength, and $\tilde F^{\mu\nu} = \epsilon^{\mu\nu\alpha\beta} F_{\alpha\beta}/2$ with $\epsilon^{\mu\nu\alpha\beta}$ the completely antisymmetric tensor.~We use the Friedmann-Robertson-Walker metric with $ds^2 = - dt^2 + a^2(t) d\vec{x}^2$ and the convention $\epsilon^{0123} = 1/\sqrt{-g}$.~The vector boson mass $m$ can be zero or non-zero during inflation and can be of a Stueckelberg or `Higgs-ed' type with an associated symmetry breaking phase transition.~As long as it is smaller than the Hubble scale at the end of inflation, the vector mass has negligible effects on the tachyonic production mechanism and only becomes relevant when considering cosmological evolution and the final relic abundance.~There could in principle be a kinetic mixing between the visible and dark photons, but it does not spoil the production mechanism as long as it is small enough to prevent thermalization of the dark vector with the visible sector and its decay to Standard Model particles for masses above the electron threshold, as well as to satisfy other experimental constraints~\cite{Gherghetta:2019coi}.~We also do not specify the inflaton potential $V(\phi)$ as its precise form is not crucial for the production mechanism.~However, the form of the inflationary potential can affect the shape of the dark photon energy density spectrum as we examine in more detail in~\sref{energy}. The only crucial ingredients needed for the production mechanism are:~$(i)$ the scale of the inflaton potential which sets the Hubble scale during inflation, \begin{eqnarray} H = \frac{\sqrt{V(\phi)}}{\sqrt{3} M_{\rm Pl}} \, , \end{eqnarray} and $(ii)$ the coupling of the inflaton to $F\tilde F$ responsible for exponential production of only one polarization of the transverse modes.~Such a coupling is generically present in models of natural inflation, where $\phi$ is a pseudoscalar (odd under parity) axion-like field subject to a shift symmetry.~For this reason, this class of models provides a well motivated theoretical framework for the mechanism presented here.~However, since the dynamics of the mechanism do not depend on whether $\phi$ is an axion or not, $\phi$ can be a generic scalar (and the coupling can involve a more general function of $\phi$~\cite{Barnaby:2011qe}) as long as it is rolling towards its minimum and couples to $F \tilde F$\,\footnote{These ingredients are also present in some models aimed at the relaxation of the electroweak scale \cite{Tangarife:2017rgl} or of the cosmological constant \cite{Graham:2019bfu}.}.~Note that we do not need to impose that the Lagrangian respects parity, so $\phi$ could also be parity even and, in particular, is not necessarily a pseudo Goldstone boson. 
We quantize the vector field by expanding in the helicity basis in terms of creation and annihilation operators and their mode functions as follows, \begin{eqnarray} \hat{\vec A}(\vec x,t) &=& \sum_{\lambda = \pm,L} \int \frac{d^3k}{(2\pi)^3} e^{i \vec k \cdot \vec x} \ \vec \epsilon_\lambda(\vec k) \times [A_\lambda(k,t) a_\lambda(\vec k) + A_\lambda(k,t)^\ast a_\lambda^\dagger(-\vec k)] , \end{eqnarray} where we include the transverse and longitudinal polarizations in the sum and the creation and annihilation operators satisfy the commutation relation, \begin{eqnarray} \Big[a_\lambda(\vec k),\,a_{\lambda^\prime}^\dagger(\vec k^\prime) \Big] = (2\pi)^3 \delta_{\lambda\lambda^\prime}\,\delta^3(\vec k - \vec k^\prime) . \end{eqnarray} The mode functions obey the equations of motion, derived in the Appendix starting from the action in~\eref{Lag}, which in Fourier space read~\cite{Anber:2009ua,Barnaby:2010vf}, \begin{eqnarray} \ddot \phi &+& 3 H \dot\phi + V' = \frac{\alpha}{4f} F \tilde F , \label{eq:EOMphi} \\ \ddot A_\pm &+& H \dot A_\pm + \left( \frac{k^2}{a^2} \pm \frac{k}{a} \frac{\alpha\dot\phi}{f} + m^2 \right) A_\pm = 0 \, , \label{eq:AEOMT} \\ \ddot A_L &+& \frac{3 k^2 + a^2 m^2}{k^2 + a^2 m^2} H \dot A_L + \left( \frac{k^2}{a^2} + m^2 \right) A_L = 0 \, , \label{eq:AEOML} \end{eqnarray} where we have also included the inflaton equation of motion.~The overdots denote derivatives with respect to physical time $t$ and $k \equiv |\vec k|$ is the magnitude of the comoving momentum.~We consider only the spatially homogeneous zero momentum mode ($k=0$) of the inflaton.~We have separated the three degrees of freedom of the vector into transverse and longitudinal components, $\vec A_T$ and $A_L$ respectively, where $\vec k \cdot \vec A = k A_L$ and $\vec k \cdot \vec A_T = 0$, and we have written the transverse component in terms of the two helicities, $\vec A_T = \vec\epsilon_+ A_+ + \vec\epsilon_- A_-$.~We see explicitly that the $\phi F\tilde F$ coupling only enters the equations of motion for the transverse modes.~The equation of motion for $A_L$ then corresponds to the one derived in~\cite{Graham:2015rva} and thus, as demonstrated in~\cite{Graham:2015rva}, if $m\neq 0$ during inflation the longitudinal mode is produced via inflationary fluctuations and will also contribute to the dark vector energy density. In what follows we concentrate on the equation of motion of the transverse vector modes in~\eref{AEOMT}.~It is convenient to introduce the dimensionless ``instability parameter'', \begin{eqnarray} \label{eq:xidef} \xi \equiv \frac{\alpha \dot \phi}{2H f} = \sqrt{\frac{\epsilon}{2}} \frac{\alpha}{f} M_{\rm Pl} \, , \end{eqnarray} where $\epsilon \equiv - \dot H/H^2$ and for single field inflation we have, \begin{eqnarray}\label{eq:slowroll} |\dot\phi| \approx V'/3H,~~ \epsilon = \frac{\dot \phi^2}{2H^2 M_{\rm Pl}^2} . \end{eqnarray} We then rewrite the equation of motion in terms of conformal time $\tau$ defined as $a d\tau = dt$, \begin{eqnarray} \label{eq:AEOMtau} \Big[ \frac{\partial^{2}}{\partial\tau^{2}} + k^{2} \pm 2\,k\,\frac{\xi}{\tau} + \frac{\bar{m}^2}{\tau^2} \Big] A_\pm(k, \tau) \equiv \Big[ \frac{\partial^{2}}{\partial\tau^{2}} + \omega^2(k,\tau) \Big] A_\pm(k, \tau) = 0 \, , \end{eqnarray} where we have defined the dimensionless ratio, \begin{eqnarray}\label{eq:mbar1} \bar m \equiv \frac{m}{H} . 
\end{eqnarray} Without loss of generality we use the convention $\dot\phi > 0$ which gives $\xi > 0$, implying that \emph{only} the mode $A_+$ experiences a tachyonic instability when, \begin{eqnarray} \label{eq:Omegatach} \omega^2(k,\tau) = k^2 + 2k \frac{\xi}{\tau} + \frac{\bar{m}^2}{\tau^2} = k^2 - 2 k \xi a H + \bar{m}^2 (a H)^2 < 0 \, , \end{eqnarray} where we have used the fact that during inflation ($\tau < 0$) we have $\tau \simeq -\frac{1}{a H}$.~On the other hand, the opposite polarization $A_-$~does not have tachyonic modes and is therefore neglected.~The condition in~\eref{Omegatach} then leads to the tachyonic conditions on the vector mass (which could be zero during inflation) and physical momentum ($q$), \begin{eqnarray}\label{eq:qtac} ~~~~ q \equiv \frac{k}{a} < \xi H + \xi H \sqrt{1- \bar{m}^2/ \xi^2} ~~\rm{~~~(tachyonic~condition)}, \end{eqnarray} which requires $\bar{m} < \xi$ to avoid the mass term quenching the tachyonic production~\cite{Meerburg:2012id}.~When $\bar{m} \ll \xi$, we also see the tachyonic condition on the physical momentum becomes $q < 2\xi H$.~As we will see below, we are interested in the (weak coupling) regime where $\xi$ is $\mathcal{O}(1)$ implying that the modes become tachyonic as their physical wavelength is stretched to be of order the horizon, $\lambda \equiv q^{-1} \sim H^{-1}$.~Since $-\omega^2$ is maximal at $q \simeq \xi H$, this implies the vector field power spectrum should also have a peak at scales around the co-moving horizon.~Within the horizon these modes add up coherently and have very large occupation number due to the exponential enhancement from the tachyonic instability.~Thus, these dark vectors are well described by a classical dark `electromagnetic' field and since only one helicity is enhanced exponentially, we are left at the end of inflation with a maximally \emph{helical dark electromagnetic field}.~This is in analogy with magnetogenesis scenarios~\cite{Adshead:2016iae} constructed to solve the puzzle of the origin of primordial magnetic fields. Treating $\xi$ as a constant, a good approximation early on during inflation as the inflaton slow-rolls, one can solve~\eref{AEOMtau} analytically~\cite{Meerburg:2012id} in terms of the Whittaker functions.~The overall normalization is determined by the requirement that the gauge field is initially (in the sub-horizon limit $-k\tau \to \infty$) in the Bunch-Davies (BD) vacuum\,\footnote{This choice of normalization makes the classical mode solutions consistent with the quantum ones.}, \begin{eqnarray}\label{eq:ABD} \lim_{-k\tau\to \infty} A_\pm (k,\tau) = \frac{e^{-ik\tau}}{\sqrt{2k}} \equiv A_{\rm{BD}} \, . \end{eqnarray} Neglecting the dark photon mass (with $\xi$ as constant) the full analytic solution to~\eref{AEOMtau} can be found in terms of Coulomb functions~\cite{Meerburg:2012id,Adshead:2015pva,Domcke:2018eki} which, in the tachyonic regime we are interested in $-k\tau < 2\xi$ ($k < 2 \xi a H$), are very well approximated by~\cite{Barnaby:2011vw}, \begin{eqnarray} \label{eq:Atachsol} A_+(k,\tau) &\simeq & e^{\pi \xi} \sqrt{\frac{-2 \tau}{\pi}} K_1 \left[ 2 \sqrt{-2\xi k \tau} \right] \, , \end{eqnarray} where $K_1$ is a modified Bessel function of the second kind.~In this form we see the exponential dependence on $\xi$ (or $\dot{\phi}$) explicitly.~In the super-horizon (SH) limit ($-k\tau\to 0$)~\eref{Atachsol} gives, \begin{eqnarray} \label{eq:ATP} \lim_{-k\tau\to 0} A_+ (k,\tau) = \frac{e^{\pi \xi}}{2 \sqrt{\pi k \xi}} \equiv A_{\rm{SH}} \, . 
\end{eqnarray} Useful analytic solutions can be obtained via the WKB approximation~\cite{Anber:2009ua, Barnaby:2011vw, Tangarife:2017rgl}, \begin{eqnarray} \label{eq:AWKB} A_+(k,\tau)_{\rm WKB} & \simeq & \frac{1}{\sqrt{2k}} \left( \frac{-k\tau}{2\xi} \right)^{1/4} e^{\pi \xi - 2 \sqrt{-2\xi k\tau}} ~~~~ \left(\frac{1}{8\xi} < -k\tau < 2\xi\right) \, , \end{eqnarray} where the regime of validity is dictated by the adiabatic condition $|\omega' / \omega^2| \ll 1$ with $\omega' \equiv d\omega/d\tau$.~For modes with $k$ in the range $aH/(8\xi) < k < 2\xi aH$, $A_+(k,\tau)_{\rm WKB}$ approximates very well the solution obtained in~\eref{Atachsol} and gives us intuition into the behavior of the modes around horizon crossing as they become exponentially enhanced.~This also allows us to use analytic solutions to obtain an estimate of the total vector dark matter relic abundance and the viable regions of parameter space for the mechanism.~Eventually, we will need to compute the power spectrum at the end of inflation to use as input when tracking the cosmological evolution of the energy density spectrum.~However, as we emphasize in~\sref{energy}, a more precise calculation of the dark matter energy density and shape of the power spectrum requires accounting for the time dependence of $\xi$, which necessitates solving the system of equations in~\eref{EOMphi} and~\eref{AEOMT} numerically. \subsection{Energy density at the end of inflation} \label{sec:prod} To eventually obtain the final relic abundance for the vector dark matter we need to track the evolution of the energy density starting from the time of its production at the end of inflation.~Thus we need to first compute the total vector energy density at the end of inflation.~As we review in the Appendix, starting from the action in~\eref{Lag} we can obtain the total energy density for the transverse component of the dark vector field ($\rho_D$) in terms of the tachyonic mode amplitude and its (conformal) time derivative, \begin{eqnarray} \rho_D &=& \frac{1}{4\pi^2 a^4} \int_0^\infty dk \, k^2 \Big( | \partial_\tau A_{+}(k,\tau) |^2 + \left( k^2 + a^2 m^2 \right) |A_{+}(k,\tau)|^2 \Big) \nonumber \\ &=& \frac{1}{2a^4} \int d\ln k \Big( \mathcal{P}_{\partial_\tau A_+}(k,\tau) + \left( k^2 + a^2 m^2 \right) \mathcal{P}_{A_+}(k,\tau) \Big) \nonumber \\ &=& \frac{1}{2}\langle \vec{E}^2 + \vec{B}^2 \rangle \, , \label{eq:rhoDint} \end{eqnarray} where $\rho_D$ denotes the spatial average as defined in the Appendix (see~\eref{rhoTt}) and we identify the `magnetic' ($B$) and `electric' ($E$) components respectively.~We have used $dt = a d\tau$ as well as defined the field and (time) derivative power spectra, \begin{eqnarray}\label{eq:PSdef} \mathcal{P}_X (k,\tau) &=& \frac{k^3}{2\pi^2} |X (k,\tau) |^2~~~ X = A_{+} \ {\rm or} \ \ \partial_\tau A_{+} \, , \end{eqnarray} which allows us to define the electric and magnetic energy density spectra respectively, \begin{eqnarray} \label{eq:rhoEandB} \frac{d\rho_E}{d\,{\rm ln}\,k} &=& \frac{1}{2a^4} \mathcal{P}_{\partial_\tau A_+}(k,\tau) ,~~ \frac{d\rho_B}{d\,{\rm ln}\,k} = \frac{1}{2a^4} \left( k^2 + a^2 m^2 \right) \mathcal{P}_{A_+}(k,\tau) \,. 
\end{eqnarray} Note as part of the magnetic component we have included the mass term which of course is not present in the case of the visible electromagnetic field.~Since only one transverse mode is exponentially enhanced (which we take as $A_+$) by the tachyonic instability, we can safely neglect the contribution from $A_-$ to~\eref{rhoDint}.~For the case where the dark vector already has a mass during inflation, we also compute the energy density contained in the longitudinal mode and obtain the same result as in~\cite{Graham:2015rva}. Taking $\bar{m} \ll 1$ during inflation and neglecting the time dependence of $\xi$, we can use the WKB solution for $A_+$ in~\eref{AWKB} to compute analytically the dark electric field contribution given by $\mathcal{P}_{\partial_\tau A_+}(k,\tau)$ as well as the dark magnetic field contribution $k^2 \mathcal{P}_{A_+}(k,\tau)$, where the former always dominates over the latter during inflation.~Integrating over momenta we can then estimate the energy density contained in the dark vector during inflation, \begin{eqnarray}\label{eq:rhoDinf} \rho_D \approx 10^{-4} \frac{e^{2\pi \xi}}{\xi^3} \, H^4 . \end{eqnarray} In reality of course, both the Hubble parameter and $\xi$ depend on time.~The parameter $\xi$, which controls the dark photon production and grows with $\dot\phi$ (or the slow roll parameter $\epsilon$), is largest at the end of inflation.~Thus, the largest contribution to the dark electromagnetic energy density comes from the end of inflation, as confirmed in our numerical analysis discussed below.~From \eref{rhoDinf} we can estimate the energy density at the end of inflation as, \begin{eqnarray} \label{eq:rhoD} \rho_D(a_{\rm{end}}) \equiv \rho^{\rm{end}}_D \approx 10^{-4} \frac{e^{2\pi \xi_{\rm end}} }{\xi_{\rm end}^3} H_{\rm end}^4 \, = 10^{-4} \frac{e^{2\pi \xi_{\rm end}}}{\xi_{\rm end}^3} \, \epsilon_H^4 H^4 , \end{eqnarray} with $\xi_{\rm end}$ the value of $\xi$ at the end of inflation and given by (using~\eref{xidef}), \begin{eqnarray}\label{eq:xiend} \xi_{\rm end} = \frac{\alpha}{\sqrt{2}}\frac{M_{\rm Pl}}{f} , \end{eqnarray} where we have assumed $\epsilon = 1$ at the end of inflation\footnote{Note in hybrid inflation models~\cite{Copeland:1994vg,Dvali:1994ms}, one can have values different from $\epsilon \approx 1$ at the end of inflation.}.~In the last equality in~\eref{rhoD} we account for a decreasing Hubble parameter during inflation and parametrize it as, \begin{eqnarray} \label{eq:Hend} H_{\rm end} = \epsilon_H H \, , \end{eqnarray} with $\epsilon_H$ a dimensionless parameter that can be calculated in a particular model of inflation.~Typically the slow-roll parameter $\epsilon = - \dot H/H^2$ is $\mathcal{O}(10^{-2} - 10^{-3})$ during inflation and $\approx 1$ at the end of inflation.~This translates into values of $\epsilon_H$ that are model dependent, but for most models of inflation we expect $\epsilon_H$ to be in the range $10^{-3} < \epsilon_H < 10^{-1}$.~In axion inflation models involving tachyonic production of vector fields, which have roughly 60 e-folds of inflation, $\xi$ during the first few e-folds is constrained by CMB measurements~\cite{Barnaby:2010vf,Barnaby:2011vw} to be less than $\xi_{\rm CMB} \lesssim 2.5$, but $\xi_{\rm end}$ is allowed to be significantly larger.~However, if $\xi$ is too large this will induce back reaction effects~\cite{Barnaby:2011qe,Peloso:2016gqs} which must be accounted for and which could potentially destroy the production mechanism.~Thus, in order to neglect these back reaction effects 
we limit ourselves to $\xi_{\rm end} < \mathcal{O}(10)$ in the following.~On the other hand, to obtain sufficient dark vector production requires $\xi_{\rm end} \gtrsim 1$, leading us to consider the range $1 \lesssim \xi_{\rm end} \lesssim 10$ when examining the viable vector dark matter parameter space below. During inflation, when $\phi$ dominates the energy density, we have for the inflaton, \begin{eqnarray} \label{eq:rhoI} \rho_I = V(\phi) = 3 H^2 M_{\rm Pl}^2 \, , \end{eqnarray} while at the end of inflation, with the inflaton still dominating, we have $H = H_{\rm end}$ and, \begin{eqnarray} \label{eq:rhoIend} \rho^{\rm{end}}_I = 3 H_{\rm end}^2 M_{\rm Pl}^2 = 3 \, \epsilon_H^2 H^2 M_{\rm Pl}^2 \, . \end{eqnarray} A fraction of $\rho^{\rm{end}}_I$ is transferred into the dark vector while another fraction goes into the visible radiation which reheats the Universe, whose energy density we can write as, \begin{eqnarray} \label{eq:rhoRH} \rho_R (T_{\rm RH}) = \frac{\pi^2}{30}g_*(T_{\rm RH}) T_{\rm RH}^4 \, \equiv \epsilon_R^4 \, \rho_I = 3 H_{\rm RH}^2 M_{\rm Pl}^2 . \end{eqnarray} Combining with~\eref{rhoI}, this allows us to define the reheating temperature, \begin{eqnarray} \label{eq:TRH} T_{\rm RH} = \epsilon_R\, \left( \frac{90}{\pi^2 g_*(T_{\rm RH})} \right)^{1/4} \sqrt{H M_{\rm Pl}} \, , \end{eqnarray} as well as the Hubble scale at reheating in terms of the Hubble scale during inflation, \begin{eqnarray} \label{eq:HHR} H_{\rm RH} = \epsilon_R^2 H \, . \end{eqnarray} The dimensionless parameter $\epsilon_R < 1$ parametrizes the fraction of the inflaton energy that goes into visible radiation.~We take $g_*(T_{\rm RH})$ to denote the number of relativistic degrees of freedom, which we fix to $g_*(T_{\rm RH}) \sim 100$ and restrict ourselves to reheating temperatures above the electroweak scale.~We work in the approximation of instantaneous reheating, assuming it takes place as soon as the inflaton exits slow-roll.~Thus, we assume the Universe has a temperature $T_{\rm RH}$ at $a = a_{\rm end}$ (see~\fref{confdiag}) implying $\rho^{\rm{end}}_D = \rho_D(T_{\rm RH})$.~Note however that we do not assume $H_{\rm RH} = H_{\rm end}$, which will be important to account for when we discuss constraints on the vector dark matter parameter space below. \subsection{Evolution of total energy density} \label{sec:EDevolution} In this section we track the redshift of the dark vector energy density after production at the end of inflation and estimate the present day relic abundance.~Here we only consider the case where the dark vector is relativistic at the time its mass is generated.~We also assume the dark vector mass is already present during inflation and remains non-zero throughout its cosmic evolution.~This scenario can be applied to either a Stueckelberg or dark Higgs mechanism for generating the dark vector mass.~We study the non-relativistic case and other possibilities for cosmic evolution when the dark vector mass is generated via a dark Higgs mechanism in~\cite{Higgsfollowup}, where we find a different result from~\eref{DMT1} for the relic density.~For purposes of tracking the evolution and estimating the viable dark matter parameter space, it is enough to track the redshift of the energy density considering only modes around the peak of the power spectrum, which we assume all redshift together. 
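The reheating relations above are simple to encode for later use.~The following is a minimal Python sketch of our own (the function and variable names are illustrative and not from any production code), assuming $g_*(T_{\rm RH}) = 100$ as fixed above:
\begin{verbatim}
import math

M_PL = 2.4e18     # reduced Planck mass [GeV]
G_STAR = 100.0    # relativistic degrees of freedom at reheating (fixed above)

def T_reheat(H, eps_R):
    """Reheating temperature, eq. (TRH) [GeV]."""
    return eps_R * (90.0 / (math.pi**2 * G_STAR))**0.25 * math.sqrt(H * M_PL)

def H_reheat(H, eps_R):
    """Hubble scale at (instantaneous) reheating, eq. (HHR) [GeV]."""
    return eps_R**2 * H
\end{verbatim}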
As discussed in~\sref{vdmprod}, for a given scale factor, the power spectrum will be peaked around scales the size of the co-moving horizon.~Thus, at the end of inflation the modes which give the largest contribution to $\rho^{\rm{end}}_D$ have physical momentum, \begin{eqnarray}\label{eq:qRH} q(T_{\rm RH}) \equiv \frac{k}{a_{\rm end}} \simeq H_{\rm end} \gg m\, , \end{eqnarray} where again we have assumed instantaneous reheating, allowing us to define $T_{\rm RH}$ and track the redshift using temperature instead of scale factor.~At reheating the dark vectors are relativistic with the physical momentum then redshifting as, \begin{eqnarray}\label{eq:qT} q(T) = q(T_{\rm RH}) \frac{T}{T_{\rm RH}} \, . \end{eqnarray} The dark photons become non-relativistic at a temperature $T = \bar T$ defined by the condition, \begin{eqnarray}\label{eq:qeqm} q(\bar T) = m \, , \end{eqnarray} which combining with~\eref{Hend},~\eref{TRH} and~\eref{qRH} allows us to solve for $\bar{T}$ as, \begin{eqnarray} \bar T = \frac{m}{H_{\rm end}} \, T_{\rm RH} = m \left( \frac{90}{\pi^2 g_*(T_{\rm RH})} \right)^{1/4} \frac{\epsilon_R}{\epsilon_H} \left( \frac{M_{\rm Pl}}{H} \right)^{1/2} \, . \label{eq:Tbar} \end{eqnarray} Above $\bar T$ (or before scale factor $a = \bar{a}$) the vector energy density redshifts like radiation, \begin{eqnarray} \label{eq:rhoDrad} \rho_D(T) = \rho_D(T_{\rm RH}) \left( \frac{T}{T_{\rm RH}} \right)^4 \, , \end{eqnarray} while below $\bar T$ (after $a = \bar{a}$) it redshifts like matter giving, \begin{eqnarray} \label{eq:rhoDmat} \rho_D(T) = \rho_D(T_0) \left( \frac{T}{T_0} \right)^3 \, , \end{eqnarray} where $T_0 \approx 10^{-13}$ GeV is today's CMB temperature.~The cosmic evolution described above and in the previous section can be summarized with~\fref{peakevo}, which shows the evolution of the various energy densities defined in~\eref{rhoD},~\eref{rhoI}, and~\eref{rhoRH}. \begin{figure}[tbh] \includegraphics[scale=.25]{PeakModeEvolution} \caption{Schematic representation of the cosmic evolution of the vector dark matter energy density ($\rho_D$) as well as the energy density in visible radiation ($\rho_R$) and the inflaton ($\rho_I$).} \label{fig:peakevo} \end{figure} Equating~\eref{rhoDrad} and~\eref{rhoDmat} at $T = \bar{T}$ and combining with~\eref{rhoD},\,\eref{TRH},\,and~\eref{Tbar}, we obtain the energy density today in terms of the energy density at the end of inflation, \begin{eqnarray} \rho_D(T_0) &=& m \,T_0^3 \left( \frac{90}{\pi^2 g_*(T_{\rm RH})} \right)^{-3/4} \left( \frac{\epsilon_H}{ \epsilon_R} \right)^3 \left( \frac{H}{M_{\rm Pl} } \right)^{3/2} \left(\frac{\rho^{\rm{end}}_D}{H_{\rm end}^4}\right) \, . \label{eq:DMeq1} \end{eqnarray} This expression assumes the modes around the peak all redshift together, which is strictly speaking not true as modes of different momenta become non-relativistic at different times, but for present purposes the approximation in~\eref{DMeq1} is sufficient.~The dark vector energy density at the end of inflation $\rho^{\rm{end}}_D$ (normalized to $H_{\rm end}^4$) can be obtained numerically, but as found in~\eref{rhoD} a useful analytic result can be obtained with the WKB approximation. 
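The redshift bookkeeping above is equally simple to encode.~The sketch below (ours, with illustrative names) evaluates $\bar T$ from~\eref{Tbar} and the present day energy density from~\eref{DMeq1}, using the more precise $T_0 \simeq 2.35 \times 10^{-13}$ GeV and taking the ratio $\rho^{\rm{end}}_D/H_{\rm end}^4$ as input:
\begin{verbatim}
import math

M_PL, G_STAR = 2.4e18, 100.0   # reduced Planck mass [GeV], g_* at reheating
T0 = 2.35e-13                  # CMB temperature today [GeV]

def T_bar(m, H, eps_R, eps_H):
    """Temperature at which the peak modes become non-relativistic, eq. (Tbar)."""
    return (m * (90.0 / (math.pi**2 * G_STAR))**0.25
              * (eps_R / eps_H) * math.sqrt(M_PL / H))

def rho_D_today(m, H, eps_R, eps_H, rho_end_over_Hend4):
    """Present day dark vector energy density, eq. (DMeq1) [GeV^4]."""
    return (m * T0**3 * (90.0 / (math.pi**2 * G_STAR))**(-0.75)
              * (eps_H / eps_R)**3 * (H / M_PL)**1.5 * rho_end_over_Hend4)
\end{verbatim}
Dividing by $\rho_{\rm CDM}$ and inserting the WKB estimate $\rho^{\rm{end}}_D/H_{\rm end}^4 \approx 10^{-4}\, e^{2\pi\xi_{\rm end}}/\xi_{\rm end}^3$ from~\eref{rhoD} approximately reproduces the coefficient of~\eref{DMT1} below.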
Using the WKB approximation and taking the observed energy density of cold dark matter today~\cite{Ade:2015xua} to be $\rho_{\rm CDM} = 9.6 \times 10^{-48} \ {\rm GeV}^4$, the contribution from the transverse dark vector mode produced via tachyonic instability can then be written as, \begin{eqnarray} \frac{\rho_D(T_0)}{\rho_{\rm CDM}} = \frac{\Omega_T}{\Omega_{\rm CDM}} \simeq 2 \times 10^{-4} \cdot \left( \frac{m}{\rm MeV} \right) \left( \frac{H}{10^{14} \ {\rm GeV}} \right)^{3/2} \left( \frac{e^{2\pi \xi_{\rm end}}}{\xi_{\rm end}^3} \right) \, \left( \frac{\epsilon_H}{\epsilon_R} \right)^3 . \label{eq:DMT1} \end{eqnarray} We see in this case the final relic abundance depends on five parameters.~The first two are the dark vector mass $m$ and the Hubble scale during inflation $H$, as in the case of the longitudinal mode~\cite{Graham:2015rva}.~The third parameter $\xi_{\rm end}$ parametrizes the strength of the inflaton-dark vector coupling (see~\eref{xidef}) and depends weakly on the precise shape of the inflaton potential.~The final two parameters parametrize our ignorance of the inflaton sector, $\epsilon_H$, which parametrizes how much $H$ has decreased by the end of inflation (see~\eref{Hend}) and $\epsilon_R$, which parametrizes the fraction of energy transferred from the inflaton to the reheating sector (see~\eref{rhoRH}).~Both can in principle be calculated in a particular model of inflation. If the dark vector has a mass already during inflation, whether of Stueckelberg or Higgs-ed type (with the $U(1)_D$ \emph{not} restored during reheating), which persists throughout the entirety of its cosmic evolution until today, there is also a contribution to the relic density from the longitudinal mode produced via inflationary quantum fluctuations~\cite{Graham:2015rva}, \begin{eqnarray} \label{eq:DML} \frac{\Omega_L}{\Omega_{\rm CDM}} = \left( \frac{m}{ 6 \times 10^{-15} \ {\rm GeV}} \right)^{1/2} \left( \frac{H}{10^{14} \ {\rm GeV}} \right)^2 \, . \end{eqnarray} We see this only depends on the vector mass and Hubble scale during inflation.~In this case both the transverse and longitudinal modes could in principle contribute appreciably to the dark matter relic abundance.~Thus, for these cases of dark vector mass generation, we will also include the longitudinal mode when exploring the viable parameter space below. \subsection{Vector dark matter parameter space} \label{sec:paramspace} Having followed the cosmic evolution of the dark vector energy density and found an estimate for the relic abundance today, we can go on to estimate the regions of parameter space for a viable dark matter candidate.~To find these regions, we must ensure that not only the observed cold dark matter relic abundance is reproduced, but also that a number of constraints on the parameters in~\eref{DMT1} are satisfied for consistency of the mechanism.~As discussed in the previous section, we will focus only on scenarios where the dark vector is relativistic at the time its mass is generated and assume in the Higgs-ed case that once broken, the dark $U(1)_D$ is not restored by reheating or any other phase transition~\cite{Higgsfollowup}. 
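For quick scans of the parameter space, the closed forms~\eref{DMT1} and~\eref{DML} can be evaluated directly.~A minimal sketch (ours), with masses and Hubble scales in GeV:
\begin{verbatim}
import math

def omega_T(m, H, xi_end, eps_R, eps_H):
    """Transverse relic fraction Omega_T/Omega_CDM, eq. (DMT1)."""
    return (2e-4 * (m / 1e-3) * (H / 1e14)**1.5
            * math.exp(2.0 * math.pi * xi_end) / xi_end**3
            * (eps_H / eps_R)**3)

def omega_L(m, H):
    """Longitudinal relic fraction Omega_L/Omega_CDM, eq. (DML),
    relevant when m != 0 during inflation."""
    return math.sqrt(m / 6e-15) * (H / 1e14)**2

# Benchmark point of eq. (bench) below:
print(omega_T(1.3e-6, 1e9, 6.0, 0.1, 0.1))  # O(1): transverse mode dominates
print(omega_L(1.3e-6, 1e9))                 # negligible for these parameters
\end{verbatim}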
\begin{center} {\bf \emph{Constraints on model parameters}} \end{center} \begin{enumerate} \item For the dark vector mass, the upper bound results from requiring efficient tachyonic production at the end of inflation and applies only to the case of $m \neq 0$ during inflation.~At the same time $m$ is bounded from below by the condition that the dark vector becomes non-relativistic, thus behaving like cold dark matter, before matter-radiation equality.~We write this condition as $\bar T > T_{\rm CMB}$ (see~\eref{Tbar}), with $T_{\rm CMB} \simeq 10^{-9}$ GeV.~Thus $m$ is constrained to lie in the window, \begin{eqnarray} \label{eq:mconstraint} \left(\frac{\pi^2 g_*(T_{\rm RH})}{90}\right)^{1/4} \frac{\epsilon_H}{\epsilon_R} \sqrt{\frac{H}{M_{\rm Pl}}} \, T_{\rm CMB} < m &<& \epsilon_H H\, . \end{eqnarray} This window allows for dark vector masses spanning many orders of magnitude. \item The energy density in the inflaton at the end of inflation must be larger than in the radiation which reheats the Universe, $\rho^{\rm{end}}_I > \rho_R(T_{\rm RH})$, implying $H_{\rm end} > H_{\rm RH}$ and, \begin{eqnarray} \label{largepsH} \frac{\epsilon_H}{\epsilon_R^2} > 1 \, , \end{eqnarray} to ensure the Universe does not reheat to energies greater than $\rho^{\rm{end}}_I$. \item At $a = a_{\rm RH} = a_{\rm end}$, the energy density of radiation in the reheating sector must be greater than the energy density of the dark vector, $\rho_R (T_{\rm RH}) > \rho^{\rm{end}}_D$, which leads to, \begin{eqnarray} \label{eq:epsilonconstraint} \frac{\epsilon_H}{\epsilon_R} \ll 10 \ \xi_{\rm end}^{3/4} e^{-\pi\xi_{\rm end}/2} \left( \frac{M_{\rm Pl}}{H} \right)^{1/2} \, . \end{eqnarray} Otherwise, the Universe would become (dark) matter dominated at a temperature above $T_{\rm CMB}$, thus violating matter-radiation equality at $T_{\rm CMB}$. \item If the inflaton-dark vector coupling is too large, the dark vector can thermalize with the inflaton which must also couple to Standard Model particles to reheat the Universe.~This would lead to thermalization of the dark vector with the visible sector and spoil our dark matter production mechanism.~Ensuring the inflaton and dark vector do not thermalize~\cite{Ferreira:2017lnd, Ferreira:2017wlx} puts an upper bound on $\xi$, \begin{eqnarray} \xi < 0.44 \ln \frac{f}{\alpha H} + 3.4 \, . \end{eqnarray} Using~\eref{xidef} and taking the slow-roll parameter to be $\epsilon = 1$ at $a_{\rm end}$ then gives, \begin{eqnarray} \label{eq:thermal} \xi_{\rm end} < 0.44 \ln \left( \frac{1}{\sqrt{2} \xi_{\rm end}} \frac{M_{\rm Pl}}{H} \right) + 3.4 \, . \end{eqnarray} \item We also assume that back-reaction effects on the inflaton dynamics are negligible which leads to two conditions.~The first is $3 H \dot\phi \simeq V' \gg (\alpha/f) \langle \vec E \cdot \vec B \rangle$ meaning that the $\langle F\tilde{F}\rangle = \langle \vec E \cdot \vec B \rangle$ term is negligible in the inflaton equation of motion in~\eref{EOMphi}.~The second condition is that $3 H^2 M_{\rm Pl}^2 \gg \langle \vec E^2 \rangle / 2$, ensuring that the inflaton dominates the energy density during inflation rather than the dark vector.~Both conditions are satisfied as long as $\xi$ is not too large~\cite{Barnaby:2011qe,Peloso:2016gqs}.~Requiring that they hold all the way to the end of inflation results in the constraint on $\xi_{\rm end}$, \begin{eqnarray} \label{eq:backreaction} \frac{\epsilon_H H}{M_{\rm Pl}} \ll 10^2\, \xi_{\rm end}^{3/2} \, e^{-\pi\xi_{\rm end}} \, . 
\end{eqnarray} \item When the inflaton exits the slow-roll regime, it starts oscillating about the minimum of its potential and reheats the Universe.~If the coupling $\alpha / f$ is moderately large, roughly $\alpha / f > 35 M_{\rm Pl}^{-1}$, the production of dark vectors during these oscillations can be important.~This phenomenon is referred to as gauge-preheating and has been studied in~\cite{Adshead:2015pva}.~However, in order to satisfy the constraints listed above, here we consider the range $1 \lesssim \xi_{\rm{end}} \lesssim 10$, which results in the window for $\alpha/f$, \begin{eqnarray} \frac{\sqrt{2}}{M_{\rm{Pl}}} \lesssim \frac{\alpha}{f} \lesssim \frac{10 \sqrt{2}}{M_{\rm{Pl}}} \, , \end{eqnarray} where we have used~\eref{xiend}.~In this range of $\alpha / f$, preheating into dark vectors is not efficient~\cite{Adshead:2015pva}, implying that during the inflaton oscillations only a negligible fraction of its energy density is transferred to the dark vector.~We then assume that reheating proceeds via the perturbative decay of the inflaton into visible radiation. \end{enumerate} Finally, we also implicitly assume there are no other light scalar or fermion fields in the dark sector which couple to the dark vector.~If such light fields were present, they would be produced by the strong dark electromagnetic field via the Schwinger effect~\cite{Tangarife:2017vnd, Tangarife:2017rgl} and would in principle contribute to the dark matter abundance today.~Here we assume they are heavy enough to avoid this interesting possibility, which we leave to forthcoming work~\cite{DSfollowup}. \begin{center} {\bf \emph{Final viable parameter space}} \end{center} In~\fref{relic} we show the relic abundance given in~\eref{DMT1} as a function of $m$ and $H$, imposing the constraints listed above, for different values of the parameters $\xi_{\rm end}$, $\epsilon_R$, $\epsilon_H$.~In practice, requiring that the dark vector is not overabundant, together with the first two constraints, automatically ensures the remaining constraints are satisfied, so only these are shown.~Along the contours labeled ``Transverse'' for different values of $\xi_{\rm end}$, which separate the purple shaded regions, we obtain the observed relic abundance with the transverse mode making up the entirety of the dark matter.~In the colored regions to the right of these lines the dark matter is overabundant.~We see that the transverse mode of the dark vector can make a viable dark matter candidate over a wide range of parameter space: $\mu {\rm eV} \lesssim m \lesssim {\rm TeV}$, $100 \ {\rm GeV} \lesssim H \lesssim 10^{14} \ {\rm GeV}$ for $\xi_{\rm end} \sim \mathcal{O}(1-10)$ (see~\eref{xiend}). 
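The interplay of these conditions is easy to scan numerically.~The sketch below (ours) implements the inequalities in items 1--5 above as sharp cuts, dropping the $\mathcal{O}(1)$ margins implied by the $\ll$ signs, so it should be read as a rough filter rather than a precise boundary:
\begin{verbatim}
import math

M_PL, G_STAR, T_CMB = 2.4e18, 100.0, 1e-9   # GeV units

def constraints_ok(m, H, xi_end, eps_R, eps_H):
    """Rough consistency checks of the itemized constraints above."""
    m_low = ((math.pi**2 * G_STAR / 90.0)**0.25 * (eps_H / eps_R)
             * math.sqrt(H / M_PL) * T_CMB)
    c1 = m_low < m < eps_H * H                                   # mass window
    c2 = eps_H / eps_R**2 > 1.0                                  # H_end > H_RH
    c3 = (eps_H / eps_R < 10.0 * xi_end**0.75                    # rho_R > rho_D^end
          * math.exp(-math.pi * xi_end / 2.0) * math.sqrt(M_PL / H))
    c4 = (xi_end < 0.44 * math.log(M_PL / (math.sqrt(2.0) * xi_end * H))
          + 3.4)                                                 # no thermalization
    c5 = (eps_H * H / M_PL < 1e2 * xi_end**1.5                   # small backreaction
          * math.exp(-math.pi * xi_end))
    return all([c1, c2, c3, c4, c5])
\end{verbatim}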
\begin{figure}[tbh] \includegraphics[scale=.62]{DMplot11} \includegraphics[scale=.62]{DMplot15} \caption{Parameter space in the dark photon mass versus Hubble scale $m - H$ plane for values of the parameters $\xi_{\rm end}$, $\epsilon_R$, and $\epsilon_H$ as indicated above each plot and described in the text.~Along the lines labeled ``Transverse'' for different values of $\xi_{\rm end}$, we obtain the observed relic abundance for the transverse mode while in the colored regions to the right of these lines the dark matter is overabundant.~The region in the grey band at large masses is excluded by requiring efficient tachyonic production during inflation while the region in gray at low masses is excluded by requiring the dark photons are non-relativistic by the time of CMB formation (see~\eref{mconstraint}).~For the case when the dark vector has a mass during inflation, we also show along the black line labeled ``Longitudinal'' the contour where the longitudinal mode makes up all of the observed dark matter.} \label{fig:relic} \end{figure} For comparison, we also plot the relic abundance of the longitudinal mode in~\eref{DML} in the case the dark vector has a mass during inflation and is thus also produced via inflationary fluctuations~\cite{Graham:2015rva}.~In the regions where the line labeled ``Transverse'' is to the left of the ones labeled ``Longitudinal'', the transverse mode gives the dominant contribution to the relic density.~We see large regions of parameter space where this is the case.~On the left in~\fref{relic} we also see a region of parameter space where the longitudinal and transverse modes give comparable contributions to the dark matter relic abundance, which raises interesting possibilities to be discussed more below.~Since for the parameters shown on the right in~\fref{relic} the transverse mode always dominates, we do not show the contour for the longitudinal mode.~Note however that the relic abundance for the longitudinal mode only depends on $m$ and $H$, so it has the same contour regardless of the other parameters. Finally, for illustration purposes we consider a specific benchmark point, \begin{eqnarray}\label{eq:bench} \xi_{\rm end}=6\, , \ \ \epsilon_R = 10^{-1}\, , \ \ \epsilon_H=10^{-1}\, , \\ H = 10^9 \ {\rm GeV}\, , \ \ m = 1.3 \ \mathrm{keV}\, .\nonumber \end{eqnarray} This leads to a reheating temperature $T_{\rm RH} = 2.7 \times 10^{12}$ GeV and an initial radiation energy density $\rho_R(T_{\rm RH}) = 1.7 \times 10^{51} \ {\rm GeV}^4$, several orders of magnitude larger than the initial energy density in the dark electromagnetic field $\rho_D(T_{\rm RH}) = 10^{42} \ {\rm GeV}^4$.~The dark photons become non-relativistic at $\bar T = 36$~MeV, then redshift like matter for some time before matching the energy density of radiation at $T_{\rm CMB}$.~Note that the momentum of the dark photon has a long time to redshift from $\bar T$ to $T_{\rm CMB}$, so it is very `cold' by the time of matter-radiation equality.~Note also that at the time of Big Bang Nucleosynthesis (BBN), $T_{\rm BBN} \sim 1$ MeV, the dark photon is already non-relativistic and still constitutes a small fraction of the total energy density.~Therefore, bounds on extra relativistic species ($N_{\rm eff}$) are easily avoided. 
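The numbers quoted for this benchmark are straightforward to reproduce.~A quick numerical check (our arithmetic only, in the same spirit as the sketches above):
\begin{verbatim}
import math

M_PL, g = 2.4e18, 100.0
H, m, eps_R, eps_H, xi = 1e9, 1.3e-6, 0.1, 0.1, 6.0   # GeV units, eq. (bench)

T_RH  = eps_R * (90.0/(math.pi**2 * g))**0.25 * math.sqrt(H * M_PL)  # eq. (TRH)
rho_R = math.pi**2/30.0 * g * T_RH**4                                # eq. (rhoRH)
rho_D = 1e-4 * math.exp(2*math.pi*xi)/xi**3 * (eps_H * H)**4         # eq. (rhoD)
T_bar = m/(eps_H * H) * T_RH                                         # eq. (Tbar)

print(T_RH)    # ~2.7e12 GeV
print(rho_R)   # ~1.7e51 GeV^4
print(rho_D)   # ~1e42  GeV^4
print(T_bar)   # ~3.5e-2 GeV, i.e. the ~36 MeV quoted above
\end{verbatim}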
\section{{\bf Energy Density Spectrum and Clumpy Dark Matter}} \label{sec:energy} Here we examine in detail the cosmological evolution of the dark vector energy density starting from the end of inflation.~In particular, we examine the energy density spectrum and show explicitly that the peak in the spectrum at the end of inflation survives cosmic evolution until late times.~We then examine the spectrum of \emph{fluctuations} in the energy density around the time of matter radiation equality.~We confirm that power at large scales remains highly suppressed, allowing for a vector produced in this manner to evade constraints on isocurvature from measurements of the CMB~\cite{Akrami:2018odb}, thus making it a viable dark matter candidate.~The power spectrum of density fluctuations also serves as the starting point for studying implications on structure formation~\cite{Kolb:1994fi,Graham:2015rva,Alonso-Alvarez:2018tus,Berges:2019dgr}, which we leave for future work. \subsection{Energy density spectrum at the end of inflation} \label{sec:PSend} To gain further intuition for the tachyonic production mechanism and evolution of the modes during inflation as well as facilitate the discussion below, we can consider the analytic solutions in~\eref{ABD}-\eref{AWKB} together with the conformal diagram in~\fref{confdiag}.~Here we show co-moving length scales versus scale factor (or conformal time) with the co-moving horizon indicated by the contour (solid black) at $k^{-1} = (a H)^{-1}$.~The last mode to exit the horizon during inflation is indicated by the black line labeled $k_{\rm{end}}^{-1} = (a_{\rm{end}} H_{\rm end})^{-1}$.~The Compton wavelength contour at $\lambda = q^{-1} = (k/a)^{-1} = m^{-1}$ applies in the case that the dark vector already has a mass during inflation and defines the time when it becomes non-relativistic.~We also show the last mode to cross this contour (and become non-relativistic) during inflation labeled $k_m$ as well as the maximum momentum tachyonic mode $k_{\rm max} \simeq 2\xi_{\rm end} k_{\rm end}$.~Utilizing~\fref{confdiag} and~\eref{qtac} we find the ratios of these scales, \begin{figure}[tbh] \includegraphics[scale=.5]{PhaseDiagram_end} \caption{Conformal diagram during inflation zoomed in around the horizon region (green) showing co-moving scales versus scale factor (see text for more information).} \label{fig:confdiag} \end{figure} \begin{eqnarray}\label{eq:scaleratios1} \frac{k_m}{k_{\rm{end}}} &=& \frac{m}{H_{\rm end}},~~ \frac{k_{\rm{max}}}{k_{\rm{end}}} = 2\xi_{\rm end} + \mathcal{O}(\frac{m^2}{H_{\rm end}^2}), \end{eqnarray} where $H_{\rm end}$ indicates the Hubble scale at the end of inflation.~In the super horizon limit ($-k\tau\to 0$) shown in blue the amplitudes approach the asymptotic solution in~\eref{ATP}.~In the orange region deep inside the horizon ($-k\tau \to \infty$) the modes are in the Bunch-Davies vacuum given by the solution in~\eref{ABD}.~As the modes enter the horizon region shown in green and~\eref{qtac} is satisfied, the tachyonic instability leads to exponential growth of the amplitude for one of the transverse modes (which we take to be $A_+$).~As we will see below, for a fixed point in time (or scale factor), the power in the dark electromagnetic field is dominated by modes contained within this region, leading to a peak in the power spectrum at $k \sim a H$ when the modes have a wavelength of order the horizon.~This leads to a power spectrum at the end of inflation that is peaked at co-moving scales around $k_{\rm{end}}^{-1}$. 
The result for the total dark vector energy density at the end of inflation obtained in~\eref{rhoD} relied on the WKB approximation for the amplitude in~\eref{AWKB}, which assumed the time dependence of $\xi$ could be neglected.~This was sufficient for obtaining an estimate of the relic density and viable dark matter parameter space in~\fref{relic}.~However, the instability parameter $\xi \propto \dot\phi \propto \sqrt\epsilon$ is not only largest towards the end of inflation, but also experiences the largest \emph{growth} just as inflation is ending and the slow roll parameter approaches $\epsilon \approx 1$ (in single field inflation scenarios).~Since the energy density depends exponentially on $\xi$ (see~\eref{rhoD}), to obtain an accurate density spectrum it is crucial to account for this time dependence.~However, to account for the time dependence of $\xi$ and the Hubble parameter as well as the breakdown of the slow roll approximation, it is necessary to numerically solve the equations of motion in~\eref{EOMphi} and~\eref{AEOMT}.~This requires a robust integration procedure, which we describe in the Appendix. Having performed this numerical integration, we show in~\fref{energyspeccomp} a comparison between the energy density spectrum obtained numerically (solid) versus analytically in~\eref{Atachsol} (dashed) for both the electric (blue) and magnetic (red) components (see~\eref{rhoEandB}).~We consider a $\phi^4$ type inflaton potential and $m/H_{\rm end} \ll 1$.~On the left we show the spectra at early times during inflation as the CMB modes leave the horizon where we require $\xi$ to satisfy $\xi_{\rm{CMB}} < 2.5$~\cite{Barnaby:2010vf,Barnaby:2011vw}.~On the right we show the same spectra, but now just at the end of inflation for $\xi_{\rm end} = 9$.~As we can see, at early times during inflation, when $\xi \approx 1$, the analytic and numerical solutions are in good agreement.~However, at the end of inflation we see the shape, location, and exact height of the peak depend not just on $\xi_{\rm end}$ (which is the same in both cases), but on how $\xi$ changes with time.~We also see that the time dependence in $\xi$ leads to stronger suppression of power at large scales than the analytic case.~Note this way of suppressing power at large scales is distinct from other dark matter production mechanisms connected to inflation that also lead to a peaked power spectrum~\cite{Graham:2015rva,Alonso-Alvarez:2018tus,Berges:2019dgr}. 
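To give a flavor of what such an integration involves, the following stripped down sketch (ours; the spectra shown in the figures use the full procedure of the Appendix) solves~\eref{AEOMtau} for a single massless mode at constant $\xi$, starting from the Bunch-Davies condition~\eref{ABD} deep inside the horizon, and compares the late time amplitude to the approximation in~\eref{Atachsol}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kv

k, xi = 1.0, 5.0   # comoving momentum (units of a_end*H_end), constant xi

def rhs(tau, y):
    # y = (Re A, Im A, Re A', Im A');  A'' + (k^2 + 2 k xi/tau) A = 0
    w2 = k**2 + 2.0 * k * xi / tau   # tau < 0: tachyonic for -k*tau < 2*xi
    return [y[2], y[3], -w2 * y[0], -w2 * y[1]]

tau_i, tau_f = -200.0 / k, -1e-3 / k
A0 = np.exp(-1j * k * tau_i) / np.sqrt(2.0 * k)   # Bunch-Davies, eq. (ABD)
dA0 = -1j * k * A0
sol = solve_ivp(rhs, (tau_i, tau_f),
                [A0.real, A0.imag, dA0.real, dA0.imag],
                rtol=1e-10, atol=1e-12)

A_num = abs(sol.y[0, -1] + 1j * sol.y[1, -1])
A_ana = (np.exp(np.pi * xi) * np.sqrt(-2.0 * tau_f / np.pi)
         * kv(1, 2.0 * np.sqrt(-2.0 * xi * k * tau_f)))
print(A_num, A_ana)   # agreement at the level of the approximation
\end{verbatim}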
\begin{figure}[tbh] \begin{center} \includegraphics[scale=.36]{AnaVsNumBeg} \includegraphics[scale=.36]{AnaVsNumEnd} \end{center} \caption{Energy density spectrum for the electric (E) and magnetic (B) components in~\eref{rhoEandB} obtained numerically (solid) versus analytically (dashed) at both early times during inflation (left) as the CMB modes leave the horizon and just at the end of inflation (right) for $\xi_{\rm end} = 9$ and $m = 0$.~The spectrum for the Bunch-Davies vacuum modes is also shown (dotted).} \label{fig:energyspeccomp} \end{figure} With the numerical solutions in hand, we obtain the energy density spectrum at the end of inflation for various inflationary scenarios.~We first examine the effects of the dark vector having a mass already during inflation\footnote{In this case there may be a constraint from the Swampland conjecture which requires $m > 60$\,eV~\cite{Bastero-Gil:2018uel,Reece:2018zvv}.} which can arise through either a Stueckelberg or Higgs mechanism.~In~\fref{energyspec} we show the energy density spectrum for the electric (solid) and magnetic (dashed) component at the end of inflation as a function of $k/k_{\rm{end}}$ for $m/H_{\rm end} = 0~(\rm{black}),~3.15\cdot10^{-5}~(\rm{red}),~3.15\cdot10^{-1}~(\rm{green})$ and $\xi_{\rm end} = 9$.~The spectrum for the Bunch-Davies vacuum (black dotted) is also shown.~We see all the spectra are above the vacuum for modes with $k/a_{end} < 2\xi_{\rm end} H_{\rm end}$ showing the particle production effect.~Higher momentum modes stay in the Bunch-Davies vacuum and have a spectrum below the vacuum one once a proper subtraction scheme has been implemented (see Appendix).~This signals the absence of the tachyonic instability and particle production effects for these modes. \begin{figure}[tbh] \begin{center} \includegraphics[scale=.32]{plot_drhoAdlnk_lam300_mvec0_cmb2} \includegraphics[scale=.32]{plot_drhoAdlnk_lamAll_mvec0} \end{center} \caption{{\bf Left:}\,Energy density spectrum for the electric (solid) and magnetic (dashed) component at the end of inflation for $m/H_{\rm end} = 0~(\rm{black}),~3.15\cdot10^{-5}~(\rm{red}),~3.15\cdot10^{-1}~(\rm{green})$ and $\xi_{\rm end} = 9$.~The red and green dotted vertical lines indicate $k_m/k_{\rm{end}} = m/H_{\rm end}$ (see~\fref{confdiag} and~\eref{scaleratios1}).~The spectrum for the Bunch-Davies vacuum (black dotted) is also shown.~{\bf Right:}\,Same energy density spectra as left, but zoomed in around the peak for $\xi_{\rm end} = 3, 6, 9$ and $m/H_{\rm end} \ll 1$.} \label{fig:energyspec} \end{figure} We also see a number of features which arise when the dark vector has a mass already during inflation.~For instance, we see at around $k_m/k_{\rm{end}} = m/H_{\rm end}$ (see~\fref{confdiag} and~\eref{scaleratios1}) indicated by the red and green dotted vertical lines, the slope in the spectrum for the magnetic component changes from decreasing like $k^{-4}$ to one decreasing like $k^{-2}$ as we go to larger scales.~This change occurs when $q = m$ and the modes at large co-moving scale (top in~\fref{confdiag}) cross the Compton wavelength contour in~\fref{confdiag} causing them to become non-relativistic and damp more slowly with expansion than the still relativistic modes at smaller scales.~As discussed above, the last mode for which this occurs is at $k_m$ so modes at scales larger than $k_m^{-1}$ will see an enhancement relative to the massless case as seen in~\fref{energyspec}.~We see also that at the end of inflation this leads to domination by the magnetic component at 
scales $k^{-1} > k_m^{-1}$ where $m \gg q$.~At even larger scales $k^{-1} \gg k_m^{-1}$, we see the electric component also changes slope from one decreasing like $k^{-4}$ to one decreasing like $k^{-2}$ after going through a kink when the field time derivative $\partial_\tau A_+$ changes sign.~However, we see it still remains subdominant to the magnetic component at these scales.~This is in contrast to the massless case in which the electric component dominates at all scales\footnote{We discuss these mass effects on the spectrum at large scales in more detail in the Appendix.}.~Finally, we see that around the peak the mass effects are negligible and, in particular, the massive and massless cases have the same spectrum for modes $k > k_m$, which will always contain the majority of the peak.~This is the case unless $H_{\rm end} \lesssim m < \xi H_{\rm end}$, which we do not consider since the tachyonic production begins to be suppressed.~However, this super-heavy mass case could be an interesting possibility. We next examine the dependence of the energy density spectrum on $\xi_{\rm end}$.~Since around the peak the electric component always dominates and largely does not depend on the vector mass, we can focus on the region around the peak and consider different values for $\xi_{\rm end}$.~On the right in~\fref{energyspec} we show the same spectra as on the left, but only around the peak for $\xi_{\rm end} = 3, 6, 9$ and $m/H_{\rm end} \ll 1$.~Here the exponential sensitivity to $\xi_{\rm end}$ becomes clear as well as the domination of the electric component of the energy density.~Note also that only for the $\xi_{\rm end} = 9$ case does the energy density in the dark vector begin to approach the energy density of the inflaton ($\frac{d\rho_E}{d\,{\rm ln}\,k} \approx \rho^{\rm{end}}_I$), so back reaction effects can still be safely neglected. \begin{figure}[tbh] \begin{center} \includegraphics[scale=.32]{plotepsH_models_paper} \includegraphics[scale=.32]{plot_drhoAdlnk_models_paper_2} \end{center} \caption{{\bf Left:}\,Instability parameter $\xi$ as a function of number of e-folds for various models of inflation.~{\bf Right:}\,Energy density spectrum at the end of inflation for the same models with $\xi_{\rm end} = 9$.} \label{fig:energyspecsteep} \end{figure} Finally, we examine how the shape of the inflaton potential affects the dark vector energy density spectrum.~In~\fref{energyspecsteep} we show on the left how the instability parameter $\xi$ changes as a function of the number of e-folds for various models of inflation while the energy density spectrum at the end of inflation is shown on the right for the same models.~We see that regardless of the behaviour of $\xi$ early on during inflation, the largest growth occurs just at the very end of inflation as $\xi$ approaches $\xi_{\rm end}$.~We also see that different inflationary potentials can lead to different spectra for the same $\xi_{\rm end}$.~This opens the possibility that by precisely measuring the dark vector power spectrum, we can potentially infer properties of the inflaton potential, but we leave an exploration of this possibility to future work.~These energy density spectra will serve as the input needed to track the cosmic evolution of the dark vector energy density.~Below we study the cosmic evolution to obtain the late time energy density spectra and their fluctuations. 
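Given mode functions on a grid of co-moving momenta, the spectra follow directly from~\eref{PSdef} and~\eref{rhoEandB}.~A minimal sketch (ours), which omits the vacuum subtraction for deep sub-horizon modes discussed above and in the Appendix:
\begin{verbatim}
import numpy as np

def energy_spectra(k, A, dA_dtau, a, m):
    """Electric and magnetic energy density spectra, eq. (rhoEandB),
    from mode functions A(k) and conformal-time derivatives at scale factor a."""
    P_dA = k**3 / (2.0 * np.pi**2) * np.abs(dA_dtau)**2   # eq. (PSdef)
    P_A  = k**3 / (2.0 * np.pi**2) * np.abs(A)**2
    drhoE = P_dA / (2.0 * a**4)
    drhoB = (k**2 + a**2 * m**2) * P_A / (2.0 * a**4)
    return drhoE, drhoB

def total_rho(k, drhoE, drhoB):
    """Total energy density, eq. (rhoDint): integrate d(rho)/dln k over ln k."""
    return np.trapz(drhoE + drhoB, np.log(k))
\end{verbatim}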
\subsection{Cosmological evolution of modes} \label{sec:evo} Here we track the evolution of the tachyonic modes starting from the end of inflation through to matter radiation equality following a similar analysis to that found in~\cite{Graham:2015rva,Alonso-Alvarez:2018tus}, keeping track of both the (dark) magnetic and electric components as the Universe continues to expand.~As we saw above, when produced at the end of inflation, the electric component dominates over the magnetic.~However, as we will see below, the magnetic component quickly grows relative to the electric after inflation and `catches up' by the time the Hubble scale becomes comparable to the dark matter mass.~After this point they redshift together like radiation and then eventually like matter by the time of matter radiation equality. The cosmological evolution of the field as a function of scale factor or time (either physical or conformal) can be parametrized as, \begin{eqnarray}\label{eq:PSlate} \mathcal{P}_X (k,x) &=& \mathcal{P}_X (k,x_{\rm{end}}) \frac{|X(k,x)|^2}{|X(k,x_{\rm{end}})|^2} , ~~ X = A_+,\,\partial_\tau A_+ ,~~x = a, t, \tau, \end{eqnarray} where the power spectrum is defined in~\eref{PSdef} (with $\tau \to x$) and $x_{\rm{end}}$ indicates the end of inflation.~The input amplitude (plus derivative) and power spectrum are taken at $x_{\rm end}$ and obtained by numerically solving the equations of motion in~\eref{AEOMtau} as discussed in~\sref{PSend} and the Appendix.~The late time modes can similarly be obtained numerically, but it becomes computationally intensive to evolve them to late times.~An approximate analytic solution for the late time amplitude and its derivative can be found in different limits of the equations of motion in~\eref{AEOMtau}.~These different limits can be depicted geometrically via the conformal diagram shown in~\fref{confdiaglate}, where each limit corresponds to one of the five colored regions as indicated.~As we discuss in more detail below, taking the amplitude (plus derivative) and power spectrum at the end of inflation as input, we can obtain approximations to $X(k,x)$ in the various limits and then `glue' them together to construct the full late time amplitude as well as the mean energy density and fluctuation spectrum~\cite{Graham:2015rva,Alonso-Alvarez:2018tus}. 
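Schematically, the gluing amounts to propagating the end of inflation data through the piecewise analytic solutions and rescaling the power spectrum as in~\eref{PSlate}.~A minimal sketch (ours), using as an example the super-horizon relativistic solution and matching coefficients derived in region II below:
\begin{verbatim}
def glue_region_II(A_end, dA_da_end, a_end, a):
    """Super-horizon relativistic regime (region II below): A = c1 + c2*a,
    with c1, c2 fixed by continuity of A and dA/da at a_end."""
    c2 = dA_da_end
    c1 = A_end - a_end * dA_da_end
    return c1 + c2 * a, c2   # amplitude and its (constant) derivative dA/da

def rescale_power(P_end, X_end, X):
    """Late time power spectrum from eq. (PSlate):
    P(k,x) = P(k,x_end) * |X(k,x)|^2 / |X(k,x_end)|^2."""
    return P_end * abs(X)**2 / abs(X_end)**2
\end{verbatim}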
\begin{figure}[tbh] \begin{center} \includegraphics[scale=.4]{PhaseDiagram_late} \end{center} \caption{Conformal diagram showing co-moving scales versus scale factor from early during inflation until matter radiation equality (see text for more information).} \label{fig:confdiaglate} \end{figure} Using~\fref{confdiaglate} we derive a second set of ratios of scales (in addition to~\eref{scaleratios1}), \begin{eqnarray}\label{eq:scaleratios2} \frac{k_\ast}{k_{\rm{end}}} = \frac{a_{\rm{end}}}{a_\ast} &=& \sqrt{\bar{m}},~~ \frac{a_{\rm{end}}}{\bar{a}} = \bar{m},~~ \frac{a_{\rm{end}}}{a_{\rm{max}}} = \frac{\bar{m}}{2\xi_{\rm end}}, \end{eqnarray} where $a_\ast$ is defined as the scale factor when $H = m$ and the $k_\ast$ mode\footnote{As we discuss below, the peak in the power spectrum of the longitudinal mode occurs at $k_\ast.$} becomes non-relativistic (crosses the Compton wavelength contour in~\fref{confdiaglate}) while again $a_{\rm end}$ is the scale factor at the end of inflation.~The scale factors $\bar{a}$ and $a_{\rm max}$ (or e-folds) indicate respectively the time when the modes $k_{\rm end}$ and $k_{\rm max}$ become non-relativistic.~Since we are taking the end of inflation as the initial condition for cosmological evolution, unless otherwise stated we define $\bar{m}$ as the ratio of the dark vector mass to Hubble scale at the end of inflation, \begin{eqnarray}\label{eq:mbdef} \bar m = \frac{m}{H_{\rm end}}. \end{eqnarray} For the following discussion it will also be useful to rewrite the electric and magnetic energy density spectra respectively as a function of scale factor $a$, \begin{eqnarray} \label{eq:drhoEB} \frac{d\rho_E}{d\,{\rm ln}\,k} &=& \frac{1}{2} \left(\frac{a_{\rm{end}}}{a}\right)^4 H_{\rm{end}}^2 \mathcal{P}_{\partial_a A_+}(k,a),~~ \frac{d\rho_B}{d\,{\rm ln}\,k} = \frac{1}{2a^2} \left( \frac{k^2}{a^2} + m^2 \right) \mathcal{P}_{A_+}(k,a) \, , \end{eqnarray} where we have used $da = a^2 H d\tau$ and $H = (a_{\rm{end}}/a)^2 H_{\rm{end}}$ for a radiation dominated era. \begin{itemize} \item {\bf Region I - End of Inflation: $ H = H_{\rm end}$} During inflation (region I in~\fref{confdiaglate}) quantum fluctuations of the dark vector boson are amplified by the expansion of the Universe leading to the tachyonic instability and exponential production of one polarization.~As discussed, this production is maximal just at the end of inflation leading to a power spectrum peaked at $k_{\rm end}$ that serves as input for cosmological evolution after inflation (see~\eref{PSlate}).~Explicitly, at scale factor $a = a_{\rm end}$, we take the tachyonically enhanced amplitude and its derivative to be, \begin{eqnarray}\label{eq:AIdef} A_{\rm{I}} &=& A_+(k, a_{\rm{end}}),\\ \partial_a A_{\rm{I}} &=& \partial_a A_+(k, a)|_{a = a_{\rm{end}}} \nonumber, \end{eqnarray} where $A_+$ and $\partial_a A_+$ are obtained numerically as discussed in~\sref{PSend}.~These will serve as the initial conditions for the cosmological evolution of the tachyonic modes. 
\item {\bf Region II - Super-horizon radiation era relativistic: $ H \gg q \gg m $} Just after inflation we are in the super horizon regime $H \gg q,m$ for modes with $a m < k < k_{\rm end}$ (region II in~\fref{confdiaglate}).~At this point, the inflaton energy has been converted into radiation by reheating and thus $\xi = 0$ in the equation of motion in~\eref{AEOMT}.~For these relativistic Hubble damped modes with $q \gg m$ this gives an equation of motion, \begin{eqnarray} \label{eq:EOMII} (\partial_t^2 + H \partial_t )\,A_+ \simeq 0, ~~~\Longleftrightarrow~~~ (\partial_a a H \partial_a + H \partial_a )\,A_+ \simeq 0 , \end{eqnarray} where we have used $\dot{a} = a H$ and $da = a^2 H d\tau$.~Defining $A_{\rm{II}}$ as the approximate solution for the amplitude in this regime, during the radiation dominated era when the Hubble parameter scales as $H\propto a^{-2}$ we have, \begin{eqnarray}\label{eq:AII} A_+ \simeq A_{\rm{II}} &=& c^{\rm{II}}_1 + c^{\rm{II}}_2 a ,\\ \partial_a A_{\rm{II}} &=& c^{\rm{II}}_2 ,\nonumber \end{eqnarray} where $c^{\rm{II}}_{1,2}$ are constants in time, but functions of scale $k^{-1}$, and we see a term in the amplitude that grows linearly with scale factor.~Using~\eref{drhoEB}, we see the electric and magnetic energy density scale as, \begin{eqnarray} \frac{d\rho_E}{d\,{\rm ln}\,k} \, \propto a^{-4} ,~~ \frac{d\rho_B}{d\,{\rm ln}\,k} \, \propto \frac{q^2}{a^2} (c^{\rm{II}}_1 + c^{\rm{II}}_2 a )^2 \propto a^{-(2-4)} , \end{eqnarray} where again $q = k/a$ is the physical momentum.~However, imposing continuity of the amplitude and derivative at $a_{\rm{end}}$ we also have, \begin{eqnarray} c^{\rm{II}}_1 &=& A_{\rm{I}} - a_{\rm end} (\partial_a A_{\rm{I}}) \nonumber \\ c^{\rm{II}}_2 &=& \partial_a A_{\rm{I}}, \end{eqnarray} where we have used~\eref{AIdef}.~So we see the size of the term that grows linearly with $a$ depends on the size of the input amplitude derivative at the end of inflation.~Since this depends on $k$ and has a peak at $\sim k_{\rm{end}}$, modes at small scales around $k_{\rm end}$ have a large $c^{\rm{II}}_2$.~However, they spend less time growing linearly before re-entering the horizon while the converse is true for modes at larger scales (see~\fref{confdiaglate}).~The net effect is a brief period of damping like $a^{-2}$, which is much slower than the $a^{-4}$ damping of the electric component.~As we show in~\fref{lateenergyspec} and discuss more below, this brief period of enhancement during super-horizon evolution allows the magnetic component of the energy density to quickly `catch up' to the electric component, after which they redshift together.~Note that in the case of the longitudinal mode the factor of 3 in front of the $H \partial_t$ Hubble damping term leads instead to a solution of the form $A_L \simeq c_1 + c_2 a^{-1}$, which quickly approaches a constant as the Universe expands. 
\item {\bf Region III - Super-horizon radiation era non-relativistic: $ H \gg m \gg q $} If the dark vector has a mass already during inflation or one is generated while some modes are still super-horizon during the radiation era, we have the possibility of non-relativistic Hubble damped evolution (region III in~\fref{confdiaglate}).~Here we have the same equation of motion as in region II, leading to the same solutions, including the coefficients ($c^{\rm{III}}_i = c^{\rm{II}}_i$) which at these scales $k \ll k_{\rm end}$ are very small.~Note that this solution has the same form as in the longitudinal case, which obeys the same equation of motion in this region, though with different coefficients due to the different input spectrum at the end of inflation.~Unlike in region II, the mass term in~\eref{drhoEB} now dominates over the momentum term for the magnetic component of the dark vector energy density spectrum.~The electric and magnetic energy densities then damp as, \begin{eqnarray} \frac{d\rho_E}{d\,{\rm ln}\,k} \, \propto a^{-4} ,~~ \frac{d\rho_B}{d\,{\rm ln}\,k} \, \propto \frac{m^2}{a^2} (c^{\rm{II}}_1 + c^{\rm{II}}_2 a )^2 \propto a^{-(0-2)} . \end{eqnarray} Since we are far from the peak, these contributions to the energy density are negligible unless $m \sim H_{\rm end}$, which we do not consider since we assume $m \ll H_{\rm end}$. \item {\bf Region IV - Sub-horizon radiation era relativistic: $ q \gg m, H $} This is the region in~\fref{confdiaglate} containing the modes around the peak in the dark vector energy density spectrum.~In this regime we have for the equation of motion, \begin{eqnarray} \label{eq:EOMIV} \Big(\partial_t^2 + H \partial_t + \frac{k^2}{a^2} \Big)\,A_+ \simeq 0 ~~~\Longleftrightarrow~~~ \Big(\partial_\tau^2 + k^2 \Big)\,A_+ \simeq 0 , \end{eqnarray} where we have used $\partial_\tau a^{-1} = -H$.~Note this differs from the equation of motion for the longitudinal component which in conformal time has a $2a H\partial_\tau$ `damping' term~\cite{Graham:2015rva}.~This is due to a factor of 3 in front of the $H \partial_t$ Hubble damping term in the physical time equation of motion.~Thus~\eref{EOMIV} has the solution for the transverse mode, \begin{eqnarray} A_+ \simeq A_{IV} &=& (c^{\rm{IV}}_1 e^{ik\tau} + c^{\rm{IV}}_2 e^{-ik\tau} ) \nonumber \\ \partial_a A_{IV} &=& \frac{ik}{a_{\rm{end}}^2 H_{\rm{end}}} (c^{\rm{IV}}_1 e^{ik\tau} - c^{\rm{IV}}_2 e^{-ik\tau} ), \label{eq:AIV} \end{eqnarray} which we see has no overall damping.~The longitudinal mode amplitude on the other hand has an overall $a^{-1}$ suppression and the derivative damps like $a^{-2}$.~Note that the factor of $a_{\rm end}^2 H_{\rm end}$ comes from the Jacobian in going from $\partial_\tau$ to $\partial_a$.~For the transverse mode the electric and magnetic energy density then scale as, \begin{eqnarray} \frac{d\rho_E}{d\,{\rm ln}\,k} \, \propto a^{-4} ,~~ \frac{d\rho_B}{d\,{\rm ln}\,k} \propto a^{-2} q^2 \propto a^{-4}. \end{eqnarray} We see that the magnetic and electric components of the energy density redshift in the same way with scale factor in this regime.~Note that while the amplitude scales differently, the damping of the energy density is the same as for the longitudinal mode~\cite{Graham:2015rva}. \item {\bf Region V - Non-relativistic massive regime: $ m \gg H, q $} In the non-relativistic massive regime we have $m \gg H,q$ which gives for the equations of motion (now in terms of physical time,~see~\eref{AEOMT}), \begin{eqnarray} \label{eq:EOMV} \Big(\partial_t^2 + H \partial_t + m^2 \Big)\,A_+ \simeq 0 .
\end{eqnarray} This is the same as for the longitudinal component~\cite{Graham:2015rva} and has the solution, \begin{eqnarray}\label{eq:AV} A_+ \simeq A_{V} &=& \frac{1}{\sqrt{a}} (c^{V}_1 e^{imt} + c^{V}_2 e^{-imt} ) \nonumber \\ \partial_a A_{V} &=& \frac{im \sqrt{a} }{a_{\rm{end}}^2 H_{\rm{end}}} (c^{V}_1 e^{imt} -c^{V}_2 e^{-imt} ) , \end{eqnarray} where we have used the change of variables from physical time to scale factor, \begin{eqnarray} \label{eq:ttoa} t \to \frac{1}{2H_{\rm{end}}} \left(\frac{a^2}{a_{\rm{end}}^2} - 1\right) + t_{\rm{end}} . \end{eqnarray} We see the amplitude damps like $a^{-1/2}$ while the derivative grows like $a^{1/2}$, leading to a scaling for the magnetic and electric energy densities respectively, \begin{eqnarray} \frac{d\rho_B}{d\,{\rm ln}\,k} &\propto& \frac{ m^2}{a^3} \propto a^{-3},~~ \frac{d\rho_E}{d\,{\rm ln}\,k} \propto \frac{a_{\rm end}^4H_{\rm end}^2}{a^3} \, \propto a^{-3} . \end{eqnarray} Thus both the magnetic and electric components of the energy density redshift like matter at late times.~In this regime the longitudinal component has the same equation of motion and therefore the same solution as in~\eref{AV}.~So again we have the same damping behaviour with scale factor~\cite{Graham:2015rva}, but with different coefficients in~\eref{AV}. \end{itemize} \subsection{Late time energy density spectrum} \label{sec:lateespec} \begin{figure}[tbh] \begin{center} \includegraphics[scale=.37]{Normrad_drhoElate_vs_Rk_ep9_mbem5_B} \includegraphics[scale=.37]{Normrad_drhoBlate1_vs_Rk_ep9_mbem5_C} \end{center} \caption{On the left and right we show the electric and magnetic energy density spectra respectively (near the peak) for various numbers of e-folds after inflation, $N_e = 0\,({\rm dashed~black}), N_\ast/10\,({\rm blue}), N_\ast\,({\rm orange}), N_{\rm{max}}\,({\rm light~brown})$ where $a = a_{\rm end} e^{N_e}$ defines the scale factor (see~\fref{confdiaglate}).~We have normalized the spectrum to the energy density of the inflaton at the end of inflation for $\xi_{\rm{end}} \simeq 9, \bar{m} \simeq 10^{-5}$ with a $\phi^4$ inflaton potential and factored out an $a^4$.} \label{fig:lateenergyspec} \end{figure} In~\fref{lateenergyspec} and~\fref{lateenergyspec2} we summarize the evolution of the energy density spectrum after inflation focusing on modes with co-moving momentum $k_\ast \lesssim k \lesssim k_{\rm max}$ which contain the vast majority of the power.~In~\fref{lateenergyspec} on the left and right we show the electric and magnetic energy density spectra respectively while in~\fref{lateenergyspec2} we show their sum for various e-folds after inflation with $\xi_{\rm end} \simeq 9, \bar{m} \simeq 10^{-5}$ and a $V \propto \phi^4$ inflaton potential.~The oscillatory behaviour seen in~\fref{lateenergyspec} arises from the oscillatory solutions for the modes (see~\eref{AIV} and~\eref{AV}) in regions IV and V of~\fref{confdiaglate}.~For modes $k_\ast < k < k_{\rm end}$ this occurs once they re-enter the horizon while modes with $k_{\rm end} < k < k_{\rm max}$, which contain the peak of the power spectrum, approach but never exit the horizon during inflation.~Modes with $k < k_\ast$ at $N > N_\ast$ remain outside the horizon for longer than modes with $k > k_\ast$ (see~\fref{confdiaglate}).~Thus they have not had enough time to begin oscillating upon reentering the horizon, so we see a still smooth spectrum in this regime.~For the input values $\xi_{\rm end} = 9$ and $\bar{m} = 10^{-5}$ this gives in terms of e-folds $(N_e)$ after inflation $N_e = N_\ast \simeq
5.65,\,\bar{N} \simeq 11.30,\,N_{\rm{max}} \simeq 14.19$ where $a = a_{\rm end} e^{N_e}$.~There are a number of evident features that reflect the behavior of the modes in the different limits of the equations of motion discussed in~\sref{evo}, and they can be understood with the help of~\fref{confdiaglate} and~\eref{scaleratios1},~\eref{scaleratios2}. \begin{figure}[tbh] \begin{center} \includegraphics[scale=.55]{Normrad_drho_vs_Rk_ep9_mbem5} \end{center} \caption{Total energy density spectra (electric plus magnetic shown in~\fref{lateenergyspec})~for~$N_e = 0\,({\rm dashed~black}), N_\ast\,({\rm red}), \bar{N}\,({\rm blue}), N_{\rm{max}}\,({\rm orange}), 1.5\, N_{\rm{max}}\,({\rm grey})$ where $a = a_{\rm end} e^{N_e}$ and again normalized to $\rho^{\rm{end}}_I$ with $a^4$ factored out.~See text for more information.} \label{fig:lateenergyspec2} \end{figure} Looking first at the electric energy density spectrum on the left in~\fref{lateenergyspec}, we normalize to the energy density of the inflaton at the end of inflation and factor out an $a^4$ in order to compare to radiation-like damping.~We see that once the input electric energy density spectrum, which dominates over the magnetic component, is set at the end of inflation (black dashed), the electric energy density redshifts like radiation until $a_\ast$ (orange).~At this point scales larger than $k^{-1}_\ast$ begin redshifting like matter while modes at smaller scales around the peak continue redshifting like radiation.~Modes around the peak start becoming non-relativistic, and thus relatively enhanced as they begin redshifting like matter, once $q \leq m$ which happens at progressively later times for modes with larger and larger momenta as can be understood geometrically in~\fref{confdiaglate}.~The last mode to exit the horizon during inflation, $k_{\rm end} = a_{\rm end} H_{\rm end}$, becomes non-relativistic at $\bar{a}$ while the highest momentum tachyonic mode, $k_{\rm max} = 2\xi_{\rm end}\,k_{\rm end}$, becomes non-relativistic at $a_{\rm{max}}$ (light brown).~After this point, all of the modes redshift together like matter with $\propto a^{-3}$ damping and the shape of the electric energy density spectrum no longer changes. On the right of~\fref{lateenergyspec} we examine the magnetic energy density spectrum which has a more interesting evolution.~We again normalize to the energy density of the inflaton at the end of inflation and factor out an $a^{4}$.~We see that after initially being subdominant to the electric energy density at the end of inflation (black dashed), there is a brief period where modes around the peak damp like $a^{-(2-3)}$ (blue) as compared to the $a^{-4}$ damping of the electric component.~As discussed in~\sref{evo}, this brief period of relative enhancement is due to the linear growth that the mode amplitudes in region II experience while they are super-horizon (see~\eref{AII}) combined with continuity of the amplitude and derivative at the various boundaries.~After this brief period, the modes around the peak quickly begin redshifting like radiation with the usual $a^{-4}$ damping well before the time they reach $a_\ast$ (orange).~It is this very brief period of growth relative to the electric component, typically lasting around an e-fold or less after inflation, that allows the magnetic energy density to ``catch up'' to the electric by the time $H = m$ at $a_\ast$, after which they redshift together like radiation and then eventually like matter (see~\fref{confdiaglate}).
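Before moving on, we note for reference that the characteristic e-fold values quoted above follow directly from the scale ratios in~\eref{scaleratios2}.~With $a = a_{\rm end} e^{N_e}$ we have, \begin{eqnarray} N_\ast = \frac{1}{2}\,{\rm ln}\frac{1}{\bar{m}} \, ,~~ \bar{N} = 2 N_\ast = {\rm ln}\frac{1}{\bar{m}} \, ,~~ N_{\rm{max}} = \bar{N} + {\rm ln}\,(2\xi_{\rm end}) \, , \end{eqnarray} so that for $\xi_{\rm end} \simeq 9$ the splitting is $N_{\rm{max}} - \bar{N} = {\rm ln}\,18 \simeq 2.9$, consistent with the values quoted above.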
In~\fref{lateenergyspec2} we show the total dark vector energy density spectrum given by the sum of the electric and magnetic components.~We see that when summed the oscillations of the electric and magnetic components cancel one another, giving a smooth spectrum, as can be shown analytically by plugging~\eref{AIV} or~\eref{AV} into~\eref{drhoEB}.~As we can also see, the total dark vector energy density spectrum essentially evolves like (the envelope of) the electric component.~This is because the electric component dominates at the end of inflation when the dark vector is produced and then redshifts along with the magnetic component once the magnetic component catches up as discussed above.~Thus to a good approximation the total energy density evolution has the same qualitative behavior as the electric component.~In particular, after inflation ends modes around the peak redshift like radiation until $q = m$ after which they redshift like matter (see~\fref{peakevo}) as the mass term dominates the dark vector energy density. As we see from the energy density spectrum in~\fref{lateenergyspec}, the vast majority of the power is located at scales $\sim k_{\rm end}^{-1}$ around the peak which are vastly smaller than the scales probed by CMB measurements.~To see this explicitly we write for modes around the peak $\sim k_{\rm end}^{-1}$, \begin{eqnarray}\label{eq:kpeak} 1/k_{\rm end} &=& (a_{\rm end} H_{\rm end})^{-1} = (a_{\rm RH} \epsilon_H H)^{-1} = (\frac{T_0}{T_{\rm RH}} \epsilon_H H)^{-1} \nonumber \\ &\approx& \frac{10^{-1}}{T_0}\,\frac{ \epsilon_R}{ \epsilon_H} \left( \frac{M_{\rm Pl}}{H} \right)^{1/2} \approx 10\ {\rm km} \ \frac{\epsilon_R}{\epsilon_H} \left( \frac{100 \ {\rm GeV}}{H} \right)^{1/2} \, , \end{eqnarray} where we have assumed $a_{\rm end} = a_{\rm RH}$ and used~\eref{TRH} as well as $a_{\rm RH}/a_0 = T_0/T_{\rm RH}$ with the scale factor today set to $a_0 = 1$.~In the absence of an extreme hierarchy between $\epsilon_R$ and $\epsilon_H$, we see that the typical co-moving scale associated with the peak is $\lesssim 10\,\rm{km}$, which is tiny on cosmological scales, and thus we expect isocurvature to be negligible on the large scales relevant for the CMB.~However, since the peak scale is relevant for how the dark matter is distributed spatially after matter radiation equality and for the evolution of density perturbations, below we compute the power spectrum of isocurvature perturbations.~If the vector has a mass already during inflation, there could be regions of parameter space where the longitudinal and transverse components give comparable contributions to the dark matter energy density.~As discussed below, in this case there would be a double-peaked structure in the energy density spectrum with one peak corresponding to the transverse component located at co-moving momenta $\sim k_{\rm{end}}$ and a second one corresponding to the longitudinal mode at $k_\ast$~\cite{Graham:2015rva}.
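As a quick numerical illustration of~\eref{kpeak}, the following minimal Python snippet (the function name and sample inputs are ours, purely for illustration) evaluates the co-moving peak scale for a given inflationary Hubble scale: \begin{verbatim}
import math

def k_end_inverse_km(H_GeV, eps_ratio=1.0):
    """Peak co-moving scale 1/k_end in km, from the estimate in
    eq. (kpeak): 1/k_end ~ 10 km * (eps_R/eps_H) * sqrt(100 GeV / H)."""
    return 10.0 * eps_ratio * math.sqrt(100.0 / H_GeV)

# Bracketing the range of Hubble scales considered in the text:
print(k_end_inverse_km(1e2))   # H = 100 GeV    -> 10 km
print(k_end_inverse_km(1e14))  # H = 10^14 GeV  -> 1e-5 km ~ cm
\end{verbatim}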
\subsection{Isocurvature and density contrast power spectrum} \label{sec:isopec} Up until now, we have considered only the mean energy density and spectra which can be written in terms of the dark vector field power spectrum as given in~\eref{rhoDint}.~However, the dark vector energy density is subject to fluctuations which can be of the same order as the mean energy density.~These fluctuations will be independent of the inflaton ones which set the curvature (or adiabatic) perturbations that are imprinted on the metric.~The fluctuations in the dark vector energy density will therefore contribute to isocurvature perturbations and can have implications for the CMB as well as structure formation and clumping~\cite{Kolb:1994fi} of the dark matter.~Since measurements of the CMB~\cite{Akrami:2018odb} severely constrain the amplitude of isocurvature perturbations, we must ensure that they are suppressed on long length scales.~Following~\cite{Graham:2015rva,Alonso-Alvarez:2018tus}, here we compute these isocurvature perturbations and demonstrate explicitly that they are highly suppressed at the large scales relevant for the CMB. The starting point is the density contrast field $\delta(\vec x)$ which describes deviations from the mean dark vector energy density $\langle \rho \rangle$ and is defined as, \begin{eqnarray} \label{rhoqu} \rho(\vec x) = \langle \rho \rangle (1 + \delta(\vec x)) . \end{eqnarray} Near the time of matter radiation equality and once the vector dark matter begins redshifting like matter, we can describe the energy density via the mass term in the magnetic component.~As we derive in the Appendix, the power spectrum of the Fourier transform of $\delta(\vec x)$ can be written in terms of products of the power spectrum of the tachyonically enhanced transverse mode as, \begin{eqnarray}\label{eq:deltaPS} {\cal P}_{\delta}(k,t) &=& \frac{k^2}{\left[ \int_0^\infty \frac{dk'}{k'} {\cal P}_{A_+} (k',t) \right]^2} \int_0^\infty dq \int_{|q-k|<p<q+k} dp \ \frac{1}{q^2 p^2} {\cal P}_{A_+}(p,t) {\cal P}_{A_+}(q,t)\, , \end{eqnarray} where we have taken the mass term in~\eref{rhoDint} to represent the energy density at late times and defined the power spectrum in terms of the two-point function of a random variable $X$, \begin{eqnarray} \label{powdqu} \langle X(\vec k) X(\vec k') \rangle = (2\pi)^3 \delta^3(\vec k + \vec k') \frac{2\pi^2}{k^3} {\cal P}_{X}(k) \, , ~~~X \equiv A_+, \delta. \end{eqnarray} We see in~\eref{deltaPS} that the power spectrum for $\delta$ corresponding to the transverse vector mode\,\footnote{We note that~\eref{deltaPS} agrees with the result found in~\cite{Alonso-Alvarez:2018tus} for \emph{scalar} dark matter produced during inflation.} differs from the one corresponding to the longitudinal mode~\cite{Graham:2015rva}.
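The double integral in~\eref{deltaPS} is straightforward to evaluate with standard quadrature.~A minimal Python sketch (using a hypothetical log-normal stand-in for the peaked input spectrum ${\cal P}_{A_+}$, since in practice the input is the numerically computed late-time spectrum) is: \begin{verbatim}
import numpy as np
from scipy.integrate import quad, dblquad

def P_A(k, k_peak=1.0, width=0.5):
    # Hypothetical peaked stand-in for the transverse power spectrum
    return np.exp(-0.5 * (np.log(k / k_peak) / width) ** 2)

def P_delta(k, kmin=1e-3, kmax=1e3):
    # normalization <A^2>^2 = [int dk'/k' P_A(k')]^2
    norm, _ = quad(lambda kp: P_A(kp) / kp, kmin, kmax)
    # double integral over q and p with |q - k| < p < q + k
    # (slow but transparent; a grid-based integration is faster)
    inner, _ = dblquad(lambda p, q: P_A(p) * P_A(q) / (p * q) ** 2,
                       kmin, kmax,                       # q range
                       lambda q: max(abs(q - k), kmin),  # p lower limit
                       lambda q: q + k)                  # p upper limit
    return k ** 2 * inner / norm ** 2
\end{verbatim}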
Performing the integral in~\eref{deltaPS} numerically we show the density contrast power spectrum in~\fref{density} for $\bar{m} \ll 1$ and $\xi_{\rm end} \approx 9$.~We see explicitly that the spectrum falls off sharply at large scales.~We also see that the location of the peak does not shift noticeably from the peak in the mean energy density spectrum.~At scales relevant for the CMB we see the power is completely negligible, though for $m \sim H_{\rm end}$ and $\xi_{\rm{CMB}} \sim 2.5$ these isocurvature perturbations may have observable consequences; we do not explore this possibility here.~To determine the matter distribution and the scale of clumping of the vector dark matter, one needs to follow the evolution of these density perturbations through the non-linear regime until today.~The power spectrum in~\fref{density} serves as the input for this evolution~\cite{Berges:2019dgr}, beginning at matter radiation equality, but further investigation is left to future work. \begin{figure}[tbh] \begin{center} \includegraphics[scale=.5]{PAdelta} \end{center} \caption{The late time density contrast power spectrum for $\bar{m} \ll 1$ and $\xi_{\rm end} \approx 9$.} \label{fig:density} \end{figure} Finally, we comment that in the mechanism presented here, isocurvature for the transverse mode is suppressed by the fact that the tachyonic modes experience maximal growth when they are of order the horizon size, combined with the time dependence of the tachyonic instability parameter $\xi$ \emph{during} inflation.~This is in contrast to the mechanism for suppressing isocurvature in the energy density of the longitudinal mode\,\footnote{Note this is also distinct from how isocurvature is suppressed on long length scales for scalar dark matter production mechanisms~\cite{Alonso-Alvarez:2018tus,Berges:2019dgr,Markkanen:2018gcw} which are also connected to inflation.} which is due to how the different modes redshift \emph{after} inflation~\cite{Graham:2015rva}. \subsection{Clumping of vector dark matter} \label{sec:clumping} As discussed in other `clumpy' dark matter scenarios~\cite{Graham:2015rva,Alonso-Alvarez:2018tus,Berges:2019dgr}, the scale corresponding to the peak of the power spectrum of energy density fluctuations also has implications for the scale on which the dark matter `clumps'.~In one of these scenarios, a dark vector has a mass during inflation and the longitudinal mode is necessarily produced by inflationary fluctuations~\cite{Graham:2015rva}.~In this case, due to how the modes redshift after inflation, a peak is produced in the energy density spectrum at $k_\ast$ (see~\fref{confdiaglate}) where, \begin{eqnarray} 1/k_\ast \sim 10^{10} \,{\rm km} \times \sqrt{ \frac{10^{-5} \, {\rm eV }}{m} } . \end{eqnarray} We see that for the longitudinal mode the location of the peak only depends on the dark vector mass, which is constrained to be $m \geq 10^{-5} \, {\rm eV }$ if it makes up the entirety of the dark matter~\cite{Graham:2015rva}.~Considering the range of Hubble scales during inflation $10^2 < H/{\rm GeV} < 10^{14}$ this leads to the range of `clumping' scales for the longitudinal component, \begin{eqnarray} 10^{-1} \, {\rm km} \ < k_\ast^{-1} < 10^{10} \, {\rm km} \, .
\end{eqnarray} In the case of the transverse component produced via tachyonic instability, we saw in~\eref{kpeak} that the location of the peak does not depend on the dark vector mass, but instead on the Hubble scale during inflation as well as $\epsilon_R$ and $\epsilon_H$.~For the same range of Hubble scales during inflation $10^2 < H/{\rm GeV} < 10^{14}$ and taking $\epsilon_H / \epsilon_R \sim 1$ we find the transverse component instead clumps on scales in the range, \begin{eqnarray} {\rm cm} < k_{\rm end}^{-1} < 10 \, {\rm km} . \end{eqnarray} The locations of the two peaks are related by the Hubble scale and dark vector mass, \begin{eqnarray} \frac{k_{\rm end}^{-1} }{k_*^{-1} }= \frac{\epsilon_R}{\epsilon_H} \left( \frac{m}{H} \right)^{1/2} \sim \left( \frac{m}{H} \right)^{1/2} \, . \end{eqnarray} We see in general the transverse mode clumps on much smaller scales than the longitudinal mode, though when $m\sim H$ the scales can be comparable.~As discussed, this also opens the possibility of a double peak in the power spectrum when both the longitudinal and transverse components contribute appreciably to the relic abundance, which would imply dark matter clumping on two different scales.~This `doubly clumpy' possibility can of course only occur if the dark vector has a mass already during inflation and thus would constitute a striking signal of vector dark matter with an inflationary origin. \section{{\bf Summary and Outlook}} In this study we have examined in detail the recently proposed mechanism~\cite{Bastero-Gil:2018uel} for producing non-thermal dark photon dark matter at the end of inflation.~This mechanism can generate the observed dark matter relic abundance for dark vector masses in the range $\mu\,{\rm eV} \lesssim m \lesssim 10\,{\rm TeV}$ and Hubble scales during inflation in the range $100\,{\rm GeV} \lesssim H \lesssim 10^{14}\,{\rm GeV}$.~We have focused in particular on the case where the dark vectors are relativistic at the time their mass is generated and examined the associated cosmic evolution to compute the relic abundance today.~We have also examined the power spectrum and cosmic evolution of the dark vector modes demonstrating explicitly that the late time spectrum preserves the peak generated at the end of inflation.~We have shown that the peak corresponds to small physical scales today, $\ell_{\rm today} \sim {\rm cm} - 100\,{\rm km}$, with large density fluctuations at $\ell_{\rm today}$ implying a clumpy nature for the vector dark matter.~The case of a non-relativistic dark vector at the time its mass is generated has been left to forthcoming work.
There are a number of interesting avenues to explore the phenomenology associated with the dark photon dark matter production mechanism presented here.~If there are other dark charged particles present during inflation they can potentially be produced via a dark Schwinger production mechanism.~In this case dark charged fermions and scalars may also contribute to the final dark matter relic abundance.~If the dark vector obtains its mass via a dark Higgs mechanism, this can lead to an alternative cosmic evolution to the one explored in this study and there may also be a possibility of generating gravitational wave signals associated with the dark Higgs phase transition~\cite{Breitbach:2018ddu}.~Allowing for non-zero (but small) kinetic mixing between the dark and visible photons can lead to interesting dark matter phenomenology as has been thoroughly explored in many studies.~Whether any of the initial polarization of the dark vector survives cosmic evolution and has observable effects is also worth investigating.~Explorations of these various interesting possibilities are ongoing. \vspace{0.5cm} \noindent {\bf \emph{Acknowledgements}}: We thank Prateek Agrawal, Diego Blas, Adam Falkowski, Bohdan Grzadkowski, Takeshi Kobayashi, Eric Madge, Manuel Masip, Gilad Perez, Maxim Pospelov, Jennifer Schober, Pedro Schwaller, Javi Serra, Anna Socha, Walter Tangarife, Tomer Volansky, and Tien-Tien Yu for useful comments and discussions.~This work has been partially supported by MINECO grants PID2019-106087GB-C22,~including ERDF (J.S.,\,R.V.M.),~PID2019-105943GB-I00 (M.B.G.),~Junta de Andaluc\'{i}a Projects FQM-101, A-FQM-211-UGR18, P18-FR-4314, (fondos FEDER), and SOMM17/6104/UGR (M.B.G.,\,J.S.,\, R.V.M.).~L.U.~acknowledges support from the PRIN project ``Search for the Fundamental Laws and Constituents'' (2015P5SBHT\textunderscore 002).~R.V.M. would also like to acknowledge the Mainz Institute for Theoretical Physics (MITP) of the Cluster of Excellence PRISMA+ (Project ID 39083149) for their hospitality and partial support as well as participants of \emph{The Mysterious Universe} workshop for useful discussions and stimulating atmosphere while part of this work was completed. \section*{{\bf Appendix}}\label{sec:app} \subsection{Derivation of equations of motion, energy density, and pressure} Here we derive the equations of motion as well as the energy and pressure densities. 
\begin{center} {\bf\emph{Conventions}} \end{center} Before presenting the derivation we define the conventions used.~For the metric we have, \begin{eqnarray} ds^2 = -dt^2 + a^2(t) d\vec x^2 = a^2(\tau)(-d\tau^2 + d\vec x^2) \, , \end{eqnarray} \begin{eqnarray} g_{\mu\nu} &=& \begin{pmatrix} -1 & 0 \\ 0 & a^2(t) \delta_{ij} \end{pmatrix} = a^2(\tau) \begin{pmatrix} -1 & 0 \\ 0 & \delta_{ij} \end{pmatrix} \, , \nonumber \\ g^{\mu\nu} &=& \begin{pmatrix} -1 & 0 \\ 0 & a^{-2}(t) \delta^{ij} \end{pmatrix}= a^{-2}(\tau) \begin{pmatrix} -1 & 0 \\ 0 & \delta^{ij} \end{pmatrix} \,, ~~~~~~ \end{eqnarray} with $t$ the cosmic time and $\tau$ the conformal time.~The Levi-Civita tensor is given by, \begin{eqnarray} \epsilon^{\mu\nu\rho\sigma} = \frac{\tilde \epsilon^{\mu\nu\rho\sigma}}{\sqrt{-g}} \, , \quad \epsilon_{\mu\nu\rho\sigma} = \sqrt{-g} \ \tilde \epsilon_{\mu\nu\rho\sigma} \, , \end{eqnarray} with the following conventions for the anti-symmetric Levi-Civita symbol and metric, \begin{eqnarray} \tilde \epsilon^{0123} = + 1\, , \quad \tilde\epsilon_{0123} = -1 \, ,\nonumber \\ \sqrt{-g} \equiv \sqrt{- \det (g_{\mu\nu})} = a^3(t) = a^4(\tau) \, . \end{eqnarray} The three-dimensional Levi-Civita symbol is related to the four-dimensional one as \begin{eqnarray} \tilde \epsilon_{ijk} = \tilde \epsilon^{ijk} = \tilde \epsilon^{0ijk} \, . \end{eqnarray} For the Hubble parameter we have in terms of the scale factors, \begin{eqnarray} H \equiv \frac{1}{a(t)} \frac{da(t)}{dt} = \frac{\dot a(t)}{a(t)} \, , \qquad {\cal H} \equiv \frac{1}{a(\tau)} \frac{da(\tau)}{d\tau} = \frac{a'(\tau)}{a(\tau)} \, , \end{eqnarray} where we use an overdot for the cosmic time derivative and a prime for the conformal time derivative.~We work in comoving momentum space where for the classical field $A_\mu$ we have: \begin{align} A_0(\vec x, \tau) & = \int \frac{d^3 k}{(2\pi)^3} A_0(\vec k, \tau) e^{i \vec k \cdot \vec x} \, , \\ A_i(\vec x, \tau) & = \int \frac{d^3 k}{(2\pi)^3} A_i(\vec k, \tau) e^{i \vec k \cdot \vec x} \, . \end{align} As the gauge field is real, $A_\mu(\vec x, \tau) = A^*_\mu(\vec x, \tau)$ which implies $A^*_\mu(\vec k, \tau) = A_\mu(-\vec k, \tau)$.~We decompose $\vec A(\vec k,\tau)$ along its transverse ($\vec A_T$) and longitudinal components ($A_L$), \begin{align} \vec k \cdot \vec A_T & = 0 \\ \vec k \cdot \vec A & = |\vec k| A_L \equiv k A_L \, , \end{align} and further decompose the transverse modes into the usual two polarizations \begin{equation} \label{eq:trpola} \vec A_T(\vec k,\tau) = \sum_{\lambda = \pm} \vec \epsilon_\lambda (\vec k) \left[ A_\lambda(k,\tau) + A_\lambda(k,\tau)^* \right] \, , \end{equation} where the polarization vectors satisfy, \begin{align} & \vec k \cdot \vec \epsilon_\pm(\vec k) = 0 \, , \quad \vec k \times \vec \epsilon_\pm(\vec k) = \mp i k \vec \epsilon_\pm(\vec k)\, , \\ & \vec k \cdot \vec \epsilon_L (\vec k) = k \, , \quad \vec k \times \vec \epsilon_L(\vec k) = 0 \, , \\ & \vec \epsilon_\lambda(\vec k)^* = \vec \epsilon_\lambda(-\vec k)\, , \quad \vec \epsilon_\lambda(\vec k) \cdot \vec \epsilon_{\lambda'}(- \vec k) = \delta_{\lambda \lambda'} \, . 
\end{align} The power spectra associated with the two-point correlation functions of the classical field are, \begin{align} \langle \vec A(\vec k,t) \cdot \vec A(\vec k',t) \rangle & = (2\pi)^3 \delta^3(\vec k + \vec k') \frac{2\pi^2}{k^3} {\cal P}_A(k,t) \, , \nonumber \\ \langle \partial_t \vec A(\vec k,t) \cdot \partial_t \vec A(\vec k',t) \rangle & = (2\pi)^3 \delta^3(\vec k + \vec k') \frac{2\pi^2}{k^3} {\cal P}_{\partial_t A}(k,t) \, , \label{eq:powspclass} \end{align} where we can then write for the two-point function in position space, \begin{eqnarray} \label{eq:powspdef} \langle \vec A(\vec x,t)^2 \rangle = \int_0^\infty \frac{dk}{k} {\cal P}_A(k,t) = \int \frac{d^3k}{(2\pi)^3} \frac{2\pi^2}{k^3} {\cal P}_A(k,t) \, . \end{eqnarray} For the quantum field we expand in terms of creation and annihilation operators, \begin{eqnarray} \label{eq:Aqu} \hat{\vec A} (\vec x,t) = \sum_{\lambda = \pm, L} \int \frac{d^3k}{(2\pi)^3} e^{i \vec k \cdot \vec x} \ \vec \epsilon_\lambda(\vec k) \ [A_\lambda(k,t) a_\lambda(\vec k) + A_\lambda(k,t)^* a_\lambda^\dagger (-\vec k) ] \, , \end{eqnarray} where the creation and annihilation operators satisfy, \begin{eqnarray} a_\lambda(\vec k) |0\rangle = 0 \, , \quad \langle 0| a_\lambda^\dagger (\vec k) = 0\, , \quad \left[a_\lambda(\vec k), a^\dagger_{\lambda'}(\vec k') \right] = (2\pi)^3 \delta_{\lambda \lambda'} \delta^3(\vec k - \vec k') \, . \end{eqnarray} Two-point correlation functions with the quantum field are then obtained by sandwiching the field operators between vacuum states, \begin{eqnarray}\label{eq:A2quantum} \langle \hat{\vec A}(\vec x,t)^2 \rangle &\equiv& \langle 0| \hat{\vec A}(\vec x,t)^2 |0\rangle \nonumber \\ &=& \sum_{\lambda,\lambda'} \int \frac{d^3k}{(2\pi)^3} \frac{d^3k'}{(2\pi)^3} e^{i (\vec k +\vec k') \cdot \vec x} \vec \epsilon_\lambda(\vec k) \cdot \vec \epsilon_{\lambda'}(\vec k') \nonumber \\ &\times&\langle 0| (A_\lambda(k,t) a_\lambda(\vec k) A_{\lambda'}(k',t)^* a^\dagger_{\lambda'}(-\vec k')) |0 \rangle \nonumber \\ &=& \sum_\lambda \int\frac{d^3 k }{(2\pi)^3} |A_\lambda(k,t)|^2 \, ,~~~~ \end{eqnarray} where to go from the second to the third line we used the commutation relations.~Comparing to the definition of the power spectrum in~\eref{powspdef} we find in the quantum case, \begin{eqnarray} \label{eq:powquantum} {\cal P}_{\hat A}(k,t) = \frac{k^3}{2\pi^2} \sum_\lambda |A_\lambda(k,t)|^2 \, . \end{eqnarray} Similarly we can compute the power spectrum for the field time derivative $\langle (\partial_t\hat{\vec A}(\vec x,t))^2 \rangle$, \begin{eqnarray} \langle (\partial_t\hat{\vec A}(\vec x,t))^2 \rangle &=& \sum_\lambda \int\frac{d^3 k }{(2\pi)^3} |\partial_t A_\lambda(k,t)|^2 \,, \nonumber \\ {\cal P}_{\partial_t \hat A}(k,t) &=& \frac{k^3}{2\pi^2} \sum_\lambda |\partial_t A_\lambda(k,t)|^2 \, . \end{eqnarray} Note that the time dependence in the Fourier expansion is only in the function $A_\lambda(k,t)$.
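As a practical aside, the map from mode functions to these power spectra is simple to implement; a schematic helper (the function name is ours, purely illustrative) reads: \begin{verbatim}
import numpy as np

def power_spectrum(k, mode_functions):
    """P(k,t) = k^3/(2 pi^2) * sum_lambda |A_lambda(k,t)|^2,
    cf. the quantum power spectrum definition above."""
    return k**3 / (2.0 * np.pi**2) * sum(abs(A)**2 for A in mode_functions)
\end{verbatim}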
\begin{center} {\bf\emph{Equations of motion}} \end{center} To derive the equations of motion we consider the following action, \begin{eqnarray}\label{eq:Lagrangian} S = \int d^4 x \, L &=& \int d^4 x \sqrt{-g} \Big[ -\frac{1}{2}g^{\mu\nu} \partial_\mu \phi \partial_\nu \phi - V(\phi) -\frac{1}{4} g^{\mu\nu}g^{\rho\sigma}F_{\mu\rho}F_{\nu\sigma} \nonumber \\ &-& \frac{\alpha}{4f} \phi \frac{1}{2} \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu}F_{\rho\sigma} - \frac{1}{2} m^2 g^{\mu\nu}A_\mu A_\nu \Big] \, , \end{eqnarray} where the field strength is defined in the usual way as, \begin{equation} F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \, . \end{equation} Note that we have inserted the mass term for the vector field \`a la Proca, so this Lagrangian describes a model with a scalar field and a massive vector field and is, strictly speaking, not a gauge theory; this distinction is not important for present purposes.~Note also that, \begin{eqnarray} \frac{1}{2} \epsilon^{\mu\nu\rho\sigma}F_{\mu\nu} F_{\rho\sigma} &=& \frac{4}{\sqrt{-g}} \left[ (\partial_\tau \vec A - \nabla A_0) \cdot (\nabla \times \vec A) \right] \\ &\rightarrow& \frac{4}{\sqrt{-g}} \int \frac{d^3 k \ d^3 k'}{(2\pi)^6} e^{i(\vec k + \vec k') \cdot \vec x} (\partial_\tau \vec A(\vec k,\tau) - i \vec k A_0(\vec k, \tau)) \cdot (i \vec k' \times \vec A(\vec k', \tau)) \, ,\nonumber \end{eqnarray} where we see that this vanishes for the longitudinal mode for which $\vec k' \times \vec A(\vec k', \tau) = 0$.~Thus, the term with $F \tilde F$ only affects the transverse modes as we will see explicitly below. The equations of motion are obtained from the Lagrangian density via, \begin{align} \partial_\alpha \frac{\delta L}{\delta(\partial_\alpha \phi)} - \frac{\delta L}{\delta \phi} & = 0 \, , \label{eq:EOMphiAP} \\ \partial_\alpha \frac{\delta L}{\delta(\partial_\alpha A_\beta)} - \frac{\delta L}{\delta A_\beta} & = 0 \label{eq:EOMA} \, . \end{align} From the first equation we obtain, in cosmic time, \begin{eqnarray} \ddot \phi - \frac{1}{a^2(t)}\partial_i^2 \phi + 3 H \dot \phi + \frac{\partial V}{\partial\phi} + \frac{\alpha}{4 f} \frac{1}{2} \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu}F_{\rho\sigma} = 0 \, , \end{eqnarray} or in conformal time, \begin{eqnarray} \partial_\tau^2 \phi - \partial_i^2 \phi + 2 {\cal H} \partial_\tau \phi + a^2(\tau) \frac{\partial V}{\partial\phi} + a^2(\tau) \frac{\alpha}{4 f} \frac{1}{2} \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu}F_{\rho\sigma} = 0 \, . \end{eqnarray} Next, we turn to the equations of motion for the vector field, which we derive in conformal time.~From here on, whenever we write $a$ for the scale factor we implicitly mean $a(\tau)$.~After some algebra~\eref{EOMA} can be cast into the form, \begin{eqnarray} g^{\alpha\nu} g^{\beta\sigma} \partial_\alpha F_{\nu\sigma} +\frac{\alpha}{2f} (\partial_\alpha \phi) \epsilon^{\alpha\beta \rho \sigma}F_{\rho\sigma} - m^2 g^{\beta \mu} A_\mu = 0 \, . \label{eq:EOMAbeta} \end{eqnarray} This gives four equations of motion for the massive vector field where the $\beta = 0$ and $\beta = l$ components can be written respectively as, \begin{eqnarray} \label{eq:beta0} (\partial_i^2 A_0 - \partial_0 \partial_i A_i ) + \frac{\alpha}{f} \tilde \epsilon^{ijk} (\partial_i \phi) \partial_j A_k - a^2 m^2 A_0 &=& 0 \,,\\ g^{00} g^{li} \partial_0 F_{0i} + g^{kj} g^{li} \partial_k F_{ji} + \frac{\alpha}{2f} \left[ (\partial_0\phi) \epsilon^{0ljk}F_{jk} + (\partial_i\phi) \epsilon^{ilk0}F_{k0} \right] - m^2 g^{li}A_i &=& 0 \, .
\end{eqnarray} After some algebra, the $\beta = l$ component can be brought into the form, \begin{eqnarray} \label{eq:betal} \partial_0 \partial_0 A_l &-& \partial_i \partial_i A_l - \partial_l \partial_0 A_0 + \partial_l \partial_i A_i \nonumber \\ &-& \frac{\alpha}{f} \tilde\epsilon^{ljk} \left[ (\partial_0\phi) \partial_j A_k +\frac{1}{2} (\partial_j \phi) (\partial_k A_0 - \partial_0 A_k) \right] + a^2 m^2 A_l = 0 \, . \end{eqnarray} As a massive vector has three degrees of freedom, we need to apply one constraint, which can be obtained by acting on \eref{EOMAbeta} with $\partial_\beta$, \begin{eqnarray} \label{eq:Procalikeconst} g_{\beta \gamma} \partial^\gamma \left[ g^{\alpha\nu} g^{\beta\sigma} \partial_\alpha F_{\nu\sigma} +\frac{\alpha}{2f} (\partial_\alpha \phi) \epsilon^{\alpha\beta \rho \sigma}F_{\rho\sigma} - m^2 g^{\beta \mu} A_\mu \right] = 0 \, , \end{eqnarray} which after some algebra leads to, \begin{equation} \label{eq:Procafull} \frac{\alpha}{f} \tilde \epsilon^{ijk} (\partial_i \phi) \partial_j A_k = \frac{a^2 m^2}{4 {\cal H}} (2{\cal H} A_0 + \partial^0 A_0 - \partial^i A_i) \, . \end{equation} We now write these equations more explicitly, projecting first onto the longitudinal mode and then onto the transverse modes. We first consider the longitudinal mode for which we have, \begin{eqnarray} \tilde \epsilon^{ijk} (\partial_i \phi) \partial_j A_k \rightarrow (\nabla \phi(\vec x, \tau)) \cdot (i \vec k \times \vec A(\vec k, \tau)) = 0\, . \end{eqnarray} This then gives for~\eref{beta0}, \begin{eqnarray} (\partial_i^2 A_0 - \partial_0 \partial_i A_i ) - a^2 m^2 A_0 = 0 \, , \end{eqnarray} which in Fourier space has the solution, \begin{eqnarray} \label{eq:A0} A_0(\vec k,\tau) = \frac{-i \vec k \cdot \partial_\tau \vec A(\vec k, \tau)}{k^2 + a^2 m^2} \, . \end{eqnarray} We next consider the terms in square brackets in~\eref{betal}, \begin{align} & \tilde\epsilon^{ljk} \left[ (\partial_0\phi) \partial_j A_k +\frac{1}{2} (\partial_j \phi) (\partial_k A_0 - \partial_0 A_k) \right] \nonumber \\ & \rightarrow (\partial_\tau \phi) \left( i \vec k \times \vec A_L(\vec k , \tau) \right) + \frac{1}{2} (\nabla \phi) \times \left( i \vec k A_0(\vec k, \tau) - \partial_\tau \vec A_L(\vec k, \tau) \right) \, . \end{align} This results in vectors which are orthogonal to the direction of $\vec k$ and so do not contribute to the equation of the longitudinal mode.~In Fourier space~\eref{betal} then simplifies to, \begin{eqnarray} \partial_\tau^2 A_L(\vec k,\tau) - i k \partial_\tau A_0(\vec k,\tau) + a^2 m^2 A_L(\vec k,\tau) = 0 \, . \end{eqnarray} Using the solution for $A_0$ in~\eref{A0} this becomes, \begin{eqnarray} \label{eq:ALtau} \partial_\tau^2 A_L(\vec k,\tau) + \frac{2k^2}{k^2 + a^2 m^2} {\cal H} \partial_\tau A_L(\vec k,\tau) + (k^2 + a^2 m^2) A_L(\vec k,\tau) = 0 \, . \end{eqnarray} Switching to cosmic time, the equation of motion for the longitudinal mode becomes, \begin{eqnarray} \label{ALt} \ddot A_L(\vec k,t) + \frac{3k^2 + a^2(t) m^2}{k^2 + a^2(t) m^2} H \dot A_L(\vec k,t) + \left( \frac{k^2}{a^2(t)} + m^2 \right) A_L(\vec k,t) = 0 \, , \end{eqnarray} which is in agreement with~\cite{Graham:2015rva}. Turning to the transverse modes for which $\vec k \cdot \vec A = 0$, we apply the constraint of~\eref{Procafull} to~\eref{beta0}, which leads to, \begin{eqnarray} \left( \partial_i^2 - \frac{a^2 m^2}{2} + \frac{a^2 m^2}{4 {\cal H}} \partial^0 \right) A_0 = 0 \, .
\label{eq:cond0trans} \end{eqnarray} Integrated from some initial conformal time $\tau_0$, the solution to this differential equation in Fourier space is given by, \begin{eqnarray} A_0(\vec k, \tau) = \frac{a^2(\tau)}{a^2(\tau_0)} A_0(\tau_0) e^{-2 \left( \frac{k^2}{a^2(\tau) m^2} - \frac{k^2}{a^2(\tau_0) m^2} \right)} \, . \end{eqnarray} Here we are not free to choose the initial condition $A_0(\tau_0)$: the only choice consistent with~\eref{A0} is $A_0(\tau_0) = 0$.~Note that the constraint in~\eref{Procafull} forces, \begin{equation} \tilde \epsilon^{ijk} (\partial_i \phi) \partial_j A_k = 0 \, , \end{equation} consistently for both longitudinal and transverse modes.~Thus, we conclude that, \begin{align} A_0(\vec k,\tau) & = \frac{-i k \cdot \partial_\tau A_L(\vec k, \tau)}{k^2 + a^2 m^2} \, , & {\rm longitudinal} \, , \\ A_0(\vec k,\tau) & = 0 \, , & {\rm transverse} \, . \end{align} Note in particular that these solutions imply that the time component of the vector field does \emph{not} mix the transverse and longitudinal components.~With these solutions~\eref{betal} for the transverse modes in Fourier space becomes, \begin{eqnarray}\label{eq:EOMAT} (\partial_\tau^2 + k^2 + a^2 m^2 ) \vec A_T(\vec k, \tau) - \frac{\alpha}{f} \left[ (\partial_\tau \phi(\vec x,\tau))(i\vec k \times \vec A_T(\vec k,\tau)) - \frac{1}{2}(\nabla \phi(\vec x, \tau)) \times (\partial_\tau \vec A_T(\vec k ,\tau)) \right] = 0 \, . \nonumber \\ \end{eqnarray} Rewriting this in cosmic time and switching back to coordinate space we have\footnote{Note this differs from Eq.~(4) of~\cite{Agrawal:2018vin}: our last term differs by a factor of two and we have no term proportional to $\nabla \phi \times \nabla A_0$.}, \begin{eqnarray} \label{eq:comparePrateek} \ddot{\vec A}_T + H \dot{\vec A}_T - \frac{\nabla^2}{a^2} \vec A_T + m^2 \vec A_T - \frac{1}{a} \frac{\alpha}{f} \left[ \dot\phi (\nabla \times \vec A_T) - \frac{1}{2} \nabla \phi \times \dot{\vec A}_T \right] = 0 \, . \end{eqnarray} Next we consider the case of a homogeneous scalar field where we approximate $\nabla \phi \approx 0$ so that $\phi(\vec x,t) \approx \phi(t)$, as appropriate for $\phi$ during inflation.~Using~\eref{trpola} for the transverse modes,~\eref{EOMAT} then simplifies to, \begin{eqnarray} \partial_\tau^2 A_{\pm}(k,\tau) + \left[ k^2 \mp \frac{\alpha}{f}(\partial_\tau\phi) k + a^2 m^2 \right] A_{\pm}(k,\tau) = 0 \, . \end{eqnarray} Using the fact that the conformal time during inflation is $\tau \approx -(aH)^{-1}$ we can rewrite, \begin{eqnarray} \partial_\tau \phi = a \frac{\partial\phi}{\partial t} = aH \frac{\dot\phi}{H} \approx - \frac{1}{\tau} \frac{\dot\phi}{H} \, . \end{eqnarray} The equation of motion then reads in conformal time, \begin{eqnarray} \label{eq:EOM3} \partial_\tau^2 A_{\pm}(k,\tau) + \left[ k^2 \pm \frac{\alpha}{f}\frac{\dot\phi}{H}\frac{k}{\tau} + a^2 m^2 \right] A_{\pm}(k,\tau) = 0 \, . \end{eqnarray} Using $\partial_\tau^2 A = a^2(t) H \dot A + a^2(t) \ddot A$ this can be rewritten in cosmic time as, \begin{eqnarray} \label{eq:Apmt} \ddot A_{\pm}(k,t) + H \dot A_{\pm}(k,t) + \left[ \frac{k^2}{a^2(t)} \mp \frac{\alpha}{f} \dot\phi \frac{k}{a(t)} + m^2 \right] A_{\pm}(k,t) = 0 \, .
\end{eqnarray} \vspace{2mm} \begin{center} {\bf \emph{Energy and pressure densities}} \end{center} Starting from the action, we can compute the stress-energy tensor via, \begin{eqnarray} T_{\alpha\beta} = -\frac{2}{\sqrt{-g}} \frac{\delta S}{\delta g^{\alpha\beta}} \, , \end{eqnarray} where the following relations will be useful for the calculation, \begin{eqnarray} \frac{\delta}{\delta g^{\alpha\beta}} \sqrt{-g} = -\frac{1}{2} \sqrt{-g} \ g_{\alpha\beta} \, , \qquad \frac{\delta}{\delta g^{\alpha\beta}}(-g) = g \ g_{\alpha\beta} \, . \end{eqnarray} One can verify that after computing $\frac{\delta S}{\delta g^{\alpha\beta}}$, there are two terms proportional to $\phi F \tilde F$ which cancel exactly.~Thus, we see that the operator responsible for inducing the tachyonic instability does not contribute to the energy density.~Explicitly we find, \begin{eqnarray} T_{\alpha\beta} &=& \partial_\alpha \phi \partial_\beta \phi + g^{\mu\nu} F_{\mu\alpha}F_{\nu\beta} + m^2 A_\alpha A_\beta \nonumber \\ &-& g_{\alpha\beta} \left( \frac{1}{2} g^{\mu\nu} \partial_\mu \phi \partial_\nu \phi + V(\phi) +\frac{1}{4} g^{\mu\nu} g^{\rho\sigma} F_{\mu\rho} F_{\nu\sigma} + \frac{1}{2} m^2 g^{\mu\nu} A_\mu A_\nu \right) \, . \end{eqnarray} In cosmic time $t$ the energy density is then given by, \begin{eqnarray}\label{eq:rhoAx} \rho = T_{00} &=& \frac{1}{2} \dot\phi^2 + \frac{1}{2a^2} (\partial_i \phi)^2 + V(\phi) \nonumber \\ &+& \frac{1}{2a^2} (\partial_t A_i - \partial_i A_0)^2 + \frac{1}{4a^4} (\partial_i A_j - \partial_j A_i)^2 + \frac{1}{2} m^2 A_0^2 + \frac{1}{2a^2} m^2 A_i^2 \\ &=& \rho_\phi + \rho_A \, , \nonumber \end{eqnarray} where $\rho_\phi$ and $\rho_A$ denote the contributions from the inflaton and dark vector respectively.~The pressure density can then be computed as, \begin{eqnarray} p &=& \frac{1}{3} (g^{\alpha\beta} T_{\alpha\beta} + \rho) = \frac{1}{2} \dot\phi^2 - \frac{1}{6a^2} (\partial_i \phi)^2 - V(\phi) \nonumber \\ &+& \frac{1}{6a^2} (\partial_t A_i - \partial_i A_0)^2 + \frac{1}{12a^4} (\partial_i A_j - \partial_j A_i)^2 + \frac{1}{2} m^2 A_0^2 - \frac{1}{6a^2} m^2 A_i^2 \\ &=& p_\phi + p_A \, ,\nonumber \end{eqnarray} where $p_\phi$ and $p_A$ denote the contributions from the inflaton and dark vector respectively. In Fourier space, we can write the energy density of the vector field (\eref{rhoAx}) as, \begin{eqnarray}\label{eq:rhoFA0} \rho_A(\vec x, t) &=& \frac{1}{2a^2} \int \frac{d^3 k \ d^3 k'}{(2\pi)^6} e^{i(\vec k + \vec k')\cdot \vec x} \Big\{ [\partial_t \vec A(\vec k,t) \cdot \partial_t \vec A(\vec k',t) ] + m^2 [\vec A(\vec k,t) \cdot \vec A(\vec k',t)] \nonumber \\ &-& \frac{1}{a^2} \left( [\vec k \cdot \vec k'] [ \vec A(\vec k,t) \cdot \vec A(\vec k',t)] - [\vec k \cdot \vec A(\vec k',t)] [\vec k' \cdot \vec A(\vec k,t) ] \right)\\ &-& i A_0(\vec k,t) [\vec k \cdot \partial_t \vec A(\vec k',t) ] - [\vec k \cdot \vec k'] [A_0(\vec k,t) A_0(\vec k',t) ] \nonumber \\ &-& i A_0(\vec k',t) [\vec k' \cdot \partial_t \vec A(\vec k,t)] + a^2 m^2 A_0(\vec k,t) A_0(\vec k',t) \Big\} \, . 
\nonumber \end{eqnarray} Taking the spatial average of the expression above and using the definition of the power spectrum in \eref{powspclass}, we obtain $\langle \rho_A(t) \rangle$.~Separating the transverse and longitudinal modes, we have $A_0 = 0$ for the former so the last two lines in \eref{rhoFA0} vanish.~After some algebra we have, \begin{eqnarray} \label{eq:rhoTt} \langle \rho_A^T (t) \rangle &=& \frac{1}{2a^2} \int_0^\infty \frac{dk}{k} \left[ {\cal P}_{\partial_t A_T}(k,t) + \left( \frac{k^2}{a^2} + m^2 \right) {\cal P}_{A_T}(k,t) \right] \, , \nonumber \\ \langle \rho_A^T (\tau) \rangle &=& \frac{1}{2a^4} \int_0^\infty \frac{dk}{k} \left[ {\cal P}_{\partial_\tau A_T}(k,\tau) + \left( k^2 + a^2 m^2 \right) {\cal P}_{A_T}(k,\tau) \right] \, , \end{eqnarray} in cosmic and conformal time, respectively. For the longitudinal mode, we use~\eref{A0} and its analog in cosmic time, \begin{eqnarray} A_0(\vec k,t) = \frac{-i \vec k \cdot \partial_t \vec A(\vec k, t)}{k^2 + a^2 m^2} \, , \end{eqnarray} and substitute them into~\eref{rhoFA0}.~After some algebra this gives, \begin{eqnarray}\label{eq:rhoLtau} \langle \rho_A^L (t) \rangle &=& \frac{1}{2a^2} \int_0^\infty \frac{dk}{k} \left[ \frac{a^2 m^2}{k^2 + a^2 m^2} {\cal P}_{\partial_t A_L}(k,t) + m^2 {\cal P}_{A_L} (k,t) \right] \, , \nonumber \\ \langle \rho_A^L (\tau) \rangle &=& \frac{1}{2a^4} \int_0^\infty \frac{dk}{k} \left[ \frac{a^2 m^2}{k^2 + a^2 m^2} {\cal P}_{\partial_\tau A_L}(k,\tau) + a^2 m^2 {\cal P}_{A_L} (k,\tau) \right] \, , \end{eqnarray} in agreement with \cite{Graham:2015rva}.~Note that the expressions in~\eref{rhoTt} and~\eref{rhoLtau} are valid both for the classical and the quantum gauge field upon using the corresponding definitions of the power spectra discussed in the previous section. \subsection{Analytic study of the energy density spectrum during inflation} In this section we utilize the (approximate) analytic solutions of the equations of motion to examine the energy density spectrum of the dark vector at the end of inflation.~Starting from~\eref{EOM3} and using $\tau \approx -(a H)^{-1}$ during inflation we define, \begin{eqnarray} \xi \equiv \frac{\alpha \dot\phi}{2 H f} \, , \qquad \bar m \equiv \frac{m}{H} \, , \end{eqnarray} which allows us to write the equation of motion for the transverse modes as, \begin{eqnarray} \partial_\tau^2 A_{\pm}(k,\tau) + \left[ k^2 \pm 2\xi \frac{k}{\tau} + \frac{\bar m^2}{\tau^2} \right] A_{\pm}(k,\tau) = 0 \, . \end{eqnarray} Introducing the dimensionless variable, \begin{eqnarray}\label{eq:xdef} x = - k \tau \approx \frac{k}{aH} \, , \end{eqnarray} the equation of motion then becomes, \begin{eqnarray} \label{eq:EOMx} \partial_x^2 A_\pm (x) + \left[ 1 \mp \frac{2\xi}{x} + \frac{\bar m^2}{x^2} \right] A_\pm (x) = 0 \, . \end{eqnarray} Neglecting the time dependence in $\xi$ and the Hubble parameter, this equation of motion can be solved analytically.~Noting that $x>0$ and using the convention $\xi > 0$, the mode that gets exponentially enhanced is $A_+$ and we can neglect $A_-$ in what follows.~The solution to \eref{EOMx}, once properly normalized, is given in terms of the Whittaker function \cite{Meerburg:2012id}, \begin{eqnarray} A_+(x) = \frac{e^{\pi \xi / 2}}{\sqrt{2k}} W_{-i \xi, \mu} (-2 i x) \, , \qquad \mu = \sqrt{1/4 - \bar m^2} \, . \end{eqnarray} From this we can obtain the power spectra as, \begin{eqnarray} {\cal P}_{\partial_\tau A_+} = \frac{k^3}{2\pi^2} |\partial_\tau A_+|^2 \, , \qquad {\cal P}_{A_+} = \frac{k^3}{2\pi^2} | A_+|^2 \, .
\end{eqnarray} Starting with the spatially averaged energy density as a function of $\tau$ in~\eref{rhoTt} and the definition of $x$ in~\eref{xdef}, we can write the energy density as, \begin{align} \langle \rho_A^T \rangle & = \frac{H^4}{8\pi^2} \int \frac{dx}{x} \ 2k \ x^4 \left[ |\partial_x A_+(x)|^2 + \left( 1+ \frac{\bar m^2}{x^2} \right) |A_+(x)|^2 \right] \, , \\ \frac{d \langle \rho_A^T \rangle}{d \ln x} & = \frac{H^4}{8\pi^2} 2k \ x^4 \left[ |\partial_x A_+(x)|^2 + \left( 1+ \frac{\bar m^2}{x^2} \right) |A_+(x)|^2 \right] \label{eq:drhodlnx} \\ & = \frac{H^4}{8\pi^2} e^{\pi \xi} x^4 \left[ |\partial_x W_{-i \xi,\mu}(-2i x)|^2 + \left( 1+ \frac{\bar m^2}{x^2} \right) |W_{-i\xi, \mu}(-2ix)|^2 \right] \,. \end{align} Note the above is only a function of $x$ under the assumptions that $\tau \approx -(aH)^{-1}$ and $\xi$ is constant, both of which are good approximations during slow-roll inflation. We next separate the contributions to the energy density spectrum into the electric and magnetic components (and drop the brackets for $\rho_A$), \begin{eqnarray} \label{eq:rhoEandB2} \frac{d\rho_E}{d \ln x} &=& \frac{H^4}{8\pi^2} e^{\pi \xi} x^4 |\partial_x W_{-i \xi,\mu}(-2i x)|^2 \, ,\nonumber \\ \frac{d\rho_B}{d \ln x} &=& \frac{H^4}{8\pi^2} e^{\pi \xi} (x^4 + x^2 \bar m^2) |W_{-i\xi, \mu}(-2ix)|^2 \, . \end{eqnarray} In Fig.~\ref{Fig:EandB} we plot $\frac{d\rho_E}{d \ln x}$ and $\frac{d\rho_B}{d \ln x}$ as a function of $x$ for the listed choice of parameters $\bar m$, $\xi$, and $H$ during inflation, $H_I$.~Both components show a peak at $x \sim 0.1-1$, which indicates the point of maximal tachyonic enhancement.~Moving to lower values of $x$, the magnetic component drops as $x^4$ until it reaches $x = \bar m$ and then decreases as $x^2$.~This behavior can be understood directly from~\eref{rhoEandB2} where for $x < \bar m$ the term $x^2 \bar m^2$ dominates over $x^4$.~This can be traced back easily to $a^2 m^2$ dominating over $k^2$ in the last two terms of \eref{rhoTt}.~The electric component, dominant for $\bar m < x < 0.1$, continues decreasing as $x^4$ down to values of $x$ of order $\bar m^2 / (2\xi)$, where it goes through a kink and then decreases as $x^2$.~It is also clear from the plot that at small $x$ the energy density is dominated by the mass term, which we have included in the magnetic component. \begin{figure} [t] \centering \includegraphics[width=0.7\textwidth]{EandB.pdf} \caption{Electric (solid) and magnetic (dashed) components of the energy density spectra normalized to the energy density of the inflaton $\rho_I = 3 M_{Pl}^2 H_I^2$.
} \label{Fig:EandB} \end{figure} Looking more closely at the electric component to understand the kink and the change of slope, we see that the kink occurs where $\partial_x A_+(x)$ changes sign and the electric energy density goes to zero.~It is located at $x < \bar m^2 / (2\xi)$, but a compact analytic formula for the exact $x$ is difficult to obtain due to the complicated form of the solution in terms of the Whittaker function.~We can however easily understand the shape to the right and left of the kink by inspection of the equation of motion.~For $\bar m^2 / (2\xi) \ll x < 2\xi$, we can neglect the mass term in \eref{EOMx}.~Then the solution for the tachyonic mode becomes, \begin{eqnarray} \label{eq:masslessol} A_+(x) = \frac{1}{2k} \sqrt{\frac{4x}{\pi}} e^{\pi \xi} K_1(2 \sqrt{2x \xi}) \, , \end{eqnarray} where $K_1$ is the modified Bessel function of the second kind.~We plug this into~\eref{drhodlnx} and expand for $x\ll 1$ to find, \begin{eqnarray}\label{eq:rhoEap} \frac{d\rho_E}{d\ln x} = \frac{H^4 \xi e^{2\pi\xi}}{4\pi^3} \left(2\gamma + \ln(2 x \xi) \right)^2 x^4 + {\cal O}(x^{9/2}) \, , \quad \bar m^2/(2\xi) < x \ll 1 \, , \end{eqnarray} where $\gamma \simeq 0.577$ is Euler's constant, and we see in~\fref{Eapprox} that this is an excellent approximation to the full solution in the regime $\bar m^2 / (2\xi) \ll x < 2\xi$.~We also see from the approximate solution in~\eref{rhoEap} that the slope to the right of the kink is $\sim x^4$. \begin{figure} [t] \centering \includegraphics[width=0.7\textwidth]{Eapprox.pdf} \caption{Electric component of the energy density spectrum.~For $x > \bar m^2 / (2\xi)$ the full solution (solid red) to the equation of motion is well approximated by the massless one (dashed gray) given in~\eref{masslessol}.~For $x \ll \bar m^2 / (2\xi)$ the mass term dominates and the full solution is approximated by~\eref{massivesol} (dashed black).~The slope changes from $x^4$ on the right of the kink to $x^2$ on the left.} \label{fig:Eapprox} \end{figure} For $x \ll \bar m^2 / (2\xi)$, the mass term dominates in the equation of motion, \begin{eqnarray} \partial_x^2 A_+(x) + \frac{\bar m^2}{x^2} A_+(x) = 0 \, , \end{eqnarray} which then leads to the approximate solution, \begin{eqnarray} \label{eq:massivesol} A_+(x) = c_1 x^{\bar m^2} + c_2 x^{1 - \bar m^2} \, , \qquad x \ll \bar m^2 / (2\xi) \, . \end{eqnarray} Fixing $c_1$ and $c_2$ to match the normalization of the full solution and plugging into \eref{drhodlnx}, \begin{eqnarray} \frac{d\rho_E}{d\ln x} = \frac{H^4}{8\pi^2} c_1^2 \bar m^4 x^2 + {\cal O}(x^3)\, , \qquad x\ll \bar m^2 / (2\xi) \, , \quad \bar m \ll 1 \, . \end{eqnarray} The solution in this regime is shown in dashed black in~\fref{Eapprox}, where we see it is an excellent approximation to the full solution for $x \ll \bar m^2 / (2\xi)$ and the slope goes like $x^2$.
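The spectra in~\eref{rhoEandB2} are also straightforward to evaluate numerically.~A minimal sketch using the Whittaker function as implemented in the \texttt{mpmath} library (the sample values of $\xi$ and $\bar m$ are ours, purely illustrative) is: \begin{verbatim}
import mpmath as mp

xi, mbar = mp.mpf(5), mp.mpf("0.1")   # sample values of xi and mbar
mu = mp.sqrt(mp.mpf(1) / 4 - mbar**2)

def W(x):
    # Whittaker solution W_{-i xi, mu}(-2 i x); the e^{pi xi/2}/sqrt(2k)
    # normalization is carried by the e^{pi xi} prefactor below
    return mp.whitw(-1j * xi, mu, -2j * x)

def drho_dlnx(x):
    """Electric and magnetic spectra of eq. (rhoEandB2), in units of H^4."""
    pref = mp.exp(mp.pi * xi) / (8 * mp.pi**2)
    dW = mp.diff(W, x)                # numerical derivative d/dx
    rhoE = pref * x**4 * abs(dW)**2
    rhoB = pref * (x**4 + x**2 * mbar**2) * abs(W(x))**2
    return rhoE, rhoB
\end{verbatim}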
\subsection{Power spectrum of energy density fluctuations} Here we derive the power spectrum of the energy density fluctuations.~At late times, once the dark vector is non-relativistic, the energy density is well approximated by the mass term in the Lagrangian, $\rho \sim m^2 A^2$.~We can then define the energy density contrast $\delta(\vec x)$ which measures the deviation from the mean energy density, \begin{eqnarray} \label{eq:rhoqu} \rho(\vec x) = \langle \rho \rangle (1 + \delta(\vec x)) = \frac{1}{2} m^2 \left( \langle \hat{\vec A}(\vec x)^2 \rangle + \hat{\vec A}(\vec x)^2 \right)\, , \end{eqnarray} where we have implicitly dropped the cosmic time variable $t$.~From this we identify (up to a constant piece in $\delta(\vec x)$, which only contributes at $\vec k = 0$ and drops out of the correlators below), \begin{align} \langle \rho \rangle & = \frac{1}{2} m^2 \langle \vec A(\vec x)^2 \rangle , \\ \delta(\vec x) & = \frac{ \vec A(\vec x)^2 }{ \langle \vec A(\vec x)^2 \rangle} = \frac{1}{ \langle \vec A(\vec x)^2 \rangle} \int \frac{d^3p}{(2\pi)^3} \frac{d^3q}{(2\pi)^3} e^{i (\vec p + \vec q)\cdot \vec x} \vec A(\vec p) \cdot \vec A(\vec q) \nonumber \\ & = \frac{1}{ \langle \vec A(\vec x)^2 \rangle} \int \frac{d^3k}{(2\pi)^3} e^{i \vec k \cdot \vec x} \int \frac{d^3q}{(2\pi)^3} \vec A(\vec k - \vec q) \cdot \vec A(\vec q) \\ & \equiv \int \frac{d^3k}{(2\pi)^3} e^{i \vec k \cdot \vec x} \delta(\vec k) \, ,\nonumber \end{align} where we have defined the momentum shift $\vec k = \vec p + \vec q$.~From this we can then read off the Fourier transform of the density contrast, \begin{eqnarray}\label{eq:deltak} \delta(\vec k) = \frac{1}{ \langle \vec A(\vec x)^2 \rangle} \int \frac{d^3q}{(2\pi)^3} \vec A(\vec k - \vec q) \cdot \vec A(\vec q) \, . \end{eqnarray} Even though the exponentially enhanced tachyonic modes are highly classical, here we can work explicitly with the quantum field defined in~\eref{Aqu}.~To compute $\hat{\vec A}(\vec x)^2$ we have, \begin{eqnarray} \hat{\vec A}(\vec x)^2 &=& \sum_{\lambda, \lambda'} \int \frac{d^3p d^3q}{(2\pi)^6} e^{i(\vec p + \vec q)\cdot \vec x} \vec\epsilon_\lambda(\vec p) \cdot \vec \epsilon_{\lambda'}(\vec q) \nonumber \\ &\times& \left( A_\lambda(p) a_\lambda(\vec p) + A_\lambda(p)^* a^\dagger_\lambda(-\vec p) \right) \left( A_{\lambda'}(q) a_{\lambda'}(\vec q) + A_{\lambda'}(q)^* a^\dagger_{\lambda'}(-\vec q) \right) \nonumber \\ &=& \int \frac{d^3k}{(2\pi)^3} e^{i \vec k \cdot \vec x} \sum_{\lambda, \lambda'} \int \frac{d^3q}{(2\pi)^3} \vec\epsilon_\lambda(\vec k - \vec q) \cdot \vec \epsilon_{\lambda'}(\vec q) \\ &\times& \left( A_\lambda(k-q) a_\lambda(\vec k - \vec q) + A_\lambda(k-q)^* a^\dagger_\lambda(\vec q - \vec k) \right) \left( A_{\lambda'}(q) a_{\lambda'}(\vec q) + A_{\lambda'}(q)^* a^\dagger_{\lambda'}(-\vec q) \right) \, ,\nonumber \end{eqnarray} where again we have defined the momentum shift $\vec k = \vec p + \vec q$.~From~\eref{deltak} we then have, \begin{eqnarray} \label{eq:deltakqu} \delta(\vec k) &=& \frac{1}{\langle \hat{\vec A}(\vec x)^2 \rangle} \sum_{\lambda, \lambda'} \int \frac{d^3q}{(2\pi)^3} \vec\epsilon_\lambda(\vec k - \vec q) \cdot \vec \epsilon_{\lambda'}(\vec q) \\ &\times& \left( A_\lambda(k-q) a_\lambda(\vec k - \vec q) + A_\lambda(k-q)^* a^\dagger_\lambda(\vec q - \vec k) \right) \left( A_{\lambda'}(q) a_{\lambda'}(\vec q) + A_{\lambda'}(q)^* a^\dagger_{\lambda'}(-\vec q) \right) \, .\nonumber \end{eqnarray} The space-dependent part of the energy density contrast is obtained from the two-point correlation function, \begin{eqnarray}\label{eq:delta2pt} \langle \delta(\vec k) \delta(\vec k') \rangle &=& \frac{1}{\langle \hat{\vec A}(\vec x)^2
\rangle^2} \sum_{\lambda_1 \lambda_2 \lambda_3 \lambda_4} \int \frac{d^3 q \ d^3 q'}{(2\pi)^6} \vec\epsilon_{\lambda_1}(\vec k - \vec q) \cdot \vec \epsilon_{\lambda_2}(\vec q) \ \vec\epsilon_{\lambda_3}(\vec k' - \vec q^{\ \prime}) \cdot \vec \epsilon_{\lambda_4}(\vec q^{\ \prime}) \nonumber \\ &\times& \langle 0| \left( A_{\lambda_1}(k-q) a_{\lambda_1}(\vec k - \vec q) \right) \left( A_{\lambda_2}(q) a_{\lambda_2}( \vec q) + A_{\lambda_2}(q)^* a^\dagger_{\lambda_2}(-\vec q) \right) \\ &\times& \left( A_{\lambda_3}(k'-q') a_{\lambda_3}(\vec k' -\vec q^{\ \prime}) + A_{\lambda_3}(k'-q')^* a^\dagger_{\lambda_3}(\vec q^{\ \prime} - \vec k') \right) \left( A_{\lambda_4}(q')^* a^\dagger_{\lambda_4}(-\vec q^{\ \prime}) \right) |0 \rangle \, .\nonumber \end{eqnarray} Focusing on the last two lines, there is only one combination of creation and annihilation operators that leads to a space-dependent result, \begin{eqnarray} \langle 0| a_{\lambda_1} a_{\lambda_2} a^\dagger_{\lambda_3} a^\dagger_{\lambda_4} |0 \rangle &=& (2\pi)^6 \Big[ \delta_{\lambda_2\lambda_3} \delta_{\lambda_1\lambda_4} \delta^3(\vec k' - \vec q^{\ \prime} + \vec q) \delta^3(-\vec q^{\ \prime} -\vec k + \vec q) \nonumber \\ &+& \delta_{\lambda_1\lambda_3} \delta_{\lambda_2\lambda_4} \delta^3(\vec k - \vec q - \vec q^{\ \prime} + \vec k') \delta^3(\vec q + \vec q^{\ \prime}) \Big] \, . \end{eqnarray} Plugging this into~\eref{delta2pt} and keeping only the tachyonic transverse mode $A_+$ we find, \begin{eqnarray} \langle \delta(\vec k) \delta(\vec k') \rangle &=& (2\pi)^3 \delta^3(\vec k + \vec k') \frac{1}{\langle \hat{A}_+(\vec x)^2 \rangle^2} \int \frac{d^3 q}{(2\pi)^3} 2 |A_{+}(k-q)|^2 |A_{+}(q)|^2 \\ &=& (2\pi)^3 \delta^3(\vec k + \vec k') \frac{2}{\langle \hat{A}_+(\vec x)^2 \rangle^2} \frac{2\pi}{(2\pi)^3} \int q^2 dqd\cos\theta \frac{2\pi^2}{(k-q)^3} {\cal P}_{A_+}(k-q) \frac{2\pi^2}{q^3} {\cal P}_{A_+}(q) \nonumber \\ &=& (2\pi)^3 \delta^3(\vec k + \vec k') \frac{2 \pi^2}{\langle \hat{A}_+(\vec x)^2 \rangle^2} \int dq \ dp \ q^2 \frac{p}{kq} \frac{1}{p^3 q^3} {\cal P}_{A_+}(p) {\cal P}_{A_+}(q) \, ,\nonumber \end{eqnarray} where we have used~\eref{powquantum} and the change of variables from $q, \cos\theta$ to $q,p$ using, \begin{eqnarray} \vec p = \vec k - \vec q\, , ~~p^2 = k^2 + q^2 - 2 kq \cos\theta\, , ~~d\cos\theta = - \frac{p}{kq} dp\, , ~~dq d\cos\theta \rightarrow \frac{p}{kq} dq dp \, , \end{eqnarray} as well as trivially performed the $d\phi$ integral.~Defining the power spectrum for the energy density contrast in the quantum case (and for transverse modes) as, \begin{eqnarray} \label{eq:powdqu} \langle \delta(\vec k) \delta(\vec k') \rangle &=& (2\pi)^3 \delta^3(\vec k + \vec k') \frac{2\pi^2}{k^3} {\cal P}_{\delta}(k) \, , \end{eqnarray} we arrive finally at the expression given in~\eref{deltaPS}, \begin{eqnarray} \label{eq:powdeltaqu} {\cal P}_{\delta}(k) &=& \frac{k^2}{\langle \hat{A}_+(\vec x)^2 \rangle^2} \int_{|q-k|<p<q+k} dq \ dp \ \frac{1}{q^2 p^2} {\cal P}_{A_+}(p) {\cal P}_{A_+}(q) \, , \nonumber \\ &&\langle \hat{A}_+(\vec x)^2 \rangle^2 = \left[ \int_0^\infty \frac{dk}{k} {\cal P}_{A_+} (k) \right]^2 \, . 
\end{eqnarray} This is the power spectrum of the energy density contrast for $\rho \sim m^2 A^2$.~For completeness we have also computed the power spectrum for the energy density contrast corresponding to the kinetic term in the energy density for which we find the same expression as~\eref{powdeltaqu} with $A_+ \to \partial_t A_+$.~For the longitudinal mode we find the same result as in~\cite{Graham:2015rva}. \subsection{Numerical procedure for solving equations of motion} Here we sketch the numerical solutions to the equations of motion and how the input spectra are obtained.~We have integrated the equations of motion for the longitudinal and transverse vector perturbations at linear order in cosmic time\footnote{We have used the~\href{https://computing.llnl.gov/projects/sundial}{SUNDIALS package}, “SUite of Nonlinear and DIfferential/ALgebraic equation Solvers” for the numerical integration.}, \begin{eqnarray} \ddot A_L(\vec k,t) + \frac{3k^2 + a^2(t) m^2}{k^2 + a^2(t) m^2} H \dot A_L(\vec k,t) + \left( \frac{k^2}{a^2(t)} + m^2 \right) A_L(\vec k,t) &=& 0 \,, \\ \ddot A_{\pm}(k,t) + H \dot A_{\pm}(k,t) + \left[ \frac{k^2}{a^2(t)} \mp \frac{\alpha}{f} \dot\phi \frac{k}{a(t)} + m^2 \right] A_{\pm}(k,t) &=& 0 \, , \end{eqnarray} together with the background inflaton equation of motion, \begin{equation} \ddot \phi +3 H \dot \phi + V^\prime=0 \,, \end{equation} where $V^\prime = d V(\phi)/d \phi$.~Working in the regime of no dark vector backreaction during single-field inflation, the Hubble parameter is given by, \begin{equation} H^2 = \frac{1}{3 M_{PL}^2} \left( \frac{\dot \phi^2}{2} + V(\phi) \right) \,, \end{equation} and the number of e-folds is obtained by integrating, \begin{equation} \frac{d \ln a}{dt} = H \,. \end{equation} We start the integration in the slow-roll regime with initial field velocity, \begin{equation} \dot \phi = \frac{V^\prime}{3H} \,, \end{equation} and the initial value of the inflaton field for any given inflationary potential $V(\phi)$ will set the total number of e-folds $N_e$ up to the end of inflation at $\epsilon_{end}=\dot \phi^2/(2H^2 M_{PL}^2) = 1$.~For example, for the models considered in~\fref{energyspecsteep}, $\phi(0)=24 M_{PL}$ gives $N_e\simeq 72$ for a chaotic quartic potential, while $\phi(0)=17 M_{PL}$ gives $N_e\simeq 73$ for a chaotic quadratic potential, $\phi(0)= M_{PL}$ gives $N_e\simeq 73$ for the hilltop quadratic potential with $v=6 M_{PL}$, and $\phi(0)=60 M_{PL}$ gives $N_e\simeq 62$ for the axion-like model with $\Lambda=24 M_{PL}$. For each $k$ mode, we start the integration with the vector fluctuations initially in the Bunch-Davies vacuum defined as, \begin{align} A_{\lambda}^{(R)}(k,0) &= \frac{1}{\sqrt{2 \omega_k}}\,, &A_{\lambda}^{(I)}(k,0) &= 0 \,, \\ \dot A_{\lambda}^{(R)}(k,0) &= 0 \,, &\dot A_{\lambda}^{(I)}(k,0) &= -\frac{\omega_k}{\sqrt{2}}\,, \end{align} where $\lambda$ refers to either longitudinal or transverse mode, $\omega_k^2 = k^2/a_0^2 + m^2$, $a_0$ is the initial value of the scale factor, and $R$($I$) refers to the real (imaginary) component of the perturbation. Therefore, the initial vacuum power spectrum is given by, \begin{eqnarray} {\cal P}_{A_\lambda}(k,0) &=& \frac{k^3}{2 \pi^2 a_0^3} |A_\lambda(k,0) |^2 = \frac{k^3}{4 \pi^2 \omega_k a_0^3} \,, \\ {\cal P}_{\dot A_\lambda}(k,0) &=& \frac{k^3}{2 \pi^2 a_0^3} | \dot A_\lambda(k,0) |^2 = \frac{k^3 \omega_k}{4 \pi^2 a_0^3 } \,. 
\end{eqnarray} Two-point functions and energy densities for the dark vector perturbations obtained directly from the momentum integration of the power spectrum are clearly UV divergent and must be regularized.~In an expanding universe this can be accomplished using the adiabatic regularization method~\cite{Parker:2009uva} which is based on a WKB-type expansion in powers of the time derivatives of the scale factor and frequency modes.~For our purposes, given that we are interested in particle production effects driven by the tachyonic instability, it is enough to regularize the expressions at zero-order, i.e.\,we only subtract the vacuum contribution as in Minkowsky space, \begin{eqnarray} {\cal P}^{reg}_{A_\lambda}(k,t) &=& {\cal P}_{A_\lambda}(k,t) - {\cal P}_{A_\lambda}(k,0) \,, \\ {\cal P}^{reg}_{\dot A_\lambda}(k,t) &=& {\cal P}_{\dot A_\lambda}(k,t) - {\cal P}_{\dot A_\lambda}(k,0) \,. \end{eqnarray} On the left hand side in~\fref{energyspec} the dotted line shows the subtracted vacuum contribution.~We see all the spectra are above this line up to modes $k/a_{end} \sim O(10) H_{\rm end}$ showing the particle production effect.~Higher momentum modes stay in the Bunch-Davies vacuum and the partial subtraction we have performed leads to a spectra below the vacuum one.~This signals the absence of the tachyonic instability and particle production effects for these modes. \bibliographystyle{JHEP}
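As a concrete illustration of the numerical procedure above, the following is a minimal Python sketch (not the code used here, which relies on SUNDIALS) that integrates a single transverse mode in a toy de Sitter background with constant $H$ and a constant effective coupling $(\alpha/f)\dot\phi$; all parameter values are illustrative assumptions, and the zeroth-order vacuum subtraction follows the regularization described above.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

H, m, k = 1.0, 0.1, 10.0      # Hubble rate, vector mass, comoving momentum (toy units)
coupling = 5.0 * H            # assumed constant (alpha/f)*phidot
a0 = 1.0

def a(t):
    return a0 * np.exp(H * t) # toy de Sitter scale factor

def rhs(t, y):
    # y = (Re A_+, Im A_+, Re Adot_+, Im Adot_+)
    AR, AI, dAR, dAI = y
    w2 = (k / a(t))**2 - coupling * k / a(t) + m**2   # tachyonic when negative
    return [dAR, dAI, -H * dAR - w2 * AR, -H * dAI - w2 * AI]

# Bunch-Davies initial data as given in the text
wk = np.sqrt((k / a0)**2 + m**2)
y0 = [1.0 / np.sqrt(2.0 * wk), 0.0, 0.0, -wk / np.sqrt(2.0)]

sol = solve_ivp(rhs, (0.0, 8.0 / H), y0,
                t_eval=np.linspace(0.0, 8.0 / H, 200), rtol=1e-8, atol=1e-10)

A2 = sol.y[0]**2 + sol.y[1]**2
P = k**3 / (2 * np.pi**2 * a(sol.t)**3) * A2      # P_A(k, t)
P_reg = P - k**3 / (4 * np.pi**2 * wk * a0**3)    # zeroth-order vacuum subtraction
\end{verbatim}

The same pattern extends directly to the longitudinal mode and to a dynamical inflaton background by enlarging the state vector with $\phi$, $\dot\phi$, and $\ln a$.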
\section{Background} In this section, we first provide some background on sketches and their use in network telemetry. We then introduce the key stakeholders in sketch-based telemetry to set the context for the research challenges. \subsection{Sketching Algorithms} Sketching algorithms (sketches) can process data streams accurately and efficiently in an online fashion. Sketches are attractive for network monitoring precisely because they typically require small memory footprints to estimate traffic statistics with provable accuracy guarantees. In addition to network telemetry~\cite{CountSketch,CMSketch,SpaceSavings,Entropy1,univmon,OpenSketch,elasticsketch,SketchVisor,sketchlearn}, sketch-based approaches have also been applied in databases~\cite{mergeable,sigmod2019}, streaming analytics~\cite{apache-druid}, and machine learning~\cite{recursiveDL,ivkin2019communication,jiang2018sketchml}. Sketches draw on rich theoretical foundations, starting with the seminal ``AMS'' paper~\cite{ams}. At a high level, the problem they address is as follows: Given an input stream of {\em <key, value>} pairs (e.g., <5-tuple, packet size> pairs in network traffic), a sketching algorithm is allowed to make a single pass over the data stream to compute statistics while using sub-linear (usually poly-logarithmic) memory space compared to the total size of the dataset and the number of distinct keys. When processing each item in the stream, a sketch typically maintains a table of counters in the main memory and computes multiple independent hashes to update a small random set of counters in the table. These algorithms are backed by rigorous theoretical analyses that bound the accuracy-memory tradeoff for arbitrary workload patterns. \para{Sketch-based network telemetry.} Sketches support key network telemetry tasks, such as (1) Heavy-Hitter detection to discover large flows~\cite{CMSketch,CountSketch,SpaceSavings,HashPipe,univmon,elasticsketch}; (2) Entropy Estimation to analyze traffic distributions for anomaly detection~\cite{simple_entropy,univmon,Entropy1}; (3) Change Detection to identify significant traffic shifts over time~\cite{k-ary,OpenSketch,univmon}; (4) Cardinality Estimation to detect the number of distinct items/flows in the network traffic~\cite{bar2002counting,HLL,univmon,SketchVisor}; (5) Performance Monitoring to identify flows with high packet loss, large latency, and many out-of-order or retransmitted packets~\cite{lean}; (6) Superspreader Detection to identify sources that contact many different destinations~\cite{OpenSketch}, among others. \subsection{Stakeholders for Telemetry Deployment} We identify three key players in the ecosystem that drive and influence the adoption of sketch-based telemetry. \paraf{Network operator:} Network operators rely on real-time telemetry to make timely decisions that ensure network reliability, performance, and security. To this end, they may want network-wide information such as global heavy hitter flows, distinct flows, and entropy changes on various traffic distributions. Ideally, network operators want to express high-level telemetry objectives without having to worry about low-level algorithm and implementation details about sketches.
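As a concrete illustration of the counter-table update pattern described in the sketching overview above, the following is a minimal Python sketch of a Count-Min-style structure~\cite{CMSketch}; the width/depth values and the salted SHA-256 hashing are illustrative choices for readability, not a production or line-rate implementation.

\begin{verbatim}
import hashlib

class CountMin:
    def __init__(self, width=2048, depth=5):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _col(self, key, row):
        # One independent hash per row, derived by salting a standard hash.
        digest = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def update(self, key, value=1):
        # Each item touches exactly `depth` counters: O(1) work per packet.
        for row in range(self.depth):
            self.table[row][self._col(key, row)] += value

    def query(self, key):
        # Count-Min never underestimates; take the minimum across rows.
        return min(self.table[row][self._col(key, row)]
                   for row in range(self.depth))

cm = CountMin()
for flow in [("10.0.0.1", 80)] * 100 + [("10.0.0.2", 443)] * 3:
    cm.update(flow)
print(cm.query(("10.0.0.1", 80)))   # ~100; may overestimate, never under
\end{verbatim}

Queries never underestimate a key's count, and the overestimate is bounded by the table dimensions, which is the basis of the provable accuracy-memory tradeoffs cited above.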
\begin{figure}[h] \centering \begin{minipage}[t]{.49\textwidth} \textbf{Q1: Return 5-tuple 0.005-heavy hitters from all flows} \begin{codefragment2} FlowKey = (SrcIP,SrcPort,DstIP,DstPort,Proto) C="Select HeavyHitter(p.FlowKey,0.05) From *" return C \end{codefragment2} \end{minipage} \begin{minipage}[t]{.49\textwidth} \textbf{Q2: Return distinct DstIP count that a host connects to} \begin{codefragment2} C="Select Distinct(p.dstIP) From * Where p.srcIP=1.2.3.4" return C \end{codefragment2} \end{minipage} \vspace{-4mm} \tightcaption{Examples of envisioned telemetry queries.} \label{fig:query_example} \end{figure} For example, operators may specify queries like Q1 and Q2 depicted in Figure~\ref{fig:query_example}. A telemetry system should provide an interface to write queries, identify if the queries can be supported by existing primitives, and distribute the monitoring responsibilities efficiently across a network. If better or new sketches are needed, the telemetry system must pass this information to the algorithm designers described below. \para{Algorithm designers:} We envision an active community of algorithm designers developing new sketching algorithms to estimate different telemetry metrics. They would like to understand the requirements of network operators to design improved or new sketching algorithms for needed metrics. In practice, however, it requires significant effort to translate theoretical algorithms into optimized implementations on diverse platforms. As richer primitives need to be designed and new platforms (e.g., Barefoot Tofino~\cite{tofino} and multi-engine SoC SmartNICs~\cite{netronome}) emerge, algorithm designers increasingly find themselves in need of a mature sketch-based framework allowing them to develop and evaluate algorithmic tools along with the platform vendors/developers described below. \para{Platform vendors and developers:} Platform vendors offer specialized capabilities that implement and optimize sketches on various hardware and software platforms. For instance, we have already seen programmable switches, SmartNICs, FPGAs, and software switches established in today's networks, and we envision future deployments with richer and more diverse platform capabilities. Ideally, platform vendors should provide primitives for these developers to optimally support sketch-based telemetry. However, recent work suggests it is non-trivial to efficiently implement sketches~\cite{liu2019nitrosketch,HashPipe}. In this respect, we envision the need for these two stakeholders to jointly contribute their domain expertise to achieve optimized sketch implementations. \section{Data Plane Implementation} \subsection{Limitations of Existing Work} \alan{TODO: A concrete measurement on the inefficient sketch implementation on P4. } \subsection{Open Problems} \begin{insight}[Sketch Implementation] Given $r_{s,d}$ for a (potentially new) sketch instance $s$ and device instance $d$, can we generate a sketch implementation $c_{s,d}$ such that the actual resource usages $m_{s,d}$ and $l_{s,d}$ are minimized? \end{insight} Problem 6 is about building a tool to automatically generate a sketch hardware implementation for any sketch resource configuration and device instance. At the moment, it requires tremendous joint effort from platform and algorithm developers to deliver a resource- and performance-optimized sketch implementation per hardware target. Sometimes, achieving even a feasible implementation is difficult~\cite{SketchVisor,univmon}. Thus, we envision this tool (e.g., a specialized code compiler) taking as input an algorithm definition and configuration defined in a high-level language (e.g., P4), and automatically outputting an implementation that is optimized for a particular hardware target. With such a tool, algorithm developers will not need to worry about how to implement a current or future sketching algorithm on a hardware architecture, and platform developers will not need to worry about understanding the algorithmic details in order to implement the sketches. \antonis{unclear to me from this text why P4 is lacking here. After reading this para, it felt like you were describing P4 again} \begin{insight}[Implementation Optimization] Given $r_{s,d}$ for all sketch configurations $s \in \mathcal{S}$ and device instances $d \in \mathcal{D}$, can we generate a sketch implementation $c_d$ for device $d$ such that the actual device resource usages $\sum_s m_{s,d}$ and $\sum_s l_{s,d}$ are minimized? \end{insight} Problem 7 can be considered a generalized version of Problem 6. This problem is about optimizing the sketch implementation on a device when multiple sketch instances are present. Our observation is that many sketches share common primitive operations (hash computation, counter updates, etc.), and we expect that the actual memory usage and packet processing performance on a device $d^*$ can be further optimized to less than $\sum_s r_{s,d^*}$. \section{Overview} \vyas{not sure id call this motivation/overview. we can shrink this and directly jump into the open problems etc. just do setup notation/and the stakeholder definition here. these are not "scenarios" as such } \prs{Maybe move the subsection on motivating scenarios to the previous section and kill the rest? Motivating scenarios should be high level.} \hun{We use (platform)/(network devices) as the same meaning, but the usage can be subtle that might confuse readers.} In this section, we first highlight several scenarios from the stakeholders involved in sketch-based network telemetry research, namely network operators, algorithm developers, and platform developers, and then discuss a sketch-based telemetry framework designed to meet the requirements of all parties. \subsection{Stakeholders and Requirements} \paraf{Network operator.} Network operators rely on real-time telemetry information to make timely control decisions in order to improve network reliability, performance, and security. When managing the network, the operator may want to obtain network-wide information from one or a set of telemetry tasks, such as global heavy hitter flows, distinct flows, and entropy changes. However, network operators may not have deep knowledge of sketch design and implementation, and thus cannot decide which sketches should be used for the required telemetry tasks or how to configure the sketches to meet certain accuracy goals. Ideally, network operators should only be required to provide high-level queries and telemetry objectives without knowing the underlying details about sketches. For example, the operator may specify queries such as the following. The telemetry system should be able to identify whether the queries can be fully supported by the existing library of sketches. If better or new sketches are needed, the telemetry system should pass this information to algorithm developers.
\begin{figure}[h] \centering \begin{minipage}[t]{.49\textwidth} \textbf{Q1: Return 5-tuple 0.005-heavy hitters from all flows} \begin{codefragment2} FlowKey = (SrcIP,SrcPort,DstIP,DstPort,Proto) C="Select HeavyHitter(p.FlowKey,0.05) From *" return C \end{codefragment2} \end{minipage} \begin{minipage}[t]{.49\textwidth} \textbf{Q2: Return distinct DstIP count that a host connects to} \begin{codefragment2} C="Select Distinct(p.dstIP) From * Where p.srcIP=1.2.3.4" return C \end{codefragment2} \end{minipage} \label{fig:query_example} \end{figure} \para{Algorithm developer.} Algorithm developers and theorists are the ones who design sketching algorithms to estimate different telemetry metrics. From their perspective, they would like to understand the requirements of network operators and design improved or new sketching algorithms for the needed metrics. In practice, it requires tremendous effort for algorithm developers to obtain the real-world telemetry requirements from network operators and to have their algorithms evaluated on real-world hardware targets with optimized implementations. Ideally, algorithm developers should have a straightforward way to obtain the telemetry requirements from network operators as motivation to develop new and improved sketching algorithms. \hun{This points out the difficulty or a communication problem between network operator and algorithm developer. Is it enough to point out current existing problem? or do we have a high level solution for this? If we will suggest automated process in later sections, then above sentence should be like: we should automate those process to remove the need for the communication between algo/network operator developer(?)} Once new algorithms have been designed, we cannot expect algorithm developers to understand network hardware architectures and implement the algorithms by themselves. We should transfer these algorithm definitions to platform developers in order to get them properly implemented on a variety of platforms. \para{Platform developer.} There has been tremendous interest in developing new network devices and platforms with diverse flexibility and packet processing capabilities. For instance, we have programmable switches, SmartNICs, FPGAs, and software switches established in today's networks. If platform developers such as network hardware and software developers plan to have sketch primitives deployed on their devices, they want to achieve the best possible resource efficiency and guaranteed performance. However, today's sketch designs and implementations are far from optimized~\cite{liu2019nitrosketch}, since it is challenging for platform developers to achieve optimal implementations without the help of algorithm developers. Ideally, platform developers should implement sketches on every supported hardware platform in an optimized way. We anticipate a mechanism for algorithm and platform developers to jointly contribute their domain-specific knowledge to the sketch implementation, without the need for algorithm developers to dig into hardware architectures or for platform developers to dig into the algorithmic details. \subsection{Sketch-based Telemetry Framework} Inspired by the different stakeholders in network telemetry, we propose a framework to capture their requirements for participating in sketch-based telemetry research, as shown in Figure~\ref{fig:overview}. \hun{This figure is insightful. One question is that sketch library should be in L1 instead of L2 (Algorithm developer should be linked to L1 accordingly)?} \para{User front-end:} \para{Control plane:} \para{Data plane:} \begin{figure}[t] \centering \includegraphics[width=0.85\linewidth]{figures/overview.pdf} \caption{Sketch-based Telemetry Framework} \label{fig:overview} \end{figure} \section{A Future Roadmap} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/vision.pdf} \vspace{-6mm} \tightcaption{Sketch-based telemetry framework and stakeholder interactions.} \label{fig:vision} \end{figure} We envision a sketch-based telemetry framework as depicted in Figure~\ref{fig:vision}, assuming that research challenges P1-P6 described above and others have been properly addressed by the community. In this framework, we expect a {\em management interface} that has an expressive front-end/API to interact with network operators, algorithm designers, and platform vendors/developers. Some key components in the interface are (1) a query compiler to translate operator intents into sketch configurations, and (2) a sketch library to maintain state-of-the-art sketching definitions/implementations. In the {\em control plane}, there will be a network-wide resource manager taking input from the management interface and computing an optimized sketch placement and resource allocation based on the requirements. In the {\em data plane}, the optimized and verified sketch instances will be initialized across a network of heterogeneous devices based on the resource management decisions from the control plane. We expect network operators, algorithm designers, and platform vendors/developers will have a way of interacting with the telemetry framework as follows: \begin{packeditemize} \item {\bf Network operators:} Operators can specify their telemetry needs via the management interface and receive the intended telemetry metrics via the API. In the back-end, operator queries are translated to sketch configurations and their related device-level implementations to be deployed. In addition, operators can also describe their intents to cover currently unsupported telemetry tasks. \item {\bf Algorithm designers:} Algorithm designers can obtain new telemetry capability requests from operators and design new algorithms based on the requests. They can then add their new algorithms to the sketch ecosystem and get feedback about how their algorithms are implemented and evaluated in real-world scenarios. \item {\bf Platform vendors and developers:} Platform vendors can receive new hardware capability requests, deliver new hardware capabilities, and update the device specifications accordingly. Platform developers can explore the sketch algorithm definitions and hardware capabilities in the sketching ecosystem, and deliver improved or new implementations to the sketch library. \end{packeditemize} Prior efforts have laid the groundwork for designing sketches and making them transition from a theoretical curiosity to a promising start for network telemetry. We hope that our vision, research challenges, and collaborative efforts from the stakeholders taken together can help transition sketch-based telemetry into ``prime time'' deployment. \section{Introduction} At the core of managing networks, network telemetry plays a crucial role in understanding what is happening in the network and informing management decisions. For example, to improve cloud security, telemetry enables operators to detect network anomalies and attacks in a timely fashion.
Similarly, in order to optimize traffic engineering and ensure that service-level agreements (SLAs) for applications are met, operators commonly rely on telemetry to monitor network flow distributions. Traditionally, flow-based telemetry is done via offline analysis or some form of packet or flow sampling (e.g., NetFlow~\cite{netflow} and sFlow~\cite{sflow}). However, given the need for timely results using constrained compute/memory resources, offline analysis is not a practical option. Moreover, sampling only provides coarse-grained flow size distributions, and cannot provide accurate results for finer-grained telemetry tasks such as entropy estimation, distinct count, and change detection~\cite{duffield2003estimating,new_directions,k-ary}. To address the drawbacks of sampling approaches, sketching algorithms (or sketches for short) have been extensively studied in recent years (e.g., \cite{bar2002counting,RHHH,CountSketch,CMSketch,SketchVisor,sketchlearn,zhou2019generalized,ivkin2019know,univmon,elasticsketch,HashPipe,liu2019nitrosketch,OpenSketch,Entropy1,k-ary,lean,revsketch}). In light of increasing network traffic and ever-evolving application dynamics, sketches have emerged as a promising solution for real-time network telemetry. This paper is a reflection on the current state of sketch-based telemetry to examine not just what sketch-based systems {\em can} do but what {\em should} be done to enable broader adoption. To this end, we look at the state of the sketch-based telemetry ecosystem from the perspective of three key stakeholders in Figure~\ref{fig:intro}: (1) {\em Network Operators (NO\xspace)} who are the users/consumers of telemetry capabilities; (2) {\em Algorithm Designers (AD\xspace)} who design and analyze sketching algorithms; and (3) {\em Platform Vendors and Developers (PVD\xspace)} who provide hardware/software primitives and APIs in various platforms (e.g., Intel DPDK~\cite{dpdk}, Barefoot Tofino~\cite{tofino}, Broadcom Trident~\cite{trident}, Mellanox~\cite{mellanox}, among others) and use these APIs to implement sketch-based functions. \begin{figure}[t] \centering \includegraphics[width=0.88\linewidth]{figures/intro.pdf} \vspace{-2mm} \tightcaption{Overview of the problems from the stakeholders in sketch-based telemetry.} \vspace{-2mm} \label{fig:intro} \end{figure} By taking this ecosystem-level view, we identify four gaps between the stakeholders' requirements and interactions and existing research (blue boxes in Figure~\ref{fig:intro}): \begin{packeditemize} \item {\bf NO-Centric:} Most existing efforts assume operators have extensive knowledge about the algorithms and their underlying data structures, which is not realistic. There are few, if any, efforts to help operators translate high-level intents into sketches. This requires both high-level interfaces and precise resource management. While NOs' intents may involve different sketches and devices, current solutions (e.g., ~\cite{univmon,elasticsketch,sketchlearn,moshref2014dream}) do not consider the composition of multiple types of sketches and the heterogeneity of network devices. \item {\bf Between NO\xspace/AD\xspace:} Prior theoretical work in sketching algorithms covers many common telemetry tasks, and more recent work on general sketches can cover a broad portfolio of tasks~\cite{univmon}. Despite these advances, many common NO\xspace intents fall outside the scope of the literature. For instance, for attack detection, operators are interested in obtaining statistics from not only one dimension of the data (e.g., SrcIP) but multiple dimensions (e.g., any subset of the combinations in the 5-tuple). Conversely, we find that the theory community has many rich capabilities and streaming models (e.g., turnstile~\cite{li2014turnstile,turnstile1}, sliding-window~\cite{exponential_histogram,braverman2007smooth,zero_one_sliding}, and distributed functional monitoring~\cite{cormode2013continuous, functional_monitor}) that are yet to find practical adoption in networking. \item {\bf Between AD\xspace/PVD\xspace:} While sketching algorithms are theoretically lightweight, existing algorithms may not be efficiently realizable across diverse platforms, as highlighted by recent efforts~\cite{SketchVisor,liu2019nitrosketch,elasticsketch,HashPipe,yang2020joltik}. Similarly, while existing languages and APIs~\cite{p4_studio,p4,netronome} are sufficiently expressive to specify different sketch algorithms, na\"ive implementations are often resource-intensive, thus nullifying any potential benefits~\cite{SketchVisor,liu2019nitrosketch}. This suggests the need for new sketch-centric APIs, language support, and best practices. \item {\bf Between NO\xspace/PVD\xspace:} Given that the success of the operator's policies depends crucially on how accurately telemetry reflects current network conditions, verifying the practical accuracy and correctness of sketches after deployment is a major priority for the NO\xspace. In addition, while platform vendors have designed and delivered trusted hardware capabilities (e.g., Intel SGX~\cite{intel-sgx}, AMD SEV~\cite{amd-sev}, and ARM TrustZone~\cite{arm-trustzone}) to ensure the integrity of the program running on the device, the integrity of sketch-based telemetry logic has yet to be protected. \end{packeditemize} Our contribution in this paper is to identify and formulate challenges that need to be addressed to enable sketch-based telemetry to be more widely adopted. While this list of challenges is by no means exhaustive, our goal is to start the conversation regarding the ecosystem's missing pieces. We hope that our work will inspire the community to tackle these as-yet-unsolved issues, eventually enabling the practical adoption of sketch-based telemetry. \section{Unified Front End} \begin{table}[t] \centering \footnotesize \begin{tabular}{ cl } \toprule \textbf{Constants} & Definition\\ \midrule $\mathcal{Q}$ & Set of telemetry queries\\ $\mathcal{O_A}$ & Set of accuracy requirements, \\ & ~~e.g., accuracy target and confidence level\\ $\mathcal{O_P}$ & Set of performance requirements,\\ & ~~e.g., packet rate performance\\ $\mathcal{T}$ & Topology information, e.g., links and devices\\ $\mathcal{D}$ & Set of device instances with resource constraints, \\ & ~~e.g., SmartNIC with 4 engines and 10MB SRAM \\ $\mathcal{T_r}$ & Workload characteristics, e.g., distribution \\ \end{tabular} \begin{tabular}{ cl } \toprule \textbf{Variables} & Definition\\ \midrule ${S}$ & Set of sketch instances with memory configurations \\ $r_{s,d}$ & Resource config.
for sketch $s$ on device instance $d$ \\ $l_{s,d}$ & Processing latency for sketch $s$ on device instance $d$\\ $c_{s,d}$ & Implementation of sketch $s$ on device $d$\\ $c_{d}$ & Implementation of all sketches on device $d$\\ $m_{s,d}$ & Actual memory usage of sketch $s$ on device $d$ from $c_{s,d}$ \\ \bottomrule \end{tabular} \caption{Summary of parameters and variables in open problems.} \label{tab:definition} \end{table} \subsection{Limitations of Existing Work} \vyas{perhaps don't harp on this too much or call out explicit attention to the limitations of the existing work. in some sense we are already setting up that expectation in sec 1. just dive into the detailed problems and then highlight why/where existing stuff falls short?} \antonis{I would drop the subsection header - limitations of prior work-. I agree with Vyas' comment and also I personally don't like to see a subsection right under a section header (but that's just my pet-peeve. I think the same text would be perfectly suitable as intro to the section} \prs{I agree: focus on open problems and discussion limitations of previous work in that context.} \hun{I have another question - 3. overview section and all of problems in section 4 - 6 sounds great. The thing is that some of problems themselves suggest a solution of "automation" (e.g., problem6/7 suggest process automation to replace the role of platform developer). I think it would be good to setup this clearly in advance by saying either we are suggesting only problems or we talk about problems with solutions by the framework even though it is a high level. Or in other words, do we envision that the problems are solved in manual effort by collaborative work between algo developer and platform developer? or we envision that all of the problems are solved by automated manner by the framework so that communication is not needed anymore.} Traditionally, sketch-based telemetry systems are designed under a ``narrow-scope'' assumption with respect to the generality of the queries they can support. Specifically, existing frameworks are either designed to support only one type of query~\cite{sigmod_paper} or assume that the operator will determine at query time the appropriate (available) sketch for each query. For example, to detect super-spreaders (i.e., SrcIPs that connect to many distinct DstIPs), the user needs to make a choice between X,Y sketches, whereas to detect heavy hitters they need to choose between A,B sketches~\cite{apache_druid}. As a result, developing a unified user front-end for such telemetry systems was, to the best of our knowledge, never seen as a key design priority. The closest efforts to our goal are OpenSketch~\cite{OpenSketch} and Sonata~\cite{sonata}. OpenSketch~\cite{OpenSketch} is a sketch-based telemetry library on NetFPGA, including Count-Min~\cite{CMSketch}, the K-ary sketch~\cite{k-ary}, and the reversible sketch~\cite{ReversibleSketch}. Although OpenSketch showed a way to program its controller in order to add a new measurement program, it remains unclear whether OpenSketch can define all existing or future sketch-based measurement queries. While Sonata provides a unified interface to query telemetry statistics, its interface is based on a SQL-like language to specify detailed user-level queries. However, Sonata does not consider sketches as its monitoring primitives and overly complicates the query definitions for sketches. Similarly, other efforts such as NetQRE~\cite{netqre} and Marple~\cite{marple} provide query interfaces for network operators to specify their telemetry policies and a different set of telemetry metrics (e.g., network performance), but they were designed without considering sketch abstractions as telemetry primitives, and it is unclear whether they can support sketch-based queries without overly complicated query definitions. \subsection{Open Problems} \begin{insight}[Query Language] Can we design a high-level declarative language that can precisely define the telemetry queries $\mathcal{Q}$ supported by sketches? \end{insight} Problem 1 is about designing a declarative query language specifically for sketch-based telemetry systems. With the growth of sketch functionality, it becomes increasingly difficult for network operators to study the details of the sketching algorithms in the telemetry system in order to properly define the query input and output. To let network operators use sketch-based telemetry systems easily, we ideally need a high-level query language that is expressive enough to cover the broadest possible set of queries while hiding the intricacies of the execution from the user. Specifically, the user should be able to conceptually describe the characteristics of the telemetry query they want to execute (e.g., type of estimation, aggregation of data necessary, required accuracy constraints) without having to explicitly specify the mechanism of execution. \begin{insight}[Traffic-Oblivious Compilation] For all workload characteristics $\mathcal{T_r}$, device instances $\mathcal{D}$, and a set of queries $\mathcal{Q}$, can we build a compiler that translates $\mathcal{Q}$ into a set of platform-agnostic sketch configurations ${S}$ that meet the requirements $\mathcal{O_A}$? \end{insight} \antonis{I have a concern here: I got slightly confused here after reading your definition of problem 2 and problem 3. I think you need to bring the discussion of "different workload params produce different accuracy guarantees" here and elevate more the point that at this step you produce a set of candidate sketches. There's significant overlap right now between 2-3 and it's unclear from the current text where you draw the line. } Problem 2 is about translating telemetry queries into a composition of sketch configurations that can meet the accuracy requirements of the queries, regardless of traffic workload characteristics and hardware platforms. This is possible because the accuracy guarantee of sketches is hardware-agnostic and depends only on the memory configuration. Thus, we can potentially leverage the theoretical analysis to provide a sketch memory configuration, treating the network-wide devices as ``one big switch'' such that the worst-case accuracy requirements on this switch are guaranteed. For example, if a query specifies a heavy-hitter task with $98\%$ accuracy and a 0.99 confidence level, we expect a compiler to generate a minimal sketch configuration (e.g., a Count-Min sketch with $r\times d$ counters) that maintains errors $\le 2\%$ with probability 0.99 under any workload distribution and hardware platform. This problem can be considered a first compilation step towards network-wide device-aware resource allocation in the control plane, which usually requires target-agnostic memory configurations and the corresponding performance characteristics on each hardware target as input.
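As a toy illustration of such a compilation step, the following Python sketch maps an (accuracy, confidence) requirement to a worst-case Count-Min configuration using the standard bounds (width $\lceil e/\epsilon \rceil$, depth $\lceil \ln(1/\delta) \rceil$); the input/output encoding here is hypothetical, and a real compiler would also have to select among sketch types and compose them.

\begin{verbatim}
import math

def compile_count_min(eps, delta):
    """Smallest worst-case Count-Min configuration guaranteeing additive
    error <= eps * (stream size) with probability >= 1 - delta,
    for any workload distribution."""
    width = math.ceil(math.e / eps)
    depth = math.ceil(math.log(1.0 / delta))
    return {"sketch": "count-min", "rows": depth, "cols": width,
            "memory_bytes": 4 * depth * width}   # 32-bit counters

# Q1-style heavy-hitter query: 98% accuracy target, 0.99 confidence.
print(compile_count_min(eps=0.02, delta=0.01))
# -> {'sketch': 'count-min', 'rows': 5, 'cols': 136, 'memory_bytes': 2720}
\end{verbatim}

Because these bounds hold for arbitrary workloads, the resulting configuration is traffic-oblivious; tightening it for known workload characteristics is exactly the refinement addressed later (Problem 5 in the next section).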
\section{Research Challenges} Next, we formulate a broad (but non-exhaustive) list of open research problems P1$\dots$P6 and some of their extensions for a sketch-based telemetry ecosystem. As depicted in Figure~\ref{fig:problems}, we conceptually cluster these challenges according to each stakeholder's needs and considerations. \begin{figure}[t] \vspace{-3mm} \centering \includegraphics[width=0.66\linewidth]{figures/problems.pdf} \vspace{-4mm} \tightcaption{Open problems between stakeholders.} \vspace{-2mm} \label{fig:problems} \end{figure} \medskip \noindent {\bf Preliminaries:} We introduce some terms and notations to formulate the problems (summarized in Table~\ref{tab:definition}). \begin{packeditemize} \item The constants represent the inputs to the telemetry system. Specifically, network operators can define their telemetry needs by a list of input constants: (1) Queries $\ensuremath{\mathcal{Q}}$ consisting of a set of $k$ (potentially infinite) query definitions $\{q_1,\dots,q_k\}$; (2) Requirements $\ensuremath{\mathcal{R_A}}=\{\ensuremath{{r_a}}^1,\dots, \ensuremath{{r_a}}^k\}$ defining a set of accuracy requirements (e.g., accuracy target 95\% with 0.99 confidence) for queries $\{\ensuremath{{q}}_1,\dots,\ensuremath{{q}}_k\}$ and similarly $\ensuremath{\mathcal{R_P}}=\{{r_p}^1, \dots, {r_p}^k\}$ as the packet rate requirements; (3) Network characteristics including topology information $\ensuremath{\mathcal{T}}$, device information with resource capabilities $\ensuremath{\mathcal{D}}$, and traffic workload characteristics $\ensuremath{\mathcal{W_r}}$. \item The variables are the notations for the intermediate or final outputs of the telemetry system: (1) $\ensuremath{{S}}$ is a set of sketch definitions with appropriate memory and flow-key/OD-pair configurations (e.g., a Count-Min sketch tracking 5-tuple flows with $5 \times 2048$ 32-bit counters); (2) $\ensuremath{{r_{s,d}}}$ is the resource configuration of sketch instance $s$ on device $d$ (e.g., assigning 200KB and 2 cores for $s$ on a CPU) and $\ensuremath{{l_{s,d}}}$ is the processing latency of $s$ on $d$ (e.g., $1\mu s$ on a CPU); (3) $\ensuremath{{c_{s,d}}}$ is the implementation (binary code) of sketch instance $s$ on device $d$. When there are multiple sketch instances in $d$, $\ensuremath{{c_{d}}}$ represents the implementation of all instances combined; (4) $\ensuremath{{r_{d}}}$ is the actual resource usage of $\ensuremath{{c_{d}}}$ and $\ensuremath{{l_{d}}}$ is the actual processing latency of $\ensuremath{{c_{d}}}$. \end{packeditemize} \begin{table}[t] \centering \footnotesize \begin{tabular}{ cl } \toprule \textbf{Constants} & Definition\\ \midrule $\ensuremath{\mathcal{Q}}$ & Set of telemetry queries\\ $\ensuremath{\mathcal{R_A}}$ & Set of accuracy requirements, \\ & ~~e.g., accuracy target and confidence level\\ $\ensuremath{\mathcal{R_P}}$ & Set of performance requirements, e.g., packet rate\\ $\ensuremath{\mathcal{T}}$ & Topology information, e.g., links and devices\\ $\ensuremath{\mathcal{D}}$ & Set of device instances with resource constraints, \\ & ~~e.g., SmartNIC w/ 4 engines and 10MB SRAM \\ $\ensuremath{\mathcal{W_r}}$ & Traffic workload characteristics, e.g., distribution \\ \end{tabular} \begin{tabular}{ cl } \toprule \textbf{Variables} & Definition\\ \midrule ${S}$ & Set of sketch definitions with configurations \\ $r_{s,d}$ & Resource config. for sketch $s$ on device $d$ \\ $l_{s,d}$ & Processing latency for sketch $s$ on device $d$ \\ $c_{s,d}$ & Implementation of sketch $s$ on device $d$\\ $c_{d}$ & Implementation of all sketches on device $d$\\ $r_{d}$ & Actual resource usage of device $d$ from $c_{d}$ \\ $l_{d}$ & Actual processing latency of device $d$ from $c_{d}$ \\ \bottomrule \end{tabular} \vspace{-2mm} \caption{Summary of notations in problem definitions.} \vspace{-3mm} \label{tab:definition} \end{table} \subsection{Network Operator-Centric} \begin{insight}[Query Language] Is there a high-level declarative language that can precisely define sketch-based telemetry queries $\ensuremath{\mathcal{Q}}$? \end{insight} Traditionally, sketch-based telemetry is designed with a narrow scope in the queries it supports. Specifically, existing frameworks are either designed to support one type of query~\cite{sigmod2019} or assume that the operators determine at query time the appropriate (available) sketch for each query. For example, to detect superspreaders (i.e., SrcIPs that connect to many distinct DstIPs), the operators need to make a choice between Count-Min + HLL and CountSketch + UnivMon, whereas to conduct change detection they need to choose between K-ary and Count-Min. As a result, developing a unified front-end for such telemetry systems was, to the best of our knowledge, never seen as a key design priority. Specifically, the operators should be able to conceptually describe the characteristics of a query to execute (e.g., type of metrics, appropriate aggregation of data, accuracy constraints) without explicitly specifying the execution mechanism. Existing efforts have proposed several query languages for network telemetry~\cite{sonata,NetQRE,marple}, streaming databases~\cite{TelegraphCQ,gigascope}, and traffic analysis~\cite{chimera}. These efforts are self-contained for their systems but may not be an ideal fit for sketch-based telemetry. Specifically, they do not consider sketches as their primitives and overly complicate the query definitions for sketches. For instance, Sonata~\cite{sonata} can specify detailed packet-level queries with dataflow operators (e.g., map, filter, reduce), but it is unclear how to describe sketches, and NetQRE~\cite{NetQRE} extends quantitative regular expressions~\cite{qre} to define flow-level and application-level statistics and policies. In addition, the telemetry tool Marple~\cite{marple} is designed to support only a particular set of performance metrics. Similarly, streaming databases such as Gigascope~\cite{gigascope} support continuous queries over packet headers or counts via a SQL-like language but do not support other metrics such as network performance and traffic patterns.
\begin{insight}[Resource Optimization] Given a set of queries $\ensuremath{\mathcal{Q}}$ with accuracy requirements $\ensuremath{\mathcal{R_A}}$ and performance requirements $\ensuremath{\mathcal{R_P}}$, traffic workload characteristics $\ensuremath{\mathcal{W_r}}$, topology $\ensuremath{\mathcal{T}}$, and device instances $\ensuremath{\mathcal{D}}$, generate resource configurations $\ensuremath{{r_{s,d}}} \ \forall s,d$ within a time budget such that $\sum_s\sum_d \ensuremath{{r_{s,d}}}$ is minimized and $\forall s \in {S}$ meets $\ensuremath{\mathcal{R_A}}$ and $\ensuremath{\mathcal{R_P}}$. \end{insight} Given a set of queries $\ensuremath{\mathcal{Q}}$, each with associated accuracy and performance requirements, traffic workload characteristics, and a network topology, the operator's high-level goal is to deploy appropriate sketches across the deployment such that SLAs are met while minimizing overall resource usage. The operator ideally wants to view their deployment as ``one big switch'' without worrying about manually distributing sketches across the various devices in the deployment to ensure appropriate correctness and coverage. However, realizing this conceptual goal requires addressing a number of sub-challenges, which we introduce now and discuss in more detail in the following subsections: \begin{packeditemize} \item {\bf Problem 3:} Translate each $q \in \ensuremath{\mathcal{Q}}$ to appropriate sketch definitions with conservative (traffic-oblivious) memory configurations $\ensuremath{{S}}$ to meet accuracy requirements $\ensuremath{\mathcal{R_A}}$. \item {\bf Problem 4:} Given a heterogeneous network deployment, develop optimal device-specific sketch implementations, given sketch definitions and configurations $s \in \ensuremath{{S}}$. \item {\bf Problem 5:} Given traffic workload characteristics $\ensuremath{\mathcal{W_r}}$, optimize each sketch's memory configuration to provide a better memory-accuracy tradeoff and further reduce resource usage. \item {\bf Problem 6:} Once sketches are deployed on device $d$, verify their correctness to ensure the expected accuracy requirements $\ensuremath{\mathcal{R_A}}$ are met. \end{packeditemize} While prior work presented an early version of a network-wide solution~\cite{univmon}, it does not take traffic workload characteristics, different types of sketches, or the heterogeneity of the devices into account, and can converge to a sub-optimal or even infeasible sketch placement and resource allocation. Figure~\ref{fig:univmon} shows a simple scenario where network-wide UnivMon does not optimally place three Count-Min sketch instances in a topology of three programmable devices. Specifically, in the example, the operator wants to know the 5-tuple heavy hitters over traffic between devices A and C (CM1), and the heavy hitters over traffic between devices B and C separately for (SrcIP, SrcPort) and (DstIP, DstPort) flow keys (CM2 and CM3). Resource optimization approaches will decide which sketch is placed on which device while being aware of the resources required for these sketches given different performance requirements for different devices: (1) UnivMon, which is unaware of the interaction between performance requirements and resource usage, tries to balance memory usage by placing a sketch on each device. This results in placing a sketch on device $A$, which sees 20 Mpps of traffic. In order to accommodate a sketch and support this forwarding rate, device $A$ requires 4 cores. (2) A better strategy shifts telemetry load towards device $B$\footnote{Device B runs in a CPU polling mode.}, which sees less traffic and can accommodate 2 sketches while meeting the 10 Mpps requirement. Device $A$ in this strategy does not maintain a sketch and only needs 2 cores to sustain 20 Mpps of traffic forwarding. Note: device $C$'s compute resources are the same in both strategies and hence are not shown. \begin{mycorollary}[Maximum Performance] Given a set of queries $\ensuremath{\mathcal{Q}}$ with requirements $\ensuremath{\mathcal{R_A}}$, topology information $\ensuremath{\mathcal{T}}$, and devices $\ensuremath{\mathcal{D}}$, output resource configurations $\ensuremath{{r_{s,d}}}$ for all $s,d$ such that $\sum_s\sum_d \ensuremath{{l_{s,d}}}$ is minimized and $\forall s \in \ensuremath{{S}}$ meets $\ensuremath{\mathcal{R_A}}$. \end{mycorollary} This extension aims at providing an optimized network-wide sketch placement and resource allocation that meets the device resource constraints and minimizes the total packet processing overhead. In this optimization, we aim at deploying a telemetry solution that handles the largest possible volume of traffic for the given queries, which potentially offers the ability to monitor bursty traffic. Meanwhile, this type of optimization is useful for operators to control the maximum volume of traffic that goes into the telemetry infrastructure. \begin{figure}[t] \centering \includegraphics[width=0.66\linewidth]{figures/univmon-fig.pdf} \includegraphics[width=0.88\linewidth]{figures/example-table.pdf} \vspace{-2mm} \tightcaption{Example of network-wide UnivMon not optimally placing the sketches.} \vspace{-1mm} \label{fig:univmon} \end{figure} \subsection{Network Operator \& Algorithm Designer} \begin{insight}[Queries to Sketch Definitions] Design a compiler that translates queries $\ensuremath{\mathcal{Q}}$ into sketch definitions and configurations $\ensuremath{{S}}$ that meet the accuracy requirements $\ensuremath{\mathcal{R_A}}$. \end{insight} Here, our focus is on translating telemetry queries into a set of \textit{practical} sketch definitions with memory configurations satisfying the accuracy requirements of the queries, irrespective of traffic workload characteristics and hardware platforms. This is possible because the accuracy guarantees of sketches are hardware-agnostic and depend only on the memory configuration. Thus, one can potentially leverage the theoretical analysis from algorithm designers to provide traffic-oblivious sketch memory configurations. For example, if a query specifies a heavy-hitter task with $98\%$ accuracy and a 0.99 confidence level, we envision a compiler that generates a platform-agnostic sketch configuration (e.g., a Count-Min sketch with $r\times d$ counters) that maintains errors $\le 2\%$ with probability 0.99 under any workload distribution. This is the first step towards network-wide device-aware resource management, which requires as input target-agnostic memory configurations, treating the network-wide topology as ``one big switch'', and the corresponding performance characteristics on each hardware target. \begin{mycorollary} [Expressiveness] If the network operator's telemetry queries $\ensuremath{\mathcal{Q}}$ cannot be compiled to $\ensuremath{{S}}$, can algorithm designers develop new sketching algorithms to address the failures?
\end{mycorollary} While there have been significant advances in developing sketches for various telemetry tasks, the intents of network operators may still fall outside the scope of existing sketching algorithms. We need algorithm designers to step in and come up with improved or new sketches. Meanwhile, the theory community has already developed a rich pool of sketching tools that may be relevant to the operator's needs. The challenge lies in how to effectively collect and formulate these requirements to motivate algorithm designers to develop new algorithms or disprove their feasibility. \subsection{Algorithm Designer \& Platform Vendor/Developer}\label{sec:alg_plat} \begin{insight}[Sketch Implementation] Given a sketch configuration $s\in \ensuremath{{S}}$ and a device $d$, generate a sketch implementation $\ensuremath{{c_{s,d}}}$ that minimizes the actual resource usage $\ensuremath{{r_{s,d}}}$ and latency $\ensuremath{{l_{s,d}}}$. \end{insight} Ideally, we want to generate optimized platform-specific sketch implementations for any sketch configuration and device instance. Today, this requires significant effort from both platform vendors/developers and algorithm designers to deliver an optimized sketch implementation per hardware target~\cite{SketchVisor,univmon,HashPipe,yang2020joltik}. What is missing today are tools (e.g., optimizing compilers) that take as input an algorithm definition and configuration defined in a high-level language, and automatically output an implementation that is optimized for a particular hardware target. With such a tool, algorithm designers will not need to worry about how to implement a current or future sketching algorithm on a hardware architecture, and platform developers will not need to worry about understanding the algorithmic details in order to implement the sketches. Existing efforts on the P4 language and its target-specific compilers are expected to contribute in this direction. Unfortunately, our benchmark demonstrates that existing sketch implementations on programmable switches using P4 are far from resource-efficient (Table~\ref{tab:hwresource})\footnote{Sketch configurations in the table: R: rows, C: columns, L: levels. CountSketch (R=5, C=2048), UnivMon (L=16, R=5, C=2048), R-HHH (L=25, R=5, C=2048), SketchLearn (L=112, R=1, C=2048).}. Compared to a fully functional switch implementation (switch.p4), existing sketches use excessive switch hardware resources (e.g., up to $15\times$ more hash function calls and $17\times$ more stateful ALUs). \begin{table}[t] \centering \scriptsize{ \begin{tabular}{|l|c|c|c|c|} \hline {\bf Resource} &CountSketch&UnivMon& R-HHH & SketchLearn \\ \hline Match Crossbar & 10.0\% & 177.2\% & 476.9\% & 347.7\%\\ SRAM & 3.5\% & 56.3\% & 88.4\% & 78.9\%\\ Hash Bits & 3.7\% & 59.6\% & 91.6\% & 82.1\%\\ Hash Func. Calls & 62.5\% & 1100.0\% & 1562.5\% & 700.0\%\\ Stateful ALUs & 71.4\% & 1142.9\% & 1785.7\% & 1600.0\%\\ \hline \end{tabular}} \vspace{-2mm} \caption{ Additional H/W resource usage in Barefoot Tofino by existing sketch implementations. The numbers are normalized by the usage of the baseline switch.p4.} \vspace{-3mm} \label{tab:hwresource} \end{table} Recent efforts have focused on performance bottlenecks of sketching algorithms running inside virtual software switches~\cite{SketchVisor, liu2019nitrosketch,elasticsketch}. While they address the compute/memory bottlenecks in various software sketch implementations, their ideas do not directly transfer to other hardware platforms. For instance, NitroSketch~\cite{liu2019nitrosketch} increases the memory footprint to reduce CPU consumption, but the key resource constraints in a hardware context are different (e.g., processing stages, ALUs, and hash function calls)~\cite{tofino}. SketchVisor~\cite{SketchVisor} and ElasticSketch~\cite{elasticsketch} split a sketch into a fast path and a slow path, and use the fast path to accelerate packet processing. This type of idea is not particularly useful in hardware switches, where all packet operations must stay in the fast path~\cite{rmt}. \begin{mycorollary}[Multi-Sketch Implementation] Given all sketch configurations $s \in \ensuremath{{S}}$ and a device instance $d$, generate a consolidated sketch implementation $\ensuremath{{c_{d}}}$ for device $d$ such that the actual device resource usage $\ensuremath{{r_{d}}}$ and processing latency $\ensuremath{{l_{d}}}$ are minimized. \end{mycorollary} This extension is about optimizing the sketch implementation on a device when multiple sketch instances are present. Our observation is that many sketches share common primitive operations (hash computation, counter updates, etc.), and we expect that the actual resource usage and packet processing performance on a device $d^*$ can be further optimized to less than $\sum_s r_{s,d^*}$ and $\sum_s l_{s,d^*}$. A recent proposal~\cite{gao2019autogenerating} shows the promise of using program synthesis to auto-generate fast packet-processing hardware implementations on programmable switches using fewer hardware resources. While this direction is promising in general for Problem 4 and its extension, this work is a preliminary demonstration on one particular hardware architecture, and we would like to see whether a similar approach can be designed for other platforms and how many more resources it can save. \subsection{Network Operator \& Platform Vendor/Developer} \begin{insight}[Sketch Configuration] Given a set of traffic workload characteristics $\ensuremath{\mathcal{W_r}}$ and traffic-oblivious sketch configurations $S$ that meet the accuracy requirements $\ensuremath{\mathcal{R_A}}$, output a minimal platform-agnostic memory configuration for each $s \in \ensuremath{{S}}$ that meets the accuracy requirement. \end{insight} This problem entails finding a minimal memory configuration that meets a certain accuracy requirement for a sketch and a given type of traffic workload characteristics (e.g., skewness, number of flows). Problem 3 attempts to provide a traffic-oblivious memory configuration for the sketch to meet the accuracy requirement under any workload. For platform vendors, it is important to fully understand the resource-accuracy tradeoff of the user functions running atop their platforms and to continue improving the cost-efficiency of their architectures. In practice, network operators often have a basic understanding of and expectations about the workloads, such as skewness and distribution, and then the traffic-oblivious configuration may no longer be tight. For instance, Count Sketch can achieve a better memory-accuracy tradeoff if the workload is skewed following a Zipfian distribution~\cite{CountSketch}. SketchLearn~\cite{sketchlearn} leverages automated statistical inference to actively ``learn'' the traffic workload characteristics and configure its sketch on the fly, relieving the user's burden of sketch memory configuration.
While a learning-based approach is promising in resolving this problem, SketchLearn did not tackle the configurations of other types of sketches and we are unsure whether the model inference used in SketchLearn is an optimal choice. \begin{insight}[Verification] Given sketch implementation $\ensuremath{{c_{d}}}$ on device $d$, ensure that $\ensuremath{{c_{d}}}$ will correctly meet the accuracy requirements when running on $d$? \end{insight} Once sketch implementations have been deployed to various devices, one question is that whether the on-device sketch instances will work as expected. Specifically, when an adversary is present, network operators want to verify the integrity of the sketch instances such that the output is correctly reflecting the network traffic conditions. We can think of this verification in two aspects: (1) Network operators can naturally verify the accuracy of sketches if the integrity of the on-device sketch instance is guaranteed. (2) If such integrity cannot be guaranteed, operators need to identify the occurrences when sketches failed to meet the accuracy requirements. Current platform vendors have been on an active race to offer secure enclave primitives on various hardware targets such as Intel SGX, AMD SEV, and ARM TrustZone for mapping arbitrary functions to trusted memory. It remains an open challenge on how to leverage secure hardware capabilities as the ``root-of-the-trust'' for sketch-based telemetry. Existing efforts \cite{han2017sgx-box, poddar2018safebricks, trach2018shieldbox} demonstrate the promises of protecting network functions with hardware enclaves (e.g., Intel SGX). However, those efforts are not capable of sketch-based telemetry because (1) sketches require high throughput guarantees while existing frameworks such as SafeBricks~\cite{poddar2018safebricks} and SGX-Box~\cite{han2017sgx-box} incur high processing overhead, and (2) these efforts are designed for general-purpose network functions where redundant modules and complexities are included. \section{Control Plane} \subsection{Limitations of Existing Work} Existing efforts on sketch-based telemetry, such as UnivMon~\cite{univmon}, SketchVisor~\cite{SketchVisor}, Elastic Sketch~\cite{elasticsketch}, have tackled on the network-wide resource allocation problem that distributes sketch capabilities among a network to offer ``one-big-switch'' telemetry abstraction using minimized resource footprints. However, these efforts fall short in one or more of the following dimensions: (1) {\em Traffic-awareness} to optimize the global memory usage and packet processing performance based on traffic demands and expect traffic characteristics; (2) {\em Device heterogeneity} to optimize the global memory usage and packet processing performance based on diverse device capabilities. As a result, existing network-wide solutions utilize excessive compute and memory resources compared to the optimal possible resource allocation. For instance, \alan{TODO: pending an quantitative example from Anup.} \subsection{Open Problems} \begin{insight}[Optimized Sketch Configuration] Given workload characteristics $\mathcal{T_r}$ and a sketch $s$ with accuracy requirements, can we output a minimal platform-agnostic memory configuration for $s$ that meets the accuracy requirement? \end{insight} Problem 3 is about determining a minimal memory configuration meeting certain accuracy requirement for a sketch and a given type of workload characteristics (e.g., skewness, number of flows). 
Traditionally, this kind of worst-case theoretical analysis can determine a memory configuration that fulfills the accuracy requirement. However, in practice the resulting memory-accuracy tradeoff is ``loose'': if we only need to support workloads with predictable characteristics (which are usually much better than the worst case), we should be able to search for a configuration that both consumes less memory and meets the accuracy requirement. Solving this problem would let operators provision sketch memory against their expected workloads rather than against the worst case. \begin{insight}[Resource Optimization] Given a set of sketches $\mathcal{S}$ with dynamically changing accuracy requirements $\mathcal{O_A}$ and performance requirements $\mathcal{O_P}$, traffic characteristics $\mathcal{T_r}$, topology information $\mathcal{T}$, and device instances $\mathcal{D}$, can we output the resource configuration $r_{s,d}$ for all $s,d$ within a time budget such that $\sum_s\sum_d r_{s,d}$ is minimized and every $s \in \mathcal{S}$ meets $\mathcal{O_A}$ and $\mathcal{O_P}$? \end{insight} Problem 4 is about minimizing the network-wide sketch resource usage across a network of heterogeneous devices. Given the requirements on accuracy and packet processing performance, the goal of this optimization is to decide which sketch should be placed on which device and how much resource should be allocated to each sketch on that device, such that the total resource usage is minimized. In this problem, the traffic characteristics, path information (topology), and device specifications all play a role in sketch placement and resource allocation. For example, if an OD-path has known traffic characteristics and offers several types of devices on which to place telemetry capabilities, placing the same sketch on different hardware (e.g., a switch vs.\ a server CPU) will incur different memory and compute resource usage. \begin{insight}[Maximum Performance] Given a set of sketches $\mathcal{S}$ with requirements $\mathcal{O_A}$, topology information $\mathcal{T}$, and devices $\mathcal{D}$, can we output a resource configuration $r_{s,d}$ for all $s,d$ such that $\sum_s\sum_d l_{s,d}$ is minimized and every $s \in \mathcal{S}$ meets $\mathcal{O_A}$? \end{insight} Problem 5 is to provide an optimized network-wide sketch placement and resource allocation that meets the device resource constraints and minimizes the packet processing overhead. In this optimization, we aim to deploy a telemetry solution that can handle the largest possible volume of traffic for the given queries, which potentially offers the ability to monitor bursty traffic.
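As a rough sketch of the shape of Problems 4 and 5, the following toy formulation (entirely ours: the sketch names, device capacities, accuracy-driven memory minimums, and the use of the open-source PuLP solver are all illustrative assumptions) casts placement and memory allocation as an integer linear program; a realistic solution would additionally need per-device latency models and the resource-accuracy profiles discussed above:

\begin{verbatim}
# Toy version (ours) of Problem 4: pick one device per sketch and size
# its memory so that accuracy-driven minimums and device capacities
# are respected while total allocated memory is minimized.
import pulp  # pip install pulp

sketches = ["hh_cm", "change_cs"]                # hypothetical tasks
capacity = {"tofino1": 8.0, "server1": 64.0}     # device memory (MB)
min_mem = {"hh_cm": 1.5, "change_cs": 3.0}       # MB needed to meet O_A

prob = pulp.LpProblem("sketch_placement", pulp.LpMinimize)
place = pulp.LpVariable.dicts("place", (sketches, capacity), cat="Binary")
mem = pulp.LpVariable.dicts("mem", (sketches, capacity), lowBound=0)

prob += pulp.lpSum(mem[s][d] for s in sketches for d in capacity)
for s in sketches:
    prob += pulp.lpSum(place[s][d] for d in capacity) == 1  # place once
    for d in capacity:
        prob += mem[s][d] >= min_mem[s] * place[s][d]       # accuracy
for d, cap in capacity.items():
    prob += pulp.lpSum(mem[s][d] for s in sketches) <= cap  # capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in sketches:
    for d in capacity:
        if place[s][d].value() == 1:
            print(s, "->", d, mem[s][d].value(), "MB")
\end{verbatim}

Problem 5 keeps the same constraint structure but swaps the objective for the total latency $\sum_s\sum_d l_{s,d}$; the hard part in practice is obtaining faithful resource and latency coefficients for heterogeneous devices and re-solving within the time budget as requirements change.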
\section{Introduction} It is well-known that the separation of variables in the Hamilton--Jacobi equation of a natural Hamiltonian is characterized by the existence of sets of Killing 2-tensors ($\mathcal K_m$) and Killing vectors~($D_r$) with suitable properties \cite{[13],[4],[6]}. The aim of this paper is to show some geometrical properties of a separable web (i.e., a set of foliations of coordinate hypersurfaces) by means of the analysis of $D_r$ and $\mathcal K_m$; in particular, we provide an algebraic method for determining the equations of the separable coordinate hypersurfaces. For instance, it is known that on a two-dimensional manifold the separation is characterized by a Killing tensor $\mathbf K$ with pointwise simple eigenvalues $(\lambda^1,\lambda^2)$. Separable coordinates $(q^1,q^2)$ can be determined in two ways: (i) by setting $q^1=\lambda^2$ and $q^2=\lambda^1$, or (ii) by integrating the eigenvectors $(\mathbf E_1,\mathbf E_2)$, which are orthogonal to the coordinate curves. In case (i) we need to assume that both eigenvalues are real independent functions (except at most on a closed subset of the conf\/iguration manifold); if the eigenvalues are not independent (for instance, one of them is constant), then symmetries, i.e., Killing vectors or ignorable coordinates, are present and the problem becomes even simpler. We remark that method (i) is purely algebraic: starting from $\mathbf K$, we have only to solve its characteristic equation. In \cite{[10]} we extended this analysis, for the orthogonal separation, to $n$-dimensional Riemannian manifolds. For $n>2$ the eigenvalues of the Killing tensors do not def\/ine directly separable coordinates; however, we show how to construct rational functions of them which are constant on the separable coordinate hypersurfaces. In the present paper we analyze the most general case of separation on a (pseudo) Riemannian manifold, without assumptions on the signature of the metric and on the orthogonality of the separable coordinates. In the following we shall refer to this kind of separation as ``non-orthogonal separation'' or ``general separation''. In non-orthogonal separation a fundamental role is played by a particular $m$-dimensional space of Killing tensors $\mathcal K_m$ ($m \leq n$), whose properties relevant for our aims are recalled in Section \ref{s1}. We illustrate an algebraic algorithmic method for constructing a set of intrinsically def\/ined functions, that we call {\it fundamental functions}, starting from a special class of eigenvalues of $m$ independent Killing tensors of $\mathcal K_m$. The analysis of these functions allows us to detect two classes of symmetries ({\it proper} and {\it conformal}) of the St\"ackel matrix associated with the separation. When no proper symmetries occur, the fundamental functions are an ef\/fective tool for computing the equations of the separable coordinate hypersurfaces. Other algorithmic approaches for f\/inding separable coordinates are already known in important particular cases, such as orthogonal separation in constant-curvature manifolds with positive def\/inite metric \cite{[18],rauch}; they are essentially based on prior knowledge of all separable coordinates in these spaces (such as generalized elliptic coordinates and their degenerations).
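As a toy instance of method (i), consider in ${\mathbb R}^2$ the Killing tensor $\mathbf K=\mathbf R\otimes\mathbf R$ of the polar web, where $\mathbf R=x\,\partial_y-y\,\partial_x$ is the rotation f\/ield; the following fragment (an illustration of ours, not part of the constructions of this paper) performs the purely algebraic step of solving the characteristic equation:

\begin{verbatim}
# Method (i) on a toy example: the Killing tensor K = R (x) R of the
# polar web in R^2, where R = x d/dy - y d/dx, in Cartesian components
# (the metric is the identity, so ordinary eigenvalues suffice).
import sympy as sp

x, y = sp.symbols('x y', real=True)
K = sp.Matrix([[y**2, -x*y],
               [-x*y, x**2]])

print(K.eigenvals())   # {0: 1, x**2 + y**2: 1}
\end{verbatim}

The constant eigenvalue $0$ signals the rotational symmetry, while the level curves of the non-constant eigenvalue $x^2+y^2$ are precisely the circles of the corresponding web.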
A further generalization considered here is the analysis of the case of the so-called {\it orthogonal conformal separation}, which deals with separable coordinates for the null-geodesic HJ-equation, or for a HJ-equation of a natural Hamiltonian with a f\/ixed value of the energy \cite{[7]}. The orthogonal separable coordinates can be considered as a special case of this broader class of coordinates. In the intrinsic characterization of the orthogonal conformal separation, the Killing tensors are replaced by conformal Killing tensors with suitable properties. Following the same procedure as for the ``ordinary'' separation, we are able to construct intrinsic functions allowing us to deduce geometrical properties of the conformal separable coordinate hypersurfaces or to construct their equations. \looseness=1 In Section \ref{s1} we recall the basic intrinsic characterizations of the non-orthog\-onal separation on a Riemannian manifold in a form suitable for our needs. In Section \ref{s2} we describe our method and its application to a simple example. In Section \ref{s3}, devoted to the orthogonal separation, we improve the analysis given in \cite{[10]} and we show the links between eigenvalues of Killing tensors and proper or conformal symmetries of the associated coordinate systems. In the orthogonal case, by ``proper symmetry'' (resp., ``conformal symmetry'') of a coordinate system we mean that there are Killing vectors (resp., conformal Killing vectors) orthogonal to some foliations of the web. In Section \ref{s4}, we see how the def\/initions of proper and conformal symmetries of the coordinates can be extended to the general separation, and we generalize our results to the cases when non-orthogonal or null coordinates occur. In Section \ref{s5}, we summarize the intrinsic characterization of the orthogonal conformal separation \cite{[7],[14]} and we apply our algebraic method to the case of conformal separable orthogonal webs, showing how to detect conformal symmetries and how to write the equations describing the foliations without conformal symmetries. Each section is completed by illustrative examples: the spherical coordinates in ${\mathbb R}^3$ (Section~\ref{s2}), the L-systems \cite{[5]}, also known as Benenti systems \cite{[8],[9],tsiga} (Section~\ref{s3}), two non-orthogonal 4-dimensional coordinate systems (one of them with null coordinates) in Section \ref{s4}, and the conformal separable coordinate system known as tangent-spheres coordinates \cite{[15]} (Section \ref{s5}). Moreover, by applying our analysis to L-systems, we prove an interesting geometrical property of these systems: for $n>2$, none of the common eigenvectors of the associated Killing tensors is a proper conformal Killing vector. \section{An outline of geodesic separation on Riemannian\\ and pseudo-Riemannian manifolds}\label{s1} We consider an $n$-dimensional Riemannian manifold $Q$ with contravariant metric $\mathbf G=(g^{ij})$ of arbitrary signature and the corresponding {\it geodesic Hamiltonian} $G=\frac 12 g^{ij}p_ip_j$. A relation of equivalence is def\/ined among separable coordinate systems for $G$ \cite{[1],[2]} such that in each class there are the particular coordinate systems described in Theorem \ref{t1.1} below. We recall that a~regular square matrix $(\varphi_j^{(i)})$ is a {\it St\"ackel matrix} if each element $\varphi_j^{(i)}$ is a function of the coordinate~$q^j$ only.
\begin{theorem}[\cite{[1],[2]}] \label{t1.1} In an equivalence class of separable coordinates there exists a {\rm standard coordinate system} $(q^{\hat a},q^{\bar a}, q^{\alpha})$ such that {\rm (i)} The metric tensor has the {\rm standard form} \begin{equation} \label{1.1} (g^{ij})=\bordermatrix{ &\overbrace{\hphantom{AAA}}^{\displaystyle{m_1}} &\overbrace{\hphantom{AAA}}^{\displaystyle{m_0}} &\overbrace{\hphantom{AAA}}^{\displaystyle{r}}\cr m_1\phantom{r}\bigg\{ & g^{\hat a\hat a} & 0 & 0 \cr m_0\phantom{r}\bigg\{ & 0 & 0 & g^{\bar b\beta} \cr r\phantom{m_0}\bigg\{ & 0 & g^{\alpha\bar a} & g^{\alpha\beta}\cr}, \end{equation} where the coordinates $(q^{\alpha})$ $(\alpha=m_0+m_1+1,\ldots,n)$ are {\sl ignorable}, $\partial _{\alpha}g^{ij}=0$ $(i,j=1,\ldots,n)$. {\rm(ii)} The non-vanishing metric components have the form \begin{equation}\label{1.2} \begin{array}{l} g^{\hat a\hat a}=\varphi^{\hat a}_{(m)},\\ g^{\bar a \beta}=\theta^{\beta}_{\bar a}\varphi^{\bar a}_{(m)},\\ g^{\alpha \beta}=\eta^{\alpha \beta}_a\varphi^a_{(m)}, \end{array} \qquad \begin{array}{l} a=1,\ldots, m_1+m_0, \\ \hat a=1,\ldots, m_1, \\ \bar a=m_1+1,\ldots, m_1+m_0, \\ \alpha,\beta =m_1+m_0+1,\ldots, n , \end{array} \end{equation} where $\theta^\beta_{\bar a}$ and $\eta^{\alpha\beta}_a$ are functions of the coordinate corresponding to the lower index only and $(\varphi ^a_{(m)})=(\varphi^{\hat a}_{(m)},\varphi^{\bar a}_{(m)})$, $m=m_0+m_1$, is a row of the inverse of an $m\times m$ St\"ackel matrix $\mathbf S$ in the coordinates $(q^a)$. \end{theorem} \begin{theorem}[\cite{[4]}] \label{t1.2} The geodesic Hamiltonian $G$ is separable if and only if there exists a pair $(D_r,\mathcal K_m)$, called {\sl separable Killing algebra}, such that: a) $D_r$ is an $r$-dimensional Abelian algebra of Killing vectors spanning a regular distribution $\Delta$ of rank $r$ such that $I=\Delta \cap \Delta^{\perp}$ has constant rank $m_0$. b) $\mathcal K_m$ is an $m$-dimensional space of Killing tensors, generated by $m$ independent tensors $(\mathbf K_a)$, with $m$ common eigenvectors orthogonal to $D_r$ which are normal (i.e., orthogonally integrable) and associated with real eigenvalues. c) $\mathcal K_m$ is $D_r$-invariant. d) For $m_0>1$, $d(\mathbf K\cdot dg^{\alpha \beta})=0$ for any $\mathbf K\in \mathcal K_m$, where $g^{\alpha\beta}={\mathbf X}_\alpha \cdot {\mathbf X}_\beta$ for any basis $({\mathbf X}_\alpha)$ of $D_r$. \end{theorem} By ``Killing tensor'' (KT) we mean a symmetric two-tensor $\mathbf K$ such that $[\mathbf K,\mathbf G]=0,$ where $[\cdot,\cdot]$ denotes the Lie--Schouten brackets. With a separable Killing algebra we associate an important kind of coordinates in the following way: \begin{definition}\label{d1.3} \rm Let $(D_r,\mathcal K_m)$ be a separable Killing algebra and $(\mathbf E_a)$ be the $m$ normal common eigenvectors of the elements of $\mathcal K_m$. The coordinates $(q^a,q^\alpha)$ are {\it adapted coordinates} of $(D_r,\mathcal K_m)$ if the coordinate hypersurfaces of $q^a$ are the integral manifolds orthogonal to each~$\mathbf E_a$ (or equivalently, the dif\/ferentials $dq^a$ are common eigenforms of $\mathcal K_m$) and the vector f\/ields $\partial_\alpha$ form a basis of $D_r$ (i.e., the $q^\alpha$ are the af\/f\/ine parameters of a given basis $\mathbf X_\alpha$ of $D_r$ with zero values on a chosen integral manifold of $\Delta^\perp$); the $m$ orthogonal coordinates $(q^a)$ are said to be {\it essential}.
The $m_0$ essential coordinates $(q^{\bar a})$ such that $g^{\bar a \bar a}=0$ are called {\it isotropic} or {\it null coordinates}. \end{definition} The integrability of the distribution $\Delta^\perp$ is a consequence of the hypotheses of Theorem \ref{t1.2} \cite{[4]}. We remark that if $Q$ is a proper Riemannian manifold (or if the coordinates are orthogonal) there are no isotropic coordinates. \begin{theorem}[\cite{[4]}] \label{t1.4} Let $(q^i)=(q^{\hat a}\!,q^{\bar a}\!,q^\alpha)$ be an adapted coordinate system of a separable Killing algebra $(D_r,\mathcal K_m)$. Then, {\rm (i)} The coordinates $(q^i)$ are standard separable coordinates and each tensor $\mathbf K\in\mathcal K_m$ assumes the standard form \begin{equation} \label{1.3} (K^{ij})= \begin{pmatrix} \lambda^{\hat a}\,g^{\hat a\hat a} & 0 & 0 \cr &&\cr 0 & 0 & \lambda^{\bar b}\,g^{\bar b\beta} \cr &&\cr 0 & \lambda^{\bar a}\,g^{\alpha\bar a} & K^{\alpha\beta}\cr \end{pmatrix}. \end{equation} {\rm (ii)} Given a basis $(\mathbf K_{a})$ of $\mathcal K_m$, the non-vanishing components of each $\mathbf K_{b}$ $(b=1,\ldots, m)$ assume the form \begin{equation} \label{1.4} \begin{array}{l} K_b^{\hat a\hat a}=\varphi^{\hat a}_{(b)},\\ K_b^{\bar a \beta}=\theta^{\beta}_{\bar a}\varphi^{\bar a}_{(b)},\\ K_b^{\alpha \beta}=\eta^{\alpha \beta}_a\varphi^a_{(b)}, \end{array} \qquad \begin{array}{l} a=1,\ldots, m, \quad b=1,\ldots, m,\\ \hat a=1,\ldots, m_1, \quad \bar a=m_1+1,\ldots, m, \\ \alpha,\beta =m+1,\ldots, n, \end{array} \end{equation} where $(\varphi^a_{(b)})$ is a row of $\mathbf S^{-1}$. \end{theorem} \begin{remark}\label{r1.5} \rm The functions $(\lambda^a)$ are the eigenvalues of $\mathbf K$ corresponding to the common eigenvectors of all tensors in $\mathcal K_m$ and satisfy the following {\sl intrinsic Killing equations}: \begin{equation} \label{1.5} \begin{array}{l} \partial _a\lambda ^b=(\lambda^a-\lambda^b)\partial _a\ln \varphi^b_{(m)}, \\ \partial _aK^{\alpha \beta}=\lambda^a\partial_ag^{\alpha \beta}, \\ \partial _{\alpha}\lambda^j=0, \end{array} \qquad a=(\hat a,\bar a),\quad a,b=1,\dots, m,\quad j=1,\dots, n. \end{equation} \end{remark} The geometric realization of an equivalence class of separable coordinates is called {\it separable Killing web} \cite{[4]}: \begin{definition} \label{d1.6} \rm A {\it separable Killing web} is a triple $(\mathcal S_m, D_r,\mathbf K)$, where (i) $\mathcal S_m=(S^a)$ is a set of $m$ foliations of hypersurfaces pairwise transversal and orthogonal; (ii) $D_r$ is a $r$-dimensional Abelian algebra of Killing vectors tangent to each foliation $S^a$; (iii) $\mathbf K$ is a $D_r$-invariant Killing tensor with $m$ eigenvectors $(\mathbf E_a)$ associated with $m$ pointwise distinct real eigenvalues $(\lambda^a)$ and orthogonal to the foliations $(S^a)$. The KT $\mathbf K$ is called {\it characteristic tensor of the web}. \end{definition} \begin{remark}\label{d1.7} \rm The existence of a separable Killing web is equivalent to the existence of a separable Killing algebra, or of separable coordinates for $G$. Indeed, a separable Killing web $(\mathcal S_m,D_r,\mathbf K)$ gives rise to a standard separable coordinate system $(q^a,q^\alpha)$ such that the coordinate hypersurfaces $q^a=\hbox{constant}$ are leaves of $S^a$ and the vector f\/ields $(\partial_\alpha)$ associated with $q^\alpha$ form a basis of $D_r$. 
\end{remark} From Def\/initions \ref{d1.3} and \ref{d1.6}, it follows that only the essential coordinates are associated with the eigenvectors of Killing tensors, that is, with the foliations $(S^a)$ of a separable Killing web. Therefore, in the following sections we restrict our attention to the essential coordinates. In Example~\ref{e4.14} we show a separable Killing web, the corresponding separable Killing algebra and the St\"ackel matrix in the adapted coordinates for the case $n=4$, $m=2$ and $m_0=0$. \section{The method of the eigenvalues}\label{s2} In order to clarify the exposition, we collect in this section the results proved in Sections \ref{s3}--\ref{s5}. We recall that \begin{definition}\label{d2.1} \rm A vector f\/ield $\mathbf X$ is said to be a {\it conformal Killing vector} (CKV) if there exists a~function $F$ such that \[ [\mathbf X,\mathbf G]=\mathcal L_{\mathbf X}\, \mathbf G=F\mathbf G, \] where $[\cdot,\cdot]$ is the Schouten bracket and $\mathcal L$ the Lie-derivative. If $F=0$, $\mathbf X$ is a Killing vector (KV) and if $F \neq 0$ we call $\mathbf X$ a {\it proper} CKV. \end{definition} \begin{remark}\label{r2.2} \rm A coordinate $q^{i}$ is ignorable if and only if $\partial _{i}$ is a Killing vector. Moreover, in an orthogonal system, if $\partial _{i}$ is proportional to a KV (i.e., there is a function $f$ such that $f\partial_i$ is a~KV) then $q^i$ is ignorable up to a rescaling $\tilde q^i=\tilde q^i(q^i)$. \end{remark} \begin{note} \label{n2.3} Here and in the following, for any matrix $\mathbf A=(a_j^i)$ the lower index is the row-index and the upper one is the column index. Moreover, we shall denote by $\mathbf A_j^i$ the submatrix of $\mathbf A$ obtained by eliminating the $j$-th row and the $i$-th column. \end{note} \noindent \textsc{Step 1. Construction of the fundamental functions.} Let $(D_r,\mathcal K_m)$ be a separable Killing algebra associated with a separable web $(D_r, \mathcal S_m)$ and let $(\mathbf K_1,\ldots,\mathbf K_m=\mathbf G)$ be a basis of $\mathcal K_m$. \begin{itemize}\itemsep=0pt \leftskip .0cm \item[i)] We determine the essential eigenvalues of $(\mathbf K_a)$ associated with the common essential eigenvectors $(\mathbf E_1,\ldots, \mathbf E_m)$ orthogonal to $D_r$. \item[ii)] We construct the regular (see Remark \ref{r:ult}) $m\times m$ matrix $\Lambda=(\lambda_a^b)$ of the essential eigenvalues of $\mathbf K_a\in\mathcal K_m$ ordered as follows: $\lambda_a^b$ is the eigenvalue of $\mathbf K_a$ associated with the common eigenvector $\mathbf E_b$. \\ We remark that for the construction of $\Lambda$ we have to order properly the eigenvalues of each KT and, to do that, we need to compute the eigenvectors. However, our further analysis is based only upon the matrix $\Lambda$ of the eigenvalues and no integration is needed. \item[iii)] For $a,b,c=1,\ldots,m$ we consider the intrinsic ratios \begin{equation} \label{2.1} f_a^{bc}=\frac{\det \Lambda_b^a}{\det \Lambda_c^a}, \end{equation} that we call {\it fundamental functions}. They are {\it well-defined} functions only if $\det \Lambda_c^a$ is not identically zero. Moreover, since $f_a^{bc}=1/f_a^{cb}$ and $f_a^{bb}$ is equal to 1 or everywhere undef\/ined, in the following we shall assume $b>c$. \end{itemize} \noindent\textsc{Step 2. Analysis.} Let us f\/ix an index $a$. Due to the regularity of $\Lambda$, at least one fundamental function (\ref{2.1}) is well def\/ined (Proposition \ref{p3.3}).
By examining the functions $f_a^{bc}$ written in an arbitrary coordinate system, we can easily detect symmetries of the foliation $S^a\in \mathcal S_m$; if $S^a$ has no symmetries we obtain the equation of the foliation $S^a$. Indeed, two dif\/ferent and mutually exclusive cases occur: \begin{itemize}\itemsep=0pt \item[iv)] There exist indices $c$ and $b$ such that the function $f_a^{bc}$ is not identically constant. In this case, in a neighborhood of each point $P_0$ such that $df_a^{bc}(P_0)\neq 0$, equation \[ f_a^{bc}=f_a^{bc}(P_0) \] def\/ines a hypersurface containing $P_0$ and orthogonal to the eigenvector $\mathbf E_a$ (i.e., a hypersurface of $S^a$). Hence, equations \[ f_a^{bc}=k, \] for suitable values of $k\in {\mathbb R}$, describe the foliation $S^a$ (see Theorems \ref{t3.8} and \ref{t4.11}, for the orthogonal and the general case respectively). \item[v)] For the f\/ixed index $a$ all functions $f_a^{bc}$ constructed in \textsc{Step 1} are constant or undef\/ined. Then, special properties of the adapted coordinates of $(D_r,\mathcal K_m)$ hold. Let $(q^1,\ldots,q^m)$ be the essential coordinates adapted to the foliations $(S^1,\ldots,S^m)$. Up to a reparameterization of $q^a$, the St\"ackel matrix $\mathbf S=(\varphi_c^{(b)})$ and its inverse matrix $\mathbf S^{-1}=(\varphi_{(c)}^b)=(\lambda_c^b\varphi_{(m)}^b)$ do not depend on $q^a$ (see Theorem \ref{t4.9} (ii)). We call the vector f\/ield $\partial_a$ a {\it St\"ackel symmetry} (see Section \ref{s4} for further details). \end{itemize} We remark that we do not need to distinguish between isotropic and non-isotropic coordinates. Moreover, if we consider the orthogonal separation (i.e., when $m=n$, $D_r=0$ and all coordinates are essential) then we are able to test if a foliation $S^a$ is orthogonal to a Killing vector or to a conformal Killing vector, by examining the fundamental functions. Indeed, the following properties hold: \begin{itemize}\itemsep=0pt \item[vi)] All fundamental functions $f_a^{bc}$ are constant or undef\/ined if and only if the eigenvector $\mathbf E_a$ is proportional to a Killing vector i.e., the associated adapted coordinate $q^a$ is (up to a~reparameterization) ignorable (Theorem \ref{t3.6} (ii)). \item[vii)] All fundamental functions $f_a^{bc}$ with $c<b<n$ ($n>2$) are constant or undef\/ined if and only if the corresponding eigenvector $\mathbf E_a$ is proportional to a conformal Killing vector (Theorem~\ref{t3.6}~(i)). \end{itemize} \begin{remark} \label{r2.4} \rm Also for the general separation, properties analogous to items vi) and vii) hold. Item vi) is in fact a special case of the general situation described in item v), holding for orthogonal coordinates. Indeed, due to Remark \ref{r2.2} and to equations (\ref{1.5})$_1$, we have that $\mathbf E_a$ is proportional to a Killing vector if and only if the St\"ackel matrix and its inverse do not depend on the corresponding coordinate $q^a$. The property stated in item vii) can be extended to general separable coordinates as follows (Theorem \ref{t4.9} (i)): All functions $f_a^{bc}$ with $c<b<m$ ($m>2$) are constant or undef\/ined if and only if there exists a function $F$ such that (up to a~reparameterization of $q^a$) $\partial_a \varphi_{(m)}^b=F\varphi_{(m)}^b$ for all $b=1,\ldots,m$. Then, we call $\partial_a$ a {\it conformal St\"ackel symmetry} (see Section \ref{s4} for further details). 
\end{remark} \begin{remark} \label{r2.5} \rm For $m=2$, we have only two fundamental functions: $f_1^{21}=\lambda^2$ and $f_2^{21}=\lambda^1$, i.e., the eigenvalues of the characteristic tensor. \end{remark} We show how the method works in the following simple but illustrative example. \begin{example} \label{e2.6} \rm Let us consider in ${\mathbb R}^3$ the spherical coordinates centered at a point $O$, with axis $\omega$ passing through $O$ and parallel to a unit vector $\mathbf n$. It is well known that they are orthogonal separable coordinates for the geodesic Hamiltonian. Thus, we have $m=n=3$ and all coordinates are essential. A basis of $\mathcal K_3$ is \[ \mathbf K_1=r^2\mathbf G-\mathbf r\otimes \mathbf r, \qquad \mathbf K_2=(\mathbf n\times\mathbf r)\otimes (\mathbf n\times \mathbf r),\qquad \mathbf K_3=\mathbf G, \] where $\mathbf r$ is the position vector with respect to $O$ and $r=\| \mathbf r\|$. The common eigenvectors are \[ \mathbf E_1=\mathbf r, \qquad \mathbf E_2=\mathbf r\times (\mathbf n\times \mathbf r),\qquad \mathbf E_3=\mathbf n\times \mathbf r, \] which are orthogonal to the foliation $S^1$ of the spheres centered at $O$, to the foliation $S^2$ of the circular cones with vertex $O$ and axis $\mathbf n$ and to the foliation $S^3$ of the meridian half-planes issued from $\omega$, respectively. The matrix $\Lambda$ of the eigenvalues of $\mathbf K_a$ is \[ \Lambda=\begin{pmatrix} 0 & r^2 & r^2 \\ 0 & 0 & \|\mathbf n\times \mathbf r\|^2 \\ 1 & 1 & 1 \end{pmatrix}. \] By computing $\det \Lambda^a_b$ for $a,b=1,\ldots,3$, we get that the non-vanishing ones are \[ \det\Lambda_1^1=\det\Lambda_1^2=-\|\mathbf n\times \mathbf r\|^2, \qquad \det\Lambda_3^1=r^2\|\mathbf n\times \mathbf r\|^2, \qquad \det\Lambda_2^2=\det\Lambda_2^3=- r^2. \] The fundamental functions (\ref{2.1}) are summarized in the following table (n.d.\ means that the denominator vanishes identically and the function is not def\/ined) \begin{center} \begin{tabular}[c]{|c|c|c|c|} \hline & $c=1$, $b=2$ & $c=1$, $b=3$ & $c=2$, $b=3 \vphantom{\dfrac 12}$ \\ \hline $a=1$ & $f_1^{21}=0$ & $f_1^{31}=-r^2$ & $f_1^{32}{\hbox { n.d.}}\vphantom{\dfrac 12}$ \\ \hline $a=2$ & $f_2^{21}=\frac{r^2}{\parallel \mathbf n\times\mathbf r\parallel^2}$ & $f_2^{31}=0$ & $f_2^{32}=0\vphantom{\dfrac 12}$ \\ \hline $a=3$ & $f_3^{21} {\hbox { n.d.}}$ & $f_3^{31}{\hbox { n.d.}}$ & $f_3^{32}=0 \vphantom{\dfrac 12}$ \\ \hline \end{tabular} \end{center} For $a=1$, the function $f_1^{31}=-r^2$ is constant on the hypersurfaces of $S^1$ and equation $f_1^{31}=k$, for real negative values of $k$, describes all the spheres of $S^1$. According to the fact that $\mathbf E_1=\mathbf r$ is a CKV $(\mathcal L_{\mathbf r}\mathbf G=-2\mathbf G)$, we have that for all $c<b<3$ all functions $f_1^{bc}$ are constant or undef\/ined. For $a=2$, the level sets of the non-constant function $f^{21}_2={r^2}{ \|\mathbf n\times \mathbf r\|^{-2}}$ are the surfaces of~$S^2$ and, since the upper indices are both $<3$, the eigenvector $\mathbf E_2$ is not proportional to a~CKV. For $a=3$, since for any $b$ and $c$, $f_3^{bc}$ is undef\/ined or identically constant, we have that $\mathbf E_3$~is a Killing vector (the rotation around the axis $\omega$) and the corresponding coordinate $q^3$ (the rotational angle) is ignorable. \end{example}
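The computation above is easily mechanized, since only minors of $\Lambda$ are involved. The following fragment (our illustration; the choice $\mathbf n=(0,0,1)$ and all names are ours) reproduces the table with a computer algebra system, following Note~\ref{n2.3} for the indexing of minors:

\begin{verbatim}
# Check of Example 2.6 with sympy: spherical web in R^3, n = (0,0,1).
# Row a of Lam holds the eigenvalues of K_a on (E_1, E_2, E_3).
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r2 = x**2 + y**2 + z**2        # ||r||^2
w2 = x**2 + y**2               # ||n x r||^2

Lam = sp.Matrix([[0, r2, r2],
                 [0,  0, w2],
                 [1,  1,  1]])

def f(a, b, c):
    # f_a^{bc} = det(Lam_b^a)/det(Lam_c^a); minor(i, j) deletes row i
    # and column j (0-indexed) and returns the determinant.
    num, den = Lam.minor(b - 1, a - 1), Lam.minor(c - 1, a - 1)
    return sp.simplify(num / den) if den != 0 else None   # None = n.d.

print(f(1, 3, 1))   # -x**2 - y**2 - z**2 = -r^2: the spheres of S^1
print(f(2, 2, 1))   # (x**2+y**2+z**2)/(x**2+y**2): the cones of S^2
print(f(3, 3, 2))   # 0: constant, consistent with E_3 being a KV
\end{verbatim}

No integration is needed at any stage, in agreement with the remark in Step~1.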
\section{Orthogonal separable webs}\label{s3} We consider an $n$-dimensional Riemannian manifold $Q$ with positive def\/inite metric $\mathbf G$ and the corresponding geodesic Hamiltonian $G=\frac 12 g^{ij}p_ip_j$. We suppose that $G$ is orthogonally separable. We adapt the general results of Section \ref{s1} to the case $m=n$, $m_0=0$. Thus, some of the geometric structures introduced in Section \ref{s1} are simplif\/ied. The separable Killing web (Def\/inition \ref{d1.6}) is replaced by the {\it orthogonal separable web} $(\mathcal S_n,\mathbf K)$, that is, a set of $n$ pairwise orthogonal foliations $\mathcal S_n=(S^i)$ orthogonal to the eigenvectors of the Killing tensor $\mathbf K$ with simple eigenvalues (the {\it characteristic tensor} of the web). In the orthogonal context, the linear space $D_r$ of Killing vectors disappears and the $n$-dimensional space of Killing tensors $\mathcal K_n$ associated with $\mathcal S_n$ is called {\it Killing--St\"ackel algebra} (KS-algebra) or {\it Killing--St\"ackel space}. All Killing tensors of $\mathcal K_n$ have common normal (i.e., orthogonally integrable) eigenvectors $(\mathbf E_i)$, the integral manifolds orthogonal to $\mathbf E_i$ are the leaves of $S^i$ and all coordinates are essential. We denote by $(\mathbf K_{j})$ a~basis of $\mathcal K_n$ with $\mathbf K_{n}=\mathbf G$. Adapting Theorem \ref{t1.4} to the orthogonal separation, we get \begin{proposition}\label{p3.1} Let $(q^i)$ be a coordinate system adapted to the KS-algebra $\mathcal K_n$. Then, {\rm (i)} the $(q^i)$ are orthogonal separable coordinates for $G$. {\rm (ii)} Given a basis $(\mathbf K_{j})$ of $\mathcal K_n$, we have \[ \mathbf K_{j}=\sum_i \lambda_{j}^i g^{ii}\, \mathbf E_i \otimes \mathbf E_i= \sum_i K_{j}^{ii}\, \mathbf E_i \otimes \mathbf E_i, \qquad \forall\, j=1,\ldots, n, \] where $\mathbf E_i$ are common eigenvectors of $\mathbf K_{j}$ and $\lambda_{j}^i$ are the corresponding eigenvalues. \end{proposition} We call ${\mathbf S}^{-1}=\big( \varphi_{(j)}^i\big)$ the regular $n\times n$ matrix of the components of $(\mathbf K_{j})$: \begin{equation} \label{3.1} \varphi_{(j)}^i=K_{j}^{ii}= \lambda_{j}^i g^{ii}, \qquad \varphi_{(n)}^i=K_{n}^{ii}= g^{ii}. \end{equation} As in the general case (see equation~\eqref{1.4}), its inverse matrix ${\mathbf S}=\big(\varphi_i^{(j)}\big)$ is a St\"ackel matrix. Moreover, we consider the invariant $n\times n$ matrix $ \Lambda=(\lambda_j^i) $ of the eigenvalues of a basis of $\mathcal K_n$, introduced in Section \ref{s2}. Theorems \ref{t3.6} and \ref{t3.8} below give a complete algebraic characterization of orthogonal separable webs in terms of eigenvalues of the associated Killing--St\"ackel spaces, considerably impro\-ving the analysis contained in \cite{[10]} and providing a rigorous proof of the method illustrated in Section \ref{s2} for the orthogonal separation. \begin{proposition}\label{p3.2} For any fixed index $i=1,\ldots,n$ the fundamental functions \begin{equation} f^{jh}_i=\frac {\det \Lambda_j^i}{\det \Lambda_h^i}, \label{3.2} \end{equation} when well-defined, depend on $q^i$ only: in particular we have \begin{equation} f_i^{jh}=(-1)^{h+j}\frac {\varphi_i^{(j)}}{\varphi_i^{(h)}}. \label{3.3} \end{equation} \end{proposition} \begin{proof} We have the following relations between $\mathbf S^{-1}$ and $\Lambda$: \begin{equation} \det \mathbf S^{-1}= \det \Lambda\prod_i g^{ii}, \qquad \det (\mathbf S^{-1})_h^k=\dfrac{\det \Lambda_h^k}{g^{kk}} \prod_i g^{ii}.
\label{3.4} \end{equation} By the def\/inition of the inverse matrix and by (\ref{3.4}), we see that each element of $\mathbf S$ has the following expression \[ \varphi_i^{(j)}=(-1)^{i+j}\,\frac{\det (\mathbf S^{-1})_j^i}{\det \mathbf S^{-1}}= (-1)^{i+j}\,\frac{\det \Lambda_j^i}{g^{ii} \,\det \Lambda}. \] Hence, by (\ref{3.2}), equation (\ref{3.3}) holds and the fundamental functions (\ref{3.2}) depend on the coordinate~$q^i$ only. \end{proof} \begin{proposition}\label{p3.3} For any index $i$ there exist indices $h$ and $j$ with $h<j$ such that the function~\eqref{3.2} is well-defined. \end{proposition} \begin{proof} Since $\det \Lambda \neq 0$, for each $i$ there exists an index $h_0$ such that $\det \Lambda^i_{h_0}\neq 0$. If $h_0<n$, then $f_i^{jh_0}$ is well-def\/ined for any $j>h_0$. If $h_0=n$, then there exists an index $h<n$ such that $\det \Lambda^i_h\neq 0$. Indeed, the $n\times (n-1)$ matrix $\Lambda^i$ obtained from $\Lambda$ by eliminating the $i$-th column has rank $n-1$. Moreover, being $\det \Lambda^i_n\neq 0$, the f\/irst $n-1$ rows are independent, i.e., they form a basis of a $(n-1)$-dimensional linear space. Since the last row is dif\/ferent from zero (all its elements are equal to 1), there exists a basis made of the last row and other $n-2$ rows of $\Lambda^i$. This means that there exists $h<n$ such that $\det \Lambda^i_h\neq 0$. Hence, for any index $i$, at least one function $f_i^{jh}$ with $j>h$ is well def\/ined. \end{proof} {}From the Def\/inition \ref{d2.1} of CKV, we get the following lemma \begin{lemma} \label{l3.4} The vector field $\mathbf X=f(q^1,\ldots,q^n)\partial _i$ is a CKV if and only if (i) $f$ depends on $q^i$ only; (ii) there exists a function $F$ such that \begin{equation} \partial _i\ln g^{jj}=F, \quad j\neq i, \qquad \partial_i\ln g^{ii}=F+2\partial_i \ln f. \label{3.5} \end{equation} In particular if $F=0$, then $\mathbf X$ is a KV. \end{lemma} \begin{remark}\label{r3.5} \rm By (\ref{3.5}) it follows that $\partial_i \ln g^{hh}=\partial_i \ln g^{jj}$ for all $h,j$ both dif\/ferent from $i$. Moreover, due to item (i) of Lemma \ref{l3.4} the coordinate $q^i$ can always be rescaled in order to have $\mathbf X=\partial_i$. This means that if $\partial_i$ is parallel to a CKV (resp.\ KV), we can assume without loss of generality that $\partial_i$ is a CKV (resp.\ KV), by rescaling the corresponding coordinate. \end{remark} \begin{theorem} \label{t3.6} \it Let $\mathbf E_i$ be a common eigenvector of a KS-algebra $\mathcal K_n$. Then, {\rm (i)} for $n>2$ $\mathbf E_i$ is proportional to a conformal Killing vector if and only if the ratios $f_i^{jh}$ are constant or undefined for every $h<j<n$; in particular, {\rm (ii)} for any $n$ $\mathbf E_i$ is proportional to a Killing vector if and only if the ratios $f_i^{jh}$ are constant or undefined for every $h<j$. \end{theorem} \begin{proof} Let $\mathbf E_i$ be a common eigenvector of $\mathcal K_n$ and $(q^i)$ be separable coordinates adapted to $\mathcal K_n$. For simplicity we take $i=1$. Since the separable coordinates are orthogonal, $\mathbf E_1$ is proportional to $\partial_1$. Thus, without loss of generality, we assume that the vector $\mathbf E_1=\partial_1$ is proportional to a~CKV. From Proposition \ref{p3.2} we have for all $j\neq 1$ \begin{equation} \partial _1f_j^{hk}=0. \label{3.6} \end{equation} Let us consider $\partial _1f_1^{hk}$.
Due to properties of determinants, we have \begin{equation} \partial _1\det \Lambda ^1_k =\sum _{p=2}^n \Xi_p, \label{3.7} \end{equation} where $\Xi _p $ is the determinant of the matrix obtained from $\Lambda_k^1$ by replacing the elements $(\lambda ^p_h)$ of its $p$-th column with $(\partial_1\lambda^p_h)$. By equations (\ref{1.5})$_1$ and (\ref{3.5}), for all $h$, $k$, we have \begin{equation} \partial _1\lambda^h_k=(\lambda^1_k-\lambda^h_k)F. \label{3.8} \end{equation} By substituting (\ref{3.8}) in (\ref{3.7}), we obtain \begin{equation} \partial _1\det \Lambda ^1_k=-F\left( (n-2)\det \Lambda ^1_k-\sum _{p=1}^n(-1)^p\det \Lambda_k^p\right). \label{3.9} \end{equation} We remark that $\sum\limits_{p=1}^n(-1)^p\det \Lambda_k^p$ is (up to the sign) the determinant of the matrix obtained from~$\Lambda$ by replacing the $k$-th row with the row made by $n$ elements equal to~1. Moreover, since for $k\neq n$ also the last row of $\Lambda_k^1$ contains $n$ elements equal to 1, we have \begin{equation} \sum _{p=1}^n(-1)^p\det \Lambda_k^p=0 ,\quad k\neq n,\qquad \sum _{p=1}^n(-1)^p\det \Lambda_n^p=(-1)^{n+1}\det \Lambda. \label{3.10} \end{equation} We can now evaluate $\partial _1f_1^{hk}$ for all indices $h$, $k$ such that the function is well-def\/ined. Recalling that we always assume $k< h$, from (\ref{3.9}) and (\ref{3.10}) it follows \begin{equation} \partial_1f_1^{hk}=0,\quad h\neq n,\qquad \partial_1f_1^{nk}=(-1)^{n+1}F \dfrac {\det \Lambda}{\det \Lambda^1_k}. \label{3.11} \end{equation} This proves that $f_1^{hk}$ is constant for $k<h<n$. In particular, if $F=0$ (i.e., $\partial _1$ is proportional to a Killing vector), then all $f_1^{hk}$ are constant or undef\/ined. Conversely, let us assume that $f_1^{hk}$ is constant or undef\/ined for every $k<h<n$. By (\ref{3.3}) there exist an index $j\neq n$ and $n-1$ real constants $c^{hj}$ ($h=1,\ldots,n-1$) such that for all $h< n$ \begin{equation} \varphi_1^{(h)}=c^{hj}\varphi _1^{(j)} \label{3.12} \end{equation} and, due to the regularity of $\mathbf S$, $\varphi _1^{(j)}\neq 0$. Let $\tilde {\mathbf S}$ be the $n\times n$ matrix obtained from $\mathbf S$ by dividing the f\/irst row by $\varphi_{1}^{(j)}$. The following relations link $\mathbf S$ and $\tilde{\mathbf S}$ \[ \det \mathbf S= \varphi_{1}^{(j)} \det \tilde {\mathbf S}, \qquad \det \mathbf S_h^k= \varphi_{1}^{(j)} \det \tilde {\mathbf S}_h^k, \quad h\neq 1,\qquad \det \mathbf S_1^k= \det \tilde{\mathbf S}_1^k. \] The $n$-th element of the f\/irst row of $\tilde {\mathbf S}$ is the only element of the matrix depending on $q^1$. Thus, \begin{equation} \partial _1\det \tilde {\mathbf S}=(-1)^{n+1}\partial _1\left( \dfrac {\varphi_1^{(n)} }{\varphi_1^{(j)} }\right)\det \tilde {\mathbf S}^n_1, \label{3.13} \end{equation} and for the same reason we get, up to the sign, \begin{alignat}{3} &\partial _1\det \tilde {\mathbf S}_h^k=\partial _1\left( \dfrac {\varphi_1^{(n)} }{\varphi_1^{(j)} }\right)\det \big( \tilde {\mathbf S}_h^k\big)^n_1,\quad&& h\neq 1\ \hbox{and}\ k\neq n,&\nonumber\\ & \partial _1\det \tilde {\mathbf S}_h^k=0, \quad && h=1 \ \hbox{or} \ k=n.& \label{3.14} \end{alignat} From the def\/inition of inverse matrix, we have \begin{gather*} g^{11}=\varphi_{(n)}^1=(-1)^{1+n} \dfrac{\det\mathbf S^n_1}{\det \mathbf S}=(-1)^{1+n}\dfrac{\det\tilde{\mathbf S} ^n_1} {\varphi_1^{(j)}\det \tilde {\mathbf S}},\cr g^{hh}=\varphi_{(n)}^h=(-1)^{h+n} \dfrac{\det\mathbf S^n_h}{\det \mathbf S}=(-1)^{h+n}\dfrac{\det\tilde{\mathbf S}^n_h}{\det \tilde {\mathbf S}}, \qquad h\neq 1.
\end{gather*} Thus, we get \begin{equation} \partial _1g^{11}=(-1)^{1+n}\partial _1\bigg( \dfrac{\det\tilde {\mathbf S}^n_1}{ \varphi_1^{(j)}\det \tilde{\mathbf S}}\bigg), \qquad \partial _1g^{hh}=(-1)^{h+n}\partial _1\bigg( \dfrac{\det\tilde {\mathbf S}^n_h}{\det \tilde{\mathbf S}}\bigg), \label{3.15} \end{equation} for $h\neq 1$. Hence, by (\ref{3.14}$)_2$ we have \[ \partial _1g^{11}=(-1)^{n}\dfrac { (\det\tilde{\mathbf S}^n_1)\partial _1(\varphi_1^{(j)}\det \tilde{\mathbf S} )}{(\varphi_1^{(j)} \det \tilde{\mathbf S} )^2}, \qquad \partial _1g^{hh}=(-1)^{h+n-1}\dfrac { (\det\tilde{\mathbf S}^n_h)\partial _1(\det \tilde{\mathbf S})}{(\det \tilde{\mathbf S})^2}, \] and \begin{equation} \partial _1\ln g^{hh}=-\partial _1\ln (\det \tilde{\mathbf S}), \qquad \partial _1\ln g^{11}=-\partial _1\ln (\det \tilde{\mathbf S})-\partial_1 \ln \varphi_1^{(j)}. \label{3.16} \end{equation} By Lemma \ref{l3.4} it follows that $\partial _1$ is proportional to a CKV with $F=-\partial _1\ln (\det \tilde{\mathbf S})$ and $f=1/\sqrt{\varphi_1^{(j)}}$. We remark that $F$ does not depend on the choice of the element $\varphi^{(j)}_1$ used in the construction of $ \tilde{\mathbf S}$. Indeed, for any other $j'$ such that $\varphi^{(j')}_1\neq 0$, by (\ref{3.12}) we have \[ \varphi_1^{(j')}=c^{j'j}\varphi _1^{(j)} \] for a suitable constant $c^{j'j}\in {\mathbb R}$ and $\det \tilde{{\mathbf S}}={c^{j'j}}\det \tilde{{\mathbf S}}'$, where $\tilde {\mathbf S}'$ is obtained from $\mathbf S$ by dividing the f\/irst row by $\varphi^{(j')}_1$. Then, the function $F$ does not change. In the particular case when all $f_1^{jh}$ are constant, then (\ref{3.12}) holds for all $h=1,\ldots, n$ and by (\ref{3.13}) it follows that $F=-\partial _1\ln (\det \tilde {\mathbf S})=0$. Hence, by (\ref{3.15}) we get $\partial_1 g^{hh}=0$ for any $h\neq 1$ and $\partial_1 \ln g^{11}=-\partial_1 \ln \varphi_1^{(j)}$. According to Lemma~\ref{l3.4} and Remark~\ref{r3.5}, this means that $\partial_1$ is proportional to a Killing vector and $q^1$ is ignorable up to a rescaling. In this case, due to (\ref{1.5})$_1$ all the elements of $\mathbf S^{-1}$ and $\mathbf S$ do not depend on $q^1$. \end{proof} \begin{remark}\label{r3.7} \rm For $n=2$ every eigenvector of a characteristic Killing tensor is proportional to a~CKV. This fact can be checked directly by writing $g^{11}$ and $g^{22}$ in terms of a two-dimensional St\"ackel matrix. \end{remark} From Proposition \ref{p3.2} and Theorem \ref{t3.6} it follows that \begin{theorem}\label{t3.8} \it Let $\mathbf E_i$ be a common eigenvector of a KS-algebra. For every $i=1,\dots, n$ one and only one of the following statements holds: {\rm I)} $\mathbf E_i$ is, up to a scalar factor, a KV. {\rm II)} There exist indices $j,h$ such that, in a neighborhood of any point $P$ where $df_i^{jh}(P)\neq 0$, the equation \begin{equation} f_i^{jh}=\hbox{\rm const} \label{3.17} \end{equation} defines a hypersurface orthogonal to $\mathbf E_i$. \rm \end{theorem} \begin{example} \label{e3.9} \rm Let us consider an {\it L-tensor} $\mathbf L$, that is, a conformal Killing tensor with simple eigenvalues and vanishing Nijenhuis torsion~\cite{[5]}. It is known that, if the eigenvalues $(u^i)$ of the L-tensor $\mathbf L$ are functionally independent (this property is not required in our def\/inition, according to~\cite{[5]}), then the $(u^i)$ are a separable coordinate system for the geodesic Hamilton--Jacobi equation.
In \cite{tsiga} a computer algorithm was implemented for constructing the separable coordinates associated with an L-tensor compatible with the potential of a natural Hamiltonian on a proper Riemannian manifold. We verify the behaviour of the $f_i^{jh}$ for the KS-algebra generated by $\mathbf L$. Moreover, we see that our method allows us to f\/ind new properties of these systems. We recall that a symmetric two-tensor f\/ield $\mathbf K$ is said to be a {\it conformal Killing tensor} (CKT) if there exists a vector f\/ield $\mathbf C$ such that \begin{equation} [\mathbf K,\mathbf G]=2\mathbf C\odot\mathbf G, \label{3.18} \end{equation} where $[\cdot ,\cdot ]$ denotes the Lie--Schouten brackets and $\odot$ the symmetric tensor product. Then (see \cite{[5],[3]}) the tensors $(\mathbf K_0,\mathbf K_1,\ldots, \mathbf K_{n-1})$, where \[ \mathbf K_0=\mathbf G, \qquad \mathbf K_a=\tfrac 1 {a}{\mathrm{tr}}(\mathbf K_{a-1}\mathbf L)\mathbf G- \mathbf K_{a-1}\mathbf L, \quad a\geq 1, \] form a basis of a KS-algebra. The matrix $\Lambda$ of eigenvalues of the $\mathbf K_a$ is $ \Lambda=\big(\sigma_a^i(u^1,\ldots, u^n)\big)$, $a=0,\ldots, n-1$, $i=1,\ldots, n$, where $\underline u=(u^1,\ldots, u^n)$ are the eigenvalues of $\mathbf L$ and, for $a>0$, $\sigma_a^i$ are the elementary symmetric polynomials of degree $a$ in the $n-1$ variables $(u^1,\ldots, u^{i-1},u^{i+1},\ldots,u^n)$; for $a=0$ we set $\sigma_0^i=1$. In this case we have \begin{equation} f_i^{jh} = (u^i)^{h-j}. \label{3.19} \end{equation} Indeed, the inverse matrix of $\Lambda$ is \[ \mathbf A=(A^a_i)= \left((-1)^a\frac{(u^i)^{n-a-1}}{U'(u^i)}\right), \] where $U'(u^i)$ is a suitable function of $u^i$ (see \cite{[3]}), and the fundamental functions satisfy \[ f^{jh}_i=\frac{\det \Lambda^i_j}{\det\Lambda^i_h}=(-1)^{j-h}\frac {A^j_i}{A^h_i}. \] In particular, as expected, we get $ f_i^{j\,j+1}=u^i. $ We notice that, due to (\ref{3.19}), for any f\/ixed $i$ either the fundamental functions are constant for all $j<h$ (if $u^i$ is constant), or none of them is constant (if $u^i$ is a non-constant function). Thus, by Theorem \ref{t3.6}, we obtain the following theorem, which provides an interesting restriction upon L-tensors. \end{example} \begin{theorem} \label{t3.10} \it For $n>2$, an L-tensor has no eigenvector proportional to a proper conformal Killing vector. \end{theorem}
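Relation (\ref{3.19}) itself is easy to check symbolically; for instance, for $n=3$ (a sketch of ours, with generic symbols $u^1$, $u^2$, $u^3$):

\begin{verbatim}
# Check of (3.19) for n = 3: row a+1 of Lam holds the elementary
# symmetric polynomials sigma_a^i in the two variables other than u_i.
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')
us = (u1, u2, u3)
Lam = sp.Matrix([[1, 1, 1],
                 [u2 + u3, u1 + u3, u1 + u2],
                 [u2*u3, u1*u3, u1*u2]])

def f(i, j, h):
    # f_i^{jh} = det(Lam_j^i)/det(Lam_h^i), indexed as in Note 2.3
    return sp.cancel(Lam.minor(j - 1, i - 1) / Lam.minor(h - 1, i - 1))

# f_i^{jh} = (u^i)^(h-j); in particular f_i^{j,j+1} = u^i:
assert all(sp.simplify(f(i, j, j + 1) - us[i - 1]) == 0
           for i in (1, 2, 3) for j in (1, 2))
\end{verbatim}

For $n=3$ this conf\/irms the dichotomy used above: all the $f_i^{jh}$ are constant when $u^i$ is constant, and none of them is when $u^i$ is not.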
\section{Separable webs in Riemannian \\ and pseudo-Riemannian manifolds}\label{s4} In this section we prove that the method shown in Section \ref{s2} is ef\/fective also when the separable coordinates are not orthogonal or the metric is not positive def\/inite and isotropic coordinates may be present. However, in order to adapt the results of the orthogonal case, we have to take into account several dif\/ferences occurring in the general case. First of all, we recall that in the general case we cannot identify $\mathbf E_i$ and $\partial _i$ as in the previous section. Indeed, for orthogonal separable coordinates the common eigenvectors of $\mathcal K_n$ are always of the form $\mathbf E_i=f^i\, \partial _i$ ($i$ not summed), and the corresponding eigenforms $(\mathbf E_i)^\flat=f_idq^i$ are proportional to $dq^i$ (see Def\/inition \ref{d1.3}). In the non-orthogonal case, from Def\/inition \ref{d1.3} we still have $dq^a=(\mathbf E_a)^\flat$, but by (\ref{1.1}) \begin{equation} \mathbf E_{\hat a}=g^{\hat a \hat a}\partial_{\hat a}, \qquad \mathbf E_{\bar a}=g^{\bar a\alpha}\partial_\alpha=\varphi_{(m)}^{\bar a}(\theta_{\bar a}^\alpha \partial_\alpha). \label{4.1} \end{equation} Thus, for all indices $\hat a$ the eigenvectors $\mathbf E_{\hat a}$ are proportional to $\partial_{\hat a}$, but for isotropic coordinates the f\/ields $\mathbf E_{\bar a}$ are not proportional to $\partial_{\bar a}$. Moreover, by (\ref{4.1})$_2$, we see that the $\mathbf E_{\bar a}$ are proportional to the vectors $\theta_{\bar a}^\alpha \partial_\alpha$, which are Killing vectors of the hypersurfaces orthogonal to $\mathbf E_{\bar a}$. If for all $\alpha$ the functions $\theta_{\bar a}^\alpha$ are constant, the vectors $\theta_{\bar a}^\alpha \partial_\alpha$ are KVs of the whole manifold. Since the $\mathbf E_{\bar a}$ are null vectors, we can distinguish between essential eigenvectors of type $\hat a$ and $\bar a$ before knowing the separable coordinates. Moreover, in non-orthogonal coordinates the characterization of the vector f\/ields $\partial_a$ associated with essential coordinates $q^a$ as CKVs or KVs is more complicated than in Lemma \ref{l3.4}: \begin{lemma} \label{l4.1} Let $(g^{ij})$ be in the standard form \eqref{1.1} in coordinates $(q^i)=(q^a,q^\alpha)$. The vector $\mathbf X=\partial_a$ is a CKV if and only if there exists a function $F$ such that \begin{equation} \begin{array}{l} \partial _a\ln g^{\hat b \hat b}=F, \\ \partial _ag^{\bar b \alpha}=Fg^{\bar b \alpha},\\ \partial _ag^{\alpha \beta}=Fg^{\alpha \beta}. \end{array} \qquad \hat b=1,\ldots, m_1,\quad \bar b=m_1+1,\ldots,m. \label{4.2} \end{equation} If $F=0$ then $\mathbf X$ is a KV. \end{lemma} \begin{proof} From the def\/inition of CKV we have, for $\mathbf X=\partial _a$, \[ \partial_a g^{ij}=Fg^{ij}, \] for all $i,j=1,\ldots,n$. By (\ref{1.1}), observing that $g^{\hat b \hat b}\neq 0$, we obtain (\ref{4.2}). \end{proof} Unlike the orthogonal case, by (\ref{1.2}) and (\ref{4.2}) we see that not only the components of the inverse of a St\"ackel matrix are involved in the def\/inition of a CKV, but also the functions~$\theta_{\bar a}^{\alpha}$ and~$\eta_a^{\alpha \beta}$. Since these functions appear in the metric components, we have to modify Theorem \ref{t3.6} for the non-orthogonal case. In particular, we need to def\/ine a new kind of vector f\/ields playing the role of KVs and CKVs in non-orthogonal separable coordinates. \begin{definition} \label{d4.2} \rm Let $(q^i)=(q^a,q^\alpha)$ be standard separable coordinates. The vector $\mathbf X=\partial_a$ is a~{\it conformal St\"ackel symmetry} (CS-symmetry) of the foliation $S^a$ if there exists a function $F$ such that for all $b=1,\ldots, m$ \begin{equation} \partial_a\ln \varphi_{(m)}^b=F. \label{4.3} \end{equation} We say that $\mathbf X$ is a {\it St\"ackel symmetry} (S-symmetry) if it is a CS-symmetry with $F=0$. \end{definition} \begin{remark} \label{r4.3} \rm Due to the regularity of $(g^{ij})$ the $\varphi_{{(m)}}^b$ are all dif\/ferent from zero. For the same reason, for a given $ \bar b$, the $\theta _{\bar b}^\alpha$ are not all zero. \end{remark} \begin{proposition} \label{p4.4} Let $\partial _a$ be a CS-symmetry (S-symmetry). Then, $\partial_a$ is a CKV (KV) if and only if $\theta _a^\alpha $ and $\eta_a^{\alpha \beta}$ are constant for every $\alpha$, $\beta $. \end{proposition} \begin{proof} By (\ref{1.2}) and (\ref{4.3}), for a given CS-symmetry $\partial _a$ we have \begin{gather*} \partial _a\ln g^{\hat b \hat b}=F,\\ \partial _ag^{\bar b\alpha}=\delta _a^{\bar b} \partial _a\theta_{\bar b}^\alpha \varphi_{(m)}^{\bar b}+Fg^{\bar b \alpha},\\ \partial_ag^{\alpha \beta}=\partial _a\eta _a^{\alpha \beta} \varphi^a_{(m)}+Fg^{\alpha \beta}.
\end{gather*} Because of (\ref{4.2}) and of $\varphi ^a_{(m)} \neq 0$, $\partial _a$ is a CKV if and only if $\theta _{\bar a}^\alpha $ and $\eta_a^{\alpha \beta}$, both functions of the coordinate corresponding to the lower index only, are constant for every $\alpha$, $\beta $. \end{proof} \begin{remark} \label{r4.5} \rm Proposition \ref{p4.4} shows that CS-symmetries are not coordinate-independent objects unless the coordinates are orthogonal, in which case they coincide with CKVs. Also for the isotropic coordinates we have that if $\partial_{\bar a}$ is a KV then, by Proposition \ref{p4.4} and (\ref{4.1})$_2$, $\mathbf E_{\bar a}$ is proportional to the KV $\theta_{\bar a}^\alpha \partial_\alpha$. If no isotropic coordinate occurs, then $\partial_a$ is a CS-symmetry if and only if it is a~CKV of each $m$-dimensional submanifold $\{q^\alpha={\rm const}$, $\alpha=m+1,\ldots,n\}$. \end{remark} As in the previous sections, we introduce the $m\times m$ matrix $\mathbf \Lambda=(\lambda^a_b)$ of the essential eigenvalues of a basis $(\mathbf K_1,\ldots, \mathbf K_m=\mathbf G)$ of $\mathcal K_m$ described in Section \ref{s2}. Even if in the construction of $\mathbf \Lambda$ we do not explicitly distinguish between eigenvalues $\lambda^{\hat a}$ and $\lambda^{\bar a}$, we will see in Remark \ref{r4.13} that the distinction is relevant. \begin{proposition}\label{p4.6} The vector $\mathbf X=\partial _a$ is a CS-symmetry if and only if \begin{equation} \partial _a\lambda^b_c=(\lambda^a_c-\lambda ^b_c)F \qquad \forall \;b,\,c=1,\ldots, m, \label{4.4} \end{equation} where $F$ is the function appearing in \eqref{4.3}. The vector $\mathbf X=\partial _a$ is an S-symmetry if and only if \rm \[ \partial_a \lambda^b_c=0 \qquad \forall\; b,\, c=1,\ldots, m. \] \end{proposition} \begin{proof} It follows directly from Def\/inition \ref{d4.2} and (\ref{1.5})$_1$, recalling that in $\mathcal K_m$ there is always at least one KT with distinct essential eigenvalues. \end{proof} \begin{lemma} \label{l4.7} Let $S=(\varphi ^{(b)}_a)$ be the $m\times m$ St\"ackel matrix defined by \eqref{1.2} and \eqref{1.4}. Then \begin{equation} \varphi^{a}_{(b)}=\lambda_b^a \varphi^a_{(m)}, \qquad \forall \; a,b=1,\ldots, m. \label{4.5} \end{equation} \end{lemma} \begin{proof} According to (\ref{1.3}) and (\ref{1.4}), the non-vanishing components of the basis $(\mathbf K_1,\ldots, \mathbf K_m)$ in standard coordinates have the following equivalent forms \begin{equation} K_b^{\hat a\hat a}=\lambda^{\hat a}_b g^{\hat a\hat a}=\varphi^{\hat a}_{(b)},\qquad K_b^{\bar a \beta}=\lambda^{\bar a}_b g^{\bar a\beta} =\theta^{\beta}_{\bar a}\varphi^{\bar a}_{(b)}. \label{4.6} \end{equation} By inserting the expression of the metric (\ref{1.2}) in (\ref{4.6}), we get \[ \lambda^{\hat a}_b \varphi^{\hat a}_{(m)}=\varphi^{\hat a}_{(b)}, \qquad \lambda^{\bar a}_b \theta^{\beta}_{\bar a}\varphi^{\bar a}_{(m)}=\theta^{\beta}_{\bar a}\varphi^{\bar a}_{(b)}. \] Hence, due to Remark \ref{r4.3}, relation (\ref{4.5}) holds for all essential eigenvalues, without distinction between isotropic and non-isotropic coordinates. \end{proof} \begin{remark}\label{r:ult} By (\ref{4.5}) for the general separation (cf.\ (\ref{3.4}) for the orthogonal case), it follows that $\det \mathbf S^{-1}= \det \Lambda\prod\limits_a \varphi^a_{(m)}$. Therefore, for nondegenerate metrics we always have $ \det \Lambda\neq 0. $ \end{remark} We def\/ine for essential coordinates the fundamental functions $f_a^{bc}$ in terms of minors of $\Lambda$ as described by (\ref{2.1}).
Namely, Propositions \ref{p3.2} and \ref{p3.3} can be directly restated here as follows \begin{proposition} \label{p4.8} \it {\rm (i)} If \begin{equation} f^{bc}_a=\frac {\det \Lambda_b^a}{\det \Lambda_c^a}=(-1)^{b+c}\frac {\varphi_a^{(b)}}{\varphi_a^{(c)}} \label{4.7} \end{equation} is well-defined, then it depends on $q^a$ only. {\rm (ii)} For any fixed index $a$ there exist two indices $c<b\leq m$ such that the fundamental function $f_a^{bc}$ \eqref{4.7} is well-defined. \end{proposition} We can now generalize Theorem \ref{t3.6}: \begin{theorem}\label{t4.9} Let $q^a$ be an essential coordinate adapted to a separable Killing algebra $(D_r,\mathcal K_m)$. Then, {\rm (i)} for $m>2$ there exists a rescaling $\breve{q}^{a}=\breve{q}^a(q^a)$ such that the associated vector field~$\breve \partial_a$ is a CS-symmetry if and only if the functions $f_a^{bc}$ \eqref{4.7} are constant or undefined for every $c<b< m$. In particular, {\rm (ii)} for any $m$, $\breve\partial_a$ is an S-symmetry if and only if the functions $f_a^{bc}$ are constant or undefined for all indices $b$, $c$. \end{theorem} \begin{proof} By comparing (\ref{4.5}) and (\ref{3.1}), we see that the relations between the essential components of the Killing tensors and the $m\times m$ St\"ackel matrix $(\varphi ^{(a)}_c)$ are exactly the same as in the orthogonal case. To prove our thesis, we follow the proof of Theorem \ref{t3.6} with some modif\/ications. Let us assume $a=1$ and that $\breve\partial_1$ is a CS-symmetry. Then, equations (\ref{4.4}) hold and by calculating $\breve\partial_1 f_1^{bc}$ as in Theorem \ref{t3.6} we get the equations \[ \breve\partial_1f_1^{bc}=0,\quad c<b<m,\qquad \breve\partial_1f_1^{mc}=(-1)^{m+1}F \dfrac {\det \Lambda}{\det \Lambda^1_c}, \] corresponding to (\ref{3.11}). Hence, the fundamental functions $f_1^{bc}$ are constant or undef\/ined for all $c<b<m$ and, if $F=0$ (i.e., $\breve\partial_1$ is an S-symmetry), they are all constant or undef\/ined. Conversely, if the $f_1^{bc}$ are constant or undef\/ined for all $c<b<m$, by repeating the same reasoning as in Theorem \ref{t3.6} we obtain the following equations, analogous to (\ref{3.16}), \[ \partial _1\ln \varphi_{(m)}^b=-\partial _1\ln (\det \tilde{\mathbf S}), \qquad \partial _1\ln \varphi_{(m)}^1=-\partial _1\ln (\det \tilde{\mathbf S})-\partial_1 \ln \varphi_1^{(c)}, \] with $b\neq 1$, where $\varphi_1^{(c)}$ is a non-vanishing element of the f\/irst row of the St\"ackel matrix $\mathbf S$ and $\tilde{\mathbf S}$ is the matrix obtained from $\mathbf S$ by dividing the f\/irst row by $\varphi_1^{(c)}$. If $\partial_1 \varphi_1^{(c)}\neq 0$, we can locally rescale $q^1$ as $\breve q^{1}=\varphi_1^{(c)}(q^1)$ (if $\varphi_1^{(c)}$ is constant we do not need to rescale and $\breve\partial_1=\partial_1$). Hence, for all $b=1,\ldots,m$ we get $\breve\partial _{1}\ln \varphi_{(m)}^b=-\breve\partial_{1}\ln (\det \tilde{\mathbf S})$ and by (\ref{4.3}) $\breve\partial_{1}$ is a CS-symmetry with $F=-\breve\partial_{1}\ln (\det \tilde{\mathbf S})$. In particular, as in the orthogonal case, if all the $f_1^{bc}$ are constant or undef\/ined then $\det \tilde{\mathbf S}$ is independent of $\breve q^1$, $F=0$, and $\breve\partial_1$ is an S-symmetry. \end{proof} \begin{remark} \label{r4.10}\rm In the previous theorem no distinction is made between coordinates of type $\hat a$ and~$\bar a$. For $m=2$ it is easy to check that every $\partial_a$ is, up to a rescaling, a CS-symmetry.
\end{remark} By Theorem \ref{t4.9}, Theorem \ref{t3.8} can be generalized in the following way \begin{theorem}\label{t4.11} Let $(q^a)$ be essential coordinates adapted to a separable Killing algebra $(D_r,\mathcal K_m)$. For every $a=1,\dots, m$ one and only one of the following statements holds: {\rm I)} there exists a~re\-sca\-ling $\breve{q}^{a}=\breve{q}^a(q^a)$ such that the associated vector field $\breve \partial_a$ is an S-symmetry. {\rm II)} There exist indices $b$, $c$ such that, in a neighborhood of any point where $df_a^{bc}\neq 0$, the equation \[ f_a^{bc}=\hbox{\rm const} \] defines a hypersurface of the foliation $q^a={\rm const}$. \end{theorem} \begin{remark}\label{r4.12} \rm The vector f\/ield $\partial_a$ is an S-symmetry if and only if the St\"ackel matrix does not depend on $q^a$. Therefore, unlike the orthogonal case, item I) of Theorem \ref{t4.11} does not provide a geometric characterization of the f\/ield $\partial_a$, but merely a property of the St\"ackel matrix with respect to $(q^i)$, as is illustrated in Example \ref{e4.14}. On the contrary, item II) retains the same geometric meaning as in Theorem \ref{t3.8}. \end{remark} \begin{remark}\label{r4.13} \rm Since $\mathbf E_{\hat a}$ is proportional to $\partial_{\hat a}$, by applying Theorems \ref{t4.9} and \ref{t4.11} to the fundamental functions $f_{\hat a}^{bc}$ we have that if $\partial_{\hat a}$ is a CS-symmetry then $\partial_{\hat a}$ is a common eigenvector of $\mathcal K_m$ orthogonal to $S^{\hat a}$. This is not true for the indices $\bar a$, which correspond to isotropic eigenvectors $\mathbf E_{\bar a}$ generating the isotropic distribution $I=\Delta \cap \Delta ^\perp$. \end{remark} \begin{example}\label{e4.14} \rm Let us consider the four-dimensional Euclidean space ${\mathbb R}^4$. Let $\mathcal S_2$ be the set of two foliations $S^1$ and $S^2$ described in Cartesian coordinates $(x,y,z,t)$ as \[ S^1=\bigcup_{k>0}\big\{(x,y,z,t)\in {\mathbb R}^4 \mid x^2+y^2=k\big\}, \qquad S^2=\bigcup_{h\in \scriptstyle{{\mathbb R}}}\big\{(x,y,z,t)\in {\mathbb R}^4 \mid t=h\big\}. \] Two vectors orthogonal to $S^1$ and $S^2$ respectively are $ \mathbf n_1=x\,\partial_x+y\,\partial_y,$ $\mathbf n_2=\partial_t. $ Let $D_2$ be the linear space generated by the vectors \[ \mathbf X_3=\partial_z, \qquad \mathbf X_4=y\,\partial_x-x\,\partial_y, \] which are commuting Killing vectors tangent to both foliations $S^a$. The tensor \[ \mathbf K=\partial_t\otimes\partial_t \] is a $D_2$-invariant KT. Moreover, $\mathbf E_1=\mathbf n_1$ and $\mathbf E_2=\mathbf n_2$ are eigenvectors of $\mathbf K$ associated with the eigenvalues $\lambda^1=0$ and $\lambda^2=1$ respectively. Hence, according to Def\/inition~\ref{d1.6}, $(\mathcal S_2,D_2,\mathbf K)$ is a separable Killing web. The tensors $(\mathbf K, \mathbf G)$ form a basis of the KT-space $\mathcal K_2$. Let us construct the adapted coordinates $(q^a,q^\alpha)$ and compute the components of the metric in these new coordinates. As essential coordinates we choose $q^1=\sqrt{x^2+y^2}=\rho$ and $q^2=t$. By adding the ignorable coordinates $(q^\alpha)=(q^3,\,q^4)$ associated with the basis $(\mathbf X_3, \mathbf X_4)$ and with a section~$\mathcal Z$ orthogonal to the orbits of $D_2$, the coordinate transformation is def\/ined by \begin{equation} x=\rho \cos(q^4+\theta_0), \qquad y=\rho \sin(q^4+\theta_0), \qquad z=q^3+z_0, \qquad t=q^2, \label{4.8} \end{equation} where $\theta_0\in (0,\,2\pi)$ and $z_0\in {\mathbb R}$ are the parameters def\/ining $\mathcal Z$.
In these coordinates the metric is diagonal and the non-vanishing components of $\mathbf G$ are \[ g^{11}=g^{22}=g^{44}=1, \qquad g^{33}=\rho^{-2}. \] By choosing a different basis of $D_2$, for instance $\mathbf X'_3=\mathbf X_3$ and $\mathbf X'_4=\mathbf X_3+\mathbf X_4$, and leaving $\mathcal Z$ unchanged, we get non-orthogonal ignorable coordinates $(q'^\alpha)$ given by \[ x=\rho \cos(q'^4+\theta_0), \qquad y=\rho \sin(q'^4+\theta_0), \qquad z=q'^3+q'^4+z_0, \qquad t=q^2, \] and the metric assumes the standard form \[ \mathbf G=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1+\rho^{-2} & -1 \\ 0 & 0 & -1 & 1 \\ \end{pmatrix}. \] In both cases the $2\times 2$ St\"ackel matrix associated with the essential separable coordinates and its inverse are \[ \mathbf S= \begin{pmatrix} -1 & 1 \cr 1 & 0 \end{pmatrix}, \qquad \mathbf S^{-1}=\begin{pmatrix} \lambda^1 g^{11} & \lambda^2 g^{22} \cr g^{11} & g^{22} \end{pmatrix}= \begin{pmatrix} 0 & 1 \cr 1 & 1 \end{pmatrix}, \] respectively. The matrix $\Lambda$ of the essential eigenvalues of $\mathbf K$ and $\mathbf G$ coincides with the matrix~$\mathbf S^{-1}$. The method of the eigenvalues does not provide any coordinate hypersurface because $\mathbf S$ is constant. Then, according to Theorem \ref{t4.11}, $\partial_1$, $\partial_2$ are both S-symmetries. However, from a geometric point of view the corresponding eigenvectors $\mathbf E_1=x\,\partial_x+y\,\partial_y$ and $\mathbf E_2=\partial_t$ have different properties. Indeed, $\mathbf E_2$ is a KV, according to the fact that $q^2$ is ignorable, while $\mathbf E_1$ is not a KV since $g^{33}$ depends on $q^1$. We can apply the eigenvalue method to the orthogonal system~(\ref{4.8}) for computing the equation of the hypersurfaces of $S^1$. In order to determine the $4\times 4$ matrix $\Lambda$ we consider the $4$-dimensional KT space containing $\mathcal K_2$ and the tensors $\mathbf K_2=\mathbf X_3\otimes\mathbf X_3$, $\mathbf K_3=\mathbf X_4\otimes\mathbf X_4$, which is a KS-algebra for the orthogonal system (\ref{4.8}). \end{example} \begin{example} Let us consider a 4-dimensional pseudo-Riemannian manifold having in a coordinate system $(X,Y,Z,U)$ the following non-zero contravariant metric components \begin{gather*} g^{11}=-X^{10}\frac{-4(X^4-Y^4)+9X^4Y^{24}\Psi}{144(X^{10}+Y^{10})^2(X^4-Y^4)}, \\ g^{22}=-Y^{10}\frac{-4(X^4-Y^4)-9X^{24}Y^4\Psi}{144X^{10}(X^{10}+Y^{10})^2(X^4-Y^4)}, \\ g^{12}=X^{5}Y^5\frac{4(X^4-Y^4)+9X^{14}Y^{14}\Psi}{144(X^{10}+Y^{10})^2(X^4-Y^4)}, \\ g^{33}=-(Z-U)^2-\Psi, \qquad g^{44}=(Z-U)^2-\Psi, \qquad g^{34}= -\Psi, \end{gather*} where $\Psi=U-Z+X^6+Y^6$. For $|Y|<|X|$ and $\Psi>0$ the signature is $(3,1)$, while for $|Y|>|X|$ and $\Psi<0$ the signature is $(2,2)$. The 2-dimensional space of the KVs is generated by $\mathbf X_1 =\partial_Z+\partial_U$, $\mathbf X_2=\sqrt{2|f|}(X^5 \partial_X-Y^5 \partial_Y)$, where \[ f=\frac{Y^{14}X^{14}}{32(X^{10}+Y^{10})^2(X^4-Y^4)}. \] We call $D_1$ the KV space generated by $\mathbf X_1$. We have $r=1=m_0$, since $\mathbf X_1$ is an isotropic vector. Let us consider the independent tensors $\mathbf K_1$, $\mathbf K_2$ whose non-null contravariant components are \begin{gather*} K_1^{11}=Y^{10}(U-Z)f, \qquad K_1^{22}= X^{10}(U-Z)f, \qquad K_1^{12}= -X^5Y^5 (U-Z)f, \\ K_1^{33} =1+ \tfrac 12 (Z-U)^2- \tfrac 12 (Z-U), \qquad K_1^{34} =1- \tfrac 12 (Z-U), \\ K_1^{44} = 1-\tfrac 12 (Z-U)^2- \tfrac 12 (Z-U), \\ K_2^{11}=2Y^{10}f, \qquad K_2^{22}= 2X^{10}f, \qquad K_2^{12}= -2X^5Y^5 f, \qquad K_2^{33}=K_2^{34}=K_2^{44}=1. \end{gather*}
The space $\mathcal K_3$ generated by $(\mathbf K_1, \mathbf K_2,\mathbf G)$ satisfies the hypotheses of Theorem \ref{t1.2}. Thus, we can apply the eigenvalue method to get the equations of the separated coordinate hypersurfaces. The matrix of the eigenvalues is \[ \Lambda = \begin{pmatrix} 0 & \frac{Z-U}{2\Psi}& -\frac 12 \\ 0 & -\frac 1\Psi & 0 \\ 1 & 1 & 1 \\ \end{pmatrix}. \] We get \[ f_1^{23}=-X^6-Y^6, \qquad f_2^{ab} \quad \hbox{constant or n.d.}, \qquad f_3^{21}=\tfrac 12 (U-Z). \] This means that \[ x = X^6 +Y^6, \qquad z = (Z-U)/2 \] are essential separable coordinates (the last one is a null coordinate) and that, up to a rescaling, the vector $ \partial_y$ associated with the remaining essential separable coordinate $y$ is an S-symmetry. The separable coordinate $y=1/Y^4-1/X^4$ cannot be computed by the eigenvalue method. The coordinate associated with $\mathbf X_1 \in D_1$ is $u=(U+Z)/2$. By performing the change of variables $(X,Y,Z,U)\to (x,y,z,u)$ we get the metric in standard form \[ \mathbf G = \begin{pmatrix} 1& 0& 0 & 0 \\ 0 & \frac{2x-z}{y}& 0 & 0 \\ 0 & 0& 0& -2z^2 \\ 0 & 0 & -2z^2& 2z-x \\ \end{pmatrix}. \] In the separable coordinates the general KV is $c_1\frac {1} {\sqrt y}\partial_y + c_2\partial_u$ and the tensors $\mathbf K_1$, $\mathbf K_2$ become \[ \mathbf K_1=-\frac zy \partial_y \odot \partial_y + 2z^2\partial_z\odot\partial_u +(1-z)\partial_u\odot\partial_u, \qquad \mathbf K_2= \frac 1y \partial_y \odot \partial_y + \partial_u\odot\partial_u. \] We remark that $\mathbf K_2$ is a reducible KT (i.e., a sum of symmetric products of KVs), while $\mathbf K_1$ is an irreducible tensor. \end{example}
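The two fundamental functions quoted in this example can be checked by machine. The sketch below reproduces them from $\Lambda$, using the plain minor ratio as in the worked computations (the overall sign convention does not affect the level sets); the minor convention is again an assumption on our part.
\begin{verbatim}
# Sketch (sympy): reproduce the fundamental functions of the example above.
# Assumed convention: Lam_b^a deletes row b and column a; the plain minor
# ratio is used (signs do not affect the level sets).
import sympy as sp

X, Y, Z, U = sp.symbols('X Y Z U')
Psi = U - Z + X**6 + Y**6

Lam = sp.Matrix([[0, (Z - U)/(2*Psi), sp.Rational(-1, 2)],
                 [0, -1/Psi,          0],
                 [1, 1,               1]])

def f(Lam, a, b, c):
    num = Lam.minor_submatrix(b - 1, a - 1).det()
    den = Lam.minor_submatrix(c - 1, a - 1).det()
    return sp.simplify(num / den)

print(sp.factor(f(Lam, 1, 2, 3)))   # -(X**6 + Y**6)
print(f(Lam, 3, 2, 1))              # U/2 - Z/2, i.e. (U - Z)/2
\end{verbatim}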
\section{Conformal separable orthogonal systems}\label{s5} The method developed in Section \ref{s2} also characterizes conformal separable orthogonal webs \cite{[7]} in a natural way. We recall that \begin{definition} \label{d5.1} \rm The geodesic Hamiltonian $G$ is {\it conformal separable} if there exists a function $\sigma$ on $Q$ such that the conformal geodesic Hamiltonian $\bar G=G/\sigma$ (associated with the conformal metric $\bar{\mathbf G}=\mathbf G/\sigma$) is separable. We call {\it conformal separable} the coordinates $(q^i)$ allowing the separation of $\bar G$. \end{definition} \begin{remark} An important application of conformal separable coordinates is the fact that coordinates allowing $R$-separation of the Laplace equation are necessarily conformal separable (see \cite{[17],[11]}). Moreover, in conformally flat manifolds, all the conformal separable coordinates are also $R$-separable (see \cite{[18]}). \end{remark} Due to Definition \ref{d5.1}, the conformal separation in orthogonal coordinates is equivalent to the existence of a KS-algebra for a conformal metric $\bar {\mathbf G}$. The following theorem contains an intrinsic characterization in terms of the original metric tensor $\mathbf G$, involving conformal Killing tensors (CKT), introduced in Example \ref{e3.9}. \begin{theorem}[\cite{[7]}] \label{t5.2} The geodesic Hamiltonian $G$ is conformal separable in orthogonal coordinates if and only if there exist $n$ CKT $(\mathbf K_i)$ pointwise independent with common eigenvectors $(\mathbf E_i)$ and in conformal involution (i.e., there exist vector fields $\mathbf C_{ij}$ such that $[\mathbf K_i,\mathbf K_j]=\mathbf C_{ij}\odot \mathbf G$). It is not restrictive to assume $\mathbf K_n=\mathbf G$. Each conformal separable coordinate hypersurface is orthogonal to one of the $n$ common normal eigenvectors of $(\mathbf K_i)$. \end{theorem} We have \cite{[14],[7]} \begin{proposition} \label{p5.3} Let $(\mathbf K_1,\ldots,\mathbf K_n=\mathbf G)$ be a set of independent CKT associated with conformal separable orthogonal coordinates $(q^i)$, and let $(\lambda_i^j)$ be their eigenvalues with respect to the metric~$\mathbf G$. Then, for any choice of the index $k=1,\ldots,n$, the tensors \begin{equation} \bar {\mathbf K}_i=\mathbf K_i-\lambda_i^k\mathbf G, \qquad i=1,\dots, n-1, \label{5.1} \end{equation} are KT for the metric \begin{equation} \bar {\mathbf K}_n=\bar {\mathbf G}=(g^{kk})^{-1}\mathbf G \label{5.2} \end{equation} and $(\bar {\mathbf K}_1,\ldots,\bar {\mathbf K}_n=\bar {\mathbf G})$ is a basis for the KS-algebra associated with $(q^i)$. \end{proposition} \begin{remark}\label{r5.4} \rm We say that $(\mathbf K_1, \dots ,\mathbf K_{n-1},\mathbf G)$ is a basis of the {\it conformal Killing space} (CK-space) associated with the conformal separable coordinates $(q^i)$. \end{remark} \begin{sloppypar} Due to Proposition \ref{p5.3}, by considering the matrix $\bar \Lambda$ formed by the eigenvalues of $(\bar {\mathbf K}_i)$ with respect to $\bar {\mathbf G}$, we can apply Theorems \ref{t3.6}, \ref{t3.8} for characterizing any orthogonal conformal separable web associated with $(\mathbf K_1, \dots ,\mathbf K_{n-1},\mathbf G)$ as an orthogonal separable web associated with the KS-algebra generated by $(\bar {\mathbf K}_1, \dots ,\bar {\mathbf K}_{n-1},\bar {\mathbf G})$. Following Section \ref{s2}, we define \begin{equation} \bar f_i^{jh}=(-1)^{j+h}\dfrac {\det \bar \Lambda_j^i}{\det \bar \Lambda_h^i}, \label{5.3} \end{equation} where the matrix $\bar \Lambda$ is formed by the eigenvalues of $(\bar {\mathbf K}_i)$ with respect to $\bar {\mathbf G}$, and \begin{equation} f_i^{jh}=(-1)^{j+h}\dfrac {\det \Lambda_j^i}{\det \Lambda_h^i}, \label{5.4} \end{equation} where $\Lambda$ is the matrix formed by the eigenvalues of $(\mathbf K_i)$ with respect to $\mathbf G$. We remark that the functions~(\ref{5.3}) are not intrinsically defined, since to determine the $\bar{\mathbf G}$-eigenvalues of the tensors $(\bar {\mathbf K}_i)$ it is necessary to know the coordinates because of the factor $g^{kk}$ appearing in (\ref{5.2}). On the contrary, in the functions (\ref{5.4}) only the eigenvalues of tensors satisfying intrinsic conditions (the hypotheses of Theorem \ref{t5.2}) are involved. \end{sloppypar} \begin{remark} \label{r5.5} \rm The definition \eqref{5.1} of $(\bar {\mathbf K}_i)$ implies that the $k$-th column of $\bar \Lambda$ has $n-1$ zeros. \end{remark} \begin{proposition} \label{p5.6} Let $\bar f_i^{jh}$ and $f_i^{jh}$ be the functions defined in \eqref{5.3} and \eqref{5.4} respectively. Then, for $h<j< n$ $(n>2)$ we have either \[ \bar f_i^{jh}=f_i^{jh}, \] or both functions are undefined. \end{proposition} \begin{proof} The eigenvalues of $\bar {\mathbf K}_i$ with respect to $\bar {\mathbf G}$ are \begin{equation} \bar \lambda_i^j=(\lambda_i^j-\lambda_i^k)g^{kk}. \label{5.5} \end{equation} By linear algebra arguments, we have that \[ \det \bar \Lambda_{h}^i=\big(g^{kk}\big)^{n-2} \det \Lambda_{h}^i. \] Hence, by (\ref{5.3}) and (\ref{5.4}) the thesis follows. \end{proof} \begin{remark} \label{r5.7} \rm A vector field $\mathbf X$ is a CKV for $\mathbf G$ if and only if it is a CKV for any metric conformal to $\mathbf G$. Thus, in the following we shall not specify which metric tensor is considered when speaking of CKVs. \end{remark}
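The determinant relation used in this proof is easy to confirm symbolically for $n=3$. In the sketch below (generic entries, $k=3$, and the assumed convention that $\Lambda_h^i$ deletes row $h$ and column $i$) the factor $(g^{kk})^{n-2}$ appears exactly as claimed for $i,h<n$ and cancels in the ratios \eqref{5.3}, while minors involving the last row degenerate, in agreement with the restriction $h<j<n$.
\begin{verbatim}
# Sketch (sympy): check det(bar_Lam_h^i) = (g^kk)^(n-2) det(Lam_h^i) for n=3.
# Generic eigenvalues; row n of Lam is (1,1,1) since K_n = G. We take k = 3,
# i.e. bar_lam_i^j = (lam_i^j - lam_i^3) g for the first n-1 rows.
import sympy as sp

a1, a2, a3, b1, b2, b3, g = sp.symbols('a1 a2 a3 b1 b2 b3 g')

Lam = sp.Matrix([[a1, a2, a3],
                 [b1, b2, b3],
                 [1,  1,  1]])
barLam = sp.Matrix([[(a1 - a3)*g, (a2 - a3)*g, 0],
                    [(b1 - b3)*g, (b2 - b3)*g, 0],
                    [1,           1,           1]])

def minor_det(M, h, i):          # delete row h and column i (1-based)
    return M.minor_submatrix(h - 1, i - 1).det()

for i in (1, 2):
    for h in (1, 2):
        diff = sp.expand(minor_det(barLam, h, i) - g * minor_det(Lam, h, i))
        assert diff == 0         # the factor (g^kk)^{n-2} = g appears

print(minor_det(barLam, 3, 1))   # 0: minors with h = n degenerate, hence the
                                 # restriction h < j < n in Proposition 5.6
\end{verbatim}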
In the case of the orthogonal conformal separation, Theorems \ref{t3.6} and \ref{t3.8} can be restated as follows: \begin{theorem} \label{t5.8} Let $\mathbf E_i$ be a common eigenvector of a basis $(\mathbf K_i)$ of a CK-space. Then, $\mathbf E_i$ is proportional to a CKV if and only if for all $h<j<n$ $(n>2)$ the functions $f_i^{jh}$ defined by \eqref{5.4} are constant or undefined. \end{theorem} \begin{proof} Due to Proposition \ref{p5.6} and Remark \ref{r5.7}, the thesis follows by applying Theorem \ref{t3.6} (i) to the KS-algebra (\ref{5.1}), (\ref{5.2}) with $k\neq i$. \end{proof} \begin{theorem} \label{t5.9} Let $\mathbf E_i$ be a common eigenvector of a basis $(\mathbf K_i)$ of a CK-space. For every $i=1,\dots, n$ $(n>2)$ one and only one of the following statements holds: {\rm I)} $\mathbf E_i$ is, up to a scalar factor, a CKV. {\rm II)} There exist indices $h<j<n$ such that, in a neighborhood of any point where $df_i^{jh}\neq 0$, the equation \[ f_i^{jh}=\hbox{\rm const} \] defines a hypersurface orthogonal to $\mathbf E_i$. \end{theorem} In the case of conformal orthogonal separation, we do not distinguish whether $\mathbf E_i$ is proportional to a CKV or to a KV. The following property holds: \begin{proposition} \label{p5.10} If $\mathbf E_i$ is, up to a factor, a CKV, then it is a KV of $\bar {\mathbf G}=\mathbf G/g^{kk}$ for any $k\neq i$. \end{proposition} \begin{proof} By (\ref{5.1}), we get a basis of the KS-algebra with respect to $\mathbf G/g^{kk}$ with $k\neq i$. According to Remark \ref{r5.5}, the $k$-th column of $\bar \Lambda$ has $n-1$ zeros. Therefore, all submatrices of the kind $\bar \Lambda_n^h$, $h\neq k$, have null determinants. This means that for all $h<n$ the functions $\bar f_i^{hn}$ are undefined or identically null. Moreover, since according to Remark \ref{r5.7} $\mathbf E_i$ is up to a factor a CKV for $\bar {\mathbf G}=\mathbf G/g^{kk}$, due to Theorem \ref{t3.6} (i) we get that for $h<j<n$ the functions $\bar f_i^{hj}= f_i^{hj}$ are constant or undefined. Then, Theorem \ref{t3.6} (ii) implies that $\mathbf E_i$ is a KV for $\bar {\mathbf G}$. \end{proof} \begin{example} \label{e5.11} \rm Let us consider in ${\mathbb R}^3$ the vector fields $\mathbf R_3$ and $\mathbf I_3$ having Cartesian components \[ \mathbf R_3=(-y,x,0),\qquad \mathbf I_3=\big(-2xz,\,-2yz,\, x^2+y^2-z^2\big), \] respectively. The vector $\mathbf I_3$ is a CKV with respect to the Euclidean metric $\mathbf G$: it is the inversion with respect to a generic point on the $z$-axis. The vector $\mathbf R_3$ is the rotation around the $z$-axis and it is a Killing vector. It is straightforward to check that the two vector fields commute, so that the corresponding linear first integrals are in involution. Moreover, the tensors $(\mathbf K_1=\mathbf I_3\otimes \mathbf I_3,\, \mathbf K_2= \mathbf R_3\otimes \mathbf R_3, \mathbf G)$ are pointwise independent. Hence, they satisfy the hypotheses of Theorem \ref{t5.2} and they are associated with some conformal separable coordinate system. We apply the above-described method to determine the conformal separable coordinate hypersurfaces. Since $\mathbf R_3\perp \mathbf I_3$, the common eigenvectors are \[ \mathbf E_1=\mathbf I_3\times \mathbf R_3,\qquad \mathbf E_2=\mathbf I_3,\qquad \mathbf E_3=\mathbf R_3. \] The eigenvalue matrix $\Lambda$ is \[ \Lambda=\begin{pmatrix} 0 & \mathbf I_3\cdot \mathbf I_3 & 0 \\ 0 & 0& {\mathbf R_3\cdot \mathbf R_3} \\ 1 & 1 & 1 \end{pmatrix}.
\] The coordinate hypersurfaces orthogonal to $\mathbf E_1$ are the level sets of the function \[ f_1^{21}=\dfrac{\left|\begin{matrix} {\mathbf I_3\cdot \mathbf I_3} & 0 \\ 1 & 1 \\ \end{matrix}\right| }{\left|\begin{matrix} 0& {\mathbf R_3\cdot \mathbf R_3} \\ 1 & 1 \\ \end{matrix}\right| }=\dfrac{\mathbf I_3\cdot \mathbf I_3}{-\mathbf R_3\cdot \mathbf R_3}=-\dfrac{(x^2+y^2+z^2)^2}{x^2+y^2}, \] which describes the rotational surface obtained by rotating around the $z$-axis a circle in the plane $(x,z)$ tangent in the origin $O$ to the $z$-axis (toroids without center opening). Since $f_1^{21}$ is not constant and both upper indices are different from 3, the eigenvector $\mathbf E_1$ is not proportional to a CKV. In accordance with the fact that $\mathbf I_3$ and $\mathbf R_3$ are conformal Killing vectors, all functions $f_i^{jh}$ for $i=2,3$ and $h,j\neq 3$ are constant or undefined. It is well known that the surfaces orthogonal to $\mathbf E_3=\mathbf R_3$ are half-planes issuing from the $z$-axis. Moreover, it is easy to check that the spheres tangent in $O$ to the $xy$-plane are hypersurfaces orthogonal to $\mathbf E_2=\mathbf I_3$. Indeed, the gradient of the function $q^2=(x^2+y^2+z^2)/z$ is, up to the factor $-z^2$, exactly $\mathbf I_3$. The coordinates associated with $(\mathbf K_1, \mathbf K_2, \mathbf G)$ are known as tangent-spheres coordinates \cite{[15]}, related to $(x,y,z)$ by \[ x=\frac{\mu \cos \psi}{\mu^2+\nu^2}, \qquad y=\frac{\mu \sin \psi}{\mu^2+\nu^2}, \qquad z=\frac{\nu}{\mu^2+\nu^2}, \] where $q^1=\mu$, $q^2=\nu$, $q^3=\psi$ (see also \cite{[16]} for a detailed analysis and classification of the symmetric conformal separable coordinates in $\mathbb R^3$ and the associated CKTs). A conformal metric which is separable in these coordinates is for instance $\bar {\mathbf G}=(\mathbf R_3\cdot \mathbf R_3)\,\mathbf G$. By Proposition~\ref{p5.3}, the tensors $\mathbf K_1$ and $\mathbf K_2$ are KT for $\bar {\mathbf G}$. By Proposition~\ref{p5.10}, $\mathbf E_2$ is a KV for $\bar {\mathbf G}$. By definition of~$\mathbf K_2$ and because it is a KT for $\bar {\mathbf G}$, $\mathbf E_3$ is also a KV for $\bar {\mathbf G}$. \end{example}
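Both computations in this example are short enough to verify symbolically; the sketch below reproduces $f_1^{21}$ (as the plain minor ratio displayed above) and confirms that $z^2\nabla q^2$ is proportional to $\mathbf I_3$.
\begin{verbatim}
# Sketch (sympy): checks for the tangent-spheres example.
import sympy as sp

x, y, z = sp.symbols('x y z')
I3 = sp.Matrix([-2*x*z, -2*y*z, x**2 + y**2 - z**2])   # CKV (inversion)
R3 = sp.Matrix([-y, x, 0])                             # KV  (rotation)

II, RR = I3.dot(I3), R3.dot(R3)
Lam = sp.Matrix([[0, II, 0],
                 [0, 0, RR],
                 [1, 1, 1]])

# f_1^{21} as the plain minor ratio displayed in the example
f121 = Lam.minor_submatrix(1, 0).det() / Lam.minor_submatrix(0, 0).det()
print(sp.factor(f121))        # -(x**2 + y**2 + z**2)**2/(x**2 + y**2)

# the spheres tangent to the xy-plane at O are orthogonal to E_2 = I_3:
q2 = (x**2 + y**2 + z**2)/z
grad = sp.Matrix([sp.diff(q2, s) for s in (x, y, z)])
print(sp.simplify(z**2*grad + I3))   # zero vector: z^2 grad(q2) = -I_3
\end{verbatim}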
\section{Conclusion} By using simple arguments of linear algebra and the properties of the St\"ackel matrices, we have seen how to construct separable hypersurfaces by means of eigenvalues of symmetric two-tensors in Riemannian and pseudo-Riemannian manifolds. It follows that the webs associated with these hypersurfaces have the same domain of definition as the eigenvalues employed in the construction, apart from some closed singular set where the common eigenspaces of the tensors in the KS (CKS) spaces are not one-dimensional. In (real) pseudo-Riemannian manifolds, KTs (and CKTs) may have complex conjugate eigenvalues; in this case it is not possible to define real separable coordinates. However, it is possible to introduce separated complex variables allowing the Jacobi integration (see \cite{[12]}). The application of our eigenvalue method to the complex case is in progress \cite{mink}. For manifolds of constant curvature the whole spaces of Killing and conformal-Killing tensors are well known, so it is possible to apply our method to get computer-graphical representations of the webs. We remark that the separable (resp.\ conformal-separable) coordinates considered here are the only ones allowing separation (resp.\ Fixed Energy $R$-separation \cite{[11]}) of the Laplace, Helmholtz and Schr\"odinger equations \cite{[6]}. \subsection*{Acknowledgements} This research is partially supported by MIUR (National Research Project ``Geometry of Dynamical System'') and by the research project ``Progetto Lagrange'' of Fondazione CRT.
\section{Statement of the main result} Dain has proven the inequality $m \geq |J|$ for complete, maximal, asymptotically flat axisymmetric vacuum initial data to the 3+1 dimensional Einstein equation. Here $m$ is the ADM mass associated with the data and $J$ is the conserved angular momentum associated with the $U(1)$ isometry \cite{dain2006proof,Dain:2005vt,dain2008proof}. A thorough account of this program with references to further generalizations can be found in the review \cite{dain2012geometric}. A natural problem is to investigate whether these results can be generalized to higher dimensions. The area-angular momenta inequalities (see \cite{dain2012geometric} for a survey) have been shown to admit such a generalization in all dimensions $D$ for black holes with $U(1)^{D-3}$ rotational isometries \cite{hollands2012horizon}. Here we will focus on extending mass-angular momenta inequalities in $D=5$, as this is the only other possibility that admits asymptotically flat spacetimes with these isometries. In previous work \cite{alaee2014mass} we have constructed a mass functional $\mathcal{M}$ valid for a broad class of maximal, asymptotically flat, $U(1)^2$-invariant, $(t-\phi^i)$-symmetric, vacuum initial data. The mass functional evaluates to the ADM mass for this class and is a lower bound for the mass of general biaxisymmetric data. We also showed that the critical points of this mass functional amongst this class of data are precisely the $\mathbb{R} \times U(1)^2$-invariant vacuum solutions of the five-dimensional Einstein equation. Our result concerns the subset of stationary, biaxisymmetric data that represent maximal slices of extreme black holes. The uniqueness results of Figueras and Lucietti \cite{figueras2010uniqueness} imply that, for fixed angular momenta $J_1,J_2$ and interval structure, there is \emph{at most} one asymptotically flat extreme black hole. We will consider the case where an extreme solution exists. Then for a fixed interval structure we can write the mass of the extreme black hole as $m_{ext}= f(J_1,J_2)$ for some function $f$ which depends on the interval structure. We have shown (under suitable conditions) that for small variations with fixed angular momenta about the extreme black hole initial data, the mass $m_{ext}$ is a minimum; that is, \begin{equation} m \geq f(J_1,J_2). \end{equation} Note that $m$ could be the mass of a dynamical black hole. This is shown by demonstrating that the extreme black holes are local minima of the mass functional. Of course, \emph{within} the two explicitly known families of stationary black holes, the extreme Myers-Perry \cite{Myers1986} and the extreme doubly-spinning black ring \cite{pomeransky2006black}, the extreme member of the family has the minimum mass for fixed angular momenta, as is the case for Kerr. However, for more general interval structure, there is no reason to expect this to occur, or indeed that a non-extreme family of solutions with a given interval structure contains an extreme limit.
We will consider maximal initial data sets for the Einstein vacuum equations that consist of a triple $(\Sigma,h_{ab},K_{ab})$ where $\Sigma$ is a complete, simply connected Riemannian manifold with two asymptotic ends, $h_{ab}$ is a Riemannian metric, and $K_{ab}$ is a trace-free symmetric tensor field which satisfies the vacuum constraints \begin{eqnarray}\label{eq:constraints} R_h= K^{ab}K_{ab}, \qquad \bm{\nabla}^bK_{ab}= 0, \end{eqnarray} where $R_h$ and $\bm{\nabla}$ are the scalar curvature and Levi-Civita connection with respect to $h_{ab}$. Let $m_i$ be Killing vectors generating the $U(1)^2$ symmetry of the data. We have $\mathcal{L}_{m_i} h_{ab} = \mathcal{L}_{m_i} K_{ab} = 0$. We consider the class of metrics of the form \begin{equation}\label{htphi} h_{ab}=e^{2v}\tilde{h}_{ab}, \qquad \tilde{h}_{ab}=e^{2U}\left(\text{d}\rho^2+\text{d} z^2\right)+\lambda'_{ij}\text{d}\phi^i\text{d}\phi^j, \end{equation} where $U = U(\rho,z)$ is a smooth function, $\lambda'=[\lambda'_{ij}]$ is a positive definite $2\times 2$ symmetric matrix with $\det\lambda'=\rho^2$, and $\phi^i$ are coordinates with periodicity $2\pi$ adapted to the Killing vectors $m_i$. Note that we assume that the action of the $U(1)^2$ isometry is orthogonally transitive. We expect that this assumption can be removed~\cite{dain2008proof}. (Of course, if the data arises from a \emph{stationary} spacetime, this assumption can be removed.) In the following we will \emph{not} assume the data is $t-\phi^i$ symmetric. Rather, we restrict attention to metrics of the form \eqref{htphi} but we allow for general axisymmetric extrinsic curvature. As has been proved in \cite{alaee2014mass}, one can always decompose $K_{ab}$ as \begin{equation} \label{decomposition} K_{ab} = \mathcal{K}_{ab} + H_{ab}, \end{equation} where $\mathcal{K}_{ab}$ is the $t-\phi^i$-symmetric part of the extrinsic curvature. Recall that $(t-\phi^i)$-symmetry implies that under the diffeomorphism $\phi^i \to -\phi^i$, we have $h_{ab} \to h_{ab}, \mathcal{K}_{ab} \to -\mathcal{K}_{ab}$ \cite{alaee2014mass}. We now briefly review the construction of the mass functional, which is defined for $t-\phi^i$ symmetric data $(\Sigma,h,\mathcal{K})$. Since $\mathcal{K}_{ab}$ is automatically traceless, using the divergence-free condition and the property that $\Sigma$ is simply connected \cite{alaee2014small}, we can express $\tilde{K}_{ab}=e^{2v}\mathcal{K}_{ab}$ in a compact form. Define two scalar potentials $Y^i$ and the associated one-forms \begin{equation} S^i = \frac{1}{2 \det \lambda'} i_{m_1} i_{m_2} \star \text{d} Y^i. \end{equation} Note that $\text{d} \star S^i = 0$. Then an \emph{arbitrary} divergenceless $t-\phi^i$-symmetric extrinsic curvature can be expressed as \cite{alaee2014small} \begin{equation}\label{exttphi} \tilde{K}_{ab}=\frac{2}{\det\lambda'}\Bigg[\left(\lambda'_{22} S^1{}_{(a}m_1{}_{b)}-\lambda'_{12}S^2{}_{(a}m_1{}_{b)}\right)+\left(\lambda'_{11} S^2{}_{(a}m_2{}_{b)}-\lambda'_{12}S^1{}_{(a}m_2{}_{b)}\right)\Bigg]. \end{equation} Hence for $(t-\phi^i)$-symmetric initial data, the extrinsic curvature is completely characterized by the scalar potentials $Y^i$ as well as the metric functions $\lambda'_{ij}$. One can show \cite{alaee2014small} that these potentials are simply the pull-backs of the spacetime twist potentials defined in the usual way, i.e.\ $\text{d} Y^i = \star_5(m_1 \wedge m_2 \wedge \text{d} m_i)$.
Moreover, these potentials are related to the angular momenta of the data by \begin{equation} J_i=\frac{\pi}{4}\left[Y^i(\rho=0,z)-Y^i(\rho=0,-z)\right] \end{equation} for $z>0$ sufficiently large (the potentials are constant on the two semi-infinite axis rods). In terms of the conformal data $(\tilde{h}_{ab},\tilde{K}_{ab},v)$ the constraint equations reduce to the Lichnerowicz equation for $v$: \begin{gather} \Delta_{\tilde{h}}\Phi-\frac{1}{6} R_{\tilde h}\Phi +\frac{1}{6}\tilde{K}_{ab}\tilde{K}^{ab}\Phi^{-5}=0,\label{eq:Lich} \end{gather} where $\Phi = e^{2v}$. \begin{remark}\cite{alaee2014mass} Let $(\Sigma,h,\mathcal{K})$ be an asymptotically flat, $(t-\phi^i)$-symmetric, vacuum initial data set. Such data can be completely characterized by $\Sigma$, its $U(1)^2$ action, and a triple $u = (v,\lambda',Y)$ where $v$ is a scalar, $\lambda'$ is a positive definite symmetric matrix with determinant $\rho^2$, and $Y=(Y^1,Y^2)^t$ is a column vector (the function $U$ is found by solving a Poisson equation arising from \eqref{eq:Lich}). We will denote such data simply by $(\Sigma,u)$. \label{tphiremark} \end{remark} Let $\rho$, $z$, $\phi$ be cylindrical coordinates in Euclidean $\mathbb{R}^3$ with metric $\delta_3=\text{d}\rho^2+\text{d} z^2+\rho^2\text{d}\phi^2$. Note that all functions depend only on $\rho$ and $z$. Then by \cite{alaee2014mass} we have the following mass functional defined for $(\Sigma,u)$: \begin{eqnarray} \mathcal{M}(u)=\frac{1}{8}\int_{\mathbb{R}^3}\left(-\frac{\det\nabla\lambda'}{2\rho^2}+e^{-6v}\frac{\nabla Y^t\lambda'^{-1}\nabla Y}{2\rho^2}+6\left(\nabla v\right)^2\right)\, \, \text{d}\Sigma-\frac{\pi}{4}\sum_{\text{rods}}\int_{I_i}\log V_i\,\text{d} z, \end{eqnarray} where $\text{d}\Sigma=\rho\,\text{d}\rho\text{d} z\text{d}\phi$ and $\nabla$ are respectively the volume element and connection with respect to $\delta_3$, and $V_i$ is defined by \begin{equation} V_i(z)=\lim_{\rho\to 0}\frac{2\sqrt{\rho^2+z^2}\lambda'_{ij}w^iw^j}{\rho^2},\qquad z\in I_i=(a_i,a_{i+1}),\quad w^i\in\mathbb{Z}, \end{equation} where $\lambda'_{ij}w^j = O(\rho^2)$ as $\rho \to 0$ and $w=w^i\frac{\partial}{\partial\phi^i}$ is the Killing vector vanishing on the rod $I_i$. Note that $\phi$ is an auxiliary coordinate with period $2\pi$ and the functional can be defined over the orbit space $\mathcal{B}\cong\Sigma/U(1)^2$ \cite{alaee2014mass}. $\mathcal{B}$ is a two-dimensional manifold with boundary and corners \cite{hollands2008uniqueness}, and the boundary and asymptotic conditions on the various functions which parametrize the data are given in Section II of \cite{alaee2014mass}. We record them here for convenience. To understand the decay in the asymptotic regions, we define new coordinates \begin{equation} x \equiv \frac{z}{\sqrt{\rho^2 + z^2}}\; , \qquad r \equiv \left[2\sqrt{\rho^2+z^2}\right]^{1/2}, \end{equation} where $x \in [-1,1]$ and $r \in (0,\infty)$. Observe that $\delta_3 = r^2(\text{d} r^2 +\tfrac{r^2}{4}[ (1-x^2)^{-1} \text{d} x^2 + (1-x^2)\text{d} \phi^2])$. Note that the boundary $\rho=0$ corresponds to $x= \pm 1$ and $r \to 0$ corresponds to an asymptotic end which can be either asymptotically flat or cylindrical, whereas $r \to \infty$ corresponds to the asymptotically flat end where the ADM mass is defined.
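The stated form of $\delta_3$ in the $(r,x,\phi)$ coordinates can be confirmed directly; the following sketch (a symbolic consistency check, not part of the argument) pulls the flat metric back through the coordinate change.
\begin{verbatim}
# Sketch (sympy): verify delta_3 = r^2 dr^2 + (r^4/4)[(1-x^2)^{-1} dx^2
#                 + (1-x^2) dphi^2] under sqrt(rho^2+z^2) = r^2/2,
#                 x = z/sqrt(rho^2+z^2).
import sympy as sp

r, x = sp.symbols('r x', positive=True)

rho = (r**2/2)*sp.sqrt(1 - x**2)   # so that rho^2 + z^2 = (r^2/2)^2
z = (r**2/2)*x

J = sp.Matrix([[sp.diff(rho, r), sp.diff(rho, x)],
               [sp.diff(z,   r), sp.diff(z,   x)]])
g2 = (J.T * J).applyfunc(sp.simplify)   # pullback of d rho^2 + dz^2

print(g2[0, 0])                                      # r**2
print(g2[0, 1])                                      # 0 (no cross term)
print(sp.simplify(g2[1, 1] - r**4/(4*(1 - x**2))))   # 0
print(sp.simplify(rho**2 - (r**4/4)*(1 - x**2)))     # 0: dphi^2 coefficient
\end{verbatim}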
We require: \begin{enumerate}[(a)] \item as $r\to\infty$ \begin{gather} v=o_{1}(r^{-1}),\quad\lambda'_{ij}-\sigma_{ij}=\frac{f_{il}\sigma_{lj}}{r^2}+\sigma_{ij}o_1(r^{-2}),\label{Asympvf}\\ V=\frac{\bar{V}(x)}{r^2}+o_1(r^{-2}),\quad \int_{-1}^1\bar{V}(x)\,\text{d} x=0,\label{AsympV} \end{gather} where $\sigma_{ij}=\frac{r^2}{2}\text{diag}\left(1+x,1-x\right)$ and $f_{ij}$ is a diagonal matrix with $\text{Tr}(f_{ij})=0$. This implies that the geometry approaches the flat metric on $\mathbb{R}^4$ at large $r$. \item As $r\to 0$ for an asymptotically flat end we have \begin{gather} v=-2\log (r)+o_{1}(1),\quad\lambda'_{ij}-\sigma_{ij}=f_{il}\sigma_{lj}r^2+\sigma_{ij}o_1(r^{2}),\\ V=\bar{V}(x)r^2+o_1(r^{2}),\quad \int_{-1}^1\bar{V}(x)\,\text{d} x=0.\label{End1} \end{gather} \item As $r\to 0$ for an asymptotically cylindrical end with topology $\mathbb{R}^+\times N$, where $N\cong S^3,S^1\times S^2, L(p,q)$, we have \begin{gather} v=-\log (r)+o_{1}(r^{1}),\quad\lambda'_{ij}-\bar{\sigma}_{ij}=o_1(r^{2}),\quad V=O_1(1),\label{End2} \end{gather} where $h^c=e^{2V} \frac{ \text{d} x^2}{4(1-x^2)}+\bar{\sigma}_{ij}\text{d}\phi^i\text{d}\phi^j$ is the metric on $N$. \end{enumerate} \begin{remark} \label{remark1} The mass functional is defined for $t-\phi^i$ symmetric data $(\Sigma,u)$ and it equals the ADM mass. The ADM mass of \emph{general} initial data $(\Sigma,h,K)$ satisfies \cite{alaee2014mass} \begin{equation} m \geq \mathcal{M}(u), \end{equation} where $u = (v,\lambda',Y)$ is constructed from the corresponding $\mathcal{K}_{ab}$ associated to $K_{ab}$ by the decomposition \eqref{decomposition}. The equality is achieved if and only if the original initial data set is $t-\phi^i$ symmetric. \end{remark} From now on we restrict attention to the mass functional, as it is a lower bound for the mass of our original initial data. We set $\varphi=(\bar{v},\bar{\lambda}', \bar{Y})$ where $\bar{\lambda}'$ is a symmetric $2 \times 2$ matrix such that $\det\bar{\lambda}'=0$. As will be explained in the following sections, $\varphi$ will represent a perturbation about some fixed initial data $u_0$ defined in Definition \ref{Def1}. This should consist of five free degrees of freedom, and the apparent restriction $\det\bar{\lambda}' =0$ is simply a gauge choice. Let $\Omega$ be an (unbounded) domain. We introduce the weighted space of $C^1$ functions for which the norm \begin{equation} \norm{f}_{C^1_{\beta}(\Omega)}=\sup_{x\in \Omega}\{\sigma^{-\beta}\abs{f}+\sigma^{-\beta+1}\abs{\nabla f}\} \end{equation} is finite, where $\beta<-1$ and $\sigma=\sqrt{r^2+1}$. For a column vector and a matrix we define, respectively, \begin{equation} \abs{\bar{Y}} \equiv \left(\bar{Y}^t \lambda'^{-1}_0 \bar{Y}\right)^{1/2}\;, \quad \abs{\bar{\lambda}'} \equiv \left(\text{Tr}\left[\bar{\lambda}'^t\bar{\lambda}'\right]\right)^{1/2}. \end{equation} Let $\rho_0> 0$ be a constant and let $K_{\rho_0}$ be the cylinder $\rho\leq \rho_0$ in $\mathbb{R}^3$. We define the domain $\Omega_{\rho_0}=\mathbb{R}^3 \backslash K_{\rho_0}$. The perturbations $\bar{Y}$ and $\bar{\lambda}'$ are assumed to vanish in $K_{\rho_0}$. This is consistent with the physical requirement that the perturbations keep the angular momenta $J_i$ and the orbit space fixed. The Banach space $B$ is defined by the norm \begin{equation} \norm{\varphi}_{B}=\norm{\bar{v}}_{C^1_{\beta}(\mathbb{R}^3)}+\norm{\bar{\lambda}'}_{C^1_{\beta}(\Omega_{\rho_0})}+\norm{\bar{Y}}_{C^1_{\beta}(\Omega_{\rho_0})}. \end{equation}
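To make the decay encoded in these weighted norms concrete, here is a small numerical sketch; the grid, the sample function and the finite-difference gradient are our own illustrative choices, not data from the paper. The point is simply that $\norm{\cdot}_{C^1_\beta}$ is finite exactly when the function falls off at least as fast as $\sigma^{\beta}$ and its gradient as $\sigma^{\beta-1}$.
\begin{verbatim}
# Illustrative sketch (numpy): the weighted C^1_beta norm on a sample
# axisymmetric function vbar ~ sigma^beta. Grid and sample function are
# assumptions chosen for illustration only.
import numpy as np

beta = -2.0
rho, zz = np.meshgrid(np.linspace(0.01, 40, 400),
                      np.linspace(-40, 40, 800), indexing='ij')
r2 = 2.0*np.sqrt(rho**2 + zz**2)          # r^2 = 2 sqrt(rho^2 + z^2)
sigma = np.sqrt(r2 + 1.0)                 # sigma = sqrt(r^2 + 1)

vbar = sigma**beta                        # decays at the borderline rate

dv_drho, dv_dz = np.gradient(vbar, rho[:, 0], zz[0, :])
grad = np.hypot(dv_drho, dv_dz)

norm = np.max(sigma**(-beta)*np.abs(vbar) + sigma**(-beta + 1.0)*grad)
print(norm)   # O(1): finite, since |vbar| <~ sigma^beta
              # and |grad vbar| <~ sigma^(beta-1)
\end{verbatim}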
Now we define the class of extreme data. Note that we will denote non-negative constants which depend on parameters of the data, such as the mass and angular momenta, by $C$, $C_i$, and $C'$. \begin{Def}\label{Def1} The \emph{extreme class} $E$ is the collection of data arising from extreme, asymptotically flat, $\mathbb{R}\times U(1)^2$ invariant black holes which consist of triples $u_0 = (v_0,\lambda'_0,Y_0)$, where $v_0$ is a scalar, $\lambda'_0=[\lambda_{ij}]$ is a positive definite $2\times 2$ symmetric matrix, and $Y_0$ is a column vector, with the following bounds for $\rho\leq r^2$: \begin{enumerate} \item $\frac{\nabla Y_0^t\lambda^{-1}_0\nabla Y_0}{X_0}\leq Cr^{-4}$ and $e^{-2v_0}\frac{\nabla Y_0^t\lambda^{-1}_0\nabla Y_0}{X_0}\leq Cr^{-2}$ in $\mathbb{R}^3$, where $\lambda_0=e^{2v_0}\lambda'_0$ \item $C_1\rho I_{2\times 2}\leq\lambda_{0}\leq C_2 \rho I_{2\times 2}$ and $C_3\rho^{-1}I_{2\times 2}\leq\lambda^{-1}_{0}\leq C_4\rho^{-1}I_{2\times 2}$ in $\Omega_{\rho_0}$ \item $\rho^2\leq X_0$ in $\mathbb{R}^3$, where $X_0=\det\lambda_0$, and $X_0^2\leq C' \rho^4$ in $\Omega_{\rho_0}$, where $\lim_{\rho_0\to 0}C'=\infty$ \item $\abs{\nabla v_0}^2\leq C r^{-4}$, $\abs{\nabla\ln X_0}^2\leq C\rho^{-2}$ in $\mathbb{R}^3$ and $\abs{\nabla\lambda_0\lambda^{-1}_0}^2\leq C\rho^{-2}$ in $\Omega_{\rho_0}$ \end{enumerate} \end{Def} The choice of these bounds is consistent with the two known extreme black hole initial data sets, extreme Myers-Perry and the extreme doubly spinning black ring. These inequalities are difficult to prove directly because the expressions in terms of the $(\rho,z)$ coordinates are unwieldy. However, we have checked numerically that these bounds hold for a wide range of parameters for these two cases. It is possible that there exists an extreme data set which has slightly different bounds (i.e.\ this would correspond to another extreme black hole with a different orbit space). In that case we expect the arguments used in the proof of Theorem \ref{main theorem} can be extended to take into account these different estimates. Note that by what has been proved in \cite{alaee2014mass}, $\mathcal{M}$ evaluated on the extreme class is non-negative and given by \begin{equation}\label{massextreme} \mathcal{M}_{\text{cp}}=\frac{3}{8}\int_{\mathbb{R}^3}e^{-6v_0}\frac{\abs{\nabla Y_0}^2 }{2\rho^2}\, \,\text{d} \Sigma, \end{equation} where $\abs{\nabla Y_0}^2=\nabla Y_0^t\lambda'^{-1}_{0}\nabla Y_0$. Now denote an extreme data set of this class by $u_0 = (v_0,\lambda'_0,Y_0)\in E$. Then we have the following result. \begin{thm}\label{main theorem} $\phantom{Next line}$ \begin{enumerate}[(a)] \item Let $\varphi=(\bar{v},\bar{\lambda}', \bar{Y}) \in B$ where $B$ is the Banach space defined above and $u_0=({v}_0,{\lambda}_0',{Y_0})\in E$ is extreme data with fixed $\mathcal{B}$. Then the functional $\mathcal{M}:B\rightarrow \mathbb{R}$ has a strict local minimum at $u_0$. That is, there exists $\epsilon>0$ such that \begin{equation} \mathcal{M}(u_0+\varphi)>\mathcal{M}(u_0) \end{equation} for all $\varphi\in B$ with $\norm{\varphi}_{B}<\epsilon$ and $\varphi\neq 0$. \item Let $(\Sigma,h_{ab},K_{ab})$ be an asymptotically flat, maximal, $U(1)^2$-invariant, vacuum initial data set with mass $m$, angular momenta $J_1$ and $J_2$, and fixed orbit space $\mathcal{B}$ such that the data satisfies the boundary conditions given by \eqref{Asympvf}-\eqref{End2}.
Let $u = (v,\lambda',Y)$ describe the associated $t-\phi^i$ symmetric data as in Remark \ref{remark1} and write $u = u_0 + \varphi$, where $u_0$ is extreme data with the same $J_1,J_2$ and orbit space $\mathcal{B}$. If $\varphi$ is sufficiently small (as in (a)), then \begin{equation} m\geq f(J_1,J_2) = \mathcal{M}(u_0) \end{equation} for some $f$ which depends on the orbit space $\mathcal{B}$. Moreover, $m=f(J_1,J_2)$ for data $(\Sigma,h,K)$ in such a neighbourhood if and only if the data are extreme data. \end{enumerate} \end{thm} \noindent For the sake of illustration we mention two special cases of the theorem. \begin{enumerate} \item In dimension 5, a possible horizon topology is $H\cong S^3$. Consider fixed angular momenta $J_1$ and $J_2$ and fixed orbit space $\tilde{\mathcal{B}}$ consisting of a finite timelike interval (the event horizon) and two semi-infinite spacelike intervals extending to asymptotic infinity (representing rotation axes). Then the orbit space of the slice will be $\mathcal{B} \cong \tilde{\mathcal{B}}\backslash \{\text{horizon interval}\}$, which corresponds to slice topology $\Sigma\cong\mathbb{R}\times S^3$ \cite{alaee2014notes,alaee2014mass}. By the uniqueness theorem \cite{figueras2010uniqueness} the extreme Myers-Perry solution is the unique solution with this orbit space and fixed angular momenta. Thus there exists $f(x,y)=3\left[\frac{\pi}{32}(\abs{x}+\abs{y})^2\right]^{1/3}$ such that the mass of the extreme Myers-Perry black hole equals $f(J_1,J_2)$. Then by Theorem \ref{main theorem} the mass of any asymptotically flat, maximal, biaxisymmetric data sufficiently close (in the sense made precise above) with the same interval structure and angular momenta is at least $f(J_1,J_2)$ (see the sketch following this list). \item Now consider the horizon topology $H\cong S^2\times S^1$. Consider fixed angular momenta $J_1$ and $J_2$ and fixed orbit space $\tilde{\mathcal{B}}$ consisting of a finite timelike interval, a finite spatial interval, and two semi-infinite intervals extending to asymptotic infinity. Then the orbit space of the slice will be $\mathcal{B} \cong \tilde{\mathcal{B}}\backslash \{\text{horizon interval}\}$, which corresponds to slice topology $\Sigma\cong S^2\times B^2\#\mathbb{R}^4$ \cite{alaee2014notes,alaee2014mass}. By the uniqueness theorem \cite{figueras2010uniqueness} the extreme doubly spinning black ring is the unique solution with orbit space $\tilde{\mathcal{B}}$ and fixed angular momenta. Thus there exists $f(x,y)=3\left[\frac{\pi}{4}\abs{x}(\abs{y}-\abs{x})\right]^{1/3}$\footnote{In \cite{alaee2014mass} there is a typo in equation (2). The correct expression is $M^3=\frac{27\pi}{4}J_1(J_2-J_1)$.} such that the mass of the extreme doubly spinning black ring equals $f(J_1,J_2)$. Then by Theorem \ref{main theorem} the mass of any asymptotically flat, maximal, biaxisymmetric data sufficiently close with the same orbit structure and fixed angular momenta is at least $f(J_1,J_2)$. \end{enumerate}
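As a quick numerical illustration of the two special cases above, the following sketch evaluates the quoted expressions for $f$ at arbitrarily chosen angular momenta (the black ring case requires $\abs{J_2}>\abs{J_1}>0$):
\begin{verbatim}
# Sketch (python): evaluate the extreme-mass functions f(J1, J2) quoted above.
from math import pi

def f_myers_perry(j1, j2):
    # m_ext for the orbit space with S^3 horizon topology
    return 3.0*((pi/32.0)*(abs(j1) + abs(j2))**2)**(1.0/3.0)

def f_black_ring(j1, j2):
    # m_ext for the orbit space with S^1 x S^2 horizon topology
    assert abs(j2) > abs(j1) > 0.0
    return 3.0*((pi/4.0)*abs(j1)*(abs(j2) - abs(j1)))**(1.0/3.0)

print(f_myers_perry(1.0, 1.0))   # lower bound on m for nearby S^3 data
print(f_black_ring(1.0, 3.0))    # lower bound on m for nearby ring data
\end{verbatim}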
Theorem \ref{main theorem} is a local inequality which should be satisfied for a wide class of (possibly dynamical) black holes with a fixed interval structure and geometry sufficiently near that of an extreme black hole. One may expect to prove a global result showing that this inequality holds for all data with fixed $J_1,J_2$ and $\mathcal{B}$. Such a global inequality has been proved in the electrovacuum in 3+1 dimensions \cite{chrusciel2009mass,dain2008proof}. A major obstacle to extending this result to the present case is showing positivity of $\mathcal{M}$ for arbitrary interval structures consistent with asymptotic flatness. However, for a class of interval structures (including Myers-Perry black hole initial data) one can show $\mathcal{M} \geq 0$ \cite{alaee2014mass}. We are currently investigating whether a global inequality can be demonstrated in this particular setting. In this context, it is worth noting that $\mathbb{R} \times U(1)^2$-invariant vacuum spacetimes can be cast as harmonic maps from the orbit space to $SL(3,\mathbb{R}) / SO(3)$ \cite{hollands2008uniqueness}. The target space metric is easily checked to be Einstein with negative curvature (it is not conformally flat). This can be contrasted with the four-dimensional case, where the $\mathbb{R} \times U(1)$-invariant vacuum solutions are harmonic maps to $SL(2,\mathbb{R})/SO(2) \cong \mathbb{H}^2$ equipped with its standard Einstein metric. Another open problem is to generalize this theorem to include multiple asymptotic ends, corresponding to multiple black holes \cite{chrusciel2008mass}. The proof of Theorem \ref{main theorem} is given in Section \ref{proof}. The rest of the paper is organized as follows. In Section \ref{critical} we find the critical points of $\mathcal{M}$ and prove uniform continuity of a one-parameter family of functionals obtained from $\mathcal{M}$ and denoted by $\mathcal{E}_{\varphi}(t)$. In Section \ref{Carter} we derive a Carter-type identity (a linearized version of Mazur's identity) for five-dimensional spacetimes and use it to prove positivity of the second variation of $\mathcal{E}_{\varphi}(t)$ at $t=0$. Finally, we prove a coercivity condition for the second variation $\mathcal{E}''_{\varphi}(0)$. This is sufficient to demonstrate that $u_0$ is a strict minimum for $\mathcal{M}$. \section{Critical points of the mass functional $\mathcal{M}$}\label{critical} In this section we study the properties of the second variation of the mass functional $\mathcal{M}$. Let $\varphi\in B$ and consider the real-valued function \begin{equation} \mathcal{E}_{\varphi}(t)\equiv \mathcal{M}(u_0+t\varphi), \end{equation} where we take \begin{equation} (v,\lambda',Y)\equiv(v(t),\lambda'(t),Y(t))=(v_0+t\bar{v},\lambda'_0+t\bar{\lambda}',Y_0+t\bar{Y})\label{relation1} \end{equation} with $\det\lambda'=\rho^2$. This choice for the determinant of $\lambda'$ requires that $\det\bar{\lambda}'=0$. Moreover we have \begin{equation} \lambda\equiv \lambda (t)=e^{2v}\lambda'(t), \qquad X\equiv X(t)=e^{4v}\rho^2,\label{relation2} \end{equation} and $X_0=X(0)$.
Then the first variation is {\small\begin{eqnarray} \mathcal{E}'_{\varphi}(t)&=&\frac{1}{8}\int_{\mathbb{R}^3}\Bigg[12\nabla v\cdot\nabla\bar{v}+\frac{e^{-6v}}{2\rho^4}\Bigg[\nabla Y^t\text{adj}(\bar{\lambda'})\nabla Y+2\nabla Y^t\text{adj}(\lambda')\nabla\bar{Y}-6\bar{v}\nabla Y^t\text{adj}(\lambda')\nabla Y\Bigg]\nonumber\\ &-&\frac{1}{2\rho^2}\text{Tr}\left(\text{adj}(\nabla\bar{\lambda'})\nabla\lambda'\right)\Bigg]\,d\Sigma \end{eqnarray}} The critical point equations ($\mathcal{E}'_\varphi(0) =0$), computed in \cite{alaee2014mass}, are \begin{eqnarray}\label{EL1} G_X&\equiv&4\Delta_3v+\frac{\nabla Y^t{\lambda}^{-1}\nabla{Y}}{X} = 0\\ \label{EL2} G&\equiv&\nabla\cdot\left(\frac{\nabla\lambda'}{\rho^2}\right)+\frac{e^{2v}}{X^2}\nabla Y\nabla Y^t = 0\\ \label{EL3} G_Y&\equiv&\nabla\cdot\left(\frac{\lambda^{-1}\nabla Y}{X}\right) = 0 \end{eqnarray} On the other hand, the vacuum field equations for an $\mathbb{R} \times U(1)^2$-invariant spacetime are \cite{figueras2010uniqueness} \begin{gather}\label{fieldeqns} \begin{aligned} G_{\lambda}&\equiv \nabla\cdot \left(\lambda^{-1}\nabla\lambda\right)+\frac{\lambda^{-1}}{X}\nabla Y\cdot\nabla Y^t = 0\\ G_{Y}&=\nabla\cdot \left(\frac{\lambda^{-1}}{X}\nabla Y\right) = 0 \end{aligned} \end{gather} where $G_X=\text{Tr}\left(G_{\lambda}\right)$. It is straightforward to show that these field equations \eqref{fieldeqns} are equivalent to the critical point equations (\ref{EL1})--(\ref{EL3}) of $\mathcal{E}_{\varphi}$. This shows that the critical points of the mass functional are the same as the stationary, biaxisymmetric vacuum solutions \cite{alaee2014mass} (written in spacetime Weyl coordinates with orbit space $\tilde{\mathcal{B}}$). However, for non-extreme black holes, this chart only covers the exterior region of the black hole spacetime and the manifold has an interior boundary. In particular, in these coordinates the mass functional is singular on the inner boundary. One can always find quasi-isotropic coordinates on the initial data slice $\Sigma$ to complete the manifold and compute the mass, but then the resulting geometry is \emph{not} a critical point of $\mathcal{M}$. But for extreme black holes, the usual spacetime Weyl coordinates and quasi-isotropic coordinates coincide, and the mass functional is well defined on these critical points. This point is discussed in more detail\footnote{We thank S Dain for clarifying this point.} in \cite{dain2006variational} and \cite{alaee2014mass}. A calculation yields the second variation {\small\begin{eqnarray} \mathcal{E}''_{\varphi}(t)&=&\frac{1}{8}\int_{\mathbb{R}^3}\Bigg(12\left(\nabla\bar{v}\right)^2-\frac{\det\nabla\bar{\lambda'}}{\rho^2}+\frac{e^{-6 v}}{\rho^4}\Bigg[2\nabla Y^t\text{adj}(\bar{\lambda'})\nabla{\bar{Y}}+\nabla \bar{Y}^t\text{adj}(\lambda')\nabla{\bar{Y}}\nonumber\\ &-&6\bar{v}\nabla Y^t\text{adj}(\bar{\lambda'})\nabla Y-12\bar{v}\nabla Y^t\text{adj}(\lambda')\nabla\bar{Y}+18\bar{v}^2\nabla Y^t\text{adj}(\lambda')\nabla Y\Bigg] \Bigg)\,d\Sigma \end{eqnarray}} Note that the integrand of the functional $\mathcal{M}$ is singular at $\rho=0$. However, we have defined the Banach space $B$ only for functions $\bar{Y}$ and $\bar{\lambda}'$ with support in $\Omega_{\rho_0}$. Therefore, the domain of integration of the terms in which $\nabla \bar{Y}$ and $\nabla\bar{\lambda'}$ appear is in fact $\Omega_{\rho_0}$, and hence the integrand is regular for those terms.
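For readers who wish to experiment with $\mathcal{M}$ numerically, the sketch below assembles the bulk integrand on a $(\rho,z)$ grid. It is illustrative only: the sample fields are arbitrary placeholders that need not satisfy any constraint, the rod boundary term is omitted, and we interpret $\det\nabla\lambda'$ entrywise as $\nabla\lambda'_{11}\cdot\nabla\lambda'_{22}-\nabla\lambda'_{12}\cdot\nabla\lambda'_{12}$, consistent with the adjugate trace terms appearing in the variations above.
\begin{verbatim}
# Illustrative sketch (numpy): bulk term of the mass functional M(u) on a
# grid. All field choices are placeholder assumptions, not data from the
# paper; the rod (boundary) integral of log V_i is omitted.
import numpy as np

rg = np.linspace(0.5, 30.0, 300)       # stay away from the axis rho = 0
zg = np.linspace(-30.0, 30.0, 600)
rho, zz = np.meshgrid(rg, zg, indexing='ij')

def grad(F):                           # (d/drho, d/dz) by finite differences
    return np.gradient(F, rg, zg)

# placeholder data u = (v, lambda', Y) with det(lambda') = rho^2
v = 0.1*np.exp(-(rho**2 + zz**2)/20.0)
l11, l22, l12 = rho*np.exp(v), rho*np.exp(-v), np.zeros_like(rho)
Y1 = 0.05*zz/np.sqrt(rho**2 + zz**2)   # crude twist-potential-like profile

dv = grad(v); dY1 = grad(Y1)
d11, d22, d12 = grad(l11), grad(l22), grad(l12)

det_grad_l = d11[0]*d22[0] + d11[1]*d22[1] - (d12[0]**2 + d12[1]**2)
# grad(Y)^t lambda'^{-1} grad(Y) with Y^2 = 0
YlY = l22*(dY1[0]**2 + dY1[1]**2)/(l11*l22 - l12**2)

integrand = (-det_grad_l/(2.0*rho**2)
             + np.exp(-6.0*v)*YlY/(2.0*rho**2)
             + 6.0*(dv[0]**2 + dv[1]**2))

# dSigma = rho drho dz dphi; the phi integral contributes 2*pi
drho, dz = rg[1] - rg[0], zg[1] - zg[0]
M_bulk = 2.0*np.pi*np.sum(integrand*rho)*drho*dz/8.0
print(M_bulk)
\end{verbatim}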
We now introduce auxiliary Hilbert spaces $\mathcal{H}_i$, defined in terms of the weighted Sobolev norms \begin{eqnarray} \norm{\bar{v}}^2_{\mathcal{H}_1}&=&\int_{\mathbb{R}^3}\abs{\nabla\bar{v}}^2 r^{-2}\text{d}\Sigma+\int_{\mathbb{R}^3}\abs{\bar{v}}^2r^{-4}\text{d}\Sigma\\ \norm{\bar{\lambda}'}^2_{\mathcal{H}_2}&=&\int_{\Omega_{\rho_0}}\abs{\nabla\bar{\lambda}'}^2\rho^{-2}\text{d}\Sigma+\int_{\Omega_{\rho_0}}\abs{\bar{\lambda}'}^2\rho^{-4}\text{d}\Sigma\\ \norm{\bar{Y}}^2_{\mathcal{H}_3}&=&\int_{\Omega_{\rho_0}}\abs{\nabla\bar{Y}}^2\rho^{-2}\text{d}\Sigma+\int_{\Omega_{\rho_0}}\abs{\bar{Y}}^2\rho^{-4}\text{d}\Sigma \end{eqnarray} together with the corresponding inner products. The auxiliary Hilbert space $\mathcal{H}$ for $\varphi$ has norm defined by \begin{equation} \norm{\varphi}_{\mathcal{H}}^{2}=\norm{\bar{v}}^2_{\mathcal{H}_1}+\norm{\bar{\lambda}'}^2_{\mathcal{H}_2}+\norm{\bar{Y}}^2_{\mathcal{H}_3}, \end{equation} with its corresponding inner product. We have $B\subset \mathcal{H}$ and the following Poincar\'e inequalities. \begin{lemma}\label{Poincare} Let $\varphi\in\mathcal{H}$ and let $\delta\neq 0$ be a real number. Then \begin{enumerate}[(a)] \item $\abs{\delta}^{-2}\int_{\mathbb{R}^3}\abs{\nabla\bar{v}}^2r^{-2\delta-1}\text{d}\Sigma\geq \int_{\mathbb{R}^3}\abs{\bar{v}}^2r^{-2\delta-3}\text{d}\Sigma$ \item $\abs{\delta}^{-2}\int_{\Omega_{\rho_0}}\abs{\nabla\bar{\lambda}'}^2\rho^{-2\delta}\text{d}\Sigma\geq \int_{\Omega_{\rho_0}}\abs{\bar{\lambda}'}^2\rho^{-2\delta-2}\text{d}\Sigma$ \item $2\abs{\delta}^{-2}\int_{\Omega_{\rho_0}}\nabla\bar{Y}^t\nabla\bar{Y}\rho^{-3\delta}\text{d}\Sigma\geq 3\int_{\Omega_{\rho_0}}\bar{Y}^t\bar{Y}\rho^{-3\delta-2}\text{d}\Sigma$ \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(a)] \item The proof of this part is similar to that of Theorem 1.3 of \cite{bartnik1986mass}. \item The proof of part (b) is as follows. For any symmetric matrix $\bar{\lambda}'$ we have \begin{equation} \abs{\bar{\lambda}'}^2=\bar{\lambda}'^2_{11}+\bar{\lambda}'^2_{22}+2\bar{\lambda}'^2_{12}. \end{equation} Let $\Delta_3$ be the Laplace operator with respect to $\delta_3$ on $\mathbb{R}^3$, and note that \begin{equation} \Delta_3(\ln \rho)=0. \end{equation} Then for each of the functions $\bar{\lambda}'_{ij}$, integrating over $\Omega_{\rho_0}$ and integrating by parts gives \begin{equation} \int_{\Omega_{\rho_0}}\nabla\left(\rho^{-2\delta}\bar{\lambda}'^2_{ij}\right)\cdot\nabla\left(\ln\rho\right)\text{d}\Sigma=0. \end{equation} Expanding the derivatives in the integrand and using the H\"older inequality, we have \begin{equation} \abs{\delta}^{-2}\int_{\Omega_{\rho_0}}\abs{\nabla\bar{\lambda'}_{ij}}^2\rho^{-2\delta}\text{d}\Sigma\geq \int_{\Omega_{\rho_0}}\abs{\bar{\lambda}'_{ij}}^2\rho^{-2\delta-2}\text{d}\Sigma. \end{equation} Summing over the components then yields \begin{equation} \abs{\delta}^{-2}\int_{\Omega_{\rho_0}}\abs{\nabla\bar{\lambda}'}^2\rho^{-2\delta}\text{d}\Sigma\geq \int_{\Omega_{\rho_0}}\abs{\bar{\lambda}'}^2\rho^{-2\delta-2}\text{d}\Sigma. \end{equation} \item The proof is similar to that of part (b). \end{enumerate} \end{proof} \begin{lemma}\label{uniformlemma} Let $\varphi\in B$ and $0<t<1$. Then \begin{enumerate}[(a)] \item The function $\mathcal{E}_{\varphi}(t)$ is $C^2$ in the $t$ variable.
\item For every $\epsilon>0$ there exists $\eta(\epsilon)$ such that for $\norm{\varphi}_{B}< \eta(\epsilon)$ we have \begin{equation}\label{uc} \abs{\mathcal{E}''_{\varphi}(t)-\mathcal{E}''_{\varphi}(0)}\leq\epsilon\norm{\varphi}^2_{\mathcal{H}} \end{equation} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(a)] \item To show that $\mathcal{E}_{\varphi}(t)$ is $C^2$ it is enough to show that the third derivative exists for all $t$. First we have {\small\begin{eqnarray} \mathcal{E}'''_{\varphi}(t)&=&\frac{1}{8}\int_{\mathbb{R}^3}\frac{e^{-6 v}}{\rho^4}\Bigg(3\nabla\bar{Y}^t\text{adj}(\bar{\lambda'})\nabla{{Y}} -42\bar{v}\nabla\bar{Y}^t\text{adj}(\bar{\lambda'})\nabla{\bar{Y}} -12\bar{v}\nabla\bar{Y}^t\text{adj}(\lambda')\nabla{\bar{Y}}\nonumber\\ &+&108\bar{v}^2\nabla{Y}^t\text{adj}(\bar{\lambda'})\nabla{ Y} +144\bar{v}^2\nabla{Y}^t\text{adj}(\lambda')\nabla{\bar{Y}} -216\bar{v}^3\nabla{Y}^t\text{adj}(\lambda')\nabla{ Y} \Bigg)\,\text{d}\Sigma\nonumber \end{eqnarray}} Note that $\bar{Y}$ and $\bar{\lambda}'$ are supported in $\Omega_{\rho_0}$. Therefore, by parts 1 and 2 of Definition \ref{Def1}, the relation $\text{adj}\bar{\lambda}'=-\frac{1}{\rho^2}\text{adj}{\lambda}'_0\bar{\lambda}'\,\text{adj}\lambda'_0$, and $\det\bar{\lambda}'=0$, it is straightforward but tedious to show that all terms are controlled by the norm $\norm{\varphi}_{B}$. The only term with a different domain is \begin{equation} -\frac{216\bar{v}^3}{X_0}\nabla{Y}_0^t\lambda^{-1}_0\nabla{Y}_0, \end{equation} which is bounded on $\mathbb{R}^3$ by part 1 of Definition \ref{Def1}. Hence $\mathcal{E}_{\varphi}(t)$ is $C^2$. \item First, from the integrand of $\mathcal{E}''_{\varphi}(t)$ we have \begin{eqnarray} \mathcal{E}''_{\varphi}(t)-\mathcal{E}''_{\varphi}(0)=\int_{\mathbb{R}^3}\left(A_1|_{0}^t+\dots+A_6|_{0}^t\right)\,\text{d}\Sigma, \end{eqnarray} where \begin{eqnarray} A_1&=&18\frac{e^{-6v}\bar{v}^2}{\rho^4}\nabla Y_0^t\text{adj}\lambda'_0\nabla Y_0\qquad A_2=\frac{e^{-6v}}{\rho^4}(18\bar{v}^2t-6\bar{v})\nabla {Y}^t_0\text{adj}\bar{\lambda}'\nabla Y_0\nonumber\\ A_3&=&\frac{e^{-6v}}{\rho^4}(36\bar{v}^2t-12\bar{v})\nabla \bar{Y}^t\text{adj}{\lambda}'_0\nabla Y_0\qquad A_4=\frac{e^{-6v}}{\rho^4}(18\bar{v}^2t^2-12\bar{v}t+1)\nabla \bar{Y}^t\text{adj}\lambda'_0\nabla\bar{Y}\nonumber\\ A_5&=&\frac{e^{-6v}}{\rho^4}(36\bar{v}^2t^2-24\bar{v}t^2+2)\nabla \bar{Y}^t\text{adj}\bar{\lambda}'\nabla{Y}_0\qquad A_6=\frac{e^{-6v}}{\rho^4}(18\bar{v}^2t^3-18\bar{v}t^2+3t)\nabla \bar{Y}^t\text{adj}\bar{\lambda}'\nabla\bar{Y}\nonumber \end{eqnarray} All of these terms satisfy \eqref{uc} by steps similar to those in \cite{dain2006proof}. We will explicitly give the proof for $A_1$, $A_2$, $A_3$; the remaining arguments are similar but tedious.
First we have \begin{gather}\label{barv} \abs{\bar{v}}\leq \sigma^{\beta}\norm{\bar{v}}_{C^1_{\beta}(\mathbb{R}^3)}\leq \norm{\bar{v}}_{C^1_{\beta}(\mathbb{R}^3)}\leq \norm{\varphi}_{B}\leq \eta. \end{gather} By part (1) of Definition \ref{Def1} we have \begin{eqnarray} \int_{\mathbb{R}^3}A_1|_{0}^t\text{d}\Sigma&=&\int_{\mathbb{R}^3}18\bar{v}^2\frac{\nabla Y_0^t\lambda_0^{-1}\nabla Y_0}{X_0}\left[e^{-6 t\bar{v}}-1\right]\text{d}\Sigma\nonumber\\ &\leq&18 C\left[e^{6\eta}-1\right]\int_{\mathbb{R}^3}\bar{v}^2r^{-4}\text{d}\Sigma \nonumber\\ &\leq& 18 C\left[e^{6\eta}-1\right]\norm{\bar{v}}^2_{\mathcal{H}_1} \leq 18 C\left[e^{6\eta}-1\right]\norm{\varphi}^2_{\mathcal{H}} \end{eqnarray} Next we write $A_2=B_1+B_2$, where \begin{gather} B_1=\frac{e^{-6v}}{\rho^4}18\bar{v}^2t\nabla {Y}^t_0\text{adj}\bar{\lambda}'\nabla Y_0\qquad B_2=-6\frac{e^{-6v_0}}{\rho^4}\bar{v}\nabla {Y}^t_0\text{adj}\bar{\lambda}'\nabla Y_0\left[e^{-6 t\bar{v}}-1\right] \end{gather} We prove the bound for $B_1$; the argument for $B_2$ is similar. We have \begin{eqnarray} \int_{\mathbb{R}^3}B_1\text{d}\Sigma &=&-\int_{\Omega_{\rho_0}}\frac{e^{-6v}}{\rho^6}18\bar{v}^2t\nabla {Y}^t_0\text{adj}\lambda'_0\bar{\lambda}'\text{adj}\lambda'_0\nabla Y_0\text{d}\Sigma\nonumber\\ &\leq&18e^{6\eta}\eta\int_{\Omega_{\rho_0}}\frac{e^{-6v_0}}{\rho^4}\abs{\bar{\lambda}'}\abs{\bar{v}}\nabla {Y}^t_0(\text{adj}{\lambda}'_0)^2\nabla Y_0\text{d}\Sigma\nonumber\\ &\leq&18C\eta e^{6\eta}\int_{\Omega_{\rho_0}}\abs{\bar{\lambda}'}\abs{\bar{v}}\rho^{-1}r^{-2}\text{d}\Sigma \nonumber\\ &\leq&18C\eta e^{6\eta}\norm{\bar{v}}_{\mathcal{H}_1}\norm{\bar{\lambda'}}_{\mathcal{H}_2}\leq 18C\eta e^{6\eta}\norm{\varphi}^2_{\mathcal{H}} \end{eqnarray} We used the identity $\text{adj}\bar{\lambda}'=-\frac{1}{\rho^2}\text{adj}{\lambda}'_0\bar{\lambda}'\text{adj}\lambda'_0$ in the first line. The first inequality arises from \eqref{barv} and the matrix inequality $u^t A u \leq \abs{A} u^t u$ for any $2 \times 2$ matrix $A$. The second inequality is a consequence of parts (1) and (2) of Definition \ref{Def1}. Finally, the third inequality follows from H\"older's inequality. The term $A_3$ can be expressed as $A_3=B_3+B_4$, where \begin{gather} B_3=36\frac{e^{-6v}}{\rho^4}\bar{v}^2t\nabla \bar{Y}^t\text{adj}{\lambda}'_0\nabla Y_0\qquad B_4=-12\frac{e^{-6v_0}}{\rho^4}\bar{v}\nabla \bar{Y}^t\text{adj}{\lambda}'_0\nabla Y_0\left[e^{-6 t\bar{v}}-1\right] \end{gather} Then the bound for $B_3$ is \begin{eqnarray} \int_{\mathbb{R}^3}B_3\text{d}\Sigma &\leq&36\eta e^{6\eta}\int_{\Omega_{\rho_0}}\frac{1}{X_0}\abs{\bar{v}}\nabla \bar{Y}^t\lambda'^{-1}_0\nabla Y_0\text{d}\Sigma \nonumber\\ &\leq&36\eta e^{6\eta}\int_{\Omega_{\rho_0}}\frac{\abs{\bar{v}}}{X_0}\left(\nabla \bar{Y}^t\lambda'^{-1}_0\nabla \bar{Y}\right)^{1/2} \left(\nabla Y_0^t \lambda_0^{-1} \nabla Y_0 \right)^{1/2}\text{d}\Sigma \nonumber \\ &\leq&36C\eta e^{6\eta}\left(\int_{\Omega_{\rho_0}}\rho^{-2}\nabla \bar{Y}^t\lambda'^{-1}_0\nabla \bar{Y}\text{d}\Sigma \right)^{1/2}\left(\int_{\Omega_{\rho_0}}\bar{v}^2r^{-4}\text{d}\Sigma \right)^{1/2}\nonumber\\ &\leq&36C\eta e^{6\eta}\norm{\bar{v}}_{\mathcal{H}_1}\norm{\bar{Y}}_{\mathcal{H}_3} \leq 36C\eta e^{6\eta}\norm{\varphi}^2_{\mathcal{H}} \end{eqnarray} The first inequality uses \eqref{barv}. Since $\lambda_0^{-1}$ is a positive definite symmetric matrix, it has a square root $\lambda_0^{-1/2}$, that is, $\lambda_0^{-1}=\left(\lambda_0^{-1/2}\right)^2$.
Then the integrand in the first line is equal to $X_0^{-1} \abs{\bar{v}}\, u^t w$, where $u^t = \nabla \bar{Y}^t \lambda_0^{-1/2}$ and $w = \lambda_0^{-1/2}\nabla Y_0$. Since $u^t w \leq (u^t u)^{1/2} (w^t w)^{1/2}$, we have the second inequality. The third inequality follows from H\"older's inequality and parts (1) and (2) of Definition \ref{Def1}. The fourth inequality follows from the definition of the norms. The term $B_4$ is handled exactly as $B_3$. \end{enumerate} \end{proof} \section{Local minima of $\mathcal{E}_\varphi(t)$}\label{Carter} In this section we first derive a five-dimensional version of Carter's identity and show its relation to the second variation $\mathcal{E}''_{\varphi}(t)$. Assume we are in a five-dimensional vacuum spacetime with isometry group $\mathbb{R}\times U(1)^2$. The field equations can be expressed simply as the conservation of a current (see \cite{figueras2010uniqueness} for details): \begin{equation}\label{EOM} \nabla\cdot J=\nabla\cdot\left(\rho\,\Phi^{-1}\nabla\Phi\right)=0, \end{equation} where \begin{equation} \Phi \equiv \Phi(X,Y,\lambda)=\begin{pmatrix} \frac{1}{X} & -\frac{Y^t}{X} \\ -\frac{Y}{X} & \lambda+\frac{YY^t}{X} \end{pmatrix} \end{equation} and $\det\Phi=1$, $\lambda$ is a positive definite $2\times 2$ symmetric matrix with $\det\lambda=X$, and $Y$ is a column vector. One can derive the Mazur identity (for a detailed discussion see \cite{carter1985bunting}) for two matrices $\Phi_{[1]}$ and $\Phi_{[2]}$ (not necessarily solutions) with corresponding currents $J_{[1]}, J_{[2]}$: \begin{equation} \label{Mazur} \Delta \Psi-\text{Tr}\left(\Phi_{[2]}\left(\nabla\cdot\mathring{J} \right)\Phi_{[1]}^{-1}\right)=\frac{1}{\rho^2}\text{Tr}\left(\mathring{J}^t\Phi_{[2]}\mathring{J}\Phi_{[1]}^{-1}\right), \end{equation} where $\Delta$ is the Laplace operator with respect to the flat metric $\delta_3$ and \begin{eqnarray} \Psi&=&\text{Tr}\left(\Phi_{[2]}\Phi_{[1]}^{-1}-I\right)\qquad \mathring{J}=J_{[2]}-J_{[1]} \end{eqnarray} Note that this identity holds quite generally for any field theory which can be derived from a positive definite action with Lagrangian of the form $L \sim \,{\rm Tr} (\Phi^{-1} \text{d} \Phi)^2$. The linearized version of this identity in four dimensions was originally found by Carter \cite{carter1971axisymmetric} and plays an important role in geometric inequalities in 3+1 dimensional spacetime \cite{dain2006proof,dain2008proof,dain2011area,dain2011areacharge}. We will now derive a generalization of this identity for five dimensions. Assume we have $\Phi_{[1]}(X,Y,\lambda)$ and $\Phi_{[2]}(X_2,Y_2,\lambda_2)$ related by \begin{gather}\label{linearizeddata} \begin{aligned}X_2&=X+s\dot{X}\qquad &&Y_2 =Y+s \dot{Y} \qquad \lambda_2=\lambda+s\dot{\lambda} \\ G_{\lambda_2}&=G_{\lambda}+s \dot{G}_{\lambda},\qquad &&G_{X_2}=G_X+s \dot{G}_X \end{aligned} \end{gather} The overdot $\dot{}$ denotes the linear order of the expansion, or first variation, with respect to $s$ (when taking variations of products of several terms, we use the notation $\delta$ instead of the dot for convenience).
Then \eqref{Mazur} implies, to lowest order in $s$, that \begin{eqnarray} &&\Delta\left(\frac{\dot{Y}^t\lambda^{-1}\dot{Y}}{X}+\frac{\dot{X}^2}{X^2}\right)\nonumber\\ &+&\frac{\dot{Y}^t\lambda^{-1}\dot{Y}}{X}G_X-\frac{\dot{X}}{X}\dot{G}_X-2\frac{\dot{X}}{X}G_Y^t\dot{Y}-2\dot{Y}^t\lambda^{-1}\dot{\lambda}G_Y+\frac{\dot{Y}^t\lambda^{-1}G^t_{\lambda}\dot{Y}}{X}-\text{Tr}\left(\lambda^{-1}\dot{\lambda}\dot{G}^t_{\lambda}\right)-2\dot{G}_Y^t\dot{Y}\nonumber\\ &=&\left(\nabla\left(\frac{\dot{X}}{X}\right)+\frac{\dot{Y}^t\lambda^{-1}\nabla Y}{X}\right)^2+X\left(\dot{U}_2^t\lambda\dot{U}_2+\nabla U_1^t\lambda\nabla U_1\right)+\text{Tr}\bigg[\left(\nabla\left(\dot{\lambda}\lambda^{-1}\right)+\frac{\nabla Y\dot{Y}^t\lambda^{-1}}{X}\right)^2\bigg]\nonumber\\ \end{eqnarray} where \begin{equation} U_1\equiv\frac{\lambda^{-1}\dot{Y}}{X}\qquad U_2\equiv\frac{\lambda^{-1}\nabla{Y}}{X} \end{equation} This is the five-dimensional extension of Carter's identity, which appeared in \cite{carter1971axisymmetric}. Now if we consider our parametrization of the data with the relations \eqref{relation1} and \eqref{relation2}, we have \begin{equation} \dot{X}=4\bar{v}X,\qquad \dot{\lambda}=\bar{\lambda}=2\bar{v}\lambda+\lambda\lambda'^{-1}\bar{\lambda}',\qquad \dot{Y}=\bar{Y}. \end{equation} Thus \begin{equation} \lambda^{-1}\dot{\lambda}=2\bar{v}I+\lambda'^{-1}\bar{\lambda}'.\label{lambdaandprime} \end{equation} Since $\text{Tr}\left(\lambda'^{-1}\bar{\lambda}'\right)= \delta \det \lambda' / \det \lambda' = 0$, we have $\text{Tr}\left(\lambda^{-1}\bar{\lambda}\right)=4\bar{v}$. The resulting identity, which holds for arbitrary $v$, $\bar{v}$, $Y$, $\bar{Y}$, $\lambda$, $\bar{\lambda}$, is \begin{eqnarray}\label{Carteridentity} &&\Delta\left(\frac{\bar{Y}^t\lambda^{-1}\bar{Y}}{X}+16\bar{v}^2\right)\nonumber\\ &+&\frac{\bar{Y}^t\lambda^{-1}\bar{Y}}{X}G_X-4\bar{v}\dot{G}_X-8\bar{v}G_Y^t\bar{Y}-2\bar{Y}^t\lambda^{-1}\bar{\lambda}G_Y+\frac{\bar{Y}^t\lambda^{-1}G^t_{\lambda}\bar{Y}}{X}-\text{Tr}\left(\lambda^{-1}\bar{\lambda}\dot{G}^t_{\lambda}\right)-2\dot{G}_Y^t\bar{Y}\nonumber\\ &=&F(t) \end{eqnarray} where $G_X$, $G_Y$, and $G_{\lambda}$ are defined in \eqref{fieldeqns} and \begin{eqnarray} F(t)&=&\left(4\nabla\bar{v}+\frac{\bar{Y}^t\lambda^{-1}\nabla Y}{X}\right)^2+X\left(\dot{U}_2^t\lambda\dot{U}_2+\nabla U_1^t\lambda\nabla U_1\right)+\text{Tr}\bigg[\left(\nabla\left(\bar{\lambda}\lambda^{-1}\right)+\frac{\nabla Y\bar{Y}^t\lambda^{-1}}{X}\right)^2\bigg]\nonumber\\ \dot{G}_X&=&4\Delta_3\bar{v}+\frac{e^{-6v}}{\rho^4}\left\{2\nabla{\bar{Y}}^t\text{adj}\lambda'\nabla Y+\nabla{{Y}}^t\text{adj}\bar{\lambda'}\nabla Y-6\bar{v}\nabla{{Y}}^t\text{adj}\lambda'\nabla Y\right\}\nonumber\\ \dot{G}_{\lambda}&=&2\Delta_3\bar{v} I+\nabla\cdot\delta\left(\lambda ^{'-1}\nabla\lambda'\right)+\frac{e^{-6v}}{\rho^4}\left\{2\text{adj}\lambda'\nabla Y\cdot\nabla \bar{Y}^t+\text{adj}\bar{\lambda}'\nabla Y\cdot\nabla {Y}^t-6\bar{v}\text{adj}\lambda'\nabla Y\cdot\nabla{Y}^t\right\}\nonumber\\ \dot{G}_{Y}&=&\nabla\cdot \left(\frac{e^{-6v}}{\rho^4}\left\{\text{adj}\lambda'\nabla\bar{Y}+\text{adj}\bar{\lambda}'\nabla{Y}-6\bar{v}\text{adj}\lambda'\nabla{Y}\right\}\right) \end{eqnarray} The identity \eqref{Carteridentity} can be verified directly.
Assume $\varphi\in B$. Then, after a tedious calculation involving repeated integration by parts, we obtain the remarkable relation \begin{equation}\label{secondvariation} \int_{\mathbb{R}^3}\left(-4\bar{v}\dot{G}_X-\text{Tr}\left(\lambda^{-1}\bar{\lambda}\dot{G}_{\lambda}\right)-2\dot{G}_Y^t\bar{Y}\right)\text{d}\Sigma=16\mathcal{E}''_{\varphi}(t) \end{equation} Thus if $t=0$, the field equations $G_X(0)=G_{\lambda}(0)=G_Y(0)=0$ hold, and we have from \eqref{Carteridentity} (the integral over the divergence term vanishes by our boundary conditions) \begin{equation} \mathcal{E}''_{\varphi}(0)=\frac{1}{16}\int_{\mathbb{R}^3}F(0)\text{d}\Sigma\geq 0 \end{equation} where \begin{eqnarray} F(0)&=&\left(4\nabla\bar{v}+\frac{\bar{Y}^t\lambda^{-1}_0\nabla Y_0}{X_0}\right)^2+X_0\left(\dot{U}_2^t\lambda\dot{U}_2+\nabla U_1^t\lambda\nabla U_1\right)+\text{Tr}\bigg[\left(\nabla\left(\bar{\lambda}\lambda^{-1}_0\right)+\frac{\nabla Y_0\bar{Y}^t\lambda^{-1}_0}{X_0}\right)^2\bigg]\nonumber\\ &\geq& X_0\nabla U_1^t\lambda_0\nabla U_1\label{F1} \end{eqnarray} Now if $\mathcal{E}''_{\varphi}(0)=0$, then $F(0)=0$. Therefore, by inequality \eqref{F1} we have $\nabla U_1=0$; since $\varphi\in B$, it follows that $\bar{Y}=0$. Therefore, since $F(0)=0$ and $\bar{Y}=0$, we have $\bar{v}=0$ and $\bar{\lambda}=0$. This is, however, not sufficient to prove that the extreme data $u_0$ is a \emph{strict} local minimum. For this one needs a stronger positivity result on $\mathcal{E}''_{\varphi}(0)$ (see, for example, Theorem 40.B of \cite{zeidler1989nonlinear}), which we now demonstrate. First, we prove a coercivity condition required for $u_0$ to be a local minimum. We note the identity (this arises in the proof of \eqref{secondvariation}) \begin{eqnarray} \int_{\Omega_{\rho_0}}2\rho^{-2}\text{Tr}\left(\lambda'^{-1}\nabla \lambda'\text{adj}\bar{\lambda'}\nabla \bar{\lambda'}\right)\,\text{d}\Sigma=-\int_{\Omega_{\rho_0}}\left(\text{Tr}\left[\bar{\lambda}'\nabla\left(\lambda'^{-1}\right)\right]\right)^2\,\text{d}\Sigma\label{Crazyidentity} \end{eqnarray} \begin{lemma}\label{coercive} There exists $\mu>0$ such that for all $\varphi\in B$ we have \begin{equation} \mathcal{E}''_{\varphi}(0)\geq\mu\norm{\varphi}^2_{\mathcal{H}}\label{Coercivein} \end{equation} \end{lemma} \begin{proof} Let $\varphi\in B$. Note that $\mathcal{E}''_{\varphi}(0)$ defines a bilinear form \begin{equation} a(\varphi,\varphi)\equiv\mathcal{E}''_{\varphi}(0)=\int_{\mathbb{R}^3}F(0)\text{d}\Sigma \end{equation} as a function of $\varphi$. The inequality \eqref{Coercivein} is equivalent to the positivity of the infimum in the variational problem \begin{equation} \mu=\inf_{\varphi\in B,\norm{\varphi}^2_{\mathcal{H}}=1}a(\varphi,\varphi) \end{equation} Since $a(\varphi,\varphi)$ is positive definite, we have $\mu\geq 0$. Now we prove $\mu > 0$.
Assume $\mu=0$. Then there exists a sequence $\{\varphi_n\}$ such that \begin{equation} \norm{\varphi_n}^2_{\mathcal{H}}=1\qquad \text{for all $n$} \end{equation} and \begin{equation} \lim_{n\to\infty}a(\varphi_n,\varphi_n)=0 \end{equation} Then we have \begin{eqnarray} 0&=&\lim_{n\to\infty}a(\varphi_n,\varphi_n)=\lim_{n\to\infty}\int_{\mathbb{R}^3}F(0)\text{d}\Sigma\nonumber\\ &\geq&\lim_{n\to\infty} \int_{\Omega_{\rho_0}}X_0\nabla U_1^t\lambda_0\nabla U_1\text{d}\Sigma \geq C_1\lim_{n\to\infty}\int_{\Omega_{\rho_0}}\rho^3 \nabla U_1^t\nabla U_1\text{d}\Sigma\nonumber\\ & \geq& \frac{3C_1}{2}\lim_{n\to\infty}\int_{\Omega_{\rho_0}}\rho U_1^t U_1\text{d}\Sigma \geq\frac{3C_1C_3}{2C'}\lim_{n\to\infty}\int_{\Omega_{\rho_0}}\rho^{-4}\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n\text{d}\Sigma \end{eqnarray} In the first inequality we used \eqref{F1}. The second follows from parts 2 and 3 of Definition \ref{Def1}. The third inequality follows from Lemma \ref{Poincare}-(c), and the fourth from part 3 of Definition \ref{Def1}. Therefore, \begin{equation} \lim_{n\to\infty}\int_{\Omega_{\rho_0}}\rho^{-4}\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n\text{d}\Sigma=0\label{firstY} \end{equation} Next we establish some inequalities. First, rewrite $F(0)$ in the form \begin{eqnarray} F(0)&=&\left(4\nabla\bar{v}_n+\frac{\bar{Y}^t_n\lambda^{-1}_0\nabla Y_0}{X_0}\right)^2+2A_1^t\lambda_0 A_1+2A_2^t\lambda_0 A_2+\text{Tr}\bigg[\left(\nabla\left(\bar{\lambda}_n\lambda^{-1}_0\right)+\frac{\nabla Y_0\bar{Y}^t_n\lambda^{-1}_0}{X_0}\right)^2\bigg]\nonumber \end{eqnarray} where \begin{equation} A_1 = \frac{\sqrt{X_0}}{2} \left[B_I + B_{II} + B_{III}\right], \qquad A_2 = \frac{\sqrt{X_0}}{2} \left[B_{II} - B_I\right] \end{equation} and \begin{eqnarray} B_I &=& {\frac{\lambda^{-1}_0\nabla\lambda_0\lambda^{-1}_0\bar{Y}_n}{X_0}+\frac{\nabla{X}_0}{X^2_0}\lambda^{-1}_0\bar{Y}_n}, \qquad B_{II} = \frac{\lambda^{-1}_0\bar{\lambda}_n\lambda^{-1}_0\nabla{Y}_0}{X_0}+\frac{\bar{X}}{X^2_0}\lambda^{-1}_0\nabla Y_0 \nonumber \\ B_{III} &=& 2\frac{\lambda^{-1}_0\nabla\bar{Y}_n}{X_0} \; . \end{eqnarray} Then we have the following inequality \begin{equation} a(\varphi_n,\varphi_n)+\int_{\Omega_{\rho_0}}2B_I^t\lambda_0B_I\,\text{d}\Sigma\geq\int_{\Omega_{\rho_0}}\frac{1}{4}B_{III}^t\lambda_0B_{III} \,\text{d}\Sigma\label{ineq1} \end{equation} where $B_I$ can be written as \begin{equation} B_I=\frac{\lambda^{-1}_0}{\sqrt{X_0}}\left(\nabla\lambda_0\lambda^{-1}_0+\frac{\nabla{X_0}}{X_0}I_{2\times 2}\right)\bar{Y}_n=\frac{\lambda^{-1}_0}{\sqrt{X_0}}M\bar{Y}_n \;.
\end{equation} By part 4 of Definition \ref{Def1} we have \begin{equation} \abs{M}^2\leq 2\abs{\nabla\lambda_0\lambda^{-1}_0}^2+2\abs{\nabla\ln X_0}^2\leq C\rho^{-2} \end{equation} and we have \begin{eqnarray} \int_{\Omega_{\rho_0}}2B_I^t\lambda_0B_I\,\text{d}\Sigma&\leq&\int_{\Omega_{\rho_0}}\frac{2}{X_0}\abs{M}^2\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n\,\text{d}\Sigma\leq 2C\int_{\Omega_{\rho_0}}\rho^{-4}\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n\,\text{d}\Sigma\label{ineq2} \end{eqnarray} Then by inequalities \eqref{ineq1} and \eqref{ineq2} we have \begin{equation} a(\varphi_n,\varphi_n)+4\int_{\Omega_{\rho_0}}\rho^{-4}\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n\,\text{d}\Sigma\geq \frac{1}{4}\int_{\Omega_{\rho_0}}\rho^{-2}\nabla\bar{Y}^t_n\lambda^{-1}_0\nabla\bar{Y}_n\,\text{d}\Sigma \end{equation} Taking the limit of the above inequality and using \eqref{firstY}, we find \begin{equation} \lim_{n\to\infty}\int_{\Omega_{\rho_0}}\rho^{-2}\nabla\bar{Y}^t_n\lambda^{-1}_0\nabla\bar{Y}_n\text{d}\Sigma=0 \end{equation} Thus \begin{equation}\label{Ynvanish} \lim_{n\to\infty}\norm{\bar{Y}_n}_{\mathcal{H}_3}=0 \end{equation} Now we look at the first term in $F(0)$. We have \begin{equation} a(\varphi_n,\varphi_n)+\int_{\Omega_{\rho_0}}\left(\frac{\bar{Y}^t_n\lambda^{-1}_0\nabla Y_0}{X_0}\right)^2\,\text{d}\Sigma\geq 8\int_{\Omega_{\rho_0}}\left(\nabla\bar{v}_n\right)^2\,\text{d}\Sigma \label{ineq4} \end{equation} Since $\lambda_0$ is a positive definite symmetric matrix, it has a unique positive definite square root $\lambda^{1/2}_0$. Now if we set $u=\lambda_0^{-1/2}\bar{Y}_n$ and $w=\lambda_0^{-1/2}\nabla Y_0$, we have \begin{eqnarray} \int_{\Omega_{\rho_0}}\left(\frac{\bar{Y}^t_n\lambda^{-1}_0\nabla Y_0}{X_0}\right)^2\,\text{d}\Sigma &\leq&\int_{\Omega_{\rho_0}}\left(\frac{\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n}{X_0}\right)\left(\frac{\nabla{Y}^t_0\lambda^{-1}_0\nabla{Y}_0}{X_0}\right)\,\text{d}\Sigma \nonumber\\ &\leq&C\int_{\Omega_{\rho_0}}\rho^{-2}\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_nr^{-4}\,\text{d}\Sigma\nonumber\\ &\leq&C\int_{\Omega_{\rho_0}}\rho^{-4}\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n\,\text{d}\Sigma \label{ineq3} \end{eqnarray} The first inequality follows from the Cauchy--Schwarz inequality $u^tw\leq (u^tu)^{1/2}(w^tw)^{1/2}$. The second inequality is by parts 1 and 3 of Definition \ref{Def1}, and the third by the fact that $\rho\leq r^2$. Then by inequalities \eqref{ineq3} and \eqref{ineq4} we have \begin{equation} a(\varphi_n,\varphi_n)+C\int_{\Omega_{\rho_0}}\rho^{-4}\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n\,\text{d}\Sigma \geq 8\int_{\mathbb{R}^3}\left(\nabla\bar{v}_n\right)^2\,\text{d}\Sigma \geq 8\int_{\mathbb{R}^3}\left(\nabla\bar{v}_n\right)^2 r^{-2}\,\text{d}\Sigma\label{ineq5} \end{equation} where the last inequality is by Theorem 1.2-(i) of \cite{bartnik1986mass}. Taking the limit in inequality \eqref{ineq5} and noting that its left-hand side tends to zero, by \eqref{firstY} and $a(\varphi_n,\varphi_n)\to0$, we have \begin{equation} \lim_{n\to \infty}\int_{\mathbb{R}^3}\left(\nabla\bar{v}_n\right)^2 r^{-2}\,\text{d}\Sigma=0\label{vinfty} \end{equation} Thus by Lemma \ref{Poincare}-(a) we have \begin{equation}\label{vnvanish} \lim_{n\to \infty}\norm{\bar{v}_n}_{\mathcal{H}_1}=0 \end{equation} Now we consider the last term of $F(0)$.
We have the following inequality \begin{equation} a(\varphi_n,\varphi_n)+\int_{\Omega_{\rho_0}}\text{Tr}\bigg[\left(\frac{\nabla Y_0\bar{Y}^t_n\lambda^{-1}_0}{X_0}\right)^2\bigg]\,\text{d}\Sigma \geq \frac{1}{2}\int_{\Omega_{\rho_0}}\text{Tr}\bigg[\left(\nabla\left(\bar{\lambda}_n\lambda^{-1}_0\right)\right)^2\bigg]\,\text{d}\Sigma \label{ineq8} \end{equation} The integrand of the second term on the left hand side has vanishing determinant since $\det\left(\nabla Y_0\bar{Y}^t_n\lambda^{-1}_0\right)=\frac{\det\left(\nabla Y_0\bar{Y}^t_n\right)}{\rho^2}=0$. Thus by the matrix identity $\text{Tr}(A^2)=\left(\text{Tr}A\right)^2-2\det A$ for $2\times2$ matrices and inequality \eqref{ineq3} we have \begin{eqnarray} \int_{\Omega_{\rho_0}}\text{Tr}\bigg[\left(\frac{\nabla Y_0\bar{Y}^t_n\lambda^{-1}_0}{X_0}\right)^2\bigg]\,\text{d}\Sigma &\leq&C\int_{\Omega_{\rho_0}}\rho^{-4}\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n\,\text{d}\Sigma \label{ineq7} \end{eqnarray} By relation \eqref{lambdaandprime} the right hand side expands as \begin{eqnarray} \text{Tr}\bigg[\left(\nabla\left(\bar{\lambda}_n\lambda^{-1}_0\right)\right)^2\bigg] &=&2\left[\nabla\bar{v}_n\right]^2+\text{Tr}\left[\left(\nabla\bar{\lambda}'_n\lambda'^{-1}_0\right)^2\right]+\text{Tr}\left[\left(\bar{\lambda}'_n\nabla\left(\lambda'^{-1}_0\right)\right)^2\right]\nonumber\\&+&2\text{Tr}\left[\nabla\bar{\lambda}'_n\left(\frac{\text{adj}\bar{\lambda}'_n}{\rho^2}\right)\nabla\lambda'_0\lambda'^{-1}_0\right] \end{eqnarray} By integration we have \begin{eqnarray} \int_{\Omega_{\rho_0}}\text{Tr}\bigg[\left(\nabla\left(\bar{\lambda}_n\lambda^{-1}_0\right)\right)^2\bigg]\,\text{d}\Sigma&=&\int_{\mathbb{R}^3}2\left[\nabla\bar{v}_n\right]^2\,\text{d}\Sigma+\int_{\Omega_{\rho_0}}\text{Tr}\left[\left(\nabla\bar{\lambda}'_n\lambda'^{-1}_0\right)^2\right]\,\text{d}\Sigma \nonumber\\ &\geq&\int_{\mathbb{R}^3}2\left[\nabla\bar{v}_n\right]^2\,\text{d}\Sigma+C_1^2\int_{\Omega_{\rho_0}}\abs{\nabla\bar{\lambda}'_n}^2\rho^{-2}\,\text{d}\Sigma\qquad\label{ineq6} \end{eqnarray} The equality is by identity \eqref{Crazyidentity}; the inequality is by part 2 of Definition \ref{Def1}. Then by substitution of inequalities \eqref{ineq6} and \eqref{ineq7} in \eqref{ineq8} we have \begin{eqnarray} a(\varphi_n,\varphi_n)+C\int_{\Omega_{\rho_0}}\rho^{-4}\bar{Y}^t_n\lambda^{-1}_0\bar{Y}_n\,\text{d}\Sigma \geq \int_{\mathbb{R}^3}\left[\nabla\bar{v}_n\right]^2\,\text{d}\Sigma+\frac{C_1^2}{2}\int_{\Omega_{\rho_0}}\abs{\nabla\bar{\lambda}'_n}^2\rho^{-2}\,\text{d}\Sigma \end{eqnarray} Now if we take the limit on both sides of this inequality and use equation \eqref{vinfty}, we have \begin{equation} \lim_{n\to\infty}\int_{\Omega_{\rho_0}}\abs{\nabla\bar{\lambda}'_n}^2\rho^{-2}\,\text{d}\Sigma=0 \end{equation} Thus by Lemma \ref{Poincare}-(b) we have \begin{equation}\label{lambdavan} \lim_{n\to\infty}\norm{\bar{\lambda}'_n}_{\mathcal{H}_2}=0 \end{equation} Thus \eqref{Ynvanish}, \eqref{vnvanish} and \eqref{lambdavan} contradict the fact that $\norm{\varphi_n}_{\mathcal{H}}=1$. Hence $\mu>0$. \end{proof} \section{Proof of Theorem \ref{main theorem}}\label{proof} \begin{proof} The proof is straightforward and similar to the proof of Theorem 1 of \cite{dain2006proof} and Chapter 40-B of \cite{zeidler1989nonlinear}. \begin{enumerate}[(a)] \item We have proved in Lemma \ref{uniformlemma} that $\mathcal{E}''_{\varphi}(t)$ is $C^2$ with respect to $t$.
Also, by Taylor's theorem and the fact that $u_0$ is a critical point, so that $\mathcal{E}'_{\varphi}(0)=0$, we have \begin{equation} \mathcal{M}(u_0+\varphi)-\mathcal{M}(u_0)=\mathcal{E}_{\varphi}(1)-\mathcal{E}_{\varphi}(0)=\frac{\mathcal{E}''_{\varphi}(t)}{2}\quad \text{for some } 0<t<1 \end{equation} To prove this is positive we will show that $\mathcal{E}''_{\varphi}(t)\geq 0$ and that $\mathcal{E}''_{\varphi}(t)=0$ implies $\varphi=0$. By Lemma \ref{uniformlemma}-(b), $\mathcal{E}''_{\varphi}(t)$ is uniformly continuous; that is, for every $\epsilon>0$ there exists $\eta(\epsilon)>0$ such that the following inequality holds \begin{equation} \abs{\mathcal{E}''_{\varphi}(t)-\mathcal{E}''_{\varphi}(0)}\leq \epsilon \norm{\varphi}^2_{\mathcal{H}} \end{equation} for every $\norm{\varphi}_{\mathcal{H}}<\eta(\epsilon)$. From this inequality we have \begin{equation} \mathcal{E}''_{\varphi}(0)-\epsilon \norm{\varphi}^2_{\mathcal{H}}\leq \mathcal{E}''_{\varphi}(t) \end{equation} By Lemma \ref{coercive} we have \begin{equation} (\mu-\epsilon) \norm{\varphi}^2_{\mathcal{H}}\leq \mathcal{E}''_{\varphi}(t) \end{equation} Choosing $\epsilon$ with $0 < \epsilon<\mu$, together with the corresponding $\eta(\epsilon)$, the desired result follows. \item Let $u= u_0 + \varphi$ be the associated $t-\phi^i$ symmetric part of the initial data set $(\Sigma,h,K)$ as in the statement of Theorem \ref{main theorem}. It was proved in \cite{alaee2014mass} that the ADM mass of this data satisfies \begin{equation}\label{ADMineq} m\geq\mathcal{M}(u) = \mathcal{M}(u_0 + \varphi) \end{equation} Then by part (a) we have \begin{equation} \mathcal{M}(u_0+\varphi)> \mathcal{M}(u_0) \end{equation} for nonzero $\varphi$. Since $u_0$ is extreme data, there exists a function $f$ such that $\mathcal{M}(u_0)=f(J_1,J_2)$. Thus \begin{equation} m \geq f(J_1,J_2) \end{equation} Clearly, by definition, if the initial data is extreme, then $m=f(J_1,J_2)$. Conversely, suppose the mass $m$ of a given initial data set $(\Sigma,h,K)$ satisfies $m=f(J_1,J_2)=\mathcal{M}(u_0)$. Then part (a) forces $\varphi =0$, so $u= u_0$, and from \eqref{ADMineq} and Remark \ref{remark1} the initial data is extreme. Thus $m=f(J_1,J_2)$ if and only if the data belongs to the extreme class. \end{enumerate} \end{proof} \subsection*{Acknowledgments} We would like to thank S. Dain for comments on the use of the Carter identity in his article \cite{dain2006proof} and also for clarifying the relationship between spacetime Weyl and quasi-isotropic coordinates for non-extreme data. HKK also thanks James Lucietti for discussions concerning the uniqueness theorem for extreme black holes \cite{figueras2010uniqueness}. We also would like to thank the referees for suggesting a number of improvements. AA is partially supported by a graduate scholarship from Memorial University. HKK is supported by an NSERC Discovery Grant.
\section{Introduction} Ion transport is fundamental to a wide variety of biophysical processes and technological applications, e.g., transmembrane ion channels, electrochemical energy devices, and electrokinetics in microfluidics~\cite{IonChanel_HandbookCRC15, BazantDiffChg_PRE04, BazantReview_ACIS09}. Based on a mean-field approximation, the classical Poisson--Nernst--Planck (PNP) theory has been derived to describe ion dynamics in various scenarios. The diffusion and convection of ions under gradients of the electrostatic potential are modeled by the Nernst--Planck equations. In turn, the electrostatic potential is governed by the Poisson equation with charge density arising from mobile ions. Despite its success in many applications, the PNP theory is valid only for dilute solutions due to various underlying assumptions made in mean-field approximations~\cite{BazantReview_ACIS09, LiuJiXu_SIAP18}. For instance, it neglects ionic steric effects that play a crucial role in the description of concentrated electrolytes in confined environments, e.g., high ionic concentrations in ion channels. To address this issue, several versions of modified PNP theories with steric effects have been developed in the past few decades. One approach to account for steric effects is via the incorporation of the entropy of solvent molecules into the electrostatic free energy~\cite{BAO_PRL97, BazantSteric_PRE07, Li_Nonlinearity09, ZhouWangLi_PRE11, BZLu_BiophyJ11, LiLiuXuZhou_Nonliearity13, NuoZhouMcCammon_JPCB14, BZLu_JCP14, JiZhou_CMS19}. One salient feature of this type of model is a saturation concentration for compactly packed counterions in the vicinity of charged surfaces. Another strategy to include steric effects is to add an excess chemical potential, which can be given by the density functional theory~\cite{JZWu_JPCM14, GLin_Cicp14}, or by the Lennard-Jones potential for hard-sphere repulsions~\cite{HyonLiuBob_CMS10, BobHyonLiu_JCP10, LinBob_CMS14}. These models often give rise to integro-differential equations that are computationally intractable. To avoid nonlocal integral terms, local approximations of nonlocal integrals up to leading order terms have been proposed to obtain approximate local models~\cite{HyonLiuBob_CMS10, HorngLinLiuBob_JPCB12, LinBob_CMS14}. Nonetheless, the resulting model has been shown to be ill-posed for concentrated electrolytes within certain parameter regimes~\cite{LinBob_CMS14, Gavish_PhysD18}. To remedy this issue, concentration gradient energies that are higher order terms of the local approximations of nonlocal integrals can be added to regularize the solution~\cite{GavishYochelis_JPCLett16, GavishEladYochelis_JPCLett18, Gavish_PhysD18, GavishLiuEisenberg_JPCB18}. Such gradient energy terms are widely used in the Ginzburg--Landau theory for the description of phase separation in mixtures. A conserved $H^{-1}$ gradient flow of the Ginzburg--Landau functional gives rise to the Cahn--Hilliard (CH) equations~\cite{CH_JCP58}. An $H^{-1}$ gradient flow of the electrostatic free energy with additional concentration gradient energies leads to the following Poisson--Nernst--Planck--Cahn--Hilliard (PNPCH) equations with steric interactions: \begin{equation*}\left\{ \begin{aligned} &\frac{\partial c^{m}}{\partial t}=\epsilon^{m}\nabla\cdot\left[c^{m}\nabla \left(z^{m}\psi+\log c^{m}+\sum^{M}_{n=1}g^{mn}c^{n}-\sigma^{m}\Delta c^{m}\right)\right],\\ &-\nabla\cdot(\kappa\nabla\psi)=\sum^{M}_{m=1}z^{m}c^{m}+\rho^{f}, \end{aligned}\right.
\end{equation*} where $\psi$ is the electrostatic potential, $c^{m}$ is the ion concentration for the $m$th species, $z^{m}$ is the valence, $\rho^{f}$ is the fixed charge density, $\kappa$ and $\epsilon^{m}$ arise from nondimensionalization, $G=\left(g^{mn}\right)$ is the coefficient matrix for steric interactions, and $\sigma^{m}$ is a gradient energy coefficient; cf.~Section~\ref{s:PNPCH}. Equations of this type have been successfully applied to study ion permeation and selectivity in ion channels~\cite{GavishLiuEisenberg_JPCB18} and charge dynamics in room temperature ionic liquids and highly concentrated electrolytes~\cite{GavishYochelis_JPCLett16, GavishEladYochelis_JPCLett18}. We focus on the development of numerical methods for the PNPCH equations. Many efforts have been devoted to the development of numerical methods for PNP-type equations, ranging from finite difference schemes to discontinuous Galerkin (DG) methods~\cite{ProhlSchmuck09, LHMZ10, ZCW11, Gibou_JCP14, AFJKXL17, GaoHe_JSC17, DSWZhou_CICP18, DingWangZhou_NMTMA19}. In order to obtain physically faithful numerical solutions, it is highly desirable and crucial to preserve physical properties of the analytical solutions, such as mass conservation, free-energy dissipation, and positivity, at the discrete level. A finite difference scheme was developed for the PNP equations in $1$D~\cite{AMEKLL14}; it was proved that the scheme guarantees numerical positivity if the time step size satisfies certain constraints. An energy satisfying finite difference scheme based on a Slotboom transformation was proved to maintain discrete positivity under a constraint on the mesh ratio~\cite{LW14}. An arbitrary-order, free-energy-satisfying DG method was proposed to numerically solve the 1D PNP equations, with positivity of numerical solutions enforced by a delicately designed accuracy-preserving limiter~\cite{LW17}. A finite element method that can ensure positivity of numerical solutions was developed to solve both the PNP equations and the PNP equations coupled with the Navier--Stokes equations~\cite{MXL16}. A semi-implicit finite difference scheme that ensures positivity and discrete energy dissipating properties was established in~\cite{HuHuang_Sub2019}. Based on harmonic-mean approximations~\cite{QianWangZhou_JCP19}, a finite difference scheme that is proved to respect mass conservation and unconditional positivity preservation was proposed for PNP equations with steric effects~\cite{DingWangZhou_JCP19}. Estimates on the condition number of the coefficient matrix were established as well. To the best of our knowledge, numerical methods with the desired properties for the PNPCH equations are still missing. In this work, we first derive the PNPCH equations corresponding to an $H^{-1}$ gradient flow of a free-energy functional that includes electrostatic free energies, short-range steric interaction energies, the entropic contribution of ions, and concentration gradient energies. To numerically solve the PNPCH equations, we propose a novel energy stable numerical scheme that respects mass conservation and positivity at the discrete level. It is shown that the solution to the proposed nonlinear scheme corresponds to a unique minimizer of a convex functional over a closed, convex domain, establishing the existence and uniqueness of the solution.
The positivity of numerical solutions is further theoretically justified by making use of the singular nature of the entropy terms, which prevents the minimizer from approaching zero concentrations. It is noted that such an argument has been used to prove the positivity preservation of numerical methods for the Cahn--Hilliard equations and quantum diffusion equations~\cite{CWWW_JCP2019, dong19b, HuoLiu_Sub20}; see also the related analysis~\cite{duan19a, duan19b} of the energetic variational method in the particle coordinate approach. Further numerical analysis establishes discrete free-energy dissipation of the proposed scheme. We perform extensive numerical tests to demonstrate that the numerical scheme is first-order accurate in time and second-order accurate in space, and is capable of preserving the desired properties, including mass conservation, positivity, and free energy dissipation. Moreover, the PNPCH equations and the proposed scheme are applied to study multiple time scale dynamics and self-assembled nanopatterns in highly concentrated electrolytes. Such dynamics often take a long time to reach an equilibrium, highlighting the need for robust, energy stable numerical schemes that allow large time stepping. Numerical results demonstrate that the PNPCH equations and the proposed numerical scheme are able to capture nanostructures, such as lamellar patterns and labyrinthine patterns, as well as dynamics evolving on multiple time scales. In addition, we investigate the interplay between short-range cross steric interactions and the concentration gradient regularization, and their impact on the development of nanostructures in the equilibrium state. The rest of the paper is organized as follows. In Section~\ref{s:PNPCH} we derive the PNPCH equations from a free energy functional. In Section~\ref{s:NumericalScheme} we present the finite difference scheme. In Section~\ref{s:Properties} we prove main properties of the numerical scheme at the discrete level. Section~\ref{s:Numerics} is devoted to the numerical results. Finally, some concluding remarks are made in Section~\ref{s:Conclusions}. \section{The physical model}\label{s:PNPCH} We consider an electrolyte solution with $M$ ionic species, occupying a region $\Omega$ in $\mathbb{R}^{3}$. We denote by $c^{m}=c^{m}(t, \mathbf{x})$ $(m=1, \cdots, M)$ the local ionic concentration of the $m$th ionic species at position $\mathbf{x}\in\Omega$ and time $t$, and write $c=(c^{1}, c^{2},\dots, c^{M})^{T}$. For such a charged system, we consider the following free-energy functional of ionic concentrations: \begin{equation}\label{CHFE} \begin{aligned} F[c]=\int_{\Omega}\frac{1}{2}\rho\psi d\textbf{x}+\beta^{-1}\sum^{M}_{m=1}\int_{\Omega}c^{m}\left[\log(\Lambda^{3}c^{m})-1\right] d\textbf{x} +\int_{\Omega}\frac{1}{2}c^{T}Gc d\textbf{x}+\sum^{M}_{m=1}\int_{\Omega}\frac{\sigma^{m}}{2}|\nabla c^{m}|^{2} d\textbf{x}. \end{aligned} \end{equation} The first term represents the electrostatic energy. The total charge density $\rho$ is given by $$\rho=\sum^{M}_{m=1}q^{m}c^{m}+\rho^{f},$$ where $q^{m}=z^{m}e$, with $z^{m}$ being the valence of the $m$th ionic species and $e$ being the elementary charge, and the function $\rho^f \in C(\bar{\Omega}) $ represents the distribution of fixed charges.
The electrostatic potential $\psi$ is governed by the Poisson equation \begin{equation}\label{POIeq} \begin{aligned} -\nabla\cdot(\varepsilon_{0}\varepsilon_{r}\nabla\psi)=\rho \quad \mathrm{in}\quad\Omega, \end{aligned} \end{equation} where $\varepsilon_{0}$ is the vacuum dielectric permittivity and $\varepsilon_{r}\geq1$ is a spatially dependent dielectric coefficient function. The second term describes the entropic contribution of ions to the total free energy. The parameter $\beta=1/k_{B}T$ is the inverse of the thermal energy, with $k_{B}$ being the Boltzmann constant and $T$ being the absolute temperature. The constant $\Lambda$ is the thermal de Broglie wavelength. The ionic steric interaction energy is given in the third term, in which the $M\times{M}$ matrix $G=(g^{mn})$ is symmetric with non-negative entries. The entry $g^{mn}$ is related to the second-order virial coefficients of hard spheres, depending on the sizes of the $m$th and $n$th ionic species~\cite{ZhouJiangDoi_PRL17, DingWangZhou_JCP19}. Diagonal entries of $G$ describe self-steric interactions of ions of the same species, and off-diagonal entries correspond to short-range cross steric interactions between ions of different species. A similar model based on the leading-order local approximation of the Lennard-Jones interaction energies has been developed in the works~\cite{HyonLiuBob_CMS10, HorngLinLiuBob_JPCB12, LinBob_CMS14, LinBob_Nonlinearity15, Gavish_PhysD18, SWZ_CMS18, GavishLiuEisenberg_JPCB18}, while the interaction matrix $G$ in that model has a different interpretation. The fourth term, arising from the Cahn--Hilliard mixture theory~\cite{CH_JCP58}, describes a gradient energy that penalizes large concentration gradients. Here $\sigma^{m}$ are gradient energy coefficients. Since concentration gradient terms can be regarded as high-order approximations of the Lennard-Jones interaction energies, they have been recently introduced to the PNP theory to describe steric interactions in ionic liquids~\cite{GavishYochelis_JPCLett16, BGUYochelis_PRE17, GavishEladYochelis_JPCLett18} and ion channels~\cite{Gavish_PhysD18, GavishLiuEisenberg_JPCB18}. We shall derive governing equations based on the free-energy functional~\reff{CHFE}. Taking the first variation of $F[c]$ with respect to ion concentrations, we obtain the chemical potential $\mu^{m}$ of the $m$th species of ions: \begin{equation}\label{chemicalMU} \begin{aligned} \mu^{m}=\frac{\delta F[c]}{\delta c^{m}}=q^{m}\psi+\beta^{-1}\log(\Lambda^{3}c^{m})+\sum^{M}_{n=1}g^{mn}c^{n}-\sigma^{m}\Delta c^{m}. \end{aligned} \end{equation} The force balance between the thermodynamic force and the hydrodynamic drag gives rise to the velocity of the $m$th ion species $$\eta^{m}=-\beta D^{m}\nabla \mu^{m},$$ where $D^{m}$ is the diffusion constant. The time evolution of $c^{m}$ satisfies the conservation equation $$\frac{\partial c^{m}}{\partial t}=-\nabla \cdot (c^{m}\eta^{m}).$$ With the chemical potential~\reff{chemicalMU}, we obtain \[ \begin{aligned} &\frac{\partial c^{m}}{\partial t}=D^{m}\nabla\cdot\left\{\beta c^{m}\nabla\left[q^{m}\psi+\beta^{-1}\log(\Lambda^{3}c^{m})+\sum^{M}_{n=1}g^{mn}c^{n}-\sigma^{m}\Delta c^{m}\right]\right\}.
\end{aligned} \] Next, the following nondimensionalized variables are introduced~\cite{BazantSteric_PRE07, BazantDiffChg_PRE04} \[ \begin{aligned} &\tilde{\psi}=e\beta \psi, \quad \tilde{x}=\frac{x}{L}, \quad \tilde{c}^{m}=\frac{c^{m}}{c_{0}}, \quad \tilde{t}=\frac{tD_{0}}{L\lambda_{D}}, \quad \tilde{D}^{m}=\frac{D^{m}}{D_{0}}, \\ &\tilde{v}=\Lambda^{3}c_{0},\quad \tilde{\sigma}^{m}=\frac{\sigma^{m} c_{0}\beta}{L^{2}}, \quad \tilde{G}=c_{0}\beta G, \quad \tilde{\rho}^{f}=\frac{\rho^{f}}{ec_{0}}, \end{aligned} \] where $c_{0}$ is a characteristic concentration, $L$ is a macroscopic length scale, and $\lambda_{D}$ is the Debye length given by \[\begin{aligned} \lambda_{D}:=\sqrt{\frac{\varepsilon_{0}\varepsilon_{r}}{2e^{2}c_{0}\beta}}. \end{aligned} \] For simplicity, we omit the tildes and obtain the dimensionless Poisson--Nernst--Planck--Cahn--Hilliard (PNPCH) equations \begin{equation}\label{PNPCHE}\left\{ \begin{aligned} &\frac{\partial c^{m}}{\partial t}=\epsilon^{m}\nabla\cdot\left[c^{m}\nabla \left(z^{m}\psi+\log c^{m}+\sum^{M}_{n=1}g^{mn}c^{n}-\sigma^{m}\Delta c^{m}\right)\right],\\ &-\nabla\cdot(\kappa\nabla\psi)=\sum^{M}_{m=1}z^{m}c^{m}+\rho^{f}, \end{aligned}\right. \end{equation} where $\epsilon^{m}=\frac{\lambda_{D}}{L}D^{m}$ and $\kappa=\frac{2\lambda_{D}^{2}}{L^{2}}$. Since the dielectric coefficient function satisfies $\varepsilon_{r}\geq 1$, the coefficient $\kappa$ admits a lower bound $\kappa_0$, i.e., \begin{equation}\label{kappa_0} \kappa\geq \kappa_0:=\frac{\varepsilon_{0}}{e^{2}c_{0}L^2\beta}. \end{equation} After omitting the tildes over the dimensionless variables again, the free energy functional becomes \begin{equation}\label{NONF} \begin{aligned} F[c]=&\int_{\Omega}\frac{1}{2}\left(\sum^{M}_{m=1}z^{m}c^{m}+\rho^{f}\right)\psi d\textbf{x}+\sum^{M}_{m=1}\int_{\Omega}c^{m}\left[\log(vc^{m})-1\right] d\textbf{x}\\ &\quad+\int_{\Omega}\frac{1}{2}c^{T}Gc d\textbf{x}+\sum^{M}_{m=1}\int_{\Omega}\frac{\sigma^{m}}{2}\left|\nabla c^{m}\right|^{2} d\textbf{x}. \end{aligned} \end{equation} For simplicity, we assume that the domain $\Omega$ is a cuboid and consider periodic boundary conditions on the boundary $\partial \Omega$. Since $c^m$ represents the concentration of ions, it is reasonable to assume that $c^{m}(t,\textbf{x}) \geq 0$ for $\textbf{x}\in \Omega$ and $t>0$. By the periodic boundary conditions and the system~\reff{PNPCHE}, we have mass conservation of each species of ions in the sense that \[ \frac{d}{dt} \int_{\Omega} c^m(t, \textbf{x}) d\textbf{x} = \int_{\partial\Omega} c^{m} \epsilon^{m} \nabla \left(z^{m}\psi+\log c^{m}+\sum^{M}_{n=1}g^{mn}c^{n}-\sigma^{m}\Delta c^{m}\right) \cdot \textbf{n} dS =0, \] where $\textbf{n}$ is the unit outward normal vector on $\partial \Omega$. Thus, the initial condition, $$c^m(0, \textbf{x}) = c^{m}_{in}(\textbf{x}),$$ determines the total mass of the $m$th species of ions in the system. It is assumed that the initial data satisfy the neutrality condition \begin{equation}\label{NeuCond} \int_{\Omega} \rho^{f} d\textbf{x} + \sum^{M}_{m=1} \int_{\Omega} z^m c^{m}_{in}(\textbf{x}) d\textbf{x} =0, \end{equation} which is necessary for the solvability of the Poisson equation~\reff{POIeq} with periodic boundary conditions.
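For concreteness, the dimensionless parameters can be evaluated directly from physical constants. The following minimal Python sketch is our convenience script, not part of the model; the room-temperature value $T=300$ K and the rounded physical constants are assumptions, chosen to be consistent with the values quoted later in Section~\ref{s:Numerics} ($c_0=1$ M, $L=1$ nm, $\varepsilon_r=78$, $D^m=D_0$):
\begin{verbatim}
import math

eps0 = 8.854e-12        # vacuum permittivity, F/m
e    = 1.602e-19        # elementary charge, C
kB   = 1.381e-23        # Boltzmann constant, J/K
T    = 300.0            # temperature, K (assumed)
epsr = 78.0             # relative dielectric constant
c0   = 1e3 * 6.022e23   # characteristic concentration: 1 M in ions/m^3
L    = 1e-9             # macroscopic length scale, m

beta  = 1.0 / (kB * T)
lamD  = math.sqrt(eps0 * epsr / (2.0 * e**2 * c0 * beta))  # Debye length, m
kappa = 2.0 * lamD**2 / L**2
epsm  = lamD / L        # epsilon^m for D^m = D_0

print(lamD * 1e9, kappa, epsm)   # ~0.304 nm, ~0.185, ~0.304
\end{verbatim}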
We also consider the time evolution of the free energy \begin{align*} \frac{d}{dt}F&= \sum^{M}_{m=1}\int_\Omega \frac{\delta F}{\delta c^{m}}\frac{\partial c^{m}}{\partial t} d\textbf{x} \\ &= -\sum^{M}_{m=1}\int_\Omega \epsilon^{m}c^{m} \left| \nabla \left(z^{m}\psi+\log c^{m}+\sum^{M}_{n=1}g^{mn}c^{n}-\sigma^{m}\Delta c^{m}\right)\right|^{2}d\textbf{x}\leq0,~\forall t>0. \end{align*} In summary, we have the following physical properties for any solution to the PNPCH equations \reff{PNPCHE}: \begin{align} &\bullet~\text{Positivity: } \quad \text{If } c^{m}_{in}(\textbf{x})\geq0,~\text{then}~ c^{m}(t,\textbf{x})\geq0, ~\forall \textbf{x}\in \Omega,~\forall t>0 ; \hspace{5cm}\label{3properties1} \\ &\bullet~ \text{Mass Conservation: } \quad \int_{\Omega}c^{m}(t,\textbf{x})d\textbf{x}= \int_{\Omega}c^{m}_{in}(\textbf{x})d\textbf{x}; \label{3properties2}\\ &\bullet~ \text{Free-energy Dissipation: } \quad \frac{d}{dt}F \leq0, ~\forall t>0. \label{3properties3} \end{align} \section{The numerical scheme}\label{s:NumericalScheme} \subsection{Discretization preliminaries} For simplicity, we present our numerical scheme in $\mathbb{R}^{3}$ with $\Omega=[a, b]\times[a, b]\times[a, b]$. We cover $\Omega$ with grid points \[ \left\{x_i, y_j,z_{k}\right\}= \left\{a+\left(i-\frac{1}{2}\right)h, a+\left(j-\frac{1}{2}\right)h, a+\left(k-\frac{1}{2}\right)h\right\} ~\text{for}~~ i,j,k=1,\cdots, N, \] where $N$ is the number of grid points in each dimension and $h=\frac{b-a}{N}$ is a uniform spatial mesh step size. We briefly recall notations and operators for discrete functions from~\cite{SCJ09,ChenLiuWangWise_MathComp15, CWWW_JCP2019}. To facilitate the presentation, we introduce the following spaces of $3D$ periodic grid functions: \[ \begin{aligned} &\mathcal{C}_{\mathrm{per}}:=\{v| v_{i,j,k}=v_{i+\alpha N,j+\beta N, k+\gamma N}, \quad\forall i,j,k, \alpha, \beta, \gamma \in\mathbb{Z} \},\\ &\mathring{\mathcal{C}}_{\mathrm{per}}:=\{v \in \mathcal{C}_{\mathrm{per}}|\bar{v} = 0\}, \\ &\mathcal{E}^{x}_{\mathrm{per}}:= \{v| v_{i+\frac{1}{2},j,k}=v_{i+\frac12 +\alpha N,j+\beta N, k+\gamma N}, \quad\forall i,j,k, \alpha, \beta, \gamma \in\mathbb{Z} \}, \end{aligned} \] where $\bar{v}:=\frac{h^{3}}{|\Omega|}\sum^{N}_{i,j,k=1}v_{i,j,k}$ is the average of a grid function $v$. The spaces $\mathcal{E}^{y}_{\mathrm{per}}$ and $\mathcal{E}^{z}_{\mathrm{per}}$ are analogously defined. We also introduce the following discrete operators for grid functions: \[ \begin{aligned} &d_{x}v_{i,j,k}:=\frac{1}{h}(v_{i+\frac{1}{2},j,k}-v_{i-\frac{1}{2},j,k}), \quad D_{x}v_{i+\frac{1}{2},j,k}:=\frac{1}{h}(v_{i+1,j,k}-v_{i,j,k}),\\ &a_{x}v_{i,j,k}:=\frac{1}{2}(v_{i+\frac{1}{2},j,k}+v_{i-\frac{1}{2},j,k}),\quad A_{x}v_{i+\frac{1}{2},j,k}:=\frac{1}{2}(v_{i+1,j,k}+v_{i,j,k}) , \end{aligned} \] and the discrete operators $d_{y}$, $d_{z}$, $D_{y}$, $D_{z}$, $A_{y}$, and $A_{z}$ are similarly defined. The discrete gradient $\nabla_{h}$ is defined by $$\nabla_{h}v_{i,j,k}:=(D_{x}v_{i+\frac{1}{2},j,k},D_{y}v_{i,j+\frac{1}{2},k},D_{z}v_{i,j,k+\frac{1}{2}}),$$ and the discrete divergence $\nabla_{h}\cdot$ is given by $$\nabla_{h}\cdot\vec{f}_{i,j,k}:=d_{x}f^{x}_{i,j,k}+d_{y}f^{y}_{i,j,k}+d_{z}f^{z}_{i,j,k}, \quad \mbox{for $\vec{f}=(f^{x},f^{y},f^{z})$.} $$ The discrete Laplacian operator $\Delta_{h}$ is defined by \[ \begin{aligned} \Delta_{h}v_{i,j,k}:&=\nabla_{h}\cdot(\nabla_{h}v)_{i,j,k}=d_{x}(D_{x}v)_{i,j,k}+d_{y}(D_{y}v)_{i,j,k}+d_{z}(D_{z}v)_{i,j,k}.
\end{aligned} \] If $\mathcal{D}$ is a periodic scalar function, we introduce \[ \begin{aligned} \nabla_{h}\cdot(\mathcal{D} \nabla_{h}v)_{i,j,k}:=d_{x}(\mathcal{D} D_{x}v)_{i,j,k}+d_{y}(\mathcal{D} D_{y}v)_{i,j,k}+d_{z}(\mathcal{D} D_{z}v)_{i,j,k}. \end{aligned} \] We now define the following inner product and norms for grid functions: \[ \begin{aligned} &\langle\nu,\xi\rangle_{\Omega}:=h^{3}\sum^{N}_{i,j,k=1}\nu_{i,j,k}\xi_{i,j,k}, \quad \nu,\xi\in\mathcal{C}_{\mathrm{per}},\\ &[\nu,\xi]_{x}:=\langle a_{x}(\nu\xi),1\rangle_{\Omega}, \quad \nu,\xi\in\mathcal{E}^{x}_{\mathrm{per}}. \end{aligned} \] The products $[\nu,\xi]_{y}$ and $[\nu,\xi]_{z}$ are defined analogously. We then introduce \[ [\vec{f_{1}},\vec{f_{2}}]_{\Omega} :=[f^{x}_{1},f^{x}_{2}]_{x}+[f^{y}_{1},f^{y}_{2}]_{y}+[f^{z}_{1},f^{z}_{2}]_{z},\quad\vec{f_{i}}=(f^{x}_{i},f^{y}_{i},f^{z}_{i})\in\vec{\mathcal{E}}_{\mathrm{per}}, i=1,2. \] For $\nu\in\mathcal{C}_{\mathrm{per}}$, we define $\|\nu\|^{2}_{2}:=\langle\nu,\nu\rangle_{\Omega}$, $\|\nu\|^{p}_{p}:=\langle|\nu|^{p},1\rangle_{\Omega}$, for $1\leq p<\infty$, and $\|\nu\|_{\infty}:=\max_{1\leq i,j,k\leq N}|\nu_{i,j,k}|$. For $\nu\in\ \mathcal{C}_{\mathrm{per}}$ and $1\leq p<\infty$, we define the following discrete norms of a gradient \[ \begin{aligned} \|\nabla_{h}\nu\|^{p}_{p}:&=\|D_{x}\nu\|^{p}_{p}+\|D_{y}\nu\|^{p}_{p}+\|D_{z}\nu\|^{p}_{p}. \end{aligned} \] In addition, the higher order discrete norms are defined as \[ \begin{aligned} \|\nu\|^{2}_{H^{1}_{h}}:=\|\nu\|^{2}_{2}+\|\nabla_{h}\nu\|^{2}_{2}, \quad \|\nu\|^{2}_{H^{2}_{h}}:=\|\nu\|^{2}_{2}+\|\nabla_{h}\nu\|^{2}_{2}+\|\Delta_{h}\nu\|^{2}_{2}. \end{aligned} \] We now introduce a discrete analogue of the space $H^{-1}_{\mathrm{per}}(\Omega)$~\cite{WangWise2011, CWWW_JCP2019}. Consider a positive, periodic scalar function $\mathcal{D}$. For any $\phi\in\mathring{\mathcal{C}}_{\mathrm{per}}$, there exists a unique solution $\varphi\in\mathring{\mathcal{C}}_{\mathrm{per}}$ to the equation $$\mathcal{L}_{\mathcal{D}}\varphi:=-\nabla_{h}\cdot (\mathcal{D}\nabla_{h}\varphi)=\phi,$$ with periodic boundary conditions discretized as \begin{equation}\label{PBCs} \begin{aligned} \varphi_{N+1,j,k}&=\varphi_{1,j,k}, \quad \varphi_{0,j,k}=\varphi_{N,j,k}, \quad j,k=0, \cdots, N+1,\\ \varphi_{i,N+1,k}&=\varphi_{i,1,k}, \quad \varphi_{i,0,k}=\varphi_{i,N,k}, \quad i,k=0, \cdots, N+1,\\ \varphi_{i,j,N+1}&=\varphi_{i,j,1}, \quad \varphi_{i,j,0}=\varphi_{i,j,N}, \quad i,j=0, \cdots, N+1. \end{aligned} \end{equation} For any $\phi_{1}, \phi_{2}\in \mathring{\mathcal{C}}_{\mathrm{per}},$ we define an inner product $$\langle\phi_{1}, \phi_{2}\rangle_{\mathcal{L}^{-1}_{\mathcal{D}}}:=[\mathcal{D}\nabla_{h}\varphi_{1},\nabla_{h}\varphi_{2}]_{\Omega},$$ where $\varphi_{i}\in\mathring{\mathcal{C}}_{\mathrm{per}}$ is the unique solution to \begin{equation}\label{LOp} \begin{aligned} \mathcal{L}_{\mathcal{D}}\varphi_{i}=\phi_{i}, \quad i=1,2. \end{aligned} \end{equation} By the discrete summation-by-parts formula, we have the following identities for periodic grid functions $\phi_{i}$: $$\langle\phi_{1}, \phi_{2}\rangle_{\mathcal{L}^{-1}_{\mathcal{D}}}=\langle\phi_{1}, \mathcal{L}^{-1}_{\mathcal{D}}\phi_{2}\rangle_{\Omega}=\langle\mathcal{L}^{-1}_{\mathcal{D}}\phi_{1}, \phi_{2}\rangle_{\Omega}.$$ We denote the norm associated to this inner product by $\|\phi\|_{\mathcal{L}^{-1}_{\mathcal{D}}}:=\sqrt{\langle\phi, \phi\rangle_{\mathcal{L}^{-1}_{\mathcal{D}}}}$.
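For readers who wish to experiment with this discretization, the following is a minimal Python/\texttt{numpy} sketch of ours (not code from a released implementation) of the periodic operators $\nabla_{h}$, $\nabla_{h}\cdot$, and $\mathcal{L}_{\mathcal{D}}$ on a uniform $N^3$ grid, where edge values of $\mathcal{D}$ are obtained by the averaging operators $A_x$, $A_y$, $A_z$:
\begin{verbatim}
import numpy as np

def grad_h(v, h):
    # forward differences D_x, D_y, D_z; component a lives at the
    # edge midpoints (i+1/2, j, k), etc., with periodic wrap via np.roll
    return [(np.roll(v, -1, axis=a) - v) / h for a in range(3)]

def div_h(f, h):
    # backward differences d_x, d_y, d_z acting on edge fields
    return sum((fa - np.roll(fa, 1, axis=a)) / h for a, fa in enumerate(f))

def edge_avg(D):
    # A_x, A_y, A_z: average a cell-centered D to edge midpoints
    return [(np.roll(D, -1, axis=a) + D) / 2 for a in range(3)]

def L_D(v, D, h):
    # L_D v = -div_h(D grad_h v), with D averaged to edges
    De = edge_avg(D)
    return -div_h([De[a] * g for a, g in enumerate(grad_h(v, h))], h)
\end{verbatim}
With this convention, $\Delta_{h}v$ is recovered as \verb|-L_D(v, np.ones_like(v), h)|, and the solve $\mathcal{L}_{\mathcal{D}}\varphi=\phi$ on $\mathring{\mathcal{C}}_{\mathrm{per}}$ needed for the discrete $H^{-1}$ norm can be carried out, e.g., by a conjugate gradient iteration restricted to mean-zero grid functions.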
\subsection{The numerical scheme} We consider the discretization of the PNPCH equations~\reff{PNPCHE} on a time interval $[0, T]$ with $T>0$. For any function $v= v(x,y,z, t): \Omega \times [0, T] \rightarrow \mathbb{R}$, we denote by $v_{i,j,k}^l$ numerical approximations of $v(x_i, y_j, z_k, t_l)$ on a grid point $\left\{x_i, y_j, z_{k}\right\}$ at time $t_l=l \Delta t$, where $l=0, \cdots, N_t$ and $\Delta t = T/N_t$ with $N_t$ being a positive integer. We employ the idea of convex splitting and propose a semi-implicit discrete scheme for the PNPCH equations~\reff{PNPCHE}: \begin{equation}\label{MCS}\left\{ \begin{aligned} &\frac{c^{m,l+1}-c^{m,l}}{\Delta{t}}=\epsilon^{m}\nabla_{h}\cdot\left(\check{c}^{m,l}\nabla_{h}\mu^{m,l+1}\right), \\ &\mu^{m,l+1}= z^{m}\psi^{l+1}+\log c^{m,l+1}+\sum^{M}_{n=1}g_{c}^{mn}c^{n,l+1}-\sigma^{m}\Delta_h c^{m,l+1}-\sum^{M}_{n=1}g_{e}^{mn}c^{n,l}, \\ &-\nabla_{h}\cdot (\kappa \nabla_{h}\psi^{l+1})=\sum^{M}_{m=1}z^{m}c^{m,l+1}+\rho_h^{f}, \end{aligned}\right. \end{equation} where the mobilities are approximated by \[ \begin{aligned} &\check{c}^{m,l}_{i+\frac{1}{2},j,k}=A_{x}c^{m,l}_{i+\frac{1}{2},j,k}, \quad \check{c}^{m,l}_{i,j+\frac{1}{2},k}=A_{y}c^{m,l}_{i,j+\frac{1}{2},k}, \quad \check{c}^{m,l}_{i,j,k+\frac{1}{2}}=A_{z}c^{m,l}_{i,j,k+\frac{1}{2}}, \end{aligned} \] and $\rho^{f}_h$, the restriction of $\rho^{f}$ to the grid points, is assumed to satisfy \begin{equation}\label{NeuInit} \overline{\rho_h^f} + \sum_{m=1}^M z^m \overline{c^{m}_{in}} = 0. \end{equation} Here $G_{c}=\left(g^{mn}_{c}\right)$ and $G_{e}=\left(g^{mn}_{e}\right)$ are both positive semi-definite matrices such that $G=G_{c}-G_{e}$. Notice that the choice of $G_c$ and $G_e$ is not necessarily unique. In the numerical implementation, we choose the smallest non-negative $\lambda$ such that $G_{c}=\lambda I+G$ and $G_{e}=\lambda I$ are both positive semi-definite; see the short script in Section~\ref{s:Numerics} for an illustration of this choice. The free-energy functional \reff{NONF} is discretized as \begin{equation}\label{SdisFE} \begin{aligned} F_{h}[c^l]=&\frac{1}{2}\left \|\sum^{M}_{m=1}z^{m}c^{m,l}+\rho_h^{f}\right \|^{2}_{\mathcal{L}^{-1}_{\kappa}}+\sum^{M}_{m=1}\left \langle c^{m,l},\log(vc^{m,l})-1 \right \rangle_{\Omega}\\ &\quad +\frac{1}{2}\sum^{M}_{n=1}\sum^{M}_{m=1}g^{mn}\langle c^{m,l},c^{n,l}\rangle_{\Omega}+\sum^{M}_{m=1}\frac{\sigma^{m}}{2}\left\|\nabla_{h}c^{m,l}\right\|^{2}_{2}. \end{aligned} \end{equation} \section{The numerical properties}\label{s:Properties} In this section, we show that the proposed numerical scheme~\reff{MCS} is uniquely solvable and has the desired properties of preserving positivity, mass conservation, and unconditional energy stability at the discrete level. \subsection{Mass Conservation} \begin{theorem}\label{t:MassCon} The numerical concentration $c^{m,l}_{i,j,k}$ of the semi-implicit scheme~\reff{MCS} respects mass conservation, in the sense that the total concentration of each species remains constant in time, i.e., \begin{equation}\label{conserve2} h^3\sum^{N}_{i,j,k=1}c^{m,l+1}_{i,j,k}=h^3\sum^{N}_{i,j,k=1}c^{m,l}_{i,j,k}, \quad l=0, \cdots, N_t-1. \end{equation} \end{theorem} \begin{proof} Summing both sides of the numerical scheme for the concentrations over $i,j,k$ gives \begin{align*} h^{3}\sum_{i,j,k=1}^{N}c_{i,j,k}^{m,l+1} -h^{3}\sum_{i,j,k=1}^{N}c_{i,j,k}^{m,l} = \Delta t \epsilon^{m} h^{3}\sum_{i,j,k=1}^{N} \nabla_{h}\cdot\left(\check{c}^{m,l}\nabla_{h}\mu_{i,j,k}^{m,l+1}\right)=0, \end{align*} where we have used the periodic boundary conditions~\reff{PBCs} and discrete summation by parts in the last step.
This completes the proof of \reff{conserve2}. \end{proof} If there exists a solution to the numerical scheme~\reff{MCS}, Theorem~\ref{t:MassCon} indicates that \begin{equation}\label{AvgCon} \overline{c^{m,0}}= \overline{c^{m,1}} =\cdot\cdot\cdot =\overline{c^{m,N_t}}, \quad m = 1,2, \cdots, M. \end{equation} Therefore, the assumption~\reff{NeuInit} leads to a discrete version of the solvability condition: \[ \overline{\rho_h^f} + \sum_{m=1}^M z^m \overline{c^{m,l}} = 0, \quad l = 0, 1, \cdots, N_t. \] \subsection{Positivity preservation} To prove the positivity-preserving property of the proposed numerical scheme, we present the following lemma without giving its proof; cf.~\cite{CWWW_JCP2019}. \begin{lemma}\label{PoiAssum} Suppose that $\phi\in \mathring{\mathcal{C}}_{\mathrm{per}}$ and $\|\phi\|_{\infty}\leq C_1$. Then we have the following estimate: \begin{equation}\label{LEM} \begin{aligned} \|\mathcal{L}_{\mathcal{D}}^{-1}\phi\|_{\infty}\leq C_{3}:=C_{2}h^{-1/2}\mathcal{D}^{-1}_{0}, \end{aligned} \end{equation} where $\mathcal{D}_0$ is a positive lower bound of the coefficient function $\mathcal{D}(x)$, i.e., $\mathcal{D}\geq \mathcal{D}_0 > 0$, and $C_2$ depends only on $C_1$ and $\Omega$. \end{lemma} \begin{theorem}\label{POSitive} Assume that $c^{m,l}\in \mathcal{C}_{\mathrm{per}}$, and $c^{m,l}>0$ with $\|c^{m,l}\|_{\infty}\leq M_{1}$ for some $M_{1}>0$. Define $C^{m,l}_0:=\underset{1\leq i,j,k\leq N}{\operatorname{min}}c^{m,l}_{i,j,k}$, so that $c^{m,l} \geq C^{m,l}_0>0$. Then there exists a unique solution $c^{m,l+1}\in\mathcal{C}_{\mathrm{per}}$ to the nonlinear scheme \reff{MCS} with $c^{m,l+1}>0$ at a point-wise level. \end{theorem} \begin{proof} The numerical solution of the nonlinear scheme \reff{MCS} corresponds to the minimizer of the discrete energy functional \[ \begin{aligned} \mathcal{J}^{l}(u)=&\frac{1}{2\triangle{t}}\sum^{M}_{m=1}\|u^{m}-c^{m,l}\|^{2}_{\mathcal{L}^{-1}_{\check{c}^{m,l}}}+ \frac{1}{2}\|\sum^{M}_{m=1}z^{m}u^{m}+\rho_h^{f}\|^{2}_{\mathcal{L}^{-1}_{\kappa}} +\sum^{M}_{m=1}\left\langle u^{m},\log u^{m}-1\right\rangle_{\Omega}\\ &+\frac{1}{2}\sum^{M}_{n=1}\sum^{M}_{m=1}g^{mn}_{c}\langle u^{m}, u^{n}\rangle_{\Omega}+\sum^{M}_{m=1}\frac{\sigma^{m}}{2}\|\nabla_{h}u^{m}\|^{2}_{2}-\sum^{M}_{n=1}\sum^{M}_{m=1}g^{mn}_{e}\langle u^{m}, c^{n,l}\rangle_{\Omega}, \end{aligned} \] over the admissible set $$A_{h}:=\{u^{m}\in \mathcal{C}_{\mathrm{per}}| 0 < u^{m} < \theta^m, ~ \overline{u^{m}}=\overline{c^{m,0}}~ \text{for}~ 1\leq m \leq M \}\subset\mathbb{R}^{MN^{3}},$$ where $\theta^m=\frac{|\Omega| \overline{c^{m,0}}}{h^3}$. It is easy to verify that $\mathcal{J}^{l}$ is strictly convex over this domain. We shall prove that the minimizer of $\mathcal{J}^{l}$ exists in $A_{h}$ and is positive at each grid point. We first consider a closed subset $$A_{h, \delta}:=\{u^{m}\in \mathcal{C}_{\mathrm{per}}|\delta \leq u^{m} \leq \theta^m -\delta, ~ \overline{u^{m}}=\overline{c^{m,0}}~ \text{for}~ 1\leq m \leq M\},$$ where $\delta\in(0, \min_{1\leq m\leq M}\theta^m/2)$. Since $A_{h, \delta}$ is a compact, convex subset of $\mathcal{C}_{\mathrm{per}}$ and $\mathcal{J}^{l}$ is continuous and strictly convex on it, there exists a unique minimizer of $\mathcal{J}^{l}$ over $A_{h, \delta}$. Let the minimizer be $u^{\ast}=(u^{1, \ast}, u^{2, \ast}, \cdots, u^{M, \ast})$. When $\delta$ is sufficiently small, $u^{\ast}$ cannot reach the lower boundary of $A_{h, \delta}$. We shall prove this by contradiction.
Suppose that there exist a grid point $\vec{\alpha}_{0}=(i_{0}, j_{0}, k_{0})$ and an index $m_0$ such that $u^{m_0,\ast}$ achieves its global minimum at $\vec{\alpha}_{0}$ with $u^{m_0,\ast}_{\vec{\alpha}_{0}}=\delta$. Suppose that $\vec{\alpha}_{1}=(i_{1}, j_{1}, k_{1})$ is another grid point at which $u^{m_0,\ast}$ achieves its global maximum. Obviously, we have $$\overline{c^{m_0,0}} \leq u^{m_0,\ast}_{\vec{\alpha}_{1}}\leq \theta^{m_0}-\delta.$$ Since $\mathcal{J}^{l}$ is smooth over $A_{h}$ and $u^{\ast}\in A_{h, \delta}$, the following directional derivative is well defined for sufficiently small $s$: \[ \begin{aligned} &\underset{s\to 0+}{\operatorname{\lim}}\frac{\mathcal{J}^{l}(u^{\ast}+s\phi)- \mathcal{J}^{l}(u^{\ast})}{s} \\ &\quad=\frac{1}{\triangle{t}}\left\langle\mathcal{L}^{-1}_{\check{c}^{m_0,l}}\left(u^{m_0,\ast}-c^{m_0,l}\right), \phi^{m_0}\right\rangle_{\Omega}+\left\langle z^{m_0}\mathcal{L}^{-1}_{\kappa}\left(\sum^{M}_{m=1}z^{m}u^{m,\ast}+\rho_h^{f}\right), \phi^{m_0}\right\rangle_{\Omega}\\ &\quad-\sigma^{m_0}\left\langle\Delta_{h}u^{m_0,\ast}, \phi^{m_0}\right\rangle_{\Omega} +\langle \log u^{m_0,\ast}, \phi^{m_0}\rangle_{\Omega}+\sum^{M}_{n=1}g^{m_0n}_{c}\langle u^{n,\ast},\phi^{m_0}\rangle_{\Omega}-\sum^{M}_{n=1}g^{m_0n}_{e}\langle c^{n,l},\phi^{m_0}\rangle_{\Omega}, \end{aligned} \] where $\phi=(0, \cdots, \phi^{m_0}, \cdots, 0)$. If we choose the direction $$\phi_{i,j,k}^{m_0}=\delta_{i,i_{0}}\delta_{j,j_{0}}\delta_{k,k_{0}}-\delta_{i,i_{1}}\delta_{j,j_{1}}\delta_{k,k_{1}}\in \mathring{\mathcal{C}}_{\mathrm{per}},$$ then the directional derivative becomes \begin{equation}\label{DERVA} \begin{aligned} \frac{1}{h^{3}}\underset{s\to 0+}{\operatorname{\lim}}\frac{\mathcal{J}^{l}(u^{\ast}+s\phi)- \mathcal{J}^{l}(u^{\ast})}{s}&=\frac{1}{\triangle{t}}\mathcal{L}^{-1}_{\check{c}^{m_0,l}}(u^{m_0,\ast}-c^{m_0,l})_{\vec{\alpha}_{0}} -\frac{1}{\triangle{t}}\mathcal{L}^{-1}_{\check{c}^{m_0,l}}(u^{m_0,\ast}-c^{m_0,l})_{\vec{\alpha}_{1}}\\ &+z^{m_0}\mathcal{L}^{-1}_{\kappa}\left(\sum^{M}_{m=1}z^{m}u^{m,\ast}+\rho_h^{f}\right)_{\vec{\alpha}_{0}}-z^{m_0}\mathcal{L}^{-1}_{\kappa}\left(\sum^{M}_{m=1}z^{m}u^{m,\ast}+\rho_h^{f}\right)_{\vec{\alpha}_{1}}\\ &-\sigma^{m_0}\left(\Delta_{h}u^{m_0,\ast}_{\vec{\alpha}_{0}}-\Delta_{h}u^{m_0,\ast}_{\vec{\alpha}_{1}}\right) +\log u^{m_0,\ast}_{\vec{\alpha}_{0}}-\log u^{m_0,\ast}_{\vec{\alpha}_{1}}\\ &+\sum^{M}_{n=1}g^{m_0n}_{c}u^{n,\ast}_{\vec{\alpha}_{0}}-\sum^{M}_{n=1}g^{m_0n}_{c}u^{n,\ast}_{\vec{\alpha}_{1}}-\sum^{M}_{n=1}g^{m_0n}_{e}c^{n,l}_{\vec{\alpha}_{0}}+\sum^{M}_{n=1}g^{m_0n}_{e}c^{n,l}_{\vec{\alpha}_{1}}. \end{aligned} \end{equation} Since $u^{m_0,\ast}_{\vec{\alpha}_{0}}=\delta$ and $u^{m_0,\ast}_{\vec{\alpha}_{1}}\geq\overline{c^{m_0,0}}$, we have \begin{equation}\label{DER1} \begin{aligned} \log u^{m_0,\ast}_{\vec{\alpha}_{0}}-\log u^{m_0,\ast}_{\vec{\alpha}_{1}} \leq \log \delta-\log \overline{c^{m_0,0}}. \end{aligned} \end{equation} Since $u^{m_0,\ast}$ takes a minimum at the grid point $\vec{\alpha}_{0}$ and a maximum at the grid point $\vec{\alpha}_{1}$, we obtain \begin{equation}\label{DER2} \begin{aligned} \Delta_{h}u^{m_0,\ast}_{\vec{\alpha}_{0}}\geq 0,\quad \Delta_{h}u^{m_0,\ast}_{\vec{\alpha}_{1}}\leq 0. \end{aligned} \end{equation} It follows from $g^{mn}_{c}\geq0$, $g^{mn}_{e}\geq0$, $u^{n,\ast}>0$, and $c^{n,l}>0$ that \begin{equation}\label{DER3} -\sum^{M}_{n=1}g^{m_0n}_{c}u^{n,\ast}_{\vec{\alpha}_{1}}-\sum^{M}_{n=1}g^{m_0n}_{e}c^{n,l}_{\vec{\alpha}_{0}} \leq 0.
\end{equation} Since $u^{n,\ast}_{\vec{\alpha}_{0}}\leq \theta^n- \delta$, we have \begin{equation}\label{DER4} \sum^{M}_{n=1}g^{m_0n}_{c}u^{n,\ast}_{\vec{\alpha}_{0}} \leq \sum^{M}_{n=1}g^{m_0n}_{c} \left(\theta^n- \delta\right) \leq \sum^{M}_{n=1}g^{m_0n}_{c} \theta^n. \end{equation} Also, the \emph{a priori} assumption $\|c^{n,l}\|_{\infty}\leq M_{1}$ indicates that \begin{equation}\label{DER5} \begin{aligned} \sum^{M}_{n=1}g^{m_0n}_{e}c^{n,l}_{\vec{\alpha}_{1}} \leq M_{1} \sum^{M}_{n=1}g^{m_0n}_{e}. \end{aligned} \end{equation} Since $\check{c}^{m_0,l} \geq C_0^{m_0,l}>0$ and $\kappa \geq \kappa_0>0$ (cf.~\reff{kappa_0}), we have by applying Lemma \ref{PoiAssum} with $\mathcal{D}=\check{c}^{m_0,l}$ and $\mathcal{D}=\kappa$ that \begin{equation}\label{DER6} \begin{aligned} \mathcal{L}^{-1}_{\check{c}^{m_0,l}}(u^{m_0,\ast}-c^{m_0,l})_{\vec{\alpha}_{0}} - \mathcal{L}^{-1}_{\check{c}^{m_0,l}}(u^{m_0,\ast}-c^{m_0,l})_{\vec{\alpha}_{1}}\leq 2C_{3}^c, \end{aligned} \end{equation} and \begin{equation}\label{DER7} \begin{aligned} z^{m_0}\mathcal{L}^{-1}_{\kappa}\left(\sum^{M}_{m=1}z^{m}u^{m,\ast}+\rho_h^{f}\right)_{\vec{\alpha}_{0}}-z^{m_0}\mathcal{L}^{-1}_{\kappa}\left(\sum^{M}_{m=1}z^{m}u^{m,\ast}+\rho_h^{f}\right)_{\vec{\alpha}_{1}} \leq 2\left|z^{m_0}\right|C_{3}^{\kappa}, \end{aligned} \end{equation} respectively. The constant $C_{3}^c$ depends on $\theta^{m_0}$, $M_1$, $\Omega$, $h$, and $C^{m_0,l}_0$; the constant $C_{3}^{\kappa}$ depends on $\text{max}_{1\leq m \leq M} \{|z^m| \theta^{m}\}$, $\|\rho_h^f\|_{\infty}$, $\Omega$, $h$, and $\kappa_0$. Thus, a substitution of \reff{DER1}--\reff{DER7} into \reff{DERVA} leads to \begin{equation}\label{DERi} \begin{aligned} &\frac{1}{h^{3}}\underset{s\to 0+}{\operatorname{\lim}}\frac{\mathcal{J}^{l}(u^{\ast}+s\phi)- \mathcal{J}^{l}(u^{\ast})}{s} \\ &\qquad\leq \log\delta-\log\overline{c^{m_0,0}}+ \sum^{M}_{n=1}g^{m_0n}_{c} \theta^n + M_{1} \sum^{M}_{n=1}g^{m_0n}_{e} +2C_{3}^{c}\triangle{t}^{-1}+2\left|z^{m_0}\right|C_{3}^{\kappa}. \end{aligned} \end{equation} For any fixed $\triangle{t}$ and $h$, we may choose $\delta$ sufficiently small so that \begin{equation}\label{DER8} \begin{aligned} \log\delta-\log\overline{c^{m_0,0}}+ \sum^{M}_{n=1}g^{m_0n}_{c} \theta^n + M_{1} \sum^{M}_{n=1}g^{m_0n}_{e} +2C_{3}^{c}\triangle{t}^{-1}+2\left|z^{m_0}\right|C_{3}^{\kappa}<0. \end{aligned} \end{equation} That is, \begin{equation}\label{finaDER} \begin{aligned} \underset{s\to 0+}{\operatorname{\lim}}\frac{\mathcal{J}^{l}(u^{\ast}+s\phi)- \mathcal{J}^{l}(u^{\ast})}{s}<0. \end{aligned} \end{equation} This contradicts the assumption that $u^{\ast}$ is the minimizer of $\mathcal{J}^{l}$, since the direction $\phi$ we chose points into the interior of $A_{h,\delta}$. Moreover, since the total mass of each species is conserved, if $u^{m,\ast}$ approached the upper bound $\theta^m -\delta$ at one grid point, it would have to approach zero at many other grid points when $\delta$ is sufficiently small. Thus, we can analogously show that $u^{\ast}$ cannot reach the upper boundary of $A_{h,\delta}$. Therefore, when $\delta$ is sufficiently small, the global minimum of $\mathcal{J}^{l}$ over $A_{h,\delta}$ can only be achieved at an interior point of $A_{h,\delta}$, which is a subset of $A_{h}$. This establishes the existence of a positive numerical solution to the nonlinear scheme~\reff{MCS}.
In addition, the strict convexity of $\mathcal{J}^{l}$ over $A_{h}$ implies the uniqueness of the numerical solution. The proof of Theorem \ref{POSitive} is complete. \end{proof} \subsection{Unconditional energy stability} \begin{theorem} The semi-implicit discrete scheme~\reff{MCS} is energy stable, in the sense that \[ \begin{aligned} F_{h}(c^{l+1})\leq F_{h}(c^{l}). \end{aligned} \] \end{theorem} \begin{proof} Since $\mathcal{L}^{-1}_{\kappa}$ is symmetric and positive definite on the space $\mathring{\mathcal{C}}_{\mathrm{per}}$, we know that the term $\frac{1}{2}\left\|\sum^{M}_{m=1}z^{m}c^{m}+\rho_h^{f}\right\|^{2}_{\mathcal{L}^{-1}_{\kappa}}$ is convex with respect to $c^m$. A direct calculation reveals that the term \[ \sum^{M}_{m=1}\left \langle c^{m},\log(vc^{m})-1\right\rangle_{\Omega}+\sum^{M}_{m=1}\frac{\sigma^{m}}{2}\left \|\nabla_{h}c^{m}\right \|^{2}_{2} \] is convex as well. We know by the positive semi-definiteness of the matrices $G_c$ and $G_e$ that \[ \frac{1}{2}\sum^{M}_{n=1}\sum^{M}_{m=1}g^{mn}_{c}\left \langle c^{m},c^{n}\right \rangle_{\Omega}~~\text{and }~ \frac{1}{2}\sum^{M}_{n=1}\sum^{M}_{m=1}g^{mn}_{e}\left \langle c^{m},c^{n}\right\rangle_{\Omega} \, \, \, \mbox{are convex} . \] Therefore, by the convexity of these terms and mass conservation~\reff{AvgCon}, we arrive at \[ \begin{aligned} F_{h}(c^{l+1})-F_{h}(c^{l}) \leq&\sum^{M}_{m=1}\left \langle z^{m}\psi^{l+1}+\log c^{m,l+1}+\sum^{M}_{n=1}g^{mn}_{c}c^{n,l+1}-\sigma^{m}\Delta _{h}c^{m,l+1}\right.\\ &\left.\quad-\sum^{M}_{n=1}g^{mn}_{e} c^{n,l},c^{m,l+1}-c^{m,l}\right \rangle_{\Omega}\\ =&\sum^{M}_{m=1}\left \langle\mu^{m,l+1},\Delta{t}\epsilon^{m}\nabla_{h}\cdot\left(\check{c}^{m,l}\nabla_{h}\mu^{m,l+1}\right) \right \rangle_{\Omega}\\ =&- \sum^{M}_{m=1} \Delta{t} \epsilon^{m} \left [ \nabla_{h} \mu^{m,l+1}, \check{c}^{m,l}\nabla_{h}\mu^{m,l+1} \right ]_{\Omega} \leq 0, \end{aligned} \] in which the periodic boundary conditions~\reff{PBCs} and discrete summation by parts formulas have been used. This completes the proof. \end{proof} \section{Numerical examples}\label{s:Numerics} At each time step, we numerically solve the nonlinear difference system~\reff{MCS}, supplemented with periodic boundary conditions~\reff{PBCs}, using Newton's iteration. Newton's iterations, with ion concentrations, chemical potentials, and the electrostatic potential as unknowns, converge robustly within four iterations in our extensive numerical tests. For simplicity, we consider a periodic charged system consisting of concentrated binary monovalent electrolytes and fixed charges. Unless stated otherwise, we take the characteristic concentration $c_{0}=1$ M, characteristic length $L=1$ nm, characteristic diffusion constant $D_{0}=1$ nm$^{2}$/ns, and diffusion constants $D^{1}=D^{2}=1$ nm$^{2}$/ns for the two species of ions. We consider a uniform dielectric constant $\ve_r=78$, which prescribes $\lambda_D = 0.304$ nm, $\kappa = 0.185$, and $\epsilon^1=\epsilon^2=0.304$. \subsection{Accuracy test} We test the numerical convergence order of the proposed numerical scheme~\reff{MCS} in one dimension. To obtain a reference solution for comparison, we construct an exact solution \begin{equation}\label{exact1D} \left\{ \begin{aligned} &c^{1}=0.1e^{-t}\cos(\pi x)+0.2,\\ &c^{2}=0.1e^{-t}\cos(\pi x)+0.2,\\ &\psi=e^{-t}\cos(\pi x), \end{aligned}\right.
\end{equation} to the PNPCH equations \begin{equation}\label{DCH} \left\{ \begin{aligned} &\frac{\partial c^{m}}{\partial t}=\epsilon^{m}\partial_x \left[c^{m}\partial_x \left(z^{m}\psi+\log c^{m}+\sum^{M}_{n=1}g^{mn}c^{n}-\sigma^{m}\Delta c^{m}\right)\right] + f_m, \quad m =1, 2, \\ &- \partial_x (\kappa \partial_x \psi)=\sum^{2}_{m=1}z^{m}c^{m}+\rho^{f}, \end{aligned}\right. \end{equation} with periodic boundary conditions. Here the source terms $f_{1}$, $f_{2}$, and $\rho^{f}$, and the initial conditions are determined by the known exact solution~\reff{exact1D}. We choose the computational domain $\Omega=[-1, 1]$, and take the steric interaction coefficient matrix $ G=\left( \begin{matrix} 3.6 & 2.6\\ 2.6 & 0.2 \\ \end{matrix} \right) $ and gradient energy coefficients $\sigma^{1}=\sigma^{2}=0.01.$ Note that the matrix $G$ is not positive semi-definite and therefore the corresponding free energy functional~\reff{NONF} is nonconvex. \begin{table}[htbp] \centering \begin{tabular}{ccccccc} \hline \hline $N$ & $\ell^\infty$ error in $c^1$ & Order & $\ell^\infty$ error in $c^2$ & Order & $\ell^\infty$ error in $\psi$ & Order\\ \hline 100 & 3.98e-5 & - & 3.94e-5 & - & 6.57e-4 & -\\ 200 & 9.97e-6 & 1.9963 & 9.87e-6 & 1.9969 & 1.64e-4 & 2.0003\\ 400 & 2.50e-6 & 1.9985 & 2.47e-6 & 1.9992 & 4.11e-5 & 2.0001\\ 800 & 6.24e-7 & 1.9993 & 6.17e-7 & 1.9998 & 1.03e-5 & 2.0001\\ \hline \hline \end{tabular} \caption{The $\ell^\infty$ error and convergence order for numerical solutions of $c^1$, $c^2$, and $\psi$ at $T=0.0016$ with a mesh ratio $\Delta t=h^2$. } \label{order1d} \end{table} We test the numerical accuracy of the proposed scheme~\reff{MCS} on various meshes, in comparison with the exact solution~\reff{exact1D}. Notice that the mesh ratio, $\Delta t=h^{2}$, is chosen for the purpose of the accuracy test, rather than out of concern for numerical stability. Table~\ref{order1d} lists the $\ell^\infty$ error and convergence order for the numerical solutions of ion concentrations and the electrostatic potential at time $T=0.0016$. It is observed that the numerical error decreases as the mesh is refined, and that the convergence orders for the ion concentrations and the potential are both around $2$, as expected; with the mesh ratio $\Delta t=h^{2}$, first-order accuracy in time manifests as second-order convergence in $h$. This confirms that the proposed numerical scheme~\reff{MCS} is second-order accurate in space and first-order accurate in time. We also test the numerical accuracy of the proposed scheme~\reff{MCS} in two dimensions. We take $\Omega=[-4, 4] \times [-4, 4]$ and the steric interaction coefficient matrix $ G=\left( \begin{matrix} 2 & 1\\ 1 & 2 \\ \end{matrix} \right).$ The following exact solution is constructed \begin{equation}\label{exact2D} \left\{ \begin{aligned} &c^{1}=0.1e^{-20t}\cos(\pi x/4)\sin(\pi y/4)+1,\\ &c^{2}=0.1e^{-20t}\cos(\pi x/4)\sin(\pi y/4)+1,\\ &\psi=e^{-20t}\cos(\pi x)\sin(\pi y/4), \end{aligned}\right. \end{equation} for the two-dimensional analogue of the equations~\reff{DCH} with periodic boundary conditions. Again, the corresponding source terms and the initial conditions are determined by the known exact solution~\reff{exact2D}.
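Two small computations from this section are straightforward to reproduce. The Python sketch below is ours (it assumes \texttt{numpy}); it first determines the convex-splitting parameter $\lambda$ of Section~\ref{s:NumericalScheme} for the indefinite matrix $G$ of the 1D test, and then recovers the observed convergence orders from the tabulated $\ell^\infty$ errors:
\begin{verbatim}
import numpy as np

def convex_splitting(G):
    # smallest lambda >= 0 such that G_c = lambda*I + G and
    # G_e = lambda*I are both positive semi-definite
    lam = max(0.0, -np.linalg.eigvalsh(G).min())
    I = np.eye(G.shape[0])
    return lam * I + G, lam * I

G1d = np.array([[3.6, 2.6], [2.6, 0.2]])   # 1D accuracy test (indefinite)
Gc, Ge = convex_splitting(G1d)             # here lambda is about 1.21

def observed_order(N, err):
    # p_k = log(err_{k-1}/err_k) / log(N_k/N_{k-1})
    N, err = np.asarray(N, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(N[1:] / N[:-1])

# l^infty errors in c^1 from the 1D table; prints orders close to 2
print(observed_order([100, 200, 400, 800],
                     [3.98e-5, 9.97e-6, 2.50e-6, 6.24e-7]))
\end{verbatim}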
\begin{table}[htbp] \centering \begin{tabular}{ccccccc} \hline \hline $N$ & $\ell^\infty$ error in $c^1$ & Order & $\ell^\infty$ error in $c^2$ & Order & $\ell^\infty$ error in $\psi$ & Order\\ \hline 20 & 3.39e-1 & - & 3.39e-1 & - & 1.24e-1 & -\\ 40 & 8.38e-2 & 2.0158 & 8.38e-2 & 2.0158 & 2.78e-2 & 2.1549\\ 60 & 3.70e-2 & 2.0162 & 3.70e-2 & 2.0162 & 1.21e-2 & 2.0515 \\ 80 & 2.07e-2 & 2.0188 & 2.07e-2 & 2.0188 & 6.80e-3 & 2.0032\\ \hline \hline \end{tabular} \caption{The $\ell^\infty$ error and convergence order for numerical solutions of $c^1$, $c^2$, and $\psi$ at $T=0.16$ with a mesh ratio $\Delta t=h^2$.} \label{order2d} \end{table} Similarly, we carry out computations on various meshes with $\Delta t=h^2$ and compare with the exact solution~\reff{exact2D}. As shown in Table~\ref{order2d}, the numerical solutions of ion concentrations and the potential both converge to the exact solution with a convergence rate around $2$, indicating the anticipated accuracy order of the proposed numerical scheme~\reff{MCS} in the $2$D case. \subsection{Properties tests} \begin{figure}[htbp] \centering \includegraphics[scale=0.52]{1Dc1} \includegraphics[scale=0.52]{1Dpsi} \caption{Time evolution of numerical solutions of the cation concentration ($c^1$) and the potential $\psi$.} \label{c1c2psi} \end{figure} We now test the performance of the proposed scheme in preserving the desired properties, including positivity, mass conservation, and energy dissipation, at the discrete level. We consider the equations~\reff{PNPCHE} on $\Omega=[-1,1]$ with periodic boundary conditions and initial data \[ c^1(x, 0)=c^2(x, 0)=1. \] The steric interaction coefficient matrix is taken as $ G=\left( \begin{matrix} 3.6 & 2.6\\ 2.6 & 0.2 \\ \end{matrix} \right) $, and the charge distribution function is given by $$\rho^{f}(x)=5\left[e^{-5(x-\frac12)^2}-e^{-5(x+\frac12)^2}\right],$$ which describes negative and positive fixed charges distributed at $x=-\frac12$ and $x=\frac12$, respectively. In the numerical simulations, we set the total grid number $N=100$ and a mesh ratio $\Delta t = h$. Figure~\ref{c1c2psi} displays snapshots of the cation concentration $c^{1}$ and the potential $\psi$ at different times. Since cations and anions are distributed uniformly on $\Omega$ at $T=0$, the electrostatic potential $\psi$ is determined by the fixed charges, with maximum and minimum values at $x=\frac12$ and $x=-\frac12$, respectively. As time evolves, the cations are attracted by the negative fixed charges and repelled by the positive fixed charges, leading to sinusoidal profiles. Accordingly, the electrostatic potential gets screened by ion accumulation. We observe that the profiles at $T=1$ are almost identical to those at $T=0.7$, which implies that the charges in the system have reached a steady state. \begin{figure}[H] \centering {\includegraphics[scale=0.7]{1Dminc}} \caption{Upper: Time evolution of the free energy and total mass. Lower: Time evolution of the minimum concentration value of both cations and anions. The inset is a zoomed-in plot for a time interval $[0.4, 0.6].$} \label{Eminc1d} \end{figure} We now check the structure-preserving properties of the proposed scheme. As shown in the upper plot of Figure~\ref{Eminc1d}, the total mass of ions, represented by the dashed line, stays constant over time. Also, the free energy decays monotonically, indicating that our numerical scheme is energy stable. As ions are repelled by fixed charges of the same sign, the local ionic concentrations become very low.
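The three diagnostics shown in Figure~\ref{Eminc1d} can be tracked with a few lines of post-processing at every time step. The Python sketch below is illustrative only; the array names and the discrete free-energy routine \texttt{F\_h} are hypothetical placeholders, not the authors' implementation.
\begin{verbatim}
import numpy as np

def diagnostics(c1, c2, h, F_h):
    """Record the discrete invariants at one time step.

    c1, c2 : concentration arrays on the uniform periodic grid
    h      : mesh size, so that h * sum(c) approximates the total mass
    F_h    : callable returning the discrete free energy F_h(c1, c2)
    """
    mass = h * (np.sum(c1) + np.sum(c2))   # should remain constant
    cmin = min(c1.min(), c2.min())         # should remain positive
    energy = F_h(c1, c2)                   # should decay monotonically
    return mass, cmin, energy
\end{verbatim}
Recording these three numbers along the evolution yields curves like those in Figure~\ref{Eminc1d}: a flat mass curve, a monotonically decaying free energy, and a strictly positive minimum concentration.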
It is of physical interest to check the positivity of the numerical solutions of concentrations. The lower plot of Figure~\ref{Eminc1d} displays the evolution of the minimum values of concentrations of both cation and anion against time, and the inset presents a zoomed-in plot for a time interval $[0.4, 0.6]$. It is demonstrated that, although the concentrations can be very low due to electrostatic repulsion, the numerical solutions of ionic concentrations remain positive at all times. \subsection{Applications} We now apply the PNPCH equations and the corresponding numerical method to study the spatial ionic arrangement and charge dynamics of concentrated electrolytes, which have been widely used in various applications, such as electrochemical energy devices. Salient features of concentrated electrolytes include crowding and charge layering in electric double layers, multiple time scale dynamics, and self-assembled nanostructures both in the bulk and in electric double layers (EDLs)~\cite{GavishYochelis_JPCLett16, BGUYochelis_PRE17, GavishEladYochelis_JPCLett18}. The PNPCH equations, which combine the effect of phase separation with electrodiffusion, are used to investigate these features. \begin{figure}[htbp] \centering \includegraphics[scale=0.95]{2dTdiff} \caption{Snapshots of the evolution of cation concentrations starting from random initial data with $\sigma= 0.05$ and $\sigma= 0.005$.} \label{2dTdiff} \end{figure} We consider the equations~\reff{PNPCHE} on $\Omega=[-3,3]\times[-3,3]$ with periodic boundary conditions and random initial data. The distribution of fixed charges is given by \[ \begin{aligned} \rho^{f}(x,y)= -\frac12\chi_{\{x=-\frac{3}{2}\}} + \frac12\chi_{\{x=\frac{3}{2}\}}, \end{aligned} \] where $\chi_{A}$ is the characteristic function over a set $A$. Such a distribution describes negative and positive charges distributed along the lines $x=-\frac{3}{2}$ and $x=\frac{3}{2}$, respectively. We take gradient energy coefficients $\sigma^{1}=\sigma^{2}=\sigma$ and the steric interaction coefficient matrix $ G=\left( \begin{matrix} 1 & g^{12}\\ g^{21} & 1 \\ \end{matrix} \right),$ where the off-diagonal elements are $g^{12}= g^{21} = 15$. Figure~\ref{2dTdiff} presents several snapshots of the evolution of cation concentrations with $\sigma= 0.05$ and $\sigma= 0.005$ at $T=0,$ $T=0.5,$ $T=2$, and $T=6$. Starting from a random initial distribution, the cations move quickly following the electrostatic potential mainly generated by the fixed charges. For $\sigma= 0.05$, we observe at $T=2$ that cations crowd and further accumulate in the vicinity of the negative fixed charges, with the emergence of oscillations in the diffuse layers of the EDLs. This is reminiscent of the overscreening structure studied in the work~\cite{BSK:PRL:2011}. As time further evolves, the cation concentrations show periodic lamellar patterns not only in the EDLs but also in the bulk. For $\sigma= 0.005$, in contrast, cations begin to develop a labyrinthine type of structure in the bulk at $T=2$, but lamellar structures are still favored near fixed charges. As time evolves, the patterns become clearer and clearer, leading to a totally different structure in comparison with that of $\sigma= 0.05$. The pronounced difference is ascribed to the gradient energy coefficients that penalize large concentration gradients. Smaller gradient energy coefficients allow more concentration oscillations.
Comparing snapshots at different time instants, we find that the electrostatic interactions dominate the ion migration in the early stage, and the effect of phase separation comes into play later in the development of patterns in the bulk. \begin{figure}[htbp] \centering {\includegraphics[scale=0.7]{MFg15}} \caption{The evolution of free energy with $\sigma= 0.05$ and $\sigma= 0.005$.} \label{Fmass2d} \end{figure} As shown in Figure~\ref{2dTdiff}, the mechanisms of phase separation and electrodiffusion take effect at different stages of the pattern formation, indicating the emergence of multiple time scale charge dynamics. To further understand the charge dynamics, we show in Figure~\ref{Fmass2d} the evolution of the free energy of the system. A multi-phase free-energy dissipation can be clearly observed for both $\sigma= 0.05$ and $\sigma= 0.005$, reminiscent of metastability phenomena. In the first stage, the free energy decays sharply on a fast time scale, corresponding to the phase of electrodiffusion, and quickly reaches a metastable state characterized by a plateau in the free energy. In the second stage, the free energy decays on a relatively longer time scale, corresponding to the formation of patterns in the bulk and EDLs, and eventually reaches an equilibrium. Comparing the results with different $\sigma$ values, we also observe that, with larger gradient energy coefficients, the metastable state lasts longer and the time scale in the second stage of free-energy dissipation is larger. Such multi-phase free-energy dissipation with metastability often takes a long time to reach an equilibrium. Efficient numerical simulations of such dynamics require robust, energy stable numerical schemes that allow large time stepping. Our numerical results demonstrate that the proposed energy stable numerical scheme is capable of effectively capturing such multi-phase dynamics. \begin{figure}[htbp] \centering \includegraphics[scale=0.9]{2dT6} \caption{Equilibrium states of cation concentrations with various combinations of $g^{12}$ and $\sigma$, starting from the same random initial data.} \label{2dSteady} \end{figure} We next study the interplay between the off-diagonal elements in the steric interaction coefficient matrix ($g^{12}$) and the gradient energy coefficient ($\sigma$), and their impact on the development of nanostructures in the equilibrium state. Note that the off-diagonal elements describe short-range cross interactions. Figure~\ref{2dSteady} plots equilibrium states of cation concentrations with various combinations of $g^{12}$ and $\sigma$, starting from the same random initial condition. From the upper row of plots, we find that, with a relatively small off-diagonal element $g^{12}=5$, only EDLs in the vicinity of fixed charges develop in the equilibrium states. With a larger off-diagonal element $g^{12}=15$, we can see rich self-assembled patterns, such as lamellar stripes for a strong gradient energy coefficient $\sigma=0.05$ and labyrinthine patterns for a weak gradient energy coefficient $\sigma=0.005$. Mixed patterns with lamellar EDLs and labyrinthine structures in the bulk are also present for an intermediate value of $\sigma=0.01$. Comparison of the two rows of plots reveals that strong short-range cross interactions are necessary for the formation of self-assembled nanostructures in the bulk. Comparison of the three columns of plots indicates that more complex nanostructures develop with weaker concentration-gradient regularization.
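Returning to the free-energy histories of Figure~\ref{Fmass2d}, the metastable plateaus can be located automatically by thresholding the discrete dissipation rate. The Python sketch below is an illustrative post-processing aid; the recorded energy array \texttt{F} and the tolerance are assumptions, not part of the scheme itself.
\begin{verbatim}
import numpy as np

def plateau_times(F, dt, tol=1e-6):
    """Return the times at which the free energy is nearly flat.

    F   : recorded free-energy values, F[l] = F_h(c^l)
    dt  : (uniform) time-step size
    tol : threshold on the discrete dissipation rate |dF/dt|
    """
    rate = np.abs(np.diff(F)) / dt     # discrete dissipation rate
    return np.flatnonzero(rate < tol) * dt
\end{verbatim}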
\section{Concluding remarks}\label{s:Conclusions} In this work, we have derived the PNPCH equations based on a free-energy functional that includes electrostatic free energies, entropic contributions of ions, steric interactions, and concentration gradient energies. Numerical studies of the PNPCH equations, especially structure-preserving ones, have been missing in the literature. We have proposed a novel energy stable, semi-implicit numerical scheme that guarantees mass conservation and positivity at the discrete level. Detailed analysis has revealed that the solution to the proposed nonlinear scheme corresponds to a unique minimizer of a convex functional over a closed, convex domain, establishing the existence and uniqueness of the solution. The positivity of numerical solutions has been rigorously proved via an argument on the singularity of the entropy terms at zero concentrations. Discrete free-energy dissipation has been established as well. Numerical tests on convergence rates have demonstrated that the proposed numerical scheme is first-order accurate in time and second-order accurate in space. Numerical simulations have also verified the capability of the numerical scheme in preserving the desired properties, e.g., mass conservation, positivity, and free energy dissipation. Moreover, we have applied the PNPCH equations and the proposed scheme to investigate charge dynamics and ionic arrangement, such as self-assembled nanostructures, in highly concentrated electrolytes. In numerical simulations, we have found that there are multiple relaxation processes with distinct time scales, and that metastable states appear in the relaxation to equilibrium. Efficient simulations of such dynamics require robust, energy stable numerical schemes that allow large time stepping. Our numerical results have demonstrated that the proposed numerical scheme is able to capture lamellar patterns and labyrinthine patterns in electric double layers and the bulk, as well as multiple time scale dynamics with intermediate metastable states. In addition, we have probed the interplay between cross steric interactions and the concentration gradient regularization, and their profound influence on the pattern formation in the equilibrium state. \section*{Acknowledgments} Y. Qian and S. Zhou were supported by the National Natural Science Foundation of China (21773165) and the National Key R\&D Program of China (No.~2018YFB0204404). C. Wang was supported by NSF under DMS-1418689. \bibliographystyle{plain}
\section{Introduction} The availability of large-scale data sets and computational tools makes it easier than ever to quantitatively probe science at different scales, from papers \citep{wang2013quantifying, hu2020describing, chen2021exploring, mukherjee2017nearly} to individual scientists \citep{way2017misleading, liu2018hot, bu2018understanding, zhang2017identifying}, and from research teams \citep{wu2019large, ma2020mentorship, alshebli2018preeminence} to institutions and nations \citep{king2004scientific, liu2020dominance, zhao2020investigation, zuo2018more, chen2020rank, huang2020comparison}. As the development of science is driven by scientists' involvements in different research topics, it is crucial to understand how they decide their research directions and what the consequences of their collective decisions are. Such choices affect not only individual careers but also collectively shape contemporary science. The conflict in choosing topics is vividly depicted as ``the essential tension'' \citep{kuhn1977the}: scientists often need to decide whether to explore a new field or to exploit familiar topics \citep{foster2015tradition}. By carefully selecting the control group or by comparing a group of elites with the average, quantitative studies have provided us with a consistent conclusion. Those who focus on a narrow research agenda tend to secure a steady scientific output and gain more overall citations \citep{amjad2018measuring, zeng2019increasing, pramanik2019migration}, whereas those who take the risk to change topics are likely to produce ``hit'' papers or highly innovative outcomes \citep{leahey2017prominent, foster2015tradition, azoulay2011incentives}. However, existing works usually focus on a selective group of individuals whose scientific achievements are beyond a certain threshold. It is still unclear what would happen to a typical scientist who crosses the boundary and enters a new field. Moreover, the comparison is usually based on the overall performance and made between two distinct groups of individuals. The change of performance, measured against one's own past performance, is rarely investigated. Here, we ask a set of simple but relatively unexplored questions: If one changes the research agenda and succeeds in the new field, is she going to publish better research works than she used to? Is she going to publish faster or slower compared with her past publication speed? Furthermore, if she were to venture further, would her benefits/disadvantages increase or decline? These questions, focusing on the change of the scientific performance of an individual, are not fully addressed in existing works, to the best of our knowledge. We understand that individual scientists are inherently different from each other, affected by confounding factors such as scientific training \citep{clauset2015systematic}, gender \citep{mauleon2006productivity, huang2020historical}, age \citep{jones2011age, petersen2011quantitative}, mobility \citep{robinson2019many, petersen2018multiscale, zhao2020investigation}, prestige of the institution in which they work \citep{way2019productivity}, and more. While some factors can be controlled, many of them are hard to control. Therefore, we choose to use a large sample, which may allow us to balance out other factors and reach preliminary answers to these questions.
We use data on publications in the American Physical Society (APS) journals and apply the approach by \cite{jia2017quantifying,aleta2019explore} to quantitatively measure the extent of research direction change. The method does not just provide a binary classification that tells whether a scientist changed the research direction or not. Instead, it gauges the distance between two vectors characterizing the topics of a paper set, giving rise to a continuous measure of the direction change. Using the publication records of over 14,000 scientists, we find that the research direction change is positively correlated with the increase of impact: those with a bigger change in the research direction demonstrate not only a higher probability of increasing the citations of their publications, but also a larger relative citation gain. In contrast, the relationship between the research direction change and productivity change is neutral. Scientists who stay on the same topic do not produce faster than their colleagues venturing into new areas. We observe similar patterns when varying the criteria of data filtering, which supports the robustness of our conclusion. We also carefully compare our results with some recent findings. We provide evidence that the metric used in our study is totally uncorrelated with the one in \cite{zeng2019increasing}, and the quantity associated with the direction change is also different. Hence, the two studies quantify the effects of direction change from two distinct perspectives. We compare the direction change with the diversity change of an individual's research agenda \citep{wang2015interdisciplinarity, leahey2017prominent} and find that these two quantities are uncorrelated. Therefore, the citation gain does not come from the benefit of conducting diverse or interdisciplinary research. We also control the field citation and find that our conclusion still holds. In other words, the positive correlation between direction change and impact change does not come from moving to a hot field. Taken together, we show the correlation between the research direction change and the performance change among a large number of physicists, which advances our understanding of the career development of individual scientists. \section{Materials and Methods} \subsection{Data set} The APS data is publicly available at http://journals.aps.org/datasets, which consists of around 300,000 scientific papers authored by around 200,000 scientists together with their citation records covering the years from 1976 to 2009. The information includes a paper's author list, its publication time, its reference list, and more. To classify papers belonging to each author, we perform name disambiguation following the procedures in previous works \cite{deville2014career, zeng2019increasing, aleta2019explore}. The name-disambiguated data are roughly the same as the data we used in another work \cite{jia2017quantifying}. Still, some author names are so common that individual authors can hardly be distinguished. For this reason, we calculate the Shannon entropy of topics in an author's publications and filter out those outliers who participate in unexpectedly highly diverse topics. We first find the topic vector $g$ based on all papers under one author (see below for composing the topic vector) and then calculate the Shannon entropy as $H = -\sum^{67}_{j=1} x_{j}\log(x_{j})$, where $x_{j}$ is the $j^{th}$ element of the vector $g$. Following \cite{jia2017quantifying}, we set $H=2.5$ as the cutoff value.
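For concreteness, the entropy filter can be coded directly from this definition. The following Python sketch is illustrative: the natural logarithm is assumed, and \texttt{authors} and \texttt{topic\_vector} are hypothetical placeholders (the composition of the topic vector is described in the next subsection).
\begin{verbatim}
import numpy as np

def shannon_entropy(g):
    """Entropy of a topic vector g whose entries sum to one."""
    x = np.asarray(g, dtype=float)
    x = x[x > 0]                   # treat 0 * log(0) as 0
    return -np.sum(x * np.log(x))

H_CUT = 2.5                        # cutoff following Jia et al. (2017)
# `authors` and `topic_vector` are hypothetical placeholders
kept = [a for a in authors if shannon_entropy(topic_vector(a)) <= H_CUT]
\end{verbatim}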
\subsection{Composing topic vector} We make use of the Physics and Astronomy Classification Scheme (PACS) code first proposed by the American Institute of Physics in 1975. The PACS code contains 6 digits in a format like ``ab.cd.ef'' that points to a specific and specialized area in physics. The order of these digits is related to the hierarchical structure of the topics. For example, the code ``82.39.Rt'' corresponds to ``Reactions in complex biological systems'', which is a subtopic under the first-level topic ``82'': physical chemistry and chemical physics. In total, there are 67 topics given by the first two digits of the PACS code, representing 67 sub-disciplines in modern physics. Over 90\% of papers published after 1985 are labeled by one to three PACS codes. For each paper, we count the occurrence of the first-level topic given by the first two digits of a PACS code. The number of occurrences is further normalized by the total number of PACS codes in the paper. Then for this paper, we can build a vector $A = (a_{1}, a_{2}, ...,a_{j},... a_{67})$ where the element $a_j$ is the fractionalized occurrence of each topic in this paper. For a set of $m$ papers, we can average the $m$ per-paper vectors to reach a topic vector $g$ representing the weighted coverage of the 67 topics that the set of $m$ papers demonstrates. If the $m$ papers are authored by one scientist, the topic vector $g$ then provides a good proxy of her research direction, capturing not only the collection of topics but also the level of involvement she has in each of them across the $m$ papers. An example of composing a topic vector is illustrated in Fig. \ref{fig1}a. \subsection{Measuring the research direction change} We sequence a scientist's $n$ papers according to their publication time. We then select two sets of papers, each with size $m$, in a scientist's early and late career, from which we can build two topic vectors. Denote by $g_i$ the topic vector of the early publications and by $g_f$ that of the late publications. Comparing the difference between these two vectors allows us to quantify the degree of research direction change $J_o$. Using cosine similarity, we have \begin{equation} J_{o} = 1 - \frac{g_i\cdot g_f }{\left|\left|g_i\right|\right| \left|\left|g_f\right|\right|}\label{(1)}. \end{equation} $J_{o}$ varies between 0 and 1, where $J_{o} = 0$ means the author studies the same topics in the early and late career with identical involvements, while $J_{o} = 1$ corresponds to the largest change, as the topics in the two sets of papers have no overlap. To make sure that the results are not affected by how the two sets of papers are selected, we consider two distinct scenarios. First, we select two series of $m$ papers maximally separated in a scientist's publication sequence, \textit{i.e.} the first and last $m$ papers, capturing the change across the whole recorded career (Fig. \ref{fig1}b). The corresponding direction change is denoted by $J$. As one publication sequence defines one unique career, we only have one measure of $J$ for each author. \begin{figure}[H] \begin{center} \includegraphics[width=13.5cm]{Fig1} \caption{ ({\bf a}) An example demonstrating the procedure to compose a topic tuple and the topic vector \textit{g}. For two topic tuples (66, 68) and (05, 61, 68), the element value in $g$ of topic 66 is calculated as $\frac{1/2+0}{2} = \frac{1}{4}$, as it appears once in one topic tuple and is not included in the other.
The element value in $g$ of topic 68 is calculated as $\frac{1/2+1/3}{2} = \frac{5}{12}$, as it appears once in each of the topic tuples. Similarly, the element values in $g$ of topics 05 and 61 are calculated as $\frac{0+1/3}{2} = \frac{1}{6}$. ({\bf b}) The scenario that takes the first and the last $m$ papers in a scientist's publication sequence to obtain the direction change $J$, its distribution $P$, the growth fraction and growth rate of the scientific impact and productivity $P_c$, $R_c$, $P_t$, and $R_t$. ({\bf c}) The scenario that uses two adjacent sequences of $m$ papers randomly chosen from a scientist's publication sequence. Correspondingly, the quantities obtained are denoted by $\tilde{J}$, $\tilde{P}$, $\tilde{P}_c$, $\tilde{R}_c$, $\tilde{P}_t$, and $\tilde{R}_t$. \label{fig1}} \end{center} \end{figure} Additionally, we also consider two consecutive sets of papers beginning at a randomly chosen paper, to eliminate potential effects caused by the gap between the two sets of publications (Fig. \ref{fig1}c). The direction change measured is denoted by $\tilde{J}$. As an author may have published more than $2m$ papers, there are multiple measures of $\tilde{J}$ for each author. Therefore, for all measures associated with this scenario, we randomly pick a beginning paper for each author, get the measures needed (such as $\tilde{J}$, $\tilde{P}_t$ and so on), and calculate the statistics across the population. Then we repeat this procedure 2000 times. We report the mean value and the error bar that corresponds to the standard deviation. Though the values and the exact distributions of $J$ and $\tilde{J}$ are different, it is also noteworthy that the two quantities are highly correlated. The Pearson correlation between $J$ and the average value $\langle \tilde{J} \rangle$ of an individual scientist is over 0.99 (Fig. \hyperlink{figS1}{S1}). It implies that a big research direction change is likely carried out incrementally throughout one's career. Due to the way we quantify the research direction change, only authors with no fewer than $2m$ papers can be included in our analysis. In the main text, we report results based on $m = 8$, corresponding to 14,726 scientists who authored at least 16 papers. We repeat the same analysis with different \textit{m} values ($m = 7$ and $m = 9$, Section \hyperlink{Note.S2}{S2}). The same pattern is observed, suggesting that our findings do not depend on the choice of $m$. \section{Results} \subsection{Correlation between the direction change and impact change} We apply two distinct approaches to quantify the impact change. First, we group scientists whose direction changes fall into a small range ($J \in (0.025, 0.075]$ for instance) and measure the percentage of scientists within this group whose research impact has increased. In particular, we count the number of citations a paper receives within two years after its publication and normalize this value by the average citations of papers published in the same year, giving rise to a normalized citation measure $c_{2}$ that takes citation inflation into account \citep{radicchi2008universality,sinatra2016quantifying,petersen2019methods,huang2020historical,huang2020patent} (Section \hyperlink{Note.S1}{S1}). We then measure the average value of $c_{2}$ for the $m$ papers in an author's early and late publication lists that are used to quantify the research direction change. Denote by $\bar{c}_{2,i}$ and $\bar{c}_{2,f}$ the average citations of the two paper sets, respectively (Figs. \ref{fig1}b-c).
If an author's late publications on average receive more citations than her early ones ($\bar{c}_{2,f} > \bar{c}_{2,i}$), we consider that this author's research impact has increased. The percentage ${P}_{c}$ is then calculated as the ratio between the number of authors whose $\bar{c}_{2,f} > \bar{c}_{2,i}$ and the number of authors whose direction change falls within the range ($J-0.025,J+0.025$]. For the direction change $\tilde{J}$, we obtain $\tilde{P}_c$ with a similar approach. The interpretation of these two measures ${P}_c$ and $\tilde{P}_c$ is rather intuitive: for those who choose to venture into a new topic, how many of them are going to produce better research works than they used to, and for those who choose to stay in the same field, how many of them are able to maintain their level of scientific output? Our results demonstrate a strong positive correlation between $J$ and $P_c$, as well as between $\tilde{J}$ and $\tilde{P}_c$, where the Pearson correlation coefficient $r$ is 0.78 for $J$ and 0.92 for $\tilde{J}$ (Figs. \ref{fig2}a-b). The positive correlation indicates that those who leave the current research field are more likely to produce works of more impact than their previous work. More importantly, it also implies that the further one leaves the original area, the higher this likelihood will be. The $P_c$ and $\tilde{P}_c$ provide a binary check on whether or not an author's works are on average better than her previous ones. They do not, however, \begin{figure}[H] \begin{center} \includegraphics[width=13.5cm]{Fig2} \caption{ ({\bf a}) $P_c$ conditioning on the range ($J-0.025,J+0.025$] is positively correlated with $J$. The dashed line represents the linear regression. ({\bf b}) The average $\tilde{P}_c$ conditioning on the range ($\tilde{J} - 0.025, \tilde{J} + 0.025$] is positively correlated with $\tilde{J}$. ({\bf c}) The average $R_c$ conditioning on the range ($J-0.025,J+0.025$] is positively correlated with $J$. ({\bf d}) The average $\tilde{R}_c$ conditioning on the range ($\tilde{J} - 0.025, \tilde{J} + 0.025$] is positively correlated with $\tilde{J}$. At the boundary $J=0$ and $\tilde{J}=0$, the range [0, 0.05] is used, and the same boundary condition applies in all the analyses of $J$ and $\tilde{J}$. The scatter plots of $P_c$, $\tilde{P}_c$, $R_c$, and $\tilde{R}_c$ are displayed in Fig. \protect\hyperlink{figS2}{S2}. The value of $b$ is defined as the slope of the corresponding linear regression function (the dashed line). *** $p < 0.001$, ** $p < 0.05$, * $p < 0.1$ ($t$-test for Pearson coefficient $r$). Error bars represent one standard deviation of the mean. \label{fig2}} \end{center} \end{figure} \noindent quantify the extent of the improvement or the decline. It could be the case, for example, that the gain associated with a certain direction change is small but the loss is big. For this reason, we measure the relative impact change $(\bar{c}_{2,f} - \bar{c}_{2,i})/\bar{c}_{2,i}$, which quantifies the extent to which the impact of the later research differs from that of the early research. We calculate $(\bar{c}_{2,f} - \bar{c}_{2,i})/\bar{c}_{2,i}$ for each author and get the average values $R_c$ and $\tilde{R}_c$ of scientists with similar research direction change $J$ and $\tilde{J}$, respectively (for the few scientists whose $\bar{c}_{2,i}=0$, we assign a default value 2 to their relative impact change). We again observe a strong and positive correlation between $J$ and $R_c$, as well as $\tilde{J}$ and $\tilde{R}_c$ (Figs. \ref{fig2}c-d).
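To make the computational pipeline behind these correlations concrete, the sketch below implements the topic vector $g$ and the direction change of Eq.~(1) in Python. The input format, a list of first-level PACS topic tuples per paper, is an illustrative assumption rather than the authors' actual code.
\begin{verbatim}
import numpy as np

def topic_vector(papers):
    """Average the fractionalized topic counts of a set of papers.

    papers : list of tuples of first-level PACS topics, one tuple
             per paper, e.g. [(66, 68), (5, 61, 68)] as in Fig. 1a.
    """
    g = np.zeros(100)                   # indexed by two-digit topic code
    for tup in papers:
        for topic in tup:
            g[topic] += 1.0 / len(tup)  # fractionalized occurrence
    return g / len(papers)

def direction_change(early, late):
    """J = 1 - cosine similarity of the two topic vectors."""
    gi, gf = topic_vector(early), topic_vector(late)
    return 1.0 - gi @ gf / (np.linalg.norm(gi) * np.linalg.norm(gf))

# Worked example of Fig. 1a: topics 66 and 68 receive weights
# 1/4 and 5/12, while topics 05 and 61 receive 1/6 each.
g = topic_vector([(66, 68), (5, 61, 68)])
\end{verbatim}
With $g_i$ and $g_f$ built from the first and last $m=8$ papers of a career, \texttt{direction\_change} returns the $J$ used throughout this section.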
Taken together, those who exhibit a larger change of research direction demonstrate a higher extent of impact improvement. The larger the change, the more impact the work would receive. A scientist may move to a ``hot'' field where the average citation per publication is higher than elsewhere. To explore if this can explain the positive correlation observed, we calculate the field-normalized citation of each paper (Section \hyperlink{Note.S3}{S3}). The positive correlation is preserved when the differences between fields are taken into account. Finally, to make sure our results are not affected by the sample sizes in calculating citations, we perform similar analyses using different citation measures. We also test the case where the citation time window is 3 years. The corresponding results are presented in Section \hyperlink{Note.S4}{S4}. In all cases, a strong positive correlation is observed between the research direction change and the impact change. \subsection{Correlation between the direction change and productivity change} Another quantity associated with scientific performance is productivity, typically quantified by the time needed to complete a certain number of papers. Here we use the time interval $t$ between the publications of the first and $m^{th}$ paper in a given series of papers. Similar to the above analyses, we identify the fraction of scientists $P_t$ (and $\tilde{P}_t$) whose productivity has increased ($t_f < t_i$), and calculate the average change rate $R_t$ (and $\tilde{R}_t$) of scientists' productivity by averaging $(\frac{1}{t_f} - \frac{1}{t_i})/\frac{1}{t_i}$ of individual scientists whose research direction changes are within the same range (Figs. \ref{fig1}b-c). The direct correlation measure shows two seemingly contradictory pictures. On one hand, based on two adjacent sets of papers, the research direction change $\tilde{J}$ is not correlated with the probability to increase productivity $\tilde{P}_t$ (Fig. \ref{fig3}a), nor with the change of productivity rate $\tilde{R}_t$ (Fig. \hyperlink{figS3}{S3}a). On the other hand, if the change is measured based on two sets of papers that are at the two ends of a career, $J$ is positively correlated with both $P_t$ (Fig. \ref{fig3}b) and $R_t$ (Fig. \hyperlink{figS3}{S3}b). Does this mean that switching to new topics later in the career is associated with increased productivity? To correctly understand the correlation between $J$ and $P_t$, as well as between $J$ and $R_t$, we first have to control their inherent correlation. As $J$ depends on the two sets of papers that are maximally separated in one's publication list, the more publications one produces, the less likely it is that the publications on one end of the list would contain similar topics to those on the other end. Therefore, it is expected and also empirically confirmed that $J$ is positively correlated with the number of publications $n$ in a career (Fig. \ref{fig3}c). Furthermore, the growing number of publications may enrich a scientist's experiences, skills, and collaboration networks, which in turn would benefit productivity. It can be expected that the chance to surpass the publication rate at the beginning of the career (which usually corresponds to the graduate training period) would grow as one's publication list gets longer. Indeed, we observe empirically that $P_t$ and $R_t$ are positively correlated with $n$ (Fig. \ref{fig3}d, and Fig. \hyperlink{figS3}{S3}c).
The positive correlation between $J$ and $n$ as well as between $P_t$ and $n$ leads to an inherent correlation between $J$ and $P_t$. In other words, given the way that $J$ is measured and the various values of the hidden variable $n$, $J$ is expected to correlate with $P_t$. Indeed, if we control the number of papers by focusing on $16 \le n \le 20$ or $21 \le n \le 25$, the correlation disappears (Section \hyperlink{Note.S5}{S5}). To control this inherent correlation, we can first identify the dependence between $J$ and $P_t$ from their pairwise dependences on $n$ (Section \hyperlink{Note.S6}{S6}). The result gives $P_t \sim b\times J$, which is the expected dependence between $J$ and $P_t$ (red line in Fig. \ref{fig3}b). We then subtract the increase from $P_t$, yielding a measure $P^{'}_t = P_t - b\times J$ in which $P_t$'s natural dependence on $J$ is excluded. When the inherent correlation is properly controlled, we find that $P^{'}_t$ and $J$ are uncorrelated (Fig. \ref{fig3}e). The same analyses can also be performed on $R_t$, which leads to similar results (Fig. \hyperlink{figS3}{S3}d). Taken together, by properly controlling the effect of the publication number $n$ in our measure, we reach the consistent result that the research direction change is correlated neither with the probability of increasing productivity nor with the average change rate of productivity. This is contrary to our common perception that changing research direction is likely to hurt productivity. But we have to keep in mind that there may be a survivor bias. A scientist can contribute to the statistics only when she successfully changes the direction. Those who take the risk to explore a new area but fail to publish enough are not included in our data. Therefore, the interpretation of the results needs more caution. Finally, the hidden variable $n$ leads us to two contradictory relationships because it strongly and positively correlates with $P_t$ as well as $R_t$. In terms of the impact change analyzed in the above section, we find that $n$ is weakly and negatively correlated with $P_c$ and $R_c$ (Fig. \hyperlink{figS4}{S4}). This is in line with some previous findings that simply publishing a lot does not necessarily enhance the scientific impact of individual papers \citep{hanssen2015value,lariviere2016many,sarewitz2016pressure,kolesnikov2018researchers}. Therefore, although the dependence between $J$ and $n$ persists, it does not play a role in our conclusion about impact change. \begin{figure}[H] \begin{center} \includegraphics[width=13.5cm]{Fig3} \caption{ ({\bf a}) $\tilde{P}_t$ is not correlated with $\tilde{J}$ for a range of values ($0 \leq \tilde{J} \leq 0.725$) covering over 97\% of the sample, with small standard deviations. Due to the relatively small sample size (no more than 100) and high standard deviation for each group of $\tilde{J}$ in the range ($0.725 < \tilde{J} \leq 1.0$), we exclude this range from the discussion. ({\bf b}) $P_t$ increases with $J$, the slope of which is almost the same as the one predicted by the correlations between $P_t$ and $n$ as well as $n$ and $J$ (Section \protect\hyperlink{Note.S6}{S6}). ({\bf c}) The average output $n$ conditioning on the range of direction change ($J-0.025,J+0.025$] is positively correlated with $J$. ({\bf d}) $P_t$ is positively correlated with the output $n$. ({\bf e}) After subtracting the increase induced by the pairwise dependence between $n$ and $J$ as well as $n$ and $P_t$, the result indicates that $P^{'}_t$ and $J$ are uncorrelated.
The value of $b$ is defined as the slope of the corresponding linear regression function (the dashed line). *** $p < 0.001$, ** $p < 0.05$, * $p < 0.1$ ($t$-test for Pearson coefficient $r$). \label{fig3}} \end{center} \end{figure} \subsection{Extended discussions} It is a conventional narrative that scientific productivity tends to grow rapidly at the early stage of a career and then slowly declines. Such a narrative is highlighted in some recent studies \citep{sinatra2016quantifying, li2020scientific} explaining why elite scientists tend to produce their most significant works at the early career stage: because their productivity declines when they get older. This pattern seems to contradict our finding that $P_t$ and $n$ are positively correlated, which plays a crucial role in explaining the discrepancy in the initial results of the productivity change. Indeed, if we are interested not only in elite scientists, the productivity changes can be diverse, and the jump-decline pattern is not the only typical one \citep{way2017misleading}. Even in the work that implies a jump-decline pattern, we can still observe for the vast majority that productivity declines in the very early career (the typical schooling and training period), followed by a steady increase as the career unfolds (Fig. 1E in \cite{sinatra2016quantifying}). For a typical scientist, it may hold that the productivity at the end of the career, though it has declined from the peak, is still higher than when she started the career. Indeed, when we turn to a more comprehensive data set from the Web of Science, we still observe that $P_t$ ($R_t$) and $n$ are positively correlated (Figs. \hyperlink{figS5}{S5}a-b). Another potentially important factor is the career length, which is not controlled in the above analyses. Therefore, those who are in the rising stage may outnumber those in the twilight of their careers. Nevertheless, when the career length is controlled, the same positive correlation remains (Figs. \hyperlink{figS5}{S5}c-d). Moreover, the studies by \cite{sinatra2016quantifying} and \cite{liu2018hot} suggest that the work with the highest impact may appear randomly in one's publication sequence. But this pattern is not contradictory to the positive correlation between the direction change and impact change, which implies that the work with the highest impact may appear in the later career. The ``random impact rule'' by \cite{sinatra2016quantifying, liu2018hot} is based on all samples without any control, whereas the correlation measured here is conditioned on a certain research direction change. Another work we wish to discuss further is the recent advance by \cite{zeng2019increasing}, which also utilizes the APS data to quantify topic change patterns of scientists. In this pioneering work, the authors measure the probability of switching between topics in an individual career. The results demonstrate that, compared with typical scientists, the top 10\% of productive scientists have a lower switching probability in the early career and a higher switching probability in the later career. Moreover, the top 10\% most cited scientists have an overall lower switching probability than typical scientists. The relationship discovered seems contradictory to our findings. However, as a matter of fact, our work and the work by \cite{zeng2019increasing} are different in terms of the quantification of the direction change and the quantity that the change is associated with. Hence, the two conclusions can both be valid.
First, in this work, we are interested in the change of performance, not the performance itself. In other words, we investigate how likely a scientist is to publish works with more impact or within less time as the career develops, regardless of whether she is in the top 10\% elite group or not. The focus, and consequently the findings, are different from those in \cite{zeng2019increasing}. More importantly, the switching probability quantifies the likelihood that the current topic would be different from the subsequent one, which depends only on the topics of two consecutive papers. The direction change applied in this study compares the averaged topics of two sets of papers. Therefore, they reflect two distinct aspects of an individual's research topic selection. As an example, assume a scientist who has published 16 papers, of which the first 8 papers are on topic A and the next 8 are on topic B. The direction change $J$ of this scientist would be 1, as she explores totally different topics in the early and late sets of publications. The switching probability, however, is very low because this scientist only switches the topic once, which occurs at the 9th paper. Likewise, assume another scientist who has also published 16 papers on either topic A or topic B. She publishes following a pattern that if her current publication is on topic A, then the next paper will be on topic B, and vice versa. In this case, the direction change $J$ would be 0, as this scientist equally devotes herself to topics A and B throughout the career. But the switching probability would be very high, as she constantly switches the topic. The two examples vividly depict the difference between these two measures. Indeed, when we try to associate a scientist's direction change and switching probability together, we find that the two are independent of each other (Figs. \ref{fig4}a-b). Therefore, we believe that this work and the work by \cite{zeng2019increasing} are complementary rather than contradictory. As they reveal two essential patterns underlying the scientific careers of individuals, it would be interesting to check in future studies if a combination of the two is a more comprehensive predictor of an individual career. \begin{figure}[H] \begin{center} \includegraphics[width=13.5cm]{Fig4} \caption{ ({\bf a}) For each scientist, we plot her $J$ versus switching probability (grey circle), and the mean value of $J$ conditioning on the range of (switching probability - 0.025, switching probability + 0.025] (scatter with line). The result shows that switching probability is not correlated with $J$ on the individual level ($p>0.1$). ({\bf b}) For each scientist, we calculate the average value $\langle \tilde{J} \rangle$ of her $n-1$ $\tilde{J}$. Then we plot her $\langle \tilde{J} \rangle$ versus switching probability (grey circle), and the mean value of $\langle \tilde{J} \rangle$ conditioning on the range of (switching probability - 0.025, switching probability + 0.025] (scatter with line). The result shows that switching probability is not correlated with $\langle \tilde{J} \rangle$ at the individual level ($p>0.1$). ({\bf c}) For scientists whose impact has increased ($\bar{c}_{2,f} > \bar{c}_{2,i}$), we plot their $J$ versus the change of Shannon entropy $\Delta H$ (grey circle). The result shows that $\Delta H$ and $J$ are not correlated ($p>0.1$). The average $\Delta H$ is close to 0, indicating that diversity change is not associated with the impact increase.
({\bf d}) Similar to ({\bf c}), but the mean value of $\Delta H$ is taken for individuals within ($J-0.025,J+0.025$]. The value of $b$ is defined as the slope of the corresponding linear regression function (the dashed line). Error bars represent one standard deviation of the mean. \label{fig4}} \end{center} \end{figure} Finally, we also wish to point out that our finding is different from that on research diversity, though they both suggest a positive citation gain \citep{2015Does,2015Are,2021Exploring,leahey2017prominent}. Indeed, the direction change and diversity change are two different things. Take the above example again: a scientist who has published the first 8 papers on topic A and the next 8 papers on topic B. The direction change is 1. But the diversity change is 0, as she always focuses on one topic only. Another example is when a scientist focuses only on topic A at the beginning and then changes the agenda to 80\% topic A and five other topics, each with 4\% weight. As the main focus is still on topic A, the direction change is very small. But because there are five other topics added to the agenda, the diversity change is high. Here, we use Shannon entropy to quantify the diversity of one's research agenda as $H = -\sum^{67}_{j=1} x_{j}\log(x_{j})$, where $x_{j}$ is the $j^{th}$ element of the vector $g$ (the other two types of diversity indices are reported in Section \hyperlink{Note.S7}{S7}). We focus on scientists whose impact has increased ($\bar{c}_{2,f} > \bar{c}_{2,i}$) and plot their change of entropy $\Delta H$ against their change of research direction. We find these two quantities are not correlated (Figs. \ref{fig4}c-d and Fig. \hyperlink{figS15}{S15}). The fact that the change of diversity is close to 0 on average indicates that the increased impact is not due to a more interdisciplinary or a narrower research agenda. Instead, it is likely a result of bringing existing expertise to the new field and solving the field's problems with a new approach. \section{Conclusions} To summarize, by utilizing PACS codes to classify topics in physics publications, we quantify the degree of the research direction change in an individual career. Instead of considering the overall scientific performance, we associate this direction change of an individual with her change of performance. On one hand, we find that the direction change is strongly and positively correlated with the change of impact. Those who demonstrate a larger change in research topics are more likely to receive more citations than they used to. The magnitude of the relative improvement also tends to be higher. On the other hand, the direction change is not correlated with the productivity change. The likelihood that one increases or decreases productivity may be associated with many factors in an individual career, but the change of research direction alone is not associated with it. We perform supplementary analyses to demonstrate the robustness of the conclusion. We also discuss in detail the difference between our findings and others'. Our study provides another point of view not fully captured in previous studies. The statistics provide an encouraging prediction for scientists who venture into a new field. Once they are established in the new field, they are likely to become better scientists. What is not mentioned, however, is the risk associated with the direction change \citep{bromham2016interdisciplinary, azoulay2011incentives, foster2015tradition, goldstein2020know}.
Indeed, our analyses are based on scientists who have published enough papers in the new field. Those who try to change but fail to have the new research published are not included. This naturally introduces a survivor bias in the results, which motivates further studies on failures across a career \citep{yin2019quantifying, wang2019early}. There are also many factors that the correlation measure alone cannot explain. For example, a scientist may be forced to leave the old field because it shrinks and cannot yield interesting results, while another scientist may find a way to combine her prior knowledge with the new problem, which leads to fruitful results \citep{pramanik2019migration}. In both cases, a citation gain and a direction change would be observed. Yet, in the current study we cannot distinguish them, nor identify which causes the other. Given the confounding factors in an individual career, it is difficult but important to check other explanations of the observation. Moreover, our study is based on APS publication data. Despite its intensive usage \citep{liu2017knowledge, chinazzi2019mapping}, this data set only covers a small part of science. The classification scheme provided by the PACS code has certain limitations as well. Given the availability of large-scale data sets, such as the Microsoft Academic Graph \citep{wang2020microsoft}, and advances in machine learning tools to identify and classify topics from papers \citep{qian2020understanding, chinazzi2019mapping, palmucci2020your, shen2019node2vec}, it would be important to check if similar patterns can be observed in other data sets. Finally, we adopt whole counting in this study, which gives equal credit to all co-authors. This approach may lead to inflation when calculating the impact of a scientist's work and her productivity \citep{huang2011counting,yu2021papers}. It may be worthwhile to try other counting or credit allocation methods \citep{sivertsen2019measuring,shen2014collective,wang2019nonlinear}. \section*{Supplementary Information} See the file of supplementary information for additional materials. \section*{Acknowledgments} This work is supported by the National Natural Science Foundation of China (No. 61603309). Boleslaw K. Szymanski is also supported by the Army Research Office (ARO) under Grant W911NF-16-1-0524. \newpage \bibliographystyle{apalike2} \biboptions{authoryear}
\section{Analytical solution in homogeneous doubly-uniaxial media} \label{app1} In this appendix, the closed-form analytical expressions used to obtain the electromagnetic fields in homogeneous \textcolor{\Cred}{and} doubly-uniaxial media are presented. In such media, Maxwell's equations with $e^{-\iu\omega t}$ time-dependence write as \begin{flalign} \na\times\mathbf{E}(\rr) &=\iu\omega\um\cdot\mathbf{H}(\rr), \label{app.1.E.Maxwell.Eq.a}\\ \na\times\mathbf{H}(\rr) &=-\iu\omega\ue\cdot\mathbf{E}(\rr) + \mathbf{J}(\rr). \label{app.1.E.Maxwell.Eq.b} \end{flalign} Permittivity values are complex-valued so as to include conductivities. For simplicity, it is assumed that the anisotropy ratio for the permeability tensor coincides with that of the complex permittivity tensor, i.e., $\kappa_\epsilon=\kappa_\mu$. To simplify the derivation, we adopt coordinate stretching techniques. For a general exposition of coordinate stretching, refer to \cite{Chew94:3D, Teixeira98:General, Teixeira98:Analytical}. To begin with, let us consider the modified Maxwell curl equations with stretched coordinates, i.e., \begin{flalign} \tna\times\tE(\trr)&=\iu\omega\tmu\tH(\trr), \label{app.1.E.Maxwell.E}\\ \tna\times\tH(\trr)&=-\iu\omega\teps\tE(\trr)+\tJ(\trr), \label{app.1.E.Maxwell.H} \end{flalign} with a modified nabla operator $\tna$ defined as \begin{flalign} \tna = \hat{x}\frac{\pa}{\pa \widetilde{x}} + \hat{y}\frac{\pa}{\pa \widetilde{y}} + \hat{z}\frac{\pa}{\pa \widetilde{z}}, \label{app.1.E.modified.nabla} \end{flalign} where $\widetilde{x}$, $\widetilde{y}$, and $\widetilde{z}$ are stretched coordinates defined such that \begin{flalign} u \rightarrow \widetilde{u}=\int_0^u s_u(u')du', \label{app.1.E.stretching.coord} \end{flalign} where $s_u$ is the corresponding complex stretching variable, and $u$ stands for $x$, $y$, or $z$. In the above, the fields and sources are non-Maxwellian but $\tmu$ and $\teps$ are scalars, so the medium is isotropic. Using the technique in~\cite{Teixeira98:General}, \eqref{app.1.E.Maxwell.E} and \eqref{app.1.E.Maxwell.H} are rewritten as \begin{flalign} \na\times\left(\ttS^{-1}\cdot\tE(\trr)\right) &=\iu\omega\tmu\left(\text{det}\ttS\right)^{-1}\ttS\cdot\tH(\trr), \label{app.1.E.Maxwell.E.v3}\\ \na\times\left(\ttS^{-1}\cdot\tH(\trr)\right) &=-\iu\omega\teps\left(\text{det}\ttS\right)^{-1}\ttS\cdot\tE(\trr) +\left(\text{det}\ttS\right)^{-1}\ttS\cdot\tJ(\trr), \label{app.1.E.Maxwell.H.v3} \end{flalign} where the dyadic $\ttS$ is defined as \begin{flalign} \ttS = \hat{x}\hat{x}\left(\frac{1}{s_x}\right) + \hat{y}\hat{y}\left(\frac{1}{s_y}\right) + \hat{z}\hat{z}\left(\frac{1}{s_z}\right). \end{flalign} Using the relations between the stretched fields and the unstretched (Maxwellian) fields, \begin{subequations} \begin{flalign} \mathbf{E}(\rr)&=\ttS^{-1}\cdot\tE(\trr), \label{app.1.E.unstr.E}\\ \mathbf{H}(\rr)&=\ttS^{-1}\cdot\tH(\trr), \label{app.1.E.unstr.H}\\ \mathbf{J}(\rr)&=\left(\text{det}\ttS\right)^{-1}\ttS\cdot\tJ(\trr), \label{app.1.E.unstr.J} \end{flalign} \end{subequations} \eqref{app.1.E.Maxwell.E.v3} and \eqref{app.1.E.Maxwell.H.v3} are rearranged as \begin{flalign} \na\times\mathbf{E}(\rr) &=\iu\omega\left[ \tmu\left(\text{det}\ttS\right)^{-1}\ttS\cdot\ttS \right]\cdot\mathbf{H}(\rr), \label{app.1.E.Maxwell.E.v4}\\ \na\times\mathbf{H}(\rr) &=-\iu\omega\left[ \teps\left(\text{det}\ttS\right)^{-1}\ttS\cdot\ttS \right]\cdot\mathbf{E}(\rr) + \mathbf{J}(\rr).
\label{app.1.E.Maxwell.H.v4} \end{flalign} These two resulting curl equations can be associated with an effective anisotropic medium and represented as \begin{flalign} \na\times\mathbf{E}(\rr) &=\iu\omega\um\cdot\mathbf{H}(\rr), \label{app.1.E.Maxwell.E.v5}\\ \na\times\mathbf{H}(\rr) &=-\iu\omega\ue\cdot\mathbf{E}(\rr) + \mathbf{J}(\rr), \label{app.1.E.Maxwell.H.v5} \end{flalign} which recover the form of the original curl equations \eqref{app.1.E.Maxwell.Eq.a} and \eqref{app.1.E.Maxwell.Eq.b}. Therefore, the electromagnetic fields $\mathbf{E}(\rr)$ and $\mathbf{H}(\rr)$ in a homogeneous, uniaxial medium can be easily obtained from $\tE(\trr)$ and $\tH(\trr)$, which are solutions in isotropic media with coordinate stretching, via the transformations expressed in \eqref{app.1.E.unstr.E}, \eqref{app.1.E.unstr.H}, and \eqref{app.1.E.unstr.J}. In order to determine the form of the stretching variables relevant to our problem, let us examine the effective anisotropic medium obtained above. The constitutive tensors have the form \begin{flalign} \um =\left[\tmu\left(\text{det}\ttS\right)^{-1}\ttS\cdot\ttS\right] =\tmu\ttLa, \label{app.1.E.ani.mu}\\ \ue =\left[\teps\left(\text{det}\ttS\right)^{-1}\ttS\cdot\ttS\right] =\teps\ttLa, \label{app.1.E.ani.epsilon} \end{flalign} where \begin{flalign} \ttLa =s_x s_y s_z \begin{bmatrix} s_x^{-2} & 0 & 0 \\ 0 & s_y^{-2} & 0 \\ 0 & 0 & s_z^{-2} \\ \end{bmatrix} = \begin{bmatrix} \frac{s_y s_z}{s_x} & 0 & 0 \\ 0 & \frac{s_x s_z}{s_y} & 0 \\ 0 & 0 & \frac{s_x s_y}{s_z} \\ \end{bmatrix}. \label{app.1.E.Lambda} \end{flalign} Using two conditions on the stretching variables for uniaxial anisotropy, and using the wavenumber expression for the modified Maxwell's equations $\tk=\omega\sqrt{\tmu\teps}$, we can set $s_x=s_y=1$ and $s_z=\kappa$. Consequently, we obtain $\tmu = \frac{\mu_h}{\kappa}$ and $\teps = \frac{\epsilon_h}{\kappa}$. Next, let us consider the source transformation \eqref{app.1.E.unstr.J}. If the source is a point Hertzian electric dipole of the form $ \mathbf{J}(\rr)=Il\hat{\alpha}'\delta(\rr-\rp)$, the coordinate stretching should be carefully treated due to the presence of the Dirac delta function. The stretched current density is expressed as $ \tJ(\trr)=Il\hat{\widetilde{\alpha}}'\delta(\trr-\trp)$. From the Dirac delta function properties, \begin{flalign} \delta(\trr-\trp) = \frac{1}{s_x s_y s_z}\delta(\rr-\rp), \end{flalign} and from \eqref{app.1.E.unstr.J}, \begin{flalign} \hat{\alpha}'\delta(\rr-\rp) =\left(\text{det}\ttS\right)^{-1}\ttS\cdot\hat{\widetilde{\alpha}}'\delta(\trr-\trp) = \begin{bmatrix} s_x^{-1} & 0 & 0 \\ 0 & s_y^{-1} & 0 \\ 0 & 0 & s_z^{-1} \\ \end{bmatrix} \cdot\hat{\widetilde{\alpha}}'\delta(\rr-\rp).
\label{app.1.E.delta.comp} \end{flalign} Since $s_x=s_y=1$ and $s_z=\kappa$, we have the source transformation $\hat{\widetilde{\alpha}}'=\ttS^{-1}\cdot\hat{\alpha}'$, and in homogeneous isotropic media, the Cartesian field components due to the Hertzian electric dipole source can be written as \begin{subequations} \renewcommand{\arraystretch}{1.4} \begin{flalign} \begin{bmatrix} \widetilde{E}_x \\ \widetilde{E}_y \\ \widetilde{E}_z \end{bmatrix} &=\frac{\iu Il}{\omega\teps} \; \frac{e^{\iu \tk\tr}}{4\pi\tr} \; \overline{\mathbf{M}}_e \cdot \begin{bmatrix} \widetilde{\alpha}_{x'} \\ \widetilde{\alpha}_{y'} \\ \widetilde{\alpha}_{z'} \end{bmatrix}, \label{app.1.E.Exyz}\\ \begin{bmatrix} \widetilde{H}_x \\ \widetilde{H}_y \\ \widetilde{H}_z \end{bmatrix} &=Il \; \frac{e^{\iu \tk\tr}}{4\pi\tr} \; \overline{\mathbf{M}}_m \cdot \begin{bmatrix} \widetilde{\alpha}_{x'} \\ \widetilde{\alpha}_{y'} \\ \widetilde{\alpha}_{z'} \end{bmatrix}, \label{app.1.E.Hxyz} \end{flalign} \end{subequations} where \begin{subequations} \renewcommand{\arraystretch}{1.4} \begin{flalign} \overline{\mathbf{M}}_e&= \begin{bmatrix} \tk^2+A+BX^2 & BXY & BXZ \\ BXY & \tk^2+A+BY^2 & BYZ \\ BXZ & BYZ & \tk^2+A+BZ^2 \\ \end{bmatrix}, \label{app.1.E.Me.str}\\ \overline{\mathbf{M}}_m&= \begin{bmatrix} 0 & AZ & -AY \\ -AZ & 0 & AX \\ AY & -AX & 0 \\ \end{bmatrix}, \label{app.1.E.Mm.str}\\ A&=\iu \tk/\tr-1/\tr^2, \label{app.1.E.A.str}\\ B&=-\tk^2/\tr^2-3\iu \tk/\tr^3+3/\tr^4, \label{app.1.E.B.str}\\ X&=s_x(x'-x)=x'-x, \label{app.1.E.X.str}\\ Y&=s_y(y'-y)=y'-y, \label{app.1.E.Y.str}\\ Z&=s_z(z'-z)=\kappa(z'-z), \label{app.1.E.Z.str}\\ \tk&=\omega\sqrt{\mu_h \epsilon_h}/\kappa, \label{app.1.E.k.str}\\ \tr&=\left[(x'-x)^2+(y'-y)^2+\kappa^2(z'-z)^2\right]^{1/2}. \label{app.1.E.r.str} \end{flalign} \end{subequations} Applying field transformations, \eqref{app.1.E.unstr.E} and \eqref{app.1.E.unstr.H}, and source transformation $\hat{\widetilde{\alpha}}'=\ttS^{-1}\cdot\hat{\alpha}'$, we obtain \begin{subequations} \renewcommand{\arraystretch}{1.4} \begin{flalign} \begin{bmatrix} E_x \\ E_y \\ E_z \end{bmatrix} &=\frac{\iu Il}{\omega\teps} \; \frac{e^{\iu \tk\tr}}{4\pi\tr} \; \ttS^{-1} \cdot \overline{\mathbf{M}}_e \cdot \ttS^{-1} \cdot \begin{bmatrix} \alpha_{x'} \\ \alpha_{y'} \\ \alpha_{z'} \end{bmatrix}, \label{app.1.E.Exyz.unstr}\\ \begin{bmatrix} H_x \\ H_y \\ H_z \end{bmatrix} &=Il \; \frac{e^{\iu \tk\tr}}{4\pi\tr} \; \ttS^{-1} \cdot \overline{\mathbf{M}}_m \cdot \ttS^{-1} \cdot \begin{bmatrix} \alpha_{x'} \\ \alpha_{y'} \\ \alpha_{z'} \end{bmatrix}. 
\label{app.1.E.Hxyz.unstr} \end{flalign} \end{subequations} Finally, applying the coordinate transformation from Cartesian to cylindrical coordinates, we obtain \begin{subequations} \renewcommand{\arraystretch}{1.4} \begin{flalign} \begin{bmatrix} E_\rho \\ E_\phi \\ E_z \end{bmatrix} &=\frac{\iu Il}{\omega\teps} \; \frac{e^{\iu \tk\tr}}{4\pi\tr} \; \overline{\mathbf{T}}_1 \cdot \ttS^{-1} \cdot \overline{\mathbf{M}}_e \cdot \ttS^{-1} \cdot \overline{\mathbf{T}}_2 \cdot \begin{bmatrix} \alpha_{\rho'} \\ \alpha_{\phi'} \\ \alpha_{z'} \end{bmatrix}, \label{app.1.E.Erpz}\\ \begin{bmatrix} H_\rho \\ H_\phi \\ H_z \end{bmatrix} &=Il \; \frac{e^{\iu \tk\tr}}{4\pi\tr} \; \overline{\mathbf{T}}_1 \cdot \ttS^{-1} \cdot \overline{\mathbf{M}}_m \cdot \ttS^{-1} \cdot \overline{\mathbf{T}}_2 \cdot \begin{bmatrix} \alpha_{\rho'} \\ \alpha_{\phi'} \\ \alpha_{z'} \end{bmatrix}, \label{app.1.E.Hrpz} \end{flalign} \end{subequations} where \begin{subequations} \renewcommand{\arraystretch}{1.4} \begin{flalign} \overline{\mathbf{T}}_1&= \begin{bmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}, \label{app.1.E.T1}\\ \overline{\mathbf{T}}_2&= \begin{bmatrix} \cos\phi' & -\sin\phi' & 0 \\ \sin\phi' & \cos\phi' & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}. \label{app.1.E.T2} \end{flalign} \end{subequations} \section*{References}} \newcommand{\Cred}{black} \newcommand{\Cblue}{black} \journal{Elsevier} \begin{document} \begin{frontmatter} \title{Stable evaluation of Green's functions in cylindrically stratified regions with uniaxial anisotropic layers} \author[rvt1]{H. Moon\corref{cor1}} \ead{haksu.moon@gmail.com} \author[rvt2]{B. Donderici} \ead{burkay.donderici@halliburton.com} \author[rvt3]{F. L. Teixeira} \ead{teixeira@ece.osu.edu} \cortext[cor1]{Corresponding author} \address[rvt1]{ElectroScience Laboratory, The Ohio State University, Columbus, OH 43212, USA (present address: Intel Corporation, Hillsboro, OR 97124, USA)} \address[rvt2]{Sensor Physics \& Technology, Halliburton Energy Services, Houston, TX 77032, USA} \address[rvt3]{ElectroScience Laboratory, The Ohio State University, Columbus, OH 43212, USA} \begin{abstract} We present a robust algorithm for the computation of electromagnetic fields radiated by point sources (Hertzian dipoles) in cylindrically stratified media where each layer may exhibit material properties (permittivity, permeability, and conductivity) with uniaxial anisotropy. Analytical expressions are obtained from the spectral representation of the tensor Green's function in terms of cylindrical Bessel and Hankel eigenfunctions, extended to layered uniaxial media. Due to the poor scaling of these eigenfunctions for extreme arguments and/or orders, direct numerical evaluation of such expressions can produce numerical instability, i.e., underflow, overflow, and/or round-off errors under finite precision arithmetic. To circumvent these problems, we develop a numerically stable formulation through suitable rescaling of various expressions involved in the computational chain, to yield a robust algorithm for all parameter ranges. Numerical results are presented to illustrate the robustness of the formulation, including cases of practical interest to geophysical exploration.
\end{abstract} \begin{keyword} cylindrically stratified media \sep anisotropic media \sep Green's function \sep cylindrical coordinates \sep electromagnetic radiation \end{keyword} \end{frontmatter} \input{def} \input{sec.1.intro} \input{sec.2.analyt.formul} \input{sec.3.range.formul} \input{sec.4.results} \input{sec.5.con} \section*{Acknowledgement} We thank Halliburton Energy Services for the permission to publish this work, and Dr. Baris Guner for kindly compiling some of the comparison data. \section{Introduction} \label{ch1.intro} Analysis of electromagnetic fields in cylindrically stratified media is of great importance in many applications, such as borehole geophysics~\cite{Wait:Geo, Telford:Applied, Ellis:Well}. This is a classical problem with separable geometry where the components of the tensor Green's function can be expressed in generic form as~\cite[Ch. 3]{Chew:Waves},\cite{Moon14:Stable} \begin{flalign} \suma e^{\iu n(\phi-\phi')}\intmp dk_z e^{\iu k_{z}(z-z')} \mathbf{\Phi}_n(\rho,\rho'), \end{flalign} where the integrand factor $\mathbf{\Phi}_n(\rho,\rho')$ contains various products of cylindrical Bessel and Hankel functions. When applicable, such solutions are often preferred to brute-force numerical methods such as finite elements and finite differences \cite{Wang01:3D, Weiss02:Electromagnetic, Weiss03:Electromagnetic, Lee07:Cylindrical, Lee12:Numerical,Pardo06:Two,Pardo06:Simulation,Novo07:Finite, Novo10:Three,Nam10:Simulation} since the former can provide very accurate results with computational costs that are orders of magnitude smaller than the latter. This is especially important for inverse algorithms that rely on repeated forward solutions to determine physical parameters of interest (say, layer resistivities) from field values measured at certain subterranean locations. However, numerical computations directly based on the canonical expressions of this problem can lead to underflow and overflow issues in finite precision arithmetic. This is caused by the poor scaling of cylindrical Bessel and Hankel functions for extreme arguments and/or orders, which occur for low frequencies of operation and/or extreme values for layer resistivities. In addition, convergence problems in the numerical evaluation of the spectral integral on the longitudinal wavenumber $k_z$ may occur depending on the separation distance between the source $(\rho',\phi',z')$ and observation point $(\rho,\phi,z)$ as well as on the operation frequency. To circumvent these problems, a stable formulation based on a suitable analytical conditioning of the various factors in the computational chain and a proper choice of deformed integration paths in the complex $k_z$ plane was recently put forth in~\cite{Moon14:Stable}. This formulation was shown to be robust to variations in physical parameters that span several orders of magnitude. A related formulation to compute static fields (electric potentials) due to current electrodes in isotropic layers was described in~\cite{Moon15:Computation}. In this work, we extend the formulation presented in~\cite{Moon14:Stable} to account for scenarios where the layers comprising the cylindrically stratified media may exhibit anisotropic properties.
In borehole geophysics, anisotropy is quite common \cite{Kunz58:Some,Teitler70:Refraction,Kong72:Electromagnetic,Moran79:Effects,Morgan87:Electromagnetic,Nekut94:Anisotropy,Bittar96:Effects,Howard00:Petrophysics,Yin01:Electromagnetic,Zhang04:Determination,Wang06:Weak,Hue07:Numerical,Wang08:Numerical,Zhong08:Computation,Yuan10:Simulation,Hagiwara11:Apparent,Hagiwara12:Determination,Liu12:Analysis,Luling13:Paradox,Sainath14:Robust,Sainath14:Tensor} and may result from geological factors affecting the various Earth layers, such as salt water penetrating porous fractured formations (thereby increasing the conductivity in the direction parallel to the fracture) and/or the presence of clay and sand laminates with directionally dependent resistivities. Here, for generality, we assume each layer to be doubly uniaxial, i.e., both the complex permittivity tensor $\ue$ (which includes the conductivity tensor) and the permeability tensor $\um$ are independently uniaxial, which facilitates the analysis of equivalent problems using electromagnetic duality~\cite[Ch. 1]{Chew:Waves}. \section{Fields in cylindrically-layered uniaxial media} \label{sec.2.formul} Most of the basic notation and terminology is adopted from \cite[Ch. 3]{Chew:Waves}. This section can be regarded as a generalization of the formulation presented for isotropic layers in \cite{Moon14:Stable} to uniaxial anisotropic layers. \subsection{General solution in homogeneous, uniaxial media} \label{sec.2.1} Maxwell's curl equations in uniaxial, homogeneous, and source-free media (with time-harmonic dependence $e^{-\iu\omega t}$ assumed) read as \begin{flalign} \na\times\mathbf{E}&=\iu\omega\um\mathbf{H}, \label{ch1.1.E.Maxwell.1}\\ \na\times\mathbf{H}&=-\iu\omega\ue\mathbf{E}, \label{ch1.1.E.Maxwell.2} \end{flalign} where $\um$ and $\ue$ are the permeability tensor and complex permittivity tensor, respectively. In the uniaxial case, $\um$ is written as \begin{flalign} \renewcommand{\arraystretch}{1.2} \um= \begin{bmatrix} \mu_h & 0 & 0 \\ 0 & \mu_h & 0 \\ 0 & 0 & \mu_v \\ \end{bmatrix}, \label{ch1.1.E.uniaxial.mu} \end{flalign} where $\mu_h$ and $\mu_v$ are the horizontal and vertical permeabilities, respectively. The complex permittivity tensor $\ue$ includes the electric conductivity and is written as \begin{flalign} \renewcommand{\arraystretch}{1.2} \ue = \begin{bmatrix} \epsilon_{h} & 0 & 0 \\ 0 & \epsilon_{h} & 0 \\ 0 & 0 & \epsilon_{v} \\ \end{bmatrix} = \begin{bmatrix} \epsilon_{p,h} +\iu\sigma_h/\omega & 0 & 0 \\ 0 & \epsilon_{p,h} +\iu\sigma_h/\omega & 0 \\ 0 & 0 & \epsilon_{p,v} +\iu\sigma_v/\omega \\ \end{bmatrix}, \label{ch1.1.E.uniaxial.epsilon} \end{flalign} where $\epsilon_{p,h}$ and $\epsilon_{p,v}$ are the horizontal and vertical permittivities, and $\sigma_h$ and $\sigma_v$ are the horizontal and vertical conductivities, respectively. In such source-free media, the divergence equations can be written as \begin{flalign} \na\cdot\left( \ue \cdot \mathbf{E} \right) &=0, \label{ch1.1.E.Maxwell.3}\\ \na\cdot\left( \um \cdot \mathbf{H} \right) &=0. \label{ch1.1.E.Maxwell.4} \end{flalign} Note that in general $\na\cdot\mathbf{E}$ and $\na\cdot\mathbf{H}$ in uniaxial and source-free media are nonzero.
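Although the developments that follow are analytical, it is useful to keep the computational counterpart in mind. The following minimal Python sketch (ours, for illustration only; all names are hypothetical) assembles the two constitutive tensors from the primitive material parameters, with the sign convention $\epsilon = \epsilon_p + \iu\sigma/\omega$ implied by the time dependence $e^{-\iu\omega t}$:
\begin{verbatim}
import numpy as np

def uniaxial_tensors(omega, eps_ph, eps_pv, sig_h, sig_v, mu_h, mu_v):
    """Sketch: build the uniaxial permeability and complex permittivity
    tensors; eps = eps_p + 1j*sigma/omega for e^{-i w t} dependence."""
    eps_h = eps_ph + 1j * sig_h / omega
    eps_v = eps_pv + 1j * sig_v / omega
    mu_tensor = np.diag([mu_h, mu_h, mu_v]).astype(complex)
    eps_tensor = np.diag([eps_h, eps_h, eps_v])
    return mu_tensor, eps_tensor
\end{verbatim}
Only the four scalars $\epsilon_h$, $\epsilon_v$, $\mu_h$, and $\mu_v$ enter the formulation below; the full $3\times 3$ tensors are shown merely to fix conventions.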
To verify that these divergences are indeed nonzero, note that the left-hand side of \eqref{ch1.1.E.Maxwell.3} in cylindrical coordinates is written as \begin{flalign} \na\cdot\ue\mathbf{E} =\epsilon_h \left\{ \frac{1}{\rho}\frac{\pa\left(\rho E_\rho\right)}{\pa\rho} + \frac{1}{\rho}\frac{\pa E_\phi}{\pa\phi} + \frac{\pa E_z}{\pa z} -\left(1-\frac{\epsilon_v}{\epsilon_h}\right)\frac{\pa E_z}{\pa z} \right\} =\epsilon_h \left\{ \na\cdot\mathbf{E} - \left(1-\frac{\epsilon_v}{\epsilon_h}\right)\frac{\pa E_z}{\pa z} \right\}. \label{ch1.1.E.div.eE} \end{flalign} From \eqref{ch1.1.E.Maxwell.3} and \eqref{ch1.1.E.div.eE}, we can obtain \begin{flalign} \na\cdot\mathbf{E} =\left(1-\frac{\epsilon_v}{\epsilon_h}\right)\frac{\pa E_z}{\pa z}. \label{ch1.1.E.div.E} \end{flalign} Similarly, we can obtain \begin{flalign} \na\cdot\mathbf{H} =\left(1-\frac{\mu_v}{\mu_h}\right)\frac{\pa H_z}{\pa z}. \label{ch1.1.E.div.H} \end{flalign} To obtain the vector wave equation for $\mathbf{E}$, taking the curl of \eqref{ch1.1.E.Maxwell.1} and using \eqref{ch1.1.E.div.E} yields \begin{flalign} \na^2\mathbf{E} - \left(1-\frac{\epsilon_v}{\epsilon_h}\right)\na\frac{\pa E_z}{\pa z} &=-\iu\omega \left[\left(\na_s + \hat{z}\frac{\pa}{\pa z}\right) \times \left(\mu_h\mathbf{H}_s + \mu_v\mathbf{H}_z \right) \right], \label{ch1.1.E.curl.curl.E} \end{flalign} where $\na=\na_s + \hat{z}\frac{\pa}{\pa z}$ is used and the subscript $s$ denotes components transverse to $z$. Similarly, the vector wave equation for $\mathbf{H}$ can be obtained by taking the curl of \eqref{ch1.1.E.Maxwell.2} and using \eqref{ch1.1.E.div.H} such that \begin{flalign} \na^2\mathbf{H} - \left(1-\frac{\mu_v}{\mu_h}\right)\na\frac{\pa H_z}{\pa z} &=\iu\omega \left[\left(\na_s + \hat{z}\frac{\pa}{\pa z}\right) \times \left(\epsilon_h\mathbf{E}_s + \epsilon_v\mathbf{E}_z \right) \right]. \label{ch1.1.E.curl.curl.H} \end{flalign} When the $z$-components are extracted from \eqref{ch1.1.E.curl.curl.E} and \eqref{ch1.1.E.curl.curl.H}, the equations for the $z$-components are written as \begin{subequations} \begin{flalign} &\na^2 E_z - \left(1-\frac{\epsilon_v}{\epsilon_h}\right)\frac{\pa^2 E_z}{\pa z^2} + \omega^2\mu_h\epsilon_v E_z = 0, \label{ch1.1.E.curl.curl.Ez}\\ &\na^2 H_z - \left(1-\frac{\mu_v}{\mu_h}\right)\frac{\pa^2 H_z}{\pa z^2} + \omega^2\mu_v\epsilon_h H_z = 0. \label{ch1.1.E.curl.curl.Hz} \end{flalign} \end{subequations} As usual, $E_z$ and $H_z$ can be solved for using the separation of variables technique. We define the propagation constant as $k = \omega\sqrt{\mu_h\epsilon_h}$ with dispersion relation $\omega^2\mu_h\epsilon_h - k_z^2=k_\rho^2$ for the longitudinal (or vertical) $k_z$ and transverse (or radial) $k_\rho$ wavenumbers. Two different anisotropy ratios can be defined in such media: the anisotropy ratio for the complex permittivity as $\kappa_\epsilon=\sqrt{\frac{\epsilon_h}{\epsilon_v}}$, and the anisotropy ratio for permeability as $\kappa_\mu=\sqrt{\frac{\mu_h}{\mu_v}}$. It is also convenient to define two scaled radial wavenumbers as $\tk_\rho = \frac{k_\rho}{\kappa_\epsilon}$ and $\dk_\rho = \frac{k_\rho}{\kappa_\mu}$.
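In code, these definitions reduce to a few lines. A minimal sketch (ours; names illustrative), where the branch of the complex square root is chosen such that $\operatorname{Im} k_\rho \geq 0$, as required for outgoing/decaying radial behavior:
\begin{verbatim}
import numpy as np

def layer_wavenumbers(omega, eps_h, eps_v, mu_h, mu_v, kz):
    """Sketch: k, k_rho, and the two scaled radial wavenumbers
    k_rho/kappa_eps and k_rho/kappa_mu for one uniaxial layer."""
    k = omega * np.sqrt(mu_h * eps_h + 0j)
    k_rho = np.sqrt(omega**2 * mu_h * eps_h - kz**2 + 0j)
    if k_rho.imag < 0:                 # enforce the decaying branch
        k_rho = -k_rho
    kappa_eps = np.sqrt(eps_h / eps_v + 0j)   # anisotropy ratios
    kappa_mu = np.sqrt(mu_h / mu_v + 0j)
    return k, k_rho, k_rho / kappa_eps, k_rho / kappa_mu
\end{verbatim}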
With the above definitions, the general solution to the vector wave equation in such media becomes \begin{subequations} \begin{flalign} E_z&=\left[A_n\Jn\left(\tk_\rho \rho\right) +B_n\Hn\left(\tk_\rho \rho\right)\right]e^{\iu n\phi}e^{\iu k_z z}, \label{ch1.1.E.gen.sol.Ez}\\ H_z&=\left[C_n\Jn\left(\dk_\rho \rho\right) +D_n\Hn\left(\dk_\rho \rho\right)\right]e^{\iu n\phi}e^{\iu k_z z}, \label{ch1.1.E.gen.sol.Hz} \end{flalign} \end{subequations} with $A_n$, $B_n$, $C_n$, and $D_n$ determined by boundary conditions. The dispersion relations for $\tk_\rho$ and $\dk_\rho$ are \begin{subequations} \begin{flalign} \frac{\epsilon_v}{\epsilon_h}\left(\omega^2\mu_h\epsilon_h - k_z^2\right) &= \tk_\rho^2, \label{ch1.1.E.dispersion.relation.eps}\\ \frac{\mu_v}{\mu_h}\left(\omega^2\mu_h\epsilon_h - k_z^2\right) &= \dk_\rho^2. \label{ch1.1.E.dispersion.relation2.mu} \end{flalign} \end{subequations} The transverse ($\rho$ and $\phi$) field components can be expressed in terms of the above longitudinal components directly by using Maxwell's equations \cite{Chew:Waves}, to yield \begin{subequations} \begin{flalign} \mathbf{E}_s &=\frac{1}{\omega^2\mu_h\epsilon_h-k_z^2} \left[\na_s\frac{\pa E_z}{\pa z} - \iu\omega\mu_h\hat{z}\times\na_s H_z\right] =\frac{1}{k_\rho^2} \left[\iu k_z\na_s E_z - \iu\omega\mu_h\hat{z}\times\na_s H_z\right], \label{ch1.1.E.Etrans}\\ \mathbf{H}_s &=\frac{1}{\omega^2\mu_h\epsilon_h-k_z^2} \left[\na_s\frac{\pa H_z}{\pa z} + \iu\omega\epsilon_h\hat{z}\times\na_s E_z\right] =\frac{1}{k_\rho^2} \left[\iu k_z\na_s H_z + \iu\omega\epsilon_h\hat{z}\times\na_s E_z\right], \label{ch1.1.E.Htrans} \end{flalign} \end{subequations} or, in a convenient matrix form, \renewcommand{\arraystretch}{1.4} \begin{subequations} \begin{flalign} \begin{bmatrix} E_\rho \\ H_\rho \end{bmatrix} &=\frac{1}{k^2_\rho} \begin{bmatrix} \iu k_z\frac{\pa}{\pa\rho} & -\frac{n\omega\mu_h}{\rho} \\ \frac{n\omega\epsilon_h}{\rho} & \iu k_z\frac{\pa}{\pa\rho} \end{bmatrix} \begin{bmatrix} E_z \\ H_z \end{bmatrix} =\frac{1}{k^2_\rho}\bBn \begin{bmatrix} E_z \\ H_z \end{bmatrix}, \label{ch1.1.E.EHrho}\\ \begin{bmatrix} E_\phi \\ H_\phi \end{bmatrix} &=\frac{1}{k^2_\rho} \begin{bmatrix} -\frac{nk_z}{\rho} & -\iu\omega\mu_h\frac{\pa}{\pa\rho} \\ \iu\omega\epsilon_h\frac{\pa}{\pa\rho} & -\frac{nk_z}{\rho} \\ \end{bmatrix} \begin{bmatrix} E_z \\ H_z \end{bmatrix} =\frac{1}{k^2_\rho}\bCn \begin{bmatrix} E_z \\ H_z \end{bmatrix}. \label{ch1.1.E.EHphi} \end{flalign} \end{subequations} It should be noted that $\bBn$ and $\bCn$ depend on the horizontal medium properties $\epsilon_h$ and $\mu_h$. \subsection{Local reflection and transmission coefficients} \label{sec.2.2} When a number of cylindrical layers with different properties are present, the appropriate boundary conditions need to be enforced on the solutions. This is typically done via reflection and transmission coefficients. In this section, local reflection and transmission coefficients are first derived for the two-layer case. Later, this will be extended to the case with an arbitrary number of layers. The local coefficients can be classified into two types, outgoing-wave type and standing-wave type, depending on the relative location of the source versus the observation point, as illustrated in Fig. \ref{F.2layers}. As the general solutions of $E_z$ and $H_z$ for uniaxial media are slightly different from those for isotropic media, so are the local reflection and transmission coefficients.
Nevertheless, the expressions for the {\it generalized} reflection and transmission coefficients (to account for more than two layers) {\it in terms of local ones} remain the same as those in~\cite{Moon14:Stable}. \begin{figure}[t] \centering \subfloat[\label{F.2layers.out}]{% \includegraphics[width=2.2in]{layer12_outgoing} } \hspace{2.5 cm} \subfloat[\label{F.2layers.stand}]{% \includegraphics[width=2.2in]{layer12_standing} } \caption{Two different cases of two uniaxial cylindrical layers with relevant reflection and transmission coefficients in the $\rho z$-plane: (a) Outgoing-wave case and (b) Standing-wave case.} \label{F.2layers} \end{figure} \subsubsection{Outgoing-wave case} \label{sec.2.2.1} Based on \eqref{ch1.1.E.gen.sol.Ez} and \eqref{ch1.1.E.gen.sol.Hz}, outgoing waves in a uniaxial medium can be expressed as \begin{flalign} \renewcommand{\arraystretch}{1.2} \begin{bmatrix} E_z \\ H_z \end{bmatrix} = \begin{bmatrix} \Hn(\tk_\rho \rho) & 0 \\ 0 & \Hn(\dk_\rho \rho) \\ \end{bmatrix} \begin{bmatrix} e_z \\ h_z \end{bmatrix} = \bHzn(k_\rho \rho)\cdot\mathbf{a}, \label{ch1.2.E.outgoing.EHz.gen} \end{flalign} where the column vector $\mathbf{a}$ includes $e^{\iu n\phi + \iu k_z z}$ dependence. Since the source is embedded in layer 1 for the outgoing-wave case depicted in Fig. \ref{F.2layers.out}, the $z$-components of the total fields in layer 1 and layer 2 are expressed as \begin{subequations} \begin{flalign} \begin{bmatrix} E_{z1} \\ H_{z1} \end{bmatrix} &= \bHzn(k_{1\rho}\rho)\cdot\mathbf{a}_1 + \bJzn(k_{1\rho}\rho)\cdot\bR_{12}\cdot\mathbf{a}_1, \label{ch1.2.E.out.EHz.1}\\ \begin{bmatrix} E_{z2} \\ H_{z2} \end{bmatrix} &= \bHzn(k_{2\rho}\rho)\cdot\bT_{12}\cdot\mathbf{a}_1. \label{ch1.2.E.out.EHz.2} \end{flalign} \end{subequations} Likewise, using \eqref{ch1.1.E.EHphi}, we have \begin{subequations} \begin{flalign} \begin{bmatrix} H_{\phi 1} \\ E_{\phi 1} \end{bmatrix} &= \bHpn(k_{1\rho}\rho)\cdot\mathbf{a}_1 + \bJpn(k_{1\rho}\rho)\cdot\bR_{12}\cdot\mathbf{a}_1, \label{ch1.2.E.out.EHphi.1} \\ \begin{bmatrix} H_{\phi 2} \\ E_{\phi 2} \end{bmatrix} &= \bHpn(k_{2\rho}\rho)\cdot\bT_{12}\cdot\mathbf{a}_1. \label{ch1.2.E.out.EHphi.2} \end{flalign} \end{subequations} From \eqref{ch1.2.E.out.EHz.1} through \eqref{ch1.2.E.out.EHphi.2}, two types of matrices depending only on $\rho$ are defined as \begin{subequations} \begin{flalign} \overline{\mathbf{B}}_{zn}(k_{i\rho}\rho) &= \begin{bmatrix} B_n(\tk_{i\rho}\rho) & 0 \\ 0 & B_n(\dk_{i\rho}\rho) \\ \end{bmatrix} ,\label{ch1.2.E.Bzn}\\ \overline{\mathbf{B}}_{\phi n}(k_{i\rho}\rho) &=\frac{1}{k_{i\rho}^2\rho} \begin{bmatrix} \iu\omega\epsilon_{hi}\tk_{i\rho}\rho B'_n(\tk_{i\rho}\rho) & -n k_z B_n(\dk_{i\rho}\rho) \\ -n k_z B_n(\tk_{i\rho}\rho) & -\iu\omega\mu_{hi} \dk_{i\rho}\rho B'_n(\dk_{i\rho}\rho) \\ \end{bmatrix} ,\label{ch1.2.E.Bpn} \end{flalign} \end{subequations} where $B_n$ is either $\Hn$ or $\Jn$, $k_{i\rho}^2=\omega^2\mu_{hi}\epsilon_{hi} - k_z^2$, and $\epsilon_{hi}$ and $\mu_{hi}$ are the horizontal complex permittivity and permeability in layer $i$, respectively.
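For reference, the two matrix families \eqref{ch1.2.E.Bzn} and \eqref{ch1.2.E.Bpn} can be assembled directly with standard special-function libraries. The sketch below (ours; names illustrative) uses SciPy and corresponds to the {\it unconditioned} form, which is prone to overflow and underflow for extreme arguments, as addressed in Section \ref{sec.3.range.formul}:
\begin{verbatim}
import numpy as np
from scipy.special import jv, jvp, hankel1, h1vp

def B_matrices(Bfun, Bder, n, omega, eps_h, mu_h, kz,
               k_rho, k_rho_eps, k_rho_mu, rho):
    """Sketch of the 2x2 matrices B_zn and B_phin: pass (jv, jvp) for
    B_n = J_n, or (hankel1, h1vp) for B_n = H_n^(1); Bder is the
    derivative with respect to the argument."""
    Bzn = np.array([[Bfun(n, k_rho_eps*rho), 0],
                    [0, Bfun(n, k_rho_mu*rho)]], dtype=complex)
    Bpn = np.array(
        [[1j*omega*eps_h*k_rho_eps*rho*Bder(n, k_rho_eps*rho),
          -n*kz*Bfun(n, k_rho_mu*rho)],
         [-n*kz*Bfun(n, k_rho_eps*rho),
          -1j*omega*mu_h*k_rho_mu*rho*Bder(n, k_rho_mu*rho)]],
        dtype=complex) / (k_rho**2 * rho)
    return Bzn, Bpn
\end{verbatim}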
Applying the pertinent boundary conditions, viz., continuity of $z$- and $\phi$-components at $\rho=a_1$, to \eqref{ch1.2.E.out.EHz.1}--\eqref{ch1.2.E.out.EHphi.2} yields \begin{subequations} \begin{flalign} \left[\bHzn(k_{1\rho}a_1) + \bJzn(k_{1\rho}a_1)\cdot\bR_{12}\right]\cdot\mathbf{a}_1 &= \bHzn(k_{2\rho}a_1)\cdot\bT_{12}\cdot\mathbf{a}_1, \label{ch1.2.E.BC1}\\ \left[\bHpn(k_{1\rho}a_1) + \bJpn(k_{1\rho}a_1)\cdot\bR_{12}\right]\cdot\mathbf{a}_1 &= \bHpn(k_{2\rho}a_1)\cdot\bT_{12}\cdot\mathbf{a}_1. \label{ch1.2.E.BC2} \end{flalign} \end{subequations} For notational simplicity, a shorthand notation is defined such that \begin{flalign} \overline{\mathbf{B}}_{\alpha n}(k_{i\rho}a_j)=\overline{\mathbf{B}}_{\alpha ij}. \label{ch1.2.E.shorthands} \end{flalign} In the right hand side of \eqref{ch1.2.E.shorthands}, the first, second, and third subscripts indicate the relevant components of the fields, radial wavenumbers, and radial distances, respectively. For brevity, the Hankel function superscript and subscript (kind and modal number) are suppressed in the following. Consequently, \eqref{ch1.2.E.BC1} and \eqref{ch1.2.E.BC2} are re-expressed as \begin{subequations} \begin{flalign} \left[\bH_{z11}+\bJ_{z11}\cdot\bR_{12}\right] &= \bH_{z21}\cdot\bT_{12}, \label{ch1.2.E.BC1.R12}\\ \left[\bH_{\phi 11}+\bJ_{\phi 11}\cdot\bR_{12}\right] &= \bH_{\phi 21}\cdot\bT_{12}. \label{ch1.2.E.BC2.R12} \end{flalign} \end{subequations} From \eqref{ch1.2.E.BC1.R12} and \eqref{ch1.2.E.BC2.R12}, we obtain \begin{subequations} \begin{flalign} \bR_{12} &=\left[\bJ_{z11} - \bH_{z21}\cdot\bH_{\phi 21}^{-1}\cdot\bJ_{\phi 11}\right]^{-1}\cdot \left[\bH_{z21}\cdot\bH_{\phi 21}^{-1}\cdot\bH_{\phi 11}-\bH_{z11}\right], \label{ch1.2.E.R12}\\ \bT_{12} &=\left[\bH_{z21} - \bJ_{z11}\cdot\bJ_{\phi 11}^{-1}\cdot\bH_{\phi 21}\right]^{-1}\cdot \left[\bH_{z11} - \bJ_{z11}\cdot\bJ_{\phi 11}^{-1}\cdot\bH_{\phi 11}\right]. \label{ch1.2.E.T12} \end{flalign} \end{subequations} \subsubsection{Standing-wave case} \label{sec.2.2.2} For the standing-wave case depicted in Fig. \ref{F.2layers.stand}, we now have \begin{subequations} \begin{flalign} \begin{bmatrix} E_{z1} \\ H_{z1} \end{bmatrix} &= \bJzn(k_{1\rho}\rho)\cdot\bT_{21}\cdot\mathbf{a}_2, \label{ch1.2.E.stand.EHz.1}\\ \begin{bmatrix} E_{z2} \\ H_{z2} \end{bmatrix} &= \bHzn(k_{2\rho}\rho)\cdot\bR_{21}\cdot\mathbf{a}_2 + \bJzn(k_{2\rho}\rho)\cdot\mathbf{a}_2, \label{ch1.2.E.stand.EHz.2} \end{flalign} \end{subequations} and, using \eqref{ch1.1.E.EHphi}, \begin{subequations} \begin{flalign} \begin{bmatrix} H_{\phi 1} \\ E_{\phi 1} \end{bmatrix} &= \bJpn(k_{1\rho}\rho)\cdot\bT_{21}\cdot\mathbf{a}_2, \label{ch1.2.E.stand.EHphi.1} \\ \begin{bmatrix} H_{\phi 2} \\ E_{\phi 2} \end{bmatrix} &= \bHpn(k_{2\rho}\rho)\cdot\bR_{21}\cdot\mathbf{a}_2 + \bJpn(k_{2\rho}\rho)\cdot\mathbf{a}_2.
\label{ch1.2.E.stand.EHphi.2} \end{flalign} \end{subequations} Applying the boundary conditions at $\rho=a_1$ to \eqref{ch1.2.E.stand.EHz.1}--\eqref{ch1.2.E.stand.EHphi.2} yields \begin{subequations} \begin{flalign} \bJzn(k_{1\rho}a_1)\cdot\bT_{21}\cdot\mathbf{a}_2 &=\left[\bHzn(k_{2\rho}a_1)\cdot\bR_{21} + \bJzn(k_{2\rho}a_1)\right]\cdot\mathbf{a}_2, \label{ch1.2.E.BC3}\\ \bJpn(k_{1\rho}a_1)\cdot\bT_{21}\cdot\mathbf{a}_2 &=\left[\bHpn(k_{2\rho}a_1)\cdot\bR_{21} + \bJpn(k_{2\rho}a_1)\right]\cdot\mathbf{a}_2, \label{ch1.2.E.BC4} \end{flalign} \end{subequations} which can be re-expressed as \begin{subequations} \begin{flalign} \bJ_{z11}\cdot\bT_{21} &= \left[\bH_{z21}\cdot\bR_{21}+\bJ_{z21}\right] , \label{ch1.2.E.BC1.R21}\\ \bJ_{\phi 11}\cdot\bT_{21} &= \left[\bH_{\phi 21}\cdot\bR_{21}+\bJ_{\phi 21}\right]. \label{ch1.2.E.BC2.R21} \end{flalign} \end{subequations} Therefore, $\bR_{21}$ and $\bT_{21}$ are written as \begin{subequations} \begin{flalign} \bR_{21} &=\left[\bH_{z21} - \bJ_{z11}\cdot\bJ_{\phi 11}^{-1}\cdot\bH_{\phi 21}\right]^{-1}\cdot \left[\bJ_{z11}\cdot\bJ_{\phi 11}^{-1}\cdot\bJ_{\phi 21} - \bJ_{z21}\right], \label{ch1.2.E.R21}\\ \bT_{21} &=\left[\bJ_{z11} - \bH_{z21}\cdot\bH_{\phi 21}^{-1}\cdot\bJ_{\phi 11}\right]^{-1}\cdot \left[\bJ_{z21} - \bH_{z21}\cdot\bH_{\phi 21}^{-1}\cdot\bJ_{\phi 21}\right]. \label{ch1.2.E.T21} \end{flalign} \end{subequations} Furthermore, the local reflection and transmission coefficients \eqref{ch1.2.E.R12}, \eqref{ch1.2.E.T12}, \eqref{ch1.2.E.R21}, and \eqref{ch1.2.E.T21} can be succinctly rewritten as \begin{subequations} \begin{flalign} \bR_{12} &=\bD_A^{-1}\cdot \left[\bH_{z21}\cdot\bH_{\phi 21}^{-1}\cdot\bH_{\phi 11}-\bH_{z11}\right], \label{ch1.2.E.R12.short}\\ \bR_{21} &= \bD_B^{-1}\cdot \left[\bJ_{z11}\cdot\bJ_{\phi 11}^{-1}\cdot\bJ_{\phi 21} - \bJ_{z21}\right], \label{ch1.2.E.R21.short}\\ \bT_{12} &= \bD_B^{-1}\cdot \left[\bH_{z11} - \bJ_{z11}\cdot\bJ_{\phi 11}^{-1}\cdot\bH_{\phi 11}\right], \label{ch1.2.E.T12.short}\\ \bT_{21} &= \bD_A^{-1}\cdot \left[\bJ_{z21} - \bH_{z21}\cdot\bH_{\phi 21}^{-1}\cdot\bJ_{\phi 21}\right], \label{ch1.2.E.T21.short} \end{flalign} \end{subequations} where \begin{subequations} \begin{flalign} \bD_A&=\left[\bJ_{z11} - \bH_{z21}\cdot\bH_{\phi 21}^{-1}\cdot\bJ_{\phi 11}\right], \label{ch1.2.E.DA}\\ \bD_B&=\left[\bH_{z21} - \bJ_{z11}\cdot\bJ_{\phi 11}^{-1}\cdot\bH_{\phi 21}\right]. \label{ch1.2.E.DB} \end{flalign} \end{subequations} \subsection{Spectral representation of the Green's function} \label{sec.2.3} In this section, we derive convenient analytical expressions for the Green's function in cylindrical coordinates that will facilitate further analysis in such uniaxial media. Specifically, we obtain expressions for the $z$-components of the fields produced by a point source (arbitrarily-oriented Hertzian electric dipole). In this case, the source is written as \begin{flalign} \mathbf{J}(\rr)=Il\hat{\alpha}'\delta(\rr-\rp), \label{ch1.3.E.current.source} \end{flalign} where $Il$ is the dipole moment. By taking the curl of Faraday's law, we obtain \begin{flalign} \na\times\na\times\mathbf{E} =\iu\omega\na\times\um\mathbf{H}.
\label{ch1.3.E.curl.curl.w.source} \end{flalign} In the presence of the source, \eqref{ch1.3.E.curl.curl.w.source} becomes \begin{flalign} \na^2\mathbf{E}-\left(1-\frac{\epsilon_v}{\epsilon_h}\right)\na\frac{\pa E_z}{\pa z} =-\iu\omega\na\times\um\mathbf{H} +\na\left(\frac{\na\cdot\mathbf{J}}{\iu\omega\epsilon_h}\right), \label{ch1.3.E.2nd.order.E.b} \end{flalign} where the divergence of $\mathbf{E}$ is deduced from \eqref{ch1.1.E.div.eE} and the continuity equation $\na\cdot\mathbf{J} - \iu\omega\rho_v=0$ is applied. Extracting the $z$-component of the above, we can easily show that \begin{flalign} \na^2 E_z + \omega^2\mu_h\epsilon_v E_z - \left(1-\frac{\epsilon_v}{\epsilon_h}\right)\frac{\pa^2 E_z}{\pa z^2} &=-\iu\omega\mu_h \hat{z}\cdot\mathbf{J} + \frac{\pa}{\pa z}\left(\frac{\na\cdot\mathbf{J}}{\iu\omega\epsilon_h}\right). \label{ch1.3.E.2nd.order.Ez.a} \end{flalign} Using \eqref{ch1.3.E.current.source} and $e^{\iu k_z z}$ dependence shown in \eqref{ch1.1.E.gen.sol.Ez}, we obtain \begin{flalign} \na^2 E_z+\tk^2 E_z = -\frac{\iu Il}{\omega\epsilon_h} \left[k^2(\hat{z}\cdot\hat{\alpha}')+\frac{\pa}{\pa z}\na\cdot\hat{\alpha}'\right]\delta(\rr-\rp), \label{ch1.3.E.2nd.order.Ez.c} \end{flalign} where \begin{flalign} \tk^2 &= \omega^2\mu_h\epsilon_v + \left(1-\frac{\epsilon_v}{\epsilon_h}\right)k_z^2 = \frac{\epsilon_v}{\epsilon_h}\left(\omega^2\mu_h\epsilon_h-k_z^2\right)+k_z^2 = \tk_\rho^2 + k_z^2, \label{ch1.3.E.dis.rel.tk}\\ k^2 &= \omega^2\mu_h\epsilon_h. \label{ch1.3.E.dis.rel.k} \end{flalign} Therefore, $E_z$ is obtained via the scalar Green's function, \begin{flalign} E_z=\frac{\iu Il}{\omega\epsilon_h} \left[k^2(\hat{z}\cdot\hat{\alpha}')+\frac{\pa}{\pa z'}\na'\cdot\hat{\alpha}' \right]\widetilde{g}(\rr-\rp), \label{ch1.3.E.Ez} \end{flalign} where \begin{flalign} \widetilde{g}(\rr-\rp)= \frac{e^{\iu \tk|\rr-\rp|}}{4\pi|\rr-\rp|}. \label{ch1.3.E.scalar.stretched.Green.1} \end{flalign} The derivation for the $z$-component of the magnetic field can be done quite similarly by taking the curl of Ampere's law. The equivalent to \eqref{ch1.3.E.2nd.order.Ez.a} becomes \begin{flalign} \na^2 H_z + \omega^2\mu_v\epsilon_h H_z- \left(1-\frac{\mu_v}{\mu_h}\right)\frac{\pa^2 H_z}{\pa z^2} &= - \hat{z}\cdot\na\times\mathbf{J}. \label{ch1.3.E.2nd.order.Hz.a} \end{flalign} Since $H_z$ also has $e^{\iu k_z z}$ dependence as shown in \eqref{ch1.1.E.gen.sol.Hz}, \eqref{ch1.3.E.2nd.order.Hz.a} reduces to \begin{flalign} \na^2 H_z + \dk^2 H_z &= - Il\hat{z}\cdot\na\times\hat{\alpha}'\delta(\rr-\rp), \label{ch1.3.E.2nd.order.Hz.b} \end{flalign} where \begin{flalign} \dk^2 = \omega^2\mu_v\epsilon_h+\left(1-\frac{\mu_v}{\mu_h}\right)k_z^2 = \frac{\mu_v}{\mu_h}\left(\omega^2\mu_h\epsilon_h - k_z^2\right) + k_z^2 = \dk_\rho^2 + k_z^2. \label{ch1.3.E.dis.rel.dk} \end{flalign} Now, $H_z$ is obtained via \begin{flalign} H_z = -Il\hat{z}\cdot\na'\times\hat{\alpha}' \ddot{g}(\rr-\rp), \label{ch1.3.E.Hz} \end{flalign} where \begin{flalign} \ddot{g}(\rr-\rp)= \frac{e^{\iu \dk|\rr-\rp|}}{4\pi|\rr-\rp|}. \label{ch1.3.E.scalar.stretched.Green.2} \end{flalign} The $z$-components of electromagnetic fields expressed in \eqref{ch1.3.E.Ez} and \eqref{ch1.3.E.Hz} can be expanded as a linear combination of spectral components.
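The spectral expansion invoked next can be checked numerically. The following self-contained sketch (ours; all parameter values are illustrative) verifies the scalar identity displayed below for a lossy medium, for which the $k_z$-integrand decays along the real axis; for low-loss media the deformed paths of Section \ref{sec.3.4} would be needed instead:
\begin{verbatim}
import numpy as np
from scipy.special import jv, hankel1

k = 2*np.pi*(1.0 + 0.1j)                 # lossy medium (illustrative)
rho, rhop, dphi, dz = 0.7, 0.3, 0.4, 0.5  # rho_> = rho, rho_< = rho'

kz = np.linspace(-60.0, 60.0, 6001)
krho = np.sqrt(k**2 - kz**2 + 0j)
krho = np.where(krho.imag < 0, -krho, krho)   # decaying branch

total = 0.0 + 0.0j
for n in range(-25, 26):                  # truncated modal sum
    f = jv(n, krho*rhop) * hankel1(n, krho*rho) * np.exp(1j*kz*dz)
    total += (1j/2) * np.exp(1j*n*dphi) * np.trapz(f, kz)

R = np.sqrt(rho**2 + rhop**2 - 2*rho*rhop*np.cos(dphi) + dz**2)
print(total, np.exp(1j*k*R)/R)   # should agree to a few digits
\end{verbatim}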
Using the spectral representation of the scalar Green's function, \begin{flalign} \frac{e^{\iu k|\rr-\rp|}}{|\rr-\rp|} =\suma\frac{\iu e^{\iu n(\phi-\phi')}}{2}\intmp dk_z e^{\iu k_z(z-z')}\Jn(k_\rho\rho_<)\Hn(k_\rho\rho_>), \label{ch1.3.E.expansion.scalar.Green} \end{flalign} $E_z$ and $H_z$ in homogeneous, uniaxial media can be written as \begin{flalign} \begin{bmatrix} E_z \\ H_z \end{bmatrix} =\frac{\iu Il}{4\pi\omega\epsilon_h}\suma e^{\iu n(\phi-\phi')}\intmp dk_z e^{\iu k_z(z-z')} \begin{bmatrix} \Jn(\tk_\rho \rho_<)\Hn(\tk_\rho \rho_>) & 0 \\ 0 & \Jn(\dk_\rho \rho_<)\Hn(\dk_\rho \rho_>) \end{bmatrix} \cdot\overleftarrow{\mathbf{D}}', \label{ch1.3.E.EzHz.single} \end{flalign} where $\overleftarrow{\mathbf{D}}'$ is an operator acting to the left on the primed variables, written as \begin{flalign} \overleftarrow{\mathbf{D}}' &=\frac{\iu}{2} \begin{bmatrix} (\hat{z}k^2+\frac{\pa}{\pa z'}\na')\cdot\hat{\alpha}' \\ \iu\omega\epsilon_h\hat{\alpha}'\cdot\hat{z}\times\na' \end{bmatrix}. \label{ch1.3.E.D.single} \end{flalign} In order to account for multilayers, let us consider cylindrically stratified media with the source in layer $j$. Then, the reflection terms from the boundaries at $\rho=a_j$ and $\rho=a_{j-1}$ are added to \eqref{ch1.3.E.EzHz.single} such that \begin{flalign} \begin{bmatrix} E_{zj} \\ H_{zj} \end{bmatrix} =\frac{\iu Il}{4\pi\omega\epsilon_{hj}}\suma e^{\iu n(\phi-\phi')}\intmp dk_z e^{\iu k_z(z-z')} \left\{\bJ_{zj\rho_<}\cdot\bH_{zj\rho_>} + \bH_{zj\rho}\cdot\overline{\mathbf{a}}_{jn}(\rho') + \bJ_{zj\rho}\cdot\overline{\mathbf{b}}_{jn}(\rho') \right\}\cdot\Dj, \label{ch1.3.E.EzHz.multi} \end{flalign} where \begin{subequations} \begin{flalign} \bJ_{zj\rho_<}\cdot\bH_{zj\rho_>} &= \begin{bmatrix} \Jn(\tk_{j\rho} \rho_<)\Hn(\tk_{j\rho} \rho_>) & 0 \\ 0 & \Jn(\dk_{j\rho} \rho_<)\Hn(\dk_{j\rho} \rho_>) \end{bmatrix}, \label{ch1.3.E.JH.zjrho}\\ \bH_{zj\rho} &= \begin{bmatrix} \Hn(\tk_{j\rho} \rho) & 0 \\ 0 & \Hn(\dk_{j\rho} \rho) \end{bmatrix}, \label{ch1.3.E.H.zjrho}\\ \bJ_{zj\rho} &= \begin{bmatrix} \Jn(\tk_{j\rho} \rho) & 0 \\ 0 & \Jn(\dk_{j\rho} \rho) \end{bmatrix}, \label{ch1.3.E.J.zjrho}\\ \Dj&=\frac{\iu}{2} \begin{bmatrix} (\hat{z}k_j^2-\iu k_z\na')\cdot\hat{\alpha}' \\ \iu\omega\epsilon_{hj}\hat{\alpha}'\cdot\hat{z}\times\na' \end{bmatrix}. \label{ch1.3.E.Dj.multi} \end{flalign} \end{subequations} Recall that the notations of \eqref{ch1.3.E.JH.zjrho}, \eqref{ch1.3.E.H.zjrho}, and \eqref{ch1.3.E.J.zjrho} are based on \eqref{ch1.2.E.Bzn} and \eqref{ch1.2.E.shorthands}. In \eqref{ch1.3.E.Dj.multi}, $k_j=\omega\sqrt{\mu_{hj}\epsilon_{hj}}$, and $\mu_{hj}$ and $\epsilon_{hj}$ represent horizontal permeability and complex permittivity in layer $j$, respectively. When $\hat{\alpha}'$ is represented in cylindrical coordinates as $\hat{\alpha}'=\hat{\rho}'\alpha_{\rho'}+\hat{\phi}'\alpha_{\phi'}+\hat{z}'\alpha_{z'}$, \begin{flalign} \Dj= \frac{\iu}{2} \left(\Dja+\Djb+\frac{\pa}{\pa\rho'}\Djc\right) =\frac{\iu}{2} \left( \begin{bmatrix} (k^2_{j\rho})\alpha_{z'} \\ 0 \end{bmatrix} + \begin{bmatrix} -\frac{nk_z}{\rho'}\alpha_{\phi'} \\ -\frac{n\omega\epsilon_{hj}}{\rho'}\alpha_{\rho'} \end{bmatrix} +\frac{\pa}{\pa\rho'} \begin{bmatrix} -\iu k_z\alpha_{\rho'} \\ \iu\omega\epsilon_{hj}\alpha_{\phi'} \end{bmatrix} \right).
\label{ch1.3.E.Dj.cylin} \end{flalign} Using the two constraint conditions at $\rho=a_j$ and $\rho=a_{j-1}$ detailed in \cite{Chew:Waves}, the two unknowns $\overline{\mathbf{a}}_{jn}(\rho')$ and $\overline{\mathbf{b}}_{jn}(\rho')$ appearing in \eqref{ch1.3.E.EzHz.multi} can be determined as \begin{subequations} \begin{flalign} \overline{\mathbf{a}}_{jn} &=\left[\bI-\tbR_{j,j-1}\cdot\tbR_{j,j+1}\right]^{-1}\cdot\tbR_{j,j-1}\cdot \left[\bH_{zj\rho'}+\tbR_{j,j+1}\cdot\bJ_{zj\rho'}\right], \label{ch1.3.E.an}\\ \overline{\mathbf{b}}_{jn} &=\left[\bI-\tbR_{j,j+1}\cdot\tbR_{j,j-1}\right]^{-1}\cdot\tbR_{j,j+1}\cdot \left[\bJ_{zj\rho'}+\tbR_{j,j-1}\cdot\bH_{zj\rho'}\right]. \label{ch1.3.E.bn} \end{flalign} \end{subequations} Note that the second brackets in the right hand sides of \eqref{ch1.3.E.an} and \eqref{ch1.3.E.bn} are slightly different from those for isotropic media. When $\rho>\rho'$, $\rho_<=\rho'$ and $\rho_>=\rho$. The curly bracket in \eqref{ch1.3.E.EzHz.multi} is expressed as \begin{flalign} \bJ_{zj\rho_<}\cdot\bH_{zj\rho_>} &+ \bH_{zj\rho}\cdot\overline{\mathbf{a}}_{jn}(\rho') + \bJ_{zj\rho}\cdot\overline{\mathbf{b}}_{jn}(\rho') \notag\\ &=\left[\bH_{zj\rho}+\bJ_{zj\rho}\cdot\tbR_{j,j+1}\right]\cdot\tbM_{j+}\cdot \left[\bJ_{zj\rho'}+\tbR_{j,j-1}\cdot\bH_{zj\rho'}\right]. \label{ch1.3.E.bracket.case1} \end{flalign} On the other hand, when $\rho<\rho'$, $\rho_<=\rho$ and $\rho_>=\rho'$. The curly bracket in \eqref{ch1.3.E.EzHz.multi} is now expressed as \begin{flalign} \bJ_{zj\rho_<}\cdot\bH_{zj\rho_>} &+ \bH_{zj\rho}\cdot\overline{\mathbf{a}}_{jn}(\rho') + \bJ_{zj\rho}\cdot\overline{\mathbf{b}}_{jn}(\rho') \notag\\ &=\left[\bJ_{zj\rho}+\bH_{zj\rho}\cdot\tbR_{j,j-1}\right]\cdot\tbM_{j-}\cdot \left[\bH_{zj\rho'}+\tbR_{j,j+1}\cdot\bJ_{zj\rho'}\right]. \label{ch1.3.E.bracket.case2} \end{flalign} Again, \eqref{ch1.3.E.bracket.case1} and \eqref{ch1.3.E.bracket.case2} are slightly different from those for isotropic media. When the field layer is not the same as the source layer, the approach is the same as that in \cite{Chew:Waves}.
In summary, \begin{flalign} \begin{bmatrix} E_z \\ H_z \end{bmatrix} =\frac{\iu Il}{4\pi\omega\epsilon_{hj}}\suma e^{\iu n(\phi-\phi')}\intmp dk_z e^{\iu k_{z}(z-z')} \Fn\cdot\Dj, \label{ch1.3.E.EzHz.general} \end{flalign} where: \\ \begin{subequations} \text{for Case 1: $\rho$ and $\rho'$ are in the same region and $\rho\geq\rho'$.} \begin{flalign} &\qquad\Fn= \left[\bH_{zj\rho}+\bJ_{zj\rho}\cdot\tbR_{j,j+1}\right]\cdot \tbM_{j+}\cdot \left[\bJ_{zj\rho'}+\tbR_{j,j-1}\cdot\bH_{zj\rho'}\right],& \label{ch1.3.E.EzHz.general.case1} \end{flalign} \text{for Case 2: $\rho$ and $\rho'$ are in the same region and $\rho<\rho'$.} \begin{flalign} &\qquad\Fn= \left[\bJ_{zj\rho}+\bH_{zj\rho}\cdot\tbR_{j,j-1}\right]\cdot \tbM_{j-}\cdot \left[\bH_{zj\rho'}+\tbR_{j,j+1}\cdot\bJ_{zj\rho'}\right],& \label{ch1.3.E.EzHz.general.case2} \end{flalign} \text{for Case 3: $\rho$ and $\rho'$ are in different regions and $\rho>\rho'$.} \begin{flalign} &\qquad\Fn= \left[\bH_{zi\rho}+\bJ_{zi\rho}\cdot\tbR_{i,i+1}\right]\cdot \bN_{i+}\cdot\tbT_{ji}\cdot\tbM_{j+}\cdot \left[\bJ_{zj\rho'}+\tbR_{j,j-1}\cdot\bH_{zj\rho'}\right],& \label{ch1.3.E.EzHz.general.case3} \end{flalign} \text{for Case 4: $\rho$ and $\rho'$ are in different regions and $\rho<\rho'$.} \begin{flalign} &\qquad\Fn= \left[\bJ_{zi\rho}+\bH_{zi\rho}\cdot\tbR_{i,i-1}\right]\cdot \bN_{i-}\cdot\tbT_{ji}\cdot\tbM_{j-}\cdot \left[\bH_{zj\rho'}+\tbR_{j,j+1}\cdot\bJ_{zj\rho'}\right].& \label{ch1.3.E.EzHz.general.case4} \end{flalign} \end{subequations} \section{Range-conditioned formulation} \label{sec.3.range.formul} As noted before, the poor scaling behavior of Bessel and Hankel functions for extreme arguments and/or orders causes instabilities in the numerical computation of the field expressions above for some parameter ranges. This section discusses how to stabilize the computation and provides relevant mathematical derivations. \subsection{Range-conditioned cylindrical functions and matrices} \label{sec.3.1} Range-conditioned cylindrical functions derived in~\cite{Moon14:Stable} are modified for uniaxial media because two different scaled radial wavenumbers $\tk_{i\rho}$ and $\dk_{i\rho}$ appear as part of the function arguments. Here, the subscript $i$ is the layer index, and, to recall, $\kappa_{i\epsilon}=\sqrt{\epsilon_{hi}/\epsilon_{vi}}$ and $\kappa_{i\mu}=\sqrt{\mu_{hi}/\mu_{vi}}$ are the anisotropy ratios of complex permittivity and permeability in layer $i$, respectively. In what follows, we focus only on those aspects that differ from~\cite{Moon14:Stable}. The reader is referred to~\cite{Moon14:Stable} for the fundamentals of this stabilization approach.
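The severity of the problem is easy to reproduce. In double precision, the following snippet (ours; order and argument values are illustrative) shows $\Jn$ underflowing and $\Hn$ overflowing individually, even though their product is moderate:
\begin{verbatim}
import numpy as np
from scipy.special import jv, hankel1

n, x = 300, 1.0
print(jv(n, x))        # ~ 0.0   (underflows in double precision)
print(hankel1(n, x))   # overflows (infinite imaginary part)
# Range conditioning keeps the offending factors (e.g., G_i a^n and
# G_i^{-1} a^{-n} for small arguments) analytically, so that only the
# moderate, hatted functions are ever evaluated numerically and
# products such as J_n(x) H_n(x) are assembled from factors that
# cancel exactly.
\end{verbatim}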
Table \ref{ch2.1.T.rccf} shows the definitions of the range-conditioned cylindrical functions, indicated by a hat, for uniaxial complex permittivity media, where: \begin{table}[t] \begin{center} \renewcommand{\arraystretch}{1.6} \setlength{\tabcolsep}{8pt} \caption{Definition of range-conditioned cylindrical functions for uniaxial complex permittivity media.} \begin{tabular}{ccc} \hline Small arguments & Moderate arguments & Large arguments \\ \hline $\Jn(\tk_{i\rho}a_j)=\widetilde{G}_i a_j^n\hJn(\tk_{i\rho}a_j)$ & $\Jn(\tk_{i\rho}a_j)=\widetilde{P}_{ij}\hJn(\tk_{i\rho}a_j)$ & $\Jn(\tk_{i\rho}a_j)=e^{|\tk''_{i\rho}|a_j}\hJn(\tk_{i\rho}a_j)$ \\ $\Jnd(\tk_{i\rho}a_j)=\widetilde{G}_i a_j^n\hJnd(\tk_{i\rho}a_j)$ & $\Jnd(\tk_{i\rho}a_j)=\widetilde{P}_{ij}\hJnd(\tk_{i\rho}a_j)$ & $\Jnd(\tk_{i\rho}a_j)=e^{|\tk''_{i\rho}|a_j}\hJnd(\tk_{i\rho}a_j)$ \\ $\Hn(\tk_{i\rho}a_j)=\widetilde{G}^{-1}_i a_j^{-n}\hHn(\tk_{i\rho}a_j)$ & $\Hn(\tk_{i\rho}a_j)=\widetilde{P}_{ij}^{-1}\hHn(\tk_{i\rho}a_j)$ & $\Hn(\tk_{i\rho}a_j)=e^{-\tk''_{i\rho}a_j}\hHn(\tk_{i\rho}a_j)$ \\ $\Hnd(\tk_{i\rho}a_j)=\widetilde{G}^{-1}_i a_j^{-n}\hHnd(\tk_{i\rho}a_j)$ & $\Hnd(\tk_{i\rho}a_j)=\widetilde{P}_{ij}^{-1}\hHnd(\tk_{i\rho}a_j)$ & $\Hnd(\tk_{i\rho}a_j)=e^{-\tk''_{i\rho}a_j}\hHnd(\tk_{i\rho}a_j)$ \\ \hline \end{tabular} \label{ch2.1.T.rccf} \end{center} \end{table} \begin{flalign} \widetilde{G}_i=\frac{1}{n!}\left(\frac{\tk_{i\rho}}{2}\right)^n, \label{ch2.1.E.Gi} \end{flalign} \begin{flalign} \widetilde{P}_{ij}= \begin{cases} 1,& \text{if } |\Jn(\tk_{i\rho} a_j)|^{-1}<T_m, \\ |\Jn(\tk_{i\rho} a_j)|, & \text{if } |\Jn(\tk_{i\rho} a_j)|^{-1} \ge T_m, \end{cases} \label{ch2.1.E.Pii} \end{flalign} \begin{flalign} \tk''_{i\rho} =\Im m\left[\frac{k_{i\rho}}{\kappa_{i\epsilon}}\right] =\Im m\left[\frac{k'_{i\rho}+\iu k''_{i\rho}}{\kappa_{i\epsilon}'+\iu \kappa_{i\epsilon}'' }\right] =\Im m\left[\frac{\Big(k'_{i\rho}+\iu k''_{i\rho}\Big)\Big(\kappa_{i\epsilon}'-\iu \kappa_{i\epsilon}'' \Big)} {\Big(\kappa_{i\epsilon}'\Big)^2+\Big(\kappa_{i\epsilon}''\Big)^2 }\right] =\frac{\kappa_{i\epsilon}' k''_{i\rho} - \kappa_{i\epsilon}'' k'_{i\rho}} {\left| \kappa_{i\epsilon} \right|^2}, \label{ch2.1.E.kirho} \end{flalign} where $T_m$ is the magnitude threshold for moderate arguments \cite{Moon14:Stable}. Note that subscripts $i$ and $j$ are arbitrary. Range-conditioned functions for uniaxial permeability media can be similarly constructed. The multiplicative factors associated with the new functions shown in Table \ref{ch2.1.T.rccf} can be classified into two types: $\alpha$-type and $\beta$-type, whereby the relationship between the original cylindrical functions and the range-conditioned ones can be succinctly expressed as \begin{subequations} \begin{flalign} \Jn(\tk_{i\rho}a_j) &= \widetilde\beta_{ij}\hJn(\tk_{i\rho}a_j), \\ \Jnd(\tk_{i\rho}a_j) &= \widetilde\beta_{ij}\hJnd(\tk_{i\rho}a_j), \\ \Hn(\tk_{i\rho}a_j) &= \widetilde\alpha_{ij}\hHn(\tk_{i\rho}a_j), \\ \Hnd(\tk_{i\rho}a_j) &= \widetilde\alpha_{ij}\hHnd(\tk_{i\rho}a_j). 
\end{flalign} \end{subequations} \begin{table}[t] \begin{center} \renewcommand{\arraystretch}{1.6} \setlength{\tabcolsep}{10pt} \caption{Definition of $\widetilde\alpha_{ij}$ and $\widetilde\beta_{ij}$.} \begin{tabular}{ccc} \hline Argument type & $\widetilde\alpha_{ij}$ & $\widetilde\beta_{ij}$ \\ \hline Small & $\widetilde G^{-1}_i a^{-n}_j$ & $\widetilde G_i a^n_j$ \\ Moderate & $\widetilde P_{ij}^{-1}$ & $\widetilde P_{ij}$\\ Large & $e^{-\widetilde k''_{i\rho}a_j}$ & $e^{\widetilde k''_{i\rho}a_j}$\\ \hline \end{tabular} \label{ch2.1.T.alpha.beta} \end{center} \end{table} The definitions of $\widetilde\alpha_{ij}$ and $\widetilde\beta_{ij}$ are provided in Table \ref{ch2.1.T.alpha.beta}. Similarly to the isotropic case, $\widetilde\alpha_{ij}$ and $\widetilde\beta_{ij}$ exhibit two important properties to ensure a stable computation~\cite{Moon14:Stable}: 1. {\it Reciprocity.} \begin{flalign} \widetilde\alpha_{ii}=\widetilde\beta_{ii}^{-1}. \label{ch2.1.E.alpha.beta.1} \end{flalign} 2. {\it Boundedness.} \begin{flalign} |\widetilde\beta_{im}\,\widetilde\alpha_{in}|\leq 1, \quad\text{for } a_m < a_n. \label{ch2.1.E.alpha.beta.2} \end{flalign} For the anisotropic case, it is convenient to also derive range-conditioned cylindrical `matrices' because 2$\times$2 matrices rather than scalar factors appear in the computation of the reflection and transmission coefficients in layered media. We use hats to denote those matrices as well. From \eqref{ch1.2.E.Bzn} and \eqref{ch1.2.E.Bpn}, we obtain \begin{subequations} \begin{flalign} \bJzn(k_{i\rho} a_j) &= \begin{bmatrix} \hJn(\tk_{i\rho} a_j) & 0 \\ 0 & \hJn(\dk_{i\rho}a_j) \\ \end{bmatrix} \cdot \begin{bmatrix} \widetilde{\beta}_{ij} & 0 \\ 0 & \ddot{\beta}_{ij} \\ \end{bmatrix} =\bb_{ij}\cdot\hbJzn=\hbJzn\cdot\bb_{ij} ,\label{ch2.1.E.Jzn}\\ \bHzn(k_{i\rho} a_j) &= \begin{bmatrix} \hHn(\tk_{i\rho} a_j) & 0 \\ 0 & \hHn(\dk_{i\rho}a_j) \\ \end{bmatrix} \cdot \begin{bmatrix} \widetilde{\alpha}_{ij} & 0 \\ 0 & \ddot{\alpha}_{ij} \\ \end{bmatrix} =\ba_{ij}\cdot\hbHzn=\hbHzn\cdot\ba_{ij} ,\label{ch2.1.E.Hzn}\\ \bJpn(k_{i\rho}a_j) &=\frac{1}{k_{i\rho}^2 a_j} \begin{bmatrix} \iu\omega\epsilon_{hi} \tk_{i\rho}a_j \hJnd(\tk_{i\rho}a_j) & -n k_z \hJn(\dk_{i\rho}a_j) \\ -n k_z \hJn(\tk_{i\rho}a_j) & -\iu\omega\mu_{hi} \dk_{i\rho}a_j \hJnd(\dk_{i\rho}a_j) \\ \end{bmatrix} \cdot \begin{bmatrix} \widetilde{\beta}_{ij} & 0 \\ 0 & \ddot{\beta}_{ij} \\ \end{bmatrix} =\hbJpn\cdot\bb_{ij} ,\label{ch2.1.E.Jpn}\\ \bHpn(k_{i\rho}a_j) &=\frac{1}{k_{i\rho}^2 a_j} \begin{bmatrix} \iu\omega\epsilon_{hi} \tk_{i\rho}a_j \hHnd(\tk_{i\rho}a_j) & -n k_z \hHn(\dk_{i\rho}a_j) \\ -n k_z \hHn(\tk_{i\rho}a_j) & -\iu\omega\mu_{hi} \dk_{i\rho}a_j \hHnd(\dk_{i\rho}a_j) \\ \end{bmatrix} \cdot \begin{bmatrix} \widetilde{\alpha}_{ij} & 0 \\ 0 & \ddot{\alpha}_{ij} \\ \end{bmatrix} =\hbHpn\cdot\ba_{ij}. \label{ch2.1.E.Hpn} \end{flalign} \end{subequations} Note that the two matrices in \eqref{ch2.1.E.Jzn} and \eqref{ch2.1.E.Hzn} are diagonal, so they commute. \subsection{Range-conditioned reflection and transmission coefficients} \label{sec.3.2} \begin{figure}[t] \centering \includegraphics[height=2.0in]{2layers_12.pdf} \caption{Reflection and transmission coefficients for two cylindrical layers.} \label{F.local.region12} \end{figure} Using the redefined matrices in \eqref{ch2.1.E.Jzn} -- \eqref{ch2.1.E.Hpn}, local reflection and transmission coefficients for the two-layer medium depicted in Fig. \ref{F.local.region12} can also be redefined.
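Numerically, the redefinitions that follow amount to well-scaled $2\times 2$ linear algebra. A minimal sketch (ours; the hatted matrices are assumed available as $2\times 2$ NumPy arrays) of the hatted local coefficients derived next:
\begin{verbatim}
import numpy as np

def local_coeffs_hat(hJz11, hJp11, hHz11, hHp11,
                     hJz21, hJp21, hHz21, hHp21):
    """Sketch: hatted local R/T coefficients from hatted 2x2 matrices.
    np.linalg.solve(A, B) computes A^{-1} B without forming A^{-1}."""
    hDA = hJz11 - hHz21 @ np.linalg.solve(hHp21, hJp11)
    hDB = hHz21 - hJz11 @ np.linalg.solve(hJp11, hHp21)
    hR12 = np.linalg.solve(
        hDA, hHz21 @ np.linalg.solve(hHp21, hHp11) - hHz11)
    hT12 = np.linalg.solve(
        hDB, hHz11 - hJz11 @ np.linalg.solve(hJp11, hHp11))
    hR21 = np.linalg.solve(
        hDB, hJz11 @ np.linalg.solve(hJp11, hJp21) - hJz21)
    hT21 = np.linalg.solve(
        hDA, hJz21 - hHz21 @ np.linalg.solve(hHp21, hJp21))
    return hR12, hT12, hR21, hT21
\end{verbatim}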
First of all, two intermediate matrices, $\bD_A$ and $\bD_B$, are redefined as \begin{subequations} \begin{flalign} \bD_A =\left[\hbJ_{z11}\cdot\Bs{11} -\hbH_{z21}\cdot\As{21}\cdot\Ass{21}{-1}\cdot\hbH_{\phi 21}^{-1}\cdot\hbJ_{\phi 11}\cdot\Bs{11} \right] =\left[\hbJ_{z11} - \hbH_{z21}\cdot\hbH_{\phi 21}^{-1}\cdot\hbJ_{\phi 11} \right]\cdot\Bs{11} =\hbD_A\cdot\Bs{11}, \label{ch2.2.E.DA}\\ \bD_B =\left[\hbH_{z21}\cdot\As{21} -\hbJ_{z11}\cdot\Bs{11}\cdot\Bss{11}{-1}\cdot\hbJ_{\phi 11}^{-1}\cdot\hbH_{\phi 21}\cdot\As{21} \right] =\left[\hbH_{z21} - \hbJ_{z11}\cdot\hbJ_{\phi 11}^{-1}\cdot\hbH_{\phi 21} \right]\cdot\As{21} =\hbD_B\cdot\As{21}. \label{ch2.2.E.DB} \end{flalign} \end{subequations} Therefore, the local reflection and transmission coefficients are redefined as \begin{subequations} \begin{flalign} \bR_{12} &=\bD_A^{-1}\cdot \left[\bH_{z21}\cdot\bH_{\phi 21}^{-1}\cdot\bH_{\phi 11}-\bH_{z11}\right] =\Bss{11}{-1}\cdot\hbD_A^{-1}\cdot \left[\hbH_{z21}\cdot\As{21}\cdot\Ass{21}{-1}\cdot\hbH_{\phi 21}^{-1}\cdot\hbH_{\phi 11}\cdot\As{11} -\hbH_{z11}\cdot\As{11}\right] \notag\\ &=\As{11}\cdot\hbD_A^{-1}\cdot \left[\hbH_{z21}\cdot\hbH_{\phi 21}^{-1}\cdot\hbH_{\phi 11}-\hbH_{z11}\right]\cdot\As{11} =\As{11}\cdot\hbR_{12}\cdot\As{11}, \label{ch2.2.E.R12} \end{flalign} \begin{flalign} \bR_{21} &=\bD_B^{-1}\cdot \left[\bJ_{z11}\cdot\bJ_{\phi 11}^{-1}\cdot\bJ_{\phi 21} - \bJ_{z21}\right] =\Ass{21}{-1}\cdot\hbD_B^{-1}\cdot \left[\hbJ_{z11}\cdot\Bs{11}\cdot\Bss{11}{-1}\cdot\hbJ_{\phi 11}^{-1}\cdot\hbJ_{\phi 21}\cdot\Bs{21} -\hbJ_{z21}\cdot\Bs{21}\right] \notag\\ &=\Bs{21}\cdot\hbD_B^{-1}\cdot \left[\hbJ_{z11}\cdot\hbJ_{\phi 11}^{-1}\cdot\hbJ_{\phi 21} - \hbJ_{z21}\right]\cdot\Bs{21} =\Bs{21}\cdot\hbR_{21}\cdot\Bs{21}, \label{ch2.2.E.R21} \end{flalign} \begin{flalign} \bT_{12} &=\bD_B^{-1}\cdot \left[\bH_{z11} - \bJ_{z11}\cdot\bJ_{\phi 11}^{-1}\cdot\bH_{\phi 11}\right] =\Ass{21}{-1}\cdot\hbD_B^{-1}\cdot \left[\hbH_{z11}\cdot\As{11} -\hbJ_{z11}\cdot\As{11}\cdot\Bss{11}{-1}\cdot\hbJ_{\phi 11}^{-1}\cdot\hbH_{\phi 11}\cdot\As{11} \right] \notag\\ &=\Bs{21}\cdot\hbD_B^{-1}\cdot \left[\hbH_{z11} - \hbJ_{z11}\cdot\hbJ_{\phi 11}^{-1}\cdot\hbH_{\phi 11}\right]\cdot\As{11} =\Bs{21}\cdot\hbT_{12}\cdot\As{11}, \label{ch2.2.E.T12} \end{flalign} \begin{flalign} \bT_{21} &=\bD_A^{-1}\cdot \left[\bJ_{z21} - \bH_{z21}\cdot\bH_{\phi 21}^{-1}\cdot\bJ_{\phi 21}\right] =\Bss{11}{-1}\cdot\hbD_A^{-1}\cdot \left[\hbJ_{z21}\cdot\Bs{21} -\hbH_{z21}\cdot\As{21}\cdot\Ass{21}{-1}\cdot\hbH_{\phi 21}^{-1}\cdot\hbJ_{\phi 21}\cdot\Bs{21} \right] \notag\\ &=\As{11}\cdot\hbD_A^{-1}\cdot \left[\hbJ_{z21} - \hbH_{z21}\cdot\hbH_{\phi 21}^{-1}\cdot\hbJ_{\phi 21}\right]\cdot\Bs{21} =\As{11}\cdot\hbT_{21}\cdot\Bs{21}. \label{ch2.2.E.T21} \end{flalign} \end{subequations} \begin{figure}[t] \centering \subfloat[\label{ch2.2.F.RF.Out}]{% \includegraphics[width=2.5in]{R12_outgoing.pdf} } \hspace{2.0cm} \subfloat[\label{ch2.2.F.RF.Stand}]{% \includegraphics[width=2.5in]{R32_standing.pdf} } \caption{Generalized reflection coefficients for three cylindrical layers: (a) $\tbR_{12}$ for the outgoing-wave case and (b) $\tbR_{32}$ for the standing-wave case.} \label{ch2.2.F.RF.3layers} \end{figure} We can proceed to redefine generalized reflection coefficients for multilayers, which are functions of local reflection and transmission coefficients. The generalized reflection coefficient for the outgoing-wave case for three cylindrical layers depicted in Fig.
\ref{ch2.2.F.RF.Out} is modified to \begin{flalign} \tbR_{12} &=\bR_{12}+\bT_{21}\cdot\bR_{23}\cdot \left[\bI-\bR_{21}\cdot\bR_{23}\right]^{-1}\cdot\bT_{12} \notag\\ &=\As{11}\cdot \left\{ \hbR_{12}+\hbT_{21}\cdot\Bs{21}\cdot\As{22}\cdot\hbR_{23}\cdot\Bs{21}\cdot\As{22}\cdot \left[\bI-\hbR_{21}\cdot\Bs{21}\cdot\As{22}\cdot\hbR_{23}\cdot\Bs{21}\cdot\As{22} \right]^{-1}\cdot\hbT_{12} \right\}\cdot\As{11} \notag\\ &=\As{11}\cdot\htbR_{12}\cdot\As{11}. \label{ch2.2.E.gen.R12} \end{flalign} Note that the magnitude of the multiplicative factor $\Bs{21}\cdot\As{22}$ in \eqref{ch2.2.E.gen.R12} is never greater than one due to the boundedness property, which guarantees a moderate magnitude for $\htbR_{12}$ in all cases. Since the associated multiplicative factors $\As{11}$ shown in \eqref{ch2.2.E.gen.R12} are the same as those for $\bR_{12}$ (see \eqref{ch2.2.E.R12}), they do not change when more than three layers are present. Therefore, the redefined generalized reflection coefficient between two arbitrarily-indexed adjacent layers for the outgoing-wave case can be expressed in general as \begin{flalign} \tbR_{i,i+1}=\As{ii}\cdot\htbR_{i,i+1}\cdot\As{ii}, \label{ch2.2.E.gen.Rij} \end{flalign} where \begin{flalign} \htbR_{i,i+1} &=\hbR_{i,i+1}+\hbT_{i+1,i}\cdot\Bs{i+1,i}\cdot\As{i+1,i+1}\cdot \hbR_{i+1,i+2}\cdot\Bs{i+1,i}\cdot\As{i+1,i+1} \notag\\ &\qquad\qquad\cdot \left[\bI-\hbR_{i+1,i}\cdot\Bs{i+1,i}\cdot\As{i+1,i+1}\cdot\hbR_{i+1,i+2}\cdot\Bs{i+1,i}\cdot\As{i+1,i+1} \right]^{-1}\cdot\hbT_{i,i+1}. \label{ch2.2.E.hRij} \end{flalign} The generalized reflection coefficient for the standing-wave case for three cylindrical layers depicted in Fig. \ref{ch2.2.F.RF.Stand} is modified to \begin{flalign} \tbR_{32} &=\bR_{32}+\bT_{23}\cdot\bR_{21}\cdot\left[\bI-\bR_{23}\cdot\bR_{21}\right]^{-1}\cdot\bT_{32} \notag\\ &=\Bs{32}\cdot \left\{ \hbR_{32}+\hbT_{23}\cdot\Bs{21}\cdot\As{22}\cdot\hbR_{21}\cdot\Bs{21}\cdot\As{22}\cdot \left[\bI-\hbR_{23}\cdot\Bs{21}\cdot\As{22}\cdot\hbR_{21}\cdot\Bs{21}\cdot\As{22} \right]^{-1}\cdot\hbT_{32} \right\}\cdot\Bs{32} \notag\\ &=\Bs{32}\cdot\htbR_{32}\cdot\Bs{32}. \label{ch2.2.E.gen.R32} \end{flalign} Again, the multiplicative factor $\Bs{21}\cdot\As{22}$ in \eqref{ch2.2.E.gen.R32} is never greater than one in magnitude due to the boundedness property, which also guarantees a moderate magnitude for $\htbR_{32}$ in all cases. Again, when more than three layers are present, the multiplicative factors in \eqref{ch2.2.E.gen.R32} are not affected. Therefore, the redefined generalized reflection coefficient between two arbitrarily-indexed adjacent layers for the standing-wave case is expressed as \begin{flalign} \tbR_{i+1,i}&=\Bs{i+1,i}\cdot\htbR_{i+1,i}\cdot\Bs{i+1,i}, \label{ch2.2.E.gen.Rji} \end{flalign} where \begin{flalign} \htbR_{i+1,i} &=\hbR_{i+1,i}+\hbT_{i,i+1}\cdot\Bs{i,i-1}\cdot\As{ii}\cdot\hbR_{i,i-1}\cdot \Bs{i,i-1}\cdot\As{ii} \notag\\ &\qquad\qquad\cdot \left[\bI-\hbR_{i,i+1}\cdot\Bs{i,i-1}\cdot\As{ii}\cdot\hbR_{i,i-1}\cdot\Bs{i,i-1}\cdot\As{ii} \right]^{-1}\cdot\hbT_{i+1,i}.
\label{ch2.2.E.hRji} \end{flalign} \begin{figure}[t] \centering \subfloat[\label{ch2.2.F.S.Out}]{% \includegraphics[width=2.5in]{S12_outgoing.pdf} } \hspace{2.0cm} \subfloat[\label{ch2.2.F.S.Stand}]{% \includegraphics[width=2.5in]{S32_standing.pdf} } \caption{$S$-coefficients for three cylindrical layers: (a) $\bS_{12}$ for the outgoing-wave case and (b) $\bS_{32}$ for the standing-wave case.} \label{ch2.2.F.S.3layers} \end{figure} Before we proceed to obtain generalized transmission coefficients, the redefinition of the so-called $S$-coefficients~\cite[Ch. 3]{Chew:Waves} is necessary, as they represent local transmission factors in the presence of multilayers. The $S$-coefficient for the outgoing-wave case for three cylindrical layers depicted in Fig. \ref{ch2.2.F.S.Out} is modified to \begin{flalign} \bS_{12} &=\left[\bI-\bR_{21}\cdot\bR_{23}\right]^{-1}\cdot\bT_{12} \notag\\ &=\Bs{21}\cdot \left[\bI-\hbR_{21}\cdot\Bs{21}\cdot\As{22}\cdot\hbR_{23}\cdot\Bs{21}\cdot\As{22} \right]^{-1}\cdot\hbT_{12}\cdot\As{11} \notag\\ &=\Bs{21}\cdot\hbS_{12}\cdot\As{11}. \label{ch2.2.E.S12} \end{flalign} Therefore, the redefined arbitrarily-indexed $S$-coefficient for the outgoing-wave case is written as \begin{flalign} \bS_{i,i+1}=\Bs{i+1,i}\cdot\hbS_{i,i+1}\cdot\As{ii}, \label{ch2.2.E.gen.Sij} \end{flalign} where \begin{flalign} \hbS_{i,i+1} &=\left[\bI-\hbR_{i+1,i}\cdot\Bs{i+1,i}\cdot \As{i+1,i+1}\cdot\hbR_{i+1,i+2}\cdot\Bs{i+1,i}\cdot\As{i+1,i+1} \right]^{-1}\cdot\hbT_{i,i+1}. \label{ch2.2.E.hSij} \end{flalign} The $S$-coefficient for the standing-wave case for three cylindrical layers depicted in Fig. \ref{ch2.2.F.S.Stand} is modified to \begin{flalign} \bS_{32} &=\left[\bI-\bR_{23}\cdot\bR_{21}\right]^{-1}\cdot\bT_{32} \notag\\ &=\As{22}\cdot \left[\bI-\hbR_{23}\cdot\Bs{21}\cdot\As{22}\cdot\hbR_{21}\cdot\Bs{21}\cdot\As{22} \right]^{-1}\cdot\hbT_{32}\cdot\Bs{32} \notag\\ &=\As{22}\cdot\hbS_{32}\cdot\Bs{32}. \label{ch2.2.E.S32} \end{flalign} As a result, the redefined arbitrarily-indexed $S$-coefficient for the standing-wave case is written as \begin{flalign} \bS_{i+1,i}=\As{ii}\cdot\hbS_{i+1,i}\cdot\Bs{i+1,i}, \label{ch2.2.E.gen.Sji} \end{flalign} where \begin{flalign} \hbS_{i+1,i} &=\left[\bI-\hbR_{i,i+1}\cdot\Bs{i,i-1}\cdot\As{ii}\cdot\hbR_{i,i-1}\cdot\Bs{i,i-1}\cdot\As{ii} \right]^{-1}\cdot\hbT_{i+1,i}. \label{ch2.2.E.hSji} \end{flalign} Let us now consider the generalized transmission coefficient for the outgoing-wave case ($i>j$) in cylindrically stratified media, which is expressed as \begin{flalign} \tbT_{ji}&=\bT_{i-1,i}\cdot\bS_{i-2,i-1}\cdots\bS_{j,j+1}. \label{ch2.2.E.Tji.Out.orig} \end{flalign} \eqref{ch2.2.E.Tji.Out.orig} can be modified as \begin{flalign} \tbT_{ji} &=\bT_{i-1,i}\cdot\bS_{i-2,i-1}\cdots\bS_{j,j+1} \notag\\ &=\Bs{i,i-1}\cdot\hbT_{i-1,i}\cdot \left(\prod_{k=j}^{i-2}\Bs{k+1,k}\cdot\As{k+1,k+1}\cdot\hbS_{k,k+1}\right)\cdot\As{jj} \label{ch2.2.E.Tji.Out.part}\\ &=\Bs{i,i-1}\cdot\htbT_{ji}\cdot\As{jj}. \label{ch2.2.E.Tji.Out} \end{flalign} The magnitudes of the multiplicative factors $\Bs{k+1,k}\cdot\As{k+1,k+1}$ in \eqref{ch2.2.E.Tji.Out.part} are never greater than one, which stabilizes the computation of $\htbT_{ji}$. The product in \eqref{ch2.2.E.Tji.Out.part} involves a number of 2$\times$2 matrices, so the order of the factors must be specified: the 2$\times$2 matrix for $k=j$ and the 2$\times$2 matrix for $k=i-2$ are placed rightmost and leftmost in the matrix product, respectively.
Furthermore, when $i=j+1$, the matrix product reduces to the identity matrix. It should also be noted that the associated multiplicative factors shown in \eqref{ch2.2.E.Tji.Out} are the generalized version of those shown in \eqref{ch2.2.E.T12}. Next, the generalized transmission coefficient for the standing-wave case ($i<j$) in cylindrically stratified media is expressed as \begin{flalign} \tbT_{ji}&=\bT_{i+1,i}\cdot\bS_{i+2,i+1}\cdots\bS_{j,j-1}. \label{ch2.2.E.Tji.Stand.orig} \end{flalign} Similarly, \eqref{ch2.2.E.Tji.Stand.orig} is modified to \begin{flalign} \tbT_{ji} &=\bT_{i+1,i}\cdot\bS_{i+2,i+1}\cdots\bS_{j,j-1} \notag\\ &=\As{ii}\cdot\hbT_{i+1,i}\cdot \left(\prod_{k=i+1}^{j-1}\Bs{k,k-1}\cdot\As{kk}\cdot\hbS_{k+1,k}\right)\cdot\Bs{j,j-1} \label{ch2.2.E.Tji.Stand.part}\\ &=\As{ii}\cdot\htbT_{ji}\cdot\Bs{j,j-1}. \label{ch2.2.E.Tji.Stand} \end{flalign} Again, the magnitudes of the multiplicative factors $\Bs{k,k-1}\cdot\As{kk}$ in \eqref{ch2.2.E.Tji.Stand.part} are never greater than one. For the matrix product in \eqref{ch2.2.E.Tji.Stand.part}, the 2$\times$2 matrix for $k=i+1$ and the 2$\times$2 matrix for $k=j-1$ are placed leftmost and rightmost, respectively, which is opposite to the outgoing-wave case. Furthermore, when $j=i+1$, the matrix product reduces to the identity matrix. The associated multiplicative factors shown in \eqref{ch2.2.E.Tji.Stand} are the generalized version of those shown in \eqref{ch2.2.E.T21}. Several auxiliary coefficients appearing in \eqref{ch1.3.E.EzHz.general.case1} -- \eqref{ch1.3.E.EzHz.general.case4} should be properly redefined as well. For the first integrand type, shown in \eqref{ch1.3.E.EzHz.general.case1}, $\tbM_{j+}$ is redefined as \begin{flalign} \tbM_{j+} &=\left[\bI-\tbR_{j,j-1}\cdot\tbR_{j,j+1}\right]^{-1} =\left[\bI-\Bs{j,j-1}\cdot\htbR_{j,j-1}\cdot\Bs{j,j-1}\cdot\As{jj}\cdot \htbR_{j,j+1}\cdot\As{jj}\right]^{-1} \notag\\ &=\Bs{j,[j-1,j]}\cdot \left[\bI-\Bs{j,j-1}\cdot\As{j,[j-1,j]}\cdot\htbR_{j,j-1}\cdot\Bs{j,j-1}\cdot \As{jj}\cdot\htbR_{j,j+1}\cdot\Bs{j,[j-1,j]}\cdot\As{jj} \right]^{-1}\cdot\As{j,[j-1,j]} \notag\\ &=\Bs{j,[j-1,j]}\cdot\htbM_{j+}\cdot\As{j,[j-1,j]}, \label{ch2.2.E.Mj.plus} \end{flalign} where the radial distance corresponding to subscript $[j-1,j]$ is $a_{[j-1,j]}=ca_{j-1}+(1-c)a_j$, $0\leq c\leq 1$. Two extreme choices of $a_{[j-1,j]}$ ($a_{[j-1,j]}=a_{j-1}$ and $a_{[j-1,j]}=a_j$) can be used for notational convenience, but these are not useful in the redefinition of the integrand, as clarified below in Section \ref{sec.3.3}. For the second integrand type, shown in \eqref{ch1.3.E.EzHz.general.case2}, $\tbM_{j-}$ is redefined as \begin{flalign} \tbM_{j-} &=\left[\bI-\tbR_{j,j+1}\cdot\tbR_{j,j-1}\right]^{-1} =\left[\bI-\As{jj}\cdot\htbR_{j,j+1}\cdot\As{jj}\cdot\Bs{j,j-1}\cdot \htbR_{j,j-1}\cdot\Bs{j,j-1}\right]^{-1} \notag\\ &=\As{j,[j-1,j]}\cdot \left[\bI-\Bs{j,[j-1,j]}\cdot\As{jj}\cdot\htbR_{j,j+1}\cdot\Bs{j,j-1}\cdot \As{jj}\cdot\htbR_{j,j-1}\cdot\Bs{j,j-1}\cdot\As{j,[j-1,j]} \right]^{-1}\cdot\Bs{j,[j-1,j]} \notag\\ &=\As{j,[j-1,j]}\cdot\htbM_{j-}\cdot\Bs{j,[j-1,j]}. \label{ch2.2.E.Mj.minus} \end{flalign} Again, the two extreme cases of $a_{[j-1,j]}$ are not desirable for the proper redefinition of the integrand, as shown in Section \ref{sec.3.3}.
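Returning to the ordered matrix products in \eqref{ch2.2.E.Tji.Out.part} and \eqref{ch2.2.E.Tji.Stand.part}, the ordering rules stated above are easy to get wrong in an implementation. The following sketch (ours; names illustrative) makes them explicit:
\begin{verbatim}
import numpy as np

def gen_transmission_hat(hT_last, hS_factors, outgoing=True):
    """Sketch of the ordered products for the hatted generalized
    transmission coefficient.

    outgoing:  hT_{i-1,i} . F_{i-2} ... F_j  (k = i-2 leftmost)
    standing:  hT_{i+1,i} . F_{i+1} ... F_{j-1}  (k = i+1 leftmost)
    where F_k is the rescaled factor (beta-alpha product times hS)
    and hS_factors lists the F_k ordered by increasing k."""
    prod = np.eye(2, dtype=complex)   # identity when |i - j| = 1
    factors = reversed(hS_factors) if outgoing else hS_factors
    for F in factors:
        prod = prod @ F
    return hT_last @ prod
\end{verbatim}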
For the third integrand type, shown in \eqref{ch1.3.E.EzHz.general.case3}, $\bN_{i+}$ is redefined as \begin{flalign} \bN_{i+} &=\left[\bI-\bR_{i,i-1}\cdot\tbR_{i,i+1}\right]^{-1} =\left[\bI-\Bs{i,i-1}\cdot\hbR_{i,i-1}\cdot\Bs{i,i-1}\cdot\As{ii}\cdot \htbR_{i,i+1}\cdot\As{ii}\right]^{-1} \notag\\ &=\Bs{i,i-1}\cdot \left[\bI-\hbR_{i,i-1}\cdot\Bs{i,i-1}\cdot\As{ii}\cdot\htbR_{i,i+1}\cdot\Bs{i,i-1}\cdot\As{ii} \right]^{-1}\cdot\As{i,i-1} \notag\\ &=\Bs{i,i-1}\cdot\hbN_{i+}\cdot\As{i,i-1}. \label{ch2.2.E.Ni.plus} \end{flalign} Finally, for the fourth integrand type, shown in \eqref{ch1.3.E.EzHz.general.case4}, $\bN_{i-}$ is redefined as \begin{flalign} \bN_{i-} &=\left[\bI-\bR_{i,i+1}\cdot\tbR_{i,i-1}\right]^{-1} =\left[\bI-\As{ii}\cdot\hbR_{i,i+1}\cdot\As{ii}\cdot\Bs{i,i-1}\cdot \htbR_{i,i-1}\cdot\Bs{i,i-1}\right]^{-1} \notag\\ &=\As{ii}\cdot \left[\bI-\hbR_{i,i+1}\cdot\Bs{i,i-1}\cdot\As{ii}\cdot\htbR_{i,i-1}\cdot\Bs{i,i-1}\cdot\As{ii} \right]^{-1}\cdot\Bs{ii} \notag\\ &=\As{ii}\cdot\hbN_{i-}\cdot\Bs{ii}. \label{ch2.2.E.Ni.minus} \end{flalign} \subsection{Range-conditioned integrand} \label{sec.3.3} For Case 1 in \eqref{ch1.3.E.EzHz.general.case1}, there are four arguments of interest: $k_{j\rho}a_{j-1}$, $k_{j\rho}\rho'$, $k_{j\rho}\rho$, and $k_{j\rho}a_j$. For convenience, we let $a_{j-1}=a_1$, $\rho'=a_2$, $\rho=a_3$, and $a_j=a_4$ so that $a_1<a_2<a_3<a_4$. The integrand is redefined as \begin{flalign} \Fn &=\left[\bH_{zj\rho}+\bJ_{zj\rho}\cdot\tbR_{j,j+1}\right]\cdot \tbM_{j+}\cdot \left[\bJ_{zj\rho'}+\tbR_{j,j-1}\cdot\bH_{zj\rho'}\right] \notag\\ &=\left[\hbH_{zj\rho}\cdot\As{j3}+\hbJ_{zj\rho}\cdot\Bs{j3}\cdot\As{j4}\cdot\htbR_{j,j+1}\cdot\As{j4} \right]\cdot\Bs{j2}\cdot\htbM_{j+}\cdot\As{j2} \label{ch2.3.E.EzHz.general.case1.m}\\ &\qquad\qquad\cdot \left[\hbJ_{zj\rho'}\cdot\Bs{j2}+\Bs{j1}\cdot\htbR_{j,j-1}\cdot\Bs{j1}\cdot\hbH_{zj\rho'}\cdot\As{j2} \right] \notag\\ &=\left[\Bs{j2}\cdot\As{j3}\cdot\hbH_{zj\rho} +\Bs{j3}\cdot\As{j4}\cdot\hbJ_{zj\rho}\cdot\htbR_{j,j+1}\cdot\Bs{j2}\cdot\As{j4} \right]\cdot\htbM_{j+} \notag\\ &\qquad\qquad\cdot \left[\hbJ_{zj\rho'} +\Bs{j1}\cdot\As{j2}\cdot\htbR_{j,j-1}\cdot\hbH_{zj\rho'}\cdot\Bs{j1}\cdot\As{j2} \right]. \label{ch2.3.E.EzHz.general.case1} \end{flalign} Note that, as \eqref{ch2.3.E.EzHz.general.case1.m} shows, the radial distance corresponding to the subscript $[j-1,j]$ in \eqref{ch2.2.E.Mj.plus} is chosen to be $a_2$, rather than $a_1$ or $a_4$. To be more specific, the radial distance of the source $\rho'$ is selected. This choice balances the left and right square-bracketed factors in \eqref{ch2.3.E.EzHz.general.case1} and yields a stable computation. For Case 2 in \eqref{ch1.3.E.EzHz.general.case2}, four arguments are of interest: $k_{j\rho}a_{j-1}$, $k_{j\rho}\rho$, $k_{j\rho}\rho'$, and $k_{j\rho}a_j$. Similarly, we let $a_{j-1}=a_1$, $\rho=a_2$, $\rho'=a_3$, and $a_j=a_4$ so that $a_1<a_2<a_3<a_4$.
The integrand is redefined as \begin{flalign} \Fn &=\left[\bJ_{zj\rho}+\bH_{zj\rho}\cdot\tbR_{j,j-1}\right]\cdot \tbM_{j-}\cdot \left[\bH_{zj\rho'}+\tbR_{j,j+1}\cdot\bJ_{zj\rho'}\right] \notag\\ &=\left[\hbJ_{zj\rho}\cdot\Bs{j2}+\hbH_{zj\rho}\cdot\As{j2}\cdot\Bs{j1}\cdot\htbR_{j,j-1}\cdot\Bs{j1} \right]\cdot\As{j3}\cdot\htbM_{j-}\cdot\Bs{j3} \label{ch2.3.E.EzHz.general.case2.m}\\ &\qquad\qquad\cdot \left[\hbH_{zj\rho'}\cdot\As{j3}+\As{j4}\cdot\htbR_{j,j+1}\cdot\As{j4}\cdot\hbJ_{zj\rho'}\cdot\Bs{j3} \right] \notag\\ &=\left[\Bs{j2}\cdot\As{j3}\cdot\hbJ_{zj\rho} +\Bs{j1}\cdot\As{j2}\cdot\hbH_{zj\rho}\cdot\htbR_{j,j-1}\cdot\Bs{j1}\cdot\As{j3} \right]\cdot\htbM_{j-} \notag\\ &\qquad\qquad\cdot \left[\hbH_{zj\rho'} +\Bs{j3}\cdot\As{j4}\cdot\htbR_{j,j+1}\cdot\hbJ_{zj\rho'}\cdot\Bs{j3}\cdot\As{j4} \right]. \label{ch2.3.E.EzHz.general.case2} \end{flalign} It should be noted that, as \eqref{ch2.3.E.EzHz.general.case2.m} shows, the radial distance corresponding to the subscript $[j-1,j]$ in \eqref{ch2.2.E.Mj.minus} is chosen to be $a_3$, the radial distance of the source (rather than $a_1$ or $a_4$). Again, this choice enables the left and right square-bracketed factors in \eqref{ch2.3.E.EzHz.general.case2} to be balanced and yields a stable computation. For Case 3 in \eqref{ch1.3.E.EzHz.general.case3}, there are six arguments of interest: $k_{i\rho}a_{i-1}$, $k_{i\rho}\rho$, $k_{i\rho}a_i$, $k_{j\rho}a_{j-1}$, $k_{j\rho}\rho'$, and $k_{j\rho}a_j$. We let $a_{i-1}=a_1$, $\rho=a_2$, $a_i=a_3$, $a_{j-1}=b_1$, $\rho'=b_2$, and $a_j=b_3$ so that $a_1<a_2<a_3$ and $b_1<b_2<b_3$. The integrand is redefined as \begin{flalign} \Fn &=\left[\bH_{zi\rho}+\bJ_{zi\rho}\cdot\tbR_{i,i+1}\right]\cdot \bN_{i+}\cdot\tbT_{ji}\cdot\tbM_{j+}\cdot \left[\bJ_{zj\rho'}+\tbR_{j,j-1}\cdot\bH_{zj\rho'}\right] \notag\\ &=\left[\hbH_{zi\rho}\cdot\As{i2}+\hbJ_{zi\rho}\cdot\Bs{i2}\cdot\As{i3}\cdot\htbR_{i,i+1}\cdot\As{i3} \right] \notag\\ &\qquad\qquad\cdot \left(\Bs{i1}\cdot\hbN_{i+}\cdot\As{i1}\right)\cdot \left(\Bs{i1}\cdot\htbT_{ji}\cdot\As{j3}\right)\cdot \left(\Bs{j2}\cdot\htbM_{j+}\cdot\As{j2}\right) \label{ch2.3.E.EzHz.general.case3.m}\\ &\qquad\qquad\qquad\cdot \left[\hbJ_{zj\rho'}\cdot\Bs{j2}+\Bs{j1}\cdot\htbR_{j,j-1}\cdot \Bs{j1}\cdot\hbH_{zj\rho'}\cdot\As{j2} \right] \notag\\ &=\left[\Bs{i1}\cdot\As{i2}\cdot\hbH_{zi\rho} +\Bs{i2}\cdot\As{i3}\cdot\hbJ_{zi\rho}\cdot\htbR_{i,i+1}\cdot\Bs{i1}\cdot\As{i3} \right]\cdot\hbN_{i+}\cdot\htbT_{ji}\cdot\Bs{j2}\cdot\As{j3}\cdot\htbM_{j+} \notag\\ &\qquad\qquad\qquad\cdot \left[\hbJ_{zj\rho'} +\Bs{j1}\cdot\As{j2}\cdot\htbR_{j,j-1}\cdot\hbH_{zj\rho'}\cdot\Bs{j1}\cdot\As{j2} \right]. \label{ch2.3.E.EzHz.general.case3} \end{flalign} It should be stressed that the radial distance corresponding to $\htbM_{j+}$ in \eqref{ch2.3.E.EzHz.general.case3.m} is now $b_2$, which is the radial distance of the source. For Case 4 in \eqref{ch1.3.E.EzHz.general.case4}, the arguments of interest are the same as those for Case 3.
The integrand is redefined as \begin{flalign} \Fn &=\left[\bJ_{zi\rho}+\bH_{zi\rho}\cdot\tbR_{i,i-1}\right]\cdot \bN_{i-}\cdot\tbT_{ji}\cdot\tbM_{j-}\cdot \left[\bH_{zj\rho'}+\tbR_{j,j+1}\cdot\bJ_{zj\rho'}\right] \notag\\ &=\left[\hbJ_{zi\rho}\cdot\Bs{i2}+\hbH_{zi\rho}\cdot\As{i2}\cdot\Bs{i1}\cdot\htbR_{i,i-1}\cdot\Bs{i1} \right] \notag\\ &\qquad\qquad\cdot \left(\As{i3}\cdot\hbN_{i-}\cdot\Bs{i3}\right)\cdot \left(\As{i3}\cdot\htbT_{ji}\cdot\Bs{j1}\right)\cdot \left(\As{j2}\cdot\htbM_{j-}\cdot\Bs{j2}\right) \label{ch2.3.E.EzHz.general.case4.m}\\ &\qquad\qquad\qquad\cdot \left[\hbH_{zj\rho'}\cdot\As{j2}+\As{j3}\cdot\htbR_{j,j+1}\cdot \As{j3}\cdot\hbJ_{zj\rho'}\cdot\Bs{j2} \right] \notag\\ &=\left[\Bs{i2}\cdot\As{i3}\cdot\hbJ_{zi\rho} +\Bs{i1}\cdot\As{i2}\cdot\hbH_{zi\rho}\cdot\htbR_{i,i-1}\cdot\Bs{i1}\cdot\As{i3} \right]\cdot\hbN_{i-}\cdot\htbT_{ji}\cdot\Bs{j1}\cdot\As{j2}\cdot\htbM_{j-} \notag\\ &\qquad\qquad\qquad\cdot \left[\hbH_{zj\rho'} +\Bs{j2}\cdot\As{j3}\cdot\htbR_{j,j+1}\cdot\hbJ_{zj\rho'}\cdot\Bs{j2}\cdot\As{j3} \right]. \label{ch2.3.E.EzHz.general.case4} \end{flalign} Again, the radial distance of the source, $b_2$, is chosen as the radial distance corresponding to $\htbM_{j-}$ in \eqref{ch2.3.E.EzHz.general.case4.m}. \subsection{Azimuth modal summation} \label{sec.3.4} The spectral representations of electromagnetic fields involve an infinite series as the azimuthal summation. The three components of the electromagnetic fields are expressed as \cite{Moon14:Stable} \begin{subequations} \begin{flalign} \begin{bmatrix} E_z \\ H_z \end{bmatrix} &=\frac{\iu Il}{4\pi\omega\epsilon_{hj}} \intmp dk_ze^{\iu k_z(z-z')} \left[\suma e^{\iu n(\phi-\phi')}\Fn\cdot\Dj\right], \label{ch3.1.EH.z.compnts}\\ \begin{bmatrix} E_\rho \\ H_\rho \end{bmatrix} &=\frac{\iu Il}{4\pi\omega\epsilon_{hj}}\intmp dk_z e^{\iu k_z(z-z')}\frac{1}{k^2_{\rho}} \left[\suma e^{\iu n(\phi-\phi')}\bBn\cdot\bLn(\rho)\cdot\bMn\cdot \bRn(\rho')\cdot\Dj\right], \label{ch3.2.EH.rho.compnts}\\ \begin{bmatrix} E_\phi \\ H_\phi \end{bmatrix} &=\frac{\iu Il}{4\pi\omega\epsilon_{hj}}\intmp dk_z e^{\iu k_z(z-z')}\frac{1}{k^2_{\rho}} \left[\suma e^{\iu n(\phi-\phi')}\bCn\cdot\bLn(\rho)\cdot\bMn\cdot \bRn(\rho')\cdot\Dj\right]. \label{ch4.3.EH.phi.compnts} \end{flalign} \end{subequations} To expedite the computation, it is possible to fold the series above by exploiting symmetries of the cylindrical eigenfunctions so that only zero and positive orders remain. The expressions for the folded sums are quite similar to those for isotropic media in \cite{Moon14:Stable}, so the final results are not repeated here. In addition, the spectral integrals \eqref{ch3.1.EH.z.compnts}, \eqref{ch3.2.EH.rho.compnts}, and \eqref{ch4.3.EH.phi.compnts} typically cannot be computed along the real axis in a robust fashion. These integrals should instead be numerically evaluated along suitably deformed integration paths in the complex $k_z$ plane. To this end, either the so-called Sommerfeld integration path (SIP) or the deformed SIP (DSIP) can be used, with the optimal choice between the two depending on the longitudinal distance between source and observation points~\cite{Moon14:Stable}. \textcolor{\Cblue}{ Moreover, it should be noted that the concept of direct field subtraction when source and observation points are in the same layer, discussed in \cite{Moon14:Stable}, still applies to uniaxial media.
As analytical field expressions in such media are available only in certain cases (see \ref{app1}), direct field terms for isotropic media are used in the computation. } \section{Numerical validation results} \label{sec.4.results} This section provides some validation results for the formulations detailed above. In Section \ref{sec.4.1}, the results are compared against closed-form analytical solutions due to point dipole sources, available in homogeneous uniaxial media. In Section \ref{sec.4.2}, the results are compared against Finite Element Method (FEM) results for several selected cases of practical interest in geophysical exploration. These results also examine the effect of anisotropy ratios in the surrounding cylindrical layers on the resulting electromagnetic fields. Throughout this section, field values are expressed in phasor form under the $e^{\iu\omega t}$ convention. \subsection{Homogeneous uniaxial media} \label{sec.4.1} Numerical results from the present algorithm are compared to closed-form field expressions due to point dipole sources in homogeneous uniaxial media. For a derivation of such analytical solutions, refer to \ref{app1}. The fields are evaluated throughout a square region of 10 cm $\times$ 10 cm in the $\rho z$-plane. The source is a $z$-directed Hertzian electric dipole with unit dipole moment and an operating frequency of 36 kHz. The medium has $\epsilon_{p,h}=16\epsilon_0$ [F/m], $\mu_h=16\mu_0$ [H/m], $\sigma_h=16$ [S/m], where $\epsilon_0$ and $\mu_0$ denote free-space permittivity and permeability values. These horizontal values are fixed whereas different $\epsilon_{p,v}$, $\mu_v$, and $\sigma_v$ values are considered to yield different anisotropy ratios. \begin{figure}[t] \centering \subfloat[\label{ch4.F.4k.10.1000}]{% \includegraphics[width=3.0in]{k4_10_1000.pdf} } \hfill \subfloat[\label{ch4.F.4k.20.1000}]{% \includegraphics[width=3.0in]{k4_20_1000.pdf} }\\ \subfloat[\label{ch4.F.4k.10.2000}]{% \includegraphics[width=3.0in]{k4_10_2000.pdf} } \hfill \subfloat[\label{ch4.F.4k.20.2000}]{% \includegraphics[width=3.0in]{k4_20_2000.pdf} } \caption{Relative error distribution with $\kappa=4$: (a) $n_{max}=10$, $n_{int}=1000$, (b) $n_{max}=20$, $n_{int}=1000$, (c) $n_{max}=10$, $n_{int}=2000$, and (d) $n_{max}=20$, $n_{int}=2000$.} \label{ch4.F.error.4k} \end{figure} Figs. \ref{ch4.F.4k.10.1000} -- \ref{ch4.F.4k.20.2000} show the relative error between the present algorithm and the closed-form analytical solution for different maximum orders $n_{max}$ employed in the azimuth summation and for various numbers of quadrature points $n_{int}$ employed in the numerical integration. It is assumed that $\epsilon_{p,v}=\epsilon_0$ [F/m], $\mu_v=\mu_0$ [H/m], and $\sigma_v=1$ [S/m], with $\kappa_\epsilon=\kappa_\mu=\kappa=4$. The relative error is defined as \begin{flalign} \text{relative error}_{dB} = 10\log_{10}\frac{|E_{z,a}-E_{z,n}|}{|E_{z,a}|}, \label{ch4.E.relative.error} \end{flalign} where $E_{z,a}$ and $E_{z,n}$ denote the analytical and numerical results, respectively. As expected, smaller relative errors are obtained for a larger number of quadrature points or summation terms.
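As a minimal illustration, the error metric \eqref{ch4.E.relative.error} can be evaluated over the sampling grid as in the following sketch (the variable names are ours):
\begin{verbatim}
import numpy as np

def relative_error_db(Ez_analytical, Ez_numerical):
    # Pointwise relative error in dB between analytical and numerical
    # E_z samples; more negative values indicate better agreement.
    Ez_a = np.asarray(Ez_analytical, dtype=complex)
    Ez_n = np.asarray(Ez_numerical, dtype=complex)
    return 10.0 * np.log10(np.abs(Ez_a - Ez_n) / np.abs(Ez_a))
\end{verbatim}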
\begin{figure}[t] \centering \subfloat[\label{ch4.F.2k.10.1000}]{% \includegraphics[width=3.0in]{k2_10_1000.pdf} } \hfill \subfloat[\label{ch4.F.2k.20.1000}]{% \includegraphics[width=3.0in]{k2_20_1000.pdf} }\\ \subfloat[\label{ch4.F.2k.10.2000}]{% \includegraphics[width=3.0in]{k2_10_2000.pdf} } \hfill \subfloat[\label{ch4.F.2k.20.2000}]{% \includegraphics[width=3.0in]{k2_20_2000.pdf} } \caption{Relative error distribution with $\kappa=2$: (a) $n_{max}=10$, $n_{int}=1000$, (b) $n_{max}=20$, $n_{int}=1000$, (c) $n_{max}=10$, $n_{int}=2000$, and (d) $n_{max}=20$, $n_{int}=2000$.} \label{ch4.F.error.2k} \end{figure} Fig. \ref{ch4.F.2k.10.1000} -- \ref{ch4.F.2k.20.2000} show the relative error distribution for $\kappa_\epsilon=\kappa_\mu=\kappa=2$, under the assumption of $\epsilon_{p,v}=4\epsilon_0$ [F/m], $\mu_v=4\mu_0$ [H/m], and $\sigma_v=4$ [S/m]. As expected, higher $n_{max}$ and $n_{int}$ produce smaller relative errors. \begin{figure}[t] \centering \subfloat[\label{ch4.F.sqrt2k.10.1000}]{% \includegraphics[width=3.0in]{ksqrt2_10_1000.pdf} } \hfill \subfloat[\label{ch4.F.sqrt2k.20.1000}]{% \includegraphics[width=3.0in]{ksqrt2_20_1000.pdf} }\\ \subfloat[\label{ch4.F.sqrt2k.10.2000}]{% \includegraphics[width=3.0in]{ksqrt2_10_2000.pdf} } \hfill \subfloat[\label{ch4.F.sqrt2k.20.2000}]{% \includegraphics[width=3.0in]{ksqrt2_20_2000.pdf} } \caption{Relative error distribution with $\kappa=\sqrt{2}$: (a) $n_{max}=10$, $n_{int}=1000$, (b) $n_{max}=20$, $n_{int}=1000$, (c) $n_{max}=10$, $n_{int}=2000$, and (d) $n_{max}=20$, $n_{int}=2000$.} \label{ch4.F.error.sqrt2k} \end{figure} Fig. \ref{ch4.F.sqrt2k.10.1000} -- \ref{ch4.F.sqrt2k.20.2000} show the relative error distribution for $\kappa_\epsilon=\kappa_\mu=\kappa=\sqrt{2}$, under the assumption of $\epsilon_{p,v}=8\epsilon_0$ [F/m], $\mu_v=8\mu_0$ [H/m], and $\sigma_v=8$ [S/m]. Comparing the cases with $\kappa=4$, $\kappa=2$, $\kappa=\sqrt{2}$, we observe that the error distribution shows a faster rate of decay along the vertical spatial direction for larger $\kappa$. \begin{figure}[t] \centering \subfloat[\label{ch4.F.1k.10.1000}]{% \includegraphics[width=3.0in]{k1_10_1000.pdf} } \hfill \subfloat[\label{ch4.F.1k.20.1000}]{% \includegraphics[width=3.0in]{k1_20_1000.pdf} }\\ \subfloat[\label{ch4.F.1k.10.2000}]{% \includegraphics[width=3.0in]{k1_10_2000.pdf} } \hfill \subfloat[\label{ch4.F.1k.20.2000}]{% \includegraphics[width=3.0in]{k1_20_2000.pdf} } \caption{Relative error distribution with $\kappa=1$: (a) $n_{max}=10$, $n_{int}=1000$, (b) $n_{max}=20$, $n_{int}=1000$, (c) $n_{max}=10$, $n_{int}=2000$, and (d) $n_{max}=20$, $n_{int}=2000$.} \label{ch4.F.error.1k} \end{figure} Finally, Fig. \ref{ch4.F.1k.10.1000} -- \ref{ch4.F.1k.20.2000} show the relative error distribution for $\kappa_\epsilon=\kappa_\mu=\kappa=1$, under the assumption of $\epsilon_{p,v}=16\epsilon_0$ [F/m], $\mu_v=16\mu_0$ [H/m], and $\sigma_v=16$ [S/m], which recovers the isotropic case. 
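For reference, the four parameter sets above follow a single pattern: the horizontal values are held fixed while the vertical values are swept. Assuming the anisotropy ratio is defined as $\kappa=\sqrt{\epsilon_h/\epsilon_v}$ (an assumption on our part, though it reproduces all of the values $\kappa=4,2,\sqrt{2},1$ quoted above; the same ratio applies to $\mu$ and $\sigma$), the sweep can be written as:
\begin{verbatim}
import numpy as np

eps0 = 8.8541878128e-12   # free-space permittivity [F/m]

eps_h = 16 * eps0         # horizontal value, fixed throughout
for eps_v in (1 * eps0, 4 * eps0, 8 * eps0, 16 * eps0):
    # assumed definition of the anisotropy ratio;
    # kappa = 1 recovers the isotropic case
    kappa = np.sqrt(eps_h / eps_v)
    print(f"eps_v = {eps_v / eps0:4.0f} eps0 -> kappa = {kappa:.4f}")
\end{verbatim}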
\begin{figure}[t] \centering \subfloat[\label{ch4.F.4k.10r10z}]{% \includegraphics[width=3.0in]{k4_10r10z.pdf} } \hfill \subfloat[\label{ch4.F.2k.10r10z}]{% \includegraphics[width=3.0in]{k2_10r10z.pdf} }\\ \subfloat[\label{ch4.F.sqrt2k.10r10z}]{% \includegraphics[width=3.0in]{ksqrt2_10r10z.pdf} } \hfill \subfloat[\label{ch4.F.1k.10r10z}]{% \includegraphics[width=3.0in]{k1_10r10z.pdf} } \caption{Relative error distribution in terms of various maximum orders $n_{max}$ and integration points $n_{int}$ with the receiver point at $\rho-\rho'=10$ cm, $\phi-\phi'=0^\circ$, and $z-z'=10$ cm: (a) $\kappa=4$, (b) $\kappa=2$, (c) $\kappa=\sqrt{2}$, and (d) $\kappa=1$.} \label{ch4.F.error.10r10z} \end{figure} In order to further scrutinize the effect of $n_{max}$ and $n_{int}$, the receiver point is next fixed at $\rho-\rho'=10$ cm, $\phi-\phi'=0^\circ$, and $z-z'=10$ cm. Figs. \ref{ch4.F.4k.10r10z} -- \ref{ch4.F.1k.10r10z} show the error as $n_{max}$ and $n_{int}$ vary. \textcolor{\Cred}{ When $n_{max}$ is less than about 15, the relative error is not reduced despite the increase in the number of quadrature points. Therefore, $n_{max}$ should be set sufficiently large; otherwise the convergence of results with respect to the number of quadrature points would be only apparent and the final results would still be inaccurate. For the types of scenarios considered here, we have observed that $n_{max} \gtrsim 30$ provides absolute convergence. Furthermore, it is observed that convergence is achieved faster for larger anisotropy ratios. This stems from an effective spatial ``stretching'' along the $\rho$-direction as the anisotropy ratio increases. As can be seen in \eqref{ch1.1.E.dispersion.relation.eps} and \eqref{ch1.1.E.dispersion.relation2.mu}, $\tk_\rho^2$ and $\dk_\rho^2$ decrease with an increase in the anisotropy ratio and, consequently, higher order modes (with larger $n$) exhibit a faster decay at the receiver point. } \subsection{Cylindrically layered scenarios} \label{sec.4.2} In this section, a number of practical cases of interest are considered to illustrate the applicability of the algorithm. In all the cases, both the relative permittivity $\epsilon_r$ and relative permeability \textcolor{\Cred}{$\mu_r$} are set to one so that $\epsilon_{p,h}=\epsilon_{p,v}=1$ and $\mu_{h}=\mu_{v}=1$, whereas the conductivity tensor (and complex permittivity tensor $\ue$, see \eqref{ch1.1.E.uniaxial.epsilon}) exhibits uniaxial anisotropy where the horizontal resistivity (reciprocal of horizontal conductivity) is set to 5 $\Omega\cdot m$ and the vertical resistivity is varied, leading to different anisotropy ratios $\kappa_\epsilon$. Case 1 is depicted in Fig. \ref{ch4.F.case1}. There are three layers, with the first layer representing a metallic mandrel with high conductivity, the mid-layer representing a borehole filled with an isotropic fluid, and the outermost layer representing the surrounding Earth formation with uniaxial anisotropy. Case 2 is depicted in Fig. \ref{ch4.F.case2}, where a metallic casing (third layer) is inserted between the borehole and the anisotropic formation. Table \ref{ch4.T.case1} provides the comparison of corresponding results for Case 1 in terms of the square of the anisotropy ratio.
The discrepancy in the magnitude of magnetic fields can be traced to FEM mesh truncation effects: the fields obtained by FEM have smaller magnitudes because the Dirichlet boundary condition at the mesh boundary moves the ground potential (originally at infinity) closer to the source location. This causes a small offset in the results. This is confirmed by Table \ref{ch4.T.case1.dif}, which shows the relative differences in the computed field magnitudes, with excellent agreement. Tables \ref{ch4.T.case2} and \ref{ch4.T.case2.dif} provide the corresponding results for Case 2. \begin{figure}[!htbp] \centering \subfloat[\label{ch4.F.case1}]{% \includegraphics[height=3.0in]{case1.pdf} } \hspace{2 cm} \subfloat[\label{ch4.F.case2}]{% \includegraphics[height=3.0in]{case2.pdf} } \caption{(a) Case 1 in the $\rho z$-plane and (b) Case 2 in the $\rho z$-plane.} \label{ch4.F.case12} \end{figure} \begin{table}[h] \begin{center} \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{11pt} \caption{Comparison of magnetic fields in terms of various anisotropy ratios for Case 1.} \begin{tabular}{cccc} \hline Square of & Magnetic field [A/m] & Magnetic field [A/m] & Computing time \\ anisotropy ratio $\kappa_\epsilon^2$ & (FEM) & (Present algorithm) & (Present algorithm) \\ \hline 1 & 10.3116 $\angle$98.1899$^\circ$ & 10.5475 $\angle$98.1390$^\circ$ & 10 sec. \\ 2 & 10.2365 $\angle$97.5910$^\circ$ & 10.4723 $\angle$97.5486$^\circ$ & 32 sec. \\ 5 & 10.1565 $\angle$96.9883$^\circ$ & 10.3924 $\angle$96.9612$^\circ$ & 32 sec. \\ 10 & 10.1070 $\angle$96.6327$^\circ$ & 10.3428 $\angle$96.6152$^\circ$ & 31 sec. \\ \hline \end{tabular} \label{ch4.T.case1} \vspace{1em} \caption{Comparison of magnitude difference in magnetic fields for Case 1.} \begin{tabular}{ccc} \hline & FEM & Present algorithm \\ \hline between $\kappa_\epsilon^2=1$ and $\kappa_\epsilon^2=2$ & 0.0751 & 0.0752 \\ between $\kappa_\epsilon^2=2$ and $\kappa_\epsilon^2=5$ & 0.0800 & 0.0799 \\ between $\kappa_\epsilon^2=5$ and $\kappa_\epsilon^2=10$ & 0.0495 & 0.0496 \\ \hline \end{tabular} \label{ch4.T.case1.dif} \vspace{3em} \caption{Comparison of magnetic fields in terms of various anisotropy ratios for Case 2.} \begin{tabular}{cccc} \hline Square of & Magnetic field [A/m] & Magnetic field [A/m] & Computing time \\ anisotropy ratio $\kappa_\epsilon^2$ & (FEM) & (Present algorithm) & (Present algorithm) \\ \hline 1 & 46.6091 $\angle$118.4181$^\circ$ & 46.6303 $\angle$118.4324$^\circ$ & 15 sec. \\ 2 & 46.6099 $\angle$118.4234$^\circ$ & 46.6311 $\angle$118.4381$^\circ$ & 44 sec. \\ 5 & 46.6110 $\angle$118.4283$^\circ$ & 46.6321 $\angle$118.4432$^\circ$ & 44 sec. \\ 10 & 46.6118 $\angle$118.4310$^\circ$ & 46.6329 $\angle$118.4459$^\circ$ & 44 sec. \\ \hline \end{tabular} \label{ch4.T.case2} \vspace{1em} \caption{Comparison of magnitude difference in magnetic fields for Case 2.} \begin{tabular}{ccc} \hline & FEM & Present algorithm \\ \hline between $\kappa_\epsilon^2=1$ and $\kappa_\epsilon^2=2$ & -0.0008 & -0.0008 \\ between $\kappa_\epsilon^2=2$ and $\kappa_\epsilon^2=5$ & -0.0011 & -0.0010 \\ between $\kappa_\epsilon^2=5$ and $\kappa_\epsilon^2=10$ & -0.0008 & -0.0008 \\ \hline \end{tabular} \label{ch4.T.case2.dif} \end{center} \end{table} Cases 3 and 4 are depicted in Figs. \ref{ch4.F.case3} and \ref{ch4.F.case4}; they are the same as Case 2 except for the operating frequency, which is 1 kHz for Case 3 and 125 kHz for Case 4.
Tables \ref{ch4.T.case3}, \ref{ch4.T.case3.dif}, \ref{ch4.T.case4}, and \ref{ch4.T.case4.dif} provide the corresponding results.\\ \begin{figure}[!htbp] \centering \subfloat[\label{ch4.F.case3}]{% \includegraphics[height=3.0in]{case3.pdf} } \hspace{2 cm} \subfloat[\label{ch4.F.case4}]{% \includegraphics[height=3.0in]{case4.pdf} } \caption{(a) Case 3 in the $\rho z$-plane and (b) Case 4 in the $\rho z$-plane.} \label{ch4.F.case34} \end{figure} \begin{table}[h] \begin{center} \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{11pt} \caption{Comparison of magnetic fields in terms of various anisotropy ratios for Case 3.} \begin{tabular}{cccc} \hline Square of & Magnetic field [A/m] & Magnetic field [A/m] & Computing time \\ anisotropy ratio $\kappa_\epsilon^2$ & (FEM) & (Present algorithm) & (Present algorithm) \\ \hline 1 & 290.2144 $\angle$127.4332$^\circ$ & 290.2711 $\angle$127.4185$^\circ$ & 7 sec. \\ 2 & 289.9780 $\angle$127.4252$^\circ$ & 290.0564 $\angle$127.4170$^\circ$ & 22 sec. \\ 5 & 289.7678 $\angle$127.4089$^\circ$ & 289.8157 $\angle$127.4175$^\circ$ & 22 sec. \\ 10 & 289.6754 $\angle$127.3988$^\circ$ & 289.6589 $\angle$127.4189$^\circ$ & 22 sec. \\ \hline \end{tabular} \label{ch4.T.case3} \vspace{1em} \caption{Comparison of magnitude difference in magnetic fields for Case 3.} \begin{tabular}{ccc} \hline & FEM & Present algorithm \\ \hline between $\kappa_\epsilon^2=1$ and $\kappa_\epsilon^2=2$ & 0.2364 & 0.2147 \\ between $\kappa_\epsilon^2=2$ and $\kappa_\epsilon^2=5$ & 0.2102 & 0.2407 \\ between $\kappa_\epsilon^2=5$ and $\kappa_\epsilon^2=10$ & 0.0924 & 0.1568 \\ \hline \end{tabular} \label{ch4.T.case3.dif} \vspace{3em} \caption{Comparison of magnetic fields in terms of various anisotropy ratios for Case 4.} \begin{tabular}{cccc} \hline Square of & Magnetic field [A/m] & Magnetic field [A/m] & Computing time \\ anisotropy ratio $\kappa_\epsilon^2$ & (FEM) & (Present algorithm) & (Present algorithm) \\ \hline 1 & 18.7957 $\angle$110.9753$^\circ$ & 18.8074 $\angle$110.9191$^\circ$ & 7 sec. \\ 2 & 18.7959 $\angle$110.9762$^\circ$ & 18.8076 $\angle$110.9200$^\circ$ & 19 sec. \\ 5 & 18.7962 $\angle$110.9770$^\circ$ & 18.8079 $\angle$110.9209$^\circ$ & 19 sec. \\ 10 & 18.7963 $\angle$110.9774$^\circ$ & 18.8080 $\angle$110.9213$^\circ$ & 19 sec. \\ \hline \end{tabular} \label{ch4.T.case4} \vspace{1em} \caption{Comparison of magnitude difference in magnetic fields for Case 4.} \begin{tabular}{ccc} \hline & FEM & Present algorithm \\ \hline between $\kappa_\epsilon^2=1$ and $\kappa_\epsilon^2=2$ & -0.0002 & -0.0002 \\ between $\kappa_\epsilon^2=2$ and $\kappa_\epsilon^2=5$ & -0.0003 & -0.0003 \\ between $\kappa_\epsilon^2=5$ and $\kappa_\epsilon^2=10$ & -0.0001 & -0.0001 \\ \hline \end{tabular} \label{ch4.T.case4.dif} \end{center} \end{table} Cases 5 and 6 are depicted in Figs. \ref{ch4.F.case6} and \ref{ch4.F.case9}. For Case 5, the borehole is extended to $16^{\prime\prime}$ without casing. For Case 6, both the transmitter and receiver are positioned inside the formation, which again has uniaxial anisotropy. Tables \ref{ch4.T.case6} and \ref{ch4.T.case9} provide the comparison of the corresponding results for Case 5 and Case 6 in terms of the squared anisotropy ratio.
Tables \ref{ch4.T.case6.dif} and \ref{ch4.T.case9.dif} show the relative difference in the field magnitude for each case.\\ \begin{figure}[!htbp] \centering \subfloat[\label{ch4.F.case6}]{% \includegraphics[height=3.0in]{case6.pdf} } \hspace{2 cm} \subfloat[\label{ch4.F.case9}]{% \includegraphics[height=3.0in]{case9.pdf} } \caption{(a) Case 5 in the $\rho z$-plane and (b) Case 6 in the $\rho z$-plane.} \label{ch4.F.case56} \end{figure} \begin{table}[h] \begin{center} \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{11pt} \caption{Comparison of magnetic fields in terms of various anisotropy ratios for Case 5.} \begin{tabular}{cccc} \hline Square of & Magnetic field [A/m] & Magnetic field [A/m] & Computing time \\ anisotropy ratio $\kappa_\epsilon^2$ & (FEM) & (Present algorithm) & (Present algorithm) \\ \hline 1 & 10.7855 $\angle$100.0572$^\circ$ & 10.7857 $\angle$100.0586$^\circ$ & 10 sec. \\ 2 & 10.6723 $\angle$99.4009$^\circ$ & 10.6721 $\angle$99.4024$^\circ$ & 29 sec. \\ 5 & 10.5553 $\angle$98.7180$^\circ$ & 10.5546 $\angle$98.7190$^\circ$ & 29 sec. \\ 10 & 10.4847 $\angle$98.3108$^\circ$ & 10.4839 $\angle$98.3113$^\circ$ & 29 sec. \\ \hline \end{tabular} \label{ch4.T.case6} \vspace{1em} \caption{Comparison of magnitude difference in magnetic fields for Case 5.} \begin{tabular}{ccc} \hline & FEM & Present algorithm \\ \hline between $\kappa_\epsilon^2=1$ and $\kappa_\epsilon^2=2$ & 0.1132 & 0.1136 \\ between $\kappa_\epsilon^2=2$ and $\kappa_\epsilon^2=5$ & 0.1170 & 0.1175 \\ between $\kappa_\epsilon^2=5$ and $\kappa_\epsilon^2=10$ & 0.0706 & 0.0707 \\ \hline \end{tabular} \label{ch4.T.case6.dif} \vspace{3em} \caption{Comparison of magnetic fields in terms of various anisotropy ratios for Case 6.} \begin{tabular}{cccc} \hline Square of & Magnetic field [A/m] & Magnetic field [A/m] & Computing time \\ anisotropy ratio $\kappa_\epsilon^2$ & (FEM) & (Present algorithm) & (Present algorithm) \\ \hline 1 & 8.1259 $\angle$97.0379$^\circ$ & 8.1326 $\angle$97.0341$^\circ$ & 11 sec. \\ 2 & 8.0817 $\angle$96.5262$^\circ$ & 8.0814 $\angle$96.4841$^\circ$ & 32 sec. \\ 5 & 8.0276 $\angle$95.9964$^\circ$ & 8.0271 $\angle$95.9416$^\circ$ & 32 sec. \\ 10 & 7.9939 $\angle$95.6786$^\circ$ & 7.9933 $\angle$95.6240$^\circ$ & 32 sec. \\ \hline \end{tabular} \label{ch4.T.case9} \vspace{1em} \caption{Comparison of magnitude difference in magnetic fields for Case 6.} \begin{tabular}{ccc} \hline & FEM & Present algorithm \\ \hline between $\kappa_\epsilon^2=1$ and $\kappa_\epsilon^2=2$ & 0.0442 & 0.0512 \\ between $\kappa_\epsilon^2=2$ and $\kappa_\epsilon^2=5$ & 0.0541 & 0.0543 \\ between $\kappa_\epsilon^2=5$ and $\kappa_\epsilon^2=10$ & 0.0337 & 0.0338 \\ \hline \end{tabular} \label{ch4.T.case9.dif} \end{center} \end{table} \textcolor{\Cblue}{ The magnetic field magnitude in the $y=0^{\prime\prime}$ plane is shown in Figure \ref{ch4.F.case12.y}, for Cases 1 and 2. The field is plotted on a decibel scale $10\log_{10}|\mathbf{H}|$ because of the large magnitude variation. In this scale, the small differences in magnitude between $\kappa_\epsilon^2=1$ and $\kappa_\epsilon^2=10$ observed in Tables \ref{ch4.T.case1} and \ref{ch4.T.case2} are hardly distinguishable. On the other hand, these figures clearly show that Case 1 has less confinement of fields within the source layer than Case 2 due to the presence of the metallic casing in the latter case, as depicted in Figure \ref{ch4.F.case12}.
} \begin{figure}[t] \centering \subfloat[\label{ch4.F.case1.y.k1}]{% \includegraphics[width=3.0in]{case1_k1_xzplane.pdf} } \hfill \subfloat[\label{ch4.F.case2.y.k1}]{% \includegraphics[width=3.0in]{case2_k1_xzplane.pdf} }\\ \subfloat[\label{ch4.F.case1.y.k10}]{% \includegraphics[width=3.0in]{case1_k10_xzplane.pdf} } \hfill \subfloat[\label{ch4.F.case2.y.k10}]{% \includegraphics[width=3.0in]{case2_k10_xzplane.pdf} } \caption{\textcolor{\Cblue}{Spatial distribution of the magnetic field magnitude on the $y=0^{\prime\prime}$ plane at 36 kHz: (a) Case 1 with $\kappa_\epsilon^2=1$, (b) Case 2 with $\kappa_\epsilon^2=1$, (c) Case 1 with $\kappa_\epsilon^2=10$, and (d) Case 2 with $\kappa_\epsilon^2=10$.}} \label{ch4.F.case12.y} \end{figure} \section{Conclusion} \label{sec.5.con} We provided a robust algorithm for the stable computation of electromagnetic fields in cylindrically stratified media with doubly uniaxial anisotropic layers. Range-conditioned integrands, which were originally developed for isotropic media, are extended here to uniaxial media. The associated multiplicative factors used for the stabilization are expressed \textcolor{\Cred}{as} 2$\times$2 matrices in this case. The results show that the formulation is indeed stable and has good error controllability. Illustrative scenarios were included to show the applicability of the proposed algorithm to geophysical exploration problems involving borehole sensors in Earth formations with anisotropic responses.
\section*{Introduction} The main goal of this paper is to propose a theory of mirror symmetry for varieties of general type. At first glance, the existence of such a theory would perhaps seem unlikely. After all, if $S$, $\check S$ were a mirror pair with $S$ of general type and dimension $d$, and if the first symptom of mirror symmetry is a reflection of the Hodge diamond, then we must face the possibility of having, say, $h^{0,0}(\check S)=h^{d,0}(S)$ being larger than $1$. So it is clear that the mirror $\check S$ should not be a variety in the ordinary sense. In this paper we will propose that the mirror to a variety of general type is a reducible variety equipped with a certain perverse sheaf. The cohomology of this perverse sheaf will carry a mixed Hodge structure which we expect has the desired features. The motivation for these structures arises from the study of Landau-Ginzburg models, i.e., pairs $(X,w)$ with $X$ a variety and $w:X\rightarrow\CC$ a non-constant regular function. Let us consider a very basic form of mirror symmetry, involving duality between cones. Set $M\cong \ZZ^n$, $M_{\RR}=M\otimes_{\ZZ}\RR$, $N=\Hom_{\ZZ}(M,\ZZ)$, $N_{\RR}=N\otimes_{\ZZ}\RR$. Consider a strictly convex rational polyhedral cone $\sigma\subseteq M_{\RR}$ with $\dim\sigma=\dim M_{\RR}$, and let $\check\sigma\subseteq N_{\RR}$ be the dual cone, \[ \check\sigma:=\{n \in N_{\RR}\,|\,\hbox{$\langle n,m\rangle \ge 0$ for all $m\in\sigma$}\}. \] The corresponding toric varieties \begin{align*} X_{\sigma}{} & :=\Spec \CC[\check\sigma\cap N]\\ X_{\check\sigma}{} & :=\Spec \CC[\sigma\cap M] \end{align*} are usually singular. Choose desingularizations by choosing fans $\Sigma$ and $\check\Sigma$ which are refinements of $\sigma$ and $\check\sigma$ respectively, with $\Sigma$ and $\check\Sigma$ consisting only of standard cones, i.e., cones generated by part of a basis for $M$ or $N$. We now obtain smooth toric varieties $X_{\Sigma}$ and $X_{\check\Sigma}$, and in addition, we obtain Landau-Ginzburg potentials as follows. For each ray $\rho\in\Sigma$, let $m_{\rho}\in M$ be the primitive generator of $\rho$, so that $z^{m_{\rho}}$ is a monomial regular function on $X_{\check\Sigma}$. Similarly, for each ray $\check\rho\in\check\Sigma$, with primitive generator $n_{\check\rho}\in N$, $z^{n_{\check\rho}}$ is a monomial function on $X_{\Sigma}$. We then define Landau-Ginzburg potentials $w:X_{\Sigma} \rightarrow \CC$ and $\check w:X_{\check\Sigma}\rightarrow\CC$ as \begin{align} \label{generalmirror1} w {} & :=\sum_{\check\rho} c_{\check\rho} z^{n_{\check\rho}}\\ \label{generalmirror2} \check w {} & :=\sum_{\rho} c_{\rho} z^{m_{\rho}} \end{align} where $c_{\check\rho},c_{\rho}\in\CC$ are general coefficients. Note $w$ (resp.\ $\check w$) factors through the resolution $X_\Sigma\ra X_\sigma$ (resp.\ $X_{\check\Sigma}\ra X_{\check\sigma}$). Now in general, it is currently understood that given a Landau-Ginzburg model $w:X\rightarrow \CC$, the correct cohomology group to associate to this model is the one obtained from the \emph{twisted de Rham complex}, see \cite{KKP08},\,3.2. In this context, since in general $w$ is not proper, we need to partially compactify first. We choose a partial compactification $X\subseteq \bar{X}$ with $D:=\bar{X}\setminus X$ being normal crossings and such that $w$ extends to a projective map $\bar{w}:\bar{X}\rightarrow\CC$. We then consider the complex \[ (\Omega^{\bullet}_{\bar{X}}(\log D)[u],ud+d\bar{w}\wedge) \] where $u\in\CC$ is a parameter. 
The relevant cohomology groups in the Landau-Ginzburg theory are the hypercohomology groups of this complex. By a theorem of Barannikov and Kontsevich (unpublished), the hypercohomology is a free $\CC[u]$-module. New proofs were given by Sabbah \cite{Sab99} and Ogus and Vologodsky \cite{OV07}. As stated in \cite{Sab99}, this result takes the following form: \begin{theorem} \label{sabbahmaintheorem} Let $\bar{w}:\bar{X}\rightarrow\CC$ be projective, $D\subseteq \bar{X}$ a normal crossing divisor. Then \begin{enumerate} \item The hypercohomology groups of the complexes \[ (\Omega^{\bullet}_{\bar{X}}(\log D),d+d\bar{w}\wedge) \quad\hbox{ and }\quad (\Omega^{\bullet}_{\bar{X}}(\log D),d\bar{w}\wedge) \] have the same dimensions. \item Let $p_1,\ldots,p_k\in\CC$ be the critical values of $\bar{w}$ and $j:X=\bar{X}\setminus D\hookrightarrow \bar{X}$ the inclusion. Then in the analytic topology, \begin{equation} \label{singfibresum} \dim \HH^i(\bar X,(\Omega^{\bullet}_{\bar{X}}(\log D),d\bar{w}\wedge)) =\sum_{\ell=1}^k \dim \HH^{i-1}(\bar{w}^{-1}(p_\ell),\phi_{\bar{w},p_\ell}{\bf R}j_*\CC_X). \end{equation} \end{enumerate} \end{theorem} Here $\phi_{\bar{w},p_\ell}$ denotes the vanishing cycle functor at $p_\ell$; for a precise definition, see \S\ref{section3}. Most importantly for our current discussion, $\phi_{\bar{w},p_\ell}{\bf R}j_*\CC_X$ is a sheaf supported on the critical locus of the corresponding singular fibre. Now one subtlety in Landau-Ginzburg mirror symmetry is that this cohomology group is often too big, because there are some singular fibres of $w$ which we do not wish to consider. These singular fibres often ``come in from infinity'' as the Landau-Ginzburg potential is varied, and to get the correct group, we need to ignore these fibres. In particular, we should only use certain singular fibres in the sum \eqref{singfibresum}. We make suggestions in \S\ref{subsectionsingularfibres} about how to deal with this in general. In the general setup of $X_{\Sigma}$, $X_{\check\Sigma}$ given above, the singular fibres can be quite complicated. We are not yet proposing a general method for computing the relevant cohomology groups in this setup. On the other hand, we may restrict attention to a special case for which we show a mirror duality of Hodge numbers. So we now specialize to the following setup. Let $\Delta\subseteq M_{\RR}$ be a lattice polytope which is the Newton (moment) polytope of a non-singular projective toric variety $\PP_{\Delta}$. Define the cone $\operatorname{{Cone}}(\Delta)\subseteq M_{\RR}\oplus\RR$ by \[ \operatorname{{Cone}}(\Delta):=\{(rm,r)|m\in\Delta, r\ge 0\}. \] We can take $\sigma=\operatorname{{Cone}}(\Delta)$ in the above construction. We now subdivide $\sigma$ by choosing a triangulation of $\Delta$ into standard simplices; we assume that we can do this. This then gives rise to a fan $\Sigma$ consisting of cones over these simplices. Geometrically, $X_\Sigma$ is a crepant resolution of the Gorenstein singularity $X_\sigma$. On the other hand, as we shall check in \S\ref{section1}, the cone $\check\sigma$ can be subdivided to give a fan $\check\Sigma$ via a star subdivision with center the ray generated by $(0,1)\in N_{\RR}\oplus\RR$. Geometrically, this is the contraction of the zero section $$X_{\check\Sigma}=\Tot(\shO_{\PP_\Delta}(-1))\ra X_{\check\sigma},$$ where $\Tot(\shV)$ denotes the relative $\Spec$ of $\Sym\shV^*$ for a vector bundle $\shV$.
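As a simple illustration of this setup (included only to fix ideas; the coefficients below are general elements of $\CC$), take $M=\ZZ$ and $\Delta=[0,2]\subseteq M_{\RR}$, so that $\PP_{\Delta}=\PP^1$ and $\Delta$ has the single interior lattice point $1$. Then $\sigma=\operatorname{{Cone}}(\Delta)$ is generated by $(0,1)$ and $(2,1)$, and writing $u=z^{(1,0)}$, $v=z^{(-1,2)}$ and $t=z^{(0,1)}$ for the monoid generators of $\check\sigma\cap(N\oplus\ZZ)$ gives \[ X_{\sigma}=\Spec\CC[u,v,t]/(uv-t^2), \] the $A_1$-singularity. The triangulation of $\Delta$ into the standard simplices $[0,1]$ and $[1,2]$ yields the fan $\Sigma$ with rays through $(0,1)$, $(1,1)$ and $(2,1)$, i.e., the minimal resolution $X_{\Sigma}\rightarrow X_{\sigma}$, while the star subdivision of $\check\sigma$ along the ray through $(0,1)$ yields $X_{\check\Sigma}=\Tot(\shO_{\PP^1}(-1))$. The potentials \eqref{generalmirror1} and \eqref{generalmirror2} then read \[ w=c_1z^{(1,0)}+c_2z^{(0,1)}+c_3z^{(-1,2)},\qquad \check w=\check c_0 z^{(0,1)}+\check c_1 z^{(1,1)}+\check c_2 z^{(2,1)}, \] and the zero locus $S\subseteq\PP^1$ cut out by the three monomials of $\check w$ indexed by $\Delta\cap M=\{0,1,2\}$ consists of two points.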
Given these choices of $X_{\Sigma}$ and $X_{\check \Sigma}$, we obtain as above Landau-Ginzburg potentials $w$ and $\check w$ on these two spaces respectively. As we shall see in \S\ref{section1}, the origin $0\in \CC$ is a critical value for both $w$ and $\check w$. Furthermore, $\check w^{-1}(0)$ is quite simple, consisting of a normal crossings union of two divisors whose intersection is a hyperplane section $S$ of $\PP_{\Delta}$ determined by $\check w$. One can show that $\phi_{\check w,0}{\bf R}j_*\CC_{X_{\check\Sigma}}$ is just the constant sheaf $\CC$ on $S$ (shifted in degree). Moreover, the derived category of coherent sheaves on $S$ is equivalent to the category of singularities of $\check w: X_{\check\Sigma}\ra\CC$ by a generalized Kn\"orrer periodicity, see \S\ref{Section_Knoerrer}. In particular, since $0$ is the only critical value of $\check w$, \[ \HH^{i}(\bar X_{\check\Sigma},(\Omega^\bullet_{\bar{X}_{\check\Sigma}},d\check{w}\wedge)) =\HH^{i-1}(\check w^{-1}(0),\phi_{\check w,0}{\bf R}j_*\CC_{X_{\check \Sigma}}) =H^{i-2}(S,\CC). \] On the other hand, $w^{-1}(0)$ will, in general, be much more complicated, although with the assumptions given, it will still be normal crossings as long as $S$ has non-negative Kodaira dimension $\kappa(S)$. Most of the time we will restrict ourselves to this case because the case of negative Kodaira dimension behaves somewhat differently and has already been studied intensively in the literature. See \S\ref{sectioncubic} for a detailed discussion of a non-trivial negative Kodaira dimension example. So let us assume $\kappa(S)\ge 0$. The singular locus of $w^{-1}(0)$ is in fact proper (before compactifying) and the perverse sheaf $\phi_{w,0}{\bf R}j_*\CC_{X_{\Sigma}}$ is supported entirely on this singular locus. Our proposal for addressing, to a first approximation, the question raised at the beginning of the paper is as follows. Let $\check S$ be the singular locus of $w^{-1}(0)$, and let \begin{equation} \label{perverseshifted} \shF_{\check S}:=\phi_{w,0}{\bf R}j_*\CC_{X_{\Sigma}}[1]. \end{equation} Then we should consider the pair $(\check S,\shF_{\check S})$ to be mirror to the pair $(S, \underline{\CC})$, where $\underline{\CC}$ denotes the constant sheaf on $S$ with coefficients $\CC$. This should give a version of mirror symmetry for which we verify the symmetry of Hodge numbers as follows. Note that $\phi_{\check w,0}{\bf R}j_*\CC_{X_{\check \Sigma}}$ supports a mixed Hodge structure given by Schmid-Steenbrink. We transport this to $\shF_{\check S}$ using (\ref{perverseshifted}) and apply the shift [1] to the Hodge and also to the weight filtration, i.e., \begin{equation*} \label{shiftHpqeq} h^{p,q}\HH^i(\check S,\shF_{\check S})=h^{p+1,q+1}\HH^{i+1}(w^{-1}(0),\phi_{w,0}{\bf R}j_*\CC_{X}). \end{equation*} It can be observed that the weight filtration reflects the Kodaira dimension of $S$, see Prop.~\ref{weightandkodaira}. However, we shall discard the weights and define \begin{equation} \label{hpqFdef} h^{p,q}(\check S,\shF_{\check S}) = \sum_k h^{p,q+k}\,\HH^{p+q}(\check S,\shF_{\check S}). \end{equation} Our main theorem is then: \begin{theorem} \label{mainthmintro} Assume that the fan $\Sigma$ comes from a \emph{star-like decomposition} of $\Delta$ (see Def.\ \ref{starlikedefinition}). Then $h^{p,q}(S) = h^{d-p,q}(\check S,\shF_{\check S})$, with $d=\dim S$.
\end{theorem} Note that this result implies that $h^{p,q}(\check S,\shF_{\check S})$ is independent of the choice of a crepant resolution $X_\Sigma\ra X_\sigma$, i.e., independent of the choice of a triangulation of $\Delta$. The result suggests that there might also be a version $h^{p,q}(X,w)=h^{\dim X-p,q}(\check X,\check w)$; however, it is not currently known how to define $h^{p,q}(X,w)$ and $h^{\dim X-p,q}(\check X,\check w)$ directly from the twisted de Rham complex without using Thm.~\ref{sabbahmaintheorem}. Note however that rearranging indices yields: \begin{corollary} \label{cor_mainthm} With the assumption of Thm.~\ref{mainthmintro}, $$h^{p,q}(X_\Sigma,w) = h^{n-p,q}(X_{\check\Sigma},\check w)$$ where $n=\dim X_\Sigma$, $h^{p,q}(X_\Sigma,w)=\sum_k h^{p,q+k}H^{p+q}(X_\Sigma,w)$, and $H^i(X_\Sigma,w)$ is defined as the $(i-1)$th hypercohomology of $\phi_{w,0}{\bf R}j_*\CC_{X_\Sigma}$ with its Schmid-Steenbrink mixed Hodge structure (analogously for $(X_{\check\Sigma},\check w)$). \end{corollary} A formula for $h^{p,q}(S)$ was given by Danilov-Khovanskii. To compute $h^{p,q}(\check S)$, we use the toric description of the resolution $X_\Sigma$ and the weight filtration spectral sequence of the cohomological mixed Hodge complexes computing vanishing cycles. The structure of the paper is as follows. In \S\ref{section1}, we introduce the combinatorial setup and describe in detail the construction of the proposed Landau-Ginzburg mirrors and their structure. In \S\ref{section1.5} we speculate on homological mirror symmetry for our constructions, and explain its relationship with the results we prove in the paper about cohomology groups. This section, as well as \S\ref{sectionalephnull}, can be viewed as extensions of this introduction. \S\ref{section2} reviews basic formulae for Hodge numbers of hypersurfaces in toric varieties. \S\ref{section3} fills in some of the necessary background in mixed Hodge theory. \S\ref{section4} then gives the details of the calculation of the Hodge numbers of the mirror: this is the heart of the paper. \S\ref{sectionalephnull} discusses, without too many details, various additional issues associated to our proposals: the relationship of our construction with the discrete Legendre transform and the Gross-Siebert picture; the relationship with the proposal of Abouzaid, Auroux and Katzarkov \cite{AAK}; mirrors for complete intersections; and an orbifold version of some of our conjectures. Finally, \S\ref{sectioncubic} considers in detail a Fano example, namely the cubic threefold. We would like to thank Denis Auroux, Patrick Clarke, David Favero, Hiroshi Iritani, Maxim Kontsevich, Conan Leung, Kevin Lin, Arthur Ogus, Tony Pantev, Chris Peters, Bernd Siebert, Manfred Herbst, Daniel Pomerleano, Dmytro Shklyarov and Duco van Straten for useful conversations. \section{The setup: The mirror pair of Landau-Ginzburg models} \label{section1} \subsection{Resolutions} \label{section11} As in the introduction, let \[ M\cong \ZZ^{d+1},\quad M_{\RR}=M\otimes_{\ZZ}\RR,\quad N=\Hom_{\ZZ}(M,\ZZ),\quad N_{\RR}=N\otimes_{\ZZ}\RR. \] We will also use the notation \[ \bar{M}\cong M\oplus \ZZ,\quad \bar{M}_{\RR}=\bar{M} \otimes_{\ZZ}\RR,\quad \bar{N}=\Hom_{\ZZ}(\bar{M},\ZZ),\quad \bar{N}_{\RR}=\bar{N}\otimes_{\ZZ}\RR. \] We recall briefly the standard correspondence between convex polyhedra and fans. 
First, a \emph{lattice polyhedron} $\Delta\subseteq M_{\RR}$ is an intersection of half-spaces whose boundaries have rational slope, such that $\Delta$ has at least one vertex and all vertices of $\Delta$ lie in $M$. A \emph{lattice polytope} is a compact lattice polyhedron. If $\operatorname{{Cone}}(\Delta)$ denotes the cone over $\Delta$ as defined in the introduction, then a lattice polyhedron $\Delta$ defines a toric variety $\PP_{\Delta}$ by $$\PP_{\Delta} = \Proj \CC[\operatorname{{Cone}}(\Delta)\cap (M\oplus\ZZ)]$$ where $\CC[P]$ denotes the monoid algebra of a monoid $P$. For $\tau\subseteq \Delta$ a face, the normal cone to $\Delta$ along $\tau$ is \[ N_{\Delta}(\tau)=\{n\in N_{\RR}\,|\, \hbox{$n|_{\tau}={\rm constant}$, $\langle n,m\rangle\ge \langle n,m'\rangle$ for all $m\in\Delta$, $m' \in\tau$}\}. \] The normal fan of $\Delta$ is \[ \check\Sigma_{\Delta}:=\{N_{\Delta}(\tau)\,|\,\hbox{ $\tau$ is a face of $\Delta$}\}. \] The normal fan $\check\Sigma_{\Delta}$ carries a strictly convex piecewise linear function $\varphi_{\Delta}$ defined by \[ \varphi_{\Delta}(n)=-\inf\{\langle n,m\rangle \,|\, m\in\Delta\}. \] Conversely, given a fan $\Sigma$ in $N_{\RR}$ whose support $|\Sigma|$ is convex, and given a strictly convex piecewise linear function with integral slopes $\varphi:|\Sigma|\rightarrow \RR$, the \emph{Newton polyhedron} of $\varphi$ is \[ \Delta_{\varphi}:=\{m\in M_{\RR}\,|\,\hbox{$\varphi(n)+\langle n,m\rangle\ge 0$ for all $n\in |\Sigma|$}\}. \] By standard toric geometry this coincides up to translation with the convex hull of all points of $M$ indexing monomial sections of the line bundle associated to the divisor $\sum_\rho \varphi(n_\rho) D_\rho$. Here the sum is taken over the rays $\rho$ of $\Sigma$, $D_\rho$ being the corresponding toric prime divisor, and $n_\rho$ the primitive generator of $\rho$. So we may also associate a Newton polytope to a Laurent polynomial or a line bundle.\footnote{If no global section exists, the polytope will be empty.} If $\Sigma$ is a fan, we denote by $X_{\Sigma}$ the toric variety defined by $\Sigma$. If $\sigma$ is a strictly convex rational polyhedral cone, then we write $X_{\sigma}$ for the affine toric variety defined by the cone $\sigma$. Given $\tau\in\Sigma$, $V(\tau)$ will denote the closure of the torus orbit in $X_\Sigma$ corresponding to $\tau$, e.g., $V(\{0\})=X_\Sigma$. For $\rho\in\Sigma$ a ray, $V(\rho)$ is a toric divisor which we will also call $D_\rho$. Now fix once and for all a lattice polytope $\Delta\subseteq M_{\RR}$ with $\dim\Delta=\dim M_\RR>0$. This defines a projective toric variety $\PP_{\Delta}$ with an ample line bundle $\O_{\PP_{\Delta}}(1)$ with Newton polytope $\Delta$. The fan defining this toric variety is the normal fan $\check\Sigma_{\Delta}$ of $\Delta$, and the line bundle $\O_{\PP_{\Delta}}(1)$ is induced by the piecewise linear function $\varphi_{\Delta}:N_{\RR}\rightarrow\RR$ on the fan $\check\Sigma_{\Delta}$. We shall assume throughout this paper that $\PP_{\Delta}$ is a non-singular variety. This is equivalent to each cone in the normal fan to $\Delta$ being a standard cone, i.e., being generated by $e_1,\ldots,e_i$, where $e_1,\ldots,e_{d+1}$ is a basis of $N$. We shall also assume that $\Delta$ has at least one interior integral point. As we will see in \S\ref{section_kodairadim}, this is equivalent to the condition $\kappa(S)\ge 0$ used in the introduction.
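For example (a standard illustration, chosen to satisfy the assumptions just made), let $\Delta=\Conv\{(0,0),(3,0),(0,3)\}\subseteq M_{\RR}=\RR^2$. Then $\PP_{\Delta}=\PP^2$ with $\O_{\PP_{\Delta}}(1)=\O_{\PP^2}(3)$ and $\Int(\Delta)\cap M=\{(1,1)\}$. The normal fan $\check\Sigma_{\Delta}$ has rays generated by $(1,0)$, $(0,1)$ and $(-1,-1)$, and the piecewise linear function takes the values \[ \varphi_{\Delta}(1,0)=\varphi_{\Delta}(0,1)=0,\qquad \varphi_{\Delta}(-1,-1)=3. \] Conversely, the Newton polyhedron of $\varphi_{\Delta}$ is \[ \Delta_{\varphi_{\Delta}}=\{m\in M_{\RR}\,|\,m_1\ge 0,\ m_2\ge 0,\ 3-m_1-m_2\ge 0\}=\Delta, \] recovering $\Delta$ as it must.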
We define $\sigma=\operatorname{{Cone}}(\Delta)\subseteq \bar{M}_{\RR}$ as in the introduction and $\check\sigma=\sigma^{\vee}\subseteq \bar{N}_{\RR}$ where the dual cone is defined by $$\sigma^\vee = \{n\in\bar{N}_\RR| \langle m,n\rangle\ge 0\hbox{ for all }m\in\sigma\}.$$ Note that \[ \check\sigma=\{(n,r)\,|\, r\ge \varphi_{\Delta}(n)\}. \] Our first task is to specify precisely the subdivisions of the cones $\sigma$ and $\check\sigma$ we will use. There is a canonical choice of resolution for $\check\sigma$: \begin{proposition} \label{Sigmafanprop} Let $\rho:=(0,1)\in \bar{N}_{\RR}$. Then $\rho\in \Int(\check\sigma)$. Furthermore, let $\check\Sigma$ be the fan given by \begin{align*} \check\Sigma:= {} & \quad \{\check\tau \,|\, \hbox{$\check\tau$ a proper face of $\check\sigma$}\}\\ & \cup \{\check\tau+\RR_{\ge 0}\rho \,|\, \hbox{$\check\tau$ a proper face of $\check\sigma$}\}. \end{align*} This is the star subdivision of the cone $\check\sigma$ along the ray $\RR_{\ge 0}\rho$. Then $X_{\check\Sigma}$ is a non-singular variety. \end{proposition} \proof The first statement is obvious, since $\rho$ is strictly positive on every element of $\operatorname{{Cone}}(\Delta)\setminus \{0\}$. For the fact that $X_{\check\Sigma}$ is non-singular, let $\check\tau$ be a proper face of $\check\sigma$. Then $\check\tau$ takes the form \[ \check\tau=\{(n,\varphi_{\Delta}(n))\,|\, n\in\check\tau'\} \] for some $\check\tau'\in \check\Sigma_{\Delta}$. In particular, since $\PP_{\Delta}$ is assumed to be non-singular, $\check\tau'$ is a standard cone, say generated by $e_1,\ldots,e_i$, part of a basis. Then $\check\tau+\RR_{\ge 0}\rho$ is generated by $(e_1,\varphi_{\Delta}(e_1)), \ldots,(e_i,\varphi_{\Delta}(e_i)),(0,1)$, which extend to a basis of $\bar{N}$. \qed \begin{remark} Note that the projection $\bar{N}\rightarrow N$ induces a map on fans from $\check\Sigma$ to $\check\Sigma_{\Delta}$, so we have a morphism $X_{\check\Sigma}\rightarrow \PP_{\Delta}$. This is clearly an $\AA^1$-bundle, and the source is the total space of $\O_{\PP_{\Delta}}(-1)$. \end{remark} \bigskip Next, we will describe allowable refinements of $\sigma$. As we see shortly, we will only consider those corresponding to crepant resolutions, i.e., refinements which arise from polyhedral decompositions $\P$ of $\Delta$ into lattice polytopes. We first give a canonically determined polyhedral decomposition of $\Delta$. Let $h_*:\Delta\cap M\rightarrow\ZZ$ be the function defined by \[ h_*(m)=\begin{cases} 0&\hbox{if $m\in\partial\Delta$}\\ -1&\hbox{if $m\in \Int(\Delta)$}\end{cases} \] and \begin{equation} \label{Deltatildedef} \Delta_*:=\Conv\{(m,h_*(m))| m\in \Delta\cap M\}\subseteq M_{\RR} \oplus\RR. \end{equation} \begin{figure} \centerline{\epsfbox{genus2.eps}} \caption{A polytope $\Delta$ and its subdivision $\P_*$} \label{genus2} \end{figure} Here $\Conv A$ denotes the convex hull of a set $A$. Then $\Delta_*$ has one face (the upper face) equal to $\Delta\times\{0\}$, and the remaining proper faces define, via projection to $M_{\RR}$, a subdivision of $\Delta$. Let $\P_*$ denote the set of faces of this subdivision. \begin{definition} \label{starlikedefinition} A polyhedral decomposition $\P$ of $\Delta$ is said to be \emph{star-like} if it is a regular\footnote{Recall a polyhedral decomposition $\P$ of $\Delta$ is \emph{regular} if there is a strictly convex piecewise linear function on $\Delta$ whose maximal domains of linearity are the cells of $\P$.} refinement of $\P_*$.
\end{definition} We will assume from now on the existence of the following: \begin{assumption} \label{overallhypo} Let $\P$ be a star-like triangulation of $\Delta$ into standard simplices, i.e., simplices $\tau$ such that $\operatorname{{Cone}}(\tau)$ is a standard cone. \end{assumption} Such a triangulation need not exist; it does, however, always exist if $\dim \Delta=2$. The existence of $\P$ is equivalent to the existence of a toric crepant resolution of the blow-up of $X_\sigma$ at the origin. To get rid of Assumption~\ref{overallhypo}, one may work with toric stacks. There always exists a crepant resolution as a toric Deligne-Mumford stack whose coarse moduli space has at worst terminal quotient singularities. Such a resolution is given by a triangulation of $\P_*$ by elementary simplices, i.e., simplices whose only lattice points are their vertices. In this paper we stick to Assumption \ref{overallhypo} to avoid having to develop the relevant theory on stacks. More generally, one should conjecturally use an orbifold twisted de Rham complex, orbifold cohomology and vanishing cycles on orbifolds to obtain more general results, see \S\ref{orbifoldsection}. Note that there are typically several choices for $\P$. These are related by ``phase transitions in the K\"ahler moduli space.'' More precisely, each choice is given by a maximal cone in the secondary fan of $\sigma$. As we will see, the Hodge numbers don't depend on this choice. Having fixed $\P$, we obtain a refinement $\Sigma$ of $\sigma$ by \[ \Sigma=\{\operatorname{{Cone}}(\tau)\,|\, \tau\in\P\}\cup \{\{0\}\} \] and similarly $\Sigma_*$ by replacing $\P$ with $\P_*$. Geometrically, we have a composition $$X_{\Sigma}\ra X_{\Sigma_*}\ra X_\sigma$$ where the second map is the blow-up of the origin in $X_\sigma$; this will be explained in \S\ref{section_geom_prop}. \begin{example} \label{basicexamples1} Let $\Delta$ be a reflexive polytope, i.e., \begin{enumerate} \item[a)] $\Delta$ has a unique interior lattice point $v$ and \item[b)] the polar dual $\Delta^*:=\{n\in N_{\RR}| \langle n,m-v\rangle \ge -1\quad\forall m\in \Delta\}$ is a lattice polytope. \end{enumerate} Under Assumption~\ref{overallhypo}, a) implies b). It is not hard to see that $\check\sigma=\operatorname{{Cone}}(\Delta)^{\vee}=\operatorname{{Cone}}(\Delta^*)$. In this case, $\P_*$ is the star subdivision of $\Delta$ at $v$. This is the subdivision whose maximal cells are the convex hulls of $\tau\cup \{v\}$ with $\tau$ a maximal proper face of $\Delta$. \end{example} \begin{example} \label{basicexamples2} This will be a running example throughout the paper. We consider the two-dimensional polytope drawn on the left in Figure \ref{genus2}. The picture on the right gives $\P_*$. We then have several possible choices for $\P$; for example, we may take the one given in Figure \ref{genus2new}. \begin{figure} \centerline{\epsfbox{genus2new.eps}} \caption{A star-like subdivision giving a crepant resolution} \label{genus2new} \end{figure} \end{example} We can now choose Landau-Ginzburg potentials \begin{align*} w:X_{\Sigma}&\rightarrow\CC,\\ \check w:X_{\check\Sigma}&\rightarrow\CC. \end{align*} We write these as follows. First, for $w$, the primitive generators of one-dimensional cones of $\check\Sigma$ are $\rho=(0,1)\in N\oplus\ZZ$ and $(n_{\tau},\varphi_{\Delta}(n_{\tau}))$, where $\tau$ runs over codimension one faces of $\Delta$ and $n_{\tau}$ is the primitive (inward-pointing) normal vector to $\tau$.
Thus we write \begin{equation} \label{Wpotential} w=c_{\rho}z^{\rho}+\sum_{\tau\subset\Delta} c_{\tau}z^{(n_{\tau}, \varphi_{\Delta}(n_{\tau}))}, \end{equation} where again the sum is over all codimension one faces of $\Delta$. Second, the primitive generators of the one-dimensional cones of $\Sigma$ are of the form $(m,1)$ for $m\in\Delta\cap M$, so we write \begin{equation} \label{Wcheckpotential} \check w=\sum_{m\in \Delta\cap M} c_mz^{(m,1)}. \end{equation} Here all coefficients are chosen generally in $\CC$. In Prop.~\ref{checkWproper}, we note that giving $\check{w}$ is equivalent to giving a global section of $\O_{\PP_\Delta}(1)$ and show that its zero locus $S$ coincides with the critical locus of $\check w$. \begin{example} \label{genusgmirror} Continuing and extending Ex.~\ref{basicexamples2}, we may take for $\Delta$ a rectangle of edge lengths $2$ and $g+1$ such that $\Delta$ has $g$ interior points and $S$ is a genus $g$ curve. Before the resolution, its mirror Landau-Ginzburg model $(X_\sigma, w)$ is then given via (\ref{Wpotential}) as $$(X_\sigma=\Spec\C[x,y,z,u,v]/(xy-z^2,uv-z^{g+1}), c_xx+c_yy+c_zz+c_uu+c_vv)$$ where $z=z^{\rho}$, $u,v$ are the monomials given by the normals of the length two edges of $\Delta$ and $x,y$ those for the length $g+1$ edges. The singular locus of $X_\sigma$ is non-compact with four irreducible components, two of which are generically curves of $A_1$ singularities, the other two generically curves of $A_g$ singularities. \end{example} \subsection{Properifications} Now $w$ and $\check w$ are not proper, so we need to choose properifications of these maps. The particular choice will turn out not to be important, as it won't affect the answer: the sheaves of vanishing cycles whose cohomology we will eventually have to compute will have proper support even before compactifying. We still need to make some choice to show that we are not losing any cohomology, however. The two functions $w$ and $\check w$ are dealt with separately. Since $X_{\check\Sigma}$ is an $\AA^1$-bundle over $\PP_{\Delta}$, the obvious thing to do is to compactify $X_{\check\Sigma}$ to a $\PP^1$-bundle over $\PP_{\Delta}$. \begin{proposition}[Properification of $\check w$] \label{checkWproper} Consider $\bar{\check\Sigma}$ given by \begin{align*} \bar{\check\Sigma}:= {} & \quad \{\check\tau \,|\, \hbox{$\check\tau$ a proper face of $\check\sigma$}\}\\ & \cup \{\check\tau+\RR_{\ge 0}\rho \,|\, \hbox{$\check\tau$ a proper face of $\check\sigma$}\}\\ & \cup \{\check\tau-\RR_{\ge 0}\rho \,|\, \hbox{$\check\tau$ a proper face of $\check\sigma$}\}. \end{align*} Then \begin{enumerate} \item $\bar{\check\Sigma}$ is a complete, non-singular fan containing the fan $\check\Sigma$, hence giving a projective compactification $X_{\check\Sigma}\subseteq X_{\bar {\check\Sigma}}$. The projection $\bar{N}\rightarrow N$ defines a map of fans from $\bar{\check\Sigma}$ to $\check\Sigma_{\Delta}$, giving a morphism $X_{\bar{\check\Sigma}}\rightarrow \PP_{\Delta}$ which is a $\PP^1$-bundle. Let $D_0$ be the divisor corresponding to the ray $\RR_{\ge 0}\rho$ and $D_{\infty}$ be the divisor corresponding to the ray $-\RR_{\ge 0}\rho$. These are sections of the projection to $\PP_{\Delta}$, hence isomorphic to $\PP_{\Delta}$. \item $\check w$ extends to a rational map $\check w:X_{\bar {\check \Sigma}}\,\raise.5ex\hbox{$\scriptscriptstyle ---\!\!\!>$} \PP^1$ which fails to be defined on a non-singular subvariety of codimension two. Blow up this subvariety to obtain $\tilde X_{\bar{\check\Sigma}}$.
Then $\tilde X_{\bar{\check \Sigma}}\setminus X_{\check\Sigma}$ is normal crossings. Furthermore, $\check w$ extends to give a projective morphism $\bar{\check w}:\tilde X_{\bar{\check\Sigma}}\rightarrow\PP^1$. \item There is a non-singular divisor $\check W_0$ on $\tilde X_{\bar{\check\Sigma}}$ such that $\bar{\check w}^{-1}(0)=D_0\cup \check W_0$ is a normal crossings divisor, with $D_0\cap \check W_0$ isomorphic to the hypersurface $S$ in $\PP_{\Delta}$ given by the equation $\bar{\check w}=0$. Note this makes sense as the terms in $\bar{\check w}$ are in one-to-one correspondence with points of $\Delta\cap M$, and these points index a basis of the global sections of $\O_{\PP_{\Delta}}(1)$. \end{enumerate} \end{proposition} \proof (1) is standard; we leave the details to the reader. For (2) and (3), let us begin by considering a cone of the form $\check\tau\pm\RR_{\ge 0}\rho$ in $\bar{\check\Sigma}$, where $\check\tau$ is a maximal proper face of $\check\sigma$. We know that $\check\tau$ is dual to $\operatorname{{Cone}}(v)\subseteq \sigma$ for some vertex $v$ of $\Delta$. Furthermore, $\check\tau\pm\RR_{\ge 0}\rho$ is generated by vectors $(e_1,\varphi_{\Delta}(e_1)),\ldots, (e_{d+1},\varphi_{\Delta}(e_{d+1})),\pm\rho$ where $e_i\in N$ is constant on a maximal proper face of $\Delta$ containing $v$ and $\varphi_{\Delta}(e_i)=-\langle e_i,v\rangle$. Thus $\CC[(\check\tau\pm\RR_{\ge 0}\rho)^{\vee}\cap \bar{M}] \cong \CC[x_1,\ldots,x_{d+2}]$, where $x_1,\ldots,x_{d+2}$ are the monomials associated to the dual basis to $(e_1,\varphi_{\Delta}(e_1)), \ldots,(e_{d+1},\varphi_{\Delta}(e_{d+1})), \pm\rho$. Now if $m\in\Delta\cap M$, a monomial $z^{(m,1)}$ can then be written in terms of $x_1,\ldots,x_{d+2}$ as \begin{align*} z^{(m,1)} {} = & x_{d+2}^{\pm 1}\prod_{i=1}^{d+1} x_i^{\langle (e_i, \varphi_{\Delta}(e_i)),(m,1)\rangle}\\ {} = & x_{d+2}^{\pm 1} \prod_{i=1}^{d+1} x_i^{\langle e_i, m\rangle-\langle e_i,v\rangle}. \end{align*} Note also that if $e_1^*,\ldots,e_{d+1}^*$ is the dual basis to $e_1,\ldots,e_{d+1}$, then $e_1^*,\ldots,e_{d+1}^*$ generate the tangent cone to $\Delta$ at $v$, so in particular $v+e_i^*\in\Delta\cap M$. Thus up to coefficients the monomials $z^{(v,1)}$ and $z^{(v+e_i^*,1)}$ appear in $\check w$ and are of the form $x_{d+2}^{\pm 1}$ and $x_{d+2}^{\pm 1}x_i$ respectively. Therefore, in this affine coordinate patch, we can write \[ \check w=x_{d+2}^{\pm 1}(c_v+\sum_{i=1}^{d+1} c_{v+e_i^*} x_i+ \hbox{higher order terms}). \] Thus, for general choice of coefficients, in the affine open subset of $X_{{\check\Sigma}}$ corresponding to $\check\tau+\RR_{\ge 0}\rho$, ${\check w}^{-1}(0)$ is reducible, consisting of the two irreducible components given by $x_{d+2}=0$ (which is the divisor corresponding to the ray $\RR_{\ge 0}\rho$, i.e., $D_0$) and the hypersurface given by \begin{equation} \label{hypersurfeq} c_v+\sum_{i=1}^{d+1} c_{v+e_i^*} x_i+\hbox{higher order terms}=0. \end{equation} Again, for general choice of coefficients, this will be non-singular. Similarly, in the affine open subset of $X_{\bar{\check\Sigma}}$ corresponding to $\check\tau-\RR_{\ge 0}\rho$, we see that $\check w$ has a simple pole along the divisor $x_{d+2}=0$ (the divisor $D_{\infty}$) and is zero along a hypersurface defined by the same equation \eqref{hypersurfeq}. Let $\check W_0$ be the closure in $X_{\bar{\check\Sigma}}$ of the hypersurface given by \eqref{hypersurfeq} in any of the affine subsets considered.
Then $\check w$ is zero along $D_0\cup \check W_0$ and has a simple pole along $D_{\infty}$, and $\check w$ is undefined along $\check W_0\cap D_{\infty}$. Furthermore, the equation \eqref{hypersurfeq} restricted to either $D_0$ or $D_{\infty}$ yields (an affine piece of) the hypersurface in $\PP_{\Delta}$ defined by $\check w=0$. Thus in particular, $\check W_0\cap D_{\infty}$ is a non-singular variety of codimension two, which we may blow up to get a non-singular variety $\tilde X_{\bar{\check\Sigma}}$, with exceptional hypersurface $E$, and $\check w$ extends to a well-defined function on $\tilde X_{\bar {\check\Sigma}}$. Note the proper transforms of $D_0, D_{\infty}$ and $\check W_0$ in $\tilde X_{\bar {\check\Sigma}}$ are isomorphic to $D_0,D_{\infty}$ and $\check W_0$, so we continue to use the same notation. The center of the blow-up is contained in $D_{\infty} =X_{\bar{\check\Sigma}} \setminus X_{\check\Sigma}$, so $X_{\check\Sigma}$ is an open subset of $\tilde X_{\bar{\check\Sigma}}$, with $\tilde X_{\bar{\check\Sigma}} \setminus X_{\check\Sigma}=D_{\infty}\cup E$. We have now shown (2), and (3) also follows from the above discussion. \qed \bigskip Let $\Delta'\subseteq\Delta$ be given by \[ \Delta':=\Conv\{v\in\Int(\Delta)\cap M\}. \] Recall that we assume $\dim\Delta'\ge0$. \begin{remark} \label{remCYgeneralize} In classical Calabi-Yau mirror symmetry as referred to in Ex.~\ref{basicexamples1}, one considers a family of hypersurfaces given as $$0= t \left(\sum_{m\in\Delta\cap M} c_m z^{m}\right)+z^{v}$$ where $t$ varies, $\Delta$ is reflexive and $v$ is its unique interior integral point, see \cite{batyrev2}. Replacing $z^{m}$ by $z^{(m,1)}$, we may view this as a family of potentials $$\check w_t=t\check w+\check w_0.$$ In fact this generalizes to our more general setup if we set $$\check w_0=\sum_{m\in\Delta'\cap M} c_m z^{(m,1)}$$ because in the Calabi-Yau case $\Delta'=\{v\}$. However, whereas in the Calabi-Yau case this gives a toric degeneration of the fibre $\check w_t=0$ as $t\ra 0$, in general it will only be a partial toric degeneration, i.e., $\check w_0=0$ consists of the union of all toric divisors in $X_{\check\Sigma}$ plus one non-toric divisor given by the Laurent polynomial $\check w_0$. We will see in \S\ref{section_kodairadim} that we have the linear equivalence $$S\sim S'-K_{\PP_\Delta}$$ where $S'$ is the zero locus of $\sum_{m\in\Delta'\cap M} c_m z^{m}$, so $S'=0$ in the Calabi-Yau case. \end{remark} We next consider the properification of $w:X_{\Sigma}\rightarrow\CC$. To do this, we first consider the obvious choice of a projective toric variety on which $w$ can be viewed as a section of a line bundle. Let \[ \check\Delta=\Conv\big(\{0,\rho\}\cup \{ (n_{\omega},\varphi_{\Delta}(n_{\omega})) \,|\,\hbox{$\omega\subseteq\Delta$ a codimension one face of $\Delta$}\}\big). \] The wary reader will notice that the corresponding dual object to $\check \Delta$ is $\Conv (\Delta\times\{1\}\cup\{0\})\subseteq \bar M_\RR$ rather than $\Delta$ itself because the former is the polytope supporting the pencil given by $\check w$. Because $\varphi_{\Delta}$ is convex, one sees that $0$ is a vertex of $\check\Delta$ and the tangent cone to $\check\Delta$ at $0$ is precisely the cone $\check\sigma$. Thus the normal fan $\check\Sigma_{\check\Delta}$ to $\check\Delta$ is a complete fan in $\bar{M}_{\RR}$ containing the cone $\sigma$, so $\PP_{\check\Delta}$ is a compactification of $X_{\sigma}$.
The function $w_{\sigma}$ on $X_{\sigma}$ defined by the same equation as the function $w$ on $X_{\Sigma}$ then extends to a rational function $w_{\check\Delta}$ on $\PP_{\check\Delta}$ given by \[ w_{\check\Delta}={ c_{\rho}z^{\rho}+\sum_{\tau\subset\Delta} c_{\tau}z^{(n_{\tau}, \varphi_{\Delta}(n_{\tau}))} \over z^0}. \] \begin{proposition}[Properification of $w$] \label{properWprop} There is a projective birational morphism $\pi:\tilde\PP_{\check\Delta}\rightarrow\PP_{\check\Delta}$ such that \begin{enumerate} \item The map $\pi$ factors through a projective toric resolution of singularities $X_{\bar{\Sigma}}\ra\PP_{\check\Delta}$ given by a fan $\bar{\Sigma}$ which contains $\Sigma$ as a subfan. \item If $\dim\Delta'=0$, there is a surjection $\pi_{\operatorname{{Cone}}(\Delta')}:X_{\bar\Sigma} \ra D_{\operatorname{{Cone}}(\Delta')}$ where $D_{\operatorname{{Cone}}(\Delta')}$ denotes the toric divisor given by the ray $\operatorname{{Cone}}(\Delta')$. The inclusion $D_{\operatorname{{Cone}}(\Delta')}\ra X_{\bar\Sigma}$ is a section of $\pi_{\operatorname{{Cone}}(\Delta')}$. \item $\bar w:=w_{\check\Delta}\circ \pi$ is a projective regular map to $\PP^1$. \item $\bar w^{-1}(\CC)$ is non-singular, where $\CC=\PP^1\setminus \{\infty\}$, and $X_{\Sigma}\subseteq \bar w^{-1}(\CC)$, with $D:=\bar w^{-1}(\CC)\setminus X_{\Sigma}$ a normal crossings divisor. Furthermore, $\bar{w}^{-1}(0)$ is non-singular in a neighbourhood of $\bar{w}^{-1}(0)\cap D$. \end{enumerate} \end{proposition} \proof We begin by refining the normal fan $\check\Sigma_{\check\Delta}$ to a fan $\bar{\Sigma}$ with the properties \begin{itemize} \item[a)] $\Sigma=\{\tau\in\bar{\Sigma}\,|\,\tau\subseteq\sigma\}$ and \item[b)] $X_{\bar{\Sigma}}$ is a projective non-singular toric variety \end{itemize} as follows. Let $\varphi_{\Sigma}$ denote the piecewise linear convex function giving the subdivision $\Sigma$ of $\sigma$. By adding a linear function, we may assume $\varphi_{\Sigma}\ge 0$. Note that if one gives a function on the set of integral generators of a cone $\tau$, there is a canonical extension to all of $\tau$ as a convex piecewise linear function. Its graph is given by the lower faces of the convex hull of the graph of the function on the set of generators. We use this construction to extend $\varphi_{\Sigma}$ to all of $\bar{M}_\RR$ by setting the value on a generator $m$ of a ray contained in $\sigma$ to $\varphi_{\Sigma}(m)$ and to zero for all further rays. One easily checks that the so-constructed functions on the cones glue such that the extension is continuous and piecewise linear. Moreover, it is convex away from $\partial\sigma$. We denote the extension by $\varphi_{\Sigma}$ also. By the strict convexity of $\varphi_{\check\Delta}$ at $\partial\sigma$, for some small $\epsilon$, we find that $\varphi_{\check\Delta}+\epsilon\varphi_{\Sigma}$ is a piecewise linear convex function giving a refinement of $\check\Sigma_{\check\Delta}$ with property a) above. In general, this may not yet induce a desingularization; however, we may refine it further so that it does. This can be done by \emph{pulling additional rays}, i.e., by successively inserting new rays along with star-subdivisions where each ray is generated by an integral point not contained in the support of $\Sigma$. These operations can be realized by piecewise linear functions and thus induce projective partial resolutions eventually giving a total projective resolution. We call the resulting fan $\bar\Sigma$; it will be the fan in (1).
To see (2), note that we may modify the previous procedure if $\dim\Delta'=0$ as follows. The fan of the projective toric divisor $D_{\operatorname{{Cone}}(\Delta')}$ is given as the minimal fan containing the maximal domains of linearity of a piecewise linear function $\bar{\varphi}'$ which we may pull back to a function $\varphi'$ under the projection $$\bar M_\RR\ra \bar M_\RR / (\RR\operatorname{{Cone}}(\Delta')).$$ Note that $\varphi'$ is piecewise linear on $\check\Sigma_{\check\Delta}$ because there is only one ray in $\check\Sigma_{\check\Delta}\backslash \Sigma$ which is in fact $-\RR_{\ge 0}\cdot \operatorname{{Cone}}(\Delta')$. We may replace $\varphi_{\check\Delta}+\epsilon\varphi_{\Sigma}$ in the above procedure by $\varphi_{\check\Delta}+\epsilon\varphi'$ to obtain a $\bar{\Sigma}$ satisfying (2). \begin{figure} \input{Wproper.pstex_t} \caption{Properification of $w$} \label{Wproper} \end{figure} We have a resolution of singularities $\phi:X_{\bar{\Sigma}}\rightarrow \PP_{\check\Delta}$ with $X_{\Sigma}\subseteq X_{\bar{\Sigma}}$, and since $X_{\bar\Sigma}$ is non-singular, $D_{\infty}:=X_{\bar{\Sigma}} \setminus X_{\Sigma}$ is a divisor with normal crossings. Next, consider the section \[ w'_{\check\Delta} =z^0+c_{\rho}z^{\rho}+\sum_{\tau\subset\Delta} c_{\tau}z^{(n_{\tau}, \varphi_{\Delta}(n_{\tau}))} \] of $\O_{\PP_{\check\Delta}}(1)$. Because the coefficients are general, this section is $\check\Delta$-regular in the sense of \cite{batyrev2},\,Def.\,3.1.1. Thus pulling back this section to $X_{\bar{\Sigma}}$ we obtain a section $\phi^*w'_{\check\Delta}$ of $\phi^*\O_{\PP_{\check\Delta}}(1)$ which by \cite{batyrev2},\,Prop.\,3.2.1, is $\bar{\Sigma}$-regular, and hence its zero locus defines a non-singular hypersurface $H\subseteq X_{\bar{\Sigma}}$. Now the rational function $w_{\check\Delta}$ pulls back to $X_{\bar\Sigma}$ and induces a pencil contained in the linear system $|\phi^*\O_{\PP_{\check\Delta}}(1)|$. This pencil includes both the non-singular hypersurface $H$ and the hypersurface $H_{\infty}$ given by $z^0=0$. One sees easily that $\supp(H_{\infty})=D_{\infty}$. Thus $H_{\infty}$ is a normal crossings divisor, but need not be reduced. Again since $H$ is $\bar{\Sigma}$-regular, it meets $D_{\infty}$ transversally. So locally, at a point of $D_{\infty}\cap H$, the base-locus of the pencil defined by $w_{\check\Delta}$ on $X_{\bar\Sigma}$ is given by equations $x_1^{d_1}\cdots x_n^{d_n}=x_0=0$. Blowing up this base-locus, we obtain a projective variety $\tilde\PP_{\check\Delta}$, which is singular, but now $w_{\check\Delta}$ extends to a morphism $\bar w: \tilde\PP_{\check\Delta}\rightarrow \PP^1$ factoring through the blowup map $\pi$. See Figure~\ref{Wproper} for a picture. This gives (3). Let $E$ be the exceptional locus of $\pi$. Next, note from the local description of the base-locus that the singular locus of $\tilde\PP_{\check\Delta}$ is contained entirely in $\bar w^{-1}(\infty)$, the proper transform of $H_{\infty}$. Note also that $X_{\Sigma}$ was disjoint from $H_{\infty}$, and hence $X_{\Sigma}\subseteq \bar w^{-1}(\CC)$, the latter variety being non-singular. Furthermore, $\bar w^{-1}(\CC)\setminus X_{\Sigma}=E\cap \bar w^{-1}(\CC)$, and from the explicit local description of $H_{\infty}\cap H$, one sees the remaining part of (4). \qed \begin{corollary} The morphisms $w:X_\Sigma\ra\CC$ and $\check w:X_{\check \Sigma}\ra\CC$ are quasi-projective.
\end{corollary} \begin{example} \label{g2curveproper} Continuing with Ex.~\ref{basicexamples2}, let's assume that the vertices of $\Delta$ are $(0,0)$, $(3,0)$, $(0,2)$ and $(3,2)$. The normal fan to $\Delta$, $\check\Sigma_{\Delta}$, is the fan for $\PP^1\times\PP^1$, with rays generated by $(\pm 1,0)$ and $(0,\pm 1)$. We have \[ \varphi_{\Delta}(1,0)=0,\quad \varphi_{\Delta}(-1,0)=3,\quad \varphi_{\Delta}(0,1)=0,\quad \varphi_{\Delta}(0,-1)=2 \] and hence \[ \check\Delta=\Conv\{(0,0,0),(0,0,1),(1,0,0),(0,1,0),(-1,0,3),(0,-1,2)\} \] shown in Figure \ref{Pfan}. \begin{figure} \input{Deltacheck.pstex_t}\quad \input{Pfan.pstex_t} \caption{$\check\Delta$ on the left and $\check\Delta^*$ on the right.} \label{Pfan} \end{figure} One can check that the only integral points of $\check\Delta$ are the points listed, with $\rho=(0,0,1)$ the unique interior integral point of $\check\Delta$. So $\PP_{\check\Delta}$ can be embedded in $\PP^5$ using the six given points to determine sections of $\O_{\PP_{\check\Delta}}(1)$. Using coordinates $z_0,\ldots,z_5$ corresponding to the six points given above in the given order, one sees that the image of $\PP_{\check \Delta}$ in $\PP^5$ is given by the equations $z_0z_2z_4-z_1^3=0$ and $z_3z_5-z_1^2=0$, which are homogeneous versions of those in Ex.~\ref{genusgmirror}. In addition, \[ w_{\check\Delta}={z_1+z_2+z_3+z_4+z_5\over z_0}. \] Note that $\check\Delta$ is a reflexive polytope. This is no longer true if $g>2$ as in Ex.\ \ref{genusgmirror}. For a general choice of $c\in \CC$, the surface with equation $w_{\check\Delta}=c$ is a singular K3 surface whose inverse image $\tilde W_c$ under the blowup map $X_{\bar{\Sigma}}\ra\PP_{\check\Delta}$ is smooth and of Picard rank $18$. The right side of Figure \ref{Pfan} indicates the part of the fan $\bar{\Sigma}$ induced from the subdivision $\P$ of $\Delta$. One can view the entire fan $\bar{\Sigma}$ by also triangulating the further faces of $\check\Delta^*$, but using vertices which are not necessarily integral points. \end{example} \subsection{$\Delta'$ and the Kodaira dimension of $S$} \label{section_kodairadim} The significance of $\Delta'$ throughout the paper is in part explained by the following results. \begin{proposition} \label{DeltaDeltaprimerelation} Let $\varphi_K$ denote the piecewise linear function on $\check\Sigma_{\Delta}$ which represents $K_{\PP_{\Delta}}$, taking the value $-1$ on the primitive generator of each ray of $\check\Sigma_{\Delta}$. Then \[ \varphi_{\Delta'}=\varphi_{\Delta}+\varphi_K. \] \end{proposition} \proof Note that $\varphi_K$ exists by smoothness of $\PP_{\Delta}$. Let $\Delta''=\Delta_{\varphi''}$ denote the possibly empty Newton polytope of the piecewise linear function $\varphi''=\varphi_\Delta+\varphi_K$ on $\check\Sigma_\Delta$. We need to show that $\Delta''=\Delta'$. Indeed, since $\Delta''$ is a lattice polytope contained in the relative interior of $\Delta$, we have $\Delta''\subseteq\Delta'$. On the other hand, $\Delta''\supseteq\Delta'$ because, using the fact that the tangent cones to $\Delta$ at vertices of $\Delta$ are standard, each lattice point in the relative interior of $\Delta$ has integral distance $\ge 1$ to each facet. \qed \begin{corollary} \label{DeltaK} If $\PP_\Delta$ has nef anti-canonical class, then the Newton polytope of $-K_{\PP_\Delta}$, which we denote by $\Delta_K$, is reflexive.
We then have the Minkowski sum decomposition $$\Delta=\Delta_K+\Delta'.$$ \end{corollary} Figure~\ref{minkowski} shows this decomposition for Ex.~\ref{basicexamples2}. In the case that $-K_{\PP_{\Delta}}$ is nef, on the dual side, we have that the convex hull of the graph of $-\varphi_K$ is the cone over the dual reflexive polytope of $\Delta_K$, which we denote by $\check\Delta_K$. This implies that $$\check\Delta_K = \pi(\check\Delta)$$ where $\pi$ denotes the natural projection $\bar N_\RR\ra N_\RR$. Now we can relate the dimension of $\Delta'$ to the Kodaira dimension of $S$: \begin{proposition} \label{kodairadim} Let $S$ be the zero locus of a general element of $\Gamma(\PP_{\Delta},\O_{\PP_{\Delta}}(1))$ and hence a non-singular variety of dimension $d$. Then the Kodaira dimension of $S$ is \[ \kappa(S)=\min \{\dim\Delta',d\} \] where we use $\dim\emptyset=-\infty$. \end{proposition} \begin{remark} The proposition also holds true for $\Delta'=\emptyset$, which we have excluded from our general considerations. It was pointed out to us by Victor Batyrev that smoothness of $\PP_{\Delta}$ is necessary for the proposition to hold true because there exist hypersurfaces of general type in toric varieties with no interior lattice points in their Newton polytope. \end{remark} \proof Set $k:=\min \{\dim\Delta',d\}$. We need to show that $k$ is the minimal integer such that $\dim\Gamma(S,\O_S(nK_S))$ as a function of $n$ is $O(n^k)$. Let $l(n\Delta')$ denote the number of lattice points contained in $n\Delta'$. Since the Kodaira dimension of $S$ is bounded above by $\dim S=d$, we are done if we show that $\dim\Gamma(S,\O_S(nK_S))=l(n\Delta')$ when $\dim\Delta'\le d$, and that $\dim\Gamma(S,\O_S(nK_S))$ is bounded below by $l(nF)$ for some facet $F$ of $\Delta'$ when $\dim\Delta'= d+1$. By the adjunction formula, we have $$K_S=(K_{\PP_\Delta}+S)|_S.$$ By Proposition \ref{DeltaDeltaprimerelation} and standard toric geometry, it follows that \[ l(n\Delta')=\dim \Gamma(\PP_{\Delta},\O_{\PP_{\Delta}}(n(K_{\PP_\Delta}+S))). \] For $\dim\Delta'\le d$ the map $\Gamma(\PP_{\Delta},\O_{\PP_{\Delta}}(n(K_{\PP_\Delta}+S)))\ra \Gamma(\PP_{\Delta},\O_{\PP_{\Delta}}(n(K_{\PP_\Delta}+S))\otimes\O_S)$ is injective. This can be checked on the dense torus, where $S$ is given by a principal ideal, a generator of which has Newton polytope $\Delta$. Thus, every non-trivial element in the ideal has a Newton polytope of dimension $d+1$. For the same reason, for $\dim\Delta'=d+1$, the restriction of the above map to sections given by monomials in a facet of $\Delta'$ is injective. \qed \begin{figure} \input{minkowski.pstex_t} \caption{The Minkowski sum decomposition $\Delta=\Delta_K+\Delta'$ for Ex.~\ref{basicexamples2}} \label{minkowski} \end{figure} \subsection{Geometry of the central fibre of the potential $w$} \label{section_geom_prop} We now return to describing $w:X_{\Sigma}\rightarrow\CC$ in more detail. In particular, we wish to describe $w^{-1}(0)$. It follows from Proposition \ref{properWprop},(4), that $\Sing(w^{-1}(0))$ is proper over $\CC$. We define some additional combinatorial objects. First, let \[ \check\sigma^o:=\Conv\{\bar n\in \bar{N}\,|\, \bar n\in \check \sigma, \bar n\not=0\}. \] All monomials of $w$ lie in $\check\sigma^o\cap\check\Delta$. Moreover, $$\left\{\sum_{n\in I}{a_n}z^{n}\,\bigg|\, I\subset\check\sigma^o\cap \bar{N},|I|<\infty,a_n\in\CC\right\}$$ is the ideal of the origin $0$ in $X_\sigma$. Its blow-up $\op{Bl}_0 X_\sigma$ coincides with the toric variety given by the normal fan of $\check\sigma^o$, see \cite{Th03} for more details.
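The following sketch, which is ours and only meant as an illustration (it is not used in the sequel), makes $\check\sigma^o$ explicit in the running example, using only the data fixed in Ex.~\ref{g2curveproper}. \begin{example} For the rectangle $\Delta$ of Ex.~\ref{g2curveproper} we have $\check\sigma=\{\bar n\in\bar N_\RR\,|\,\langle \bar n,(m,1)\rangle\ge 0 \hbox{ for all }m\in\Delta\}$, with ray generators $(1,0,0)$, $(0,1,0)$, $(-1,0,3)$ and $(0,-1,2)$. One checks that every nonzero lattice point of $\check\sigma$ other than these four generators and $\rho=(0,0,1)$ is the sum of one of these five points and a further nonzero lattice point of $\check\sigma$; for instance, $(1,1,0)=(1,0,0)+(0,1,0)$. Hence the vertices of $\check\sigma^o$ are precisely these five points, and the convex hull of the bounded faces of $\check\sigma^o$ is $\Conv\{(0,0,1),(1,0,0),(0,1,0),(-1,0,3),(0,-1,2)\}$; compare Fig.~\ref{sigmaosub}. \end{example}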
We will see shortly that the normal fan of $\check\sigma^o$ is $$\Sigma_*=\{\operatorname{{Cone}}(\tau)\,|\, \tau\in \P_*\}\cup\{\{0\}\}$$ which we may think of as the star subdivision of $\sigma$ along $\operatorname{{Cone}}(\Delta')$. We can extend the function $h_*:\Delta\cap M\rightarrow\ZZ$ to a piecewise linear function $h_*:\Delta\rightarrow\RR$ by $h_*(m)=\inf\{r|(m,r)\in\Delta_*\}$ where $\Delta_*$ is defined in \eqref{Deltatildedef}. This is a strictly convex function. We now give a more useful description of $\P_*$. We recommend keeping in mind Figure~\ref{genus2}. \begin{lemma} \label{Pstar} \begin{enumerate} \item If we think of $h_*$ as a piecewise linear function on $\Sigma_*$ given by \[ h_*(rm,r)=rh_*(m), \] then $\check\sigma^o$ is the Newton polyhedron of $h_*$. \item We have $\Sigma_*=\check\Sigma_{\check\sigma^o}$ and $$X_{\Sigma_*}=\Bl_0 X_\sigma = \PP_{\check\sigma^o}.$$ Thus, there is a one-to-one correspondence between proper faces of $\check\sigma^o$ and $\P_*$, which we will refer to as \emph{duality}. \item Assume $\Delta'\neq\emptyset$. Then we have $$\P_*=\{\tau|\tau\subseteq\partial\Delta\} \sqcup \{\tau|\tau\in\P_*,\tau\not\subseteq\Delta',\tau\cap\Delta'\neq\emptyset\} \sqcup \{\tau|\tau\subseteq\Delta'\}.$$ \end{enumerate} \end{lemma} \begin{remark} In the language of Gross-Siebert, we have refined the discrete Legendre transform $\sigma\leftrightarrow \check \sigma$ to $(\Sigma_*,h_*)\leftrightarrow (\check\sigma^o)$. This corresponds to a blow-up of $X_\sigma$ and a degeneration of $X_{\check\sigma}$. We will come back to this point of view in \S\ref{sectionalephnull}. \end{remark} \begin{proof} Define $h_*':\Delta\rightarrow\RR$ by \[ h'_*(m)=-\inf_{\bar n\in\check\sigma^o} \langle \bar n,(m,1)\rangle; \] this is also a convex piecewise linear function. To prove (1), we need to show that in fact $h_*=h_*'$. To see this, first note that for $m\in \partial\Delta\cap M$, there exists an $\bar n\in \check\sigma^o\cap\bar{N}$ such that $\langle \bar n,(m,1)\rangle=0$. Since $\langle \bar n',(m,1)\rangle\ge 0$ for all $\bar n'\in\check\sigma$, we have $h'_*(m)=0$. If $m\in\Int(\Delta) \cap M$, then $\langle \rho,(m,1)\rangle =1$, while $\langle \bar n,(m,1)\rangle\ge 1$ for all $\bar n\in\check\sigma^o\cap \bar{N}$, so $\langle\bar n,(m,1)\rangle \ge 1$ for all $\bar n\in\check\sigma^o$. Thus $h_*'(m)=-1$. Now by construction, $h_*$ is clearly the largest convex function with these values on integral points, so $h_*'(m)\le h_*(m)$ for all $m\in\Delta$. On the other hand, suppose $\omega\in\P_*$ is a maximal cell; then $h_*|_{\omega}$ is represented by some $\bar n_{\omega}\in N_{\RR}\oplus\RR$ on $\omega$, identifying $\omega$ with $\omega\times\{1\}\subseteq \bar{M}_{\RR}$. By Assumption~\ref{overallhypo}, $\omega$ contains a standard simplex, and hence the integral points of $\operatorname{{Cone}}(\omega)\cap (M\times \{1\})$ span $\bar{M}$. Since $h_*$ only takes integral values on points of $\Delta\cap M$, we conclude that in fact $\bar n_{\omega}$ is integral, i.e., $\bar n_{\omega} \in \bar{N}$. Now we observe that $-\bar n_{\omega}\in \check\sigma^o$, as $\bar n_{\omega}\not=0$ and $0\ge h_*(m)\ge \langle \bar n_{\omega},(m,1)\rangle$ for all $m\in\Delta$. So for $m\in\omega$, $h_*'(m)\ge -\langle -\bar n_{\omega},(m,1)\rangle=h_*(m)$. Thus $h_*=h'_*$. Because $h_*$ is strictly convex on $\Sigma_*$, we have $\Sigma_*=\check\Sigma_{\check\sigma^o}$ and the remainder of (2) follows from what we discussed before the lemma.
Part (3) follows from the construction of $h_*$, which exhibits $\P_*$ as the star subdivision of $\Delta$ centered at $\Delta'$. \end{proof} We now refine part (3) of the previous lemma and also prove some combinatorial facts that we need later. \begin{lemma} \label{pdeltadeltaprime} \begin{enumerate} \item If $\check\Sigma_{\Delta'}$ denotes the normal fan of $\Delta'$ in $N_\RR/\Delta'^\perp$, the projection \[ N_{\RR}\rightarrow N_{\RR}/\Delta'^{\perp} \] induces a map of fans \[ \check p_{\Delta\Delta'}:\check\Sigma_{\Delta}\ra \check\Sigma_{\Delta'}. \] \item There are natural maps \[ \{\tau|\tau\subseteq\partial\Delta\} \stackrel{p_{\Delta\Delta'}^1}{\lra} \{\tau|\tau\in\P_*,\tau\not\subseteq\Delta',\tau\cap\Delta'\neq\emptyset\} \stackrel{p_{\Delta\Delta'}^2}{\lra} \{\tau|\tau\subseteq\Delta',\, \dim\tau<\dim\Delta\}. \] Here $p^1_{\Delta\Delta'}$ is bijective and takes $\tau\subseteq\partial \Delta$ to the unique cell $\tau'$ of $\P_*$ with $\tau'\not\subseteq \Delta'$, $\tau'\cap\Delta'\not=\emptyset$, and $\tau'\cap\partial\Delta=\tau$. The map $p^2_{\Delta\Delta'}$ is surjective and takes $\tau'$ to $\tau'\cap\Delta'$. We define \[ p_{\Delta\Delta'}:\{\tau|\tau\subseteq\Delta\}\ra\{\tau|\tau\subseteq\Delta'\} \] to be the composition $p^2_{\Delta\Delta'}\circ p^1_{\Delta\Delta'}$ on proper faces of $\Delta$, and $p_{\Delta\Delta'}(\Delta)=\Delta'$. Explicitly, for $\tau\subseteq\Delta$, $$ \begin{array}{rcl} p_{\Delta\Delta'}(\tau) &=& \Conv\{p_{\Delta\Delta'}(v)|v\hbox{ is a vertex of }\tau\}\\ &=& \Delta'\cap\bigcap_{i=1}^k\left\{ m\in M_\RR|\langle m, n_{\omega_i}\rangle=-\varphi_\Delta(n_{\omega_i})+1\right\} \end{array}$$ where $\omega_i$ are the maximal proper faces of $\Delta$ containing $\tau$. We have $\dim\tau\ge\dim p_{\Delta\Delta'}(\tau)$. Moreover, $\check p_{\Delta\Delta'}$ is the composition of $p_{\Delta\Delta'}$ with the bijections which identify the set of faces of $\Delta$, respectively $\Delta'$, with the corresponding normal fan. \item The intersection of $\check\sigma^o$ with $\check\Sigma$ induces a subdivision $\P_{\partial\check\sigma^o}$ of $\partial\check\sigma^o$ where each bounded face is a standard simplex. Moreover, under the duality of Lemma~\ref{Pstar},(2), only faces dual to cells $\tau'\subseteq\Delta'$ may receive a refinement. For $\tau'\subseteq\Delta'$ and $\check\tau'\subseteq \check\sigma^o$ the corresponding dual face, there is a natural inclusion reversing bijection $$\{\check\tau\in\P_{\partial\check\sigma^o} | \Int(\check\tau)\subseteq \Int(\check\tau')\neq\emptyset\}\leftrightarrow p^{-1}_{\Delta\Delta'}(\tau')$$ where the simplex corresponding to $\tau\in p^{-1}_{\Delta\Delta'}(\tau')$ has dimension $d+1-\dim\tau$. \end{enumerate} \end{lemma} \begin{proof} For (1), first note that by Proposition \ref{DeltaDeltaprimerelation}, $\varphi_{\Delta'}$ is piecewise linear and convex, but not necessarily strictly convex, on the fan $\check\Sigma_{\Delta}$. The maximal domains of linearity of $\varphi_{\Delta'}$ define a fan $\check\Sigma'$ of not necessarily strictly convex cones in $N_{\RR}$, and the fan $\check\Sigma_{\Delta'}$ is then obtained by dividing out each cone in $\check\Sigma'$ by $\Delta'^{\perp}$. This gives the map of fans $\check p_{\Delta\Delta'}$ of (1). In particular, if $\tau\subseteq \partial\Delta$ and $\check\tau$ is the corresponding cone of $\check\Sigma_{\Delta}$, $n\in\check\tau$, $n\not=0$, we have that the hyperplane $\langle n,\cdot\rangle = -\varphi_\Delta(n)$ is a supporting hyperplane of the face $\tau$.
The face of $\Delta'$ corresponding to $\check p_{\Delta\Delta'}(\check\tau)$ is then supported by the hyperplane $\langle n,\cdot\rangle = -\varphi_{\Delta'}(n)$. In particular, if $n\in\Int(\check\tau)$, the image of $n$ in $N_{\RR}/\Delta'^{\perp}$ lies in the interior of $\check p_{\Delta\Delta'}(\check\tau)$. In this case, $n$ defines a supporting hyperplane of $\Delta$ which intersects $\Delta$ only in $\tau$, and defines a supporting hyperplane of $\Delta'$ which intersects $\Delta'$ only in the face dual to $\check p_{\Delta\Delta'}(\check\tau)$. This gives a surjective map from the set of faces of $\Delta$ to the set of faces of $\Delta'$. Thus to prove (2), we just need to show that this map is the map $p_{\Delta\Delta'}$ described in (2). To show this, we first need to show that for any $\tau\subseteq\partial\Delta$, there is a unique $\tau'\in\P_*$ such that $\tau'\not\subseteq\partial \Delta$ and $\tau'\cap\partial\Delta=\tau$. This will show bijectivity of $p^1_{\Delta\Delta'}$. Furthermore, we need to show that $\tau'\cap\Delta'$ is the face $\tau''$ of $\Delta'$ corresponding to $\check p_{\Delta\Delta'}(\check\tau)$. To show both these items, let $n\in\Int(\check\tau)$ be chosen so that $\varphi_K(n)=-1$. Then the affine linear function $-\langle n,\cdot\rangle-\varphi_{\Delta}(n)$ takes the value $0$ on $\tau$ and is strictly negative on $\Delta\setminus\tau$, while $-\langle n,\cdot\rangle -\varphi_{\Delta'}(n)$ takes the value $0$ on $\tau''$ and is strictly negative on $\Delta'\setminus\tau''$. Since $\varphi_{\Delta'}=\varphi_{\Delta}+\varphi_K$ by Proposition \ref{DeltaDeltaprimerelation}, $-\langle n,\cdot\rangle -\varphi_{\Delta}(n)$ takes the value $0$ on $\tau$ and the value $-1$ on $\tau''$. So \begin{align*} &\hbox{$-\langle n,m\rangle-\varphi_{\Delta}(n)=h_*(m)$ for $m\in \Conv(\tau,\tau'')=:\tau'$,}\\ &\hbox{$-\langle n,m\rangle-\varphi_{\Delta}(n)<h_*(m)$ for $m\in \Delta\setminus\tau'$} \end{align*} by the definition of $h_*$. Thus $\tau'\in\P_*$ and $\tau'\cap\partial \Delta=\tau$, $\tau'\cap\Delta'=\tau''$, so $\tau'$ is as desired. Finally, given a cell $\tau'\in\P_*$ with $\tau'\cap\partial\Delta=\tau$, $\tau'\not\subseteq\partial\Delta$, we need to show that $\tau'$ is as constructed above. Indeed, there is an affine linear function $-\langle n,\cdot\rangle+c$ which coincides with $h_*$ on $\tau'$ and is smaller than $h_*$ on $\Delta \setminus\tau'$. Then necessarily the hyperplane $\langle n,\cdot\rangle -c=0$ intersects $\Delta$ precisely in the face $\tau$, so this hyperplane is a supporting hyperplane for the face $\tau$. Thus $n\in\Int(\check\tau)$ and $c=-\varphi_{\Delta}(n)$. Furthermore, for $-\langle n,\cdot\rangle +c$ to take the value $-1$ on $\tau'\cap\Delta'$, we must have $\varphi_K(n)=-1$. Thus $\tau'$ is as constructed in the previous paragraph. The remaining statements of (2) follow easily from the above discussion. For (3), by Prop.~\ref{Sigmafanprop}, we have $$\check\Sigma = \{\{0\},\RR_{\ge 0}\rho\} \cup \{\check\tau|\tau\subseteq\partial\Delta\} \cup\{\RR_{\ge 0}\rho+\check\tau|\tau\subseteq\partial\Delta\}$$ where $\check\tau=\operatorname{{Cone}}(\tau)^\perp\cap\check\sigma$. We claim that, for $\omega\in\check\Sigma$, $$\check\sigma^o\cap\omega = \Conv((\omega\cap\bar{N})\backslash\{0\}).$$ Indeed, $\omega$ is a standard cone which, w.l.o.g., we may assume maximal. Say $v_0,v_1,\ldots,v_{d+1}$ are its primitive integral generators with $v_0=\rho$.
Let $v\in\Delta$ be the vertex such that $(v,1)$ generates the ray dual to the cone generated by $v_1,\ldots,v_{d+1}$. Then the hyperplane $\langle (p_{\Delta\Delta'}(v),1),\cdot\rangle = 1$ contains $v_0,\ldots,v_{d+1}$ and is a supporting hyperplane of $\check\sigma^o$. Thus this hyperplane supports $\Conv\{v_0,\ldots,v_{d+1}\}$ as a face of $\check\sigma^o\cap\omega$. Since $\Conv((\omega\cap \bar N)\setminus\{0\})= \{\sum_i \lambda_i v_i\,|\,\lambda_i\ge 0, \sum_i\lambda_i\ge 1\}$, the claim follows. Now $\check\Sigma$ induces a subdivision of $\check\sigma^o$ (resp.\ $\partial\check\sigma^o$) which we denote by $\P_{\check\sigma^o}$ (resp.\ $\P_{\partial\check\sigma^o}$), see Fig.~\ref{sigmaosub}. \begin{figure} \input{sigmaosub.pstex_t} \caption{$\P_{\check\sigma^o}$ for Example~\ref{basicexamples2}} \label{sigmaosub} \end{figure} Only faces of $\check\sigma^o$ which contain $\rho$ are affected by the subdivision. By the claim, the cells in $\P_{\partial\check\sigma^o}$ properly containing $\rho$ are \[ \{\Conv\{\rho,v^{\check\tau}_1,\ldots,v^{\check\tau}_{r_{\check\tau}}\}|\tau\subseteq\partial\Delta\} \] where $v^{\check\tau}_1,\ldots,v^{\check\tau}_{r_{\check\tau}}$ denote the primitive integral generators of $\check\tau$, the cone of $\check\Sigma$ corresponding to $\tau$. Thus, these cells are in natural bijection with faces of $\Delta$ ($\rho$ itself corresponding to $\Delta$). Note that under the duality of Lemma~\ref{Pstar},(2), the face of $\check\sigma^o$ dual to $\tau\subsetneq\Delta$ is an unbounded face. Again by the claim, each such face has one bounded facet, which is \[ \check\omega_{\tau}:=\Conv\{v^{\check\tau}_1,\ldots, v^{\check\tau}_{r_{\check\tau}}\}. \] By the argument for (2) above, $\check\omega_{\tau}$ is dual to $p^1_{\Delta\Delta'}(\tau)$. In turn, the minimal face of $\check\sigma^o$ containing both $\rho$ and $\check\omega_{\tau}$ is dual to $p_{\Delta\Delta'}(\tau)$. This demonstrates the inclusion reversing bijection. The dimension formula follows from duality and what we said already. \end{proof} The Newton polytope of $w$ is given by \[ \check\Delta_0=\check\Delta\cap\check\sigma^o=\Conv\big(\{\rho\}\cup \{ (n_{\tau},\varphi_{\Delta}(n_{\tau})) \,|\,\hbox{$\tau\subseteq\Delta$ a codimension one face of $\Delta$}\}\big) \] which is also the convex hull of the bounded faces of $\check\sigma^o$. Lemma~\ref{Pstar} then implies \begin{lemma} \label{dimDelta0} We have $\dim\check\Delta_0=d+2$ for $\dim\Delta'>0$ and $\dim\check\Delta_0= d+1$ for $\dim\Delta'=0$. \end{lemma} We set \begin{equation} \label{stricttrafo} W_t = \overline{w^{-1}(t)\cap(\CC^*)^{d+2}} \end{equation} where the overline denotes the closure in $X_\Sigma$. Now, $W_t$ is the strict transform of the hypersurface $W^\sigma_t$ of $X_\sigma$ given by the same equation, because the maps $X_\Sigma\ra X_{\Sigma_*}\ra X_\sigma$ restricted to $(\CC^*)^{d+2}$ give isomorphisms. So we may also take the closure (\ref{stricttrafo}) in $X_{\Sigma_*}$ which we then denote by $W^*_t$.
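As a quick consistency check of Lemma~\ref{dimDelta0} — the following computation is ours and is not needed later — consider again the genus two example. \begin{example} In Ex.~\ref{g2curveproper} we have $\check\Delta_0=\Conv\{(0,0,1),(1,0,0),(0,1,0),(-1,0,3),(0,-1,2)\}$. Subtracting $\rho=(0,0,1)$ from the second, third and fourth vertex gives $(1,0,-1)$, $(0,1,-1)$ and $(-1,0,2)$, which have determinant $1$ and hence form a basis of $\bar N$. Thus $\dim\check\Delta_0=3=d+2$, in accordance with Lemma~\ref{dimDelta0}, since here $d=1$ and $\dim\Delta'=1>0$. \end{example}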
To complete the notation, let $\bar{W_t}$ denote the closure of ${W_t}$ in $X_{\bar{\Sigma}}$ and $\tilde{W_t}$ the closure in $\tilde\PP_{\check\Delta}$ such that we have a diagram \begin{equation} \label{setupWs} \begin{split} \xymatrix@C=30pt { W^{\sigma}_t\ar@{_{(}->}[d] & W^*_t\ar[l]\ar@{_{(}->}[d] & W_t\ar[l]\ar@{_{(}->}[d]\ar@{^{(}->}[r] & \bar W_t\ar@{_{(}->}[d] & \tilde{W}_t\ar@{_{(}->}[d]\ar_{\sim}[l]\\ X_{\sigma} & X_{\Sigma_*}\ar[l] & X_{\Sigma}\ar@{^{(}->}[r]\ar[l] & X_{\bar\Sigma} & \tilde\PP_{\check\Delta}.\ar[l] } \end{split} \end{equation} Given $\tau\in\P$, let $\P_*(\tau)$ denote the smallest cell of $\P_*$ containing $\tau$. For $\tau\in\P_*$, we set $$\check\Delta_{\tau}=\check\Delta_0\cap \check\tau,$$ where $\check\tau$ denotes the face of $\check\sigma^o$ dual to $\tau$. \begin{proposition} \label{W0descprop} \begin{enumerate} \item For $\tau\in\P_*$, the Newton polytope of the hypersurface\footnote{We use the notation $V(\tau)$ as shorthand for $V(\operatorname{{Cone}}(\tau))$.} $V(\tau)\cap W^*_0$ in $V(\tau)$ is $\check\Delta_\tau$. For $v\in\Delta'\cap M$, the divisor $W_0^*\cap D_v$ is ample in $D_v$. \item The intersection of $\bar W_t$ with every closed toric stratum in $X_{\bar\Sigma}$ is either empty or smooth for $t=0$ and $t\in\CC$ general. For $\tau\in\P$, the Newton polytope of the hypersurface $V(\tau)\cap W_0$ in $V(\tau)$ is $\check\Delta_{\P_*(\tau)}$. \item For $t\neq 0$, we have $w^{-1}(t)=W_t$. For $t=0$, we have $$ w^{-1}(0)=W_0\cup \bigcup_{v\in\Int(\Delta)\cap M} D_v, $$ where $D_v\subseteq X_{\Sigma}$ is the toric divisor corresponding to the ray $\operatorname{{Cone}}(v)\in \Sigma$. Furthermore, $w^{-1}(0)$ is normal crossings. \end{enumerate} \end{proposition} \begin{proof} Consider the embedding of polytopes $\check\Delta_0\hra\check\sigma^o$. First assume that $\dim\Delta'>0$, so that $\dim\check\Delta_0=\dim \check\sigma^o$ by Lemma~\ref{dimDelta0}. In view of Lemma~\ref{Pstar},(2), for $\tau\in\P_*$, $\operatorname{{Cone}}(\tau)$ is the normal cone to a face of $\check\sigma^o$. From this embedding and the fact that every bounded face of $\check\sigma^o$ is also a bounded face of $\check\Delta_0$, we see that $\operatorname{{Cone}}(\tau)$ is contained in a normal cone of $\check\Delta_0$, equal to a normal cone of $\check\Delta_0$ provided that $\tau\subseteq\Delta'$. Thus the embedding of polytopes induces a morphism of toric varieties $f:X_{\Sigma_*}\ra \PP_{\check\Delta_0}$. On the other hand, if $\dim\Delta'=0$, then by Lemma~\ref{dimDelta0} one sees that $\check\Delta_0$ is a face of $\check\sigma^o$, and the projection $\bar M\rightarrow \bar M/\ZZ m$ for $m$ the normal vector to the face $\check\Delta_0$ induces again a morphism of toric varieties $f:X_{\Sigma_*}\rightarrow\PP_{\check\Delta_0}$. In either case, $W_0^*=f^{-1}(W_{\check\Delta_0})$ for an ample hypersurface $W_{\check\Delta_0}\subset\PP_{\check\Delta_0}$ given by the same equation as $W_0^*$. Given $\tau\in\P_*$, its dual $\check\tau$ is a face of $\check\sigma^o$ and we have $V(\tau)=\PP_{\check\tau}$. The restriction of $f$ to $\PP_{\check\tau}$ yields the natural map $\PP_{\check\tau}\ra \PP_{\check\tau\cap\check\Delta_0}$. This is an isomorphism if $\tau\not\subseteq\partial\Delta$. In any case, the Newton polytope of $W_0^*\cap \PP_{\check\tau}$ is isomorphic to that of $W_{\check\Delta_0}\cap \PP_{\check\tau\cap\check\Delta_0}$ which is $\check\tau\cap\check\Delta_0$ by ampleness. 
In particular, $W_0^*\cap \PP_{\check\tau}$ is ample in $\PP_{\check\tau}$ if $\tau\not\subseteq\partial\Delta$. We have shown (1) and will now deduce (2). The assertion that the Newton polytope of $V(\tau)\cap W_0$ is $\check\Delta_{\P_*(\tau)}$ follows from the fact that $W_0$ is the pullback of $W_0^*$ under the map $X_\Sigma\ra X_{\Sigma_*}$ which takes a stratum $V(\tau)$ to $V(\P_*(\tau))$. Since the coefficients of $W_{\check\Delta_0}$ are assumed general, $W_{\check\Delta_0}$ is $\check\Delta_0$-regular. The remainder of (2) follows from the fact that regularity is preserved under pullback, see \cite{batyrev2},\,Prop.\,3.2.1, and the smoothness of $X_{\bar\Sigma}$ in a neighbourhood of the closure of $W_t$. Finally, for (3), note that, for $t\neq 0$, $W_t$ is the proper transform of the hypersurface $W^\sigma_t$ in $X_\sigma$ because $W^\sigma_t$ is $\sigma$-regular, which is not true for $W^\sigma_0$ because the latter contains the origin of $X_\sigma$. Since $w^{-1}(0)$ is the total transform of $W^\sigma_0$, isomorphic over the dense torus, the irreducible components of $w^{-1}(0)$ different from $W_0$ need to be toric divisors of $X_\Sigma$, the set of which is indexed by $\Delta\cap M$. The multiplicities may be computed locally as follows: A standard fact of toric geometry says that the monomial $z^{(n,r)}$ vanishes to order $\langle (n,r),(v,1)\rangle =\langle n,v\rangle+r$ along $D_v$. In particular, if $v\not\in\partial\Delta$, then for any $(n,r)\in \check\sigma^o$, $\langle (n,r),(v,1)\rangle >0$, so all the monomials $z^{(n,r)}$ appearing in $w$ vanish on $D_v$. Furthermore, the monomial $z^{\rho}$ vanishes to order $1$ on $D_v$, so $D_v\subseteq w^{-1}(0)$ and $D_v$ appears with multiplicity one. On the other hand, if $v\in\partial\Delta$, there is at least one monomial $z^{(n,r)}$ appearing in $w$ not vanishing on $D_v$. Moreover, all such non-vanishing monomials are linearly independent after restriction to $D_v$. Hence, for a general choice of coefficients, $w$ does not vanish identically on $D_v$, so $D_v\not\subseteq w^{-1}(0)$ for $v\in\partial\Delta$, which gives the asserted description of $w^{-1}(0)$. That $w^{-1}(0)$ is normal crossings then follows from (2) together with the fact that the toric boundary of the non-singular variety $X_\Sigma$ is normal crossings. \end{proof} \begin{corollary} \label{handlebodies} For $\tau\in\P$, $\tau\subset\partial\Delta$, denoting by $T_\tau$ the torus orbit of $X_{\Sigma}$ corresponding to $\operatorname{{Cone}}(\tau)$, we have $$\begin{array}{rcl} w^{-1}(t)\cap T_\tau &\cong& H^{\codim \P_*(\tau)-1}\times (\CC^*)^{\dim\P_*(\tau)-\dim\tau} \qquad \hbox{ for }t\neq 0,\\ w^{-1}(0)\cap T_\tau &\cong& H^{\codim \P_*(\tau)-2}\times (\CC^*)^{\dim\P_*(\tau)-\dim\tau+1} \end{array} $$ where $\codim \P_*(\tau)=d+1-\dim\P_*(\tau)$ and $H^k$ denotes a $k$-dimensional handlebody, i.e., the intersection $H\cap (\CC^*)^{k+1}$ for a general hyperplane $H$ in $\PP^{k+1}$. \end{corollary} \begin{proof} Given $\tau$ as in the assertion, $\P_*(\tau)$ is a proper face of $\Delta$. By Prop.~\ref{W0descprop} and Lemma~\ref{pdeltadeltaprime},(3), $\check\Delta_{\P_*(\tau)}$, the Newton polytope of $W^*_0\cap T_{\P_*(\tau)}$, is a standard simplex. It is the convex hull of the primitive generators of the face of $\check\sigma$ dual to the face $\operatorname{{Cone}}(\P_*(\tau))$ of $\sigma$. Thus the Newton polytope of $W^*_t\cap T_{\P_*(\tau)}$ for $t\neq 0$ is $\Conv(\{0\}\cup \check\Delta_{\P_*(\tau)})$. Checking dimensions implies $W^*_0\cap T_{\P_*(\tau)}=H^{d-\dim \P_*(\tau)-1} \times\CC^*$ and $W^*_t\cap T_{\P_*(\tau)}=H^{d-\dim \P_*(\tau)}$ for $t\neq 0$.
The assertion follows from the fact that the restriction of the map $X_{\Sigma}\ra X_{\Sigma_*}$ to $T_\tau$ is a projection $T_\tau\cong T_{\P_*(\tau)}\times (\CC^*)^{\dim\P_*(\tau)-\dim\tau}\ra T_{\P_*(\tau)}$ and $w^{-1}(t)\cap T_\tau$ is the pullback of $W^*_t\cap T_{\P_*(\tau)}$ under this map. \end{proof} \begin{example} We return to Example \ref{basicexamples1} and Rem.~\ref{remCYgeneralize} where $\Delta$ is a reflexive polytope. Then the two Landau-Ginzburg models $w:X_{\Sigma}\rightarrow \CC$ and $\check w:X_{\check\Sigma} \rightarrow\CC$ have similar structure. Both $\check w^{-1}(0)$ and $w^{-1}(0)$ have two irreducible components. One of the irreducible components of $\check w^{-1}(0)$ is isomorphic to $\PP_{\Delta}$, and one of the irreducible components of $w^{-1}(0)$ is isomorphic to $\PP_{\Delta^*}$. The singular loci of $\check w^{-1}(0)$ and $w^{-1}(0)$ are Calabi-Yau hypersurfaces in, respectively, $\PP_{\Delta}$ and $\PP_{\Delta^*}$, and these hypersurfaces are mirror under the Batyrev construction \cite{batyrev2}. \end{example} \begin{example} \label{genus2.1} We extend Examples \ref{basicexamples2} and \ref{g2curveproper}. There are two interior lattice points, $v_1=(1,1)$ and $v_2=(2,1)$, giving toric components $D_{v_1}$ and $D_{v_2}$ of $w^{-1}(0)$. From the particular choice of triangulation given in Fig.~\ref{genus2new}, one sees that $D_{v_1}$ and $D_{v_2}$ are given by the fans depicted in Fig.~\ref{genus2fans} and thus both are isomorphic to $\PP^1\times\PP^1$ blown up in three points. \begin{figure} \input{fans.pstex_t} \caption{The fans of $D_{v_1}$ and $D_{v_2}$} \label{genus2fans} \end{figure} \begin{figure} \input{threesurfaces.pstex_t} \caption{The mirror to a genus two curve} \label{threesurfaces} \end{figure} By the adjunction formula, $D_{v_1}\cap W_0$ and $D_{v_2}\cap W_0$ are rational curves, and one can easily compute that $W_0\cap D_{v_1}\cap D_{v_2}$ consists of two points. We deduce that $w^{-1}(0)$ is as depicted on the right in Fig.~\ref{threesurfaces} and its singular locus $\check S$ is a union of three $\PP^1$'s as depicted on the left in Fig.~\ref{threesurfaces}. Moreover, $\tilde W_0$ is a rational surface because $\bar w^{-1}(0)$ is clearly a type III degeneration of K3 surfaces, being simple normal crossings with triple points, and hence all components must be rational. One can show that $\tilde W_0$ is a Hirzebruch surface $\FF_2$ blown up in $12$ points. \end{example} \begin{remark}[The toric degeneration $w_t$] \label{mirrorfamily} In Rem.~\ref{remCYgeneralize}, we discussed that the family of potentials $\check w_t$ is only a partial toric degeneration in general. However, on the mirror dual side, if we set $$w_t = tw + z^\rho,$$ then $w_t$ defines a toric degeneration of $w^{-1}(0)$. Indeed, recall that $\rho$ evaluates to $1$ on each primitive generator of a ray in $\Sigma$ and thus the zero locus of $z^\rho$ is the reduced union of all toric divisors in $X_\Sigma$. Since by Prop.~\ref{W0descprop},(3), $w^{-1}(0)$ already contains those corresponding to rays generated by $v\in\Delta'\cap M$, we find that $w_t$ degenerates the component $W_0$ to the union of all toric divisors $D_v$ with $v\in\partial\Delta$. Note, however, that the compactified degeneration of $\tilde W_0$ in $\tilde\PP_{\check\Delta}$ is toric but possibly non-reduced along $D_\infty$.
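As a sketch in the running example — using the coefficient names of Ex.~\ref{genusgmirror} on $X_\sigma$, where $z=z^{\rho}$ — the family reads $$w_t=tc_xx+tc_yy+(1+tc_z)z+tc_uu+tc_vv,$$ so $w_0=z^{\rho}$, whose zero locus is indeed the reduced union of all toric divisors.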
\begin{figure} \input{degeneratew.pstex_t} \caption{Toric degeneration of $w^{-1}(0)$ for the genus two curve mirror} \label{degeneratew} \end{figure} See Fig.~\ref{degeneratew} for $w_0$ in the example of the genus two curve. Note that the critical locus of $w_0$ is non-compact for $d\ge1$ whereas that of $w_t$ for $t\neq 0$ is compact. The choices for the families $w_t$ and $\check w_t$ are motivated by mirror symmetry: up to instanton corrections and up to fixing coefficients, the complex structure parameter $t$ in $w_t$ (resp.\ $\check w_t$) corresponds to the K\"ahler parameter given by scaling the sum of the complexified classes of the exceptional divisors in the resolution in $X_\Sigma$ (resp.\ $X_{\check\Sigma}$) mapping to the origin in $X_\sigma$ (resp.\ $X_{\check\sigma}$). \end{remark} \subsection{The intersection complex of $w^{-1}(0)$} Recall the following standard definition: \begin{definition} Let $X=\bigcup_{i\in I} X_i$ be a strictly normal crossings variety. The \emph{dual intersection complex} $\Gamma_X$ of $X$ is the simplicial complex with vertex set $I$, with one simplex $\langle i_0,\ldots,i_p\rangle$ for every connected component of $X_{i_0}\cap \cdots \cap X_{i_p}$. \end{definition} \begin{example} The dual intersection complex of $\check w^{-1}(0)$ is a closed interval. In Example \ref{basicexamples1}, the dual intersection complex of $w^{-1}(0)$ is also a closed interval. In Example \ref{basicexamples2}, the dual intersection complex of $w^{-1}(0)$ consists of two triangles identified along their boundaries. Hence this complex is homeomorphic to $S^2$. \end{example} Note that the dual intersection complex of $w^{-1}(0)$ (resp.\ $\check w^{-1}(0)$) is the same as that of $\bar w^{-1}(0)$ (resp.\ ${\bar{\check w}}^{-1}(0)$). \begin{proposition} \label{propdualintcomplex} The set of vertices of the dual intersection complex $\Gamma_{w^{-1}(0)}$ of $w^{-1}(0)$ is \[ (\Delta'\cap M) \cup \{u\} \] where $u$ represents $W_0$. The precise structure of $\Gamma_{w^{-1}(0)}$ depends on $\dim\Delta'$: \begin{enumerate} \item If $\dim\Delta'\le d-1$ then $\Gamma_{w^{-1}(0)}$ is the cone over $\Delta'$. Precisely, the simplices are \[ \{\langle u\rangle\} \cup \{\langle v_0,\ldots,v_p\rangle \,|\,\Conv\{v_0,\ldots,v_p\}\in \P\} \cup \{\langle v_0,\ldots,v_p,u\rangle \,|\,\Conv\{v_0,\ldots,v_p\}\in \P\}. \] In particular, $\Gamma_{w^{-1}(0)}$ is topologically a ball of dimension $\dim\Delta'+1$. \item If $\dim\Delta'=d$ then we have one simplex $\langle u\rangle$, one simplex $\langle v_0,\ldots,v_p\rangle$ whenever $\Conv\{v_0,\ldots,v_p\}\in\P$, one simplex $\langle v_0,\ldots,v_p,u\rangle$ whenever \[ \hbox{$\Conv\{v_0,\ldots,v_p\}\in \P$ and $\Conv\{v_0,\ldots,v_p\}\subseteq\partial\Delta'$,} \] and two simplices $\langle v_0,\ldots,v_p,u\rangle$ whenever \[ \hbox{$\Conv\{v_0,\ldots,v_p\}\in\P$ and $\Conv\{v_0,\ldots,v_p\}\not\subseteq\partial\Delta'$.} \] So topologically, $\Gamma_{w^{-1}(0)}$ is obtained by taking two cones over $\Delta'$ and gluing them together along the boundary. In particular $\Gamma_{w^{-1}(0)}$ is a $(d+1)$-dimensional sphere. \item If $\dim\Delta'=d+1$ then the simplices of $\Gamma_{w^{-1}(0)}$ are \begin{align*} &\{\langle u\rangle\}\cup\{\langle v_0,\ldots,v_p\rangle \,|\,\Conv\{v_0,\ldots,v_p\}\in \P\}\\ \cup& \{\langle v_0,\ldots,v_p,u\rangle \,|\,\hbox{$\Conv\{v_0,\ldots,v_p\}\in \P$ and $\Conv\{v_0,\ldots,v_p\}\subseteq\partial\Delta'$}\}. \end{align*} Thus $\Gamma_{w^{-1}(0)}$ is again a $(d+1)$-dimensional sphere.
\end{enumerate} \end{proposition} \proof By Proposition \ref{W0descprop}, the description of the vertices of $\Gamma_{w^{-1}(0)}$ is clear. Let \[ \P_{\Delta'}:=\{\omega\in\P\,|\,\omega\subseteq\Delta'\}. \] Clearly, for any cell $\omega\in\P_{\Delta'}$ with vertices $v_0,\ldots,v_p$, the toric stratum of $X_{\Sigma}$ determined by $\operatorname{{Cone}}(\omega)$ is just $D_{\omega}:= D_{v_0}\cap\cdots\cap D_{v_p}$, hence $\langle v_0,\ldots,v_p\rangle$ is a simplex in $\Gamma_{w^{-1}(0)}$. To understand the remaining simplices, we just need to understand $D_{\omega}\cap W_0$. The family $w_t$ in Remark~\ref{mirrorfamily} induces a linear equivalence $W_0\sim \sum_{v\in\partial\Delta} D_v$. Restricting this to $D_\omega$ yields $$D_{\omega}\cap W_0 \sim \sum_{v\in\partial\Delta} D_v\cap D_\omega.$$ We are interested in the number of connected components of $D_\omega\cap W_0$. Using the combinatorial description of the right-hand side, this number can be read off from the fan of $D_\omega$. This fan is given by \[ \Sigma(\omega)=\{(\operatorname{{Cone}}(\tau)+\RR \operatorname{{Cone}}(\omega))/\RR \operatorname{{Cone}}(\omega)\,|\, \tau\in\P, \omega\subseteq\tau\} \] in $\bar M_\RR/(\RR \operatorname{{Cone}}(\omega))$. For two polyhedra $\tau\subseteq\tau'\subseteq \bar M_{\RR}$ (resp.\ in $M_\RR$), we choose any point $x\in\Int(\tau)$ and write \[ T_{\tau}\tau'=\{c(v-x)\,|\,c\in \RR_{\ge 0}, v\in\tau'\}; \] this is the tangent wedge to $\tau'$ along $\tau$. Using this notation, we observe that the rays of $\Sigma(\omega)$ which don't correspond to a divisor $D_v\cap D_\omega$ with $v\in\partial\Delta$ span $$(T_{\operatorname{{Cone}}(\omega)}\operatorname{{Cone}}(\Delta')+\RR \operatorname{{Cone}}(\omega))/\RR \operatorname{{Cone}}(\omega).$$ By standard toric geometry, the number of connected components of $W_0\cap D_{\omega}$ is the same as the number of connected components of \[ (\bar{M}_{\RR}/\RR \operatorname{{Cone}}(\omega))\setminus \big((T_{\operatorname{{Cone}}(\omega)}\operatorname{{Cone}}(\Delta')+\RR \operatorname{{Cone}}(\omega))/\RR \operatorname{{Cone}}(\omega)\big), \] or equivalently, the number of connected components of $M_{\RR}\setminus T_{\omega}\Delta'$. This now gives the case-by-case description of $\Gamma_{w^{-1}(0)}$. If $\dim\Delta'\le d-1$, i.e., $\codim(\Delta'\subseteq M_\RR)\ge 2$, then $M_{\RR}\setminus T_{\omega}\Delta'$ is connected and non-empty for all $\omega\in\P_{\Delta'}$, so $\Gamma_{w^{-1}(0)}$ is just a cone over $\Delta'$ as described in item (1) of the statement of the Proposition. If $\dim \Delta'=d$ then, for $\omega\subseteq\partial\Delta'$, $M_{\RR}\setminus T_{\omega}\Delta'$ is connected, and there is a unique simplex of $\Gamma_{w^{-1}(0)}$ with vertices $u$ and the vertices of $\omega$. If $\omega\in\P_{\Delta'}, \omega\not\subseteq\partial\Delta'$ then $M_{\RR}\setminus T_{\omega}\Delta'$ has two connected components. In this case, there are two simplices with vertices $u$ and the vertices of $\omega$. This gives the description in item (2). Finally, if $\dim\Delta'=d+1$ then if $\omega\subseteq \partial\Delta'$, there is again a unique simplex of $\Gamma_{w^{-1}(0)}$ with vertices $u$ and the vertices of $\omega$. On the other hand, if $\omega\not\subseteq \partial\Delta'$ then in fact $D_v\cap D_\omega=\emptyset$ for all $v\in\partial\Delta$, so $D_{\omega}$ is disjoint from $ W_0$. (Equivalently, $M_{\RR}\setminus T_{\omega}\Delta'$ is empty.)
\qed \section{Homological mirror symmetry and (co-)homology} \label{section1.5} A discussion of the categories related to our construction has already appeared in \cite{KKOY09} and \cite{Ka10}. We just quickly review the main ideas and apply these to the discussion of cohomology. Following \cite{Or11}, to a Landau-Ginzburg model $(X,w)$, we associate the triangulated category $\op{D}^b(X,w)$ which is defined as $$\op{D}^b(X,w) = \prod_{t \in\AA^1} \op{D}^b_\sing(w^{-1}(t))$$ where $\op{D}^b_\sing(w^{-1}(t))$ is the Verdier quotient of $\op{D}^b(w^{-1}(t))$, the bounded derived category of coherent sheaves on $w^{-1}(t)$, by $\op{Perf}(w^{-1}(t))$, the full subcategory of perfect complexes (i.e., complexes quasi-isomorphic to bounded complexes of locally free sheaves of finite rank). For a non-critical value $t$ of $w$, we have $\op{D}^b_\sing(w^{-1}(t))=0$. The \emph{generalized homological mirror symmetry conjecture} suggests that for mirror dual models $(X_\Sigma,w)$ and $(X_{\check\Sigma},\check w)$ given by our construction, there are equivalences of categories \begin{equation} \label{HMS1} \op{D}^b(X_\Sigma,w) \cong \op{DFS}(X_{\check\Sigma},\check w) \end{equation} \begin{equation} \label{HMS2} \op{D}^b(X_{\check\Sigma},\check w) \cong \op{DFS}(X_{\Sigma}, w) \end{equation} where $\op{DFS}(X,w)$ is the derived Fukaya-Seidel category of a symplectic fibration $w:X\ra\CC$. In general, the Fukaya-Seidel category $\op{FS}(X,w)$ is a conjectural $A_\infty$-category at least part of whose objects are Lagrangians which are vanishing cycles over some subsets of the critical locus. It has been rigorously defined for the case where $w$ is a Lefschetz fibration in \cite{Sei01} as follows: Fix a non-critical value $\lambda_0$ of $w$, and choose paths $\gamma_1,\ldots,\gamma_n$ in $\CC$ which connect the critical values $\lambda_1,\ldots,\lambda_n$ of $w$ to $\lambda_0$. Parallel transport of cycles vanishing at $\lambda_i$ along $\gamma_i$ should give Lagrangian submanifolds of $w^{-1}(\lambda_0)$. These are the objects of the Fukaya-Seidel category. The morphisms are Floer complexes. Taking twisted complexes and idempotent completion finally yields $\op{DFS}(X,w)$. \subsection{Equivalences for a smooth critical locus: Renormalization flow and Kn\"orrer periodicity} \label{Section_Knoerrer} It was pointed out to us by Denis Auroux that, if $S=\crit(\check w)$ is a smooth compact symplectic manifold, by standard symplectic arguments, $\op{FS}(X_{\check \Sigma}, \check w)$ can be defined. Moreover, there is a natural full and faithful functor $\phi: \op{Fuk}(S)\ra \op{FS}(X_{\check \Sigma},\check w)$ given by mapping a Lagrangian $L$ in $S$ to the set of points in a suitably chosen fixed non-singular fibre which are taken into $L$ under the gradient flow of $\op{Re}(\check w)$ for some fixed metric. This functor is expected to be essentially surjective when one restricts $(X_{\check \Sigma},\check w)$ to a neighbourhood of $S$. In the following discussion, we assume this is the case. Note that $\op{DFuk}(S)$ is a $\ZZ_2$-graded Calabi-Yau category\footnote{This notion was introduced by Kontsevich and means that this triangulated category supports a right Serre functor which is isomorphic to $[d]$ for some $d$, where $[\cdot]$ is the shift endo-functor.} and thus its Hochschild homology is $\ZZ_2$-graded and isomorphic to the Hochschild cohomology, see also the next section. By homological mirror symmetry, i.e., by \eqref{HMS1}, $\op{D}^b(X_\Sigma,w)$ should also be a Calabi-Yau category.
Indeed, the anti-canonical divisor of $X_\Sigma$ is trivial, so by \cite{LP11},\,Thm.\,4.1, the subcategory of compact objects is a Calabi-Yau category and it is expected that this generates the entire category. There is a suggestion of how to refine the grading in \cite{Sei08},\,\S8 for a genus two curve. It is currently unknown whether there is a general way to refine the grading in the cases relevant to us. For the complex geometry, let $S=\crit(\check w)$ be given as a complete intersection in a toric variety as in \S\ref{completeint}. By \cite{HW09},\,Thm.\,2 we have an equivalence\footnote{The Calabi-Yau assumption in loc.cit. can be dropped for this result.} \begin{equation} \label{HerbstWalcher} \op{D}^b(S)\cong \op{D}^b(X,w,\ZZ^k) \end{equation} where $\ZZ^k$ indicates a $\ZZ^k$-grading given by the $(\CC^*)^k$-action on $\check w^{-1}(0)$ induced from the split vector bundle. The hypersurface case is also treated in \cite{Is10},\cite{Sh11}. This equivalence is called \emph{renormalization flow} in the physics literature. We assumed in \S\ref{section1} that $\PP_\Delta$ is smooth. More generally, one wants to drop the smoothness assumption on $\PP_\Delta$ and work instead with a maximal projective crepant partial resolution $\tilde\PP_\Delta$ of a singular $\PP_\Delta$ as in the Batyrev-Borisov construction \cite{BB94}. Assuming $S$ is a Calabi-Yau manifold after such a resolution, it was shown in \cite{HW09},\,Thm.\,3 that different choices of a resolution give non-canonically equivalent categories $\op{D}^b(S)$. By \cite{Or05},\,Cor.\,3.2, we have \begin{proposition}[Orlov]\label{Knoerrer} Let $S$ be smooth and quasi-projective, let $f,g\in\Gamma(S,\shO_S)$, and let $x$ be a coordinate on $\AA^1$. If $V(g)\subseteq S$ is smooth and $f|_{V(g)}$ is non-constant, then there is a natural equivalence $$D^b(V(g),f|_{V(g)})\cong D^b(S\times \AA^1,f+gx).$$ \end{proposition} In some sense (\ref{HerbstWalcher}) may be viewed as a version of this for the case where $f|_{V(g)}=0$. \subsection{Hochschild (co-)homology of a smooth critical locus} On the symplectic side, there are morphisms $$\op{HH}_{i-d}(\op{Fuk}(S)) \stackrel{\alpha}{\lra} \op{QH}^i(S) {\ra} \op{HH}^i(\op{Fuk}(S))$$ where the left and right are the Hochschild homology and cohomology of the $A_\infty$-category $\op{Fuk}(S)$ and the middle one is the quantum cohomology of $S$. These are conjectured to be isomorphisms under certain conditions, see \cite{Ko94}, \cite{AFOO} and for references with $\op{SH}$ in place of $\op{QH}$ see \cite{Sei07}, \cite{Ab10}, \cite{Ga}. For the following considerations, let us assume that $\alpha$ is an isomorphism. On the complex side, we have by the Kontsevich-Hochschild-Kostant-Rosenberg theorem for the Hochschild homology and cohomology rings of $\op{D}^b(S)$ respectively \begin{align} \label{KHKR1} \op{HH}^i(S)= {} & \bigoplus_{p+q=i} H^q(S,{\bigwedge}^p\shT_S), \\ \label{KHKR2} \op{HH}_i(S)= {} & \bigoplus_{p-q=i} H^q(S,\Omega^p_S). \end{align} In the classical limit $\op{QH}^i(S)$ becomes $H^i(S)$. Note that when $S,\check S$ are smooth Calabi-Yau manifolds\footnote{Calabi-Yau means for us in particular $h^{0,k}(S)=h^k(S^d)$ for $d=\dim S$.}, this gives a way of deducing the duality of Hodge numbers $h^{p,q}(S)=h^{d-p,q}(\check S)$ from the (generalized) homological mirror symmetry conjecture if $d=\dim S\le 5$.
Given all the assumptions, we have \begin{equation} \label{CYHodgededuce} \begin{array}{rcl} \bigoplus_{p+q=i}{H^{p,q}(S)}&\cong& H^i(S)\\ &\cong& \op{QH}^i(S)\\ &\cong& \op{HH}_{i-d}(\op{Fuk}(S))\\ &\cong& \op{HH}_{i-d}(\op{D}^b(\check S))\\ &\cong& \bigoplus_{p-q=i-d}H^{p,q}(\check S)\\ &\cong& \bigoplus_{p+q=2d-i}H^{d-p,q}(\check S). \end{array} \end{equation} In higher dimensions one needs to add the information of a monodromy action. \subsection{Hochschild (co-)homology of a singular critical locus} We discuss here the case where $\check S$ is compact but very singular, e.g., where $\check S$ looks like the mirror of a hypersurface $S$ of positive Kodaira dimension. Given a Landau-Ginzburg model $w:X\ra \CC$, by \cite{Or11},\,Thm.\,3.5, there is an equivalence of triangulated categories $$\op{D}^b(X,w)\cong \prod_{t \in\CC} \op{MF}(X,w-t)$$ where $\op{MF}(X,w)$ is the triangulated category of matrix factorisations defined in loc.cit. It comes with a natural differential $\ZZ/2\ZZ$-graded enhancement $\op{MF}^{\op{dg}}(X,w)$ (see \cite{Or11},\,Rem.\,2.6) which is needed in order to define its Hochschild homology and cohomology. By \cite{LP11},\,3.1, for $i=0,1$, we then have\footnote{As mentioned in the introduction of loc.cit., the requirement of a single critical value $0$ as assumed in loc.cit.\ can easily be removed in order to get the result stated here.} $$ \underset{k\equiv i \op{mod} 2}{\bigoplus} \op{HH}^k(\op{D}^b(X,w)) \cong \underset{k\equiv i \op{mod} 2}{\bigoplus} \HH^k(X,(\textstyle\bigwedge^\bullet \shT_X,\iota_{dw})) $$ where $\iota_{dw}$ denotes contraction by $dw$. According to \cite{LP11},\,3.2 one also expects \begin{equation} \label{HHmod2} \underset{k\equiv i \op{mod} 2}{\bigoplus} \op{HH}_k(\op{D}^{b}(X,w)) \cong \underset{k\equiv i \op{mod} 2}{\bigoplus} \HH^k(X,(\Omega_X^\bullet,dw\wedge)). \end{equation} In fact, one desires a $\ZZ$-graded enhancement of $\op{MF}(X,w)$ instead of a $\ZZ/2\ZZ$-graded one in order to be able to ``remove'' $\oplus_{k\equiv i \op{mod} 2}$ from the above equalities. Note, however, that ``removing'' cannot hold literally. For example, for the setup of (\ref{HerbstWalcher}), (\ref{HHmod2}) becomes (\ref{KHKR2}). The right hand side of \eqref{KHKR2} involves individual Hodge groups. On the other hand, in the cohomology of the sheaf of vanishing cycles, appearing on the right hand side of \eqref{HHmod2}, we can't identify this splitting. Assuming the generalized homological mirror symmetry conjecture holds for a mirror pair $(X_{\check\Sigma},\check w), (X_{\Sigma},w)$ of our construction in \S\ref{section1} (i.e., with $S=\crit(\check w)$ smooth) we deduce for $i=0,1$, \begin{equation} \label{oneside} \underset{k\equiv i \op{mod} 2}{\bigoplus} H^k(S,\CC) \cong \underset{k\equiv i \op{mod} 2}{\bigoplus} \HH^{k-d}(\check S,\shF_{\check S}) \end{equation} by using (\ref{CYHodgededuce}) on one side of the mirror pair (unlike in the Calabi-Yau case where it applies on both sides) and combining it with the functor $\phi$ from the beginning of \S\ref{Section_Knoerrer}, with (\ref{HHmod2}) and Thm.~\ref{sabbahmaintheorem}. In fact, we prove a much stronger result in Thm. \ref{mainthmintro}. This suggests there might be a more refined version of homological mirror symmetry in this situation. \subsection{A conjecture on the Hochschild cohomology} In the light of the previous discussion, the calculations of this paper really only represent half of what one should expect, in the following sense.
Except for the Calabi-Yau case, we expect that the Hochschild homology and cohomology of $\op{D}^b(S)$ differ; e.g., the (co-)homologies generally differ in the Fano case. However, (\ref{oneside}) only deals with the Hochschild homology. We also wish to identify the relevant Hochschild cohomology group of $\op{D}^b(S)$ on the singular mirror $\check S$. Since $\op{DFuk}(S)$ is a Calabi-Yau category, so that its Hochschild cohomology is isomorphic to its Hochschild homology, this cannot be understood from (\ref{HMS1}); we should look at (\ref{HMS2}) instead. However, as mentioned before, there is no known construction of $\op{FS}(X_{\Sigma},w)$ or a conjecturally equivalent category $\op{Fuk}(\check S,\shF_{\check S})$. Furthermore, we don't know how to relate this to a version of quantum cohomology for $(\check S,\shF_{\check S})$. Despite all this being rather speculative, we still have some evidence that the cohomologies match up. In fact, we conjecture \begin{conjecture} \label{conjectureHH} For $S,\check S$ the singular loci of $\check w^{-1}(0),w^{-1}(0)$ for a mirror pair of Landau-Ginzburg models as constructed in \S \ref{section1}, we have $$\op{HH}^i(S) \cong H^i(\check S,\CC).$$ \end{conjecture} As evidence for this conjecture, note that it holds when $S$ is a curve. Indeed, we have, for $g\ge 2$ the genus of $S$, \begin{align*} \op{HH}^0(S){} = & H^0(S,\O_S)\cong \CC\\ \op{HH}^1(S){} = & H^1(S,\O_S)\oplus H^0(S,\shT_S) \cong \CC^g\\ \op{HH}^2(S){} = & H^1(S,\shT_S)\cong \CC^{3g-3}. \end{align*} If $S$ is defined as a hyperplane section of $\PP_{\Delta}$ for $\Delta$ satisfying Assumption \ref{overallhypo}, it is standard that $g=\#\Int(\Delta)\cap M=\#\Delta'\cap M$. Now $\check S$ is connected, so $H^0(\check S,\CC)=\CC\cong \op{HH}^0(S)$. The curve $\check S$ is a union of rational components, but it is easy to see that its intersection complex is a graph of genus $g$, and thus $H^1(\check S,\CC)\cong\CC^g\cong \op{HH}^1(S)$. Finally, \[ H^2(\check S,\CC)\cong\CC^{\hbox{\# of irreducible components of $\check S$}}. \] From the combinatorial description of $\check S$ given in Prop.\ \ref{propdualintcomplex}, one sees that this number of irreducible components is $e+b$, where $e$ is the number of edges of $\P\cap\Delta'$ and $b:=\#\partial\Delta'\cap M$. Note that $b$ is also the number of edges of $\P\cap\Delta'$ contained in $\partial\Delta'$. Let $f$ be the number of two-dimensional cells (standard simplices) in $\P\cap\Delta'$. Then the area $A$ of $\Delta'$ is $f/2$, but by Pick's Theorem we also have $A=i+b/2-1$, where $i=\#\Int(\Delta')\cap M$. Also $1=\chi(\Delta')=(b+i)-e+f$, where $\chi$ denotes the topological Euler characteristic (using compactly supported cohomology). From these two equations one calculates that $e+b= 3(i+b)-3=3g-3$, as desired. We have also checked that Conjecture~\ref{conjectureHH} holds when $S$ is a quintic surface in $\PP^3$. \section{Hodge numbers of hypersurfaces in projective toric varieties} \label{section2} In this section, we recall the results of Danilov and Khovanskii about the Hodge numbers of a regular hypersurface in a non-singular toric variety. We will later compare this with the Hodge numbers of the mirror of such a hypersurface.
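As a simple special case to keep in mind (a standard check, not needed in the sequel): if $\Delta\subseteq\RR^2$ is $e$ times the standard $2$-simplex, then $\PP_\Delta=\PP^2$ and $S$ is a smooth plane curve of degree $e$, and the results recalled below return $$h^{1,0}(S)=\#\Int(\Delta)\cap M=\begin{pmatrix}e-1\\2\end{pmatrix},$$ recovering the classical degree-genus formula.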
We recall: \begin{definition} \label{HDdef} For a variety $X$, one defines the $(p,q)$-th and $p$-th \emph{Hodge-Deligne numbers} $$e^{p,q}(X)=\sum_{i}(-1)^{i} h^{p,q}\, H^{i}_c(X,\CC),$$ $$e^p(X)=\sum_{q} e^{p,q}(X)\stackrel{{q=q'+k}\atop{i=p+q'}}{=}(-1)^p\sum_{q',k}(-1)^{q'} h^{p,q'+k}\, H^{p+q'}_c(X,\CC).$$ \end{definition} We fix a polytope $\Delta\subseteq M_{\RR}$ as usual with $\dim\Delta=\dim M_\RR=d+1$ and assume that it comes with a polyhedral decomposition $\P$ into standard simplices. We also assume that $\PP_{\Delta}$ is a non-singular toric variety. Note that $\PP_{\Delta}$ comes with the ample line bundle $\O_{\PP_{\Delta}}(1)$. We pick a general section of this line bundle, defining a non-singular hypersurface $S$ in $\PP_{\Delta}$. \begin{proposition} \label{HodgenumbersS} \begin{enumerate} \item $h^{p,q}(S)=0$ unless $p=q$ or $p+q=d$. \item For $\tau\in\P$, let $\Delta(\tau)$ be the minimal face of $\Delta$ containing $\tau$. Then \begin{align*} (-1)^pe^p(S)=\sum_q (-1)^{q}h^{p,q}(S)= {} & -\sum_{\tau\subseteq\Delta} (-1)^{\dim\tau} \begin{pmatrix} \dim\tau\\ p+1\end{pmatrix} \\ &+\sum_{\tau\in\P}(-1)^{\dim\tau} \begin{pmatrix}\dim \Delta(\tau)-\dim\tau\\ p+1\end{pmatrix} \end{align*} \item For $2p>d$, \[ h^{p,p}(S)=h^{p+1,p+1}(\PP_{\Delta})=(-1)^{p+1}\sum_{\tau\subseteq \Delta} (-1)^{\dim\tau} \begin{pmatrix} \dim\tau\\ p+1\end{pmatrix} \] and \[ h^{p,d-p}(S)=\sum_{\tau\in\P}(-1)^{d-p+\dim\tau} \begin{pmatrix} \dim\Delta(\tau)-\dim\tau\\ p+1 \end{pmatrix}. \] \end{enumerate} \end{proposition} \begin{figure} \input{hodgeS.pstex_t} \caption{The Hodge diamond of $S$} \label{hodgeS} \end{figure} \proof This is just rewriting formulas of \cite{DK86},\,5.5. We begin with \[ \sum_q (-1)^{p+q}h^{p,q}(S)= (-1)^{p+1}\sum_{\tau\subseteq\Delta} (-1)^{\dim\tau}\begin{pmatrix} \dim\tau\\ p+1\end{pmatrix} -\sum_{\omega\subseteq\Delta} (-1)^{\dim\omega} \varphi_{\dim\omega-p}(\omega). \] Here the sum is over all faces $\tau$ (resp.\ $\omega$) of $\Delta$, and \[ \varphi_i(\omega)=(-1)^i\sum_{j\ge 1} (-1)^j\begin{pmatrix}\dim\omega+1\\ i-j\end{pmatrix} l^*(j\omega) \] with $l^*(j\omega)$ the number of interior integral points in $j\omega$. Using $\P$, we can compute this as follows. If $\tau$ is a standard $i$-dimensional simplex, then $l^*(j\tau)=\begin{pmatrix} j-1\\ i\end{pmatrix}$. Thus, if $\omega$ is a face of $\Delta$, we have \begin{align*} l^*(j\omega) \quad= \sum_{\tau\in\P\atop \tau\subseteq\omega,\tau \not\subseteq\partial\omega} l^*(j\tau) \quad= \sum_{\tau\in\P\atop \tau\subseteq\omega,\tau \not\subseteq\partial\omega} \begin{pmatrix} j-1\\ \dim\tau\end{pmatrix}. \end{align*} We insert this in the above expression for $\varphi_i(\omega)$ and apply Prop.~\ref{binomident},(1) to get \[ \varphi_i(\omega)= \sum_{\tau\in\P\atop \tau\subseteq\omega,\tau \not\subseteq\partial\omega} (-1)^{i+\dim\tau+1} \begin{pmatrix}\dim\omega-\dim\tau \\\dim\omega+1-i \end{pmatrix}, \] and we conclude (2). (1) follows from the Lefschetz theorem proved in 3.7 of \cite{DK86}, and the formula for $h^{p,p}$ in (3) follows from that Lefschetz theorem and \cite{DK86},\,2.5. The formula for $h^{p,d-p}(S)$ then comes from (2) and the fact that $(-1)^pe^p(S)=(-1)^ph^{p,p}(S)+(-1)^{d-p} h^{p,d-p}(S)$. 
\qed \medskip The statements of \cite{DK86},\,1.6 and 1.8 give: \begin{theorem} \label{eulerdecompose} For $X=\sqcup_i X_i$ a disjoint union and $X,Y,X_i$ varieties, we have \begin{enumerate} \item $e^{p,q}(X)=\sum_i e^{p,q}(X_i)$, in particular $e^{p}(X)=\sum_i e^{p}(X_i)$, \item $e^{p,q}(X\times Y)=\sum_{{p_1+p_2=p}\atop{q_1+q_2=q}} e^{p_1,q_1}(X)e^{p_2,q_2}(Y)$, in particular \\$e^p(X\times Y)=\sum_{k} e^{p-k}(X)e^k(Y)$. \end{enumerate} \end{theorem} We give a proof of a lemma that we will need later: \begin{lemma} \label{handlebodytimestorus} Recall a \emph{handlebody} $H^k$ is the intersection of a general hyperplane in $\PP^{k+1}$ with $(\CC^*)^{k+1}$. We have $e^{p,q}(H^k\times(\CC^*)^l) =0$ for $p\neq q$ and $$e^{p,p}(H^k\times(\CC^*)^l) =(-1)^{p+k+l}\left( \begin{pmatrix}k+l+1\\p+1\end{pmatrix} -\begin{pmatrix}l\\p+1\end{pmatrix} \right).$$ \end{lemma} \begin{proof} By \cite{DK86},\,1.10, $e^{p,q}((\CC^*)^l)$ is zero for $p\neq q$ and $e^{p,p}((\CC^*)^l)=(-1)^{p+l}\begin{pmatrix}l\\p\end{pmatrix}$. Note that if $H$ denotes a hyperplane in $\PP^{k+1}$ then we have the motivic sum $H=\bigsqcup_{i=0}^k \begin{pmatrix}k+2\\i+2\end{pmatrix} H^i$. Since $H\cong\PP^k$, by induction over $k$ using Prop.~\ref{binomident},(1), we get $e^{p,q}(H^k)=0$ for $p\neq q$ and $e^{p,p}(H^k)=(-1)^{p+k}\begin{pmatrix}k+1\\p+1\end{pmatrix}$. The product formula Thm~\ref{eulerdecompose},(2) yields $$e^p(H^k\times(\CC^*)^l)=\sum_{p_1\ge 0} (-1)^{p_1+k}(-1)^{p-p_1+l} \begin{pmatrix}k+1\\p_1+1\end{pmatrix} \begin{pmatrix}l \\p-p_1\end{pmatrix}$$ and the assertion follows from Prop.~\ref{binomident},(2). \end{proof} \section{The mixed Hodge structure on the cohomology of the vanishing cycles} \label{section3} We review the notion of the sheaf of vanishing cycles from \cite{Del73} and the Hodge structure on its cohomology as given in \cite{St75}, \cite{PS08}. \subsection{Vanishing cycles of a semistable degeneration} \label{section3_1} We fix a proper map $f:\bar{X}\rightarrow O$, where $O$ is the unit disk and $f$ is smooth away from $f^{-1}(0)$. Consider the following diagram: \[ \xymatrix@C=30pt {Y\ar[d]\ar[r]^i & \bar{X}\ar[d]_f& \tilde{{\bar{X^*}}}\ar[l]_{k}\ar[r]\ar[d]& \tilde O^*\ar[d]\\ \{0\}\ar[r]&O&\bar{X}^*\ar[l]\ar[lu]_{j^Y}\ar[r]&O^* } \] Here $Y$ is the fibre over $0\in O$, $i$ the inclusion, $\bar{X}^*=\bar{X}\setminus Y$, $O^*=O\setminus \{0\}$, $\tilde O^*$ the universal cover of $O^*$ and $\tilde{\bar{X^*}}= \bar{X}^*\times_{O^*} \tilde O^*$ the pullback of the family $\bar{X}^*\rightarrow O^*$ to $\tilde O^*$. The map $j^Y$ is the inclusion and the map $k$ the projection $\tilde{\bar{X^*}}\rightarrow \bar{X}^*$ followed by $j^Y$. \begin{definition} The functor $\psi_f:D^+(\bar{X},\ZZ)\rightarrow D^+(Y,\ZZ)$ from the derived category of sheaves of abelian groups on $\bar{X}$ to the derived category of sheaves of abelian groups on $Y$ is defined by, for $\shF\in D^+(\bar{X},\ZZ)$, \[ \psi_f(\shF)=i^{-1} {\bf R}k_*(k^{-1}(\shF)). \] This is the \emph{sheaf of nearby cycles} of $\shF$. There is a natural map \[ \spe : i^{-1}\shF\rightarrow \psi_f(\shF). \] The cone of this map in $D^+(Y,\ZZ)$ is $\phi_f(\shF)$, the \emph{sheaf of vanishing cycles} of $\shF$. For a complex of sheaves $\shF$, we denote by $\shH^i(\shF)$ the $i$-th cohomology sheaf of the complex, and put\footnote{Note that $\shH^i$ commutes with $i^{-1}$ since $\bar X$ retracts to $Y$.} \begin{align*} R^i\psi_f(\shF):= {} & \shH^i(\psi_f(\shF)),\\ R^i\phi_f(\shF):= {} & \shH^i(\phi_f(\shF)). 
\end{align*} If $g:\bar{X}\ra C$ is a proper map to a Riemann surface $C$ and $p\in C$, we denote by $\psi_{g,p}$ and $\phi_{g,p}$ the above functors on the category of complexes of sheaves on $g^{-1}(O)$ for a disk $O$ centered at $p$, small enough so that $p$ is the only critical value of $g$ in $O$. Clearly, these functors are independent of the size of the disk. This now explains the notation of Theorem \ref{sabbahmaintheorem}. \end{definition} \begin{theorem} \label{cycles_computed} Let $f:\bar{X}\rightarrow O$ be a proper morphism over a disk $O$, and suppose $X\subseteq\bar{X}$ is an open subset such that, with $D:=\bar{X}\setminus X$ flat over $O$ and $Y=f^{-1}(0)$, the divisor $D\cup Y$ is a reduced normal crossings divisor. Let $j^D:X\rightarrow \bar{X}$ be the inclusion and $$Y=\bigcup_{i=1}^{N_Y}Y_i\qquad\hbox{ and }\qquad D=\bigcup_{i=1}^{N_D}D_i$$ be the decompositions into irreducible components. We define the sheaf on $\bar{X}$ \[ \CC_{Y^1}:= \bigoplus_{i=1}^{N_Y} {\CC}_{Y_i} \] where ${\CC}_{Y_i}$ denotes the (push-forward of the) constant sheaf on $Y_i$ with coefficients in $\CC$. We define $\CC_{D_i}$ and $\CC_{D^1}$ similarly. We set \[ \CC_{Y^1 \cup D^1}':=\coker\left({\CC}_{Y}\stackrel{(\Diag,0)}{\lra}\CC_{Y^1}\oplus \CC_{D^1} \right) \] where $\Diag$ is the linear map sending $1$ to $\rho$ with $\rho_i=1$ for $1\le i\le N_Y$. Then \begin{enumerate} \item $R^q\psi_f({\bf R}j^D_*\CC_X)={\bigwedge}^q\CC_{Y^1 \cup D^1}'.$ \vspace{0.16cm} \end{enumerate} Under the additional assumption of \begin{equation}\label{overallassumption} \Sing(Y)\cap D=\emptyset, \end{equation} we have \begin{enumerate} \setcounter{enumi}{1} \item $R^q\phi_f({\bf R}j^D_*\CC_X)$ is supported on $\Sing(Y)$ for $q\ge 0$, \item $ R^q\phi_f({\bf R}j^D_*\CC_X)= \begin{cases} 0&\hbox{if $q=0$};\\ {\bigwedge}^q\CC_{Y^1\cup D^1}'|_{Y\setminus D}&\hbox{if $q>0$}, \end{cases} $ \vspace{0.16cm} \item $ R^q\phi_f({\bf R}j^D_*\CC_X)=R^q\phi_f(\CC_{\bar X})\qquad\hbox{for }q\ge0. $ \end{enumerate} \end{theorem} \proof For $U\subseteq \bar X$ a small neighbourhood of a point $p$ in $Y$, since $Y\cup D$ is normal crossings, $U\backslash (Y\cup D)$ has the homotopy type of $(S^1)^{n_Y}\times (S^1)^{n_D}$ where $n_Y,n_D$ are the numbers of irreducible components of $Y$, resp.\ $D$, passing through $p$. We use the Eilenberg-Moore spectral sequence to translate the Cartesian square \[ \xymatrix@C=30pt {U\ar_f[d] & k^{-1}(U)\ar_k[l]\ar[d]\\ O & \ar[l] \tilde O^* } \] into cohomology. It degenerates to the generalized K\"unneth formula $$ H^\bullet(k^{-1}(U),\ZZ) = H^\bullet((S^1)^{n_Y}\!\times\!(S^1)^{n_D},\ZZ) \otimes_{H^\bullet(S^1,\ZZ)} H^\bullet(\RR,\ZZ) $$ where the map $H^\bullet(S^1,\ZZ)=\ZZ[x]/x^2\ra H^\bullet(\RR,\ZZ)=\ZZ$ is given by sending $x\mapsto 0$ and $H^\bullet(S^1,\ZZ)\ra H^\bullet((S^1)^{n_Y}\!\times\!(S^1)^{n_D},\ZZ)=(\ZZ[y]/y^2)^{\otimes n_Y}\otimes (\ZZ[d]/d^2)^{\otimes n_D}$ is given by $x\mapsto \rho=\sum_{i=1}^{n_Y} 1^{\otimes(i-1)}\otimes y\otimes 1^{\otimes n_Y+n_D-i}$. Rewriting yields $$H^q(k^{-1}(U),\CC)= H^q(U\setminus(Y\cup D),\CC)/(\rho)=\Gamma(U,{\bigwedge}^q\CC_{Y^1 \cup D^1}'),$$ which proves (1). Part (2) follows from the fact that the adjunction map $i^{-1}{\bf R}j^D_*\CC_X\ra \psi_f({\bf R}j^D_*\CC_X)$ is a quasi-isomorphism outside of $\Sing(Y)$, which can be seen from (1). Parts (3) and (4) follow from the fact that $\CC_{\bar X}\ra {\bf R}j^D_*\CC_X$ is a quasi-isomorphism away from $D$.
\qed \begin{example} \label{cohoS} Applying this to the case of $\bar{\check w}:\bar{\check w}^{-1}(O)\rightarrow O$, we take $D=\tilde X_{\bar{\check\Sigma}}\setminus X_{\check\Sigma}$. We note that $\CC_{Y^1\cup D^1}'|_{\bar{\check w}^{-1}(0)\cap X_{\check\Sigma}}$ is $\CC_S$, where $S=D_0\cap\check W_0$ is, in the notation of Proposition \ref{checkWproper}, a hypersurface in $\PP_{\Delta}$. Thus \[ R^q\phi_{\bar{\check w}}({\bf R}j^D_*\CC_X)= \begin{cases} 0&\hbox{if $q\not=1$};\\ \CC_S&\hbox{if $q=1$}. \end{cases} \] From this we conclude that \begin{equation} \label{checkWcohom} \HH^q(\bar{\check w}^{-1}(0),\phi_{\bar{\check w}}({\bf R}j^D_*\CC_X)) =H^{q-1}(S,\CC). \end{equation} \end{example} Most useful for the next sections is Thm.~\ref{cycles_computed},(4), because it enables us to work with the vanishing cycles of a compact degeneration, which involves slightly less technology. \subsection{Mixed Hodge structure} Our goal in this section is to define a mixed Hodge structure on the hypercohomology groups of $\phi_f\CC_{\bar X}$. To do so, we shall identify a cohomological mixed Hodge complex whose $\CC$-part is quasi-isomorphic to $\phi_f\CC_{\bar X}$. The notion of a cohomological mixed Hodge complex is due to Deligne \cite{DelTH},\,III. We will always ignore the $\ZZ$-module structure of these complexes, and will only be concerned with $\QQ$-module structures. Moreover, we restrict ourselves to normalized ones in the sense of \cite{PS08},\,Rem.\,3.15, i.e., with an explicit comparison pseudo-morphism $\beta$ given as \begin{equation} \label{qisochain} (\shK^\bullet_\QQ,W) \stackrel{\beta_1}{\lra} ('\shK^\bullet_\CC,W) \stackrel{\beta_2}{\lla} (\shK^\bullet_\CC,W,F) \end{equation} where $\beta_2$ is a filtered quasi-isomorphism and $\beta_1$ becomes one after tensoring with $\CC$. A map of cohomological mixed Hodge complexes is a map on all three terms compatible with the $\beta_i$. Recall that to a filtered complex of sheaves $K^\bullet$ on a topological space with increasing filtration $W$ one associates a spectral sequence $E_\bullet(K^\bullet,W)$ with \begin{equation} \label{specseqfiltcomplex} E_1^{p,q}(K^\bullet,W) = \HH^{p+q}(\Gr^W_{-p}K^\bullet) \Rightarrow \HH^{p+q}(K^\bullet). \end{equation} To apply this to a complex with decreasing filtration $F^\bullet$, one sets $F_n=F^{-n}$. \begin{lemma} \label{cok_CHMC} Let $\phi:\shK^\bullet\ra\shL^\bullet$, given by $$\begin{array}{ccccc} (\shK^\bullet_\QQ,W)& \stackrel{\beta_1}{\lra}& ('\shK^\bullet_\CC,W) & \stackrel{\beta_2}{\lla}& (\shK^\bullet_\CC,W,F)\\ \downarrow&&\downarrow&&\downarrow\\ (\shL^\bullet_\QQ,W)& \stackrel{\beta_1}{\lra}& ('\shL^\bullet_\CC,W) & \stackrel{\beta_2}{\lla}& (\shL^\bullet_\CC,W,F)\\ \end{array}$$ be a map of cohomological mixed Hodge complexes which induces \begin{enumerate} \item an injection (resp.\ surjection) upon applying $\Gr_F\Gr^W$, \item an injection (resp.\ surjection) upon applying $E_1(\cdot,W)$. \end{enumerate} Then $\coker(\phi)$ (resp.\ $\ker(\phi)$) is naturally a cohomological mixed Hodge complex. \end{lemma} \begin{proof} We only prove the statement for $\coker(\phi)$ since $\ker(\phi)$ works analogously. We first show the degeneration of $E_\bullet(\Gr^W\!\shL/\shK,F)$ at $E_1$, for which we only need assumption (1).
We have a commutative diagram with exact rows \[ \resizebox{1\textwidth}{!}{ \xymatrix@C=30pt { &\HH^k(\Gr_F\Gr^W\shL^\bullet)\ar[r]\ar@{=}[d] & \HH^k(\Gr_F\Gr^W(\shL/\shK)^\bullet)\ar[r] & \HH^{k+1}(\Gr_F\Gr^W\shK^\bullet)\ar@{=}[d] \\ 0\ar[r]&\HH^k(\Gr_F\Gr^W\shL^\bullet)\ar[r] & \HH^k(\Gr_F\Gr^W\!\op{Cone}_M(\phi)^\bullet)\ar[r]\ar[u] & \HH^{k+1}(\Gr_F\Gr^W\shK^\bullet) \ar[r] &0\\ } } \] where $\op{Cone}_M(\phi)$ denotes the mixed cone of $\phi$ as in \cite{PS08},\,Thm.\,3.22 and the middle vertical map is induced by $\shK^\bullet[1]\oplus\shL^\bullet\ra(\shL/\shK)^\bullet$, the natural map factoring through the projection to $\shL^\bullet$. The zeros on the left and right of the bottom row arise because the sequence defining $\op{Cone}_M(\phi)^{\bullet}$ as an extension of $\shL^{\bullet}$ and $\shK^{\bullet}[1]$ splits after applying $\Gr^W$. A diagram chase shows the surjectivity of the middle vertical map. Since this map comes from a map of filtered complexes, it is part of a map of spectral sequences $E_\bullet(\Gr^W\!\op{Cone}_M(\phi)^\bullet,F)\ra E_\bullet(\Gr^W(\shL/\shK)^\bullet,F)$. The surjectivity at $E_1$ and the degeneration of the first at $E_1$ imply the degeneration of the second at $E_1$. The first row in the above diagram extends to a long exact sequence in which we may interchange $\HH^\bullet$ and $\Gr_F$ in each term. From this and \cite{DelTH},\,II,\,Prop.\,(1.1.11),\,(i), it follows that the maps in the long exact sequence $$\cdots\ra\HH^k(\Gr^W\!\shK^\bullet)\ra \HH^k(\Gr^W\!\shL^\bullet)\ra \HH^k(\Gr^W(\shL/\shK)^\bullet)\ra\cdots$$ are strict with respect to $F$. By assumption (2), the connecting homomorphisms are all trivial and this long exact sequence splits into short exact sequences. The strictness implies that we have an isomorphism of filtered vector spaces $(\HH^k(\Gr^W_l\!\shL^\bullet)/\im(\phi),F)\ra (\HH^k(\Gr^W_l(\shL/\shK)^\bullet),F)$. Since the first is a Hodge structure of weight $k+l$, so is the second. \end{proof} We assume the setup and notation of \S\ref{section3_1} as given in Thm.~\ref{cycles_computed}. In addition, we denote by $Y^k$ the normalization of $$\coprod_{i_1<\cdots<i_k} Y_{i_1}\cap\cdots\cap Y_{i_k}$$ and by $a^Y_i$ the projection $Y^i\ra \bar{X}$. We are going to recall the construction of the mixed Hodge structure on the hypercohomology of $\phi_{f}(\CC_{\bar{X}})$ following \cite{St75}, \cite{PS08}. This is done by giving a map of cohomological mixed Hodge complexes resolving $i^{-1}\CC_{\bar{X}}\ra\psi_f\CC_{\bar{X}}$. Taking the mixed cone, we will then obtain a cohomological mixed Hodge complex resolving $$\phi_f \CC_{\bar{X}} = \operatorname{{Cone}}(\CC_{\bar{X}}|_Y\ra\psi_{f}\CC_{\bar{X}}).$$ We have the increasing filtration $W^Y$ defined on $\Omega^{\bullet}_{\bar{X}}(\log Y)$ by \begin{align*} W^Y_k\Omega^{p}_{\bar{X}}(\log Y) = {} & \Omega^{k}_{\bar{X}}(\log Y)\wedge \Omega^{p-k}_{\bar{X}}. \end{align*} Moreover, there is the Hodge filtration $$F^k\Omega^{\bullet}_{\bar{X}}(\log Y)=\Omega^{\bullet\ge k}_{\bar{X}}(\log Y).$$ Consider the double complex\footnote{Note that we adopt the original notation of Steenbrink \cite{St75}. The two indices $p,q$ are swapped in \cite{PS08}.} $$A^{p,q}= \Omega^{p+q+1}_{\bar{X}}(\log Y)/W^Y_q \Omega^{p+q+1}_{\bar{X}}(\log Y).$$ The first differential is the exterior derivative and the second differential is given by wedging with $\dlog f = f^*\dlog t$, where we fix a coordinate $t$ on $O$.
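As a minimal local illustration of this double complex (a standard computation, not needed in the sequel), suppose $\dim\bar{X}=2$ and $f=xy$ near a point where two components $Y_1=\{x=0\}$ and $Y_2=\{y=0\}$ of $Y$ meet, so that $\dlog f=\dlog x+\dlog y$. Then $\Omega^1_{\bar{X}}(\log Y)$ is freely generated by $\dlog x,\dlog y$, and the residue maps identify \begin{align*} A^{0,0}&=\Omega^{1}_{\bar{X}}(\log Y)/\Omega^{1}_{\bar{X}}\cong \O_{Y_1}\oplus\O_{Y_2},\\ A^{0,1}&=\Omega^{2}_{\bar{X}}(\log Y)/W^Y_1\Omega^{2}_{\bar{X}}(\log Y)\cong \O_{Y_1\cap Y_2}, \end{align*} the second isomorphism being the double residue along $\dlog x\wedge\dlog y$. In this local picture the intersection strata $Y^1$ and $Y^2$ introduced above thus appear via residues, in accordance with the description of the graded pieces in Lemma~\ref{lemWspecseq},(2) below.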
For a double complex $C^{\bullet,\bullet}$, we denote the total complex by $C^\bullet$. We have three filtrations on $A^{\bullet}$: the weight filtration $W$, given by \begin{equation} \label{weightA1} W_k A^r = \bigoplus_{p+q=r}W^Y_{2q+k+1} \Omega^{p+q+1}_{\bar{X}}(\log Y)/W^Y_q\Omega^{p+q+1}_{\bar{X}}(\log Y), \end{equation} and the filtrations $W^Y$ and $F$ on $A^r$, given in terms of the respective filtrations on $\Omega^{p+q+1}_{\bar{X}}(\log Y)/W^Y_q\Omega^{p+q+1}_{\bar{X}}(\log Y)$ by \begin{equation} \label{weightA2} W^Y_k = \bigoplus_{p+q=r} W^Y_{k+q+1}, \qquad F^k = \bigoplus_{p+q=r} F^{k+q+1}. \end{equation} We have $F^k A^{\bullet,\bullet}= A^{\bullet\ge k,\bullet}$. The injection $\dlog f\wedge: \Omega^p_{\bar{X}/O}(\log Y)\otimes \O_Y \ra A^{p,0}$ turns $A^{\bullet,\bullet}$ into a resolution of $\Omega^\bullet_{\bar X/O}(\log Y)\otimes \O_Y$. By \cite{St75},\,Thm.\,4.19, $A^\bullet$ is the $\CC$-part of a cohomological mixed Hodge complex. There is an endomorphism $\nu:A^{p,q}\ra A^{p-1,q+1}$ of this double complex, simply given by the natural projection modulo $W^Y_{q+1}$. We have $\log T=2\pi i \nu$, where $T$ is the monodromy transformation on cohomology, see \cite{PS08},\,Thm.\,11.21 and Cor.\,11.17. We have $\ker(\nu)^{\bullet}= W^Y_0 A^{\bullet}$ with the filtrations $W$ and $F$ induced from $A^{\bullet}$. The injection $$\spe:\ker(\nu)^{\bullet}\ra A^\bullet$$ is bifiltered. By \cite{PS08},\,\S11.3.1, $\ker(\nu)^{\bullet}$ is a cohomological mixed Hodge complex computing $H^\bullet(Y,\CC)$. A useful description of the rational structure of $A^\bullet$ was given in \cite{PS08},\,11.2.6 using Illusie's Koszul complex, yielding a (normalized) cohomological mixed Hodge complex $(C^\bullet,A^\bullet,\beta)$. The inclusion of $W_0^Y$ gives a bifiltered injection of cohomological mixed Hodge complexes $$\spe:(W_0^YC^\bullet,W_0^YA^\bullet,W_0^Y\beta)\hra (C^\bullet,A^\bullet,\beta)$$ whose cokernel we denote by $(\bar C^\bullet,\bar A^\bullet,\bar\beta)$. \begin{theorem} \label{exseqCHMCL0} \begin{enumerate} \item We have an exact sequence of cohomological mixed Hodge complexes \begin{equation*} 0\ra(W^Y_0C^\bullet,W^Y_0A^\bullet,W^Y_0\beta) \stackrel{\spe}{\lra} (C^\bullet,A^\bullet,\beta) \ra (\bar C^\bullet,\bar A^\bullet,\bar\beta) \ra 0. \end{equation*} \item The inclusion $W^Y_0A^\bullet\ra A^\bullet$ is isomorphic to $\CC_Y\ra\psi_f\CC_{\bar{X}}$ in $D^+(Y,\ZZ)$ and thus $\bar A^{\bullet}$ is isomorphic to $\phi_f\CC_{\bar{X}}$. This gives a mixed Hodge structure on $\HH^i(Y,\phi_f\CC_{\bar X})$, and the sequence in (1) turns the long exact sequence \begin{equation*} \cdots\ra H^i(Y,\CC) \ra \HH^i(Y,\psi_f\CC_{\bar X}) \ra \HH^i(Y,\phi_f\CC_{\bar X}) \ra H^{i+1}(Y,\CC) \ra \cdots \end{equation*} into an exact sequence of mixed Hodge structures. \item We have $\Gr_k^W\HH^i(Y,\psi_f\CC_{\bar X})=\Gr_k^W\HH^i(Y,\phi_f\CC_{\bar X})$ for $k\ge2$. \end{enumerate} \end{theorem} \begin{proof} Part (1) follows from Lemma~\ref{cok_CHMC} and what we said before. The first part of (2) is given in the discussion after Theorem 11.28 of \cite{PS08}; the remainder of (2) is standard given (1). Since $Y$ is compact, by \cite{DelTH},\,III,\,8.2.4, we have $h^{p,q}H^i(Y)=0$ for $p+q>i$. This implies (3). \end{proof} \begin{remark} There is a map of cohomological mixed Hodge complexes $\op{Cone}_M(\spe)\ra (\bar C^\bullet,\bar A^\bullet,\bar\beta)$ as in the proof of Lemma \ref{cok_CHMC}; however, it is not a filtered quasi-isomorphism even though it induces an isomorphism of mixed Hodge structures.
The latter is cohomologically ``more efficient'', which is why we use it rather than the cone. \end{remark} \begin{lemma} \label{lemWspecseq} We consider the spectral sequence of $(\bar A^\bullet,W)$, with \begin{equation} \label{Wspecseq} E^{-k,m+k}_1=\HH^{m}(X,\Gr^W_{k}\bar A^\bullet) \Rightarrow \HH^{m}(X,\bar A^\bullet). \end{equation} We have \begin{enumerate} \item The spectral sequence (\ref{Wspecseq}) degenerates at $E_2$. \item The Poincar\'e residue map along $Y$ induces an isomorphism $$\Gr^W_{k} \bar A^\bullet = \bigoplus_{q>-1,-k}\Gr^{W^Y}_{2q+k+1}\Omega^\bullet_X(\log Y)[1] \stackrel{\sim}{\ra} \bigoplus_{q>-1,-k} \Omega^\bullet_{{Y}^{2q+k+1}}[-2q-k].$$ \item We thus have $$\HH^{m}(X,\Gr^W_{k}\bar A^\bullet)=\bigoplus_{q>-1,-k} H^{m-2q-k}( Y^{2q+k+1},\CC)\langle-q-k\rangle$$ where $\langle\cdot\rangle$ denotes the Tate twist. \item The map $d_1$ in (\ref{Wspecseq}) is given by $d_1=\delta - \gamma$ where $$\delta:H^l(Y^s,\CC)\ra H^l(Y^{s+1},\CC)$$ is the restriction map given as $$(\delta\alpha)|_{Y_{i_1}\cap\cdots\cap Y_{i_{s+1}}}= \sum_j (-1)^{j+1}\alpha|_{Y_{i_1}\cap\cdots\hat{Y}_{i_j}\cdots\cap Y_{i_{s+1}}},$$ and $$\gamma:H^l(Y^{s},\CC)\ra H^{l+2}( Y^{s-1},\CC)$$ is the Gysin map, i.e., the Poincar\'e dual of $\delta$. \item We have Poincar\'e duality for (\ref{Wspecseq}), i.e., if we set $n=\dim X$, $m'=2n-m-2$, $k'=2-k$, we have an isomorphism $$E_1^{-k,m+k}=(E_1^{-k',m'+k'}\langle n\rangle)^*$$ which is compatible with the respective differentials $d_1^*$ and $d_1$. In particular, it also holds when we replace $E_1$ by $E_\infty$. We obtain $$h^{p,q}\HH^i(Y,\phi_f\CC_{\bar X}) = h^{n-p,n-q}\HH^{2n-2-i}(Y,\phi_f\CC_{\bar X}).$$ \item We have Poincar\'e duality for $E_1(A^\bullet,W)$, which yields $$h^{p,q}\HH^i(Y,\psi_f\CC_{\bar X}) = h^{\dim Y-p,\dim Y-q}\HH^{2\dim Y-i}(Y,\psi_f\CC_{\bar X}).$$ \end{enumerate} \end{lemma} \begin{proof} For (1) and (2), see, e.g., \cite{PS08},\,Thm.\,3.18 and \S4.2, respectively. By (\ref{weightA2}), $F^i\Gr_k^W\bar A^\bullet$ becomes $F^{i-q-k}$ on the right hand side of (2), thus the Tate twist in (3) becomes clear. We deduce (4) from \cite{PS08},\,\S11.3.2,\,p.\,280. For (5), we apply Poincar\'e duality to each summand in (3). For $Z$ a compact manifold, Poincar\'e duality means \[ H^i(Z,\CC)=\Hom(H^{2\dim Z-i}(Z,\CC)\langle \dim Z\rangle,\CC). \] Using $\dim Y^i=n-i$, one rewrites the resulting sum $$E_1^{-k,m+k}=(\bigoplus_{q>-1,-k} H^{2n-2-m-2q-k}( Y^{2q+k+1},\CC)\langle n-q-1\rangle)^*$$ by replacing $m,k,q$ by $m'$, $k'$ and $q'=q+k-1$. Part (6) goes along the same lines as (5). \end{proof} \begin{figure} \resizebox{1\textwidth}{!}{ \input{E2term.pstex_t} } \caption{The $E_1$ term with respect to the weight filtration of the spectral sequence of the cohomology of the special fibre, nearby fibre and vanishing cycles for the case of a degeneration of a compact threefold, with odd cohomologies indicated by dots.} \label{E2termfigure} \end{figure} \section{The Hodge numbers of the mirror} \label{section4} Given $M$, $N$, $M_{\RR}$, $N_{\RR}$, $\Delta \subseteq M_{\RR}$ and a star-like triangulation $\P$ of $\Delta$ consisting only of standard simplices, we obtain data \begin{align*} w:X_{\Sigma}&\rightarrow\CC\\ \check w:X_{\check\Sigma}&\rightarrow\CC \end{align*} with compactifications \begin{align*} \bar{w}:\tilde\PP_{\check\Delta}&\rightarrow\PP^1\\ \bar{\check w}:\tilde X_{\bar{\check\Sigma}}&\rightarrow\PP^1 \end{align*} given by Propositions \ref{properWprop} and \ref{checkWproper} respectively.
We choose a small disk $O\subseteq \CC\subseteq\PP^1$ with center $0\in\CC$ which contains no critical value of $\bar{w}$ or $\bar{\check w}$ other than $0$, and consider the restrictions \begin{align*} \bar{w}:\bar{w}^{-1}(O)&\rightarrow O\\ \bar{\check w}:\bar{\check w}^{-1}(O)&\rightarrow O \end{align*} In the two cases, we have inclusions of open sets \[ j^{D}:X_{\Sigma}\cap \bar{w}^{-1}(O)\subseteq \bar{w}^{-1}(O), \] \[ \check j^{\check D}:X_{\check\Sigma}\cap \bar{\check w}^{-1}(O)\subseteq \bar{\check w}^{-1}(O). \] We have already identified $\HH^q(\check w^{-1}(0),\phi_{\bar{\check w}}({\bf R}\check j^{\check D}_*\CC_{X_{\check\Sigma}}))$ with $H^{q-1}(S,\CC)$ in Ex.~\ref{cohoS}. It is not hard to see that the usual Hodge structure coming from the K\"ahler manifold $S$ and the one from the vanishing cohomology construction, as given in the last section, coincide on $H^{q-1}(S,\CC)$. We can compute its Hodge numbers via the formulae in Prop.~\ref{HodgenumbersS}. We are now going to compute the Hodge numbers of $\HH^q(\bar w^{-1}(0),\phi_{\bar w}({\bf R}j^D_*\CC_{X_\Sigma}))$ in order to compare them to the former and to prove our main result. We apply the construction of the last section and use its notation, i.e., $\bar{X}=\bar w^{-1}(O)$, $X=X_\Sigma\cap \bar{X}$, $\bar{w}:\bar{X}\ra O$, $$Y={\bar{w}}^{-1}(0)=\bigcup_{i=1}^{N_Y}Y_i\qquad\hbox{ and }\qquad D=\bar{X}\backslash X=\bigcup_{i=1}^{N_D}D_i.$$ Indeed, by Lemma~\ref{W0descprop},(3) and the simpliciality of $\bar{\Sigma}$, $Y\cup D$ is a normal crossing divisor. In the definition \eqref{hpqFdef} of the Hodge numbers for $\check S$, we throw out the information related to the weight filtration. Despite this, it is worth noting that the monodromy around $0$ in the fibration defined by $\bar w$ is related to the Kodaira dimension of $S$. \begin{proposition} \label{weightandkodaira} The logarithm $\nu$ of the monodromy operates on the hypercohomology of the vanishing cycles of the central fibre of $w$. Let $m$ be the maximal integer such that $\nu^m\not=0$ on cohomology. We have $m\le \kappa(S)$, and the following are equivalent: \begin{enumerate} \item $m=\dim S$ \item $S$ is of general type (i.e.\ $\kappa(S)=\dim S$). \end{enumerate} \end{proposition} \begin{proof} By Prop.~\ref{propdualintcomplex} and Prop.~\ref{kodairadim}, the dual intersection complex of $w^{-1}(0)$ is a $(\kappa(S)+1)$-dimensional ball for $\kappa(S)<\dim S=d$ and a $(d+1)$-dimensional sphere otherwise. This means that the largest $k$ such that $Y^k\neq\emptyset$ is $\kappa(S)+2$. Knowing the operation of $\nu$ on the expression given in Lemma~\ref{lemWspecseq},(3), we conclude $m\le \kappa(S)$, which shows that (1) implies (2). For the converse, we need to show that the isomorphism $\nu^d:\HH^\bullet(\Gr^W_{d+1}\bar A^\bullet)\stackrel{\sim}{\ra} \HH^\bullet(\Gr^W_{1-d}\bar A^\bullet)$, which is just $\id:H^0(Y^{d+2})\stackrel{\sim}{\ra}H^0(Y^{d+2})$, descends to a non-trivial map on cohomology with respect to $d_1$. This map is $$H_{d_1}(\nu^d):\ker(-\gamma:H^0(Y^{d+2})\ra H^2(Y^{d+1}))\ra \coker(\delta:H^0(Y^{d+1})\ra H^0(Y^{d+2})).$$ Since the dual intersection complex of $w^{-1}(0)$ is a $(d+1)$-sphere, source and target are one-dimensional. Moreover, since $\gamma$ is Poincar\'e dual to $\delta$, it follows from the linear algebra property $\ker(f)^\perp=\im(f^*)$ of a linear map $f$ and the identification of $H^0(Y^{d+2})$ with its Poincar\'e dual (turning it into an inner product space) that $H_{d_1}(\nu^d)$ is an isomorphism.
\end{proof} We now proceed to the main calculation. Motivated by Def.~\ref{HDdef}, we set \begin{align*} e^{p}(\check S,\shF_{\check S})= {} &(-1)^p\sum_{q,k}(-1)^q h^{p,q+k}\, \HH^{p+q}(\check S,\shF_{\check S})\\ = {} &(-1)^p\sum_{q,k}(-1)^q h^{p+1,q+k}\, \HH^{p+1+q}(Y,\bar A^\bullet). \end{align*} Recall that $W_0$ is the component of $w^{-1}(0)$ which is not contained in any toric stratum of $X_\Sigma$, i.e., the unique component of $Y$ which meets $D$. Let ${Y}_{\tor}^{i}\subset Y^{i}$ be the subset of those components which are not contained in ${\tilde{W}_0}$ and ${Y}_{\ntor}^{i}= Y^{i} \setminus {Y}_{\tor}^{i}$. Note that ${Y}_{\ntor}^{1}=\tilde{W}_0$. For $\tau\in\P$, we denote by $\P_*(\tau)$ the smallest cell of $\P_*$ containing $\tau$ and by $T_\tau\cong (\CC^*)^{d+1-\dim\tau}$ the torus orbit in $X_\Sigma$ corresponding to $\operatorname{{Cone}}(\tau)$. Analogously, we define $T_\tau$ to be the torus orbit corresponding to $\tau\in\P_*$. Let $\P_{\Delta'}$, $\P_{\partial\Delta'}$ and $\P_{\partial\Delta}$ denote\footnote{We take $\partial\Delta'$ in the topology of $\Delta$, e.g. $\partial\Delta'=\Delta'$ for $\dim\Delta'<\dim\Delta$.} the induced subdivisions $\P\cap\Delta'$, $\P\cap\partial\Delta'$ and $\P\cap\partial\Delta$. Let $\P_{\partial\Delta'}^{[0]}$ denote the subset of vertices of $\P_{\partial\Delta'}$. For $\omega\in\P_{\Delta'}$, let $X_\omega$ denote the toric variety defined by the fan along $\omega$. Note that $\dim Y^{k}=d+2-k$ and $\dim X_\omega=d+1-\dim\omega$, thus $X_\omega\subset Y^{\dim\omega+1}$. \begin{lemma} \label{disjointdecomposition} We have \begin{enumerate} \item $ Y^{k} = Y_\tor^{k}\ \sqcup\ Y_\tor^{k-1}\cap {\tilde{W}_0}$,\hbox{\qquad i.e.,\qquad} $Y^k_\ntor=Y_\tor^{k-1}\cap {\tilde{W}_0}$, \item $ Y_\tor^{k} = \coprod_{{\omega\in\P_{\Delta'}}\atop{k=\dim\omega+1}}X_\omega$, \item $X_\omega = \coprod_{{\tau\in\P}\atop{\tau\supset\omega}}T_\tau$ for $\omega\in\P_{\Delta'}$, \item $T_\tau \cap {\tilde{W}_0} \cong (\CC^*)^{\dim\P_*(\tau)-\dim\tau} \times (T_{\P_*(\tau)}\cap {W^*_0})$ for $\tau\in\P$. \end{enumerate} \end{lemma} \begin{proof} (1) follows from Prop.~\ref{propdualintcomplex} and (2)-(4) are standard in toric geometry where (4) uses the fact that $w$ factors through $X_\Sigma\ra X_{\Sigma_*}$. \end{proof} \subsection{The duality for the $p$-th Euler characteristic} \begin{lemma} \label{lemmasignidentities} \begin{enumerate} \item For $\tau\subseteq\Delta'$, we have $$(-1)^{\dim\tau}=\sum_{\omega\in p_{\Delta\Delta'}^{-1}(\tau)}(-1)^{\dim\omega}.$$ \item For a polytope $\tau$ with a simplicial polyhedral decomposition $\P_\tau$, we have $$(-1)^{\dim\tau}=\sum_{{\omega\in\P_\tau}\atop{\omega\not\subseteq\partial\tau}} (-1)^{\dim\omega}.$$ \item Let $\tau_1\in\P\cap\partial\Delta, \tau_2\subseteq\partial\Delta$ with $\P_*(\tau_1)\subseteq\tau_2$. We set $$\P_{\tau_1,\tau_2}=\{\tau\in\P|\tau\cap\partial\Delta=\tau_1, \tau\cap\Delta'\neq\emptyset, (p_{\Delta\Delta'}^1)^{-1}(\P_*(\tau))=\tau_2\}$$ \begin{minipage}[b]{0.54\textwidth} and have $$\sum_{\tau\in\P_{\tau_1,\tau_2}}(-1)^{\dim\tau+1} =\left\{\begin{array}{ll} (-1)^{\dim\tau_1}&\P_*(\tau_1)=\tau_2\\ 0&\P_*(\tau_1)\neq\tau_2. \end{array}\right.$$ \end{minipage} \qquad\qquad \begin{minipage}{0.3\textwidth} \vspace{3mm} \resizebox{0.9\textwidth}{!}{ \input{Ptau1tau2.pstex_t} } \end{minipage} \end{enumerate} \end{lemma} \begin{proof} (1) This is an Euler characteristic calculation. 
Following the notation of Lemma \ref{pdeltadeltaprime}, let $\check\tau\in\check\Sigma_{\Delta'}$ be the cone dual to the face $\tau$. Let $\check\tau'$ denote the inverse image of $\check\tau$ under the projection $N_{\RR}\rightarrow N_{\RR}/{\Delta'}^{\perp}$. Then $\omega\in p^{-1}_{\Delta\Delta'}(\tau)$ if and only if the corresponding cone $\check\omega\in\check\Sigma_{\Delta}$ satisfies $\check\omega\subseteq \check\tau'$, $\check\omega\not\subseteq\partial\check\tau'$. Then \[ \sum_{\check\omega\in\check\Sigma\atop \check\omega\subseteq\check\tau', \check\omega\not\subseteq\partial\check\tau'} (-1)^{\dim\check\omega}=\chi(\check\tau')-\chi(\partial\check\tau') =1-(1+(-1)^{\dim\check\tau'-1})=(-1)^{\dim\check\tau'}. \] Since $\dim\check\omega=\dim\Delta-\dim\omega$ and $\dim\check\tau'= (\dim\Delta-\dim\Delta')+(\dim\Delta'-\dim\tau)$, the desired result follows. (2) As above, this is just a computation of the Euler characteristic of $\tau\setminus\partial\tau$. (3) The proof uses M\"obius inversion and is a variation of the proof of Lemma 3.5 in \cite{KS10}. Recall that for any finite poset $B$, the incidence algebra consists of $\ZZ$-valued functions on $\{(a,b)\,|\,a,b\in B,a\le b\}$ with the associative convolution product $$(f*g)(a,b)=\sum_{a\le x\le b}f(a,x)g(x,b).$$ Its unit is $\delta$, which is non-zero only on $\{(a,a)\,|\,a\in B\}$, where it takes the value one. If $\zeta$ denotes the function which is constant with value $1$, then the M\"obius function $\mu$ is its inverse, i.e., \begin{equation} \label{moebinv0} \delta=\zeta*\mu. \end{equation} We set $\hat B = \{0\}\cup B$ and let $0\le a$ for all $a\in B$. For any function $h:B\ra\ZZ$, we define $\hat{h}:\hat B\times \hat B\ra\ZZ$ as $\hat{h}(a,b)=h(b)$ for $a=0,b\in B$ and $\hat{h}(a,b)=0$ otherwise. Multiplying (\ref{moebinv0}) from the left by $\hat{h}$ and restricting to $\{0\}\times B$ yields \begin{equation} \label{moebinv} h(b) = \sum_{x\le b} \mu(x,b)g(x),\hbox{ where }g(x)=\sum_{a\le x}h(a), \end{equation} because $\hat{g}=\hat{h}*\zeta$. We apply this to our setup. First note that by Lemma~\ref{Pstar},(3), we have for $\tau\in\P$ that \[ \tau\cap\Delta'\neq\emptyset \iff \tau\cap \Int(\Delta)\neq\emptyset. \] We pick $\tau_1'\in\P_{\partial\Delta},\hat\tau_2\in\P_*$ with $\tau_1\subseteq\tau_1'\subseteq\hat\tau_2$ and $\hat\tau_2\cap\Int(\Delta)\neq\emptyset$. Let $\P|_{\hat\tau_2}$ denote the induced subdivision on $\hat\tau_2$. The link of $\tau_1'\in\P|_{\hat\tau_2}$ is contractible and thus $$\sum_{\tau\in\P|_{\hat\tau_2},\tau_1'\subsetneq\tau} (-1)^{\dim\tau-\dim\tau_1'-1}=1,$$ and hence $$ \sum_{\tau\in\P|_{\hat\tau_2},\tau\supseteq\tau_1'} (-1)^{\dim\tau+1}=0. $$ We think of this as the value of the function $g$ at $\tau'_1$, where $g$ is defined in (\ref{moebinv}) using the poset $\{\tau_1'\,|\,\tau_1'\in\P|_{\hat\tau_2},\tau_1'\subseteq\partial\Delta,\tau_1\subseteq\tau_1'\}$ under reverse inclusion and the function \[ h(\tau_1')=\sum_{\tau\in\P|_{\hat\tau_2},\tau\cap\partial\Delta=\tau_1'} (-1)^{\dim\tau+1}. \] We then obtain as an expression for $h(\tau_1)$ the identity \begin{equation} \label{moebres1} \sum_{\tau\in\P|_{\hat\tau_2},\tau\cap\partial\Delta=\tau_1} (-1)^{\dim\tau+1}=0. \end{equation} Next consider the poset $ B=\{\hat\tau_2\,|\,\P_*(\tau_1)\subseteq\hat\tau_2, \hat\tau_2\cap\Int(\Delta)\neq\emptyset\} $ under inclusion, which has a global minimal element $b_0=p^1_{\Delta\Delta'}(\P_*(\tau_1))$.
We define $g:B\times B\ra\ZZ$ by \[ g(b_0,\hat\tau_2) = \sum_{{\tau\in\P|_{\hat\tau_2},\tau\cap\partial\Delta=\tau_1}\atop{\tau\cap\Int(\Delta)\neq\emptyset}} (-1)^{\dim\tau+1}, \] which agrees with $(-1)^{\dim\tau_1}$ by (\ref{moebres1}), and we set $g(a,b)=0$ for $a\neq b_0$. We are interested in \[ \sum_{{\tau\in\P|_{\hat\tau_2},\tau\cap\partial\Delta=\tau_1}\atop{\tau\cap\Int(\hat\tau_2)\neq\emptyset}} (-1)^{\dim\tau+1} = h(b_0,\hat\tau_2) = (g*\mu)(b_0,\hat\tau_2) \] for $\hat\tau_2=p^1_{\Delta\Delta'}(\tau_2)$. However, on $\{b_0\}\times B$, we have $g=(-1)^{\dim\tau_1}\zeta$. By (\ref{moebinv0}) we thus get $$h(b_0,\hat\tau_2) = (-1)^{\dim\tau_1} \delta(b_0,\hat\tau_2),$$ which completes the proof. \end{proof} \begin{lemma} \label{epW} For $\tau\in\P$, let $T_\tau$ denote the corresponding torus orbit in $X_\Sigma$. We have $$(-1)^p e^p(T_\tau) = (-1)^{d+1-\dim\tau}\begin{pmatrix}{d+1-\dim\tau}\\p\end{pmatrix}.$$ Moreover, for $\tau\not\subseteq \partial\Delta$, we have $$ (-1)^p e^p(T_\tau \cap {\tilde{W}_0}) = (-1)^{d-\dim\tau}\left( \begin{pmatrix}{d+1-\dim\tau}\\ p+1\end{pmatrix} -\begin{pmatrix}{\dim\P_*(\tau)-\dim\tau}\\ p+1\end{pmatrix} \right) + A_{\tau,p}. $$ Here, $A_{\tau,p}=0$ if $\tau\not\subseteq \partial\Delta'$ and otherwise $$ \begin{array}{r@{}l} A_{\tau,p}=\sum_{\hat\tau\in p_{\Delta\Delta'}^{-1}(\P_*(\tau))} (-1)^{\dim\P_*(\tau)-\dim\tau+d+1-\dim\hat\tau} &\left(\begin{pmatrix}\dim\hat\tau-\dim\tau \\p+1 \end{pmatrix}\right.\\ &\left.-\begin{pmatrix}\dim\P_*(\tau)-\dim\tau\\p+1\end{pmatrix}\right). \end{array}$$ \end{lemma} Before we embark on the proof, note that the simplest form $T_\tau \cap {\tilde{W}_0}$ can take is a handlebody, i.e., the intersection of a general hyperplane with the open torus in the projective space. This occurs for $\dim\P_*(\tau)=\dim\tau$ and $A_{\tau,p}=0$ in the above lemma. A slightly more complicated shape of $T_\tau \cap {\tilde{W}_0}$ occurs for $\dim\P_*(\tau)\neq\dim\tau$ and $A_{\tau,p}=0$, where it is a product of a lower-dimensional handlebody with an algebraic torus. Finally, $A_{\tau,p}\neq 0$ accounts for a $T_\tau \cap {\tilde{W}_0}$ which is a product of an algebraic torus with a decomposition of handlebodies rather than with a single handlebody. \begin{proof} By Lemma \ref{handlebodytimestorus}, we have $e^p((\CC^*)^n)=(-1)^{p+n}\begin{pmatrix}n\\p\end{pmatrix}$. Since $\dim T_\tau=d+1-\dim\tau$, this proves the first statement. Note that $T_\tau \cap {\tilde{W}_0}=\emptyset$ if $\dim\P_*(\tau)=d+1$ because $T_{\P_*(\tau)}$ is a point in that case. So we may assume that $\tau\not\subseteq\partial\Delta$ and $\dim\P_*(\tau)<d+1$. By Lemma \ref{disjointdecomposition},\,(4) and Thm.~\ref{eulerdecompose},\,(2), \[ (-1)^p e^p(T_\tau\cap {\tilde{W}_0})= (-1)^p \sum_{k=0}^p e^k((\CC^*)^{\dim\P_*(\tau)-\dim\tau}) e^{p-k}(T_{\P_*(\tau)}\cap {W^*_0}). \] By \cite{DK86},\,4.4, for $p\ge 0$, we have \begin{equation} \label{DK86eq} e^p(T_{\P_*(\tau)} \cap {W^*_0})=(-1)^{p+a_\tau-1} \begin{pmatrix}{a_\tau}\\ p+1\end{pmatrix} + (-1)^{a_\tau-1}\varphi_{a_\tau-p}(\Delta_{\P_*(\tau)}) \end{equation} where $a_\tau= d+1-\dim\P_*(\tau)$, $\Delta_{\P_*(\tau)}=\Newton(T_{\P_*(\tau)} \cap {W^*_0})$ and $\varphi_i$ is defined as in the proof of Prop.~\ref{HodgenumbersS}. For $\P_*(\tau)\not\subset\Delta'$, by Lemma~\ref{pdeltadeltaprime},(3), we have that $\Delta_{\P_*(\tau)}$ is an $a_\tau$-dimensional standard simplex and thus $\varphi_{i}(\Delta_{\P_*(\tau)})=0$ for $i\le a_\tau$. In this case, $T_{\tau}\cap \tilde W_0\cong H^{a_\tau-1}\times (\CC^*)^{\dim\P_*(\tau)-\dim\tau}$, and Lemma~\ref{handlebodytimestorus} gives the result.
The case $\tau\subseteq \partial\Delta'$ is similar, with the first term on the right hand side of \eqref{DK86eq} giving the same contribution as the previous case, and an additional possible contribution from $\varphi_{a_\tau-p}(\Delta_{\P_*(\tau)})$. We compute this term using Lemma~\ref{pdeltadeltaprime} and the formula for $\varphi_i$ given in the proof of Prop.~\ref{HodgenumbersS} as follows: \begin{align*} &(-1)^p \sum_{k\ge 0}e^k((\CC^*)^{\dim\P_*(\tau)-\dim\tau})(-1)^{a_\tau-1}\varphi_{a_\tau-p+k}(\Delta_{\P_*(\tau)})\\ &=\sum_{k\ge 0} \begin{pmatrix}\dim\P_*(\tau)-\dim\tau\\k\end{pmatrix} \sum_{\hat\tau\in p_{\Delta\Delta'}^{-1}(\P_*(\tau))} (-1)^{m_{\tau,\hat\tau}} \begin{pmatrix}a_{\tau}-(d+1-\dim\hat\tau) \\a_{\tau}+1-(a_\tau-p+k) \end{pmatrix} \end{align*} where the sign $(-1)^{m_{\tau,\hat\tau}}= (-1)^p(-1)^{k+\dim\P_*(\tau)-\dim\tau}(-1)^{a_\tau-1+a_\tau-p+k+(d+1-\dim\hat\tau)+1}$ simplifies to\newline $m_{\tau,\hat\tau}=\dim\P_*(\tau)-\dim\tau+(d+1-\dim\hat\tau)$. To obtain $A_{\tau,p}$, we need to subtract the $k=p+1$ term from the above, which is $$\sum_{\hat\tau\in p_{\Delta\Delta'}^{-1}(\P_*(\tau))} (-1)^{m_{\tau,\hat\tau}} \begin{pmatrix}\dim\P_*(\tau)-\dim\tau\\p+1\end{pmatrix}.$$ Using Prop.~\ref{binomident},(2) then yields $A_{\tau,p}$ as given in the assertion. \end{proof} Recall from the introduction that $\shF_{\check S}=\phi_{\bar w,0}{\bf R}j_*\CC_{X}[1]$, where the filtrations are shifted by $$F^i\shF_{\check S}^k = F^{i+1}\bar{A}^{k+1},\qquad W_i\shF_{\check S}^k = W_{i+1}\bar{A}^{k+1}. $$ This implies \begin{lemma} \label{shiftHpq} $h^{p,q}\HH^i(\check S,\shF_{\check S})=h^{p+1,q+1}\HH^{i+1}(Y,\phi_{\bar w,0}{\bf R}j_*\CC_{X}).$ \end{lemma} \begin{theorem} \label{epdual} We have that \begin{enumerate} \item Poincar\'e duality holds for $h^{p,q}\HH^i(\check S,\shF_{\check S})$, i.e., $$h^{p,q}\, \HH^{i}(\check S,\shF_{\check S}) = h^{d-p,d-q}\, \HH^{2d-i}(\check S,\shF_{\check S}),$$ \item $e^{p}(\check S,\shF_{\check S})= \sum_{i,j\ge 0} (-1)^{i+j} e^{p-i}(Y^{2+i+j})$, \item $e^p(S)=(-1)^d e^{d-p}(\check S,\shF_{\check S})$. \end{enumerate} \end{theorem} \begin{proof} (1) Using Lemma~\ref{lemWspecseq},(5), we get \begin{align*} h^{p,q} \HH^i(\check S,\shF_{\check S}) = {} &h^{p+1,q+1} \HH^{i+1}(Y,\bar A^\bullet)\\ = {} &h^{d+2-(p+1),d+2-(q+1)} \HH^{2d+2-(i+1)}(Y,\bar A^\bullet) =h^{d-p,d-q} \HH^{2d-i}(\check S,\shF_{\check S}). \end{align*} (2) The Euler characteristic can be computed as an alternating sum of the dimensions of the terms in the $E_1$-term of \eqref{Wspecseq}. We use Lemma~\ref{lemWspecseq},(3) to get \begin{align*} &(-1)^p e^p(\check S,\shF_{\check S})\\ = {} & \sum_{q} (-1)^q \sum_{k}h^{p,q+k}\,\HH^{p+q}(\check S,\shF_{\check S})\\ = {} & \sum_{q} (-1)^q \sum_{k}h^{p+1,q+k}\, \HH^{p+1+q}(X,\Gr^W_{k}\bar A^\bullet)\\ = {} & \sum_{q} (-1)^q \sum_{k} \sum_{q'>-1,-k} h^{p+1,q+k}\,H^{(p+1+q)-2q'-k}( Y^{2q'+k+1},\CC)\langle-q'-k\rangle\\ = {} & \sum_{q} (-1)^q \sum_{k} \sum_{q'>-1,-k} h^{p+1-q'-k,q-q'}( Y^{2q'+k+1}). \end{align*} Since $\{(q',k)\,|\,q'>-1,-k\}$ and $\{(j,1+i-j)\,|\,i,j\ge 0\}$ define the same subset of $\ZZ^2$, we may reorganize the sum via $k=1+i-j$, $q'=j$ to get \begin{align*} (-1)^p e^p(\check S,\shF_{\check S}) = {} & \sum_{q\ge 0} (-1)^q \sum_{i,j\ge 0}h^{p+1-j-(1+i-j),q-j}( Y^{2j+(1+i-j)+1})\\ = {} & \sum_{i,j\ge 0}(-1)^{p-i+j} e^{p-i}( Y^{2+i+j}).
\end{align*} (3) By (2) and Lemma~\ref{disjointdecomposition},(1), we have \begin{align*} e^{d-p}(\check S,\shF_{\check S}) = {} &\sum_{i,j\ge 0} (-1)^{i+j} e^{d-p-i}(Y^{2+i+j})\\ = {} &\sum_{i,j\ge 0} (-1)^{i+j} e^{d-p-i}(Y_{\tor}^{2+i+j})\\ &+ \sum_{i,j\ge 0} (-1)^{i+j} e^{d-p-i}(Y_{\tor}^{1+i+j}\cap {\tilde{W}_0}). \end{align*} Using Lemma~\ref{disjointdecomposition},(2) and setting $2+i+j=\dim\omega+1$ in the first and $1+i+j=\dim\omega+1$ in the second sum allows us to continue the equality as \[ =\displaystyle\sum_{\omega\in\P_{\Delta'}\backslash \P_{\Delta'}^{[0]}\atop{0\le i\le\dim\omega-1}} (-1)^{\dim\omega-1} e^{d-p-i}(X_\omega) + \displaystyle\sum_{{\omega\in\P_{\Delta'}}\atop{0\le i\le\dim\omega}} (-1)^{\dim\omega} e^{d-p-i}(X_\omega\cap {\tilde{W}_0}) \] and by Lemma~\ref{disjointdecomposition},(3), as \[ = \displaystyle\sum_{{{\tau\in\P}\atop{\omega\subseteq \tau\cap\Delta'}}\atop{0\le i\le\dim\omega-1}} \hspace{-0.15cm} (-1)^{\dim\omega-1} e^{d-p-i}(T_\tau) + \hspace{-0.15cm} \displaystyle\sum_{{{\tau\in\P}\atop{\omega\subseteq \tau\cap\Delta'}}\atop{-1\le i\le\dim\omega-1}} \hspace{-0.15cm} (-1)^{\dim\omega} e^{d-p-i-1}(T_\tau\cap {\tilde{W}_0}). \] Note that, using Prop.~\ref{binomident},(1), for any simplex $\tau$ and $i\ge-1$, we have $$\sum_{{\omega\subseteq\tau}\atop{\dim\omega\ge i+1}}(-1)^{\dim\omega} =\sum_{j=i+1}^{\dim\tau} (-1)^j \begin{pmatrix}\dim\tau+1\\ j+1\end{pmatrix} =(-1)^{i+1}\begin{pmatrix}\dim\tau\\ i+1\end{pmatrix}$$ which we insert above to have \begin{align} \label{edpeq} \begin{split} e^{d-p}(\check S,\shF_{\check S}) = {} & \displaystyle\sum_{{\tau\in\P}\atop{0\le i\le\dim\tau\cap\Delta'-1}} (-1)^i \begin{pmatrix}\dim\tau\cap\Delta'\\ i+1\end{pmatrix} e^{d-p-i}(T_\tau)\\ &+ \displaystyle\sum_{{\tau\in\P}\atop{-1\le i\le\dim\tau\cap\Delta'-1}} (-1)^{i+1} \begin{pmatrix}\dim\tau\cap\Delta'\\ i+1\end{pmatrix} e^{d-p-i-1}(T_\tau\cap {\tilde{W}_0}). \end{split} \end{align} We apply Lemma~\ref{epW} and Prop.~\ref{binomident},(2), and obtain \[ (-1)^{p} e^{d-p}(\check S,\shF_{\check S}) = C_1 + C_2 + C_3 + C_4 \] where \begin{align*} C_1 = {} & \displaystyle\sum_{{\tau\in\P}\atop{\tau\cap\Delta'\neq\emptyset}} (-1)^{\dim\tau+1} \begin{pmatrix}d+1-\dim\tau+\dim\tau\cap\Delta'\\ d-p+1\end{pmatrix}\\ C_2 = {} & \displaystyle\sum_{{\tau\in\P}\atop{\tau\cap\Delta'\neq\emptyset}} (-1)^{\dim\tau} \begin{pmatrix}d+1-\dim\tau+\dim\tau\cap\Delta'\\ d-p+1\end{pmatrix}\\ &\ \ - \displaystyle\sum_{{\tau\in\P}\atop{{\tau\cap\Delta'\neq\emptyset}\atop{\dim\P_*(\tau)<d+1}}} (-1)^{\dim\tau} \begin{pmatrix}\dim\P_*(\tau)-\dim\tau+\dim\tau\cap\Delta'\\ d-p+1\end{pmatrix}\\ C_3= {} & \displaystyle\sum_{{{\tau\in\P}\atop{\P_*(\tau)\subseteq\partial\Delta'}} \atop{\hat\tau\in p_{\Delta\Delta'}^{-1}(\P_*(\tau))}} (-1)^{\dim\P_*(\tau)-\dim\tau+1-\dim\hat\tau} \left(\begin{pmatrix}\dim\hat\tau \\d-p+1 \end{pmatrix} - \begin{pmatrix}\dim\P_*(\tau) \\d-p+1 \end{pmatrix} \right)\\ C_4= {} & \displaystyle\sum_{{\tau\in\P}\atop{\tau\cap\Delta'\neq\emptyset}} (-1)^{\dim\tau} \begin{pmatrix}d+1-\dim\tau\\ d-p+1\end{pmatrix}\\ &+ \displaystyle\sum_{{\tau\in\P}\atop{{\tau\cap\Delta'\neq\emptyset}\atop{\dim\P_*(\tau)=d+1}}} (-1)^{\dim\tau+1} \begin{pmatrix}d+1-\dim\tau+\dim\tau\cap\Delta'\\ d-p+1\end{pmatrix}. \end{align*} Here $C_1$ is the first term of the right-hand-side of \eqref{edpeq}, along with an additional contribution for $i=-1$; this latter contribution is cancelled by the first term of $C_4$. 
The expression for $C_2$ comes from the second term of \eqref{edpeq}, using Lemma~\ref{epW}, without taking into account the term $A_{\tau,p}$ in that lemma. However, the first sum of $C_2$ includes a contribution from cells $\tau$ with $\dim\P_*(\tau)=d+1$, for which $e^{d-p-i-1}(T_{\tau}\cap\tilde W_0)=0$. The second term in $C_4$ cancels this contribution. Finally, the $A_{\tau,p}$ term is accounted for in $C_3$. The first sum of $C_2$ cancels with $C_1$, the deeper reason for this being the Lefschetz hyperplane theorem. Using Lemma~\ref{lemmasignidentities},(2), the second sum of $C_2$ can be written as \begin{align*} &\sum_{\tau\in\P_{\partial\Delta'}}(-1)^{\dim\tau+1} \begin{pmatrix}\dim\P_*(\tau) \\d-p+1 \end{pmatrix} + C'_2\\ = {} & \sum_{\tau\subset\partial\Delta'}(-1)^{\dim\tau+1} \begin{pmatrix}\dim\tau \\d-p+1 \end{pmatrix} + C'_2 \end{align*} where \[ C'_2=\displaystyle\sum_{{\tau\in\P\backslash\P_{\Delta'}}\atop{{\tau\cap\Delta'\neq\emptyset}\atop{\dim\P_*(\tau)<d+1}}} (-1)^{\dim\tau+1} \begin{pmatrix}\dim\P_*(\tau)-\dim\tau+\dim\tau\cap\Delta'\\ d-p+1\end{pmatrix} \] We apply Lemma~\ref{lemmasignidentities},(2) and (1) successively to $C_3$ to get \begin{align*} C_3= {} &\displaystyle\sum_{{\omega\subseteq\partial\Delta'} \atop{\tau\in p_{\Delta\Delta'}^{-1}(\omega)}} (-1)^{\dim\tau+1} \left(\begin{pmatrix}\dim\tau \\d-p+1 \end{pmatrix} - \begin{pmatrix}\dim\omega \\d-p+1 \end{pmatrix} \right)\\ = {} & \displaystyle\sum_{\tau\subseteq\Delta} (-1)^{\dim\tau+1} \begin{pmatrix}\dim\tau \\d-p+1 \end{pmatrix} +\displaystyle\sum_{\tau\subset\partial\Delta'} (-1)^{\dim\tau} \begin{pmatrix}\dim\tau \\d-p+1 \end{pmatrix}\\ &+ \delta_{\dim\Delta}^{\dim\Delta'} (-1)^{\dim\Delta} \begin{pmatrix}\dim\Delta \\d-p+1 \end{pmatrix} \end{align*} where $\delta$ denotes the Kronecker symbol. This last term arises because if $\dim\Delta'=\dim\Delta$, then $\partial\Delta' \not=\Delta'$, and hence $\Delta\not\in p^{-1}_{\Delta\Delta'}(\omega)$ for any $\omega\subseteq\partial\Delta'$. Using Lemma~\ref{lemmasignidentities},(2), we rewrite the part of the second sum of $C_4$ involving those $\tau$ with $\tau\subseteq\Delta'$ as \[ \displaystyle\sum_{{\tau\in\P_{\Delta'}}\atop{\dim\P_*(\tau)=d+1}} \hspace{-0.15cm} (-1)^{\dim\tau+1} \begin{pmatrix}d+1-\dim\tau+\dim\tau\cap\Delta'\\ d-p+1\end{pmatrix} = \delta_{\dim\Delta}^{\dim\Delta'} (-1)^{\dim\Delta'+1} \begin{pmatrix} \dim\Delta' \\d-p+1 \end{pmatrix}. \] Putting all transformations together in the previous order, after term pair cancellations in $(C_1,C_2)$, $(C_2,C_3)$ and $(C_3,C_4)$, we obtain \begin{align*} (-1)^{p} e^{d-p}(\check S,\shF_{\check S}) = {} & \displaystyle\sum_{{\tau\in\P\backslash\P_{\Delta'}}\atop{{\tau\cap\Delta'\neq\emptyset}\atop{\dim\P_*(\tau)<d+1}}} (-1)^{\dim\tau+1} \begin{pmatrix}\dim\P_*(\tau)-\dim\tau+\dim\tau\cap\Delta'\\ d-p+1\end{pmatrix}\\ &+ \displaystyle\sum_{\tau\subseteq\Delta} (-1)^{\dim\tau+1} \begin{pmatrix}\dim\tau \\d-p+1 \end{pmatrix}\\ &+ \displaystyle\sum_{{\tau\in\P}\atop{\tau\cap\Delta'\neq\emptyset}} (-1)^{\dim\tau} \begin{pmatrix}d+1-\dim\tau\\ d-p+1\end{pmatrix}\\ &+ \displaystyle\sum_{{\tau\in\P\backslash\P_{\Delta'}}\atop{{\tau\cap\Delta'\neq\emptyset}\atop{\dim\P_*(\tau)=d+1}}} (-1)^{\dim\tau+1} \begin{pmatrix}d+1-\dim\tau+\dim\tau\cap\Delta'\\ d-p+1\end{pmatrix}. \end{align*} Recall that $\Delta(\tau)$ denotes the smallest face of $\Delta$ containing $\tau\in\P$. 
Note that since $\Delta'$ contains all lattice points in the interior of $\Delta$, $\dim\Delta(\tau)=d+1$ is equivalent to $\tau\cap\Delta'\neq\emptyset$, so the third sum becomes $$\displaystyle\sum_{{\tau\in\P}\atop{\tau\not\subseteq\partial\Delta}} (-1)^{\dim\tau} \begin{pmatrix}\dim\Delta(\tau)-\dim\tau\\ d-p+1\end{pmatrix}. $$ For $\tau\in\P\backslash\P_{\Delta'}$ with $\tau\cap\Delta'\neq\emptyset$, we have $\dim\tau=\dim\tau\cap\partial\Delta+\dim\tau\cap\Delta'+1$. We can combine the first and fourth sums and write this as \begin{align} \label{almostdoneeq} \begin{split} &\displaystyle\sum_{{\tau\in\P\backslash\P_{\Delta'}}\atop{\tau\cap\Delta'\neq\emptyset}} (-1)^{\dim\tau+1} \begin{pmatrix}\dim\P_*(\tau)-\dim\tau\cap\partial\Delta-1\\ d-p+1\end{pmatrix}\\ = {} & \displaystyle\sum_{ {\tau'\in\P_{\partial\Delta}\atop {{\hat{\tau}\in\P_*,\hat{\tau}\supseteq\P_*(\tau')}\atop {\hat{\tau}\cap\Delta'\neq\emptyset}}}} \begin{pmatrix}\dim\hat\tau-\dim\tau'-1\\ d-p+1\end{pmatrix} \displaystyle\sum_{{\tau\in\P}\atop{{\P_*(\tau)=\hat\tau}\atop{\tau\cap\partial\Delta=\tau'}}} (-1)^{\dim\tau+1}. \end{split} \end{align} In order to apply Lemma~\ref{lemmasignidentities},(3), we identify \[ \P_{\tau',(p^1_{\Delta\Delta'})^{-1}(\hat\tau)}=\{\tau\in\P|\P_*(\tau)=\hat\tau, \tau\cap\partial\Delta=\tau'\} \] and obtain \[ \displaystyle\sum_{{\tau\in\P}\atop{{\P_*(\tau)=\hat\tau}\atop{\tau\cap\partial\Delta=\tau'}}} (-1)^{\dim\tau+1} =\left\{\begin{array}{ll} (-1)^{\dim\tau'}&\P_*(\tau')=(p^1_{\Delta\Delta'})^{-1}(\hat\tau) \\ 0&\hbox{otherwise}. \end{array}\right. \] Thus, the non-trivial case coincides with $\hat\tau=p^1_{\Delta\Delta'}(\P_*(\tau'))$, so that the sum on the right-hand-side of \eqref{almostdoneeq} can be reduced to a sum over $\tau'\in\P_{\partial\Delta}$ by using $\dim p^1_{\Delta\Delta'}(\P_*(\tau'))=\dim\P_*(\tau')+1$. Identifying $\P_*(\tau')=\Delta(\tau')$ for $\tau'\in\P_{\partial\Delta}$ and comparing the results with Prop.~\ref{HodgenumbersS},(2), we get \[ (-1)^{d-p}e^{d-p}(S)=(-1)^{p} e^{d-p}(\check S,\shF_{\check S}) \] and by Poincar\'e duality for $S$, we have $e^{d-p}(S)=e^{p}(S)$, which finishes the proof. \end{proof} \subsection{A vanishing result} By the Lefschetz hyperplane theorem, $h^{p,q}(S)=0$ unless $p=q$ or $p+q=d$. In this section, we prove that the corresponding mirror dual Hodge numbers also vanish. We recall the notation from (\ref{setupWs}). We will often drop the coefficient ring $\CC$ from a cohomology group, writing $H^k(T)$ instead of $H^k(T,\CC)$ for a variety $T$. \begin{theorem} \label{batyrevthm} Let $Z$ be a smooth hypersurface in $(\CC^*)^k$ given by a Laurent polynomial whose Newton polytope is $k$-dimensional. Then $h^{p,q}H^i(Z)=0$ unless either $i<\dim Z$ and $i=2p=2q$ or $i=\dim Z$ and $p+q\ge i$. \end{theorem} \proof For a smooth affine variety, $H^i(Z)=0$ for $i>\dim Z$ anyway. For a smooth variety $Z$, $h^{p,q}H^i(Z)=0$ for $p+q<i$ (see e.g., \cite{PS08},\,Thm.\,5.39). The remaining statements follow from the Lefschetz-type theorem of \cite{DK86},\,Prop.\,3.9. \qed \medskip Let $\tilde{D}$ denote the complement of the dense torus in $\tilde\PP_{\check\Delta}$. By Lemma~\ref{W0descprop},(3) and the simpliciality of $\bar\Sigma$, $\tilde W_0\cap \tilde D$ is a normal crossing divisor in $\tilde W_0$.
Let $\delta^{\tilde W_0\cap \tilde D}:H^k(\tilde W_0\cap\tilde D^i)\ra H^k(\tilde W_0\cap \tilde D^{i+1})$ denote the differential and augmentation of the cohomological complex associated to the semi-simplicial scheme $\tilde W_0\cap \tilde D$ and let $\gamma^{\tilde W_0\cap \tilde D}$ be its Poincar\'e dual, the Gysin map. \begin{lemma} \label{Wsequence} \begin{enumerate} \item There is a sequence $$\cdots \ra H^{p-i,q-i}( {\tilde W_0}\cap \tilde D^i) \stackrel{-\gamma^{\tilde W_0\cap \tilde D}}{\lra} \cdots \stackrel{-\gamma^{\tilde W_0\cap \tilde D}}{\lra} H^{p-1,q-1}( {\tilde{W}_0}\cap \tilde D^1) \stackrel{-\gamma^{\tilde W_0\cap \tilde D}}{\lra} H^{p,q}({\tilde W_0})\ra 0.$$ For $p\neq q$ and $\dim\Delta'>0$, the sequence is exact at every term except possibly at $H^{p-i,q-i}( {\tilde W_0}\cap \tilde D^i)$ where $p+q-2i=\dim \tilde W_0\cap \tilde D^{i}=d+1-i$. \item For $p\neq q$, there is a sequence $$\cdots \ra H^{p-i,q-i}({\tilde{W}_0}\cap Y^{i}_\tor) \stackrel{-\gamma^{Y_\ntor}}{\lra} \cdots \stackrel{-\gamma^{Y_\ntor}}{\lra} H^{p-1,q-1}( {\tilde{W}_0}\cap Y^{1}_\tor) \stackrel{-\gamma^{Y_\ntor}}{\lra} H^{p,q}({\tilde{W}_0})$$ where each map is an alternating sum of Gysin maps given by projecting $-\gamma$ in Lemma~\ref{lemWspecseq},(4), to $Y^\bullet_\ntor$. When replacing the last term by the image of the last map, the resulting sequence is a direct summand of the sequence in (1) and thus it is exact at every term except possibly at $H^{p-i,q-i}( {\tilde{W}_0}\cap Y^{i}_\tor)$ where $p+q-2i=\dim {\tilde{W}_0}\cap Y^{i}_\tor=d+1-i$. Moreover, if $\dim\Delta'=0$ then it is exact everywhere for every $p,q$. \end{enumerate} \end{lemma} \begin{proof} The sequence in (1) can be derived from the weight spectral sequence of the cohomological mixed Hodge complex with complex part $\Omega^\bullet_{\tilde{W}_0}(\log (\tilde W_0\cap\tilde D))$ computing the mixed Hodge structure on $H^{a+b}(\tilde W_0\setminus (\tilde W_0\cap\tilde D))$. It is \begin{equation} E_1^{a,b}=\HH^{a+b}(\tilde{W}_0, \Gr_{-a}^{W^{\tilde{D}}}\Omega^\bullet_{\tilde{W}_0}(\log (\tilde W_0\cap\tilde D))) \Rightarrow H^{a+b}(\tilde{W}_0\setminus (\tilde W_0\cap\tilde D)). \end{equation} Using the residue map, in terms of the fan $\bar{\Sigma}$, this becomes $$E_1^{a,b}=\bigoplus_{{\tau\in\bar\Sigma}\atop{\dim\tau=-a}} H^{2a+b}(\tilde V(\tau)\cap {\tilde{W}_0}), $$ where $V(\tau)$ denotes the closure of the orbit corresponding to $\tau$ and $\tilde V(\tau)$ denotes its inverse image under the blowup $\tilde\PP_{\check\Delta}\ra X_{\bar\Sigma}$. The differential $d_1=-\gamma^{\tilde{W}_0\cap \tilde{D}}$ is given explicitly in \cite{PS08},\,Prop.\,4.10 as the (twisted) Gysin map. Setting $a=-i, b=p+q$ gives the sequence in the assertion. By Lemma \ref{dimDelta0}, we have $$\dim \check\Delta_0\neq d+2 \iff \dim \check\Delta_0=d+1 \iff \dim\Delta'=0,$$ so we assume $\dim \check\Delta_0= d+2$. We have $$E_\infty^{a,b}=E_2^{a,b}=\Gr_{-a}^W H^{a+b}({\tilde{W}_0}\backslash({\tilde{W}_0}\cap \tilde D)).$$ The exactness follows if we show that \begin{equation} h^{p',q'}\Gr_{-a}^W H^{a+b}({\tilde{W}_0}\backslash({\tilde{W}_0}\cap \tilde D))=0\hbox{ for } p'\neq q' \end{equation} unless $a+b=d+1$. This follows directly from ${\tilde{W}_0}\setminus({\tilde{W}_0}\cap \tilde D) ={\bar{W}_0}\setminus({\bar{W}_0}\cap \bar D)$ and Thm.~\ref{batyrevthm}, where $\bar D$ is the toric boundary in $X_{\bar\Sigma}$. 
To prove (2), we set $$A=\{\tau\in\bar{\Sigma}| \tau=\operatorname{{Cone}}(\tau_1), \tau_1\in \P, \tau_1\not\subseteq \Delta', \tau_1\not\subseteq \partial\Delta\}.$$ Note that $\tilde V(\tau)=V(\tau)$ for $\tau\in A$. Prop.~\ref{W0descprop},(2), implies that for $\tau\in A$, ${\tilde{W}_0}\cap V(\tau)$ is the pullback of a projective space under a toric blowup. Hence $H^{p,q}({\tilde{W}_0}\cap V(\tau))=0$ for $p\neq q$ and $\tau \in A$. Moreover, $\supp\bar\Sigma\backslash \supp(\{\Int(\tau)|\tau \in A\}\cup\{0\})$ has two connected components. We focus on the component of cones contained in $\operatorname{{Cone}}(\Delta')$. As a summand of the sequence in (1), we get the desired sequence $$\cdots \ra \bigoplus_{{\tau\in\bar\Sigma\cap \operatorname{{Cone}}(\Delta')}\atop{\dim\tau=i}} H^{p-i,q-i}(V(\tau)\cap {\tilde{W}_0})\ra\cdots\ra \bigoplus_{{\tau\in\bar\Sigma\cap \operatorname{{Cone}}(\Delta')}\atop{\dim\tau=1}} H^{p-1,q-1}(V(\tau)\cap {\tilde{W}_0}) \ra H^{p,q}({\tilde{W}_0})$$ by identifying $ Y_\tor^{i}=\coprod_{{\tau\in\bar\Sigma\cap \operatorname{{Cone}}(\Delta')}\atop{\dim\tau=i}} V(\tau)$. We now treat the case $\dim\Delta'=0$ separately by a different proof. The projection $\pi_{\operatorname{{Cone}}(\Delta')}$ in Prop.~\ref{properWprop},(2) induces a projection $$\bar{W}_0\ra \bar{W}_0\cap D_{\operatorname{{Cone}}(\Delta')}=Y^2_{\ntor}$$ for which the inclusion $Y^2_{\ntor} \ra\bar{W}_0$ provides a section. Hence, the pullback by the inclusion is surjective on cohomology and thus its dual, the Gysin map $H^{i-2}(Y^2_{\ntor})\ra H^i(\bar{W}_0)$, is an injection. Since $Y^1_{\ntor}={\tilde{W}}_0$ is a blowup of $\bar W_0$, we may compose the above injection with the injection $H^i(\bar{W}_0)\ra H^i(Y^1_{\ntor})$. This composition is indeed $-\gamma^{Y_\ntor}$. \end{proof} \begin{proposition} \label{vanishing} We have \begin{enumerate} \item $h^{p,q+k}\, \HH^{p+q}(\check S,\shF_{\check S})=h^{p+1,q+k+1}\,\HH^{p+q+1}(Y,\psi_{\bar w,0}\CC_{\bar X})$ for $k\ge 1$, \item $h^{p,q+k}\, \HH^{p+q}(Y,\psi_{\bar w,0}\CC_{\bar X})=0$ unless $p+q=d+1$ or $k=0$, \item $h^{p,q+k}\, \HH^{p+q}(\check S,\shF_{\check S})=0$ unless $p+q=d$ or $p-q=k=0$. \end{enumerate} \end{proposition} \begin{proof} (1) follows from Lemma~\ref{shiftHpq} and Thm.~\ref{exseqCHMCL0},(3). By Poincar\'e duality, Lemma~\ref{lemWspecseq},(6), it suffices to prove the vanishing in (2) for $p+q>d+1$ and $k\neq 0$. Choose $t_0\in\CC$ with $|t_0|$ sufficiently small so that $0$ is the only critical value of $\bar w$ in the closed disk with radius $|t_0|$. Note that $\bar W_{t_0}:=\bar w^{-1}(t_0)$ is a $\bar{\Sigma}$-regular hypersurface as argued in the proof of Prop.~\ref{properWprop}. As in the proof of Lemma~\ref{Wsequence},(1), we have an exact sequence $$ \bigoplus_{{\tau\in\bar\Sigma}\atop{\dim\tau=1}} H^{b-2}(\tilde V(\tau)\cap \tilde W_{t_0}) \ra H^b(\tilde W_{t_0}) \ra \Gr^W_0H^b(\bar W_{t_0}\backslash(\bar W_{t_0}\cap \bar D)) \ra 0.$$ Let $T$ denote the monodromy operator for $\bar w$ around $0$. Then $T$ and $N=\log T$ operate on this sequence.
Note that it suffices to show that $N$ is trivial on $H^b(\tilde W_{t_0})$ for $b>d+1$ because then the (monodromy) weight filtration on $\HH^{b}(Y,\psi_{\bar w,0}\CC_{\bar X})$ is also trivial, i.e., $$\HH^{b}(Y,\psi_{\bar w,0}\CC_{\bar X})=\Gr_0^W \HH^{b}(Y,\psi_{\bar w,0}\CC_{\bar X}).$$ By Thm.~\ref{batyrevthm}, $\Gr^W_0H^b(\bar W_{t_0}\backslash(\bar W_{t_0}\cap \bar{D}))=0$ for $b>d+1$, so we only need to show that $N$ is trivial on $H^{b-2}(\tilde V(\tau)\cap \tilde W_{t_0})$. It suffices to show the triviality on $H^{b-2}(V(\tau)\cap \bar W_{t_0})$. We show that $\bar{w}|_{V(\tau)}$ is constant if $\tau\not\in \Sigma$. Recall that the pencil defined by $\bar{w}$ as a family of sections of $\phi^*\O_{\PP_{\check \Delta}}(1)$ (where $\phi:X_{\bar\Sigma}\rightarrow\PP_{\check \Delta}$ is the resolution) is $$ \bar{w}(t) = t\cdot z^0 + c_{\rho}z^{\rho}+\sum_{\omega\subset\Delta} c_{\omega}z^{(n_{\omega}, \varphi_{\Delta}(n_{\omega}))} $$ and $z^0$ vanishes on $D_\infty = X_{\bar\Sigma}\backslash X_\Sigma$. We have $V(\tau)\subseteq D_\infty$ if $\tau\not\in \Sigma$, so indeed $\bar{w}$ is constant on such $V(\tau)$. Now let us assume that $\tau\in\Sigma\backslash\{0\}$. The Newton polytope of $\bar W^\tau_t:=\bar w^{-1}(t)\cap V(\tau)$ is a proper face of $\check\Delta$ supported by the hyperplane $\tau^\perp$. It contains $0$ and thus by the smoothness assumption of $\PP_\Delta$, this face generates the standard cone $\tau^\perp\cap\check\sigma \subseteq \partial\check\sigma$. For each $t$, there is a diagram \[ \xymatrix@C=30pt {X_{\bar\Sigma}\ar[d] & V(\tau)\ar[d]\ar@{_{(}->}[l]& \bar W^\tau_t\ar[d]\ar@{_{(}->}[l]\\ \PP_{\check\Delta} & \PP_{\check\Delta\cap\tau^\perp}\ar@{_{(}->}[l]& W^\tau_t\ar@{_{(}->}[l] } \] where the first vertical map is birational, the second and third vertical maps have toric fibres, the horizontal maps are closed embeddings, the squares are pullback diagrams, $\PP_{\check\Delta\cap\tau^\perp}\cong \PP^{\dim \check\Delta\cap\tau^\perp}$ and $W^\tau_t$ is a hyperplane section in the latter. The fibres of the vertical maps over closed points of $W^\tau_t$ are toric varieties. In particular, $W^\tau_{t_0}$ is a $\check\Delta\cap\tau^\perp$-regular hypersurface and thus has a disjoint decomposition in handlebodies of different dimensions induced from the intersection with the toric strata in $\PP_{\check\Delta\cap\tau^\perp}$. Since handlebodies as well as toric varieties have Hodge structures concentrated in degrees $(p,q)$ with $p=q$ (see Lemma~\ref{handlebodytimestorus}), this also holds for $\bar{W}^\tau_{t_0}$ which inherits a decomposition in products of handlebodies and toric varieties. The monodromy theorem, e.g., \cite{PS08},\,Cor.\,11.42, implies that $N$ operates trivially on $H^\bullet(\bar{W}^\tau_{t_0})$. We now show (3). Note that (1) and (2) and the Poincar\'e duality of Lemma~\ref{lemWspecseq},(5), imply the vanishing for $k\neq 0$. It suffices to show it for the case where $k=0, p+q>d$ and $p\neq q$. We use Lemma~\ref{shiftHpq} and work with $\bar{A}^\bullet$, i.e., we want to show $$h^{p+1,q+1}\HH^{p+q+1}(Y,\bar A^\bullet)=0$$ for $p+q>d$. 
Recall that Lemma~\ref{lemWspecseq} provides us with a sequence $$\cdots\ra {}_WE_1^{-k,m+k}\stackrel{d_1}{\lra} {}_WE_1^{-(k-1),(m+1)+(k-1)} \ra \cdots$$ which becomes $$ \cdots\ra \bigoplus_{\tilde q>-1,-k}H^{m-2\tilde q-k}( Y^{2\tilde q+k+1}) \stackrel{d_1}{\lra} \bigoplus_{\tilde q>-1,-(k-1)}H^{m-2\tilde q-k+2}( Y^{2\tilde q+k}) \ra \cdots $$ We have $\dim Y^i=d+2-i$, so $H^j(Y^i)=0$ for $2i+j>2d+4$ and in particular $H^{m-1}(Y^i)=0$ for $i>d+2-(m-1)/2$. We fix $m$. Because $d_1$ splits up as $d_1=\delta-\gamma$, the above sequence is the total complex of the double complex \begin{center} \resizebox{16cm}{!}{ $$ \xymatrix@C=30pt { H^{m-1}( Y^{2})\ar^{\delta}[r] \ar@{.}[dr]|-{k=1} &\qquad\cdots \qquad\ar[r] &H^{m-1}( Y^{d+1-(m-1-i)/2}) \ar[r] \ar@{.}[dr]|-{k=2-d+(m-1-i)/2} &H^{m-1}( Y^{d+2-(m-1-i)/2}) \\ H^{m-3}( Y^{3})\ar^{-\gamma}[u]\ar[r] &H^{m-3}( Y^{4}) \ar[r]\ar[u] &\qquad\cdots \qquad\ar[r] &H^{m-3}( Y^{d+3-(m-1-i)/2}) \ar[u] \\ \qquad\vdots\qquad\ar[u] \ar@{.}[dr]|-{k=(m-1-i)/2} &\vdots\ar[u]&\qquad\qquad\qquad&\vdots\ar[u] \\ H^i( Y^{2+(m-1-i)/2}) \ar[r]^{\delta}\ar[u]^{-\gamma} &H^i( Y^{3+(m-1-i)/2}) \ar[r]\ar[u] & \qquad \cdots \ar[r]\qquad &H^i( Y^{d+2}) \ar[u] \ar@{.}[ul]|-{k=1-d+(m-1-i)} } $$ } \end{center} concentrated in a rectangle and with $i=1$ if $m$ is even and $i=0$ otherwise. Here the main diagonal (marked as $k=1$) gives $E^{-1,m+1}$, with other diagonals giving $E^{-k,(m-k+1)+k}$ for various $k$. We set $m=p+q+1>d+1$. Note that $\Gr_1^W\HH^{p+q+1}(Y,\bar A^\bullet)$ is the cohomology group with respect to the total differential at the main diagonal. Since $m>d+1$, the rectangle extends more in the $-\gamma$-direction than it does in the $\delta$-direction. We restrict this double complex to the off-diagonal Hodge classes, i.e., we write $\bigoplus_{p'\neq q'}H^{p',q'}$ in front of each term. There is no ambiguity here because the Hodge structure of each term is pure and maps are strictly compatible with these. We then compute the cohomology of this restricted double complex with respect to $\delta-\gamma$ using the spectral sequence whose $E_0$-term has differential $-\gamma$ and claim that $E_2|_{k=1}=0$. This will finish the proof of (3). For $p'\neq q'$, $H^{p',q'}(Y^i_\tor)=0$ because toric varieties have no off-diagonal Hodge classes and thus $H^{p',q'}(Y^i)=H^{p',q'}(Y^i_\ntor)$. All columns are exact at $k=1$ by Lemma~\ref{Wsequence},(2). Indeed ${\tilde{W}_0}\cap Y^{i}_\tor=Y^{i+1}_\ntor$ and since $$\dim \tilde W_0\cap Y_\tor^{2\tilde q+1}=d-2\tilde q< p+q-2\tilde q=m-1-2\tilde q$$ by $p+q>d$, the exceptional cases lie strictly below the main diagonal. We have thus shown that $E_1|_{k=1}=0$ away from the top left corner, i.e., away from $\bigoplus_{{p'+q'=m-1}\atop{p'\neq q'}}H^{p',q'}( Y_\ntor^{2})$. We claim that $E_2|_{k=1}=0$ at this term. This is equivalent to the injectivity of the map induced by $\delta$ on the cokernels of the two top left vertical arrows. Using the exactness of Lemma~\ref{Wsequence},(2), this is equivalent to the injectivity of \begin{equation} \label{imgamma} (\im\gamma)\cap H^{m+1}_{\neq}(\tilde W_0) \stackrel{\delta}{\lra} (\im\gamma)\cap H^{m+1}_{\neq}(\tilde W_0\cap Y_\tor^{1}). \end{equation} where we have used the short notation $H^b_{\neq}$ for $\bigoplus_{p'\neq q'}H^{p',q'}H^b$.
By Lemma~\ref{Wsequence},(1) and Poincar\'e duality, we have an injection $$ H^{m+1}_{\neq}(\tilde W_0) \stackrel{\delta^{\tilde W_0\cap \tilde D}}{\lra} H^{m+1}_{\neq}(\tilde W_0\cap\tilde D^1).$$ We can't directly deduce the injectivity in (\ref{imgamma}) from this because $\delta=\pi_{Y_\ntor}\circ\delta^{\tilde W_0\cap \tilde D}$ where $\pi_{Y_\ntor}:H^{m+1}_{\neq}(\tilde W_0\cap \tilde D^1) \ra H^{m+1}_{\neq}( \tilde W_0\cap Y_\tor^{1})$ denotes the projection. We are going to show that \begin{equation} \label{imagedeltacontained} \delta^{\tilde W_0\cap \tilde D}((\im\gamma)\cap H^{m+1}_{\neq}(\tilde W_0)) \subseteq (\im\gamma)\cap H^{m+1}_{\neq}(\tilde W_0\cap Y_\tor^{1}), \end{equation} which then implies (\ref{imgamma}). Let us consider the diagram \[ \xymatrix@C=30pt { H^{m+1}_{\neq}(\tilde W_0) \ar^{\delta^{\tilde W_0\cap \tilde D}}[r]& H^{m+1}_{\neq}(\tilde W_0\cap\tilde D^1)\\ H^{m-1}_{\neq}(\tilde W_0\cap\tilde D^1) \ar^{\delta^{\tilde W_0\cap \tilde D}}[r] \ar^{-\gamma^{\tilde W_0\cap \tilde D}}[u] & H^{m-1}_{\neq}(\tilde W_0\cap\tilde D^2) \ar^{-\gamma^{\tilde W_0\cap \tilde D}}[u]. } \] It is anti-commutative because it is part of the differential in the weight spectral sequence of the punctured tubular neighbourhood of $\tilde W_0\cap \tilde D$ in $\tilde W_0$. Moreover, by Lemma~\ref{Wsequence},(2), the three terms involving the bottom and right map split as direct sums where one summand is $$H^{m-1}_{\neq}(\tilde W_0\cap Y_\tor^1)\stackrel{\delta}{\lra} H^{m-1}_{\neq}(\tilde W_0\cap Y_\tor^2)\stackrel{-\gamma}{\lra} H^{m+1}_{\neq}(\tilde W_0\cap Y_\tor^1).$$ We get (\ref{imagedeltacontained}) from \begin{align*} \delta^{\tilde W_0\cap \tilde D}({(\im\gamma)\cap H^{m+1}_{\neq}(\tilde W_0)}) ={} & (\delta^{\tilde W_0\cap \tilde D}\circ \gamma^{\tilde W_0\cap \tilde D})({H^{m-1}_{\neq}(\tilde W_0\cap Y_\tor^1)})\\ ={}& (-\gamma^{\tilde W_0\cap \tilde D} \circ\delta^{\tilde W_0\cap \tilde D}) ({H^{m-1}_{\neq}(\tilde W_0\cap Y_\tor^1)})\\ ={}& (-\gamma \circ\delta)({H^{m-1}_{\neq}(\tilde W_0\cap Y_\tor^1)})\\ \subseteq {} & (\im\gamma)\cap H^{m+1}_{\neq}(\tilde W_0\cap Y_\tor^1). \end{align*} \end{proof} \subsection{The main theorem} With preparations complete, we can finish the proof of our main result by computing $h^{p,p}(\check S,\shF_{\check S})$ as defined in \eqref{hpqFdef}, for $2p>d$. Note that, for $2p>d$, we have by Lemma~\ref{shiftHpq} and Prop.~\ref{vanishing},(3), that \[ h^{p,p}(\shF_{\check S})=h^{p,p}\HH^{2p}(\check S,\shF_{\check S}) =h^{p+1,p+1}\HH^{2p+1}(Y,\bar{A}^\bullet). \] \begin{proposition} \label{van_to_gen_nby} For $2p>d+2$, we have \begin{enumerate} \item $h^{p,p}\HH^{2p-1}(Y,\bar{A}^\bullet) = h^{p,p}\HH^{2p}(Y,\CC) -h^{p,p}\HH^{2p}(Y,{A}^\bullet)$. \item $\Gr_i^W\HH^{m}(Y,\CC)=0$ for $i\neq 0$ and $m>d+2$. \end{enumerate} \end{proposition} \begin{proof} We apply $\Gr_\bullet^W$ to the sequence in Thm.~\ref{exseqCHMCL0},(2) in order to obtain the exact sequence $$\cdots\ra\Gr^W_1\HH^{2p-1}(Y,{A}^\bullet) \ra\Gr^W_1\HH^{2p-1}(Y,\bar{A}^\bullet) \ra\qquad\qquad\qquad\qquad$$ $$ \Gr^W_0\HH^{2p}(Y,\CC) \ra\Gr^W_0\HH^{2p}(Y,{A}^\bullet) \ra\Gr^W_0\HH^{2p}(Y,\bar{A}^\bullet) \ra\cdots $$ and conclude (1) from the vanishing of the exterior terms by Prop.~\ref{vanishing},(2)-(3). Similarly, replacing $\Gr_0^W$ (resp.\ $\Gr_1^W$) in the above sequence by $\Gr_i^W$ (resp.\ $\Gr_{i+1}^W$), we deduce (2). \end{proof} \begin{lemma} \label{lemepqYtor} Let $Y_\tor$ denote the closure of $Y\setminus \tilde W_0$. 
We have $e^{p,q}(Y_\tor)=0$ for $p\neq q$ and $$e^{p,p}(Y_\tor) =(-1)^{d+1-p}\sum_{\tau\in\P,\tau\not\subset\partial\Delta} (-1)^{\dim\tau} \begin{pmatrix}\dim\Delta(\tau)-\dim\tau\\ p\end{pmatrix}.$$ \end{lemma} \begin{proof} Recall from \cite{DK86},\,2.5 that for a compact toric variety $X_{\Sigma_0}$, one has $$h^{p,p}(X_{\Sigma_0},\CC)=\dim H^{2p}(X_{\Sigma_0})=\sum_{\tau\in\Sigma_0}(-1)^{\codim\tau-p} \begin{pmatrix}\codim\tau\\ p\end{pmatrix}$$ and $H^{k}(X_{\Sigma_0},\CC)=0$ for odd $k$. From the weight spectral sequence on the mixed Hodge complex computing the mixed Hodge structure on $H^\bullet(Y,\CC)$, we get $$e^{p,q}(Y_\tor)=\sum_{i\ge 1} (-1)^{i+1} h^{p,q}(Y_\tor^i)$$ which is zero if $p\neq q$, so let us assume $p=q$, giving \begin{align*} e^{p,q}(Y_\tor)= {} &\sum_{\omega\in\P_{\Delta'}} (-1)^{\dim\omega} \dim H^{2p}(X_\omega)\\ = {} & \sum_{\omega\in\P_{\Delta'}} (-1)^{\dim\omega} \sum_{\tau\in\P, \tau\supseteq\omega} (-1)^{d+1-\dim\tau-p} \begin{pmatrix}d+1-\dim\tau\\ p\end{pmatrix}. \end{align*} Using that, for fixed $\tau$, we have $1=\sum_{\P_{\Delta'}\ni\omega\subseteq\tau}(-1)^{\dim\omega}$ and that $d+1=\dim\Delta(\tau)$ for $\tau\in\P\setminus \P_{\partial\Delta}$, we conclude the assertion. \end{proof} \begin{theorem} Given \begin{itemize} \item a lattice polytope $\Delta$ defining a smooth toric variety and having at least one interior lattice point; \item a star-like triangulation of $\Delta$ by standard simplices; \item Landau-Ginzburg models $w:X_\Sigma\ra\CC$ and $\check w:X_{\check\Sigma}\ra\CC$ associated to resolutions of the cone over $\Delta$ and its dual cone; \end{itemize} then for the sheaves of vanishing cycles $\shF_{S}=\phi_{\check w,0}{\bf R}j_*\CC_{X_{\check \Sigma}}[1]$ and $\shF_{\check S}=\phi_{w,0}{\bf R}j_*\CC_{X_{\Sigma}}[1]$ we have\footnote{We use the notation $h^{p,q}(\shF_{S})=h^{p,q}(S,\shF_{S})= \sum_k h^{p,q+k}\,\HH^{p+q}(S,\shF_S)$ and likewise for $\check S$.} $$h^{p,q}(\shF_{S}) = h^{d-p,q}(\shF_{\check S})$$ giving $$h^{p,q}(S) = h^{d-p,q}(\shF_{\check S}).$$ \end{theorem} \begin{proof} By Example~\ref{cohoS}, $h^{p,q}(\shF_S)=h^{p,q}(S)$. By Prop.~\ref{HodgenumbersS},(1), we have \begin{equation} \label{whatwehavealready} e^p(S)=h^{p,p}(S)+(-1)^d h^{p,d-p}(S), \end{equation} while by Thm.~\ref{epdual},(3) and the vanishing by Prop.~\ref{vanishing},(3), we have \[ e^p(S)=(-1)^de^{d-p}(\check S,\shF_{\check S})= h^{d-p,p}(\shF_{\check S})+(-1)^d h^{d-p,d-p}(\shF_{\check S}). \] Thus it is enough to show that $h^{d-p,d-p}(\shF_{\check S})=h^{p,d-p}(S)$. This follows from (\ref{whatwehavealready}) if $d$ is even and $p=d/2$, so by the duality of Theorem~\ref{epdual},(1), it remains to show that the equality holds for $2p>d$. Using Prop.~\ref{HodgenumbersS},(3), again, we just need to show for $2p>d$ that \begin{equation} \label{laststep} h^{p,p}(\shF_{\check S}) = (-1)^{d-p}\sum_{\tau\in\P}(-1)^{\dim\tau} \begin{pmatrix}\dim\Delta(\tau)-\dim\tau\\ p+1\end{pmatrix}. \end{equation} Let us assume $2p>d$. Choose $t_0\in\CC^*$ with $|t_0|$ small. By Prop.~\ref{van_to_gen_nby}, we have $h^{p,p}(\shF_{\check S}) = h^{p+1,p+1}H^{2p+2}(\bar w^{-1}(0),\CC)-h^{p+1,p+1}H^{2p+2}(\bar w^{-1}(t_0),\CC)$. Note that by Prop.~\ref{van_to_gen_nby},(2), and the fact that $h^{p,q}H^i(Y)=0$ for $Y$ proper and $p+q>i$, \[ e^{p+1,p+1}(\bar w^{-1}(0))=h^{p+1,p+1}H^{2p+2}(\bar w^{-1}(0),\CC). \] By the smoothness of $\bar w^{-1}(t_0)$, this similarly holds for $\bar w^{-1}(t_0)$.
The contraction $\tilde \PP_{\check\Delta}\ra X_{\bar{\Sigma}}$ gives an isomorphism of $\bar w^{-1}(t_0)\cap D$ and $\bar w^{-1}(0)\cap D$. Thus using Thm.~\ref{eulerdecompose},(1), we get \[ h^{p,p}(\shF_{\check S}) = e^{p+1,p+1}(w^{-1}(0))-e^{p+1,p+1}(w^{-1}(t_0)). \] Moreover by a Lefschetz-type result (see \cite{DK86},\,3.9), we have Gysin isomorphisms $$H^i_c(w^{-1}(t_0)\cap(\CC^*)^{d+2}) \ra H^{i+2}_c((\CC^*)^{d+2}) \leftarrow H^i_c(w^{-1}(0)\cap(\CC^*)^{d+2})$$ for $i\ge d+2$. Note that this is also true in the $\dim\Delta'=0$ case using the fact that then $w^{-1}(0)\cap(\CC^*)^{d+2} \cong\CC^*\times W'$ and $W'$ has a Newton polytope of dimension $d$. On the other hand $h^{p+1,p+1}H^i_c(T)=0$ for $i<2p+2$ and $T$ smooth (by Poincar\'e duality and \cite{PS08},\,Thm.\,5.39), so $h^{p+1,p+1}H^{i}_c(w^{-1}(t)\cap(\CC^*)^{d+2})=0$ for $i\le d+2$, $t\in\{0,t_0\}$ and thus again by Thm.~\ref{eulerdecompose},(1), \[ h^{p,p}(\shF_{\check S}) = e^{p+1,p+1}(\partial w^{-1}(0)) - e^{p+1,p+1}(\partial w^{-1}(t_0)) \] where $\partial w^{-1}(t)$ denotes the intersection of $w^{-1}(t)$ with the complement of the dense torus in $X_\Sigma$. Note that $Y_\tor\subset w^{-1}(0)$, $w^{-1}(t_0)\cap Y_\tor=\emptyset$ and the torus orbits in $X_\Sigma\setminus Y_\tor$ are indexed by $\P_{\partial\Delta}$. Decomposing in torus orbits yields \[ h^{p,p}(\shF_{\check S}) = e^{\hat{p},\hat{p}}(Y_\tor) - \sum_{\tau\in\P_{\partial\Delta}} (e^{\hat{p},\hat{p}}(w^{-1}(t_0)\cap T_\tau) - e^{\hat{p},\hat{p}}( w^{-1}(0)\cap T_\tau)) \] where $\hat{p}=p+1$. By Cor.~\ref{handlebodies}, for $\tau\in\P_{\partial\Delta}$, we have \begin{align*} w^{-1}(t_0)\cap T_\tau \cong {} & H^{\codim \P_*(\tau)-1}\times (\CC^*)^{\dim\P_*(\tau)-\dim\tau},\\ w^{-1}(0)\cap T_\tau \cong{} & H^{\codim \P_*(\tau)-2}\times (\CC^*)^{\dim\P_*(\tau)-\dim\tau+1} \end{align*} Note that, for $\tau\in\P_{\partial\Delta}$, $w^{-1}(t_0)\cap T_\tau$ is non-empty iff $\codim \P_*(\tau)\ge 1$ whereas $w^{-1}(0)\cap T_\tau$ is non-empty iff $\codim \P_*(\tau)\ge 2$; moreover, $\P_*(\tau)=\Delta(\tau)$. The assertion now follows from Lemma~\ref{lemepqYtor} and Lemma~\ref{handlebodytimestorus}. \end{proof} \section{Complements} \label{sectionalephnull} \subsection{Relation to discrete Legendre transforms and toric degenerations} \label{discLeg} Recall from \cite{GS03} the definition of a polarized tropical manifold, $(B,\P,\varphi)$, where $B$ is an integral affine manifold with singularities, $\P$ a polyhedral decomposition of $B$ into lattice polyhedra, and $\varphi$ a strictly convex multi-valued piecewise linear function with integral slopes on $B$. A tropical manifold may have a boundary as well as unbounded cells. Here, we will only need tropical manifolds without singularities, isomorphic to polyhedra, and with $\varphi$ single-valued. Also recall from \cite{GS03} how to associate a polarized tropical manifold to a polarized toric degeneration of an algebraic variety. This polarized tropical manifold is the dual intersection complex of the toric degeneration. In addition, recall the notion of the discrete Legendre transform. The latter associates another polarized tropical manifold $(\check B,\check\P,\check\varphi)$ to $(B,\P,\varphi)$ which in turn transforms back to $(B,\P,\varphi)$ upon another application of a discrete Legendre transform. This transform has been found to realize mirror symmetry for maximally unipotent toric degenerations. We now relate this to our mirror construction. 
\begin{figure} \input{DLTs.pstex_t} \caption{Tropical manifolds and their relationships: {\scriptsize DLT} marks a discrete Legendre transform, {\scriptsize s} marks a subdivision, {\scriptsize d} marks a degeneration} \label{DLTs} \end{figure} We return to the situation of \eqref{generalmirror1}, \eqref{generalmirror2}, so we are given dual cones $\sigma$, $\check\sigma$ subdivided into fans $\Sigma$, $\check\Sigma$ giving Landau-Ginzburg models $w:X_{\Sigma} \rightarrow\CC$, $\check w:X_{\check\Sigma}\rightarrow\CC$. We shall build toric degenerations of $X_{\Sigma}$, $X_{\check\Sigma}$ whose corresponding dual intersection complexes $(B,\P,\varphi)$ and $(\check B,\check\P,\check\varphi)$ are related by discrete Legendre transform. This will show that the mirror symmetry of \eqref{generalmirror1}, \eqref{generalmirror2} fits into the general setup of the Gross-Siebert program. We use a standard method of building toric degenerations, see e.g., \cite{NS06}. Let $B\subseteq M_{\RR}$ be a (non-compact) polyhedron with a polyhedral decomposition $\P$ and a strictly convex piecewise linear function $\varphi$ with integral slopes. Let $\tilde{\Sigma}$ be the fan in $\bar M_{\RR}=M_{\RR}\oplus\RR$ defined by \[ \tilde{\Sigma}=\bigcup_{\tau\in\P} \{\hbox{faces of $\operatorname{{Cone}}(\tau)$}\}, \] where now one must take care to take the closure in defining \[ \operatorname{{Cone}}(\tau)=\overline{\{(rm,r)\,|\,m\in \tau, r\ge 0\}}, \] as $\tau$ need not be compact. We say $(B,\P)$ has \emph{asymptotic fan} $\Sigma$ if \[ \Sigma=\{\tau\in\tilde\Sigma\,|\,\tau\subseteq M_{\RR}\times\{0\}\}. \] If $(B,\P)$ has asymptotic fan $\Sigma$, then the projection $\bar M_{\RR} \rightarrow\RR$ induces a map of toric varieties $X_{\tilde\Sigma} \rightarrow\AA^1$ whose general fibre is isomorphic to $X_{\Sigma}$. This is a toric degeneration of $X_{\Sigma}$. Further, $\varphi$ induces a piecewise linear function $\tilde\varphi$ on $\tilde\Sigma$ as the unique piecewise linear extension of $\varphi$ to $\tilde\Sigma$, thinking of $\varphi$ as a function on $B\times \{1\}$. We then have: \begin{theorem} Let $w:X_\Sigma\ra\CC, \check w:X_{\check\Sigma}\ra\CC$ be dual Landau-Ginzburg models as given in (\ref{generalmirror1}),(\ref{generalmirror2}). Let $\psi$ (resp. $\check \psi$) denote the piecewise linear function inducing the subdivision $\Sigma$ of $\sigma$ (resp. $\check\Sigma$ of $\check\sigma$). Then there are polarized tropical manifolds $(B,\P,\varphi)$, $(\check B, \check\P,\check\varphi)$, related by discrete Legendre transform, yielding fans $\tilde\Sigma, \tilde{\check\Sigma}$ as above, such that \begin{itemize} \item $\Sigma$ and $\check\Sigma$ are the asymptotic fans of $(B,\P)$ and $(\check B,\check\P)$ respectively. \item Thinking of $\Sigma$ and $\check\Sigma$ as subfans of $\tilde \Sigma$ and $\tilde{\check\Sigma}$ respectively via $\tau\mapsto \tau\times \{0\}$, we have $\tilde\varphi|_{\Sigma}=\psi$ and $\tilde{\check\varphi}|_{\check\Sigma}=\check\psi$. \end{itemize} \end{theorem} \begin{proof} By adding a linear function, we may assume that $\check\psi|_{\check \tau}\equiv 0$ for some maximal cone $\check\tau\in\check\Sigma$. Let $\check B=\Delta_\psi$ be the Newton polyhedron of $\psi$ and $\Delta_\psi^c$ the convex hull of its compact faces. We have $\Delta_\psi=\Delta_\psi^c+\check\sigma$. Then \[ \operatorname{{Cone}}(\check B)=\bigcup_{t\ge 0} (t\Delta_\psi^c+\check\sigma,t). \] Let $\check\psi^c$ be the zero function on $\Delta_\psi^c$. 
For $(b,t)\in \operatorname{{Cone}}(\check B)$ we set $$\tilde{\check\varphi}(b,t)=\min\{\check\psi^c(p)+\check\psi(q)|p+q=b,p\in t \Delta_\psi^c,q\in\check\sigma\}.$$ This function is piecewise affine and convex on $\operatorname{{Cone}}(\check B)$. Let $\tilde{\check\Sigma}$ be the fan of maximal domains of linearity of $\tilde{\check\varphi}$, let $\check\P$ be the induced cell decomposition on $\check B$, identified with $\check B\times \{1\}$, and let $\check\varphi=\tilde{\check\varphi}|_{\check B\times\{1\}}$. Note that $\tilde{\check\varphi}|_{\check\sigma\times \{0\}}=\check\psi$ by construction, so $(\check B,\check\P,\check\varphi)$ has the desired properties. To get the dual, set $$\check P=\{(b,s)\in \check B\times\RR|s\ge\check\varphi(b)\},$$ and let $\tilde\Sigma$ be the normal fan to $\check P$, with piecewise linear function $\tilde\varphi$ induced by $\check P$. Then $\tilde\Sigma$ has support $|\tilde\Sigma|$ contained in $N_{\RR}\times\RR_{\ge 0}$, and if we take $B=|\tilde\Sigma|\cap (N_{\RR}\times\{1\})$, then $|\tilde\Sigma|=\operatorname{{Cone}}(B)$. Furthermore, $B$ inherits a decomposition $\P$ from $\tilde\Sigma$, and $\tilde\Sigma$ is obtained by taking cones over elements of $\P$. The asymptotic fan of $B$ is the normal fan to $\check B$, i.e., $\Sigma$, and $\tilde\varphi|_{\Sigma}=\psi$, since $\check B$ was the Newton polytope of $\psi$. Setting $\varphi=\tilde\varphi|_{B\times\{1\}}$, one checks that $(B,\P,\varphi)$ is the discrete Legendre transform of $(\check B, \check\P,\check\varphi)$. \end{proof} We note that the choice of $\P$, $\check\P$ given in the proof is not canonical: there may be many choices. Given the pair $\tilde\Sigma$, $\tilde{\check{\Sigma}}$ produced by the theorem, we can construct Landau-Ginzburg models $\tilde w$, $\tilde{\check w}$ on the families $X_{\tilde\Sigma},X_{\tilde{\check\Sigma}}\rightarrow\AA^1$. Let \begin{align*} \tilde w:= {} &\sum_{\check\rho} c_{\check\rho} z^{(n_{\check\rho},\check\psi(n_{\check\rho}))} \\ \tilde{\check w}:= {} &\sum_{\rho} c_{\rho}z^{(m_{\rho},\psi(m_{\rho}))} \end{align*} where the sums are over all one-dimensional rays $\check\rho$ in $\check \Sigma$ ($\rho$ in $\Sigma$), with $n_{\check\rho}$ ($m_{\rho}$) the primitive generator of $\check\rho$ ($\rho$). One checks that these are regular functions on $X_{\tilde\Sigma}$ and $X_{\tilde{\check\Sigma}}$ respectively. Note that by identifying $t$ with $z^{(0,1)}$, away from the fibre of $X_{\tilde\Sigma}\rightarrow\AA^1$ over $0$, we can view $\tilde w$ as giving a family $w_t$ of Landau-Ginzburg potentials on $X_{\Sigma}$, parameterized by $t$, with $w_t=\sum_{\check\rho} c_{\check\rho} t^{\check\psi(n_{\check\rho})}z^{n_{\check\rho}}$. \begin{examples} For a suitable choice of $\check\psi$, applying this construction to $(\check\sigma,\check\Sigma,\check\psi)$ yields the degeneration given in Rem.~\ref{mirrorfamily}. Applying this to $(\sigma,\Sigma_*,h_*)$ yields the one given in Rem.~\ref{remCYgeneralize}, possibly up to a change of coefficients. \end{examples} \subsection{Relation to conic bundles and the work of others} \label{conicbundle} Mirror symmetry can be studied locally by looking at a conic bundle of the shape \begin{equation} \label{coniceq} uv=f(w_1,\ldots,w_n)t \end{equation} in $\AA^2\times(\CC^*)^n\times\AA^1$ where $u,v,w_1,\ldots,w_n,t$ are coordinates of the factors in the given order. Here, $t$ is a family parameter.
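To fix ideas before turning to the general discussion, consider the simplest instance of \eqref{coniceq}; this is an illustration added here for concreteness, with the hypothetical choice $n=1$ and $f(w_1)=w_1-1$: \[ uv=(w_1-1)t. \] For fixed $t\neq 0$ the fibre is a smooth surface, a conic bundle over $\CC^*$ whose conics degenerate into two lines exactly over the point $w_1=1$, while the fibre over $t=0$ is the normal crossings union $\{u=0\}\cup\{v=0\}$; the total space is singular precisely along $u=v=t=w_1-1=0$, which lies over the discriminant $f=0$.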
One should think of $t=0$ as being a toric degeneration of a non-compact Calabi-Yau manifold given by a general fibre for $t\neq 0$ (assuming $f$ defines a smooth subvariety of $(\CC^*)^n$). For a fixed $t\neq 0$, the projection to $(\CC^*)^n$ yields a conic bundle with discriminant $f=0$, which is also the singular locus of the total space of the family. This local Calabi-Yau is an essential building block of the toric degenerations studied by Gross-Siebert in \cite{GS03}. In \cite{AAK}, the authors work out an understanding of mirror symmetry for varieties of general type from the point of view of Strominger-Yau-Zaslow torus fibrations and blow-ups. Their basic setup is a conic bundle as given above. It arises when one blows up $\AA^1\times(\CC^*)^n$ in $0\times\{f=0\}$. Let us understand how a conic bundle appears in our construction: In Prop.~\ref{checkWproper}, we have compactified $X_{\check\Sigma}$ to $X_{\bar{\check\Sigma}}$, a $\PP^1$-bundle over $\PP_\Delta$. To turn the resulting rational map $$\check w: X_{\bar{\check\Sigma}}\,\raise.5ex\hbox{$\scriptscriptstyle ---\!\!\!>$}\PP^1 $$ into a regular one, we have blown up the intersection of $\check w^{-1}(0)$ with the divisor at infinity. In a neighbourhood of the center of the blow-up, we have precisely the setup of \cite{AAK}: Indeed, recall from (\ref{Wcheckpotential}) that $$\check w = \sum_{m\in\Delta\cap M}c_m z^{(m,1)}.$$ We restrict this to $(\CC^*)^{d+2}$, then compactify to the $\PP^1$-bundle. The graph of $\check w$ is given by $$v_1u_0f-u_1v_0,$$ setting $f=\sum_{m\in\Delta\cap M}c_m z^{(m,0)}, u_1=z^{(0,-1)}, u_0=z^{(0,1)}$, $v_0,v_1$ being homogeneous coordinates on the target $\PP^1$ of $\check w$. Most importantly, in the neighbourhood at infinity given by setting $u_0=1$, we blow up the locus $f=u_1=0$ just as in \cite{AAK}. We have thus seen that in the resolution $\check w: \tilde X_{\bar{\check\Sigma}}\ra\PP^1$, there is no critical value in $\CC^*$ and the fibres over $0$ and $\infty$ have isomorphic singular loci with trivial monodromy on cohomology. Our construction sits at $0$ whereas the conic bundle is a neighbourhood of $\infty$. The potentials we gave in (\ref{Wpotential}) and (\ref{Wcheckpotential}) are not quite the right ones as expected from the SYZ mirror symmetry construction in \cite{Aur07}. Roughly speaking, Auroux's construction associates to a manifold $X$ with effective anticanonical divisor $D$ a mirror as a manifold $\check X$ with a potential $\check w :\check X\ra\CC$ constructed from counting certain Maslov index two holomorphic disks in $X$. In our setup where $X=X_{\Sigma}$ (resp. $X=X_{\check\Sigma}$), we implicitly use the toric boundary divisor as a choice for $D$. The naive potential we are using --- up to changing its coefficients --- only counts a subset of all holomorphic disks. This is because our potential can be viewed as given by a count of \emph{tropical} disks, see \cite{Gr10}. Tropical geometry, however, cannot see (degenerate) disks with components mapping into $D$. There are typically algebraic curves contained in $D$ with non-negative Chern number which can be glued to disks visible tropically to obtain additional Maslov index two disks contributing to the potential. These can contribute additional monomials. There doesn't seem to be an easy way to describe all the curves that contribute additionally to the potential directly from $\Sigma$ or $\check\Sigma$.
However, \cite{CPS11} provides an approach for obtaining what should be the correct potential in the context of the Gross-Siebert program. The authors choose a smoothing of $D$ which is reflected in the tropical manifold (see \S\ref{discLeg}) by ``pulling in singularities from infinity'' such that all unbounded rays become parallel, i.e., making the boundary in the discrete Legendre dual totally geodesic. This requires introducing singularities in the affine manifolds introduced in \S\ref{discLeg}. Once this is done, the techniques of \cite{GS11} can be applied to obtain a tropical description of what should be interpreted as Maslov index zero disks. Finally, \cite{CPS11} then demonstrates how to construct a well-defined potential from this data. Presumably, this will give the same potential as the one obtained in \cite{AAK}. It can be checked in examples that carrying this out in our setup does change the potentials but does not affect their critical loci and sheaves of vanishing cycles. This point of view will be explored in more detail elsewhere. \subsection{Singular fibres and deformations of the potential} \label{subsectionsingularfibres} In the study of mirror symmetry involving any Landau-Ginzburg model, there is always a question as to which singular fibres should contribute. Except in the case of the mirror of the cubic three-fold, we only make use of the zero fibre, whereas in the case of the cubic three-fold (see \S\ref{sectioncubic}), we in fact make use of all singular fibres. This raises the question of justifying these choices. We believe that these choices can be justified mathematically by incorporating the Landau-Ginzburg picture into the Gross-Siebert mirror symmetry program as discussed in \S\ref{discLeg}. If $t$ is the parameter for the family, with $t=0$ the degenerate fibre, and $w_t$ the $t$-dependent potential, then one can explore what happens to the critical values of $w_t$ as $t\rightarrow 0$. Those critical values which go to $\infty$ as $t\rightarrow 0$ are the ones which should be ignored. This is essentially the behaviour already observed in \cite{FOOO} in the case of mirrors of toric varieties. There, the authors work over the Novikov ring which implicitly disregards the unwanted critical values. To see this in practice in several of the explicit examples of this paper, consider first the example of the genus two curve. It is easiest to describe a natural toric degeneration in terms of the compactification described in Example~\ref{basicexamples2}, where one takes a degeneration of the form $xy-tz^2=uv-tz^3=0$, while we take $w=c_xx+c_yy+c_zz+c_uu+c_vv$ as before. One finds that $0$ is always a critical value, but the remaining critical values behave like order $t^{-1/2}$, and hence go to infinity as $t\rightarrow 0$. On the other hand, a natural degeneration for the mirror of the cubic three-fold is given by $x_0x_4u_1u_2u_3=ts^3$, $u_1+u_2+u_3=s$, and one checks that the critical values are $0$ and $\pm 6\sqrt{3t}$, which do not go to infinity. We will not be more precise here, as this will be explored in greater detail elsewhere. \subsection{Complete intersections in toric varieties} \label{completeint} A Landau-Ginzburg model for a complete intersection in a toric variety was already given in \cite{HW09} based on \cite{BB94}. It closely relates to the local models of the logarithmic singularities given in \cite{Rud10} based on \cite{GS10}.
Let $\PP_\Delta$ be a smooth projective toric variety, $D_1,...,D_k$ effective toric divisors with Newton polytopes $\Delta_1,...,\Delta_k$ and non-degenerate global sections $f_1,...,f_k$ of the corresponding line bundles. We require $f_1,...,f_k$ to be transversal, i.e., $(\partial_{x_j} f_i)$ has rank $k$ at each point of $\PP_\Delta$, where $x_j$ are local coordinates on $\PP_\Delta$ and the $f_i$ are viewed as regular functions using a local trivialisation of $\shO(D_i)$. Transversality of $f_1,...,f_k$ is implied if $\Delta_1,...,\Delta_k$ are transversal, i.e., their tangent spaces embed as a direct sum in $M_\RR$. We define the cone $$\sigma=\operatorname{{Cone}}(\Conv(\Delta_1\times\{e_1\},\ldots,\Delta_k\times\{e_k\}))$$ in $M_\RR\oplus\RR^k$ where $e_1,\ldots,e_k$ is the standard basis of $\RR^k$. Its dual cone is given by $$\check\sigma=\{(n,a_1,\ldots,a_k)\,|\,a_i\ge\varphi_{\Delta_i}(n)\}\ \subseteq\ N_\RR\oplus\RR^k.$$ Let $\check\Sigma$ denote the star subdivision of $\check\sigma$ along the cone generated by $e_1^*,...,e_k^*$. It is not hard to see that $X_{\check\Sigma}=\Tot(\shO_{\PP_\Delta}(-D_1)\oplus...\oplus\shO_{\PP_\Delta}(-D_k))$. Setting $u_i=z^{e_i}$, we find that $$\check w=\sum_i u_if_i$$ is a regular function on $X_{\check\sigma}=\Spec\CC[\sigma\cap (M\oplus\ZZ^k)]$ with Newton polytope $\hat\Delta=\Conv(\Delta_1\times\{e_1\},...,\Delta_k\times\{e_k\})$. We pull $\check w$ back to $X_{\check\Sigma}$. The smoothness of $$S=\crit(\check w)=V(f_1)\cap\cdots\cap V(f_k)$$ follows from the transversality of the $f_i$. We construct the mirror of $S$ as follows. Let $\Sigma_*$ be the star subdivision of $\sigma$ along the subcone generated by $$\hat\Delta'=\Conv\{\Delta'_1\times\{e_1\},...,\Delta'_k\times\{e_k\}\}$$ where $\Delta_i'$ denotes the convex hull of the interior lattice points of $\Delta_i$. Let $\Sigma$ be a refinement of $\Sigma_*$ given by a triangulation of $\hat\Delta$ such that each cone in $\Sigma$ is a standard cone. Since such a triangulation need not exist, more generally one needs to allow simplicial cones, see \S\ref{orbifoldsection}. However, with the assumption made, $X_\Sigma$ is smooth. Moreover, $\check w$ is now in the shape of (\ref{generalmirror2}). We define the potential $w$ on $X_\Sigma$ as in (\ref{generalmirror1}) and take the pair $$(\check S = \Sing(w^{-1}(0)),\shF_{\check S}=\phi_{w,0}\CC_{X_\Sigma}[1])$$ for the mirror dual of $S$. One can show that $\dim\check S=\dim S$, so that $(\check S,\shF_{\check S})$ is plausible as a mirror of $S$, in analogy with the hypersurface case. \subsection{A refinement of the general conjecture using orbifolds} \label{orbifoldsection} We state here a refined version of the conjecture made in the introduction concerning Landau-Ginzburg models defined using dual cones $\sigma$ and $\check\sigma$. Given a cone $\sigma\subseteq M_\RR$, one can define a fan $\Sigma_*$ refining $\sigma$ in a canonical way, by taking $\Sigma_*$ to be the cones over faces of the convex hull $\sigma^o$ of the set of points $\sigma \cap (M\setminus \{0\})$. The corresponding toric variety $X_{\Sigma_*}$ is not necessarily a resolution of $X_{\sigma}$; however, it is always Gorenstein. One can subdivide each bounded face of $\sigma^o$ into elementary simplices, i.e., simplices which do not contain any integral points of $M$ other than vertices. This refinement $\Sigma$, which is not unique, yields an orbifold resolution $X_{\Sigma}\rightarrow X_{\sigma}$ which is crepant over $X_{\Sigma_*}$.
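As a simple illustration of this construction (a standard two-dimensional example, included here for concreteness), take $\sigma=\operatorname{{Cone}}((1,0),(1,2))\subseteq M_\RR=\RR^2$, so that $X_\sigma$ is the $A_1$-singularity \[ X_\sigma=\Spec\CC[x,y,z]/(xy-z^2). \] The convex hull $\sigma^o$ of $\sigma\cap(M\setminus\{0\})$ has a single bounded face, the segment from $(1,0)$ to $(1,2)$, so $\Sigma_*$ consists of $\sigma$ together with its faces and $X_{\Sigma_*}=X_\sigma$ is Gorenstein but singular. Subdividing this bounded face at its interior lattice point $(1,1)$ into the two elementary segments $[(1,0),(1,1)]$ and $[(1,1),(1,2)]$ yields a fan $\Sigma$ whose two maximal cones are standard, and $X_\Sigma\rightarrow X_\sigma$ is the usual crepant (minimal) resolution. In general the elementary simplices need not be standard, so $X_\Sigma$ may acquire abelian quotient singularities, i.e., be an orbifold rather than a manifold.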
We can follow the same procedure for $\check\sigma$, hence obtain as in the introduction Landau-Ginzburg potentials \begin{align*} \label{refinedconjecture} w:X_{\Sigma}\rightarrow &\CC\\ \check w:X_{\check\Sigma}\rightarrow &\CC. \end{align*} We pose the following \begin{conjecture} There is a version of the sheaf of vanishing cycles for orbifolds, where each $H^{p,q}(Y^j)$ in Lemma~\ref{lemWspecseq},(3) is replaced by $H^{p,q}_{\op{orb}}(Y^j)$. Defining $h^{p,q}_{\op{orb}}(X_\Sigma,w)$ and $h^{p,q}_{\op{orb}}(X_{\check \Sigma},\check w)$ analogously to Cor.~\ref{cor_mainthm} and setting $n=\dim X_\Sigma$, we have $$h^{p,q}_{\op{orb}}(X_\Sigma,w) = h^{n-p,q}_{\op{orb}}(X_{\check \Sigma},\check w).$$ \end{conjecture} Assuming a renormalization flow argument works in the orbifold case, the last statement of the conjecture holds true in the Calabi-Yau case as was shown in \cite{BB96}. Note that in the particular case of this paper, where $\sigma$ is the cone over a polytope, the resolutions we use are special cases of the above resolutions. We believe, based on this and some other examples, that these special types of resolutions allow us to make the above statement using just the critical value $0$ on both sides. This holds for the case considered in this paper. On the other hand, using arbitrary total resolutions as in (\ref{generalmirror1}), (\ref{generalmirror2}) in some sense adds geometry that wasn't originally there. The simplest case of this conjecture which is not a Calabi-Yau situation and not already verified in this paper would be where $\sigma$ and $\check\sigma$ are both two-dimensional cones defining non-Gorenstein rational quotient singularities. We have verified the conjecture in several such explicit examples. \section{The cubic three-fold} \label{sectioncubic} This section can be viewed as being complementary to the main discussion of this paper. We shall consider one example of a mirror to a non-trivial Fano threefold, namely the cubic hypersurface in $\PP^4$. In the general type case considered in the bulk of the paper, a hypersurface gave rise to a Landau-Ginzburg mirror whose dimension is two more than that of the starting hypersurface. We then obtained a mirror of the correct dimension by passing to the critical locus of one fibre of the potential. In the case of a Fano hypersurface, we can't do this. The critical locus has a very different character, and it doesn't make sense to restrict to this critical locus. There are then several alternative approaches one can take. First, there are already a number of constructions of Landau-Ginzburg mirrors to Fano hypersurfaces in the literature, giving mirrors of the correct dimension. Second, we can use a different technique to reduce the dimension, namely Kn\"orrer periodicity, see \S\ref{Section_Knoerrer}. We examine the various approaches: (1) In \cite{Gi96}, A.\ Givental proposed a mirror given as the pair $(X,w)$ where $X$ is defined by the equations $x_0x_4u_1u_2u_3=1$, $u_1+u_2+u_3=1$ in $(\CC^*)^5$. Here $x_0,x_4, u_1,u_2,u_3$ are coordinates on this algebraic torus, and $w=x_0+x_4$. It will be helpful to partially compactify this as follows, replacing $X$ by the subvariety of $\AA^2\times\PP^3$ defined by $x_0x_4u_1u_2u_3=s^3$, $u_1+u_2+u_3=s$, where now $x_0,x_4$ are coordinates on $\AA^2$ and $u_1,u_2,u_3,s$ are coordinates on $\PP^3$. (2) In \cite{ILP11}, the authors proposed a ``weak Landau-Ginzburg model'' mirror for the cubic threefold, the function ${(x+y+1)^3 \over xyz}+z$ on the algebraic torus $(\CC^*)^3$.
This is in fact the same as $w$ in construction (1) after a change of coordinates: take $x=u_1/u_3$, $y=u_2/u_3$, $z=x_4$, and note that in (1) we have the equation $x_0x_4u_1u_2u_3=(u_1+u_2+u_3)^3$. Then $(x+y+1)^3/(xyz)=(u_1+u_2+u_3)^3/(u_1u_2u_3x_4)=x_0$, so in fact $w=x_0+x_4$. (3) The construction of this paper proposes a five-dimensional Landau-Ginzburg mirror to the cubic threefold. One begins with a polytope $\Delta$ in $\RR^4$ which is the standard simplex rescaled by a factor of $3$. The cone $\sigma$ over $\Delta$ in $\RR^5$ is generated by \[ v_0=(0,0,0,0,1), v_1=(3,0,0,0,1),\ldots,v_4=(0,0,0,3,1). \] The dual cone $\check\sigma$ is generated by \begin{equation} \label{dualconegenerators} (1,0,0,0,0),\ldots,(0,0,0,1,0), (-1,-1,-1,-1,3). \end{equation} The corresponding toric variety is desingularized by subdividing along the ray generated by $(0,0,0,0,1)$, and the argument of Prop.~\ref{checkWproper},(3) and Example~\ref{cohoS} shows that the Landau-Ginzburg model on this toric variety corresponds to a cubic three-fold. Dually, the toric variety $X_{\sigma}$ carries a potential $w=s_0+\cdots+s_4-s_5$, where $s_0,\ldots,s_4$ correspond to the vectors of the list \eqref{dualconegenerators} and $s_5$ to $(0,0,0,0,1)$. We note the choice of sign in front of $s_5$ is arbitrary, as in theory we could use any coefficients. This choice fits better with the previous models. To see the relationship between this five-dimensional model and the mirror cubic of (1) or (2), we proceed as follows. Take a partial crepant resolution of $X_{\sigma}$ by taking a subdivision $\P_*$ of $\Delta$ via a star subdivision at $v=(1,1,1,0,1)=(v_1+v_2+v_3)/3$. Thus the edges in this star subdivision with endpoint $v$ have as other endpoint $v_0, v_1, v_2, v_3$ and $v_4$. This gives a fan $\Sigma_*$ refining $\sigma$. The exceptional divisor $E$ of the partial resolution corresponding to the vertex $v$ is then described by a quotient fan $\Sigma_v$ in $\RR^4$ with one-dimensional cones generated by the vectors in the left column of the following table (eliminating the last coordinate): \\[2mm] \begin{tabular}{cc} \begin{minipage}{0.64\textwidth} $$ \begin{array}{ll|cccccc} &\multicolumn{1}{l}{}& x_0 & u_1 & u_2 & u_3 & x_4 & s\\ \cline{3-8} P_0 = & (-1,-1,-1,0)& 3&&&&&1\\ P_1 = & (2,-1,-1,0)& &3&&&&1\\ P_2 = & (-1,2,-1,0)& &&3&&&1\\ P_3 = & (-1,-1,2,0)& &&&3&&1\\ P_4 = & (-1,-1,-1,3)& &&&&3&1\\ \end{array} $$ \end{minipage} & \begin{minipage}{0.24\textwidth} \input{cubicfan_new.pstex_t} \end{minipage} \end{tabular} \\[4mm] The remainder of the table displays the vanishing order of certain sections of the anticanonical bundle of $X_{\Sigma_v}$ and regular functions (to be explained shortly) on the divisors corresponding to $P_0,...,P_4$. The diagram on the right shows the combinatorics of the fan $\Sigma_v$. It is in fact an incomplete fan with the three four-dimensional cones \[ \langle P_0,P_1,P_2,P_4\rangle,\quad \langle P_0,P_1,P_3,P_4\rangle, \quad \langle P_0,P_2,P_3,P_4\rangle. \] We can describe the corresponding toric variety $E\cong X_{\Sigma_v}$ as follows. Consider the Newton polyhedron for $-K_{E}$. This divisor is represented by the piecewise linear function which takes the value $1$ on each $P_i$. Note that $(1,0,0,0),(0,1,0,0),(0,0,1,0),(0,0,0,0)\in N$ all lie in the Newton polyhedron for this piecewise linear function, and hence represent sections of $-K_E$, which we write as $u_1,u_2,u_3$ and $s$ respectively. 
On the other hand, the monomials $x_0=z^{(-1,-1,-1,-1)}, x_4=z^{(0,0,0,1)}$ are in fact regular functions on $E$. Using $x_0,x_4,u_1,u_2,u_3,s$ to map $X_{\Sigma_v}$ to $\AA^2\times\PP^3$, one sees that the relation $x_0x_4u_1u_2u_3=s^3$ holds. One checks easily that this map is in fact an embedding. In particular, we can view $E$ as a partial compactification of the torus $x_0u_1u_2u_3x_4=1$ appearing in Givental's construction (1). As the notation suggests, the partial resolution coming from the decomposition $\P_*$ can be viewed as analogous to the partial resolution we used in \S\ref{section1} with the same notation. This latter decomposition was completely canonical, whereas in the Fano case, there is no such canonical resolution. To relate the five-dimensional Landau-Ginzburg model on $X_{\Sigma_*}$ to a three-dimensional one, restrict $w$ to any affine subset corresponding to a maximal cone of $\Sigma_*$ containing $v$, say the cone spanned by $v,v_0,v_1,v_2,v_4$. The dual cone is spanned by $(0,0,1,0,0)$, $(-1,-1,-1,-1,3)$, $(1,0,-1,0,0)$, $(0,1,-1,0,0)$, and $(0,0,0,1,0)$. In addition, the dual cone contains the integral point $(0,0,-1,0,1)$. In fact, these six integral points are generators of the monoid of integral points of the dual cone. If we take monomial functions $y,x_0,u_1,u_2,x_4,s$ corresponding to these six integral points, we note that we have the relation $x_0x_4u_1u_2=s^3$. Furthermore, we can write the potential \[ s_0+s_1+s_2+s_3+s_4-s_5=yu_1+yu_2+y+x_4+x_0-ys=y(1-s+u_1+u_2)+x_0+x_4. \] By Kn\"orrer periodicity (see Prop.~\ref{Knoerrer}), the associated category is equivalent to the category of the LG model on the locus $y=u_1+u_2+1-s=0$ with potential $x_0+x_4$. Note that $y=0$ gives an affine piece of the exceptional divisor $E$. Checking this description on each affine subset corresponding to a maximal cone, one finds that our five-dimensional LG model should be equivalent to the three-dimensional one given on the three-fold $V(u_1+u_2+u_3-s)\subseteq E\subseteq \AA^2\times\PP^3$ with $w=x_0+x_4$. Restricting this potential to the intersection of this three-fold with the big torus orbit on $E$ gives Givental's model in (1). Furthermore, we have now obtained the partial compactification of Givental's model described there, hence justifying this choice of partial compactification. Thus we will take as a starting point the model \[ (X=V(x_0x_4u_1u_2u_3-s^3, u_1+u_2+u_3-s)\subseteq \AA^2\times\PP^3, w=x_0+x_4). \] \medskip In fact, $(X,w)$ is not quite what we want for the mirror: we should take a crepant resolution of $X$. Once this is done, we can describe the sheaf of vanishing cycles and compute its cohomology. Given the fan $\Sigma_v$ describing the exceptional divisor $E$ above, we note that the toric divisor of $E$ corresponding to the ray generated by $P_i$ is \begin{equation} \label{Edivisors} \begin{array}{cl} s=x_i=0 & \hbox{if }i\in\{0,4\},\\ s=u_i=0 &\hbox{if }i\in \{1,2,3\}. \end{array} \end{equation} We can now refine the fan $\Sigma_v$ to resolve the toric singularities. Note that the hyperplane $u_1+u_2+u_3=s$ defining $X$ inside $X_{\Sigma_v}$ is $\Sigma_v$-regular. In particular, $X$ is disjoint from zero-dimensional toric strata of $X_{\Sigma_v}$. So to resolve $X$, we just need to choose a subdivision of $\Sigma_v$ which resolves $X_{\Sigma_v}$ away from the zero-dimensional strata. Hence, we do not need to specify the subdivision of the four-dimensional cones of $\Sigma_v$; rather, we will just subdivide the three-dimensional cones.
In addition, the hyperplane $u_1+u_2+u_3=s$ is disjoint from the one-dimensional toric strata corresponding to the three-dimensional cones generated by $P_0, P_i, P_j$ or $P_4, P_i, P_j$ with $i,j\in \{1,2,3\}$ as can be seen from (\ref{Edivisors}). Therefore, we only need to specify subdivisions of the remaining three three-dimensional cones which we do as given in Figure \ref{subdivision}. Any choice extending the subdivision of these three cones to a subdivision $\Sigma$ of $\Sigma_v$ will not affect the resolution of $X$. \begin{figure} \input{subdivision.pstex_t} \caption{The subdivision of three-dimensional cones in $\Sigma_v$ is induced by the above triangulation of the triangle with vertices $P_0,P_i$ and $P_4$, as depicted.} \label{subdivision} \end{figure} Now let $\tilde X$ be the proper transform of $X$ under the blow-up $\pi:X_{\Sigma}\rightarrow X_{\Sigma_v}$, and write $\tilde w:\tilde X \rightarrow\CC$ for the composition $w\circ\pi$. The smoothness of $\tilde X$ follows because it is $\Sigma$-regular and $X_\Sigma$ is smooth outside of zero-dimensional strata. The crepancy of $\tilde X\ra X$ follows from that of $X_\Sigma\ra X_{\Sigma_v}$ by the adjunction formula. There is an operation of $S_3$ on the open subvariety of $X_\Sigma$ given by the union of all torus orbits that intersect $\tilde X$. It is given on the fan by the permutation of the first three coordinates and lifts to $\tilde X$. The following summarizes the relevant geometry. \begin{lemma} \label{cubiclemma1} $\tilde w^{-1}(0)$ has six irreducible components, each with multiplicity one, which we shall write as $D_1$, $D_2$, $D_3$, $S_1$, $S_2$ and $W_0$. Here \begin{itemize} \item $D_i$ is a del Pezzo surface of degree $6$, and is the intersection of the toric divisor corresponding to the ray generated by $(P_0+P_4+P_i)/3$ with $\tilde X$. \item $S_i$ is a rational scroll blown up in three points, and is the intersection of the toric divisor corresponding to the ray generated by $(iP_0+(3-i)P_4)/3$ with $\tilde X$. \item $W_0$ is the proper transform of $w^{-1}(0)$, and is a non-singular quasi-projective variety. \end{itemize} Furthermore, these irreducible components intersect each other pairwise transversally, as follows: \begin{itemize} \item $\ell:=S_1\cap S_2=W_0\cap S_1=W_0\cap S_2$ is isomorphic to $\PP^1$. \item $Q_i:=D_i\cap\ell$ is a point. \item $D_i\cap S_j\cong\PP^1$ for each $i$, $j$. \item $D_i\cap W_0\cong\PP^1$ for each $i$. \item $D_i\cap D_j=\emptyset$ for $i\not=j$. \end{itemize} Thus in particular general points of $\ell$ are triple points of $\tilde w^{-1}(0)$. Let $\hat\pi:\widehat X\rightarrow \tilde X$ be the blow-up of $\ell$, $\widehat w=\tilde w\circ\hat\pi$. Then $\widehat w^{-1}(0)$ is normal crossings, with irreducible components $\widehat D_i$, $i=1,2,3$, $S_j$, $j=1,2$, $\widehat W_0$, and $E$. Here \begin{itemize} \item $\hat\pi:S_j\rightarrow S_j$ and $\hat\pi:\widehat W_0 \rightarrow W_0$ are isomorphisms. \item $\hat\pi: \widehat D_i\rightarrow D_i$ is the blow-up of $D_i$ at the point $S_1\cap S_2\cap D_i$. \item $E$ is isomorphic to $\PP^1\times\PP^1$, and appears with multiplicity three in $\widehat w^{-1}(0)$. \end{itemize} These components intersect as follows: \begin{itemize} \item $S_1\cap S_2=\emptyset$, $\widehat D_i\cap \widehat D_j=\emptyset$ for $i\not= j$, $\widehat W_0\cap S_i=\emptyset$. \item $S_i\cap\widehat D_j\cong\PP^1$. \item $E\cap S_1, E\cap S_2$ and $E\cap\widehat W_0$ are three disjoint lines of one of the rulings on $E$. 
\item $E\cap \widehat D_i$, $i=1,2,3$ are the exceptional curves of $\hat\pi:\widehat D_i\rightarrow D_i$ and give three disjoint lines on the other ruling of $E$. \end{itemize} See Figure \ref{W0figure} for a pictorial summary of this data. \end{lemma} \begin{figure} \input{W0figure.pstex_t} \caption{$\tilde w^{-1}(0)$ is depicted on the left and $\widehat w^{-1}(0)$ is depicted on the right. The grey lines on the left show components of the singular fibres of the scrolls. The unlabelled horizontal components on the left are $D_1,D_2,D_3$ and on the right are $\widehat D_1, \widehat D_2, \widehat D_3$.} \label{W0figure} \end{figure} \proof This is largely a somewhat tedious calculation, so we merely summarize the most important points. First, from $x_0=z^{(-1,-1,-1,-1)}$ and $x_4=z^{(0,0,0,1)}$, one sees that the only exceptional divisors of $X_{\Sigma}\rightarrow X_{\Sigma_v}$ which both intersect $X$ and on which the function $x_0+x_4$ vanishes identically are those divisors corresponding to the rays generated by the points $(P_0+P_4+P_i)/3$, $i\in\{1,2,3\}$, (on which both $x_0$ and $x_4$ vanish to order one) and $(jP_0+(3-j)P_4)/3$, $j\in\{1,2\}$ (on which one of $x_0,x_4$ vanishes to order one and one to order two). The intersection of these five divisors with $\tilde X=\pi^{-1}(X)$ are the components $D_i,S_j$ of $\tilde w^{-1}(0)$. Note that the toric divisor corresponding to $(P_0+P_4+P_i)/3$ maps surjectively to the $\PP^1$ stratum of $X_{\Sigma_v}$ given by $x_0=x_4=u_i=s=0$, and that $D_i$ is the inverse image of the point defined by $u_j+u_k=0$, $\{i,j,k\}=\{1,2,3\}$. One sees easily from Figure \ref{subdivision} that this fibre is isomorphic to $\PP^2$ blown up at $3$ points, as described in the statement of the lemma. Furthermore, the divisor corresponding to $(jP_0+(3-j)P_4)/3$ maps surjectively to the $\PP^2$ stratum of $X_{\Sigma_v}$ given by $x_0=x_4=s=0$, and $S_j$ is the inverse image of the line $u_1+u_2+u_3=0$ under this map. Again, from Figure \ref{subdivision}, it is not difficult to verify the description of $S_j$. Furthermore, again from the explicit subdivision given, one can verify the description of the intersections of the components $D_1,D_2,D_3,S_1,S_2$. To understand $W_0$, one must compute the proper transform of $w^{-1}(0)$, which must be studied on open subsets corresponding to each triangle in Figure \ref{subdivision}. We shall do this just for one crucial triangle, leaving it to the reader to check the others. Consider the triangle whose vertices are $(2P_0+P_4)/3, (P_0+2P_4)/3$ and $(P_0+P_4+P_i)/3$, generating a cone $\tau$ in $\Sigma$. Without loss of generality, take $i=1$. Note that $\tau^{\vee}$ is generated by $(-1,0,-1,-1)$, $(0,0,1,1)$, $(1,-1,0,0)$ and $\pm (0,-1,1,0)$. Now on the open subset of $X_{\Sigma_v}$ defined by the cone generated by $P_0, P_1$ and $P_4$, $u_2$ and $u_3$ are non-zero. Thus on this open set, we can trivialize $-K_E$ by setting $u_2=1$. We get $u_1=z^{(1,-1,0,0)}$, $u_3=z^{(0,-1,1,0)}$ and $s=z^{(0,-1,0,0)}$. We can then take \[ \alpha_1=z^{(-1,0,-1,-1)},\quad \alpha_2=z^{(0,0,1,1)},\quad \alpha_3=z^{(1,-1,0,0)},\quad u_3^{\pm 1} \] as coordinates on the open subset of $X_{\Sigma}$ defined by $\tau$. Note that the equation for $\tilde X=\pi^{-1}(X)$ is now \begin{equation} \label{u3elimeq} \alpha_3+1+u_3=\alpha_1\alpha_2\alpha_3 \end{equation} and the equation $x_0+x_4=0$ becomes \[ \alpha_1^2\alpha_2\alpha_3+\alpha_1\alpha_2^2\alpha_3u_3^{-1}=0. 
\] We can factor out $\alpha_1\alpha_2\alpha_3$, reflecting that the divisors $S_1,S_2$ and $D_1$ occur in $\tilde w^{-1}(0)$ with multiplicity $1$, leaving the equation for $W_0$ (multiplying by $u_3$, keeping in mind it is invertible, and then eliminating this variable using \eqref{u3elimeq}): \[ \alpha_1(\alpha_1\alpha_2\alpha_3-\alpha_3-1)+\alpha_2=0. \] One checks that this is non-singular, and intersects the other three irreducible components as claimed. Checking all charts, one finds the complete description of $\tilde w^{-1}(0)$ as given. To describe $\widehat w^{-1}(0)$, it is sufficient to note that the three divisors $S_1, S_2$ and $W_0$ meet each other mutually transversally along $S_1\cap S_2$ in $\tilde X$. As a consequence, when one blows up the curve $S_1\cap S_2$ in $\tilde X$, the proper transforms of these three divisors are now disjoint, and the exceptional divisor $E$, being a $\PP^1$-bundle over $S_1\cap S_2$, now contains three disjoint sections, and hence is isomorphic to $\PP^1\times\PP^1$. The remaining details follow easily. \qed \medskip We also need a properification of the map $\tilde w:\tilde X\rightarrow \CC$. To do so, begin by partially compactifying $\AA^2\times\PP^3$ by embedding it in $\AA^1\times\PP^1\times\PP^3$, with coordinates $w$ and $(y_0,y_1)$ on the $\AA^1$ and $\PP^1$ factors respectively. This embedding is given by \[ (x_0,x_4,(u_1,u_2,u_3,s))\mapsto (x_0+x_4, (x_0,1), (u_1,u_2,u_3,s)). \] Then the closure $X'_{\Sigma_v}$ of $X_{\Sigma_v}$ in $\AA^1\times\PP^1\times\PP^3$ is given by the equation \[ y_0(y_1w-y_0)u_1u_2u_3=s^3y_1^2. \] Let $X'$ be the hypersurface in $X'_{\Sigma_v}$ defined by the equation $u_1+u_2+u_3=s$. \begin{lemma} \label{cubiclemma2} \begin{enumerate} \item There is a resolution $\pi':\tilde X'\rightarrow X'$ extending the resolution $\pi:\tilde X\rightarrow X$. \item The map $\tilde w:\tilde X\rightarrow \CC$ extends to a proper map $\tilde w:\tilde X'\rightarrow \CC$. \item $D:=\tilde X'\setminus\tilde X$ is a normal crossings divisor, each component of which is mapped smoothly to $\CC$ under $\tilde w$. \item Every fibre of $\tilde w$ is a non-singular K3 surface except for $\tilde w^{-1}(0)$ and $\tilde w^{-1}(\pm 6\sqrt{3})$. The latter two fibres are K3 surfaces with one ordinary double point each. \item $\tilde w^{-1}(0)=W_0'\cup S_1\cup S_2\cup D_1\cup D_2\cup D_3$, where $S_i, D_j$ are as in Lemma \ref{cubiclemma1} and $W_0'$ is a compactification of $W_0$. \end{enumerate} \end{lemma} \proof For (1), the open subset of $X'$ where $y_0=1$ has the equation \[ (y_1w-1)u_1u_2u_3=(u_1+u_2+u_3)^3y_1^2 \] in $\AA^2\times\PP^2$, and one finds the following singular locus. There are three curves of $A_1$ singularities given by $y_1=u_i=u_j=0$. There are also three curves of $A_2$ singularities, given by the equations $y_1w-1=u_1+u_2+u_3=u_i=0$, $i\in \{1,2,3\}$. However, the latter curves are already contained in $X$, given here by $y_1\not=0$. Thus we may resolve $X'$ by using the resolution $\pi\circ\hat\pi$ for $X$ and in addition blowing up the three curves of $A_1$ singularities, which did not occur in $X$. (2) is clear, since $w$ agrees with the regular function $w$ on $X'$, which is clearly proper. One then takes $\tilde w=w\circ\pi'$. For (3), note that $X'\setminus X$ is given by setting $y_0=1$, $y_1=0$, giving the equation $u_1u_2u_3=0$ in $\AA^1(w)\times\PP^2$.
Furthermore, $\pi'$ blows up the curves $y_1=u_i=u_j=0$, and hence one sees easily that $\tilde X'\setminus \tilde X$ is $C_6 \times \AA^1(w)$, where $C_6$ is a cycle of six rational curves. The restriction of $\tilde w$ to this divisor is just given by projection onto $\AA^1(w)$, making the result clear. For (4), note that a fibre of $w:X'\rightarrow\CC$ is given by fixing $w$, in which case this fibre can be described as the zero set of \[ y_0^2u_1u_2u_3-y_0y_1wu_1u_2u_3+y_1^2(u_1+u_2+u_3)^3 \] in $\PP^1\times\PP^2$. The projection to $\PP^2$ describes this surface as a partial resolution of a double cover of $\PP^2$, branched over the discriminant, the latter given by \[ u_1u_2u_3(w^2u_1u_2u_3-4(u_1+u_2+u_3)^3)=0. \] This is the union of the three coordinate lines and a (for general $w$) smooth cubic for which $u_i=0$ is an inflectional tangent for each $i$. Suppose this cubic is indeed smooth. Then the double cover of $\PP^2$ branched over this locus has three $A_1$-singularities over the points $u_i=u_j=0$ and three $A_5$-singularities over the points $u_1+u_2+u_3=u_i=0$. However, the fibre of $w$ as described in $\PP^1\times\PP^2$ has partially resolved the $A_5$-singularities, as the fibre of the projection to $\PP^2$ over $u_1+u_2+u_3=u_i=0$ is a $\PP^1$, but the surface contains two $A_2$-singularities along this $\PP^1$, at $y_0=0$ and $y_1w-y_0=0$. The resolution $\tilde X'\rightarrow X'$ now resolves all remaining singularities minimally because it is crepant. Hence, a general fibre of $\tilde w$ is a minimal K3 surface. To identify the remaining singular fibres, we just need to know for what non-zero values of $w$ the cubic $w^2u_1u_2u_3-4(u_1+u_2+u_3)^3=0$ is singular. One checks easily that this occurs only when $w^2=4\cdot 3^3$, at the point $(u_1,u_2,u_3)=(1,1,1)$. Furthermore, this point is a node of the cubic. This gives the two singular fibres with ordinary double points. (5) is obvious. \qed \bigskip The main result is then the following theorem: \begin{theorem} \label{cubicmirrorhodge} Let $D=\tilde X'\setminus\tilde X$, $j^D:\tilde X\hookrightarrow \tilde X'$ the inclusion. Then \[ \HH^i(\phi_{\tilde w,\pm 6\sqrt{3}} {\bf R}j_*^D\CC_{\tilde X}) =\begin{cases} \CC & i=2\\ 0 &i\not =2 \end{cases} \] and \[ \HH^i(\phi_{\tilde w,0} {\bf R}j_*^D\CC_{\tilde X}) = \begin{cases} \CC^5 & i=1\\ \CC^2 & i=2\\ \CC^5 & i=3\\ 0 & \hbox{otherwise.} \end{cases} \] In particular, if $Y\subset\PP^4$ is a smooth cubic three-fold, then \[ \dim_{\CC} \bigoplus_{p\in\CC}\HH^i(\phi_{\tilde w,p}{\bf R}j_*^D \CC_{\tilde X}) =\begin{cases} h^{1,2}(Y)=h^{2,1}(Y)& \hbox{$i=1$ or $3$}\\ \sum_{j=0}^3 h^{j,j}(Y)&i=2. \end{cases} \] \end{theorem} \begin{proof} The description of $D$ in Lemma~\ref{cubiclemma2},(3) and Thm.~\ref{cycles_computed} tell us that $R^q\phi_{\tilde w,p} {\bf R}j_*^D\CC_{\tilde X}$ has support away from $D$, so we can instead calculate $R^q\phi_{\tilde w,p} \CC_{\tilde X'}$. In the case that $p=\pm 6\sqrt{3}$, we immediately see the claimed result from Lemma \ref{cubiclemma2},(4). We can now describe $\shF=\phi_{\tilde w,0}\CC_{\tilde X'}$ as follows\footnote{We omit the customary shift [1] here.}. We choose a retraction map $r:\tilde X_t'\rightarrow \tilde X_0'$ for $t\in\CC$ close to $0$. Then $\shF= \operatorname{{Cone}}_M(\CC_{\tilde X_0'}\rightarrow Rr_*\CC_{\tilde X_t'})$, where $\CC_{\tilde X_0'}\rightarrow Rr_*\CC_{\tilde X_t'}$ is the canonical map. We wish to compute the hypercohomology of $\shF$.
To describe $\shF$, let $\widehat{X}'\rightarrow\tilde X'$ be the blowup along $\ell$, as in Lemma \ref{cubiclemma1}, coming with the potential $\widehat{w}:\widehat{X}'\rightarrow\CC$. We note first that we can choose $r$ as a composition of $\hat r:\widehat{X}'_t\rightarrow \widehat{X}'_0$ and the blow-down $\widehat{X}'_0\rightarrow \tilde X_0'$. Further, we can make a base-change $\widehat{X}'\times_{\CC} \CC$ via the map $\CC\rightarrow\CC$ given by $w\mapsto w^3$, and then normalize. This produces a family $\overline w:\overline X'\rightarrow\CC$ where now $\overline X'$ has quotient singularities (what is called a $V$-manifold in the literature). The effects of this on the central fibre are easily seen to be as follows. One has \[ \overline w^{-1}(0)=W_0'\cup S_1\cup S_2\cup \widehat D_1\cup\widehat D_2\cup\widehat D_3\cup \overline E, \] where the first six divisors are identical to those appearing in $\widehat X_0'$, and the last one is a cyclic triple cover of $E$ totally ramified over $E\cap (W_0'\cup S_1\cup S_2\cup \widehat D_1\cup\widehat D_2\cup\widehat D_3)$. This allows us to choose $\hat r$ as a composition of a retraction $\bar r:\overline{X}_t'\rightarrow \overline{X}_0'$ followed by the projection $\overline{X}_0'\rightarrow\widehat{X}'_0$. The advantage of working with $\bar r$ is that this is relatively easy to understand. One can in fact use techniques from log geometry, namely the Kato-Nakayama construction. Given a log analytic space $(X,\shM_X)$, there is a topological space $X_{\log}$ along with a continuous map $\rho:X_{\log} \rightarrow X$. To define $X_{\log}$, we define a log point $P:=(\Spec \CC, \RR_{\ge 0}\times S^1)$, where the structure map $\alpha:\RR_{\ge 0}\times S^1\rightarrow \CC$ is given by $\alpha(h,e^{i\theta})=he^{i\theta}\in\CC$. Then as a set, $X_{\log}=\Hom(P,X)$. There is an obvious map $\rho:X_{\log}\rightarrow X$ taking a morphism $P\rightarrow X$ to its image. There is a natural topology on $X_{\log}$. If $f:X\rightarrow Y$ is a morphism of log schemes, there is the obvious map $f_{\log}:X_{\log}\rightarrow Y_{\log}$, which is also continuous, so the construction is functorial. See \cite{KN99}, \cite{NO10} for details. We use this as follows. If one puts the divisorial log structure on $\overline{X}'$ given by the divisor $\overline{X}_0'\subseteq\overline{X}'$, and on $\CC$ the divisorial log structure given by $0\in \CC$, the map $\overline w:\overline{X}'\rightarrow \CC$ becomes log smooth in a neighbourhood of $\overline{X}'_0$. The space $\CC_{\log}$ is just the real oriented blowup of $\CC$ at the origin. We then have a diagram \[ \xymatrix@C=30pt { \overline{X}'_{\log}\ar[r]^{\rho'}\ar[d]_{\overline w_{\log}} &\overline{X}'\ar[d]^{\overline{w}}\\ \CC_{\log}\ar[r]_{\rho}&\CC } \] The map $\rho'$ defines a homeomorphism $(\rho')^{-1}(\overline X'\setminus \overline X'_0)\rightarrow\overline X'\setminus \overline X'_0$ because the log structure is only non-trivial on $\overline X'_0$. Furthermore, if $U$ is an open neighbourhood of $0\in\CC$ which does not contain any non-zero critical values of $\overline w$, then the restriction of $\overline w_{\log}$ to $\overline w_{\log}^{-1}\rho^{-1}(U)$ is a topological fibre bundle by \cite{NO10}. In particular, all fibres of $\overline w_{\log}$ over $\rho^{-1}(U)$ are homeomorphic.
For $t\in\rho^{-1}(U\setminus\{0\})$, $\overline w^{-1}_{\log}(t)$ is homeomorphic via $\rho'$ to $\overline X'_{\rho(t)}$, while for $t_0\in \rho^{-1}(0)$, we can take the map \[ \rho'|_{\overline w^{-1}_{\log}(t_0)} :\overline w^{-1}_{\log}(t_0)\rightarrow \overline X'_0 \] to be the desired retraction $\bar r$. The advantage of this description is that it is now easy to describe the topology of this retraction. Indeed, $\bar r$ is an isomorphism over the smooth points of $\overline X'_0$, has fibre $S^1$ over the double points of $\overline X'_0$, and fibre $S^1\times S^1$ over the triple points of $\overline X'_0$. The inverse image under $\bar r$ of an irreducible component $F$ of $\overline X'_0$ can be viewed as the real oriented blow-up of $\Sing(\overline X'_0)\cap F$ inside $F$. Taking $r$ to be the composition of $\bar r$ with the projections $\overline X'_0\rightarrow \widehat X'_0\rightarrow \tilde X_0'$, we can describe the fibres of $r$ as follows. First, $r$ is an isomorphism away from $\Sing(\tilde X_0')$. The fibre of $r$ over a double point of $\tilde X_0$ is $S^1$. Next, we have the set $\ell=S_1\cap S_2\cap W_0\subseteq \tilde X_0$, a copy of $\PP^1$, with three special points $Q_i=\ell\cap D_i$, $i=1,2,3$. Then the fibre of $r$ over a point of $\ell^o=\ell\setminus \{Q_1,Q_2,Q_3\}$ can be described as follows. We have $\overline{E}\subseteq \overline X_0'$, and the projection to $\tilde X_0'$ yields an elliptically fibred K3 surface $f:\overline{E}\rightarrow \ell$. A fibre over a point of $\ell^o$ is a triple cover of $\PP^1$ branched at three points. Since $\bar r^{-1}(\overline{E})$ is the real oriented blow-up of $\overline{E}$ along the ramification locus of the projection $\overline{E}\rightarrow \widehat E$, one sees that the fibre of $r$ over a point of $\ell^o$ is the real blow-up of an elliptic curve at three points. Finally, $r^{-1}(Q_i)$ can be described from this point of view as $S^1\times M$, where $M$ is a real blow-up of a $\PP^1$ at three points. Given this description, one can describe $R^1r_*\CC$ and $R^2r_*\CC$ as follows. Note that for a point $x\in\ell^o$, $H^1(r^{-1}(x),\CC) \cong\CC^4$, with $\CC^2$ coming from the image of $H^1(T^2,\CC)$ under the embedding $r^{-1}(x)\hookrightarrow T^2$. The other $\CC^2$ comes from removing three disks. As $x$ varies, one can then describe $R^1r_*\CC|_{\ell^o}=R^1f_*\CC\oplus\CC^2$. Note that because of monodromy in the elliptic fibration $f$, the pushforward of $R^1f_*\CC|_{\ell^o}$ across $\ell$ is just the extension by zero of $R^1f_*\CC|_{\ell^o}$. From this, one finds one can write \[ R^1r_*\CC=\shG\oplus R^1f_*\CC, \] where $\shG$ has stalks $\CC^2$ on $\ell^o$, $\CC$ at all double points of $\tilde X_0'$, and stalk $H^1(S^1\times M,\CC)=\CC^3$ at $Q_1,Q_2,Q_3$. The sheaf $\shG$ is constant on each connected component of $\Sing(\tilde X_0')\setminus \{Q_1,Q_2,Q_3\}$, and the generization maps on stalks $\shG_{Q_i}\rightarrow \shG_{\eta}$ for $\eta$ a generic point of $\Sing(\tilde X_0')$ are seen to be surjective. The sheaf $R^2r_*\CC$ is much easier to describe: it is supported on the set $\{Q_1,Q_2,Q_3\}$ with stalk $H^2(M\times S^1)\cong\CC^2$ at each of these points. It is then not difficult to work out the $E_2$ term for the hypercohomology spectral sequence $E_2^{pq}=H^q(\shH^p(\shF))\Rightarrow \HH^n(\shF)$. 
Indeed, $H^0(R^1r_*\CC)=H^0(\shG)=\CC^5$, since a section of $\shG$ is entirely determined by its stalks at $Q_1,Q_2,Q_3$, and the generization of these stalks at the generic point of $\ell$ must agree. Then we have an exact sequence \[ 0\rightarrow \shG \rightarrow \CC_{\ell}^2\oplus \bigoplus_{i=1}^9 \CC_{d_i} \rightarrow \bigoplus_{i=1}^3 \CC_{Q_i}^2\rightarrow 0, \] where $d_1,\ldots,d_9$ are the closures of the irreducible components of the double point locus, and $\CC_S$ denotes the constant sheaf $\CC$ on the variety $S$. From $H^0(\shG)=\CC^5$, one obtains from the long exact cohomology sequence that $H^1(\shG)=0$ and $H^2(\shG)\cong\CC^{11}$. Finally, one can check from the Leray spectral sequence for $f:\overline{E}\rightarrow \ell$ and the fact that $\overline{E}$ is a singular K3 surface with $9$ $A_2$-singularities that $H^1(R^1f_*\CC)=\CC^2$ and $H^2(R^1f_*\CC)=0$. Putting this together, we obtain the $E_2$ term \[ \xymatrix@C=30pt { H^0(R^2r_*\CC)=\CC^6\ar[rrd]^d&0&0\\ H^0(R^1r_*\CC)=\CC^5&H^1(R^1r_*\CC)=\CC^2&H^2(R^1r_*\CC)=\CC^{11}\\ 0&0&0 } \] Finally, we need to show the map $d$ is injective. First note that this map coincides with the same map in the Leray spectral sequence for $r$, and hence $\ker d$ is the image of the natural map $H^2(\tilde X_t,\CC)\rightarrow H^0(R^2r_*\CC)$. This map is dual to the natural map \begin{equation} \label{natmap} \bigoplus_{i=1}^3 H_2(r^{-1}(Q_i),\CC)\rightarrow H_2(\tilde X_t,\CC). \end{equation} But $H_2(r^{-1}(Q_i),\CC)$ is generated by the connected components of the boundary of $r^{-1}(Q_i)$ (with one relation), and it is easy to see that these cycles are bounded by the closure of the sets $r^{-1}(d_i\setminus \{Q_1,Q_2, Q_3\})$, for various $i$. Thus the map \eqref{natmap} is zero, from which we conclude that $\ker d=0$. So the $E_3=E_{\infty}$ term is \[ \xymatrix@C=30pt { 0&0&0\\ \CC^5&\CC^2&\CC^5\\ 0&0&0 } \] This shows the remaining claims, with the cohomology of a cubic threefold being well-known. \end{proof} It has been conjectured in \cite{Ka10} and verified for a list of cases that a three-dimensional Fano manifold is non-rational if its resolved mirror dual has a fibre with non-isolated singularities and non-unipotent monodromy. See also \cite{KP09}. A general cubic being non-rational by a theorem of Clemens-Griffiths \cite{CG72}, we verify this conjecture for the cubic by computing the monodromy in cohomology of the family $\tilde w:\tilde X'\rightarrow\AA^1$ around the zero fibre. \begin{proposition} Let $t_0\in\CC$ be a point near $0$, and let $T:H^2(\tilde w^{-1}(t_0),\CC) \rightarrow H^2(\tilde w^{-1}(t_0),\CC)$ be the monodromy operator associated to a counter-clockwise loop around the origin based at $t_0$. Then $T^3=I$, so that the eigenvalues of $T$ are third roots of unity. Furthermore, eigenvalue $1$ has multiplicity $20$, and the two primitive third roots of unity each have multiplicity one. \end{proposition} \begin{proof} Let $\overline{w}:\overline{X}'\rightarrow\CC$ be as in the proof of Theorem \ref{cubicmirrorhodge}, and let $Y=\overline{w}^{-1}(0)$. Then $\overline{X'}$ is a $V$-manifold and $Y$ is a $V$-normal crossings divisor in $\overline{X}'$. Recall $Y$ has seven irreducible components, six isomorphic to $W_0'$, $S_1$, $S_2$, $D_1$, $D_2$, $D_3$ respectively, and the seventh being $\overline{E}$, a cyclic triple cover of $E\cong \PP^1\times\PP^1$ totally ramified over a union of six lines, three in each ruling.
Note $\overline{E}$ is a K3 surface with $9$ $A_2$-singularities, and thus $H^2(\overline{E},\CC)\cong \CC^4$, while $H^0(\overline{E},\CC)\cong H^4(\overline{E},\CC)\cong\CC$. Since all components of $Y$ are reduced, the monodromy $\tilde T=T^3$ of $\overline{X}'\rightarrow\AA^1$ around the origin must be unipotent, but since $Y$ contains a K3 component, $\overline{X}' \rightarrow\AA^1$ must be birationally equivalent to a type I degeneration of K3 surfaces. Thus $\tilde T=I$, so we see $T^3=I$. Let $\lambda:Y\rightarrow Y$ be a generator of the $\ZZ_3$-action on $Y$ arising from the construction of $Y$ as the normalization of a cyclic triple cover. Then $\lambda$ acts component-wise on $Y$, hence acts on each $Y^{r}$, and according to the proof of (2.13) of \cite{St76}, the action of $\lambda^*$ on the weight spectral sequence \[ E_1^{-r,q+r}=\bigoplus_{k\ge \max(0,-r)} H^{q-r-2k} (Y^{2k+r+1},\CC)(-r-k)\Rightarrow H^q(\tilde w^{-1}(t_0),\CC) \] coincides with the action of $T$. However, one sees easily that this action is trivial except for the action on the contribution $H^2(\overline E,\CC)$, which appears in $E_1^{0,2}$. Since the quotient of $\overline E$ by the $\ZZ_3$-action is $\PP^1\times\PP^1$, the action of $\lambda^*$ on $H^2(\overline E,\CC)$ must have only a two-dimensional invariant subspace, hence $\lambda^*$ has in addition two eigenvalues being primitive third roots of unity, each appearing with multiplicity one. From this one concludes the same is true for the action of $T$ on $H^2(\tilde w^{-1}(t_0),\CC)$. \end{proof} \begin{appendix} \section{A binomial identity} We include the proof of a binomial identity which we use in \S\ref{section2} and \S\ref{section4}. \begin{proposition} \label{binomident} For $n,k,m\in\ZZ_{\ge 0}$, we have \begin{enumerate} \item $$\begin{pmatrix}n\\k\end{pmatrix}=(-1)^m\sum_{i\ge 0}(-1)^i \begin{pmatrix}i\\m\end{pmatrix} \begin{pmatrix}n+m+1\\k+1+i\end{pmatrix}$$ \item (Vandermonde's identity) $$\begin{pmatrix}m+n\\k\end{pmatrix}=\sum_{i\ge 0} \begin{pmatrix}m\\i\end{pmatrix} \begin{pmatrix}n\\k-i\end{pmatrix}$$ \end{enumerate} \end{proposition} \begin{proof} (2) is standard. To see (1), starting with $\begin{pmatrix}n\\k\end{pmatrix}$ and $i=0$, the iterated insertion of $\begin{pmatrix}n\\k+i\end{pmatrix}=\begin{pmatrix}n+1\\k+1+i\end{pmatrix}-\begin{pmatrix}n\\k+1+i\end{pmatrix}$ yields $\begin{pmatrix}n\\k\end{pmatrix}=\sum_{i\ge 0}(-1)^i\begin{pmatrix}n+1\\k+1+i\end{pmatrix}.$ This being the base case for $m=0$, the general case follows by induction by inserting the base case into the induction hypothesis; indeed, \begin{align*} \ \ &(-1)^m\sum_{i\ge 0}(-1)^i \begin{pmatrix}i\\m\end{pmatrix} \begin{pmatrix}n+m+1\\k+1+i\end{pmatrix}\\ = {} &(-1)^m\sum_{i\ge 0}(-1)^i \begin{pmatrix}i\\m\end{pmatrix} \sum_{j\ge 0}(-1)^j\begin{pmatrix}n+m+2\\k+2+i+j\end{pmatrix}\\ = {} &(-1)^{m+1}\sum_{i'=1+i+j\ge 0}(-1)^{i'} \sum_{j\ge 0}\begin{pmatrix}i'-j-1\\m\end{pmatrix} \begin{pmatrix}n+(m+1)+1\\k+1+i'\end{pmatrix}, \end{align*} from which the assertion follows by noting that $\sum_{j=0}^{i'-1}\begin{pmatrix}i'-j-1\\m\end{pmatrix} =\begin{pmatrix} i'\\ m+1\end{pmatrix}$. \end{proof}
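As a quick numerical sanity check of identity (1), the following minimal Python sketch (standard library only; purely illustrative, not part of the proof) verifies it over a range of small parameters:

\begin{lstlisting}[basicstyle=\ttfamily\small]
# Sanity check of identity (1) for small n, k, m.
from math import comb

def rhs(n, k, m):
    # (-1)^m * sum_i (-1)^i C(i,m) C(n+m+1, k+1+i);
    # terms vanish once k+1+i > n+m+1, so the range below suffices.
    return (-1) ** m * sum(
        (-1) ** i * comb(i, m) * comb(n + m + 1, k + 1 + i)
        for i in range(n + m + 2))

assert all(rhs(n, k, m) == comb(n, k)
           for n in range(8) for k in range(8) for m in range(4))
\end{lstlisting}
\end{appendix}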
\section{Abstract}\label{abstract} In the spirit of software engineering, I have applied the separation of concerns principle by creating a separate document for the abstract, \href{http://drive.google.com/open?id=1lTxG_WOtMI6IbK3NU-cFe6LAKrQKN53Jn0MwpE7m3Xg}{\emph{Big Data Processing in Apache Spark (Abstract)}}, since we have to submit it separately. I'm going to work on the actual paper organization and content before Thursday. \section{Overview}\label{overview} Our goal is to understand and articulate the characteristics of cloud-based dataflow processing in the context of HPC analysis tasks, implement sample apps using Spark, Pegasus, and MPI, and evaluate their performance on both a supercomputing cluster and a cloud. We will compare XYZ written in Apache Spark vs. MPI (a traditionally used messaging middleware). The benchmarks are chosen based on typical computation/communication patterns found in HPC. We're also looking at Python vs. Scala and whether there are any performance differences. Some words about tuning garbage collection. These folks seem to have done some good legwork on same\footnote{https://databricks.com/blog/2015/05/28/tuning-java-garbage-collection-for-spark-applications.html}. Also simply describe differences between implementations (e.g. lines of code). Out of core: Implicit in Spark NOTE: we need to clarify what we mean by \emph{dataflow}. \begin{itemize} \item \begin{quote} Use in compilers: \href{https://en.wikipedia.org/wiki/Data-flow_analysis}{\emph{https://en.wikipedia.org/wiki/Data-flow\_analysis}} \end{quote} \item \begin{quote} Dataflow architecture, https://en.wikipedia.org/wiki/Dataflow\_architecture \end{quote} \end{itemize} \section{Project Resources}\label{project-resources} \textbf{Github:} \href{http://github.com/cchriste/dataflow}{\emph{http://github.com/cchriste/dataflow}} \textbf{Slack:} \href{https://dataflowanalysis.slack.com}{\emph{https://dataflowanalysis.slack.com}} \textbf{Google Docs:} \href{https://drive.google.com/folderview?id=0BxLkEMNd9q6FfmpRaFZXSGlPc0JsSDdVdndCUm83SzN6UnlLVEk5T3ZsZmJ0VEVGREtNTkE\&usp=sharing}{\emph{dataflow\_analysis}} \textbf{Dropbox:} \href{https://www.dropbox.com/sh/odsd9uxe1elbhbf/AAD8J1TGuFY1VHJl2oPdm5E0a?dl=0}{\emph{spark\_hpc}} My dropbox id is gkt@cs.luc.edu \section{Dataflows}\label{dataflows} \subsection{Components/Pipeline Stages}\label{componentspipeline-stages} \begin{itemize} \item \begin{quote} map \end{quote} \item \begin{quote} flatmap \end{quote} \item \begin{quote} reduce \end{quote} \item \begin{quote} reducemap (ex: aggregation for file i/o) \end{quote} \item \begin{quote} other? \end{quote} \end{itemize} \subsection{Multistage}\label{multistage} The result of one computation feeds into the next, for example a \emph{map} into another \emph{map}. \subsection{Streaming}\label{streaming} Computation performed over a moving window. \subsection{Evaluating Performance}\label{evaluating-performance} Assessing the various components based on their rates of production and consumption. \subsection{To DAG or not to DAG}\label{to-dag-or-not-to-dag} Some dataflows can have feedback. \section{Dataflow Processing Frameworks}\label{dataflow-processing-frameworks} \subsection{Spark}\label{spark} Spark is a general purpose cluster computing system similar to Hadoop. It provides a new data abstraction that facilitates fast sharing and history-based resilience, as well as an expanded set of data transformations and actions, in addition to traditional map-reduce.
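As a small, hedged illustration of this transformation/action model (not taken from our codebase), the following PySpark fragment records a \texttt{map} transformation lazily and only executes work when the \texttt{reduce} action is invoked:

\begin{lstlisting}[basicstyle=\ttfamily\small]
# Minimal PySpark sketch: map is a lazy transformation; nothing
# executes until the reduce action on the last line is invoked.
from pyspark import SparkContext

sc = SparkContext(appName="transform-action-demo")
rdd = sc.parallelize(range(1000000), 8)    # 8 partitions
doubled = rdd.map(lambda x: 2 * x)         # recorded, not yet run
print(doubled.reduce(lambda a, b: a + b))  # action triggers execution
\end{lstlisting}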
In addition, Spark provides a streaming processing abstraction, as well as bindings for languages such as R and libraries such as GraphX and MLlib. Spark is \emph{lazy}, and this philosophy underlies much of its design. Computations will not be performed until their result is requested and data will not be consolidated or repartitioned unless explicitly requested. \subsubsection{Running}\label{running} Basics of job submission and execution. \href{https://docs.google.com/document/d/1fq3z1-oEcCBhjKA__vl8LsVm3-uArJik7YFLYvYoN1Y/edit?usp=sharing}{\emph{running spark on cooley}} \href{https://docs.google.com/document/d/1lyzEHap1EznES0DKiMa3fsalepqPD6vnbZqSEMjCVPQ/edit?usp=sharing}{\emph{running spark on magellan}} \subsubsection{Resilient Distributed Data}\label{resilient-distributed-data} When a dataset is loaded by Spark, it becomes an immutable RDD. This abstraction allows the data to be treated as a whole when in fact it may be partitioned across many nodes of a distributed system. Each partition also contains the history of transformations with which it was created, called a \emph{lineage}, with which the partition can be recomputed if necessary, such as in the case of a node failure. This lineage is a more compact form of resiliency compared to the data duplication utilized by Hadoop. An RDD is split into \emph{partitions} whose size is at minimum the size of a block on whatever storage device is being utilized (e.g. 32MB). Each partition is further divided into \emph{records}, typically a single line for text processing, or an entire binary file for binary data. Binary data records can be explicitly specified. Large binary files will be broken down into multiple partitions only if these partitions can themselves be divided into records. Spark distributes the blocks/data among the workers, if it does at all. o is RDD different than Tachyon or do RDD blocks get written using Tachyon (or HDFS, etc)? o what data models does it support? o how does it handle fault tolerance? o what about transformation/views? (rows vs columns, more generally chunks/blocks) o when/how is the data replicated? \paragraph{Storage Levels}\label{storage-levels} RDDs can be stored in memory, on disk, a combination of the two, or off heap (using Tachyon). \subparagraph{MEMORY\_ONLY} Partitions that do not fit in memory will be recomputed from their history when they are needed. \subparagraph{DISK\_ONLY}\label{diskux5fonly} All partitions are stored on disk, just like Hadoop. RDD partitions can be optionally replicated across up to two nodes. An RDD partition must be small enough to fit in memory on a single node. \paragraph{Persistence}\label{persistence} RDDs can be persisted in memory for fast access between successive operations. This is particularly important for iterative algorithms that reuse some intermediate result. An RDD will be persisted according to its specified storage level. Caches are cleared according to a least recently used policy, or manually using RDD.unpersist(). \paragraph{Input}\label{input} In order to get binary data into Spark, we use the binaryFiles or binaryRecords SparkContext functions. Both will read a directory filled with binary files. The first will create one record per file, while the second will partition files into records of an explicitly stated number of bytes. For example, the following line will create an RDD with partitions containing records of size float{[}3{]}: vecs = sc.binaryRecords(sys.argv{[}1{]}, 12) The binary data is read into a string.
If you want to use the data as a numpy array, the numpy.fromstring function can be used: arr = np.fromstring(vec, dtype=np.float32) This function can be used by RDD.map in order to convert the string records into numpy records: data = vecs.map(parseVector) \subsubsection{Streaming}\label{streaming-1} How to run spark in streaming mode. \subsubsection{Spark and Tachyon}\label{spark-and-tachyon} How do they work together? Are the performance characteristics different? Is it the same as ceph on Magellan? (ask Dan Olsen about ceph) \subsection{Pegasus}\label{pegasus} todo \subsection{MPI}\label{mpi} todo: reference implementation \section{System Architectures}\label{system-architectures} \subsection{Cooley}\label{cooley} 225 Tflop cluster with 40TB memory, shared disk + 355TB local disk per node, ... \subsection{Magellan}\label{magellan} 7000 node cloud, no shared disk, per-node provisioning flexibility, \ldots{} \section{Sample Applications}\label{sample-applications} \subsection{APS}\label{aps} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \begin{quote} analyze image stacks from the Advanced Photon Source \end{quote} \item \begin{quote} save results as IDX \end{quote} \end{enumerate} Our first sample application comes from a materials scientist who utilizes the Advanced Photon Source (APS) to scan the construction of a polymer material that could be used for nanoscale circuit printing. We received an image stack from the APS as well as a sample Matlab script and reproduced the results using the Spark framework. Details can be found here. \subsection{Bioinformatics}\label{bioinformatics} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \begin{quote} look for patterns in genome streams \end{quote} \item \begin{quote} and stuff \end{quote} \end{enumerate} \subsection{EVS}\label{evs} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item \begin{quote} look for patterns in genome streams \end{quote} \item \begin{quote} and stuff \end{quote} \end{enumerate} \section{Evaluation}\label{evaluation} \subsection{Pipeline Benchmarks}\label{pipeline-benchmarks} \begin{itemize} \item \begin{quote} map \end{quote} \item \begin{quote} flatmap \end{quote} \item \begin{quote} reduce \end{quote} \item \begin{quote} reducemap (ex: aggregation for file i/o) \end{quote} \item \begin{quote} other? \end{quote} \end{itemize} \subsection{Application Benchmarks}\label{application-benchmarks} - data sizes - data rates - \# of stages - system scale - M:N coupling (in flatmaps/reducemaps, fan-in/fan-out degrees) - hw - spilling? - magellan vs cooley (shared storage vs node-level storage) \section{Meeting Notes}\label{meeting-notes} \emph{17Jun} 1. \href{https://docs.google.com/document/d/1fq3z1-oEcCBhjKA__vl8LsVm3-uArJik7YFLYvYoN1Y/edit?usp=sharing}{\emph{spark on cooley}} - verify number of threads is actually correct, that we can control/monitor threading o identify multiple threads/worker - try to load binary data (aps data or bio data) o partition a huge binary file o load a directory of smaller binary files - use /projects/SDAV/cam directory, which is persistent note: file(s) must be accessible to all workers, either through shared path or hdfs/tachyon/etc - when multiple files are read (or one big file is split) where does the data go? - dissect RDD (esp in context of binary blob): o is RDD different than Tachyon or do RDD blocks get written using Tachyon (or HDFS, etc)? o what data models does it support? o how does it handle fault tolerance? o what about transformation/views?
(rows vs columns, more generally chunks/blocks) o when/how is the data replicated? - explore pipelines 1b. \href{https://docs.google.com/document/d/1lyzEHap1EznES0DKiMa3fsalepqPD6vnbZqSEMjCVPQ/edit?usp=sharing}{\emph{spark on magellan}} - ceph (distributed file system looking thing, ask Dan Olsen about it) - ... 2. spark streaming, pipelines articulate pipeline stages (benchmarks): - map - flatmap - reduce - reducemap (e.g. aggregation for file i/o) - ? once we introduce these, it makes sense to talk about rates of production and consumption for each pipeline component, and how we can make them work together sanely ("frame dropping", etc) - add another stage, now rates of production/consumption matter evaluate (application benchmarks): - data sizes - data rates - \# of stages - system scale - M:N coupling (in flatmaps/reducemaps, fan-in/fan-out degrees) - hw - spilling? - magellan vs cooley (shared storage vs node-level storage) 3. plan for summer - Try a real problem on both cooley and magellan, compare pros and cons. - ideas: aps data analysis, biology data, and even convert it to idx at the end and attach a visus viewer. - could use an existing spark analysis (such as their machine learning module), or create our own. Understanding the overall system and tradeoffs and how cloud-iness can help hpc will be a good contribution. - could also have an mpi app that does the same thing to compare performance week 3-7 - explore and code week 8-9 - benchmarking (aps, bioinformatics, or both) week 10 - write report/paper w3: - google doc - pipelines/streaming investigation - spark running on Magellan - binary data importing and RDD dissection w4: - start APS/BIO - pipelining/streaming benchmark - explore pegasus a little. We know the project lead (Ewa Deelman) so we can ask questions. This is mainly to have a notion of related work. - rdd vs tachyon? w5: - refine aps/bio app - identify analyses w6: - 2nd app (either the aps or the bio) - (spillover from prev weeks) w7: - finish and have working apps \emph{15Jun} Pegasus mpi*, Legion, Uintah task graph, Create a simulator using mpi/spark, try different dataflows Different "feeds" produce at different rates, Have different "coupling" characteristics (\#consumers vs producers) Different primitives (join, scatter, etc) Streaming (small chunks), dataflow (overall processing task) Can we create a benchmark to characterize these types of *in-memory* pipelines using these building blocks with their given characteristics? Including not only dag but also feedback loops. Challenge is to separate performance from effectiveness of pipeline. We can try this using both spark and Pegasus using Cooley and Magellan. Magellan has a lot of flexibility in configuration, so we could even use it to test heterogeneous frameworks. We're going to let some task graph scheduler decide how to schedule the Dataflow (default spark and Pegasus ways). We want to explore dynamic modification of pipelines (eg add a new processing node) and speculate as to what Api would be required to facilitate this (ex: pause upstream then insert node then resume). We'll generate examples from meta genomics for which we have in house operations, we could also try climate.
First we'll try sample data to debug: gen a big 1d array, compute sum of each successive pair, etc. Google doc to track thoughts. \section{Benchmarking} \label{sec:benchmarking} We've developed comprehensive benchmark tests for each language (MPI/C, Python, and Scala) designed to work in parallel on large, distributed array data structures. From the user's point of view these data structures are contiguous, but on the system they are distributed across all available nodes as blocks, and the size of each block can be explicitly specified. Smaller blocks have the advantage of being able to be rapidly transferred between nodes, while larger blocks require less overhead and can therefore be processed more efficiently. The data can be loaded from disk, from the network, or generated directly on the node itself by the application. Our benchmarks are designed to emulate a typical image analysis dataflow based on the \textit{map-reduce} paradigm, which consists of the following stages: data loading/generation, then some number of operations executed in parallel on that data, and finally a global reduction to obtain a final result. Each stage can be timed separately in order to more precisely analyze overall execution time. The following parameters apply to all of the benchmarks: \begin{itemize} \item blocks: Number of blocks to be created \item block size: Basic block size \item nodes: Number of nodes actually present in the cluster \item cores: Number of cores per node in the cluster \end{itemize} \subsection{Benchmark Machines} \label{sec:benchmark-machines} \subsubsection{\textsc{Cooley}\xspace} Our first test-bed is a mid-size cluster at the Argonne Leadership Computing Facility (ALCF), aimed primarily but not exclusively at supporting high-performance visualization processing and analytics. It has a total of 126 compute nodes. Each node has two 2.4 GHz Intel Haswell E5-2620 v3 processors (6 cores per CPU, 12 cores total), with 384GB RAM. This system uses an FDR InfiniBand interconnect. \subsubsection{\textsc{Theta}\xspace} The second test-bed is a large-scale Cray XC40 supercomputer with only a single compute node type: the Intel Knights Landing 7230 processor. It features 16 GiB of MCDRAM and 192 GiB of DDR4 per node. Each node has one 1.3 GHz Intel Xeon Phi 7230 processor with 64 cores, and each core has 4 simultaneous multithreading (SMT) hardware threads available. At the time of writing, the platform has a total of 4392 compute nodes (281,088 cores). For our experiments, we were able to use 1024 of these nodes. The Cray Aries network is the high-speed interconnect used on this system. This network is a 3-level Dragonfly topology. \subsection{Benchmark Setup} \label{sec:benchmark-setup} We study the strong and weak scaling behavior of our three code bases on \textsc{Cooley}\xspace and \textsc{Theta}\xspace using simple, but still realistic, parameters for our tests. Though we have developed a comprehensive benchmark, in this work we only compare performance on data \emph{generated} by a simulation or other analysis, not data loaded from disk. Another affordance of Spark is its innate support for out-of-core execution \emph{implicitly} and \emph{by default}, using various strategies to cache to RAM, to disk, and even combinations of the two. This is something that is simply not possible in traditional ahead-of-time compiled languages such as C without essentially reproducing many of the ideas of the JVM, building a memory-management framework, or using memory caching services such as memcached.
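In Spark, by contrast, selecting an out-of-core strategy is a one-line annotation on the RDD. A minimal PySpark sketch, assuming illustrative block counts and sizes rather than our actual benchmark settings:

\begin{lstlisting}[basicstyle=\ttfamily\small]
# Hedged sketch of out-of-core persistence in PySpark:
# MEMORY_AND_DISK spills partitions that do not fit in RAM to
# local disk rather than dropping and recomputing them.
from pyspark import SparkContext, StorageLevel
import numpy as np

sc = SparkContext(appName="persistence-sketch")
blocks = sc.parallelize(range(1024), 1024) \
           .map(lambda i: np.random.rand(1 << 20, 3))
blocks.persist(StorageLevel.MEMORY_AND_DISK)
print(blocks.count())   # materializes (and caches) the blocks
\end{lstlisting}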
While our benchmark has been designed to allow for \emph{data spilling} by adjusting any of the benchmark parameters (blocks, block size, or the partition multiplier), the tests presented in this work deliberately avoid this in order to keep the focus narrowed on the difference between cloud computing and MPI/C without the additional layers required to facilitate out-of-core memory management. All of our Spark benchmarks are performed with the latest release of Apache Spark, version 2.3.2. As we focus on the out-of-the-box experience of Spark usage on supercomputers, we use the Spark binary package pre-built for Apache Hadoop 2.7 and later, downloaded directly from the Spark website. On both systems we use Scala version 2.11.8. On \textsc{Cooley}\xspace, we used the server Java SE version \texttt{1.8.0\_60}, Python version \texttt{3.5.1} from Anaconda 4.0.0 (64-bit), and the Intel C Compiler (icc) version \texttt{18.0.3}. On \textsc{Theta}\xspace, we used Java SE version \texttt{1.8.0\_51}, the Intel Distribution for Python version \texttt{3.5.2}, and the Intel C compiler (icc) version \texttt{18.0.0}. Our benchmark makes use of NumPy~\cite{Walt:2011:NAS:1957373.1957466}, which is one of the most commonly used linear algebra/numerical analysis libraries in Python. This library allows us to work with large-dimension array structures and perform the benchmark operation on dense vectors as efficiently as possible in an interpreted language. The NumPy design has also been making its way into other frameworks, including the Breeze Scala library~\cite{BreezeScala} that we also used in our Scala version of the performance benchmark. Both NumPy and Breeze are needed, simply put, because the native array support in Python and Scala lacks the ability to work with dense vector and matrix structures, especially when it comes to supporting the higher order operations associated with linear algebra. On the other hand, for our MPI/C implementation, we use a straightforward loop with in-place operations for the linear algebra. As the actual computation performed in the benchmark is mostly limited by memory bandwidth, we expect this setup accurately reflects the typical scaling behavior. We perform the benchmark on \textsc{Cooley}\xspace using 1 to 96 nodes, and on \textsc{Theta}\xspace using 128 to 1,024 nodes. In the strong scaling study, we fix the total number of blocks at 9,216 on \textsc{Cooley}\xspace and 131,072 on \textsc{Theta}\xspace. In the weak scaling study, we fix the number of blocks per node at 768 on \textsc{Cooley}\xspace and 512 on \textsc{Theta}\xspace. In both cases, the block size is kept at one, meaning $2^{20}$ vectors, each with three IEEE double precision floating point elements. The C/MPI implementation uses 12 ranks per \textsc{Cooley}\xspace node and 128 ranks per \textsc{Theta}\xspace node. Both the Scala and Python implementations partition their RDDs into 12 parts per \textsc{Cooley}\xspace node and 256 parts per \textsc{Theta}\xspace node. \subsection{Spark Parameters} \label{sec:spark_params} Since we are executing tests that involve a much larger number of nodes and cores than those conducted on \textsc{Cooley}\xspace, we had to disable the Spark \emph{heartbeat} because it overwhelmed the network with hundreds of TCP/IP connections per second and significantly disturbed performance. See Section~\ref{sec:tuning-spark} for an explanation of potential TCP/IP issues others have experienced in large-scale Cray systems similar to ours.
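Concretely, this and the related adjustments described next are made through standard Spark configuration properties. A hedged sketch follows; the property names are real Spark settings, but the values are illustrative rather than the exact ones we used:

\begin{lstlisting}[basicstyle=\ttfamily\small]
# Illustrative tuning sketch: raise the heartbeat interval high
# enough to effectively disable it, raise the network timeout to
# avoid spurious timeout errors at high node counts, and enable a
# parallel GC with a bounded thread count. Values are examples;
# spark.network.timeout must exceed the heartbeat interval.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .set("spark.executor.heartbeatInterval", "3600s")
        .set("spark.network.timeout", "7200s")
        .set("spark.executor.extraJavaOptions",
             "-XX:+UseParallelGC -XX:ParallelGCThreads=8"))
sc = SparkContext(conf=conf)
\end{lstlisting}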
We have also significantly increased the Spark parameters related to the network timeout, in order to eliminate false error reports of network timeouts in cases where expensive network RPCs flood the Linux TCP/IP stack due to the extremely high node count. We increased the Spark driver memory and executor memory according to our machine resources. We also enabled the parallel garbage collector (GC) in the JVM with a limited number of GC threads: 2 on \textsc{Cooley}\xspace and 8 on \textsc{Theta}\xspace. \section{Discussion and Future Work} This work is the beginning of a more detailed understanding of the performance of Apache Spark for HPC dataflows, which have a longstanding tradition of being written in MPI C/C++. Spark dataflows can employ linear algebra libraries like NumPy (for Python) and Breeze (for Scala) aimed at providing near-native C performance. Related work has already shown that MPI C/C++ continues to perform better than various alternatives, which helped to focus our efforts on understanding what is possible in Python and Scala (and Java by association). Executing Spark computations on thousands of nodes using tens of thousands of cores may provide a plausible addition to the currently available tools on these systems. And overheads aside, operating at scale on supercomputers makes cloud-based frameworks a viable approach for co-processing and/or postprocessing of more optimized ``native'' computations, such as simulations. Although the overheads associated with Java and Apache Spark are significant, the results show that scaling to a large number of nodes and core counts is not only possible but a promising direction. We successfully overcame many challenges in getting Spark running on leadership-class supercomputers, e.g. \textsc{Theta}\xspace. With an eye to future leadership systems, work remains to be done, especially when it comes to ameliorating the effects of Spark overheads and ensuring full use of HPC architectures. Alleviating these overheads will require vendor commitments to tune Java for large core counts and Apache Spark for a large number of connections to the master node. The code, scripts, and results we've shared will be of value to researchers who want to evaluate the efficacy of cloud-based frameworks on new supercomputing systems. \section*{Acknowledgment} The submitted manuscript has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (``Argonne''). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. Special thanks to Professor Valerio Pascucci for providing valuable feedback and suggestions for improving this work. \section{About Apache Spark and Dataflows} Apache Spark is a general purpose cluster computing system similar to Hadoop. It provides a new data abstraction that facilitates fast sharing and history-based resilience, as well as an expanded set of data transformations and actions, in addition to traditional map-reduce. In addition, Spark provides a streaming processing abstraction, as well as bindings for languages such as R and libraries such as GraphX and MLlib. Spark is \emph{lazy}, and this philosophy underlies much of its design.
Computations will not be performed until their result is requested and data will not be consolidated or repartitioned unless explicitly requested. \subsection{Submitting Spark Jobs on a Cluster} \label{running} \subsubsection{Starting Spark in Cluster Mode on Cooley} The following snippet shows how we start an ephemeral Spark cluster within our job submission script\footnote{Note that there are some minor differences from our actual version to improve formatting within the paper.} \begin{lstlisting}[basicstyle=\ttfamily\small]
pushd $HOME/code/spark
cat $COBALT_NODEFILE > conf/slaves
cat $COBALT_NODEFILE >> $JOB_LOG
./sbin/start-all.sh
NODES=`wc -l conf/slaves | cut -d" " -f1`
popd
MASTER=`hostname`
ANL=cooley.pub.alcf.anl.gov
SPARK_STATUS_URL="http://$MASTER.$ANL:8000"
SPARK_MASTER_URI=spark://$MASTER:7077
SPARK_HOME=$HOME/code/spark
\end{lstlisting} The job scheduler used on Cooley is Cobalt. When a Cobalt job script is submitted, it runs on the head node, after which services can be started on the head and other nodes in the allocation. As shown above, a variable \texttt{\$COBALT\_NODEFILE} points to a file containing the hostnames that were allocated to the job. We use this list to populate \texttt{conf/slaves} in the Apache Spark configuration folder. Then we use the Apache Spark \texttt{start-all.sh} script to launch the Apache Spark master and worker daemons on the entire cluster. Once Apache Spark is started in cluster mode, we now have the equivalent of a local cloud, albeit ephemeral, that we can use to run one or more Spark jobs. As shown above, variables are set so we can refer to the master URI. In the above, \texttt{\$MASTER} is the head node allocated by Cobalt. \texttt{spark://\$MASTER:7077} is what the Spark submit command (discussed next) will use to submit a Python or Java/Scala job for execution on the cluster. \subsubsection{Running a Spark Job on Cooley} Once the Spark network is started, one or more actual Apache Spark jobs can be executed. This excerpt shows how to run the Scala version of our benchmark on the ephemeral Apache Spark cluster we just created: \begin{lstlisting}[basicstyle=\ttfamily\small]
BUILD=./target/scala-2.10
J=$BUILD/simplemap-spark-scala-assembly-1.0.jar
$SPARK_HOME/bin/spark-submit \
  --master $SPARK_MASTER_URI $J \
  --generate --blocks 128 --block_size 64 --nodes 1 \
  --nparts 1 --cores 12 \
  --json $JOB_JSON >> $JOB_LOG
\end{lstlisting} We'll speak a bit to the parameters when discussing the performance study, but the most important thing, for now, is to observe that the variables set earlier are used to run the job in a relocatable way. In the Scala version of our benchmark, a simple script is written to generate scripts to start Spark and run the benchmark-specific code for the parameter studies of interest. This allows for completely non-interactive execution of the benchmark study on Cooley, subject to available resources. Although this is mostly of interest to developers, we think this can be helpful to other researchers doing work with Spark in clustered environments, especially when they want to set up and tear down a Spark cluster for each job. \subsubsection{Resilient Distributed Data}\label{resilient-distributed-data} When a dataset is loaded by Spark, it becomes an immutable Resilient Distributed Data (RDD) collection. This abstraction allows the data to be treated as a whole when in fact it may be partitioned across many nodes of a distributed system.
Each partition also contains the history of transformations with which it was created, called a \emph{lineage}, with which the partition can be recomputed if necessary, such as in the case of a node failure. This lineage is a more compact form of resiliency compared to the data duplication utilized by Hadoop. An RDD is split into \emph{partitions} whose size is at minimum the size of a block on whatever storage device is being utilized (e.g. 32MB). Each partition is further divided into \emph{records}, typically a single line for text processing, or an entire binary file for binary data. Binary data records can be explicitly specified. Large binary files will be broken down into multiple partitions only if these partitions can themselves be divided into records. Spark distributes the blocks/data among the workers, if it does at all. Apache Spark supports fault tolerance among the workers. For example, if a worker is lost, processing can continue. Although node failure rates are expected to be low in small-to-medium size clusters, larger clusters are more likely to see single node failures, thus making Apache Spark potentially compelling even in highly-reliable supercomputing clusters. As is well-known in the MPI community, attaining such functionality requires application-specific checkpointing and/or a fault-tolerant runtime, both of which incur significant overheads. With Apache Spark, the transparent support for fault tolerance allows the application code to be written without such overheads (at least in theory). While not the subject of this paper, we're intrigued by the potential to look at performance in future work, especially in the presence of one or two node failures. \paragraph{Storage Levels}\label{storage-levels} RDDs can be stored in memory, on disk, a combination of the two, or off heap (using Tachyon). \paragraph{MEMORY\_ONLY} In our experiments, we focus on RDDs that persist to RAM. We also have designed our experiments to consider out-of-core persistence strategies and spill rates (which we can simulate by multiplying the number of blocks, the number of partitions, or both). Partitions that do not fit in memory will be recomputed from their history when they are needed. This recomputation is possible because RDDs are basically Scala immutable collections and are manipulated using map (to create a new RDD) and reduce (to gather results from nodes and apply a function between all pairs). In our benchmark, we are primarily considering mapping dataflow performance but also report initial results on reduce performance. \paragraph{DISK\_ONLY}\label{diskux5fonly} All partitions are stored on disk, just like Hadoop. RDD partitions can be optionally replicated across up to two nodes. An RDD partition must be small enough to fit in memory on a single node. \paragraph{Persistence}\label{persistence} RDDs can be persisted in memory for fast access between successive operations. This is particularly important for iterative algorithms that reuse some intermediate result. An RDD will be persisted according to its specified storage level. Caches are cleared according to a least recently used policy, or manually using RDD.unpersist(). \paragraph{Input}\label{input} In order to get binary data into Spark, we use the binaryFiles or binaryRecords SparkContext functions. Both will read a directory filled with binary files. The first will create one record per file, while the second will partition files into records of an explicitly stated number of bytes.
For example, the following line will create an RDD with partitions containing records of size float{[}3{]}: vecs = sc.binaryRecords(sys.argv{[}1{]}, 12) The binary data is read into a string. If you want to use the data as a numpy array, the numpy.fromstring function can be used: arr = np.fromstring(vec, dtype=np.float32) This function can be used by RDD.map in order to convert the string records into numpy records: data = vecs.map(parseVector) \subsubsection{Streaming}\label{streaming-1} How to run spark in streaming mode. \subsubsection{Spark and Tachyon}\label{spark-and-tachyon} How do they work together? Are the performance characteristics different? Is it the same as ceph on Magellan? (ask Dan Olsen about ceph) \subsubsection{Pegasus}\label{pegasus} todo \subsubsection{MPI}\label{mpi} todo: reference implementation \subsection{System Architectures} \label{system-architectures} \subsubsection{Cooley}\label{cooley} Cooley is a 225 Tflop cluster with 40TB memory, large shared disk (GPFS) + 355TB local scratch space per node. \subsection{Sample Applications}\label{sample-applications} \todo{I don't believe we are covering sample applications in this paper. All of that text can be found in the original google document, so I have removed it from this workshop paper.} \section{Introduction} In this early experience paper, our ultimate goal is to understand and articulate the characteristics of cloud-based dataflow processing in the context of HPC analysis tasks, implement sample apps using Spark and MPI, and evaluate their performance on both a supercomputing cluster and a high-performance cloud. Here we focus our efforts on a micro-benchmark that can be used to understand strong and weak scaling using two of the preferred development languages for Spark: Python and Scala. In our experience, many data-intensive MPI programs follow a typical pattern: a data loading/generation phase, one or more computations on the data (possibly transforming it in the process), and a global operation in each step to get a final result. The workflow described above is found in many applications, including image processing. Consider for example a set of images to be analyzed (over time). An application would load the image data. The image data could be transformed (for example, a color space transformation for feature extraction). An operation would be performed in place on the pixels to identify one or more features. Finally, information about the presence/absence of a feature across all images would be reported. Thus we chose to create a micro-benchmark that abstracts away the details of this workflow while ensuring that the actual workload is realistic and reproducible in real world situations. In addition, we wanted to understand whether the languages supported by Apache Spark play a role. The two main choices likely to be of interest to traditional HPC developers are Python and Scala (a promising JVM-based language with characteristics similar to Python but with ahead-of-time compilation as found in Java, C, and C++). We'll speak more to the affordances and constraints of these two choices in general, especially when it comes to use in Apache Spark, but we are encouraged by the results we are seeing in both of these approaches. We're also intrigued by Apache Spark for software engineering reasons, namely the potential for developers to be productive without making a significant compromise to performance.
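To make this generate/map/reduce pattern concrete, here is a minimal PySpark sketch of the shape of such a micro-benchmark. The sizes and the particular transform are illustrative; this is not the exact benchmark code:

\begin{lstlisting}[basicstyle=\ttfamily\small]
# Hedged sketch of the micro-benchmark shape: generate blocks of
# 3-component vectors, map a simple transform over them in
# parallel, then perform a global reduction.
from pyspark import SparkContext
import numpy as np

sc = SparkContext(appName="simplemap-sketch")
BLOCKS, VECS = 128, 1 << 20        # 128 blocks of 2^20 vectors

def generate(seed):
    # one block of VECS x 3 doubles, seeded per block
    return np.random.RandomState(seed).rand(VECS, 3)

blocks = sc.parallelize(range(BLOCKS), BLOCKS).map(generate)
shifted = blocks.map(lambda b: b + 25.0)   # parallel transform
total = shifted.map(lambda b: float(b.sum())) \
               .reduce(lambda x, y: x + y)  # global reduction
print(total)
\end{lstlisting}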
\section{Introduction} \label{sec:introduction} In this paper, we are motivated to understand and articulate the characteristics of cloud-based dataflow processing in the context of high-performance data analysis tasks, since large HPC systems are beginning to resemble high-performance clouds. As such, being able to schedule and run cloud-based data analysis software allows simulations to perform co- or post-processing with a relatively straightforward computational model (map/reduce) based on functional programming. In order to compare the performance and deployment challenges of such applications with traditional MPI-based solutions, we have developed a data processing benchmark using both MPI and the Apache Spark~\cite{Zaharia:2016:ASU:3013530.2934664} cluster-computing framework (hereafter referred to as \textit{Spark}), which we execute on two supercomputers. We compare the performance of this benchmark by measuring strong and weak scaling. For Spark, we also examine the performance of two supported languages: Python and Scala (the primary language used to author Spark itself). The purpose of these comparisons is not to make a value judgment about which system is best but to demonstrate whether Spark is viable in a supercomputing environment. We envision many scenarios where Spark can be incorporated in mixed MPI/Spark environments (e.g. simulations) where an MPI-based computation could be co-scheduled with a Spark-based application, thereby allowing popular higher-level Python/Java/Scala frameworks to be incorporated into supercomputing applications. To keep our focus narrow, we avoid introducing additional layers, such as data loading and out-of-core analyses, though these will be of great interest in future work. The aim of this work is to assess the out-of-box Spark experience on computational clusters.
We therefore made no changes to the Spark runtime itself, and limit customizations to only the key Spark properties necessary to utilize the cloud computing framework on HPC systems. We use the term \emph{dataflow} to describe a set of operations for the creation, processing, and transformation of a given dataset. This differs from a related computational approach from the 1980s \cite{Arvind:1977:IMD:1067625.806559}, in which dataflow is a fine-grain parallel architecture focused on triggering instructions using tokens. Tokens arrive at instruction nodes, which cause an instruction to \emph{fire}, resulting in tokens that can be passed to subsequent instructions. Nor should the term be confused with the compiler principle of data-flow analysis, which is used to perform various code analyses and optimizations, including but not limited to dead-code elimination, common subexpression elimination, and loop unrolling. While our efforts do not consider every type of MPI application, such as tightly-coupled simulations, many data-intensive programs built on MPI follow a typical map-reduce pattern when performing multistage analyses, in which data is first loaded or generated, then one or more computations are performed on this data (possibly transforming it in the process), and finally a global operation is utilized to obtain a result. This multistage dataflow is commonly found in many applications. Some recent papers point to increased use of map-reduce in large-scale scientific and technical computing: to support distributed scientific calculations, previously done with separate one-node jobs, using a single parallel map/reduce job~\cite{5708835}; for streaming analysis of scientific (e.g. sensor) and business (e.g. financial market) data~\cite{6493198}; and for large-scale data science and machine learning~\cite{Shanahan:2017:LSD:3041021.3051108}. Consider, for example, an image processing application for feature extraction. This program would load the set of images to be analyzed, transform this data by color space, and perform an in-place \textit{map} operation on the pixels to identify one or more features. Finally, the \textit{reduce} operation gathers information about the presence or absence of features across all images. Our benchmark implements a similar multistage set of operations, abstracting some details while ensuring that the actual workload remains realistic and reproducible in real-world situations. It is described in detail in Section~\ref{sec:benchmarking}. The Spark dataflow creates a resilient distributed dataset (RDD) that can be persisted in memory and/or on disk for the lifetime of the computation. These RDDs can be quickly transformed into new RDDs using the \emph{map} operation (the notion of mapping originates in \emph{functional programming languages}~\cite{McCarthy:1962:LPM:1096473}). An earlier system, Hadoop~\cite{Shvachko:2010:HDF:1913798.1914427}, included the map/reduce framework but focused on storage, which is not absolutely required in every meaningful dataflow (per the cited examples above). Spark also supports map/reduce but is focused on objects in memory, which can be persisted on demand or when available memory is exhausted, which is attractive for out-of-core workflows. Indeed, an affordance of Spark is its innate support for out-of-core execution \emph{implicitly} and \emph{by default}: with its strategies to cache to RAM, to disk, and combinations of the two, this is something that is simply not possible in traditional ahead-of-time compiled environments without essentially reproducing many of the ideas of the Java Virtual Machine, building a memory-management framework, or using memory caching services (e.g. memcached). Our benchmark has been designed to allow for \emph{data spilling} by adjusting any of the benchmark parameters (blocks, block size, or the partition multiplier). In addition to understanding how cloud computing frameworks compare with MPI/C, we wanted to understand whether the specific languages used by the cloud frameworks affect overall performance.
The two main choices supported by Spark and likely to be of interest to HPC developers are Python, a dynamic language already popular in computational and data science, and Scala, an object-functional JVM language. While the story of Java and HPC is a long--and somewhat troubled--one, dating back to the Java Grande Forum effort and one of the authors' previous efforts, recent work on OpenJDK and Oracle's Java implementation has progressively improved performance. Notably, JVM startup time (which often leads to exaggerated impressions of bad performance) has improved noticeably in recent iterations and is expected to continue improving with more regular updates to the Java Platform. We are encouraged by the results we are seeing in both of these approaches, and we will speak more generally to their affordances and constraints in our detailed performance analysis and conclusions. Lastly, we are also intrigued by cloud computing frameworks for software engineering reasons. Python and Java are used by a large number of data analysts and machine learning researchers and feature many libraries that enable this work. The availability of cloud computing frameworks on HPC systems, without a significant compromise in performance, would enable more of these practitioners to make full use of available resources. The experiments presented in this work are among the largest such studies, both in terms of number of nodes and total core count, and we believe they can be used to help understand and improve cloud computing performance in large-scale supercomputing systems. Our work to create general-purpose job submission scripts (see Section~\ref{sec:running}) allows us to create multiple Spark networks of the appropriate size to best enable optimal performance of application-specific code. We currently utilize our own job scheduler, but these scripts can be ported easily to other popular job schedulers, enabling Spark applications to run on other HPC systems. In sum, our paper focuses on the following aspects: \begin{itemize} \item Comparison of traditional MPI data processing with cloud-computing frameworks \item Real scalability using up to 65,536 cores on a leadership-class HPC system \item Comparison between the Python and Scala Spark versions and the MPI/C version \item Identifying and addressing Spark performance overheads \item Demonstration of portability between two significantly different leadership-class systems \end{itemize} \section{Benchmarking} \label{sec:benchmarking} Our benchmark distills the multistage dataflow described in the introduction into three stages: (i) create an RDD of three-dimensional vectors, either generated in place or loaded from binary records; (ii) apply a \emph{shift} to every vector using a map transformation; and (iii) compute a global vector average using a reduce. The tunable parameters are the number of blocks, the block size, and a partition multiplier, which together determine the total data size and the degree of parallelism, and which also allow us to force data spilling when desired. \section{Performance} We evaluate the potential of Spark in the analysis of large-scale scientific datasets. In this section we discuss performance results from the comparison of various Spark configurations with each other. For the purposes of comparison, we utilize a simple map example that transforms input data directly. We chose to concentrate on this example because it illustrates the most basic use of the map-reduce paradigm for data processing.\footnote{The Python version does map only, while the Scala version does map and reduce.} The operations performed are simple calculations, very fast compared to the memory transfers involved in loading data. \subsection{Infrastructure} We utilize the Cooley visualization cluster, a 225 Tflop system at Argonne National Lab, for our performance comparison. This system has 128 Infiniband-connected nodes, each with 12 Xeon cores, 384 GB of system memory, and a 384 GB local SSD cache.
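For reference, the speedup and efficiency figures quoted in the remainder of this section follow the usual definitions; writing \(t_n\) for the elapsed time on \(n\) nodes,
\[
S(n) = \frac{t_1}{t_n}, \qquad E(n) = \frac{S(n)}{n},
\]
so that, for example, a speedup of \(14.98\times\) on 16 nodes corresponds to an efficiency of \(93.6\%\).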
\subsection{Spark} Each node of Cooley is much more powerful and equipped with more memory than a typical cloud computing node, and we leverage this in our tests, configuring the system to use 32 GB of RAM per Spark process. We scaled our testing from one to 16 nodes, utilizing from 12 to 192 cores, with one Spark process per core. Our Spark tests involve variation of block size, number of Spark processes, number of physical processor resources, and overall data size. We perform tests that load data from disk as well as tests that generate data directly on the node. \subsection{Python-based Experiments} We first present the results of a set of strong and weak scaling experiments using a simple mapping application written in Python. While Python is not the native language of Spark, the broad availability of analysis libraries makes it an appealing selection for scientific data processing. We later compare the results of our Python-based experiments with native Scala versions of the application. \textbf{Loading data from disk.} Our initial tests load data directly from shared storage using NFS-mounted directories on each node. The results of these tests for strong scaling can be seen in Figure~\ref{fig:strong-scaling-loaded}. The speedup as the number of nodes is increased is shown in Figure~\ref{fig:strong-scaling-speedup-loaded}. Note the maximum speedup reported is only 68.8\% efficient (11.98x) compared to the 16x resources provided. Both 64GB and 128GB data sizes exhibited the best scaling, while small data sizes (up to 8GB) were quickly overwhelmed by the overhead of distribution. Of the more massive data, the 2 TB case scaled at 60.3\% efficiency (9.64x), while data sizes between 256GB and 1024GB scaled slightly worse. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{figures/strong_scaling_loaded} \caption{Strong Scaling Loaded} \label{fig:strong-scaling-loaded} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{figures/strong_scaling_speedup_loaded} \caption{Strong Scaling Speedup Loaded} \label{fig:strong-scaling-speedup-loaded} \end{figure} Our results for weak scaling of loaded data are shown in Figure~\ref{fig:weak-scaling-loaded}. The weak scaling tests using loaded data show obviously poor scaling as more nodes are added. Since the amount of computation is identical for each increment of nodes in this scenario, the time should remain about the same as the number of nodes is increased. Since larger jobs take more time, the implication is that loading the data itself does not scale well. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{figures/weak_scaling_loaded} \caption{Weak Scaling Loaded} \label{fig:weak-scaling-loaded} \end{figure} \textbf{Generating data on node.} Since the weak scaling tests above indicated a diminished ability to scale as more data was loaded, we postulated that distributing the data among the nodes prior to beginning the job would likely result in better performance. Distributed file systems such as HDFS~\cite{hdfs} or Tachyon~\cite{tachyon} provide exactly this sort of functionality, ideally enabling data to be loaded very near the computation. Although we performed preliminary tests using Tachyon, an in-memory distributed file system, we encountered some configuration issues and the results were not yet available at the time of this writing. Instead, we elected to generate data directly on the nodes.
This generation simulates the use of an actual distributed file system and allows us to reason about the role of disk I/O in our previous experiments. The results of the tests for strong scaling using generated data can be seen in Figure~\ref{fig:strong-scaling-generated}. The speedup as the number of nodes is increased is shown in Figure~\ref{fig:strong-scaling-speedup-generated}. Note the maximum speedup reported in this new case is 93.6\% efficient (14.98x) relative to the 16x nodes used for the experiment. The 128GB data sizes once again exhibited the best scaling, while small data sizes were again less effective due to high system overhead compared to the amount of computation to be performed. The larger experiments up to 2 TB all scaled at least as well as the poorest of the group when data was being loaded, between 70\% and 80\% efficiency. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{figures/strong_scaling_generated} \caption{Strong Scaling Generated} \label{fig:strong-scaling-generated} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{figures/strong_scaling_speedup_generated} \caption{Strong Scaling Speedup Generated} \label{fig:strong-scaling-speedup-generated} \end{figure} The results of our weak scaling experiments using generated data, shown in Figure~\ref{fig:weak-scaling-generated}, further confirm our suspicion that shared-file resource contention is an issue for massive data processing on this system. This figure clearly illustrates almost perfect weak scaling as the number of nodes and amount of work are both linearly increased. This suggests the adoption of a distributed file system could be advantageous even for traditional MPI-based applications which read massive amounts of data. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{figures/weak_scaling_generated} \caption{Weak Scaling Generated} \label{fig:weak-scaling-generated} \end{figure} \subsection{Scala-based Experiments} In addition to Python, we also conducted experiments by porting our Python benchmark to Scala using the best available libraries that support numerical methods. We started with Breeze~\cite{BreezeScala}, a core numerical library used in the ScalaNLP (Scala Natural Language Processing) framework. (It is also a core library used in Apache Spark itself.) \textbf{Comparing to the Python version} The Scala benchmark supports all of the core features of the Python benchmark, including our ability to explore disk-based performance. Based on our initial results with Apache Spark and the shared GPFS filesystem, we decided to focus our energy on understanding how well Scala does compared to Python for distributed generated data sets. We also took advantage of the initial conclusion that 64MB block sizes are a \emph{sweet spot} of sorts when it comes to understanding strong and weak scaling performance. \textbf{Changes to Spark configuration for Java} Because our code is 100\% JVM-based (Scala being a JVM language), we had to make a few changes in order to test Scala performance. Whereas the Python version can more or less rely upon Apache Spark's default configuration, we needed to configure the driver and executor (Spark's terms for what runs on the master and worker nodes) to use most of the available memory on the nodes. Because our workers need the most memory, we set this to 320GB.\footnote{It should have been set higher, as our performance charts clearly show.}
To be clear, Scala and Java require the use of JVM-managed memory, whereas the Python version actually uses unmanaged memory outside of the JVM running Spark itself. \textbf{Other subtle differences} There are other subtle differences in how we developed the Scala version, owing to the developer (Thiruvathukal) wanting to get a more granular idea of how lazy evaluation plays a role, since this is a prevalent feature when working with Scala. The Scala version computes the times separately for the distributed data creation/generation, the mapped computation, and the reduced computation. At each of the first two stages, we also force a cache on the result to ensure that it is fully computed. Otherwise, it would not be a fair comparison to what is happening in the Python version. \textbf{Other Java/Scala nuisances} One issue that continues to plague Java (and Scala) is the 32-bit restriction on array sizes. This only affects our benchmark when we try to run with extremely large data sizes. At the same time, we do not want to maintain different code paths for 32-bit and 64-bit indexing. For the purpose of benchmarking, we need to wrap the dense vector representation of our benchmark data using an ordinary Scala (outer) array. This has the end result of putting the Scala code at a theoretical disadvantage, but our initial charts show that the Scala version does have better overall execution times, owing to the \emph{compiled} nature of Java code in general. In addition, the Breeze library we are using has support for native code, similar to NumPy in Python. So performance should be at least on par, given that native code is used when we need to perform operations on array structures. \textbf{Initial Scala Results} The results for strong scaling are shown in Figure~\ref{fig:scalastrong}. The speedup as the number of nodes is increased is shown in Figure~\ref{fig:scalastrongspeedup}. Contrasted with the Python version, we are experiencing some issues at the 64 GiB and 128 GiB sizes. In looking at our performance logs, we notice a higher rate of data spilling when working with larger arrays, which speaks to a need to increase the JVM memory in our executors. Nevertheless, we are encouraged by Scala's performance. Note that our time scale tops out at 4000 seconds for Scala and Breeze but is close to 10000 for Python and NumPy. We attribute these differences, however, not to the innate performance of each library itself but to the overheads associated with Apache Spark launching Python workers for RDD processing. Clearly, further study is required. Figure~\ref{fig:mapshifttime} shows the results of our weak scaling experiments. Similar to the Python version, as the number of nodes and amount of work are linearly increased, we see similar (promising) results. Execution time at the high end (128 GiB) is generally better than the Python version. We stress that these results are initial, and it is likely that the Java version is experiencing issues that require more study in future work. Our logs reveal data spilling, which is probably not occurring in the Python version, and there are likely more interactions with the garbage collector in Java. Nevertheless, we are encouraged by what is possible without doing too much tweaking.
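For concreteness, the driver and executor memory settings discussed above correspond to standard Spark properties. The sketch below shows one way to supply them from a PySpark driver; in practice such properties are typically given at \verb|spark-submit| time or in \verb|spark-defaults.conf|. The 320GB executor value is the one quoted above, while the driver value is illustrative only.
\begin{verbatim}
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("simplemap-config-sketch")
        # Worker-side JVM heap; the value used in our Scala runs.
        .set("spark.executor.memory", "320g")
        # Driver-side heap; illustrative value.
        .set("spark.driver.memory", "32g"))

sc = SparkContext(conf=conf)
\end{verbatim}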
The initial results for the Python and Scala versions of our benchmark clearly illustrate that promising scaling is possible. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/weak_scaling_scala} \caption{Generating synthetic data and applying a global operation, shift, to a large distributed array of vector data.} \label{fig:mapshifttime} \end{figure} Figure~\ref{fig:avgtime} shows the time to perform a global average (reduce) on all of the vectors, using the cached RDD. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/weak_scaling_scala_reduce} \caption{Performing a large reduce (global vector average) on a large, distributed array of vector data.} \label{fig:avgtime} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/strong_scaling_scala} \caption{Creating an RDD of a large, distributed array of vector data and applying a shift mapping operation to the data.} \label{fig:scalastrong} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/strong_scaling_scala_speedup} \caption{Speedup when scaling: creating an RDD of a large, distributed array of vector data and applying a shift mapping operation to the data.} \label{fig:scalastrongspeedup} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/strong_scaling_scala_reduce} \caption{Performing a large reduce (global vector average) on a large, distributed array of vector data.} \label{fig:scalareduce} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figures/strong_scaling_scala_reduce_speedup} \caption{Speedup when scaling: performing a large reduce (global vector average) on a large, distributed array of vector data.} \label{fig:scalareducespeedup} \end{figure} \section{Related Work} Bringing cloud-based frameworks such as Spark to supercomputers is a subject of increased interest in high-performance parallel/distributed computing environments. We consider related work on benchmarking (the papers most closely related to our study), challenges of Spark deployment, scientific applications using Spark, tuning-related issues, and the numerical libraries used in our study for the Python and Scala versions. \subsection{Benchmarks} Chaimov et al~\cite{Chaimov:2016:SSH:2907294.2907310} described their experiences porting Spark to large-scale HPC systems. They observed that I/O (in particular, Lustre metadata operations) is the main bottleneck on supercomputers for this type of High Performance Data Analytics (HPDA) workload. To mitigate that, they developed a file pooling layer and ran experiments using NVRAM buffers (comparable to the on-node SSDs found on \textsc{Theta}\xspace, one of our testbeds). Some key takeaways from their work include using local storage for the shuffle stage, which improved scalability in their problem to 10,000 cores. This work also evaluated a configuration with storage attached closer to compute nodes for I/O acceleration. Our paper is mostly focused on large, generated in-core data sets, but we also evaluated I/O from the shared disk and experienced similar results. The authors uncovered several scaling issues on HPC systems, which would likely be worse on lower-performing clusters; they report that fixing the YARN resource manager and improving the block management in the shuffle block manager would benefit performance.
Our experiments have mostly been confined to being careful with persistence schemes, especially with how data are serialized (on the Java side in particular, where one wants to avoid serializing complex object hierarchies). Our Scala experiments do take advantage of said serialization, which probably explains some of the overheads we are seeing in our performance charts. Gittens et al~\cite{DBLP:journals/corr/GittensDRRGKLMC16} have done a study comparing MPI/C++ and Spark versions. In this work, the authors developed three different parallel versions of matrix factorizations and applied them to terabyte-size data sets. Their testbed is the Cray XC40 with up to 1600 nodes. Initial findings confirm a performance gap between MPI/C++ and Spark---a gap that we also observe in our experiments. The performance gaps are attributable to task-related overheads that would not be present in MPI, which creates and schedules its tasks at compile time in typical computations. Similar to Chaimov et al, serialization can play a major role in Spark, even when data are persisted to RAM. Our study is a bit more focused on strong/weak scaling in the presence of these overheads. One aspect not discussed in that work is developer productivity, in particular the development effort to create the MPI version versus the Spark version; we have found the development time for both our Python and Scala versions of the benchmark to be short, amounting to a few lines of code for each phase of the computation. Ringenburg~\cite{CUG16:Ringenburg} considers performance characteristics of HPDA workloads on a Cray XC40 system, analyzing the logs and workloads of two publicly available benchmarks: Intel's HiBench 4.0 Suite and a CX matrix decomposition algorithm. The CX matrix decomposition experiments use up to 960 cores. The presentation provides guidelines for improving performance on this particular Cray machine, including tuning of various Java Virtual Machine parameters, e.g. the number of garbage collection threads. The platform was Spark 1.5 with no local storage available. Our results are based on relatively recent Spark releases (discussed in the experimental setup, Section~\ref{sec:benchmark-setup}). Marcu et al~\cite{marcu:hal-01347638} and Garcia et al~\cite{García-Gil2017} propose comparisons between Spark and Flink. Marcu et al introduce a methodology based on a set of benchmarks (word count, grep, tera sort, K-means, page rank, connected components, many of which are example programs in the Spark distribution) run on up to 100 nodes to understand performance in this type of framework. Garcia et al show the results of three ML algorithms running on 10 nodes with 16 cores/node. Our benchmark, while synthetic in nature, runs at much higher node counts and cores per node. Chunduri et al~\cite{Chunduri:2017:RVX:3126908.3126926} discuss run-to-run variability on \textsc{Theta}\xspace (one of the supercomputers used in our study), a Cray XC40 system, where the authors observed that MPI\_Allreduce on small message sizes appears to have the highest variability due to inter-job contention. We have also observed about 15\% variability in our experiments. In this paper, we focus on the general scaling behavior, and our Spark benchmarks indicate other overheads playing an even more dominant role, especially when it comes to TCP connections (see Section~\ref{sec:tuning-spark}); we leave the quantification of this variability to a future study. \subsection{Challenges of Spark Deployment} Armbrust et al~\cite{Armbrust:2015:SSR:2824032.2824080} present feedback from a company deploying Spark to various organizations. This paper presents some of the main difficulties encountered, such as large-scale I/O and memory management. The authors developed some memory management features as well as a custom network module.
To make Spark more accessible to non-experts, they also wrote an API based on data frames (as in Python or R). Our paper focuses on the Spark RDD, which is the underlying fabric used to implement data frames and data sets. Tous et al~\cite{7363768} discuss an optimized deployment of Spark on MareNostrum, BSC's supercomputer. The authors developed a framework to automate the usage of Spark. They also provide guidelines on using Spark on this system (the number and size of the workers). As part of our work, we developed generalized job submission scripts, discussed in detail in Section~\ref{sec:spark_setup}, which allow the Spark daemons to be launched in a generalized way (on two of our clusters), followed by launching application-specific code in Spark (which acts as a container). \subsection{Scientific Applications} Yan et al~\cite{7363985} explored the scalability of Spark on a set of seismic data processing algorithms. To do so, the authors propose to change the way data is given to Spark (templates for seismic data sets). Souza et al~\cite{souza:lirmm-01620161} present a scalability analysis of Spark through a synthetic implementation of a scientific workflow based on a real use case in the oil and gas domain. They processed 330 GB of data on a 936-core HPC cluster. The application is not written for Spark; instead, they used a ``black-box'' approach (running an external program with Spark). One of the outcomes is that the task duration has better scalability than the number of tasks (typical strong versus weak scaling). \subsection{Java and Big Data Processing Tools on HPC Systems} \label{sec:tuning-spark} Nowicki et al~\cite{10.1007/978-3-319-78054-2_27} present a library for Java called PCJ, whose purpose is to easily allow parallel computation. This library is evaluated on a KNL-based platform, and the authors come to the conclusion that Java can run successfully on architectures designed for parallelism. In~\cite{8030575}, the authors also target the Intel KNL architecture and describe how they evaluated Hadoop on it through a custom plugin. Again, one of the conclusions is that it is feasible to leverage intra-node parallelism with Big Data processing frameworks. Jacobsen et al~\cite{CUG16:Jacobsen} describe the SLURM implementation on large-scale Cray systems, including a KNL system similar to \textsc{Theta}\xspace. In particular, the team found the need for Linux kernel tuning to increase the maximum number of TCP connections to address the SYN (connection) backlog. As Spark itself relies on a significant number of connections from executor nodes to the driver, we see some evidence of connection issues in our own experiments, resulting in significant overheads. We have not applied these kernel-level tunings on our production cluster but hope to address the issue in future work. \section{Results} We now present the results of our large benchmark runs on \textsc{Cooley}\xspace and \textsc{Theta}\xspace. On \textsc{Cooley}\xspace we used up to 1,152 cores (96 12-core nodes), and on \textsc{Theta}\xspace we executed much larger runs, using up to 65,536 cores (1024 64-core nodes). Our experiments were conducted for both strong and weak scaling on each system. Weak scaling, the simplest type of performance comparison, is when the size of the dataset to be analyzed grows proportionally to the size of the system used for its analysis. Ideal weak scaling should be level and independent of the number of cores.
Strong scaling attempts to solve a large problem more quickly by keeping the input data size constant while increasing the resources used for its analysis. Ideal strong-scaling run time should be inversely proportional to the number of cores. In addition, we performed a few tests on each system probing the throughput and reliability of the interconnect, as we expected Spark to be prone to network issues and thread contention. In contrast to the execution model of MPI, the standalone scheduler in Spark requires message passing in order to actively manage jobs among the available workers. The benchmark results presented below compare dataflow execution times for MPI/C, Spark Python, and Spark Scala. For each of the tests, we provide a figure comparing overall performance. We also show the trend of an ideal scaling (not based on a particular value). \subsection{\textsc{Cooley}\xspace and \textsc{Theta}\xspace Networking Overheads} The MPI libraries on both \textsc{Cooley}\xspace and \textsc{Theta}\xspace use APIs that directly target the interconnect fabric, InfiniBand and Aries, respectively. Apache Spark communicates at the TCP layer over virtual Ethernet adapters that run on top of the interconnect fabric. This negatively affects the communication performance of Apache Spark, because all the traffic goes through the normal TCP/IP stack, which requires system calls, memory copies, and network protocol processing to run on the CPU, in contrast to MPI, which sends traffic directly to the interconnect hardware. Similar observations can be found in reference~\cite{CUG16:Jacobsen}. To investigate further, we tested point-to-point throughput over the network between two nodes using the \texttt{iperf3}~\cite{iperf3} application. On \textsc{Cooley}\xspace, \texttt{iperf3} reaches a maximum of about 16 Gbits/sec using a single connection, which is nearly one third of the MPI point-to-point bandwidth of about 6 GB/sec. On \textsc{Theta}\xspace, \texttt{iperf3} reaches a maximum of about 14 Gbits/sec using a single connection, which is roughly one fifth of the MPI point-to-point bandwidth of about 8 GB/sec. As Spark uses the open-source Netty~\cite{netty} communication framework, which builds on the Java New I/O (NIO) libraries, we wrote a simple benchmark using Netty directly to check network reliability. Our Netty benchmark shows that communication on \textsc{Theta}\xspace is consistently slower than on \textsc{Cooley}\xspace: establishing 8192 connections takes 0.5 seconds on \textsc{Cooley}\xspace, while on \textsc{Theta}\xspace it takes 2 to 3 seconds. These overheads affect Apache Spark, simply because its architecture requires a large number of connections to be made to the driver (master). The maximum response time among these connections is about $20\pm 5$ milliseconds on \textsc{Cooley}\xspace, and $45\pm 10$ milliseconds on \textsc{Theta}\xspace. In addition, on \textsc{Theta}\xspace, among 16 different runs, there were 5 cases in which 1 connection out of 8192 failed. The intermittent connection failures, on the one hand, reinforce the use case for Spark, which has built-in fault tolerance and was able to complete all runs successfully in the presence of failures (which only occurred at high node counts). On the other hand, they increase the run time of Apache Spark.
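Our Netty micro-benchmark itself is not reproduced here, but the shape of the connection-overhead measurement can be sketched with ordinary sockets as a minimal stand-in; the node name and port below are placeholders, and a trivial accept loop is assumed to be listening on the target node.
\begin{verbatim}
import socket
import time

def time_connections(host, port, n):
    # Time n sequential TCP connection setups to (host, port).
    times = []
    for _ in range(n):
        t0 = time.time()
        s = socket.create_connection((host, port), timeout=10)
        times.append(time.time() - t0)
        s.close()
    return max(times), sum(times) / len(times)

# Placeholder node name and port.
worst, mean = time_connections("nid00042", 9999, 8192)
print("max %.1f ms, mean %.1f ms" % (worst * 1e3, mean * 1e3))
\end{verbatim}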
In addition to the network, each CPU core on \textsc{Theta}\xspace has a lower clock frequency than the CPU cores on \textsc{Cooley}\xspace (1.3 GHz versus 2.4 GHz, respectively), which contributes to poor TCP performance on \textsc{Theta}\xspace, as sending data over the TCP/IP stack requires CPU cycles. Spark potentially has additional performance overhead, because the lower single-core performance could worsen thread contention in the Spark scheduler. The combination of these issues (unoptimized TCP/IP, lower point-to-point bandwidth, and connection failures at high node counts) explains the overheads we see in our results. Nevertheless, once the Spark network is bootstrapped, the results are encouraging. \subsection{\textsc{Cooley}\xspace} The benchmarks conducted on \textsc{Cooley}\xspace, a middle-tier supercomputer, used up to 1,152 cores (96 12-core nodes) and required no customization of the Spark framework. For both weak and strong scaling, the results of utilizing cloud computing are highly comparable to those of traditional MPI/C. This suggests the use of such frameworks could be valuable and immediately useful for some research. \paragraph{Weak Scaling} Figure~\ref{fig:cooley-weak-scaling} shows the overall weak scaling comparison of MPI/C with the Spark-based Python and Scala tests. We note that the Python version is about 50\% slower than the MPI/C version, while the Scala version is about 3X slower. The overall time is mostly flat, indicating good weak scaling from 12 cores up to 1,152 cores. While the difference between MPI/C and Spark is explained in the previous subsection, the performance gap between the Scala and Python versions is more surprising. We postulate that extra overhead in PySpark serialization and RDD context switches could be the cause of this reversed performance ordering, and we will investigate this behavior further in a future study. \paragraph{Strong Scaling} Figure~\ref{fig:strong-scaling-cooley} shows our strong scaling experiment comparing MPI/C with the Spark-based Python and Scala tests on the same platform. The figures utilize a logarithmic scale to more clearly show the elapsed time, since the timings decrease inversely with core count for this set of tests. We observed that the speed of the Python implementation approaches the MPI/C version from around 24 to 192 cores, while the Scala version stays at about 2X the Python version. This result is correlated with our observations from the weak scaling experiments. While the MPI/C version scales extremely well, the Spark implementations, especially the Python code, start to show some level of departure from ideal strong scaling above 384 cores. Nevertheless, those results are still reasonable and show how a data processing tool can take advantage of an HPC system. \subsection{\textsc{Theta}\xspace} Next we present results from our benchmark runs on \textsc{Theta}\xspace, a much larger system than \textsc{Cooley}\xspace. On \textsc{Theta}\xspace we start our scaling study with 8192 cores (128 64-core nodes), the smallest allocation for the default queue of the job manager on this system. The following results show a more significant difference in overall performance, for which we suspect network overhead is responsible as parallelization increases, an indication we already observe on \textsc{Cooley}\xspace jobs using more than 768 cores.
Another indication that these performance differences are caused by networking issues was the necessity of disabling the Spark \textit{heartbeat}, which otherwise overwhelmed the system, as mentioned in Section~\ref{sec:spark_params}. \paragraph{Weak Scaling} Figure~\ref{fig:theta-weak-scaling} shows the overall weak scaling comparison of MPI/C with the Spark-based Python and Scala tests. The overall weak scaling behavior with increasing node counts is similar for both Spark and MPI/C, though the Spark runs are about 30X longer than MPI/C. The figure utilizes a logarithmic scale for the axes to depict the similar scaling behaviors: instead of the ideal scaling, which should be flat, the timing clearly increases linearly for all three benchmarks. We expect these networking issues can be resolved. At the moment, it means that larger systems such as the one we use here might require further tuning (e.g. TCP/IP and possibly further JVM tuning) to offer satisfactory scalability. \paragraph{Strong Scaling} The strong scaling experiments carried out on \textsc{Theta}\xspace are depicted in Figure~\ref{fig:theta-strong-scaling}. Again, the figure utilizes a logarithmic scale as in the weak scaling figure above and shows similarly poor scaling for the three benchmarks, all of which depart from ideal scaling, which should be inversely proportional to the number of cores. Similarly to the weak scaling tests, the Spark-based runs are around 30X slower than the MPI/C-based runs. Unlike the experiments on \textsc{Cooley}\xspace, we notice here that both versions of our Spark benchmark perform the same way. \section{Source Code} \label{sec:source-code} The source code for all of our work can be downloaded from GitHub. The Python, Scala, and MPI benchmark codes described in this paper can be found in our GitHub organization, \url{https://github.com/SparkHPC/}, under the repositories \verb|simplemap-spark-python|, \verb|simplemap-spark-scala|, and \verb|simplemap-mpi-c|. This GitHub organization also contains the framework startup scripts and job launching scripts described in Sections~\ref{sec:spark_setup} and \ref{sec:running}, under the repository \verb|Spark_Job|. We welcome contributions that enable this framework to be used with other machines. Our run scripts for both machines and our results are under the repository \verb|spark-benchmark-study|. The GitHub organization page also contains links to related work, including demonstrations of how to use Apache Spark with Jupyter Python notebooks (other work in progress by our team). \section{About Apache Spark} \label{sec:about_spark} Spark is a general-purpose cluster computing system similar to Hadoop. It provides a new data abstraction that facilitates fast sharing and history-based resilience, as well as an expanded set of data transformations and actions, in addition to traditional map-reduce. In addition, Spark provides a stream-processing abstraction as well as higher-level libraries and bindings such as GraphX, MLlib, and R. When a dataset is loaded by Spark, it becomes an immutable Resilient Distributed Dataset (RDD). This abstraction allows the data to be treated as a whole when in fact it may be partitioned across many nodes of a distributed system. Each partition also contains the history of transformations with which it was created, called a \emph{lineage}, with which the partition can be recomputed if necessary, such as in the case of a node failure.
This lineage is a more compact form of resiliency compared to the data duplication utilized by Hadoop. Spark is \emph{lazy}, and this philosophy underlies much of its design. Computations will not be performed until their result is requested, and data will not be consolidated or repartitioned unless explicitly requested. In order to facilitate flexible and generalizable task parallelism, RDDs are read-only data structures, and lazy evaluation can result in faster overall run times since it avoids unnecessary memory allocation. In contrast, MPI/C computations can be carried out in place, and therefore the notion of deferring a computation is not relevant. An RDD is split into \emph{partitions} whose size is at minimum the size of a block on whatever storage device is being utilized (e.g. 32MB). Each partition is further divided into \emph{records}, typically a single line for text processing, or an entire binary file for binary data. Binary data records can also be explicitly specified. Large binary files will be broken down into multiple partitions only if these partitions can themselves be divided into records. Spark distributes the blocks/data among the workers, if it distributes them at all. Spark supports fault tolerance among the workers: if a worker is lost, processing can continue. Although node failure rates are expected to be low in small-to-medium size clusters, larger clusters are more likely to see single-node failures, thus making Spark potentially compelling even in highly reliable supercomputing clusters. As is well known in the MPI community, attaining such functionality requires application-specific checkpointing and/or a fault-tolerant runtime, both of which incur significant overheads. With Spark, the transparent support for fault tolerance allows the application code to be written without such overheads (at least in theory). While not the subject of this paper, we are intrigued by the potential to look at performance in future work, especially in the presence of one or two node failures. RDDs can be \emph{persisted} or \emph{cached} in memory, on disk, a combination of the two, or off heap. In our experiments, we focus on RDDs that persist to RAM. We have also designed our experiments to consider out-of-core persistence strategies and spill rates (which we can simulate by multiplying the number of blocks, the number of partitions, or both). Partitions that do not fit in memory will be recomputed from their history when they are needed. This recomputation is possible because RDDs, which include the data \emph{lineage}, are basically Scala immutable collections and are manipulated using map (to create a new RDD) and reduce (to gather results from the nodes and apply a function between all pairs). In our benchmark, we primarily consider mapping dataflow performance but also report initial results on reduce performance. There are many ways to launch Spark. In cloud environments, Mesos or YARN are popular cluster managers. Spark, however, also provides a built-in standalone deploy mode, which is achieved with a collection of start/stop scripts. These are easy to deploy in our job scheduling environment, where a set of nodes is allocated to the user for a time duration requested at job submission. When our job starts, we launch Spark in the standalone deploy mode, with one Spark master on one node and one Spark worker on each node of our allocation, and submit our benchmark script to the Spark master.
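As a minimal illustration of these ideas (lazy transformations, explicit persistence, and the map/reduce pair), a PySpark fragment in the spirit of our benchmark might read as follows; the collection size and the shift constant are illustrative only.
\begin{verbatim}
import numpy as np
from pyspark import SparkContext, StorageLevel

sc = SparkContext(appName="simplemap-sketch")

# Build a distributed collection of float[3] vectors.
# Nothing runs yet: transformations are lazy.
vecs = (sc.parallelize(range(1024), 1024)
          .map(lambda i: np.arange(3, dtype=np.float32) + i))

# A map transformation: shift every vector (still lazy).
shifted = vecs.map(lambda v: v + np.float32(25.0))
shifted.persist(StorageLevel.MEMORY_ONLY)

# Actions force evaluation; reduce combines results pairwise.
total = shifted.reduce(lambda a, b: a + b)
mean = total / shifted.count()
\end{verbatim}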
\subsection{Submitting Spark Jobs on our Clusters} \label{sec:running} The supercomputers we use for this paper schedule and execute compute jobs via a job scheduler. A user can request a specific number of compute nodes and a fixed maximum wall clock time for a compute job given as a binary or a script. Such a job will then wait in the queue and get launched when the requested compute resources become available. When the job starts, the user, in a job script, typically launches the executable through resource-specific methods, such as \verb|mpirun| on \textsc{Cooley}\xspace or \verb|aprun| on \textsc{Theta}\xspace. We will discuss our computing resources in detail in Section~\ref{sec:benchmark-machines}. In collaboration with Argonne National Laboratory, we have developed a set of scripts that automate the job submission procedure, where the main user-facing interface is a bash script. \subsection{Starting the Spark Framework} \label{sec:spark_setup} In order to run Spark jobs on our two HPC systems we need to first start up the framework. This entails specification of the size of the desired cloud-computing cluster as well as the internal URL from which it can be accessed. Since it is designed to work with a general set of nodes accessible via SSH, we use the built-in Spark standalone cluster mode in our study. All of this is automated in our scripts. On our first test-bed, we find that the Spark built-in scripts, which use SSH, work well by default, once we overwrite the SSH command in a bash function and provide essential environment variables through the SSH invocation. On the second test-bed, due to the large number of nodes, we use the Cray-recommended command \verb|aprun| to launch the Spark master and workers. \subsection{Running the Spark Job} Once Spark is started in cluster mode, we now have the equivalent of a local cloud, albeit ephemeral, that we can use to run one or more Spark jobs. We use the environment variable passed through our modified SSH command or \verb|aprun| to specify the Spark master URI, and use the Spark built-in \verb|bin/spark-submit| on the Spark master node to submit a Python or Java/Scala job for execution on the cluster. These are also automated in our scripts.
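As a sketch of the submitted job's side of this handshake (our addition; the environment variable name is illustrative, not the one our scripts actually export), a Python job can pick up the master URI at startup:
\begin{verbatim}
import os
from pyspark.sql import SparkSession

# SPARK_MASTER_URI is assumed to be exported by the launch scripts.
master = os.environ.get("SPARK_MASTER_URI", "local[*]")
spark = (SparkSession.builder
         .master(master)          # e.g. spark://nid00001:7077
         .appName("benchmark")
         .getOrCreate())
print("connected to", spark.sparkContext.master)
spark.stop()
\end{verbatim}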
\section{Introduction} Does a smooth proper variety in positive characteristic know the Hodge numbers of its liftings? In this paper, we construct an example showing that the answer is no in general. There are some obstructions to making such an example. For instance, such an example must be of dimension at least \(3\) (see Proposition~\ref{surfaces}). The examples we construct here are \(3\)-folds in characteristic at least \(3\), see Section~\ref{L'exemple} and Subsection~\ref{three}. \section{Notations} \label{notations} Throughout this paper, unless specified otherwise, let \(p \geq 5\) be a prime, let \(\mathrm{R} = \mathbb{Z}_p[\zeta_p]\) where \(\zeta_p\) is a primitive \(p\)-th root of unity. Let \(\mathcal{E}/\mathrm{Spec}(\mathrm{R})\) be an ordinary elliptic curve possessing a \(p\)-torsion point \(P \in \mathcal{E}(\mathrm{R})[p]\) which does not specialize to the identity element.\footnote{There are such pairs over \(\mathbb{Z}_p\). Indeed, the Honda--Tate theory tells us the polynomial \(x^2 - x + p\) corresponds to an ordinary elliptic curve \(\mathcal{E}_0\) over \(\mathbb{F}_p\) with \(p\) rational points (cf.~\cite[TH\'{E}OR\'{E}ME 1.(i)]{Honda--Tate}). In particular, we see that \(\mathcal{E}_0(\mathbb{F}_p) \cong \mathbb{Z}/p\). Now the Serre--Tate theory (cf.~\cite[Chapter 2]{Serre--Tate}) tells us \(\mathcal{E}\), the canonical lift of \(\mathcal{E}_0\) over \(\mathbb{Z}_p\), satisfies \({\mathcal{E}[p]}^{\'{e}t} \cong \mathbb{Z}/p \times \mathrm{Spec}(\mathbb{Z}_p)\). Hence we see that all the rational points of \(\mathcal{E}_0\) are liftable over \(\mathbb{Z}_p\).} Fix such an auxiliary elliptic curve along with this \(p\)-torsion point. Denote the uniformizer \(\zeta_p - 1 \in \mathrm{R}\) by \(\pi\). Denote the fraction field of \(\mathrm{R}\) by \(\mathrm{K}\), and the residue field by \(\kappa\). We will use \(\mathcal{O}\) to denote an unspecified mixed characteristic discrete valuation ring, which will only show up in Proposition~\ref{surfaces}. We will use curly letters to denote integral objects over \(\mathrm{Spec}(\mathrm{R})\), use the corresponding straight letter to denote the generic fibre and use the subscript \({(\cdot)}_0\) to denote the special fibre, i.e., reduction mod \(\pi\). For example, we will denote the generic fibre of \(\mathcal{E}\) by \(E\) and the special fibre by \(\mathcal{E}_0\). To simplify the notations, whenever no confusion seems to arise, we will not denote the base over which we make the fibre product. \section{L'exemple} \label{L'exemple} Let \(\mathcal{C}\) be the proper smooth hyper-elliptic curve over \(\mathrm{Spec}(\mathrm{R})\) defined by \[ v^2 = \sum_{i=0}^{p-1} \frac{\binom{p}{i}}{{(\zeta_p - 1)}^{i}} u^{p-i}.\footnote{We leave it to the readers to verify that this indeed defines a smooth proper curve with the other affine piece given by \(v^2 = \sum_{i=0}^{p-1} \frac{\binom{p}{i}}{{(\zeta_p - 1)}^{i}}u^{i+1}\).} \] One checks easily that this curve has genus \(\frac{p-1}{2}\) and that \(\mathcal{C}_0\), its reduction mod \(\pi\), is the hyper-elliptic curve defined by \[ v^2 = u^p - u. \] After inverting \(\pi\) and making the substitution \begin{align*} x & = (\zeta_p - 1) u + 1 \\ y & = v, \end{align*} we see that \(C\), the generic fibre of \(\mathcal{C}\), is the hyper-elliptic curve defined by \[ {(\zeta_p - 1)}^p y^2 = x^p - 1. \] Indeed, \(x^p - 1 = {((\zeta_p - 1)u + 1)}^p - 1 = \sum_{i=0}^{p-1} \binom{p}{i} {(\zeta_p - 1)}^{p-i} u^{p-i} = {(\zeta_p - 1)}^p v^2\). There is an \(\mathrm{R}\)-linear \(\mathbb{Z}/p = \langle \sigma \rangle\)-action on \(\mathcal{C}\) given by \begin{align*} \sigma(u) & = \zeta_p \cdot u + 1 \\ \sigma(v) & = v.
\end{align*} One checks that in the generic fibre, using the \(xy\)-coordinates, this action becomes \(\sigma(x) = \zeta_p \cdot x\) and \(\sigma(y) = y\). In the special fibre, this action becomes \(\sigma(u) = u + 1\) and \(\sigma(v) = v\). We have a canonical character \(\mathbb{Z}/p \to K^\times\) given by \begin{align*} \chi \colon & \langle \sigma \rangle \rightarrow K^\times \\ & \sigma \mapsto \zeta_p. \end{align*} \begin{proposition} \label{conjugation and decomposition} Using notations as above, we have \begin{enumerate} \item in the special fibre, the actions of \(\sigma\) and \(\sigma^4\) are conjugate by an automorphism of \(\mathcal{C}_0\); \item in the generic fibre, we have a decomposition \[ H^0(C , \Omega^1) = \bigoplus_{1 \leq i \leq \frac{p-1}{2}} \chi^i \] as representations of \(\langle \sigma \rangle\). \end{enumerate} \end{proposition} \begin{proof} (1) Consider the automorphism \(\tau \colon \mathcal{C}_0 \to \mathcal{C}_0\) given by \begin{align*} \tau(u) & = 4u \\ \tau(v) & = 2v. \end{align*} One easily verifies that this preserves the equation \(v^2 = u^p - u\), hence defines an automorphism of \(\mathcal{C}_0\), and that \(\tau \circ \sigma \circ \tau^{-1} = \sigma^4\). This completes the proof of (1). (2) Recall that \(\left\{ \frac{\mathop{dx}}{y}, \frac{x \mathop{dx}}{y}, \ldots, \frac{x^{g-1} \mathop{dx}}{y} \right\}\) form a basis of \(H^0(C , \Omega^1)\) whenever \(C\) is a genus \(g\) hyper-elliptic curve given by \(y^2 = f(x)\). One checks immediately that under this basis, \(\sigma\) acts by the characters as in the Proposition. \end{proof} Recall that in Section~\ref{notations}, we fixed an auxiliary elliptic curve \(\mathcal{E}\) over \(\mathrm{R}\) and a \(p\)-torsion point \(P\) on it which does not specialize to the identity element. Hence translating by \(P\) defines an order \(p\) automorphism of \(\mathcal{E}\) over \(\mathrm{R}\) which acts trivially on the global \(1\)-forms; let us denote this automorphism by \(\tau_P\). \begin{construction} Let \(\mathcal{X} \coloneqq (\mathcal{C} \times \mathcal{C} \times \mathcal{E}) / \langle (\sigma,\sigma,\tau_P) \rangle\) and let \(\mathcal{Y} \coloneqq (\mathcal{C} \times \mathcal{C} \times \mathcal{E}) / \langle (\sigma,\sigma^4,\tau_P) \rangle\). Here we mean the schematic quotient by the indicated \textit{diagonal} action. \end{construction} Then we have the following: \begin{proposition} Both \(\mathcal{X}\) and \(\mathcal{Y}\) are smooth and proper over \(\mathrm{Spec}(\mathrm{R})\), and their special fibres are isomorphic as smooth proper \(\kappa\)-varieties. Moreover we have \(H^0(X, \Omega^3_X) = 0\) and \(H^0(Y, \Omega^3_Y) \not= 0\). \end{proposition} \begin{proof} The third component ensures that the action is fixed-point free. Therefore the quotient is smooth and proper, and its formation commutes with base change: \begin{align*} \mathcal{X}_0 & \cong (\mathcal{C}_0 \times \mathcal{C}_0 \times \mathcal{E}_0) / \langle (\sigma,\sigma,\tau_P) \rangle \\ \mathcal{Y}_0 & \cong (\mathcal{C}_0 \times \mathcal{C}_0 \times \mathcal{E}_0) / \langle (\sigma,\sigma^4,\tau_P) \rangle. \end{align*} By~\ref{conjugation and decomposition}~(1), \(\sigma\) and \(\sigma^4\) are conjugate to each other by \(\tau\) (with notations loc.~cit.). We see that \((\mathop{id}, \tau, \mathop{id})\) induces an isomorphism between \(\mathcal{X}_0\) and \(\mathcal{Y}_0\).
In the generic fibre, the global \(3\)-forms of the quotient are identified with the global \(3\)-forms of \(C \times C \times E\) that are invariant under the respective actions. By the K\"{u}nneth formula and~\ref{conjugation and decomposition}~(2), we have the following decomposition \[ H^{3,0} (C \times C \times E) = \bigoplus_{1 \leq i \leq \frac{p-1}{2}} \chi^i \otimes \bigoplus_{1 \leq i \leq \frac{p-1}{2}} \chi^i \otimes \mathbbm{1} \] as \( (\sigma,\sigma,\tau_P) \)-representations. Since a summand \(\chi^i \otimes \chi^j \otimes \mathbbm{1}\) is invariant under \((\sigma,\sigma,\tau_P)\) only if \(i + j \equiv 0 \pmod{p}\), which is impossible in the range \(2 \leq i + j \leq p-1\), we see that \(H^{3,0}(X) = 0\). To see that \(H^0(Y, \Omega^3_Y) \not= 0\), we note that in the above decomposition \(\frac{x_1 \mathop{d x_1}}{y_1} \wedge \frac{x_2^{\frac{p-3}{2}} \mathop{d x_2}}{y_2} \wedge \omega\) is invariant under \((\sigma, \sigma^4, \tau_P)\), where \(\omega\) is some translation invariant nonzero \(1\)-form on \(E\). Here we have used \( p \geq 5\), so that \(\frac{x_1 \mathop{d x_1}}{y_1}\) is a \textit{holomorphic} global \(1\)-form on \(C\). Hence we get that \(H^0(Y, \Omega^3_Y) \not= 0\). \end{proof} \begin{remark} Those readers who are familiar with Deligne--Lusztig varieties have perhaps realized that the curve \(\mathcal{C}_0 = \{y^2 = x^p - x\}\) is nothing but the quotient of the Drinfel'd curve \(\{y^{p+1} = x^p z - x z^p\}\) (cf.~\cite[Ch. 2]{Deligne--Lusztig}), where the quotient is with respect to the subgroup \(\mu_{\frac{p+1}{2}} \subset \mu_{p+1}\), which acts on \(y\) by multiplication and fixes \(x\) and \(z\). Hence the curve \(\mathcal{C}_0\) bears the action of \(\mathrm{SL}_2(\mathbb{F}_p) \times \mathbb{Z}/2\) where the second factor comes from the hyper-elliptic structure of \(\mathcal{C}_0\). Under this identification, the \(\sigma\) (resp.~\(\tau\)) we found above corresponds to \( \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \) (resp.~\( \begin{pmatrix} 2 & 0 \\ 0 & \frac{1}{2} \end{pmatrix} \) (possibly times the nontrivial involution depending on whether \(2^{\frac{p+1}{2}} = 2 \text{ or } -2\) in \(\mathbb{F}_p\))). \end{remark} Following the same spirit, we construct a similar example in the case \( p = 3\) (see Subsection~\ref{three}). Concerning the case \( p = 2 \), one may ask the following: \begin{question} Can one lift a Deligne--Lusztig curve of Suzuki type (cf.~\cite{Suzuki}) in characteristic \(2\) equivariantly with respect to an element of the Suzuki group \({}^2B_2\), so that a similar example can be made out of it? \end{question} \section{Complements and Remarks} \subsection{Case \(p = 3\)} \label{three} Let us consider the case \(p=3\) in this subsection. Let \(\mathrm{R} = \mathbb{Z}_3[\omega,i]\) where \(\omega\) is a primitive \(3\)-rd root of unity and \(i^2 =-1\). Denote the uniformizer \(\omega - 1 \in \mathrm{R}\) by \(\pi\). Let \(\mathcal{C}\) be the proper smooth hyper-elliptic curve over \(\mathrm{Spec}(\mathrm{R})\) defined by \[ v^2 = {(u^3 + (\omega^2 - 1) u^2 - \omega^2 u)}^3 + (u^3 + (\omega^2 - 1) u^2 - \omega^2 u). \] One checks easily that this curve has genus \(4\) and that \(\mathcal{C}_0\), its reduction mod \(\pi\), is the hyper-elliptic curve defined by \[ v^2 = u^9 - u. \] After inverting \(\pi\) and making the substitution \begin{align*} x & = (\omega - 1) u + 1 \\ y & = v, \end{align*} we see that \(C\), the generic fibre of \(\mathcal{C}\), is the hyper-elliptic curve defined by \[ y^2 = \frac{1}{{(\omega - 1)}^9} \cdot {(x^3 - 1)}^3 + \frac{1}{{(\omega - 1)}^3} \cdot {(x^3 - 1)}. \] There is an \(\mathrm{R}\)-linear \(\mathbb{Z}/3\)-action on \(\mathcal{C}\) given by \begin{align*} \sigma(u) & = \omega \cdot u + 1 \\ \sigma(v) & = v.
\end{align*} Similarly to Section~\ref{L'exemple}, and using analogous notations, we state the following: \begin{proposition} \label{auxiliary proposition three} Using notations as above, we have \begin{enumerate} \item in the special fibre, the actions of \(\sigma\) and \(\sigma^2\) are conjugate by an automorphism of \(\mathcal{C}_0\); \item in the generic fibre, we have a decomposition \[ H^0(C , \Omega^1_C) = \chi^{\oplus 2} \oplus \chi^2 \oplus \mathbbm{1} \] as representations of \(\langle \sigma \rangle\). \end{enumerate} \end{proposition} The proof is similar; notice that now the automorphism group of \(\mathcal{C}_0\) is \(\mathrm{SL}_2(\mathbb{F}_9) \times \mathbb{Z}/2\) and \(2 = -1 = i^2\) is a square in \(\mathbb{F}_9\). Possibly passing to an unramified extension of \(\mathrm{R}\), we may assume as before that there is an elliptic curve \(\mathcal{E}\) over \(\mathrm{R}\) together with a \(3\)-torsion point \(P\) which does not specialize to the identity element. Then we make the following: \begin{construction} Let \(\mathcal{X} \coloneqq (\mathcal{C} \times \mathcal{C} \times \mathcal{E}) / \langle (\sigma,\sigma,\tau_P) \rangle\) and let \(\mathcal{Y} \coloneqq (\mathcal{C} \times \mathcal{C} \times \mathcal{E}) / \langle (\sigma,\sigma^2,\tau_P) \rangle\). \end{construction} \begin{proposition} Both \(\mathcal{X}\) and \(\mathcal{Y}\) are smooth and proper over \(\mathrm{Spec}(\mathrm{R})\), and we have \(h^{3,0}(X) = 5\) and \(h^{3,0}(Y) = 6\). \end{proposition} \subsection{Final Remarks} The following Proposition shows that our example is sharp in terms of its dimension (the case of curves is trivial). \begin{proposition} \label{surfaces} Let \(\mathcal{X}\) and \(\mathcal{Y}\) be smooth proper schemes over \(\mathrm{Spec}(\mathcal{O})\) of relative dimension \(2\). Suppose \(\mathcal{X}_0 \cong \mathcal{Y}_0\); then \(h^{i,j}(X) = h^{i,j}(Y)\) for all \(i\),\(j\). \end{proposition} \begin{proof} Since for surfaces we have \(\frac{1}{2} b_1 = h^{0,1} = h^{1,0} = h^{1,2} = h^{2,1}\), by smooth proper base change we know that these numbers only depend on the special fibre. Therefore the Hodge numbers of \(X\) and \(Y\) agree except for the degree \(2\) part. Now the fact that the Euler characteristic of a flat coherent sheaf stays constant in a family shows that the degree \(2\) Hodge numbers of \(X\) and \(Y\) also agree. \end{proof} In order to make such an example, dimension is certainly not the only constraint. \begin{proposition} \label{torsions} Let \(\mathcal{X}\) and \(\mathcal{Y}\) be smooth proper schemes over \(\mathrm{Spec}(\mathcal{O})\) with \(\mathcal{X}_0 \cong \mathcal{Y}_0\). Suppose the Hodge-to-de Rham spectral sequence for \(\mathcal{X}_0\) degenerates at the \(E_1\)-page and \(H^r_{\text{crys}}(\mathcal{X}_0/W(k))\) is torsion-free for all \(r\). Then \(h^{i,j}(X) = h^{i,j}(Y)\) for all \(i\),\(j\). \end{proposition} \begin{proof} The crystalline cohomology being torsion-free implies that \(h^r_{\mathrm{dR}}(X) = h^r_{\mathrm{dR}}(\mathcal{X}_0)\). In the generic fibre, by Hodge theory, we have \(\sum_{i+j = r} h^{i,j}(X) = h^r_{\mathrm{dR}}(X)\). In the special fibre, by the degeneration of the Hodge-to-de Rham spectral sequence, we have \(\sum_{i+j = r} h^{i,j}(\mathcal{X}_0) = h^r_{\mathrm{dR}}(\mathcal{X}_0)\). These three equalities, along with upper semi-continuity of \(h^{i,j}\), imply \(h^{i,j}(\mathcal{X}_0) = h^{i,j}(X)\). Then the same argument implies \(h^{i,j}(\mathcal{X}_0) = h^{i,j}(Y)\). Hence we see that the Hodge numbers of \(X\) and \(Y\) are the same.
\end{proof} \begin{remark} Using the fact that \(H_1(C;\mathbb{Z})\) as a \(\mathbb{Z}/p\)-module is the augmentation ideal in \(\mathbb{Z}[\mathbb{Z}/p]\), one can show that \(h^1_{\mathrm{dR}}(\mathcal{X}_0) = 4\) and \(h^1_{\mathrm{dR}}(X) = 2\), which implies that \(\dim_{\mathbb{F}_p} H^2_{\text{crys}}(\mathcal{X}_0/W(k))[p] = 2\). A more detailed study shows that the length of the torsion in the crystalline cohomology groups of our examples stays bounded for all primes \(p\), whereas the discrepancy between \(h^{3,0}(X)\) and \(h^{3,0}(Y)\) grows linearly in \(p\). \end{remark} We conclude this paper by observing that the examples we found are over a ramified base with absolute ramification index \(p-1\), and by asking: \begin{question} \label{question} Is there a pair of smooth proper schemes \(\mathcal{X}\) and \(\mathcal{Y}\) over \(\mathrm{Spec}(W(k))\), such that \begin{enumerate} \item \(\mathcal{X}_0 \cong \mathcal{Y}_0\) and; \item \(h^{i,j}(X) \not= h^{i,j}(Y)\) for some \(i,j\)? \end{enumerate} \end{question} Note that by~\cite[Corollaire 2.4]{Deligne--Illusie} the Hodge-to-de Rham spectral sequence for any smooth proper \(\mathcal{X}_0\) degenerates at the \(E_1\)-page, provided that \(\dim(\mathcal{X}_0) < p\) and \(\mathcal{X}_0\) lifts to \(W_2(k)\). In particular, the example asked for in Question~\ref{question}, if it exists and is of small dimension (say, a \(3\)-fold), must have torsion in \(H^*_{\text{crys}}(\mathcal{X}_0/W(k))\) by Proposition~\ref{torsions}. \section*{Acknowledgement} The author would like to thank Brian Lawrence for asking him the question this paper is concerned with. He thanks Johan de Jong heartily for warm encouragement and stimulating discussions. He would also like to thank Daniel Litt, Qixiao Ma, Shuai Wang, Yihang Zhu and Ziquan Zhuang for helpful discussions. \bibliographystyle{alpha}
\section{Introduction} Wall-shear stress fluctuation is a crucial physical quantity in wall-bounded turbulence, as it is of importance for noise radiation, structural vibration, drag generation, and wall heat transfer, among others \citep{Diaz-Daniel2017,Cheng2020a}. In the past two decades, ample evidence has shown that the root mean squared value of streamwise wall-shear stress fluctuations ($\tau_{x,rms}^{'}$) is sensitive to the flow Reynolds number \citep{Abe2004,Schlatter2010,Yang2017,Guerrero2020}. This indicates that large-scale energy-containing eddies populating the logarithmic and outer regions in high-Reynolds-number wall turbulence have non-negligible influences on the near-wall turbulence dynamics, and thus on the wall friction \citep{Giovanetti2016,Li2019a}. To date, several models have been proposed for the organization of motions in the logarithmic and outer regions and their interactions with the near-wall dynamics. \cite{Marusic2010} have established that superposition and modulation are the two basic mechanisms by which large-scale motions (LSMs) and very-large-scale motions (VLSMs) exert influence on the near-wall turbulence. The former refers to the footprints of LSMs and VLSMs on the near-wall turbulence, while the latter indicates the intensity amplification or attenuation of near-wall small-scale turbulence by the outer motions. \cite{Mathis2013} extended the model to interpret the generation of wall-shear stress fluctuations in high-Reynolds-number flows. They emphasized that superposition and modulation are still the two essential factors. This inner-outer interaction model (IOIM) has also been successfully developed to predict the near-wall velocity fluctuations with data inputs from the log layer \citep{Marusic2010,Baars2016,Wang2021}. On the other hand, the most elegant conceptual model describing the motions in the logarithmic region is the attached-eddy model (AEM) \citep{Townsend1976,Perry1982}. It conjectures that the logarithmic region is occupied by an array of self-similar energy-containing motions (or eddies) with their roots attached to the near-wall region. Extensive validations support the existence of attached eddies in high-Reynolds-number turbulence, such as the logarithmic decay of streamwise velocity fluctuation intensities \citep{Meneveau2013}, as originally predicted by \cite{Townsend1976}. The reader is referred to a recent review by \cite{Marusic2019} for more details. Given the existence of wall-attached energy-containing motions in the logarithmic region, it is natural to hypothesize that the near-wall part of these motions affects the generation of the wall-shear fluctuations to some extent, perhaps via the superposition and modulation mechanisms. However, some fundamental questions may be raised, e.g., whether the IOIM and the AEM are consistent with each other. It is possible that the superposition component of $\tau_x'$ decomposed by the IOIM in physical space does not fully follow, quantitatively, the predictions made by the AEM. If the two models are consistent, can they shed light on the mechanism of wall-shear fluctuation generation and inform modeling approaches? A previous study \citep{Yang2017} verified that the generation of wall-shear stress fluctuations can be interpreted as the outcome of the momentum cascade across momentum-carrying eddies of different scales, and can be modeled by an additive process.
Here, we first aim to couple the additive description with the AEM to portray the generation process of streamwise wall-shear fluctuations resulting from wall-attached eddies. Two scaling laws describing their intensities and the linkages with the characteristic scales of attached eddies can be derived (according to the AEM, the characteristic scales of attached eddies are their wall-normal heights \citep{Townsend1976}). Then, we intend to isolate the streamwise wall-shear stress fluctuations generated by attached eddies in a turbulent channel flow at $Re_{\tau}=2003$ ($Re_{\tau}=hu_{\tau}/\nu$, where $h$ denotes the channel half-height, $u_{\tau}$ the wall friction velocity and $\nu$ the kinematic viscosity) by resorting to the IOIM \citep{Marusic2010,Baars2016}. Here, the IOIM is employed as a tool to estimate the streamwise wall-shear fluctuations generated by attached eddies. The statistics from the IOIM can be processed to verify the scaling laws deduced from the AEM, so as to demonstrate their consistency. Moreover, a simple algebraic model describing the instantaneous distributions of the streamwise wall-shear stress fluctuations generated by attached eddies will be proposed. \section{Streamwise wall-shear stress fluctuations generated by attached eddies} According to \cite{Mandelbrot1974} and \cite{Yang2017}, the generation of streamwise wall-shear stress fluctuations can be modeled as an additive process within the multifractal formalism, which takes the form of \begin{equation} \tau_{x}^{'+}=\sum_{i=1}^{n}a_i, \end{equation} where the $a_i$ are random addends, each representing an increment in $\tau_x^{'+}$ due to eddies with wall-normal height $h/2^i$, and the superscript $+$ denotes normalization in wall units. Here, we intend to isolate the contributions from the eddies populating the logarithmic region ($\tau_{x,o}^{'+}$) and link them to their wall-normal positions $y$. $\tau_{x,o}^{'+}$ can be expressed as \begin{equation}\label{Gaa} \tau_{x,o}^{'+}=\sum_{i=n_s}^{n_o}a_i, \end{equation} where $n_s$ and $n_o$ are the indices of the addends that correspond to the eddies with wall-normal heights $y_s$ and $y_o$, respectively. Here, $y_s$ is the lower bound of the logarithmic region, generally believed to satisfy $80 \leq y_s^+\leq 100$ \citep{Jimenez2018,Baars2020a}, and $y_o$ is the outer reference height. It can be found that $1\textless n_s \textless n_o \textless n$. The addends $a_i$ are assumed to be independently and identically distributed (i.i.d.), with the common distribution of a random variable $a$. The number of addends should be proportional to \begin{equation} n_o-n_s+1\sim \int_{y_s}^{y_o} p(y) dy \sim \int_{y_s}^{y_o} \frac{1}{y} dy\sim \ln(\frac{y_o}{y_s}), \end{equation} where $p(y)$ is the eddy population density, which is proportional to $1/y$ according to the AEM \citep{Townsend1976,Perry1982}. A moment generating function $\left\langle exp(q\tau_{x,o}^{'+})\right\rangle$, where $\left\langle \cdot \right\rangle$ represents averaging in time and in the spatially homogeneous directions, is defined to scrutinize the scaling behavior of $\tau_{x,o}^{'+}$ \citep{Yang2016a}. $\left\langle exp(q\tau_{x,o}^{'+})\right\rangle$ can be evaluated as \begin{equation}\label{SSS} \left\langle exp(q\tau_{x,o}^{'+})\right\rangle=\left\langle exp(qa)\right\rangle^{n_o-n_s+1}\sim (\frac{y_o}{y_s})^{s(q)}, \end{equation} where $q$ is a real number, $s(q)=C_1\ln\left\langle exp(qa)\right\rangle$ is called the anomalous exponent, and $C_1$ is a constant. Eq.~(\ref{SSS}) is called strong self-similarity (SSS).
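As a point of reference (this intermediate step is our addition, and assumes zero-mean addends), the moment generating function of a single Gaussian addend can be evaluated in closed form, \[ \left\langle exp(qa)\right\rangle=exp\left(\tfrac{1}{2}\sigma_a^2q^2\right), \qquad s(q)=C_1\ln\left\langle exp(qa)\right\rangle=\tfrac{1}{2}C_1\sigma_a^2q^2, \] where $\sigma_a^2$ denotes the variance of $a$; the anomalous exponent is then exactly quadratic in $q$, with $C_2=C_1\sigma_a^2/2$ in the notation adopted below.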
If $a$ is a zero-mean Gaussian variable, the anomalous exponent can be recast as \begin{equation} \label{Ga} s(q)=C_2q^2, \end{equation} where $C_2$ is another constant. On the other hand, an extended self-similarity (ESS) is defined to describe the relationship between $\left\langle exp(q\tau_{x,o}^{'+})\right\rangle$ and $\left\langle exp(q_0\tau_{x,o}^{'+})\right\rangle$ (fixed $q_0$) \citep{Benzi1993}, i.e., \begin{equation}\label{equ:M5} \begin{aligned} \left\langle exp(q\tau_{x,o}^{'+})\right\rangle = \left\langle exp(q_0\tau_{x,o}^{'+})\right\rangle^{\xi(q,q_0)}, \end{aligned} \end{equation} where $\xi(q,q_0)$ is a function of $q$ (fixed $q_0$). Note that ESS does not strictly rely on the i.i.d.\ property of the addends, but only on the additive process Eq.~(\ref{Gaa}). \section{DNS database and scale decomposition method} The direct numerical simulation (DNS) database used in the present study is an incompressible turbulent channel flow at $Re_{\tau}=2003$, which has been extensively validated by previous studies \citep{Hoyas2006,Jimenez2008}. The decomposition of $\tau'_x$ is based on the IOIM first proposed by \cite{Marusic2010}. \cite{Baars2016} modified the computational process by introducing spectral stochastic estimation to avoid artificial scale decomposition. In this work, the modified version of the IOIM is adopted to investigate the multi-scale characteristics of $\tau'_x$. It can be expressed as \begin{equation} u_{p}^{+}\left(y^{+}\right)=\underbrace{u^{*}\left(y^{+}\right)\left\{1+\Gamma_{u u} u_{L}^{+}\left(y^{+}\right)\right\}}_{u_{s}^{+}}+u_{L}^{+}\left(y^{+}\right), \end{equation} where $u_{p}^{+}$ denotes the predicted near-wall streamwise velocity fluctuation, $u^{*}$ denotes the universal velocity signal without large-scale impact, $u_L^{+}$ is the superposition component, $\Gamma_{u u}$ is the amplitude-modulation coefficient, and $u_{s}^{+}$ denotes the amplitude-modulated universal signal $u^{*}$. $u_{L}^{+}$ is obtained by spectral stochastic estimation of the streamwise velocity fluctuation at a wall-normal position $y_o^+$ in the logarithmic region, namely, \begin{equation} u_{L}^{+}\left(x^{+}, y^{+}, z^{+}\right)=F_{x}^{-1}\left\{H_{L}\left(\lambda_{x}^{+}, y^{+}\right) F_{x}\left[u_{o}^{+}\left(x^{+}, y_{o}^{+}, z^{+}\right)\right]\right\} \end{equation} where $u_{o}^{+}$ is the streamwise velocity fluctuation at $y_o^+$ in the logarithmic region, and $F_x$ and $F_x^{-1}$ denote the FFT and inverse FFT in the streamwise direction, respectively. $H_L$ is the transfer kernel, which evaluates the correlation between $u^+(y^+)$ and $u_{o}^{+}(y_o^+)$ at a given length scale $\lambda_{x}^{+}$, and can be calculated as \begin{equation}\label{HL} H_{L}\left(\lambda_{x}^{+}, y^{+}\right)=\frac{\left\langle\hat{u}\left(\lambda_{x}^{+}, y^{+}, z^{+}\right) \overline{\hat{u}_o}\left(\lambda_{x}^{+}, y_{o}^{+}, z^{+}\right)\right\rangle}{\left\langle\hat{u}_o\left(\lambda_{x}^{+}, y_{o}^{+}, z^{+}\right) \overline{\hat{u}_o}\left(\lambda_{x}^{+}, y_{o}^{+}, z^{+}\right)\right\rangle}, \end{equation} where $\hat{u}$ is the Fourier coefficient of $u$, and $\overline{\hat{u}}$ is the complex conjugate of $\hat{u}$. \begin{figure} \centering \subfigure{ \label{fig:AEH:a} \includegraphics[width=3.5in]{fig1.eps} } \caption{A schematic of the attached-eddy model \citep{Hwang2015}. Each circle represents an individual attached eddy. $y_s^+$ and $y_e^+$ are the lower and upper bounds of the logarithmic region, respectively.
$y_o^+$ is the outer reference height, and varies from $y_s^+$ to $y_e^+$.} \label{fig:AEH} \end{figure} In this work, we mainly pay attention to the portion of $\tau'_x$ generated by the attached eddies. Thus, the predicted position $y^+$ is fixed at $y^+=0.3$, and the outer reference height $y_o^+$ varies from $100$ (namely $y_s^+$) to $0.2h^+$ (denoted as $y_e^+$), i.e., the upper boundary of the logarithmic region \citep{Jimenez2018}. We have checked that as long as the predicted position satisfies $y^+\le O(1)$, the results presented below are insensitive to the specific choice of $y^+$. Once $u_L^+$ is obtained, the superposition component of $\tau_x^{'+}$ can be calculated by definition (i.e., $\frac{ \partial u_L^{'+}} { \partial y^+}$ at the wall) and denoted as $\tau_{x,L}^{'+}(y_o^+)$. According to the hierarchical distribution of attached eddies in high-Reynolds-number wall turbulence (see Fig.~\ref{fig:AEH}), $\tau_{x,L}^{'+}(y_o^+)$ represents the superposition contributed by the wall-coherent motions with heights larger than $y_o^+$. Thus, the difference $\tau_{x,L}^{'+}(y_s^+)-\tau_{x,L}^{'+}(y_o^+)$ can be interpreted as the superposition contribution generated by the wall-coherent eddies with wall-normal heights between $y_s^+$ and $y_o^+$, i.e., $\tau_{x,o}^{'+}$ in Eq.~(\ref{Gaa}). Considering that $y_s^+$ is the lower bound of the logarithmic region, increasing $y_o^+$ corresponds to including more addends in the additive description (see Eq.~(\ref{Gaa})). In this way, the connection between the AEM and the IOIM is established, and the AEM predictions (see Eqs.~(\ref{SSS})-(\ref{equ:M5})) can be verified directly. \section{Results and discussion} \subsection{Scaling laws of $\tau_{x,o}^{'+}$} \begin{figure} \centering \subfigure{ \label{fig:SSS:a} \includegraphics[width=2.5in]{fig2a.eps} } \subfigure{ \label{fig:SSS:b} \includegraphics[width=2.5in]{fig2b.eps} } \caption{($a$) $G$ as functions of $y_o/y_s$ for $q=\pm5$ and $q=\pm3$; ($b$) premultiplied $G$ as functions of $y_o/y_s$ for $q=\pm5$ and $q=\pm3$.} \label{fig:SSS} \end{figure} Here, we further define a moment generating function based on the IOIM, i.e., \begin{equation} G(q,y_o^+)=\left\langle exp(q(\tau_{x,L}^{'+}(y_s^+)-\tau_{x,L}^{'+}(y_o^+)))\right\rangle . \end{equation} Fig.~\ref{fig:SSS}($a$) shows the variations of $G$ as a function of $y_o/y_s$ for $q=\pm5$ and $q=\pm3$. Power-law behaviours can be found in the interval $1.7\le y_o/y_s\le2.9$ for positive $q$ and $1.7\le y_o/y_s\le4$ for negative $q$, justifying the validity of the SSS, i.e., Eq.~(\ref{SSS}). Fig.~\ref{fig:SSS:b} aids in assessing these scalings by displaying the variations of the premultiplied $G$. This observation highlights that the superpositions of wall-attached log-region motions on the wall surface follow the additive process characterized by Eq.~(\ref{Gaa}). It is also worth mentioning that the power-law behaviour can be observed over larger wall-normal intervals for negative $q$. As $G(q,y_o^+)$ is dominated by events of $\tau_{x,L}^{'+}(y_s^+)-\tau_{x,L}^{'+}(y_o^+)$ whose sign matches that of $q$, this observation is consistent with the work of \cite{Cheng2020a}, which showed that the footprints of the inactive part of attached eddies populating the logarithmic region are closely connected with large-scale negative $\tau'_x$. Other values of $q$ yield similar results and are not shown here for brevity.
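For concreteness, a minimal sketch of the spectral stochastic estimation described in the previous section is given below (our addition; the array shapes, variable names, and the conversion $\tau_{x,L}^{'+}\approx u_{L}^{+}/y^{+}$, which invokes the linear near-wall profile, are illustrative assumptions rather than the actual post-processing code). Synthetic random signals stand in for the DNS planes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
nsnap, nx, y_plus = 256, 1024, 0.3

# Stand-ins for DNS planes of u'+ at y+ = 0.3 and at y_o+.
u_wall = rng.standard_normal((nsnap, nx))
u_log = rng.standard_normal((nsnap, nx))

def transfer_kernel(u_in, u_out):
    # H_L = <F[u] conj(F[u_o])> / <F[u_o] conj(F[u_o])>, cf. Eq. (9);
    # <.> is an ensemble average over snapshots (axis 0).
    U, Uo = np.fft.rfft(u_in, axis=-1), np.fft.rfft(u_out, axis=-1)
    return (U * Uo.conj()).mean(0) / (Uo * Uo.conj()).mean(0)

def superposition(u_out, H):
    # u_L+ = F^{-1}{ H_L F[u_o+] }, cf. Eq. (8).
    return np.fft.irfft(H * np.fft.rfft(u_out, axis=-1),
                        n=u_out.shape[-1], axis=-1)

H = transfer_kernel(u_wall, u_log)
u_L = superposition(u_log, H)
tau_L = u_L / y_plus   # linear near-wall profile (assumption)

# With tau_L evaluated at two reference heights, the moment generating
# function of Eq. (10) is estimated as np.mean(np.exp(q*(tau_s - tau_o))).
\end{verbatim}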
\begin{figure} \centering \subfigure{ \label{fig:SSS2:a} \includegraphics[width=2.5in]{fig3a.eps} } \subfigure{ \label{fig:SSS2:b} \includegraphics[width=2.5in]{fig3b.eps} } \caption{($a$) Anomalous exponent $s(q)$ as a function of $q$. The black line is a quadratic fit; ($b$) second- to sixth-order moments of $\tau_{x}^{'+}$ as functions of $Re_{\tau}$. The dashed lines are the log-normal predictions from Eqs.~(\ref{Ga2})-(\ref{Ga6}).} \label{fig:SSS2} \end{figure} The anomalous exponent $s(q)$ can be obtained by fitting over the range $2\le y_o/y_s\le2.9$, where both positive and negative $q$ display good power-law scalings. Fig.~\ref{fig:SSS2}($a$) displays the variation of the anomalous exponent $s(q)$ as a function of $q$. The solid line denotes the quadratic fit within $-0.5\le q \le 0.5$. It can be seen that the variation of $s(q)$ is very close to the model prediction, i.e., the quadratic function Eq.~(\ref{Ga}) with $C_2=0.00629$. Only minor discrepancies between the DNS data and the model prediction can be observed. As such, it is reasonable to hypothesize that the streamwise wall-shear stress fluctuation $\tau'_x$ generated by attached eddies of a given size follows a Gaussian distribution. Moreover, we can also estimate the statistical moments of $\tau_{x,o}^{'+}$ by taking derivatives of $G(q,y_e^+)$ with respect to $q$ at $q=0$ \citep{Yang2016a}, i.e., \begin{equation}\label{Ga2} \left\langle \tau_{x,o}^{'2+} \right\rangle =\left.\frac{\partial^{2} G(q ;y_o^+)}{\partial q^{2}}\right|_{q=0} \sim 2C_2\ln(y_o/y_s) \sim 2C_2\ln Re_{\tau}, \end{equation} \begin{equation}\label{Ga4} \left\langle \tau_{x,o}^{'4+} \right\rangle^{1/2}=(\left.\frac{\partial^{4} G(q ;y_o^+)}{\partial q^{4}}\right|_{q=0})^{1/2} \sim 2\sqrt{3}C_2\ln(y_o/y_s) \sim 2\sqrt{3}C_2\ln Re_{\tau}, \end{equation} \begin{equation}\label{Ga6} \left\langle \tau_{x,o}^{'6+} \right\rangle^{1/3} =(\left.\frac{\partial^{6} G(q ;y_o^+)}{\partial q^{6}}\right|_{q=0})^{1/3} \sim 2\sqrt[3]{15}C_2\ln(y_o/y_s) \sim 2\sqrt[3]{15}C_2\ln Re_{\tau}. \end{equation} Fig.~\ref{fig:SSS2}($b$) shows the variations of the second- ($p=1$) to sixth- ($p=3$) order moments of $\tau'_x$ calculated from DNS of channel flows \citep{Iwamoto2002,DelAlamo2003,Abe2004,DelAlamo2004,Hu2006,Lozano-Duran2014a,Lee2015,Cheng2019,Kaneda2021} and compares them with the model predictions, i.e., Eqs.~(\ref{Ga2})-(\ref{Ga6}). For the second- and fourth-order moments, the model predictions are roughly consistent with the DNS results. The comparisons also indicate a Reynolds-number dependence of $\left\langle \tau_{x}^{'2+} \right\rangle$, which has been reported by numerous studies \citep{Schlatter2010,Mathis2013,Guerrero2020}, and may be ascribed to the superposition effects of the wall-attached log-region motions. \cite{Wang2020} speculated that the amplitude modulation effect plays a more prominent role than the superposition effect in shaping the statistical characteristics of $\tau_{x,rms}^{'+}$, which contradicts the present findings. In fact, amplitude modulation has been demonstrated to exert a negligible effect on the even-order moments \citep{Mathis2011,Blackman2019}. Therefore, the deduction of \cite{Wang2020} needs to be revisited. For the sixth-order moments, the model prediction displays substantial discrepancies with the DNS data. This is expected, since high-order moments are dominated by rare events resulting from intermittent small-scale motions \citep{Frisch}, which cannot be captured by the IOIM (see Fig.~\ref{fig:pdf}($a$)).
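As a consistency check (this derivation is our addition), the numerical prefactors in Eqs.~(\ref{Ga2})-(\ref{Ga6}) follow directly from expanding the log-normal moment generating function. Writing $L=\ln(y_o/y_s)$, \[ G(q)=exp\left(C_2Lq^2\right)=\sum_{k\geq 0}\frac{{(C_2L)}^k}{k!}q^{2k}, \] so that $\left.\partial_q^{2}G\right|_{q=0}=2C_2L$, $\left.\partial_q^{4}G\right|_{q=0}=12{(C_2L)}^2$ and $\left.\partial_q^{6}G\right|_{q=0}=120{(C_2L)}^3$; taking the square and cube roots recovers the prefactors $12^{1/2}=2\sqrt{3}$ and $120^{1/3}=2\sqrt[3]{15}$ quoted above.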
\begin{figure} \centering \subfigure{ \label{fig:ESS:a} \includegraphics[width=2.5in]{fig4a.eps} } \subfigure{ \label{fig:ESS:b} \includegraphics[width=2.5in]{fig4b.eps} } \caption{($a$) $G(q)$ as functions of $G(-2)$ for $q=-1,-3,-5,-7$; ($b$) $G(q)$ as functions of $G(2)$ for $q=1,3,5,7$. Both vertical and horizontal axes in ($a$) and ($b$) are plotted in logarithmic form.} \label{fig:ESS} \end{figure} ESS (i.e., Eq.~(\ref{equ:M5})) is another scaling predicted by the multifractal formalism. Different from SSS, ESS does not rely on the i.i.d.\ property of the addends, but only on the additive process (see Eq.~(\ref{Gaa})). Figs.~\ref{fig:ESS}($a$) and \ref{fig:ESS}($b$) show the ESS scalings for $q_0=-2$ and $q_0=2$, respectively. ESS holds for the entire logarithmic region. This observation suggests that the streamwise wall-shear fluctuations generated by logarithmic motions obey the additive process, even though the streamwise wall-shear fluctuations generated by attached eddies with wall-normal heights of approximately $0.2h^+$ are not independently and identically distributed, owing to the scale interactions (see Fig.~\ref{fig:SSS}), which are not described by the attached-eddy model. \subsection{Instantaneous distribution of $\tau'_x$} Furthermore, the instantaneous $\tau_x^{'+}$ can be decomposed as \begin{equation}\label{MIOP2} \tau_x^{'+}=\tau_{x,s}^{'+}+\underbrace{\tau_{x,L}^{'+}(y_s^+)-\tau_{x,L}^{'+}(y_e^+)}_{\tau_{x,log}^{'+}}+\underbrace{\tau_{x,L}^{'+}(y_e^+)}_{\tau_{x,out}^{'+}}, \end{equation} where $\tau_{x,s}^{'+}$ denotes the amplitude-modulated universal signal $\tau_{x}^{'*+}$, and $\tau_{x,log}^{'+}$ and $\tau_{x,out}^{'+}$ are the superposition components contributed by the log-region and the outer wall-coherent motions, respectively. The methodology for removing modulation effects can be found in \cite{Mathis2011} and \cite{Baars2016}, the details of which are beyond the scope of the present study. Fig.~\ref{fig:pdf}($a$) shows the probability density functions (p.d.f.s) of $\tau_{x}^{'*+}$, $\tau_{x,s}^{'+}$, $\tau_{x,log}^{'+}$ and $\tau_{x,out}^{'+}$, and compares them with the p.d.f.\ of the full channel data. The p.d.f.s of $\tau_{x,s}^{'+}$ and $\tau_{x}^{'*+}$ nearly coincide with that of $\tau_{x}^{'+}$, all having an asymmetric, positively skewed shape, which demonstrates that removing the superposition and modulation effects barely affects the instantaneous distributions. The asymmetries between the positive and negative wall-shear fluctuations are essential characteristics of the near-wall small-scale turbulence, which may be associated with the celebrated near-wall sustaining process \citep{Schoppa2002}. In contrast, the p.d.f.s of $\tau_{x,log}^{'+}$ and $\tau_{x,out}^{'+}$ are more symmetric, with no visible rare events, suggesting that the superposition components of the logarithmic and outer motions are less intermittent than the small-scale universal signals. This also explains why the log-normal model describes the additive process well (see Fig.~\ref{fig:SSS2}($a$)), although the log-normal model is inapplicable to rare events \citep{Landau}. Moreover, the skewness and flatness of $\tau_{x,log}^{'+}$ are 0.05 and 2.91, very close to the Gaussian values of 0 and 3.
This strongly supports the conclusion drawn above that the streamwise wall-shear stress fluctuations generated by attached eddies populating the logarithmic region can be treated as Gaussian variables, with \begin{equation}\label{pp1} p(\xi)=\frac{1}{\sqrt{2 \pi} \sigma} \exp \left(-\frac{\xi^{2}}{2 \sigma^{2}}\right), \end{equation} where $p(\xi)$ denotes the p.d.f., and $\xi$ is the independent variable. It is worth noting that the variation of the variance can be well predicted by the log-normal model, namely \begin{equation}\label{pp2} \sigma^{2}=\left.\frac{\partial^{2} G(q ;y_o^+)}{\partial q^{2}}\right|_{q=0}=2C_2\ln(Re_{\tau})+C_3, \end{equation} where $C_2\approx0.00629$, and $C_3\approx-0.07959$ is a constant determined by the DNS data at $Re_{\tau}=2003$. Fig.~\ref{fig:pdf}($b$) shows the p.d.f.s of $\tau_{x,log}^{'+}$ and the model predictions from Eq.~(\ref{pp1}); results at two other Reynolds numbers \citep{DelAlamo2004,Lozano-Duran2014a} are also included for comparison. It can be seen that the Gaussian model proposed here works reasonably well and covers a wide range of Reynolds numbers. The model remains to be validated against higher-Reynolds-number DNS data. \begin{figure} \centering \subfigure{ \label{fig:PDF:a} \includegraphics[width=2.5in]{fig5a.eps} } \subfigure{ \label{fig:PDF:b} \includegraphics[width=2.5in]{fig5b.eps} } \caption{($a$) P.d.f.s of $\tau_{x}^{'*+}$, $\tau_{x,s}^{'+}$, $\tau_{x,log}^{'+}$, $\tau_{x,out}^{'+}$, and $\tau_{x}^{'+}$; ($b$) P.d.f.s of $\tau_{x,log}^{'+}$ in channel flows with $Re_{\tau}=934$, $2003$, and $4179$. Dashed lines denote the Gaussian model predictions with Eqs.~(\ref{pp1})-(\ref{pp2}).} \label{fig:pdf} \end{figure} \section{Concluding remarks} In summary, the present study reveals that the IOIM and the AEM are quantitatively consistent with each other. The statistical characteristics of the superpositions of log-region eddies follow the predictions of the AEM, namely, the SSS and ESS scalings. Based on these observations, we conclude that the streamwise wall-shear stress fluctuations generated by attached eddies populating the logarithmic region can be treated as Gaussian variables. A Gaussian model is then proposed to describe their instantaneous distributions and verified by DNS data spanning a broad range of Reynolds numbers. Considering that the intensity of wall-shear stress fluctuations is typically underpredicted by state-of-the-art wall-modelled large-eddy simulation (WMLES) approaches \citep{Park2016}, the Gaussian model proposed in the present study may be constructive for the development of the LES methodology, and the distribution characteristics of $\tau_{x}^{'*+}$ are helpful for developing more accurate near-wall models for WMLES approaches. It is noted that some previous works adopted the IOIM to investigate the spectral characteristics of the wall-coherent components of the signals in the near-wall region, such as the work of \cite{Marusic2017}, but whether they are quantitatively consistent with the AEM predictions in physical space has not been verified in detail. The consistency of the two models demonstrated here fills this gap and complements their works. Moreover, the findings in the present study indicate that we can isolate the footprints of attached eddies within a selected wall-normal range by employing the IOIM, i.e., by adjusting $y_1^+$ and $y_2^+$ in $\tau_{x,L}^{'+}(y_1^+)-\tau_{x,L}^{'+}(y_2^+)$. Here $y_1^+$ and $y_2^+$ are two selected wall-normal heights in the logarithmic region, with $y_1^+<y_2^+$.
In this regard, the present study may provide a new perspective for analyzing some of the flow physics in wall-bounded turbulence, such as the inner peak of the intensity of $u'$ and the streamwise inclination angles of attached eddies. All of these are currently under investigation and will be reported in separate forthcoming papers. \section*{Acknowledgments} We are grateful to the authors cited in Fig.~\ref{fig:SSS2}($b$) for making their invaluable data available. L.F. acknowledges the funding from CORE, a joint research center for ocean research between QNLM and HKUST. \section*{Declaration of interests} The authors report no conflict of interest. \bibliographystyle{jfm}
\title{$\mathrm{GE}_2$-Rings and a graph of unimodular rows} \author{ Kevin Hutchinson} \address{School of Mathematics and Statistics, University College Dublin} \email{ kevin.hutchinson@ucd.ie} \date{\today} \keywords{special linear group, elementary matrices, $K$-theory } \subjclass{19C20,20G30} \begin{document} \maketitle \begin{abstract} For a commutative ring $A$ we consider a related graph, $\Gamma(A)$, whose vertices are the unimodular rows of length $2$ up to multiplication by units. We prove that $\Gamma(A)$ is path-connected if and only if $A$ is a $\mathrm{GE}_2$-ring, in the terminology of P. M. Cohn. Furthermore, if $Y(A)$ denotes the clique complex of $\Gamma(A)$, we prove that $Y(A)$ is simply connected if and only if $A$ is universal for $\mathrm{GE}_2$. More precisely, our main theorem is that for any commutative ring $A$ the fundamental group of $Y(A)$ is anti-isomorphic to the group $K_2(2,A)$ modulo the subgroup generated by symbols. \end{abstract} \section{Introduction} For a commutative ring $A$ we consider the following functorially associated graph $\Gamma(A)$: The vertices of $\Gamma(A)$ are unimodular rows $(a,b)$ up to multiplication by units. Two (equivalence classes of) rows form an edge of $\Gamma(A)$ if they are the rows of a matrix in $\gl{2}{A}$. Let $Y(A)$ be the clique complex of $\Gamma(A)$. Thus $Y(A)$ is the simplicial complex whose set of $n$-simplices $Y_n(A)$ is the set of $(n+1)$-cliques (complete subgraphs on $n+1$ vertices) of $\Gamma(A)$.
For reasons detailed below, we wished to determine the class of rings $A$ for which $Y(A)$ is homologically $1$-connected and the investigations in this paper were motivated by this question.
By considering paths from $\infty:=(1,0)$, it is not too difficult to see that the space $Y(A)$, or equivalently the graph $\Gamma(A)$, is path-connected if and only if $\spl{2}{A}$ is generated by elementary matrices; i.e., if and only if $A$ is a $\mathrm{GE}_2$-ring in the terminology of P. M. Cohn (\cite{cohn:gln}).
More precisely, we show (Theorem \ref{thm:pi0}) that the set of path components, $\pi_0(Y(A))$, of $Y(A)$ is naturally in bijective correspondence with the coset space $E_2(A)\backslash \spl{2}{A}$, where $E_2(A)$ is the subgroup generated by elementary matrices.
We recall that while every Euclidean domain is a $\mathrm{GE}_2$-ring, there are many examples of PIDs which are not $\mathrm{GE}_2$-rings.
In Section \ref{sec:path} below we also show how paths from $\infty$ in $\Gamma(A)$ correspond to weak Euclidean algorithms for unimodular pairs and, when $A$ is an integral domain with field of fractions $F$, to the existence of continued fraction expansions of elements of $F$.
The fundamental group of $Y(A)$, on the other hand, is, as we show, related to Cohn's notion of rings which are \emph{universal for $\mathrm{GE}_2$}. The group $E_2(A)$ is generated by the matrices
$E(a):=\left[
\begin{array}{cc}
a&1\\
-1&0\\
\end{array}
\right], a\in A$
and Cohn (\cite{cohn:gln}) writes down certain universal relations satisfied by these matrices. The ring $A$ is, by definition, \emph{universal for $\mathrm{GE}_2$} if these generators and relations give a presentation of the group $E_2(A)$.
In Section \ref{sec:univ} below, we consider the group $C(A)$ described by this presentation (in the case of \emph{commutative} $A$) and the kernel $U(A)$ of the surjective homomorphism $C(A)\to E_2(A)$. Thus, by definition, $A$ is universal for $\mathrm{GE}_2$ if and only if $U(A)=1$.
Our main theorem (Theorem \ref{thm:main}) gives an explicit anti-isomorphism, with explicit inverse, from the fundamental group $\pi_1(Y(A),\infty)$ to the group $U(A)$. It follows that $Y(A)$ is simply connected if and only if $A$ is universal for $\mathrm{GE}_2$. The essential work in the article consists in preparing the ground for the proof of the main theorem by writing down a presentation for this fundamental group.
In fact, it is well-known that, for a commutative ring $A$, the condition of being universal for $\mathrm{GE}_2$ is equivalent to the condition that the rank one $K_2$ group $K_2(2,A)$ is generated by symbols; i.e., that $K_2(2,A) = C(2,A)$ where $C(2,A)$ is the subgroup generated by the set of symbols $\{ c(u,v)\ |\ u,v\in A^\times\}$. However, I am not aware that a proof of this result has been published. For example, Dennis and Stein state this fact on page 228 of \cite{dennisstein:dvr} and refer to unpublished notes for the proof. Other articles which use this result generally cite Dennis-Stein.
Since we use this result in an essential way below, we have added, for our own convenience and that of the reader, an appendix in which we verify that there is a natural isomorphism $U(A)\cong K_2(2,A)/C(2,A)$ for any commutative ring $A$ (Theorem \ref{thm:gamma}). In view of this isomorphism, our main theorem also states that the fundamental group of $Y(A)$ is anti-isomorphic to $K_2(2,A)/C(2,A)$.
We now describe our original motivation for studying the questions addressed in this article.
To a commutative ring $A$ we associate a complex of abelian groups $L_\bullet(A)$ as follows: Let $X_n(A)$ denote the set of all ordered $(n+1)$-tuples $(x_0,\ldots,x_n)$ where $\{ x_0,\ldots,x_n\}\in Y_n(A)$ is an $(n+1)$-clique of $\Gamma(A)$. Let $L_n(A):=\Bbb{Z}[X_n(A)]$ and let $d_n: L_n(A)\to L_{n-1}(A)$ be the standard simplicial boundary homomorphism. We also have an augmentation $\epsilon:L_0(A)\to \Bbb{Z}, (x_0)\mapsto 1$.
The complex $L_\bullet(A)$ is naturally a complex of right modules over the group $\spl{2}{A}$, and has proved very useful in calculating the low-dimensional homology of this group, when $A$ is a field or local ring, in terms of refined Bloch groups (\cite{hut:rbl11},\cite{hut:slr},\cite{hut:sl2Q}).
With R. C. Coronado, we have investigated the possibility of extending these results and methods to more general rings, such as local rings with small residue fields, principal ideal domains and rings of integers (\cite{corohut:bloch}). We have found that in order for the Bloch groups of a ring, appropriately defined, to have the desired relationship to $H_3(\spl{2}{A},\Bbb{Z})$ or to the indecomposable $K_3$ of $A$, we must have that the complex
\[
L_2(A)\to L_1(A)\to L_0(A)\to \Bbb{Z}\to 0
\]
is exact; i.e., that $H_0(L_\bullet(A))\cong \Bbb{Z}$ and $H_1(L_\bullet(A))=0$.
However, there is a natural map of complexes $L_\bullet(A)\to C_\bullet(Y(A))$ (the oriented chain complex of $Y(A)$) and it is not hard to see that this induces an isomorphism on homology in dimensions $0$ and $1$\footnote{On the other hand, $H_2(L_\bullet(\Bbb{Z}))\not=0$ while $H_2(C_\bullet(Y(\Bbb{Z})))=H_2(Y(\Bbb{Z}))=0$.}.
Thus our question becomes: for which rings $A$ is $Y(A)$ homologically $1$-connected? Since $H_1(Y(A))\cong \pi_1(Y(A),\infty)^{\mathrm{ab}}$ our main theorem gives an answer to this question in terms of the $K$-theory group $K_2(2,A)$.
The $K_2$ calculations of Morita in \cite{morita:k2zs} show that the Euclidean domains $\Bbb{Z}[\frac{1}{m}]$ are universal for $\mathrm{GE}_2$ for many integers $m$ which are divisible by $2$ or $3$ (Example \ref{exa:pi1m}). However, Morita's calculations in \cite{morita:mab} can be used to show that for any prime $p\geq 5$, $H_1(L_\bullet(\Bbb{Z}[\frac{1}{p}]))\not=0$ and they even allow us to write down an explicit short $1$-cycle in $L_1(\Bbb{Z}[\frac{1}{p}])$ which represents a homology class of infinite order (Example \ref{exa:pi1p}).
\subsection{Notation and conventions}
\emph{All rings below are commutative.} For a ring $A$, $A^\times$ denotes its group of units. For a simplicial complex $Y= \{ Y_n\}_n$, $|Y|$ will denote its geometric realization with the weak topology.
\section{The graph $\Gamma(A)$}
Let $A$ be a ring and let $\Gamma(A)$ be the following associated graph: The vertices of $\Gamma(A)$ are equivalence classes, $[u]$, of unimodular rows $u=(u_1,u_2)\in A^2$ under scalar multiplication by units in $A$. The pair $\{ [u],[v]\}$ is an edge in $\Gamma(A)$ if the matrix
\[
M= \left[
\begin{array}{c}
u\\
v\\
\end{array}
\right]
\]
lies in $\gl{2}{A}$; i.e., if $\mathrm{det}(M)\in A^\times$.
Let $Y_0(A)$ denote the set of vertices of $\Gamma(A)$ and let $Y_1(A)$ denote the set of edges. Observe that both of these sets are naturally right $\gl{2}{A}$-sets (or, indeed, $\pgl{2}{A}$-sets) via right multiplication by matrices.
More generally, we let $Y(A)$ denote the \emph{clique complex} of the graph $\Gamma(A)$. This is the simplicial complex whose set of $n$-simplices, $Y_n(A)$, is the set of $(n+1)$-cliques of $\Gamma(A)$.
Recall that an $n$-clique is a set of $n$ distinct vertices $\{ [u_1],\ldots, [u_n]\}$ with the property that every pair is an edge of $\Gamma(A)$.
\subsection{The vertices of $\Gamma(A)$}
We will use the following notations: For any $a\in A$, $a_+$ denotes (the class of) $(a,1)$ in $Y_0(A)$ and $a_-$ denotes the class of $(1,a)$. Thus if $a$ is a unit then $a_-=(a^{-1})_+$ in $Y_0(A)$. Furthermore, we set
\[
0:=0_+=(0,1),\quad \infty: = 0_-=(1,0),\quad 1:=1_+=1_-,\quad -1:= (-1)_+=(-1)_- \quad \mbox{ in } Y_0(A).
\]
The action of $\gl{2}{A}$ on $Y_0(A)$ is transitive: If $u$ is any unimodular row then, by definition, there exists a unimodular row $v$ such that
$
X:=\left[
\begin{array}{c}
u\\
v\\
\end{array}
\right] \in \gl{2}{A}.
$
Then $\infty\cdot X=[u]$. The stabilizer of $\infty$ is the subgroup $\tilde{B}=B(\gl{2}{A})$ of lower triangular matrices in $\gl{2}{A}$. Furthermore, $\spl{2}{A}$ acts transitively on the vertices of $\Gamma(A)$ and the stabilizer of $\infty$ is $B=B(\spl{2}{A}):=\tilde{B}\cap\spl{2}{A}$.
Now let $A$ be an integral domain with field of fractions $F$. We will say that $(0,0)\not=(a,b)\in A\times A$ is a \emph{B\'ezout pair} if the ideal $\an{a,b}$ is principal. Recall that $A$ is said to be a \emph{B\'ezout domain} if every nonzero pair is a B\'ezout pair; equivalently, if every finitely-generated ideal is principal. We will call an element $x\in \projl{F}=F\cup\{ \infty\}$ a \emph{B\'ezout point} if $x=a/b$ for some B\'ezout pair $(a,b)$. (Note that if $(a,b)$ is a B\'ezout pair and if $a/b=a'/b'$ in $\projl{F}$, then $(a',b')$ is also a B\'ezout pair.)
\begin{rem}
Of course, the notion of B\'ezout point or B\'ezout pair depends on the choice of subring $A$ of the field $F$. If necessary, we will refer to an \emph{$A$-B\'ezout point} or an \emph{$A$-B\'ezout pair}.
\end{rem}
\begin{lem}
Let $A$ be an integral domain with field of fractions $F$. There is a natural injective map from $Y_0(A)$ to $\projl{F}$ sending the class of $(a,b)$ to $\frac{a}{b}$, whose image is the set of B\'ezout points.
\end{lem}
\begin{proof}
The map $Y_0(A)\to \projl{F}$, sending the class of $(a,b)$ to $a/b$, is clearly well-defined. Suppose that $(a,b),(c,d)$ are both unimodular and that $a/b=c/d$. Then $ad=bc$. So $a| bc$. Since there exist $r,s\in A$ with $ra+sb=1$, it follows that $a|c$. Similarly, $c|a$. Thus $c=ua$ for some $u\in A^\times$. It follows that $d=ub$ and hence $(a,b)=(c,d)$ in $Y_0(A)$.
Suppose now that $(a,b)$ is a B\'ezout pair, say $\an{a,b}=\an{c}$ for some $c\in A$. Then $a=a'c$ and $b=b'c$ for some $a',b'\in A$ with $(a',b')$ unimodular. So $a/b=a'/b'$ is the image of (the class of) $(a',b')\in Y_0(A)$.
Conversely, suppose that $a/b$ lies in the image of this map. Then $a/b=a'/b'$ for some $a',b'$ such that $(a',b')$ is unimodular. Since $a'|ab'$, it follows that $a'|a$. So $a=ca'$ for some $c\in A$. We deduce that $b=cb'$ also. Hence $\an{a,b}=\an{ca',cb'}=\an{c}\an{a',b'}=\an{c}$ and $(a,b)$ is a B\'ezout pair.
\end{proof}
\begin{cor}
If $A$ is a B\'ezout domain with field of fractions $F$ then the set of vertices, $Y_0(A)$, of $\Gamma(A)$ is naturally identified with $\projl{F}$.
\end{cor}
In general, when $A$ is an integral domain we will identify $Y_0(A)$ with the set of B\'ezout points in $\projl{F}$.
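For example, in the domain $A=\Bbb{Z}[\sqrt{-5}]$, with $F=\Bbb{Q}(\sqrt{-5})$, the ideal $\an{2,1+\sqrt{-5}}$ is not principal, so $(2,1+\sqrt{-5})$ is not a B\'ezout pair; by the parenthetical note and the lemma above, the point $\frac{2}{1+\sqrt{-5}}\in \projl{F}$ is therefore not a B\'ezout point and hence is not a vertex of $\Gamma(A)$.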
\begin{rem}
Recall that $\gl{2}{A}$ acts transitively (on the right) on $Y_0(A)$ and this action is compatible with the natural right action on $\projl{F}$: If $M=\left[ \begin{array}{cc} a&b\\ c&d\\ \end{array} \right]\in \gl{2}{A}$, then $qM=\frac{aq+c}{bq+d}$ for all $q\in \projl{F}$. Thus $Y_0(A)$, the set of B\'ezout points, is precisely the orbit $\infty\cdot\gl{2}{A}\subset \projl{F}$.
\end{rem}
\begin{exa}
\label{exa:field}
If $F$ is a field then $\Gamma(F)$ is the complete graph on $Y_0(F)=\projl{F}$. So $Y_n(F)$ consists of all sets of $n+1$ distinct points of $\projl{F}$. If $F=\F{q}$ is the finite field with $q$ elements then $Y(F)$ is just a $q$-simplex and hence the space $|Y(F)|$ is contractible. For infinite fields $F$, $Y(F)$ is an `infinite' simplex and it is also the case that $|Y(F)|$ is contractible.
\end{exa}
\subsection{The edges of $\Gamma(A)$}
\begin{lem}\label{lem:edge}
Let $A$ be a ring. Then $\spl{2}{A}$ and $\gl{2}{A}$ act transitively on the set $X_1(A)$ of ordered edges of $\Gamma(A)$ and the stabilizer of the ordered edge $( \infty, 0)$ is the group $T$ of diagonal matrices in $\spl{2}{A}$ and $\gl{2}{A}$ respectively.
\end{lem}
\begin{proof}
Let $( [a], [b])$ be an ordered edge and let
$
X:=\left[
\begin{array}{c}
a\\
b\\
\end{array}
\right] \in \gl{2}{A}.
$
Replacing $a$ by $\mathrm{det}(X)^{-1}a$ if necessary, we can suppose $X\in \spl{2}{A}$. Clearly $( \infty\cdot X, 0\cdot X)= ( [a],[b])$. The statement about the stabilizer is clear.
\end{proof}
Let us say that two vertices $[u]$ and $[v]$ of $\Gamma(A)$ are \emph{neighbours} if $\{ [u],[v]\}$ is an edge. Then $[b_1,b_2]$ is a neighbour of $\infty=[1,0]$ if and only if $b_2\in A^\times$ and thus if and only if $[b_1,b_2]= [(b_1b_2^{-1},1)]$ lies in $A_+:=\{ a_+\ |\ a\in A\}$. Likewise, the set of neighbours of $0$ is $A_-:=\{ a_-\ |\ a\in A\}$. Hence the set of common neighbours of $\infty$ and $0$ is the set $A_+\cap A_-=\{ [u,1]\ |\ u\in A^\times\}=\{ [1,u]\ |\ u\in A^\times\}$.
\subsection{$3$-cliques and $4$-cliques in $\Gamma(A)$}\label{sec:clique}
\begin{lem}
\label{lem:3clique}
Let $A$ be a ring. $\gl{2}{A}$ acts transitively on the set $X_2(A)$ of ordered $3$-cliques. The stabilizer of the ordered $3$-clique $(\infty,0,1)$ is the group of scalar matrices in $\gl{2}{A}$.
\end{lem}
\begin{proof}
Let $([a],[b],[c])$ be any ordered $3$-clique. By the proof of Lemma \ref{lem:edge} we can choose $X\in \gl{2}{A}$ such that $[a]\cdot X=\infty, [b]\cdot X = 0$. It follows that $[c]\cdot X$ is a neighbour of both $\infty$ and $0$ and hence $[c]\cdot X = [u,1]$ for some $u\in A^\times$. Now let $X':= X\mathrm{diag}(1,u)$. Then $[a]\cdot X'=\infty, [b]\cdot X'=0$ and $[c]\cdot X'=[u,1]\cdot \mathrm{diag}(1,u)=[u,u]=1$ as required.
\end{proof}
\begin{cor}\label{cor:3clique}
Let $a$, $b$ be unimodular rows and suppose that $\{ [a],[b]\}$ is an edge of $\Gamma(A)$. Then there is a bijection from $A^\times$ to the set of vertices $[c]$ such that $\{ [a], [b],[c]\}$ is a $3$-clique, given by $u\mapsto [ a+ub]$.
\end{cor}
\begin{proof}
Let $X=\left[
\begin{array}{c}
a\\
b\\
\end{array}
\right]\in \gl{2}{A}$. As remarked above, the $3$-cliques containing $\{ \infty,0\}$ are precisely the triples $\{\infty,0,[u,1]\}$ for some $u\in A^\times$. For any $u\in A^\times$, $X$ sends the $3$-clique $(\infty, 0, [1,u])$ to $([a], [b],[a+ub])$ and hence this accounts for all $3$-cliques containing $[a]$ and $[b]$.
\end{proof}
\begin{lem}\label{lem:4clique}
Let $A$ be a ring.
The graph $\Gamma(A)$ contains $4$-cliques if and only if the set $\wn{A}:=\{ u\in A^\times\ |\ 1-u\in A^\times\}$ is non-empty.
\end{lem}
\begin{proof}
Let $\{ x,y,z,w\}$ be a $4$-clique in $\Gamma(A)$. There exists $X\in \spl{2}{A}$ such that $x=\infty\cdot X$ and $y=0\cdot X$. Multiplying by $X^{-1}$ if necessary, we can assume $x=\infty$, $y=0$. Since $z$ is a neighbour of $\infty$, $z\in A_+$. Since $z$ is a neighbour of $0$, $z\in A_-$. Thus $z\in A_+\cap A_-$ so $z=[r,1]$ for some unit $r$. Similarly $w=[s,1]$ for some unit $s$. Since $z$ is a neighbour of $w$, $r-s=t$ is a unit. Thus, letting $u=r^{-1}s\in A^\times$, we have $1-u=r^{-1}t\in A^\times$.
Conversely, if $u,1-u$ are units in $A$, then $\{ \infty,0,1,[u,1]\}$ is a $4$-clique in $\Gamma(A)$, the only non-obvious edge being $\{ 1,[u,1]\}$, for which the relevant determinant is $1-u\in A^\times$.
\end{proof}
\begin{exa}
Since $\wn{\Bbb{Z}}=\emptyset$, $\Gamma(\Bbb{Z})$ has no $4$-cliques and hence $Y(\Bbb{Z})$ has no $3$-simplices. Thus $Y(\Bbb{Z})$ is a $2$-dimensional simplicial complex.
\end{exa}
\subsection{Example: The graph $\Gamma(\Bbb{Z})$ and space $Y(\Bbb{Z})$}\label{sec:gammaz}
Since $\Bbb{Z}$ is a PID, the vertices of $\Gamma(\Bbb{Z})$ are naturally the points of $\projl{\Bbb{Q}}$. The neighbours of the vertex $\infty$ are precisely the integers $\Bbb{Z}\subset\projl{\Bbb{Q}}$ and each $n\in \Bbb{Z}$ is a neighbour of $n+1$. Recall that fractions $p/q$ and $r/s$ (written in lowest terms) are neighbours in $\Gamma(\Bbb{Z})$ if and only if $\left| \frac{p}{q}-\frac{r}{s}\right|=\frac{1}{qs}$.
Thus the structure of $\Gamma(\Bbb{Z})$ and $Y(\Bbb{Z})$ is closely related to the \emph{Farey sequences} $F_n$: Recall that the \emph{$n$th Farey sequence} $F_n$ is the sequence of reduced fractions between (and including) $0$ and $1$ with denominator at most $n$, written in ascending order of size. If $a/b,c/d$ are two successive terms in $F_n$ (called \emph{Farey neighbours}), then $\left| \frac{a}{b}-\frac{c}{d}\right|=\frac{1}{bd}$. Conversely, if the reduced fractions $a/b$ and $c/d$ lie between $0$ and $1$ and satisfy $\left| \frac{a}{b}-\frac{c}{d}\right|=\frac{1}{bd}$, then $a/b$ and $c/d$ are neighbours in $F_n$ where $n=\mathrm{max}(b,d)$. Observe also that $F_n\setminus F_{n-1}=\{ \frac{a}{n}\ |\ (a,n)=1\}$. Thus this set has $\phi(n)$ elements.
We now embed the graph $\Gamma(\Bbb{Z})$ and the space $|Y(\Bbb{Z})|$ in $\Bbb{R}^2$ as follows: Identify $\infty$ with the origin $(0,0)\in \Bbb{R}^2$. Now for any $x\in \Bbb{Q}$, write $x=\frac{p}{q}$ with $(p,q)=1$ and $q>0$ and identify $p/q$ with the point $(p/q,q)\in \Bbb{R}^2$. We join any two points which are neighbours in $\Gamma(\Bbb{Z})$ to get an embedding $\Gamma(\Bbb{Z})\subset \Bbb{R}^2$. Our arguments below will show that none of the resulting edges intersect except at vertices. Recall also that each edge $(x,y)=(p/q,r/s)$ in $\Gamma(\Bbb{Z})$ lies in precisely two $2$-simplices of $Y(\Bbb{Z})$: those with third vertex $x\oplus y:=\frac{p+r}{q+s}$ or $x\ominus y:=\frac{p-r}{q-s}$ (see Corollary \ref{cor:3clique}).
We construct the subspace $\tilde{Y}(\Bbb{Z})\subset\Bbb{R}^2$ as follows: We start with $\Gamma(\Bbb{Z})$ and whenever $\{ x,y,z\}\subset \Bbb{R}^2$ is a $2$-simplex we add all the points in the closed triangle $[x,y,z]$ to $\tilde{Y}(\Bbb{Z})$. We will show that $\tilde{Y}(\Bbb{Z})$ is homeomorphic to $|Y(\Bbb{Z})|$.
For all $n\geq 1$, let $\Gamma(\Bbb{Z},n)\subset \Bbb{R}^2$ be the induced subgraph of $\Gamma(\Bbb{Z})$ consisting of vertices with denominator at most $n$ (including $\infty$, which has denominator $0$).
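For example, $\tfrac{1}{2}=0\oplus 1$ is a vertex of $\Gamma(\Bbb{Z},2)$ but not of $\Gamma(\Bbb{Z},1)$: its neighbours $0=\tfrac{0}{1}$ and $1=\tfrac{1}{1}$ satisfy $\left|\tfrac{1}{2}-\tfrac{0}{1}\right|=\left|\tfrac{1}{2}-\tfrac{1}{1}\right|=\tfrac{1}{2\cdot 1}$, and in the embedding just described the three vertices $0$, $\tfrac{1}{2}$, $1$ become the points $(0,1)$, $(\tfrac{1}{2},2)$, $(1,1)$ of $\Bbb{R}^2$, with the $2$-simplex $\{ 0,\tfrac{1}{2},1\}$ sitting above the edge joining $(0,1)$ and $(1,1)$.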
Let $\tilde{Y}(\Bbb{Z},n)$ be the corresponding subspace of $\tilde{Y}(\Bbb{Z})$. Thus $\Gamma(\Bbb{Z})=\cup_{n=1}^\infty \Gamma(\Bbb{Z},n)$ and $\tilde{Y}(\Bbb{Z})=\cup_{n=1}^\infty \tilde{Y}(\Bbb{Z},n)$.
The vertices of $\Gamma(\Bbb{Z},1)$ are $\infty$ and the integers. It decomposes into $2$-simplices $[\infty,n,n+1]$, $n\in \Bbb{Z}$, and thus $\tilde{Y}(\Bbb{Z},1)$ is the set $\left(\Bbb{R}\times (0,1]\right)\cup \{ (0,0)\}$.
Observe that for any $n\in \Bbb{Z}$, translation by $n$ ($=$ right multiplication by $\left[ \begin{array}{cc} 1&0\\ n&1\\ \end{array} \right] \in \spl{2}{\Bbb{Z}}$) is an automorphism of $\Gamma(\Bbb{Z})\subset \projl{\Bbb{Q}}$. When we embed $\Gamma(\Bbb{Z})$ in $\Bbb{R}^2$, it follows that $\Gamma(\Bbb{Z})\cap H_1$ is invariant under horizontal translation by any $n\in \Bbb{Z}$, where $H_1$ is the closed half-plane $\{ (x,y)\in \Bbb{R}^2\ |\ y\geq 1\}$. Similarly, $\Gamma(\Bbb{Z},n)\cap H_1$ and $\tilde{Y}(\Bbb{Z},n)\cap H_1$ are invariant under horizontal translation by integers. In turn, it follows that the spaces $\tilde{Y}(\Bbb{Z},n+1,n):= \tilde{Y}(\Bbb{Z},n+1)\setminus \tilde{Y}(\Bbb{Z},n)$, $n\geq 1$ are all invariant under horizontal translations by integers.
Now $\tilde{Y}(\Bbb{Z})=\tilde{Y}(\Bbb{Z},1)\cup \left(\bigcup_{n=1}^{\infty}\tilde{Y}(\Bbb{Z},n+1,n)\right)$. The vertices of $\Gamma(\Bbb{Z},n)$ are $\infty$, the Farey sequence $F_n\subset \Bbb{R}^2$ and all horizontal translates of the points of $F_n$ by integer distances.
The points and additional edges of $\Gamma(\Bbb{Z},n+1,n):=\Gamma(\Bbb{Z},n+1)\setminus\Gamma(\Bbb{Z},n)$ arise as follows: Because of the translation-invariance property it is enough to determine the additional points and edges in the vertical strip $[0,1]\times \Bbb{R}$. The existing vertices are the points of $F_n$. Whenever a pair of neighbours $x=p/q<r/s=y$ in this sequence has the property that $q+s=n+1$, then $x\oplus y$ is a point of $F_{n+1}\setminus F_n\subset \Gamma(\Bbb{Z},n+1,n)$ with $y$-coordinate $n+1$. We have $x<x\oplus y <y$, and we get two additional edges $(x,x\oplus y)$ and $(x\oplus y,y)$ and an additional $2$-simplex $\{ x,x\oplus y,y\}$. Recall that the edge $(x,y)$ bounds two $2$-simplices. The other has vertex $x\ominus y$ which lies strictly below $x$ and $y$ since it has denominator $|q-s|<q,s$.
In this way, the new edges which are added in $\tilde{Y}(\Bbb{Z},n+1,n)$ do not intersect the interior of any existing edges and the new simplices do not intersect the interior of any existing simplices. Thus we see by induction that each of the spaces $|Y(\Bbb{Z},n)|$ embeds in $\Bbb{R}^2$ with image $\tilde{Y}(\Bbb{Z},n)$. Hence $|Y(\Bbb{Z})|$ embeds in $\Bbb{R}^2$ with image $\tilde{Y}(\Bbb{Z})=\cup \tilde{Y}(\Bbb{Z},n)$.
Note further that since $x<x\oplus y <y$, given any point $z$ in the closed simplex $\{ x,x\oplus y,y\}$ the vertical line segment joining $z$ to the edge $(x,y)$ is fully contained in this $2$-simplex. It follows that $\tilde{Y}(\Bbb{Z},n+1)$ deformation retracts onto $\tilde{Y}(\Bbb{Z},n)$ by moving along vertical lines, and hence that the space $\tilde{Y}(\Bbb{Z})$ deformation retracts onto the contractible space $\tilde{Y}(\Bbb{Z},1)$ by restricting to $\tilde{Y}(\Bbb{Z})$ the obvious contraction of $\Bbb{R}^2$ to $\{ (x,y)\ |\ y\leq 1\}$ along vertical lines. It follows that $\tilde{Y}(\Bbb{Z})$, and hence $|Y(\Bbb{Z})|$, is a contractible space.
Finally, we describe explicitly the space $\tilde{Y}(\Bbb{Z})\subset \Bbb{R}^2$: Let $(x,t)\in \Bbb{R}^2$ with $t>0$.
\begin{enumerate}
\item If $x=\frac{p}{q}$ is rational (in lowest terms, $q>0$), then $(x,t)\in \tilde{Y}(\Bbb{Z})$ if and only if $t\leq q$. Let $L_{p/q}$ be the open vertical half-line $ \{ (\frac{p}{q},t)\ | \ t>q\}$. Then $L_{p/q}\cap \tilde{Y}(\Bbb{Z})=\emptyset$.
\item Suppose now that $x$ is irrational. Choose rationals $y=p/q$ and $z=r/s$ satisfying (i) $q,s>t$, (ii) $y$ and $z$ are neighbours in the Farey sequence $F_n$ where $n=\mathrm{max}(q,s)$ and (iii) $y<x<z$. Then $(x,t)$ lies vertically below the edge joining $(y,q)$ and $(z,s)$ in $\tilde{Y}(\Bbb{Z},n)$ and hence $(x,t)$ itself lies in $ \tilde{Y}(\Bbb{Z},n)\subset \tilde{Y}(\Bbb{Z})$.
\end{enumerate}
Putting these two observations together, we conclude that
\[
\tilde{Y}(\Bbb{Z})=\{ (0,0)\}\cup \left( \left( \Bbb{R}\times (0,\infty)\right)\setminus \bigcup_{p/q} L_{p/q}\right).
\]
\section{Path components of $\Gamma(A)$ and the condition ($\mathrm{GE}_2$)}\label{sec:path}
Let $A$ be a ring. Given $a\in A$, we consider the following matrices in $\spl{2}{A}$:
\[
E_{2,1}(a):=\left[
\begin{array}{ll}
1&0\\
a&1\\
\end{array}
\right],\ E_{1,2}(a):=
\left[
\begin{array}{ll}
1&a\\
0&1
\end{array}
\right],\ E(a):=
\left[
\begin{array}{ll}
a&1\\
-1&0
\end{array}
\right]=W\cdot E_{2,1}(a)= E_{1,2}(-a)\cdot W
\]
where
\[
W:=E(0)=
\left[
\begin{array}{ll}
0&1\\
-1&0
\end{array}
\right]= E_{1,2}(1)E_{2,1}(-1)E_{1,2}(1).
\]
Let $E_2(A)$ denote the subgroup of $\spl{2}{A}$ generated by the set of elementary matrices $E_{i,j}(a)$, $a\in A$ for $i\not= j\in \{ 1,2\}$. Let $\mathrm{GE}_2(A)$ denote the subgroup of $\gl{2}{A}$ generated by $E_2(A)$ and the group $T=D_2(A)$ of invertible diagonal matrices. Thus all the above matrices belong to $E_2(A)$.
Furthermore, for any $a\in A$ we let
\[
S(a):=\left[
\begin{array}{ll}
a&1\\
1&0
\end{array}
\right]=
\left[
\begin{array}{ll}
1&0\\
0&-1
\end{array}
\right]\cdot E(a)\in \gl{2}{A}.
\]
\begin{lem}\label{lem:path}
Let $A$ be a ring and let $x_0,x_1,\ldots,x_n$ be a path in $\Gamma(A)$. Fix $X\in \gl{2}{A}$ with $x_0=\infty\cdot X$. Then there exist unique $a_1,\ldots,a_n\in A$ satisfying
\[
x_i=\infty\cdot E(a_i)E(a_{i-1})\cdots E(a_1)X\mbox{ for }i=1,\ldots, n.
\]
Conversely, given $X\in\gl{2}{A}$ and $a_1,\ldots, a_n\in A$, if we set $x_0:=\infty \cdot X$ and $x_i:=\infty\cdot E(a_i)\cdots E(a_1)X$ for $1\leq i\leq n$, then $x_0,x_1,\ldots,x_n$ is a path in $\Gamma(A)$.
\end{lem}
\begin{proof}
For the first statement, we proceed by induction on $n$. When $n=1$, we have that $x_1$ is a neighbour of $x_0=\infty\cdot X$ and hence $x_1\cdot X^{-1}$ is a neighbour of $x_0\cdot X^{-1}=\infty$. It follows that $x_1\cdot X^{-1}=[a_1,1]=\infty\cdot E(a_1)$ for some uniquely determined $a_1\in A$. Thus $x_1=\infty\cdot E(a_1)X$ as required. For the inductive step, simply replace $X$ by $X_n:= E(a_n)\cdots E(a_1)X$.
The converse statement follows from the observation that if $Y\in \gl{2}{A}$ and $a\in A$, then $\{ \infty, \infty\cdot E(a)\}$ is an edge in $\Gamma(A)$, and hence so is $\{ \infty\cdot Y,\infty\cdot E(a)Y\}$.
\end{proof}
\begin{rem}
Since for any $a\in A$ we have $[a,1]=\infty\cdot E(a)=\infty\cdot S(a)$, we could have stated Lemma \ref{lem:path} using the matrices $S(a)$ rather than the matrices $E(a)$. One then finds $x_i=\infty\cdot E(a_i)\cdots E(a_1)X= \infty\cdot S(b_i)\cdots S(b_1)X$ where $b_j=(-1)^{j-1}a_j$ for all $j$.
\end{rem}
\begin{thm}\label{thm:pi0}
Let $A$ be a ring. Then there is a natural bijection of right $\gl{2}{A}$-sets $\pi_0(\Gamma(A))\leftrightarrow \mathrm{GE}_2(A)\backslash \gl{2}{A}$.
\end{thm}
\begin{proof}
For $u,v\in A^\times$, let $D=\mathrm{diag}(u,v), D'=\mathrm{diag}(v,u)\in \gl{2}{A}$. Then for any $a\in A$, we have $E(a)D=D'E(v^{-1}au)$. It follows that any element of $\mathrm{GE}_2(A)$ can be written as a product $DE$ where $D$ is a diagonal matrix and $E\in E_2(A)$.
For $x\in Y_0(A)$, let $p(x)$ denote the path component of $x$ in $\Gamma(A)$.
Suppose now that vertices $x$ and $y$ are in the same path component of $\Gamma(A)$. Choose $X,Y\in\gl{2}{A}$ such that $x=\infty\cdot X$ and $y=\infty\cdot Y$. By Lemma \ref{lem:path} there exist $a_1,\ldots,a_n\in A$ such that
\[
y=\infty\cdot E(a_n)\cdots E(a_1)X=\infty\cdot Y.
\]
It follows that $E(a_n)\cdots E(a_1)XY^{-1}$ stabilizes $\infty$ and hence belongs to $\tilde{B}\subset \mathrm{GE}_2(A)$. Thus $\mathrm{GE}_2(A)X=\mathrm{GE}_2(A)Y$ in $\mathrm{GE}_2(A)\backslash \gl{2}{A}$.
We have shown that there is a well-defined map of right $\gl{2}{A}$-sets
\[
\pi_0(\Gamma(A))\to \mathrm{GE}_2(A)\backslash \gl{2}{A},\ p(x)\mapsto \mathrm{GE}_2(A) X
\]
where $X\in\gl{2}{A}$ is any matrix such that $x=\infty\cdot X$.
Conversely, the map
\[
\gl{2}{A}\to \pi_0(\Gamma(A)),\ X\mapsto p(\infty\cdot X)
\]
induces a well-defined map $\mathrm{GE}_2(A)\backslash \gl{2}{A}\to \pi_0(\Gamma(A))$: Let $Y\in \mathrm{GE}_2(A)$. By our preliminary remarks, $Y=DE$ where $D$ is diagonal and $E\in E_2(A)$. Thus $\infty\cdot YX=\infty \cdot DEX=\infty \cdot EX$ lies in $p(\infty\cdot X)$ by Lemma \ref{lem:path} again. This map is now clearly inverse to the map defined in the previous paragraph.
\end{proof}
\begin{rem}
For any ring $A$, the inclusion $\spl{2}{A}\to \gl{2}{A}$ induces a bijection of right $\spl{2}{A}$-sets $E_2(A)\backslash \spl{2}{A}\leftrightarrow \mathrm{GE}_2(A)\backslash \gl{2}{A}$.
\end{rem}
We recall that a ring $A$ \emph{satisfies condition ($\mathrm{GE}_2$)}, or \emph{is a $\mathrm{GE}_2$-ring}, if $\mathrm{GE}_2(A)=\gl{2}{A}$ (Cohn, \cite{cohn:gln}). Since $E_2(A)=\spl{2}{A}\cap \mathrm{GE}_2(A)$ and $\gl{2}{A}=\spl{2}{A}\cdot D_2(A)$, this condition is equivalent to $E_2(A)=\spl{2}{A}$.
\begin{cor} \label{cor:pi0}
Let $A$ be a ring. Then $A$ is a $\mathrm{GE}_2$-ring if and only if the graph $\Gamma(A)$ is path-connected.
\end{cor}
We recall how these properties are related to the classical Euclidean algorithm:
\begin{lem}\label{lem:euclid}
Let $A$ be a ring and let $(a,b)$ be a unimodular row. The following are equivalent:
\begin{enumerate}
\item There exist $a_0,\ldots,a_n\in A$ such that $[a,b]= \infty\cdot S(a_n)S(a_{n-1})\cdots S(a_0)$.
\item There exist $a_0,\ldots, a_n,r_0,\ldots, r_{n-1}\in A$ satisfying the following:\\
Let $r_{-2}:=a,r_{-1}:=b,r_n:=0$. Then
\[
r_{k-2}=a_kr_{k-1}+r_k \mbox{ for } 0\leq k\leq n.
\]
(Following the terminology of \cite{czz}, we will say that the pair $(a,b)$ \emph{satisfies a weak Euclidean algorithm}.)
\end{enumerate}
\end{lem}
\begin{proof}
We begin by noting that for $a\in A$ and $(x,y)\in A^2$, we have
\[
(x,y)S(a)^{-1}=(x,y)\left[
\begin{array}{ll}
0&1\\
1&-a
\end{array}
\right]=(y,x-ay).
\]
Thus $(z,w)=(x,y)S(a)^{-1}$ if and only if $z=y$ and $x=ay+w$.
Suppose now that (1) holds. Lifting the equation to $A^2$ we have
\[
(a,b)=(u,0)\cdot S(a_n)S(a_{n-1})\cdots S(a_0)
\]
for some unit $u$ in $A$. Now define $(r_{-2},r_{-1}):=(a,b)$ and $(r_{k-1},r_k):=(a,b)\cdot S(a_0)^{-1}\cdots S(a_k)^{-1}$ for $0\leq k\leq n$. These definitions are consistent since they satisfy $(r_{k-1},r_k)=(r_{k-2},r_{k-1})S(a_k)^{-1}$. Furthermore it follows that $r_{k-2}=a_kr_{k-1}+r_k$.
Finally note that $(r_{n-1},r_n)=(a,b)\cdot S(a_0)^{-1}\cdots S(a_n)^{-1}=(u,0)$ so that $r_n=0$ (and $r_{n-1}=u$), as required.
Conversely, suppose that (2) holds. Then we have that $(r_{-2},r_{-1})=(a,b)$ and $(r_{k-1},r_k)=(r_{k-2},r_{k-1})S(a_k)^{-1}$ for $0\leq k\leq n$. It follows that
\[
(a,b)\cdot S(a_0)^{-1}\cdots S(a_n)^{-1}=(r_{n-1},r_n)=(r_{n-1},0)
\]
where $r_{n-1}$ is necessarily a unit. Hence (1) holds (since then $\infty =[r_{n-1},0]$).
\end{proof}
\begin{prop}\label{prop:euclid}
For any ring $A$, the following are equivalent:
\begin{enumerate}
\item $\Gamma(A)$ is path-connected.
\item For any unimodular row $(a,b)$, there exist $a_0,\ldots,a_n\in A$ such that \\
$[a,b]= \infty\cdot S(a_n)S(a_{n-1})\cdots S(a_0)$.
\item Any unimodular pair $(a,b)$ satisfies a weak Euclidean algorithm.
\item $A$ is a $\mathrm{GE}_2$-ring.
\end{enumerate}
\end{prop}
\begin{proof}
(1) and (4) are equivalent by Corollary \ref{cor:pi0}. (1) and (2) are equivalent by Lemma \ref{lem:path} (and the remark that follows it). (2) and (3) are equivalent by Lemma \ref{lem:euclid}.
\end{proof}
\begin{exa}
Dennis, Magurn and Vaserstein (\cite{dmv}, 1984) have shown that $\Bbb{Z}[C_n]$ is a $\mathrm{GE}_2$-ring for all $n\geq 1$, where $C_n$ denotes the cyclic group of order $n$. Thus the graph $\Gamma(\Bbb{Z}[C_n])$ is path-connected.
\end{exa}
\begin{exa}
If $\mathcal{O}$ is a ring of $S$-integers in a number field $F$ and if the group of units $\mathcal{O}^\times$ is infinite then $\mathcal{O}$ is a $\mathrm{GE}_2$-ring (Vaserstein \cite{vas}, 1972), and hence $\Gamma(\mathcal{O})$ is path-connected.
\end{exa}
When $A$ is an integral domain the conditions of Proposition \ref{prop:euclid} are also equivalent to the existence of certain continued fraction expansions:
\begin{rem}
Since $x\cdot S(a)=a+\frac{1}{x}$ for $x\in \projl{F}$, we have $\frac{a}{b}=\infty\cdot S(a_n)S(a_{n-1})\cdots S(a_0)$ if and only if
\[
\frac{a}{b}= a_0+\cfrac{1}{a_{1}+\cfrac{1}{a_{2}+\cfrac{1}{a_{3}+\cfrac{1}{\ddots+\cfrac{1}{a_n}}}}}\quad\mbox{ in }\projl{F}.
\]
Thus the B\'ezout point $a/b$ lies in the path component of $\infty\in \Gamma(A)$ if and only if $a/b$ admits a finite continued fraction expansion (with entries in $A$). Of course, it also follows that any point in $\projl{F}$ admitting a finite continued fraction expansion is necessarily a B\'ezout point, since it lies in the orbit $\infty\cdot \gl{2}{A}$.
\end{rem}
We thus have:
\begin{cor}\label{cor:pathconn}
Let $A$ be an integral domain with field of fractions $F$. The following are equivalent:
\begin{enumerate}
\item $\Gamma(A)$ is path-connected.
\item For every B\'ezout point $x\in \projl{F}$ there exist $a_0,\ldots,a_n\in A$ with\\ $x=\infty\cdot S(a_n)S(a_{n-1})\cdots S(a_0)$.
\item Every B\'ezout pair satisfies a weak Euclidean algorithm.
\item $A$ is a $\mathrm{GE}_2$-ring.
\item Every B\'ezout point in $\projl{F}$ admits a finite continued fraction expansion with entries in $A$.
\end{enumerate}
\end{cor}
\begin{cor}\label{cor:euclid}
If $A$ is a Euclidean domain with field of fractions $F$ then $Y_0(A)=\projl{F}$ and $\Gamma(A)$ is path-connected.
\end{cor}
\begin{proof}
A Euclidean domain is a principal ideal domain, hence a B\'ezout domain. Any non-zero pair of elements of $A$ satisfies a weak Euclidean algorithm using the division algorithm in the usual way.
\end{proof}
On the other hand, there exist PIDs $A$ for which $\Gamma(A)$ is not path-connected:
\begin{exa}[Cohn, \cite{cohn:gln}]
\label{exa:cohn}
The ring $A=\Bbb{Z}\left[ \frac{1+\sqrt{-19}}{2}\right]$ is a principal ideal domain but is not a $\mathrm{GE}_2$-ring. Thus $\Gamma(A)$ is not path-connected.
\end{exa}
\begin{exa}[Cossu, Zanardo, Zannier, \cite{czz}]\label{exa:czz}
Let $k$ be a field. Let $\mathcal{C}\subset \mathbb{P}^2$ be a smooth projective curve of genus $0$ over $k$. Let $\mathcal{C}_0:=\mathcal{C}\cap \mathbb{A}^2$. If $\mathcal{C}(k)=\emptyset$ then the coordinate ring $k[\mathcal{C}_0]$ is a PID which is not a $\mathrm{GE}_2$-ring (\cite[Corollary 3.6]{czz}).
For example, take $k=\Bbb{R}$ and let $\mathcal{C}$ be the curve with homogeneous equation $x^2+y^2+z^2=0$. Then the ring $\Bbb{R}[\mathcal{C}_0]\cong \Bbb{R}[x,y]/\an{ x^2+y^2+1}$ is a PID but not a $\mathrm{GE}_2$-ring.
\end{exa}
\begin{exa}[Cossu, Zanardo, Zannier, \cite{czz} again]\label{exa:czz2}
Let $k$ be a field. Let $\mathcal{C}$ be a smooth curve of genus $\geq 1$ over $k$ with a unique point at infinity. If the coordinate ring $k[\mathcal{C}_0]$ is a PID then it is not a $\mathrm{GE}_2$-ring (\cite[Theorem 3.11, Corollary 3.12]{czz}).
For example, let $k$ be a perfect field. Let $f(x)\in k[x]$ be a cubic with three distinct roots. Let $E\subset \mathbb{P}^2$ be the elliptic curve $y^2=f(x)$. It has a unique point, $P_\infty$ say, at infinity. Suppose that $E(k)=\{ P_\infty\}$. Then the affine coordinate ring $k[E_0]=k[x,y]/\an{y^2-f(x)}$ is a PID which is not a $\mathrm{GE}_2$-ring.
\end{exa}
\section{Rings which are universal for $\mathrm{GE}_2$ and the group $C(A)$}\label{sec:univ}
Cohn \cite{cohn:gln} notes some universal relations satisfied by elementary matrices: Let $A$ be a ring. For $u\in A^\times$, let $D(u):=\mathrm{diag}(u,u^{-1})\in \spl{2}{A}$. Observe that $D(u)=E(-u)E(-u^{-1})E(-u)\in E_2(A)$. Then we have
\begin{eqnarray*}
D(u)D(v)&=& D(uv) \mbox{ for all } u,v\in A^\times, \\
E(a)E(0)E(b)&=&-E(a+b)=D(-1)E(a+b)\mbox{ for all }a,b\in A,\\
D(u)E(a)D(u)&=& E(u^2a)\mbox{ for all } a\in A,u\in A^\times.\\
\end{eqnarray*}
A ring $A$ is said to be \emph{universal for $\mathrm{GE}_2$} if all relations among the elementary matrices in $E_2(A)$ are consequences of these three (families of) relations. We now spell out this condition more explicitly.
Let $C(A)$ denote the group with generators $\epsilon(a),a\in A$ subject to the following three relations:
\begin{enumerate}
\item For $u\in A^\times$, let $h(u):=\epsilon(-u)\epsilon(-u^{-1})\epsilon(-u)$. Then $h(u)h(v)=h(uv)$ for all $u,v\in A^\times$.
\item $\epsilon(a)\epsilon(0)\epsilon(b)=h(-1)\epsilon(a+b)$ for all $a,b\in A$.
\item $h(u)\epsilon(a)h(u)=\epsilon(u^2a)$ for all $u\in A^\times$, $a\in A$.
\end{enumerate}
Because of Cohn's relations above, there is a well-defined group homomorphism $\psi:C(A)\to \spl{2}{A}$ sending $\epsilon(a)$ to $E(a)$ (and hence $h(u)$ to $D(u)$). The image of $\psi$ is clearly $E_2(A)$ and thus $A$ is a $\mathrm{GE}_2$-ring if and only if $\psi$ is surjective. Furthermore, $A$ is universal for $\mathrm{GE}_2$ if and only if $\psi$ is injective (and hence an isomorphism onto $E_2(A)$). We will denote $\ker{\psi}$ by $U(A)$. So a ring $A$ is universal for $\mathrm{GE}_2$ if and only if $U(A)=\{ 1\}$.
Relation (1) implies $h(1)=1$ in $C(A)$ and hence $h(-1)^2=h(1)=1$. Furthermore $h(-1)$ is central in $C(A)$ by (3): taking $u=-1$ there gives $h(-1)\epsilon(a)h(-1)=\epsilon(a)$ for all $a\in A$, and since $h(-1)^2=1$ it follows that $h(-1)$ commutes with each of the generators $\epsilon(a)$.
We note also that letting $a=b=0$ in (2) gives the identity $\epsilon(0)^2=h(-1)$ and hence $\epsilon(0)^4=1$. For any $a\in A$, letting $b=-a$ in (2) gives $\epsilon(a)\epsilon(0)\epsilon(-a)=h(-1)\epsilon(0)=\epsilon(0)^3=\epsilon(0)^{-1}$ and hence
\[
\epsilon(a)^{-1}=\epsilon(0)\epsilon(-a)\epsilon(0)\mbox{ in } C(A)
\]
for all $a\in A$.
For $a\in A$, let $y(a):=\epsilon(0)^3\epsilon(a)=h(-1)\epsilon(0)\epsilon(a)\in C(A)$. Thus $\psi(y(a))=E(0)^3E(a)=E_{2,1}(a)\in B:=B(\spl{2}{A})$.
\begin{lem}\label{lem:ya}
Let $A$ be a ring.
\begin{enumerate}
\item $y(a)y(b)=y(a+b)$ in $C(A)$ for all $a,b\in A$.
\item For all $u\in A^\times$, $a\in A$ we have
\[
y(a)^{h(u)}:=h(u)^{-1}y(a)h(u)=y(u^2a) \mbox{ in }C(A).
\]
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item Let $a,b\in A$. Then
\begin{eqnarray*}
y(a)y(b)&=& h(-1)\epsilon(0)\epsilon(a)h(-1)\epsilon(0)\epsilon(b)\\
&=& \epsilon(0)\epsilon(a)\epsilon(0)\epsilon(b)\mbox{ (since $h(-1)$ is central, $h(-1)^2=1$)}\\
&=& \epsilon(0)h(-1)\epsilon(a+b)=y(a+b)\mbox{ (using relation (2))}.
\end{eqnarray*}
\item Let $u\in A^\times$, $a\in A$. Then
\begin{eqnarray*}
h(u)^{-1}y(a)h(u)&=& h(u^{-1})h(-1)\epsilon(0)\epsilon(a)h(u)\\
&=&h(-1)h(u^{-1})\epsilon(0)h(u^{-1})h(u)\epsilon(a)h(u) \mbox{ (using relation (1))}\\
&=& h(-1)\epsilon(0)\epsilon(u^2a)=y(u^2a) \mbox{ (using relation (3))}.
\end{eqnarray*}
\end{enumerate}
\end{proof}
More generally, for any $u\in A^\times$, $a\in A$ we let $\beta(u,a):=h(u)y(ua)\in C(A)$. Thus
\[
\psi(\beta(u,a))=D(u)E_{2,1}(ua)=
\left[
\begin{array}{cc}
u&0\\
a&u^{-1}\\
\end{array}
\right]\in B.
\]
\begin{lem}\label{lem:b}
Let $\mathbb{B}$ denote the subset $\{ \beta(u,a)\in C(A)\ |\ u\in A^\times, a\in A\}$ of $C(A)$. Then $\mathbb{B}$ is a subgroup and $\psi:\mathbb{B}\to B$ is a group isomorphism.
\end{lem}
\begin{proof}
Let $u,v\in A^\times$, $a,b\in A$. Then
\begin{eqnarray*}
\beta(u,a)\beta(v,b)&=&h(u)y(ua)h(v)y(vb)\\
&=&h(u)h(v)y(uav^2)y(vb) \mbox{ by Lemma \ref{lem:ya} (2)}\\
&=& h(uv)y(uav^2+vb)\mbox{ by Lemma \ref{lem:ya} (1) }\\
&=& h(uv)y(uv(av+u^{-1}b))\\
&=&\beta(uv,av+u^{-1}b).\\
\end{eqnarray*}
It follows that $\mathbb{B}$ is closed under multiplication in $C(A)$. Furthermore this formula tells us that $\beta(u,a)^{-1}=\beta(u^{-1},-a)\in \mathbb{B}$ for all $u\in A^\times$, $a\in A$. So $\mathbb{B}$ is a subgroup of $C(A)$ and $\psi:\mathbb{B}\to B$ is a surjective homomorphism. Of course $\psi(\beta(u,a))=D(u)E_{2,1}(ua)=1$ if and only if $u=1$ and $a=0$. Thus $\ker{\psi}\cap \mathbb{B}=\{ \beta(1,0)\}=\{ 1\}$. So $\psi$ is an isomorphism as required.
\end{proof}
We will regard the inverse isomorphism $B\to \mathbb{B}$ and the resulting embedding $B\to C(A)$ as the standard embedding of $B$ in $C(A)$ and will denote this map by $\mathrm{st}$. Thus, in particular, $\mathrm{st}(E_{2,1}(a))=y(a)$ for all $a\in A$ and $\mathrm{st}(D(u))=h(u)$ for all $u\in A^\times$.
\begin{cor}\label{cor:b}
The map $\mathbb{B}\to A^\times$, $\beta(u,a)\mapsto u$ is a well-defined group homomorphism whose kernel is the subgroup $\{ y(a)\ |\ a\in A\}$ which is isomorphic to the additive group $A$.
\end{cor}
If $\beta\in \mathbb{B}$, we will let $u(\beta)$ denote the associated unit.
Recall that $\spl{2}{A}$ acts on $Y_0(A)$ and hence $B$ acts by restriction. Furthermore, if $a=[a,1]\in A_+$, then $a\cdot X\in A$ for all $X\in B$ (since $B$ stabilizes $\infty$ and sends neighbours to neighbours).
Via the isomorphism $\psi$ there is thus a natural action of $\mathbb{B}$ on $A$; i.e., $a\cdot \beta:= a\cdot\psi(\beta)$ for all $\beta\in \mathbb{B}$ (and hence also, $a\cdot Z=a\cdot \stand{Z}$ for all $Z\in B$). Explicitly we have:
\[
a\cdot \beta(u,b)=au^2+bu\mbox{ for all }a,b\in A, u\in A^\times.
\]
The following lemma will play a key role below.
\begin{lem}\label{lem:key}
Let $a\in A$, $\beta\in \mathbb{B}$. Then
\[
\epsilon(a\cdot \beta)=h(u(\beta))\epsilon(a)\beta \mbox{ in } C(A).
\]
\end{lem}
\begin{proof}
Let $u=u(\beta)$. So $\beta=\beta(u,b)=h(u)y(ub)$ for some $b\in A$. Thus
\begin{eqnarray*}
\epsilon(a)\beta&=&\epsilon(a)h(u)y(ub)\\
&=& h(u^{-1})\epsilon(u^2a)y(ub) \mbox{ by (3)}\\
&=& h(u^{-1})\epsilon(u^2a)h(-1)\epsilon(0)\epsilon(ub)\\
&=& h(u^{-1})\epsilon(u^2a+ub) \mbox{ by (2)}\\
&=& h(u)^{-1}\epsilon(a\cdot\beta)
\end{eqnarray*}
as required.
\end{proof}
Applying the map $\psi$ (or, by direct calculation) we have
\begin{cor}\label{cor:key1}
Let $a\in A$, $X\in B $. Then $a\cdot X\in A$ and
\[
E(a\cdot X)=D(u(X))E(a)X\mbox{ in } E_2(A),
\]
where the unit $u(X)$ is the $(1,1)$-entry of $X$.
\end{cor}
\begin{prop}\label{prop:key}
Let $a_1,\ldots,a_n\in A$. Let $\beta\in \mathbb{B}\subset C(A)$ and let $u=u(\beta)$. Then there exist unique $b_1,\ldots,b_n\in A$ satisfying
\[
\epsilon(b_i)\cdots\epsilon(b_1)=h(u)^{(-1)^{i-1}}\epsilon(a_i)\cdots\epsilon(a_1)\beta \mbox{ in }C(A) \mbox{ for all }i.
\]
Furthermore, we have
\[
b_1=a_1\cdot \beta\mbox{ and }b_i=u^{(-1)^{i-1}2}a_i\mbox{ for all }i\geq 2.
\]
\end{prop}
\begin{proof}
The uniqueness part of the statement follows from the observation that $\epsilon(b)=\epsilon(b')$ in $C(A)$ implies $b=b'$ in $A$. (Apply the homomorphism $\psi$ and this follows from the corresponding statement for the elements $E(b)$ in $\spl{2}{A}$.) Thus it is enough to verify that the elements given in the final line of the statement of the proposition satisfy the relevant identities in $C(A)$.
We will proceed by induction on $n$. The case $n=1$ is just Lemma \ref{lem:key}. Suppose the result is known for a given $n\geq 1$ and that $a_1,\ldots,a_n,a_{n+1}\in A$ are given. Let $b_1,\ldots,b_{n+1}$ be the elements defined in the final line of the proposition. Then
\begin{eqnarray*}
\epsilon(b_{n+1})\epsilon(b_n)\cdots \epsilon(b_1)&=&\epsilon(u^{(-1)^n2}a_{n+1})h(u)^{(-1)^{n-1}}\epsilon(a_n)\cdots\epsilon(a_1)\beta\\
&=& h(u)^{(-1)^n}\left(h(u)^{(-1)^{n-1}}\epsilon(u^{(-1)^n2}a_{n+1})h(u)^{(-1)^{n-1}}\right)\epsilon(a_n)\cdots\epsilon(a_1)\beta\\
&=& h(u)^{(-1)^n}\epsilon(a_{n+1})\epsilon(a_n)\cdots\epsilon(a_1)\beta\\
\end{eqnarray*}
as required, using defining relation (3) of $C(A)$.
\end{proof}
Applying the homomorphism $\psi$ we deduce the corresponding statement for the group $\spl{2}{A}$:
\begin{cor}\label{cor:key2}
Let $a_1,\ldots,a_n\in A$. Let $Z\in B\subset \spl{2}{A}$ and let $u=u(Z)$. Then there exist unique $b_1,\ldots,b_n\in A$ satisfying
\[
E(b_i)\cdots E(b_1)=D(u)^{(-1)^{i-1}}E(a_i)\cdots E(a_1)Z \mbox{ in }\spl{2}{A}\mbox{ for all }i.
\]
Furthermore, we have
\[
b_1=a_1\cdot Z\mbox{ and }b_i=u^{(-1)^{i-1}2}a_i\mbox{ for all }i\geq 2.
\]
\end{cor}
\section{The Edge-Path Groupoid of the simplicial complex $Y(A)$}
\subsection{The edge-path groupoid}
We begin by reviewing the notion of the \emph{edge-path groupoid} of a simplicial complex $Y$ (Spanier \cite[Chapter 4]{spanier}).
A path $\mathbf{p}$ in $Y$ is an $(n+1)$-tuple $(x_0,x_1,\ldots, x_n)$ for some $n\geq 0$, where the $x_i$ are vertices (or $0$-simplices) and, for $i=0,\ldots,n-1$, each $\{ x_i,x_{i+1}\}$ is an edge (or $1$-simplex) of $Y$. If $\mathbf{p}=(x_0,\ldots, x_n)$ is a path, we set $i(\mathbf{p}):=x_0$, the initial point of the path, and $t(\mathbf{p})=x_n$, the terminal point. If $\mathbf{p}=(x_0,\ldots,x_n)$ and $\mathbf{q}=(y_0,\ldots,y_m)$ are two paths satisfying $t(\mathbf{p})=i(\mathbf{q})$ (i.e., $x_n=y_0$) then we can form the \emph{concatenation} $\mathbf{p}\star\mathbf{q}:=(x_0,\ldots,x_n,y_1,\ldots, y_m)$.
We consider the equivalence relation generated by the following two relations (which we will refer to as the \emph{homotopy relations}):
\begin{enumerate}
\item $(x_0,\ldots, x_i,x_{i+1},x_{i+2},\ldots,x_n) \sim (x_0,\ldots,x_i,x_{i+3},\ldots,x_n)$ for $0\leq i\leq n-2$ whenever $x_i=x_{i+2}$
\item $(x_0,\ldots, x_i,x_{i+1},x_{i+2},\ldots,x_n)\sim (x_0,\ldots,x_i,x_{i+2},\ldots, x_n)$ for $0\leq i\leq n-2$ whenever $\{ x_i,x_{i+1},x_{i+2}\}$ is a $2$-simplex of $Y$.
\end{enumerate}
We denote the equivalence class of $\mathbf{p}=(x_0,\ldots,x_n)$ by $[\mathbf{p}]=[x_0,\ldots,x_n]$. Note that this equivalence relation preserves initial and terminal points of paths and that concatenation of equivalence classes is well-defined by $[\mathbf{p}]\star[\mathbf{q}]=[\mathbf{p}\star\mathbf{q}]$ if $t(\mathbf{p})=i(\mathbf{q})$.
\subsection{Algebraic representation of paths and concatenation}
Let now $A$ be a ring. We have already noted, in Lemma \ref{lem:path} above, that a path $(x_0,\ldots, x_n)$ in $\Gamma(A)$ (or, equivalently, in $Y(A)$) is determined by an element $X\in \spl{2}{A}$ with $\infty\cdot X=x_0$ together with a sequence $a_1,\ldots, a_n$ of elements of $A$, uniquely determined by $\mathbf{p}$ and the choice of $X$. In this case, we have $x_i=\infty \cdot E(a_i)\cdots E(a_1)X$ for each $i\geq 1$.
Since the stabilizer of $\infty$ in $\spl{2}{A}$ is the group $B$ of lower-triangular matrices, $X$ is determined up to left multiplication by an element of $B$. Corollary \ref{cor:key2} allows us to track the effect that changing the choice of $X$ has on the sequence of elements $a_1,\ldots, a_n$. It immediately implies:
\begin{prop}\label{prop:changeX}
Let $\mathbf{p}=(x_0,\ldots,x_n)$ be a path in $Y(A)$. Let $X\in \spl{2}{A}$ such that $\infty\cdot X=x_0$. Let $Y\in \spl{2}{A}$ satisfy $Z:=X\cdot Y^{-1}\in B$. Let $a_1,\ldots,a_n$ be the elements associated to $X$ specifying the path $\mathbf{p}$ and let $b_1,\ldots, b_n$ be the corresponding elements associated to $Y$. Let $u=u(Z)$. Then
\[
b_1=a_1\cdot Z \mbox{ and }b_i=u^{(-1)^{i-1}2}a_i\mbox{ for all }i\geq 2.
\]
\end{prop}
\begin{proof}
By Corollary \ref{cor:key2} we have
\[
E(b_i)\cdots E(b_1)=D(u)^{(-1)^{i-1}}E(a_i)\cdots E(a_1)Z\mbox{ in }\spl{2}{A}\mbox{ for all }i
\]
and hence
\[
E(b_i)\cdots E(b_1)Y=D(u)^{(-1)^{i-1}}E(a_i)\cdots E(a_1)X \mbox{ in }\spl{2}{A}\mbox{ for all }i.
\]
Since $\infty\cdot D(u)^{\pm 1}=\infty$, it follows that $\infty\cdot E(b_i)\cdots E(b_1)Y=\infty\cdot E(a_i)\cdots E(a_1)X=x_i$ for all $i$, as required.
\end{proof}
Thus we have the following algebraic description of paths in $Y(A)$, for any commutative ring $A$: Let $M_n:= \spl{2}{A}\times A^n$. Given $p=(X,a_1,\ldots,a_n)\in M_n$ define
\[
I(p):=T_0(p):=X \mbox{ and } T_i(p):= E(a_i)\cdots E(a_1)X\in \spl{2}{A} \mbox{ for } i\geq 1
\]
and let $T(p):=T_n(p)$. We define a relation $\sim$ on each of the sets $M_n$ as follows:
\[
p=(X,a_1,\ldots,a_n)\sim (Y,b_1,\ldots,b_n)=q
\]
if and only if $Z:=X\cdot Y^{-1}=I(p)I(q)^{-1}\in B$ and
\[
b_1=a_1\cdot Z\mbox{ and }b_i=u^{(-1)^{i-1}2}a_i\mbox{ for all }i\geq 2
\]
where $u=u(Z)$.
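For example, if $u\in A^\times$ then $(D(u),a_1,a_2,a_3)\sim(1,u^2a_1,u^{-2}a_2,u^2a_3)$ in $M_3$: here $Z=D(u)\cdot 1^{-1}=D(u)\in B$, $u(Z)=u$ and $a_1\cdot D(u)=u^2a_1$, and both elements encode the same path beginning at $\infty=\infty\cdot D(u)$.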
It follows from Corollary \ref{cor:key2} that $p\sim q$ if and only if there exists $Z\in B$ such that
\[
T_0(q)=Z^{-1}T_0(p)\mbox{ and } T_i(q)=D(u(Z))^{(-1)^{i-1}}T_i(p) \mbox{ for } i\geq 1.
\]
We will verify below (Lemma \ref{lem:star} (1)) that $\sim$ is indeed an equivalence relation. Let $\mathcal{M}_n$ be the set of equivalence classes of $M_n$. If $p=(X,a_1,\ldots,a_n)\in M_n$ we will denote its image in $\mathcal{M}_n$ by $ [X,(a_1,\ldots,a_n)]$. When $n=0$, $M_0=\spl{2}{A}$ and $\mathcal{M}_0=B\backslash\spl{2}{A}$. Note that $i([p]):= [I(p)], t([p]):=[T(p)]\in B\backslash\spl{2}{A}=\mathcal{M}_0$ are well-defined.
Thus the paths of $Y(A)$ are naturally in bijective correspondence with the elements of $\mathcal{M}:=\cup_{n=0}^\infty \mathcal{M}_n$: The path $\mathbf{p}=(x_0,\ldots,x_n)$ corresponds to the class $[X, (a_1,\ldots, a_n)]\in \mathcal{M}_n$ where $\infty\cdot X=x_0$ and for all $i\geq 1$, $x_i=\infty\cdot E(a_i)\cdots E(a_1)X$, or equivalently,
\[
a_i=\infty\cdot E(a_i)=x_i\cdot X^{-1}E(a_1)^{-1}\cdots E(a_{i-1})^{-1}\mbox{ for }i\geq 1.
\]
Conversely, the class $[p]=[X,(a_1,\ldots,a_n)]$ corresponds to the path $(x_0,\ldots,x_n)$ where $x_i=\infty\cdot T_i(p)$ for all $i$.
We now give an algebraic description of the operation of concatenating paths. For later convenience, we lift this operation to the level of the sets $M_n$ as follows: Given $p=(X, a_1,\ldots, a_n)\in M_n, q=(Y,b_1,\ldots,b_m)\in M_m$, we define
\[
Z_{p,q}:=I(q)T(p)^{-1}=Y(E(a_n)\cdots E(a_1)X)^{-1} = Y\cdot X^{-1}E(a_1)^{-1}\cdots E(a_n)^{-1}
\]
in $\spl{2}{A}$. (We will refer to this matrix below as the \emph{connection matrix for $p$ and $q$}.) Then $p\star q\in M_{n+m}$ is defined if $Z=Z_{p,q}\in B$ and is given by
\[
p\star q:= (X,a_1,\ldots,a_n, b'_1,\ldots,b'_m)
\]
where
\[
b'_1:=b_1\cdot Z\mbox{ and } b'_j:=u^{(-1)^{j-1}2}b_j\mbox{ for } 2\leq j\leq m
\]
where $u:=u_{p,q}=u(Z_{p,q})$ in this case. Observe that, by Corollary \ref{cor:key2}, the $b'_j$ are entirely determined by the requirement that
\[
E(b'_j)\cdots E(b'_1)T(p)=D(u)^{(-1)^{j-1}}E(b_j)\cdots E(b_1)Y\mbox{ for } 1\leq j\leq m;
\]
i.e.,
\[
T_{n+j}(p\star q)=D(u_{p,q})^{(-1)^{j-1}}T_j(q) \mbox{ for } 1\leq j\leq m.
\]
In particular, taking $j=m$, $T(p\star q)=D(u_{p,q})^{(-1)^{m-1}}T(q)$.
From the definition of the operation $\star$, we have:
\begin{lem}\label{lem:dec}
Given any $p=(X,a_1,\ldots, a_n)\in M_n$ with $n>1$ and given $i$ with $1<i<n$
\[
p=(X,a_1,\ldots,a_n)=(X,a_1,\ldots,a_{i-1})\star(T_{i-1}(p),a_i,\ldots,a_n).
\]
\end{lem}
By induction on $n$ we deduce immediately:
\begin{cor}\label{cor:dec}
For any $n\geq 1$ and any $p=(X,a_1,\ldots, a_n)\in M_n$ we have
\[
p=\bigstar_{i=1}^n (T_{i-1}(p), a_i)
\]
\end{cor}
If $Y\in M_0=\spl{2}{A}$ and if $p= (X,a_1,\ldots,a_n)\in M_n$ then $Y\star p$ is defined if and only if $Z= XY^{-1}\in B$ and in this case $Y\star p =(Y,b_1,\ldots,b_n)$ satisfies $T_i(Y\star p)=D(u(Z))^{(-1)^{i-1}}T_i(p)$ for $i\geq 1$. Note that we always have $Y=I(Y\star p)$ whenever this exists. We immediately deduce:
\begin{lem}\label{lem:equiv}
Let $p,q\in M_n$. Then $p\sim q$ if and only if $I(q)\star p$ exists and equals $q$.
\end{lem}
Furthermore, we observe, from the definitions, that if $p\in M_n$ and $X\in \spl{2}{A}$, then $p\star X$ exists if and only if $XT(p)^{-1}\in B$ and, in this case, $p\star X=p$.
The operation $\star$ on $\cup_{n=1}^\infty M_n$ is associative, where defined:
\begin{lem}\label{lem:assoc}
Let $p\in M_n,q\in M_m$ and $r\in M_\ell$. Suppose that $p\star q$ and $q\star r$ are defined.
Then $(p\star q)\star r$ and $p\star(q\star r)$ are defined and equal in $M_{n+m+\ell}$.
\end{lem}
\begin{proof}
We have
\begin{eqnarray*}
Z_{p\star q,r}&=&I(r)T(p\star q)^{-1}\\
&=&I(r)\left(D(u_{p,q})^{(-1)^{m-1}}T(q)\right)^{-1}\\
&=& I(r)T(q)^{-1}D(u_{p,q})^{(-1)^m}\\
&=&Z_{q,r}D(u_{p,q})^{(-1)^m}.\\
\end{eqnarray*}
Thus $Z_{p\star q,r}\in B$ if $Z_{q,r}$ and $Z_{p,q}\in B$. Furthermore
\[
u_{p\star q,r}=u_{q,r}(u_{p,q})^{(-1)^m}.
\]
Likewise
\[
Z_{p,q\star r}=I(q\star r)T(p)^{-1}=I(q)T(p)^{-1}=Z_{p,q}.
\]
So $Z_{p,q\star r}\in B$ if $Z_{p,q},Z_{q,r}\in B$. Furthermore, $u_{p,q\star r}=u_{p,q}$.
Clearly,
\[
T_i((p\star q)\star r)=T_i(p\star q)=T_i(p)=T_i(p\star (q\star r)) \mbox{ for }0\leq i\leq n.
\]
For $1\leq j\leq m$, we have
\begin{eqnarray*}
T_{n+j}((p\star q)\star r)=T_{n+j}(p\star q)=D(u_{p,q})^{(-1)^{j-1}}T_j(q)
\end{eqnarray*}
and thus
\begin{eqnarray*}
T_{n+j}(p\star (q\star r))=D(u_{p,q\star r})^{(-1)^{j-1}}T_j(q\star r)=D(u_{p,q})^{(-1)^{j-1}}T_j(q)=T_{n+j}((p\star q)\star r).
\end{eqnarray*}
Finally, suppose $1\leq k\leq \ell$. Then
\begin{eqnarray*}
T_{n+m+k}((p\star q)\star r)&=&D(u_{p\star q,r})^{(-1)^{k-1}}T_k(r)\\
&=&D(u_{p,q})^{(-1)^{(m+k)-1}}\left(D(u_{q,r})^{(-1)^{k-1}}T_k(r)\right)\\
&=&D(u_{p,q})^{(-1)^{(m+k)-1}}T_{m+k}(q\star r)
\end{eqnarray*}
and thus
\begin{eqnarray*}
T_{n+m+k}(p\star (q\star r))&=& D(u_{p,q\star r})^{(-1)^{(m+k)-1}}T_{m+k}(q\star r)\\
&=& D(u_{p,q})^{(-1)^{(m+k)-1}}T_{m+k}(q\star r)\\
&=& T_{n+m+k}((p\star q)\star r)
\end{eqnarray*}
as required.
\end{proof}
We now confirm that $\sim$ is indeed an equivalence relation, that the operation $\star$ on $\cup_{n=0}^\infty M_n$ descends to a well-defined operation on $\mathcal{M}=\cup_{n=0}^\infty\mathcal{M}_n$ and that this corresponds to the operation of concatenation of paths in $Y(A)$:
\begin{lem}\label{lem:star}
Let $A$ be a ring.
\begin{enumerate}
\item For all $n\geq 0$, $\sim$ is an equivalence relation on $M_n$.
\item Suppose that $p\in M_n$, $q\in M_m$ and $p\star q$ is defined. If $p\sim p'$ and $q\sim q'$ then $p'\star q'$ is defined and $p\star q\sim p'\star q'$ in $M_{n+m}$.
\item If, furthermore, $p$ and $q$ correspond to the paths $\mathbf{p}$ and $\mathbf{q}$ respectively, then $\mathbf{p}\star\mathbf{q}$ is defined if and only if $p\star q$ is defined, and in this case, $[p]\star [q]\in \mathcal{M}_{n+m}$ corresponds to $\mathbf{p}\star\mathbf{q}$.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item For any $p\in M_n$, we have $p=I(p)\star p$. So $p\sim p$. If $p\sim q$, then $q=I(q)\star p$ and hence, using associativity of $\star$,
\[
I(p)\star q=I(p)\star(I(q)\star p)=(I(p)\star I(q))\star p =I(p)\star p=p
\]
so that $q\sim p$. Suppose that $p\sim q$ and $q\sim r$. Then $q=I(q)\star p$ and $r=I(r)\star q$. So
\[
r=I(r)\star q= I(r)\star (I(q)\star p)=(I(r)\star I(q))\star p = I(r)\star p
\]
and hence $p\sim r$.
\item By Lemma \ref{lem:equiv}, $p\sim p'$ and $q\sim q'$ if and only if $Z_1:=I(p)I(p')^{-1}$ and $Z_2:=I(q)I(q')^{-1}\in B$ and $p'=I(p')\star p$, $q'=I(q')\star q$. In this case,
\begin{eqnarray*}
Z_{p',q'}&=&\left( Z_2^{-1}I(q)\right)\cdot \left(D(u(Z_1))^{(-1)^{n-1}}T(p)\right)^{-1}\\
&=&Z_2^{-1}Z_{p,q}D(u(Z_1))^{(-1)^{n}}.
\end{eqnarray*}
So $Z_{p',q'}\in B$ if and only if $Z_{p,q}\in B$. Furthermore, using associativity of $\star$,
\begin{eqnarray*}
p'\star q'=(I(p')\star p)\star(I(q')\star q)=I(p')\star((p\star I(q'))\star q)=I(p')\star(p\star q)
\end{eqnarray*}
and thus $p'\star q'\sim p\star q$.
\item Recall that $t(\mathbf{p})=\infty\cdot T(p)$ and $i(\mathbf{q})=\infty\cdot I(q)$. Thus $t(\mathbf{p})=i(\mathbf{q})$ if and only if $\infty\cdot T(p)=\infty \cdot I(q)$, if and only if $I(q)T(p)^{-1}\in B$; i.e., $\mathbf{p}\star\mathbf{q}$ exists if and only if $p\star q$ exists.
Now let $\mathbf{p}=(x_0,\ldots, x_n)$ and let $\mathbf{q}=(y_0,\ldots, y_m)$ and suppose $x_n=y_0$. Then
\[
\infty\cdot T_i(p\star q)=\infty\cdot T_i(p)=x_i \mbox{ for } i\leq n
\]
and
\[
\infty\cdot T_{n+j}(p\star q)=\infty\cdot D(u)^{(-1)^{j-1}}T_j(q)=\infty\cdot T_j(q)=y_j\mbox{ for } 1\leq j\leq m.
\]
Thus $[p]\star[q]:=[p\star q]$ corresponds to $\mathbf{p}\star\mathbf{q}$.
\end{enumerate}
\end{proof}
To summarize: We have a well-defined category whose objects are $\mathcal{M}_0=B\backslash \spl{2}{A}$ and whose morphisms are $\mathcal{M}=\cup_{n=0}^\infty\mathcal{M}_n$. For any $x,y\in\mathcal{M}_0$ and $[p]\in \mathcal{M}$, we have $[p]\in \mathrm{Hom}(x,y)$ if and only if $i([p])=x$ and $t([p])=y$. When $t([p])=i([q])$ then composition is given by $[q]\circ [p]:=[p]\star [q]$.
There is a natural right action of $\spl{2}{A}$ on this category corresponding to the action of $\spl{2}{A}$ on paths of $\Gamma(A)$ by right multiplication. More particularly:
\begin{lem}\label{lem:action}
Given $Y\in \spl{2}{A}$ and $p=(X,a_1,\ldots,a_n)\in M_n$ define\\
$p\cdot Y:= (XY,a_1,\ldots,a_n)$.
\begin{enumerate}
\item If $p\in M_n,q\in M_m$ and $Y\in \spl{2}{A}$ such that $p\star q$ is defined, then $(p\cdot Y)\star (q\cdot Y)$ is defined and
\[
(p\cdot Y)\star (q\cdot Y)=(p\star q)\cdot Y.
\]
\item If $p\sim q$ in $M_n$, then $p\cdot Y\sim q\cdot Y$ for all $Y\in \spl{2}{A}$.
\item If the class $[p]$ corresponds to the path $\mathbf{p}=(x_0,\ldots,x_n)$ then for all $Y\in\spl{2}{A}$, $[p]\cdot Y:=[p\cdot Y]$ corresponds to $\mathbf{p}\cdot Y:= (x_0\cdot Y,\ldots,x_n\cdot Y)$.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item Observe that $T_i(p\cdot Y)=T_i(p)Y$ for all $p\in M_n$, $Y\in \spl{2}{A}$. Thus for any $p\in M_n$, $q\in M_m$, $Z_{p\cdot Y,q\cdot Y}=(I(q)Y)(T(p)Y)^{-1}=I(q)T(p)^{-1}=Z_{p,q}$. Thus $ p\star q$ is defined if and only if $(p\cdot Y)\star (q\cdot Y)$ is. Furthermore, for $1\leq i\leq n$
\[
T_i((p\star q)\cdot Y)=T_i(p\star q)Y=T_i(p)Y=T_i(p\cdot Y)=T_i((p\cdot Y)\star (q\cdot Y)),
\]
and if $1\leq j\leq m$
\begin{eqnarray*}
T_{n+j}((p\star q)\cdot Y) &=& T_{n+j}(p\star q) Y\\
&=& D(u_{p,q})^{(-1)^{j-1}}T_j(q)Y\\
&=& D(u_{p\cdot Y,q\cdot Y})^{(-1)^{j-1}}T_j(q\cdot Y)\\
&=& T_{n+j}((p\cdot Y)\star (q\cdot Y)).\\
\end{eqnarray*}
\item $p\sim q\implies q=I(q)\star p$ and thus $q\cdot Y=(I(q)Y)\star(p\cdot Y)=I(q\cdot Y)\star (p\cdot Y)$ which means that $p\cdot Y\sim q\cdot Y$.
\item $\mathbf{p}=(x_0,\ldots,x_n)$ corresponds to $[p]\in\mathcal{M}_n$ if and only if $x_i=\infty\cdot T_i(p)$ for all $i$. If so, it follows that $x_iY=\infty\cdot T_i(p)Y=\infty\cdot T_i(p\cdot Y)$ for all $i$.
\end{enumerate}
\end{proof}
\subsection{Homotopy relation (1)}
We use the correspondence between paths in $\Gamma(A)$ and elements of $\mathcal{M}$ to express the homotopy relation (1) of the edge-path groupoid of $Y(A)$ in terms of the category $\mathcal{M}$.
First note that if $\{ x,y\}$ is an edge in $\Gamma(A)$ and if $x=\infty\cdot X$ for some $X\in \spl{2}{A}$, then $y=a\cdot X=\infty\cdot E(a)X$ for some $a\in A$. Now $x\cdot X^{-1}E(a)^{-1}=\infty\cdot E(a)^{-1}=0=\infty\cdot E(0)\implies x=\infty\cdot E(0)E(a)X$.
So the path $(x,y,x)$ corresponds to $[X, (a,0)]=[1,(a,0)]\cdot X\in \mathcal{M}_2$. So the relation derived from homotopy relation (1) has the form $[X,(a,0)]\sim_{(1)}[X]$ and, more generally, for $1\leq i\leq n-1$
\begin{eqnarray*}
&&[X,(a_1,\ldots, a_{i-1},a_i,0,a_{i+2},\ldots, a_n)]\\
&=&[X,(a_1,\ldots,a_{i-1})]\star [T_{i-1}(p), (a_i,0)]\star[T_{i+1}(p),(a_{i+2},\ldots, a_n)]\\
&\sim_{(1)}& [X,(a_1,\ldots,a_{i-1})]\star [T_{i-1}(p)]\star[T_{i+1}(p),(a_{i+2},\ldots, a_n)]\\
&=& [X,(a_1,\ldots,a_{i-1})]\star[T_{i+1}(p),(a_{i+2},\ldots, a_n)]\\
\end{eqnarray*}
Now the connection matrix in this case is
\[
Z=T_{i+1}(p)T_{i-1}(p)^{-1}=E(0)E(a_i) =
\left[
\begin{array}{cc}
-1&0\\
-a_i&-1
\end{array}
\right]\in B
\]
with unit $u(Z)=-1$. Thus $a_{i+2}\cdot Z= a_i+a_{i+2}$ and we conclude that
\begin{eqnarray*}
[X,(a_1,\ldots,a_{i-1},\underbrace{ a_i,0,a_{i+2}},\ldots, a_n)]\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\\
\sim_{(1)}\left\{
\begin{array}{ll}
\ [X,(a_1,\ldots,a_{i-1})],& \mbox{ if } i+1=n\\
\ [X,(a_1,\ldots,a_{i-1},\underbrace{a_i+a_{i+2}},a_{i+3},\ldots,a_n)],& \mbox{ if } i+1<n\\
\end{array}
\right.
\end{eqnarray*}
We spell out the complete equivalence relation $\sim_{(1)}$ on $\mathcal{M}$ in greater detail: Let us say that replacing $[X,(a_1,\ldots, a_{i-1},a_i,0,a_{i+2},\ldots, a_n)]$ with $[X,(a_1,\ldots,a_{i-1},a_i+a_{i+2},a_{i+3},\ldots,a_n)]$ (or replacing $[X,(a_1,\ldots,a_{n-1},0)]$ with $[X,(a_1,\ldots,a_{n-2})]$) is a \emph{type (1) contraction}. Conversely, replacing $[X, (a_1,\ldots,a_{i-1}, a_i,\ldots, a_n)]$ with $[X, (a_1,\ldots, a_{i-1},a,0,a_{i}-a,\ldots,a_n)]$ (or with $[X,(a_1,\ldots,a_n,a,0)]$) for any $a\in A$ will be referred to as a \emph{type (1) expansion}. Then $[p]\sim_{(1)}[q]$ if and only if $[q]$ can be obtained from $[p]$ by a finite sequence of replacements, each of which is either a type (1) contraction or a type (1) expansion.
We note that modulo the equivalence relation $\sim_{(1)}$ the category $\mathcal{M}$ becomes a groupoid (indeed, the fundamental groupoid of the graph $\Gamma(A)$). For example, we have
\[
[X,(a)]\star [E(a)X,(0)]=[X, (a,0)]\sim_{(1)} [X]
\]
and thus $[E(a)X,(0)]$ is a right inverse of $[X,(a)]$ in the quotient category $\mathcal{M}/\sim_{(1)}$.
\subsection{Homotopy relation (2)}
Now, in a similar manner, we transfer the homotopy relation (2) to the category $\mathcal{M}$: Let $\{ x,y,z\}$ be a $3$-clique in $\Gamma(A)$, and hence a $2$-simplex in $Y(A)$. Choose $X\in \spl{2}{A}$ with $x=\infty\cdot X$. Since $y,z$ are neighbours of $x$, there exist $a,b\in A$ with $y=a\cdot X$ and $z= b\cdot X$. Since $y$ and $z$ (and hence $a$ and $b$) are neighbours there exists $u\in A^\times$ with $b=u+a$. Thus $y=\infty \cdot E(a)X$, $z=\infty\cdot E(b)X=\infty\cdot E(a+u)X$. Furthermore,
\[
z\cdot X^{-1}E(a)^{-1}=\infty\cdot E(u+a)E(a)^{-1}=-u^{-1}=\infty\cdot E(-u^{-1})
\]
and hence $z=\infty\cdot E(-u^{-1})E(a)X$.
Thus the path $\mathbf{p}_1:=(x,y,z)$ corresponds to the class $[X,(a,-u^{-1})]\in \mathcal{M}$ and the path $\mathbf{p}_2:=(x,z)$ corresponds to the class $[X,(a+u)]$. Since $\mathbf{p}_1$ and $\mathbf{p}_2$ are identified by the second homotopy relation, we must have $[X,(a,-u^{-1})]\sim_{(2)}[X,(a+u)]$ for any $X\in\spl{2}{A}$, $a\in A$ and $u\in A^\times$, or, equivalently, $[X,(a,u)]\sim_{(2)}[X,(a-u^{-1})]$ for all $X,a,u$.
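For instance, taking $X=1$, $a=0$ and $u=1$ identifies $[1,(0,-1)]$, the class of the path $(\infty,0,1)$, with $[1,(1)]$, the class of the path $(\infty,1)$, reflecting the fact that $\{ \infty,0,1\}$ is a $2$-simplex of $Y(A)$.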
More generally, we have \begin{eqnarray*} &&[X,(a_1,\ldots, a_{i-1},a_i,u,a_{i+2},\ldots, a_n)]\\ &=&[X,(a_1,\ldots,a_{i-1})]\star [T_{i-1}(p), (a_i,u)]\star[T_{i+1}(p),(a_{i+2},\ldots, a_n)]\\ &\sim_{(2)}& [X,(a_1,\ldots,a_{i-1})]\star [T_{i-1}(p),(a_i-u^{-1})]\star[T_{i+1}(p),(a_{i+2},\ldots, a_n)]\\ &=& [X,(a_1,\ldots,a_{i-1}, a_i-u^{-1})]\star[T_{i+1}(p),(a_{i+2},\ldots, a_n)].\\ \end{eqnarray*} The connection matrix $Z$ here is \[ T_{i+1}(p)\left(E(a_i-u^{-1})T_{i-1}(p)\right)^{-1}= E(u)E(a_i)E(a_i-u^{-1})^{-1}= \left[ \begin{array}{cc} u&0\\ -1&u^{-1} \end{array} \right]\in B. \] So $u(Z)=u$ and $a_{i+2}\cdot Z= u^2a_{i+2}-u$ and we get: \begin{eqnarray*} [X,(a_1,\ldots, a_{i-1},a_i,u,a_{i+2},\ldots, a_n)]\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\\ \sim_{(2)} \left\{ \begin{array}{ll} \ [X,(a_1,\ldots,a_{i-1},a_i-u^{-1})],& \mbox{ if } i+1=n\\ \ [X,(a_1,\ldots,a_{i-1},a_i-u^{-1},u^2a_{i+2}-u,u^{-2}a_{i+3},u^2a_{i+4},\ldots)],& \mbox{ if } i+1<n\\ \end{array} \right.\\ \end{eqnarray*} Let us define a \emph{type (2) contraction} to be a replacement of $[X,(a_1,\ldots, a_{i-1},a_i,u,a_{i+2},\ldots, a_n)]$ with $[X,(a_1,\ldots,a_{i-1},a_i-u^{-1},u^2a_{i+2}-u,u^{-2}a_{i+3},u^2a_{i+4},\ldots)]$ (or of $[X,(a_1,\ldots, a_{n-2},a_{n-1},u)]$ with $[X,(a_1,\ldots,a_{n-2},a_{n-1}-u^{-1})]$). Conversely, replacing $[X,(a_1,\ldots,a_i,a_{i+1},a_{i+2},\ldots)]$ with $[X,(a_1,\ldots,a_{i-1},a_i+u^{-1},u,u^{-2}a_{i+1}+u^{-1},u^{2}a_{i+2},\ldots)]$ (or replacing $[X,(a_1,\ldots,a_n)]$ with $[X,(a_1,\ldots,a_{n-1},a_n+u^{-1},u)]$) will be called a \emph{type (2) expansion}. We have $[p]\sim_{(2)}[q]$ if and only if $[q]$ can be obtained from $[p]$ by a finite sequence of replacements, each of which is a type (2) contraction or a type (2) expansion. Thus the edge-path groupoid $\mathcal{E}(Y(A))$ is naturally isomorphic to the quotient of the category $\mathcal{M}$ by the equivalence relations $\sim_{(1)}$ and $\sim_{(2)}$. \section{The fundamental group of $Y(A)$} \subsection{A presentation for $\pi_1(Y(A),\infty)$} Let $A$ be a ring. Let $\mathcal{E}:=\mathcal{E}(Y(A))$ be the edge-path groupoid of $Y(A)$, identified as a quotient of the category $\mathcal{M}$. We are now in a position to give a presentation of the fundamental group of $Y(A)$ at $\infty$: $\pi_1(Y(A),\infty):=\mathrm{Aut}_{\mathcal{E}}(\infty)$. Observe that $[p]\in \mathcal{M}_n$ represents an automorphism of $\infty$ if and only if $\infty\cdot I(p)=\infty\cdot T(p)=\infty$, and hence if and only if $I(p),T(p)\in B$. Let $M_n(B)=\{ p\in M_n\ |\ I(p),T(p)\in B\}$ and let $M(B):=\cup_{n=0}^\infty M_n(B)$. Note that if $p,q\in M(B)$, then $p\star q$ is always defined. Let $\mathcal{M}(B)$ be the image of $M(B)$ in $\mathcal{M}$. Thus $\mathrm{Aut}_{\mathcal{E}}(\infty)$ is the image of $\mathcal{M}(B)$ in $\mathcal{E}$. Using the description of the edge-path groupoid $\mathcal{E}$ in the last section, as well as the relationship between the operation $\star$ on $M$ and the equivalence relation $\sim$, we deduce the following presentation of $\pi_1(Y(A),\infty)$. \begin{thm}\label{thm:pi1} $\pi_1(Y(A),\infty)$ is generated by the symbols $p=\an{X,(a_1,\ldots,a_n)}$, $n\geq 0$, where $X\in B$, and (if $n\geq 1$) $a_1,\ldots,a_n\in A$ satisfy $E(a_n)\cdots E(a_1)\in B$. Given such a $p=\an{X,(a_1,\ldots,a_n)}$ we let $I(p):=T_0(p):=X$, $T_i(p):=E(a_i)\cdots E(a_1)X$ for $i\geq 1$ and $T(p):=T_n(p)$.
These symbols are subject to the following families of relations: \begin{enumerate} \item Given generators $p=\an{X,(a_1,\ldots,a_n)}$, $q=\an{Y,(b_1,\ldots,b_m)}$ we have \[ \an{X,(a_1,\ldots,a_n)}\cdot \an{Y,(b_1,\ldots,b_m)}=\an{X,(a_1,\ldots,a_n,b'_1,\ldots,b'_m)} \] where $b'_1=b_1\cdot Z$, $b'_j=u^{2(-1)^{j-1}}b_j$ for $j\geq 2$. Here $Z=Z_{p,q}=I(q)T(p)^{-1}=YX^{-1}E(a_1)^{-1}\cdots E(a_n)^{-1}\in B$ and $u=u(Z)$. \item $\an{X}=1$ for all $X\in B$. \item (Type (1) contractions) $\an{X, (a_1,\ldots, a_{n-1},0)}=\an{X,(a_1,\ldots,a_{n-2})}$ and \[ \an{X,(a_1,\ldots,a_i,0,a_{i+2},\ldots)}=\an{X,(a_1,\ldots,a_i+a_{i+2},a_{i+3},\ldots)}. \] \item (Type (2) contractions) $\an{X,(a_1,\ldots,a_{n-1},u)}=\an{X,(a_1,\ldots,a_{n-1}-u^{-1})}$ and \[ \an{X,(a_1,\ldots,a_i, u,a_{i+2},a_{i+3},\ldots)}=\an{X,(a_1,\ldots,a_{i}-u^{-1},u^2a_{i+2}-u,u^{-2}a_{i+3},\ldots)}. \] \end{enumerate} \end{thm} \subsection{Proof of the main theorem} Recall that $\mathrm{st}:B\to C(A)$ denotes the standard embedding with image the subgroup $\mathbb{B}$ satisfying $\psi\circ\mathrm{st}=\mathrm{Id}_B$. Given a symbol $p=\an{X,(a_1,\ldots,a_n)}$ we define $\Lambda(p)\in C(A)$ by the formula \[ \Lambda(p):=\stand{T(p)}^{-1}\epsilon(a_n)\cdots\epsilon(a_1)\stand{I(p)}. \] Observe that $\psi(\Lambda(p))= T(p)^{-1}E(a_n)\cdots E(a_1)I(p)=T(p)^{-1}T(p)=1$, so that $\Lambda(p)\in U(A)$ for all $p$. To begin with, we will show that $\Lambda$ determines a group anti-homomorphism $\pi_1(Y(A),\infty)\to U(A)$. We must verify that $\Lambda$ respects the relations (1)--(4). \begin{lem}\label{lem:rel1} Given $p=\an{X,(a_1,\ldots,a_n)}$, $q=\an{Y,(b_1,\ldots,b_m)}$ we have \[ \Lambda(p\cdot q)=\Lambda(q)\cdot \Lambda(p) \] where $p\cdot q$ is given by relation (1) of Theorem \ref{thm:pi1}. \end{lem} \begin{proof} By Proposition \ref{prop:key}, we have, setting $\beta_{p,q}:=\stand{Z_{p,q}}\in\mathbb{B}$, \begin{eqnarray*} \epsilon(b'_m)\cdots \epsilon(b'_1)&=&h(u)^{(-1)^{m-1}}\epsilon(b_m)\cdots \epsilon(b_1)\beta_{p,q}\\ &=& h(u)^{(-1)^{m-1}}\epsilon(b_m)\cdots \epsilon(b_1)\stand{I(q)}\stand{T(p)}^{-1}\\ \end{eqnarray*} and hence \[ \epsilon(b'_m)\cdots \epsilon(b'_1)\stand{T(p)}=h(u)^{(-1)^{m-1}}\epsilon(b_m)\cdots \epsilon(b_1)\stand{I(q)}. \] Applying $\psi$ this also gives \[ T(p\cdot q)=E(b'_m)\cdots E(b'_1)T(p)=D(u)^{(-1)^{m-1}}E(b_m)\cdots E(b_1)I(q)=D(u)^{(-1)^{m-1}}T(q). \] Hence $T(p\cdot q)^{-1}=T(q)^{-1}D(u)^{(-1)^m}$ and $\stand{T(p\cdot q)}^{-1}=\stand{T(q)}^{-1}h(u)^{(-1)^m}$. Thus \begin{eqnarray*} \Lambda(q)\Lambda(p)&=&\stand{T(q)}^{-1}\epsilon(b_m)\cdots \epsilon(b_1)\stand{I(q)}\stand{T(p)}^{-1}\epsilon(a_n)\cdots \epsilon(a_1)\stand{I(p)}\\ &=& \stand{T(q)}^{-1}\epsilon(b_m)\cdots \epsilon(b_1)\beta_{p,q}\epsilon(a_n)\cdots \epsilon(a_1)\stand{I(p)}\\ &=& \stand{T(q)}^{-1}h(u)^{(-1)^m}\epsilon(b'_m)\cdots \epsilon(b'_1)\epsilon(a_n)\cdots \epsilon(a_1)\stand{I(p)}\\ &=& \stand{T(p\cdot q)}^{-1}\epsilon(b'_m)\cdots \epsilon(b'_1)\epsilon(a_n)\cdots \epsilon(a_1)\stand{I(p)}\\ &=& \Lambda(p\cdot q) \end{eqnarray*} as required. \end{proof} If $p=\an{X}$, we have $I(p)=X=T(p)$ and hence $\Lambda(\an{X})=1$: \begin{lem}\label{lem:rel2} For all $X\in B$, $\Lambda(\an{X})=1$ in $U(A)$. \end{lem} \begin{lem}\label{lem:rel3} Let $X\in B$. \begin{enumerate} \item Suppose that $a_1,\ldots,a_i,a_{i+2},\ldots,a_n\in A$ satisfy $E(a_n)\cdots E(a_{i+2})E(0)E(a_i)\cdots E(a_1)\in B$.
Then $\Lambda(p)=\Lambda(q)$ in $U(A)$ where $p=\an{X,(a_1,\ldots,a_i,0,a_{i+2},\ldots,a_n)}$ and $q=\an{X,(a_1,\ldots,a_{i-1},a_i+a_{i+2},\ldots,a_n)}$. \item Suppose that $a_1,\ldots,a_{n-1}\in A$ satisfy $E(0)E(a_{n-1})\cdots E(a_1)\in B$. Then $\Lambda(p)=\Lambda(q)$ in $U(A)$ where $p=\an{X,(a_1,\ldots,a_{n-1},0)}$ and $q=\an{X,(a_1,\ldots,a_{n-2})}$. \end{enumerate} \end{lem} \begin{proof} Recall that $\epsilon(a)\epsilon(0)\epsilon(b)=h(-1)\epsilon(a+b)$ in $C(A)$ and that $E(a)E(0)E(b)=D(-1)E(a+b)$ in $E_2(A)$. \begin{enumerate} \item Note that $I(p)=I(q)$ and \begin{eqnarray*} T(p)&=& E(a_n)\cdots \left(E(a_{i+2})E(0)E(a_i)\right)\cdots E(a_1)X\\ &=& D(-1)E(a_n)\cdots E(a_i+a_{i+2})\cdots E(a_1)X\\ &=& D(-1)T(q). \end{eqnarray*} It follows that $\stand{T(p)}=h(-1)\stand{T(q)}$ in $C(A)$. Thus (recalling that $h(-1)$ is central in $C(A)$) \begin{eqnarray*} \Lambda(p)&=&\stand{T(p)}^{-1}\epsilon(a_n)\cdots \left(\epsilon(a_{i+2})\epsilon(0)\epsilon(a_i)\right)\cdots \epsilon(a_1)\stand{I(p)}\\ &=& \stand{T(p)}^{-1}h(-1)\epsilon(a_n)\cdots \epsilon(a_i+a_{i+2})\cdots \epsilon(a_1)\stand{I(p)}\\ &=& \stand{T(q)}^{-1}h(-1)^2\epsilon(a_n)\cdots \epsilon(a_i+a_{i+2})\cdots \epsilon(a_1)\stand{I(q)}\\ &=& \stand{T(q)}^{-1}\epsilon(a_n)\cdots \epsilon(a_i+a_{i+2})\cdots \epsilon(a_1)\stand{I(q)}\\ &=&\Lambda(q). \end{eqnarray*} \item We have, by definition of $y(a)$ and $\beta(u,a)$, $\epsilon(0)\epsilon(a_{n-1})=h(-1)y(a_{n-1})=\beta(-1,-a_{n-1})$ in $\mathbb{B}\subset C(A)$ and $E(0)E(a_{n-1})=D(-1)E_{21}(a_{n-1})$ in $B$. It follows that $\stand{E(0)E(a_{n-1})}=\epsilon(0)\epsilon(a_{n-1})$. Now $T(p)=E(0)E(a_{n-1}) E(a_{n-2})\cdots E(a_1)X=E(0)E(a_{n-1})T(q)$. Hence $\stand{T(p)}= \epsilon(0)\epsilon(a_{n-1})\stand{T(q)}$ in $\mathbb{B}$. It follows that \begin{eqnarray*} \Lambda(p)&=& \stand{T(p)}^{-1}\epsilon(0)\epsilon(a_{n-1})\epsilon(a_{n-2})\cdots \epsilon(a_1)\stand{I(p)}\\ &=& \stand{T(q)}^{-1}\epsilon(a_{n-2})\cdots \epsilon(a_1)\stand{I(q)}\\ &=& \Lambda(q). \end{eqnarray*} \end{enumerate} \end{proof} \begin{lem}\label{lem:rel4} Let $X\in B$. Let $u\in A^\times$. \begin{enumerate} \item Suppose that $a_1,\ldots,a_i,a_{i+2},\ldots,a_n\in A$ satisfy $E(a_n)\cdots E(a_{i+2})E(u)E(a_i)\cdots E(a_1)\in B$. Then $\Lambda(p)=\Lambda(q)$ in $U(A)$ where $p=\an{X,(a_1,\ldots,a_i,u,a_{i+2},\ldots,a_n)}$ and $q=\an{X,(a_1,\ldots,a_{i-1},a_i-u^{-1},u^2a_{i+2}-u,u^{-2}a_{i+3},\ldots)}$. \item Suppose that $a_1,\ldots,a_{n-1}\in A$ satisfy $E(u)E(a_{n-1})\cdots E(a_1)\in B$. Then $\Lambda(p)=\Lambda(q)$ in $U(A)$ where $p=\an{X,(a_1,\ldots,a_{n-1},u)}$ and $q=\an{X,(a_1,\ldots,a_{n-2},a_{n-1}-u^{-1})}$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item In $\mathbb{B}$ we have \begin{eqnarray*} \beta(-u,1)=h(-u)y(-u)=\epsilon(u)\epsilon(u^{-1})\epsilon(u)\epsilon(0)^3\epsilon(-u)\\ =h(-1)\epsilon(u)\epsilon(u^{-1})\epsilon(u)\epsilon(0)\epsilon(-u) =h(-1)\epsilon(u)\epsilon(u^{-1})\epsilon(0) \end{eqnarray*} using relation (2) in the definition of $C(A)$. Thus consider $\beta:=h(-1)\beta(-u,1)=\epsilon(u)\epsilon(u^{-1})\epsilon(0)\in \mathbb{B}$. Then \[ \beta\epsilon(a_i-u^{-1})=\epsilon(u)\epsilon(u^{-1})\epsilon(0)\epsilon(a_i-u^{-1})=\epsilon(u)\epsilon(a_i) \mbox{ in }C(A). \] Furthermore, if we let $Z:=\psi(\beta)\in B$, then it follows that $ZE(a_i-u^{-1})=E(u)E(a_i)$ in $B$.
By Proposition \ref{prop:key} we have \[ \epsilon(a'_n)\cdots \epsilon(a'_{i+2})=h(u)^{(-1)^{n-i}}\epsilon(a_n)\cdots\epsilon(a_{i+2})\beta \] where $a'_{i+2}:=u^2a_{i+2}-u=a_{i+2}\cdot Z$, $a'_{i+3}=u^{-2}a_{i+3},\ldots$ etc. We thus also have \[ E(a'_n)\cdots E(a'_{i+2})=D(u)^{(-1)^{n-i}}E(a_n)\cdots E(a_{i+2})Z. \] Hence \begin{eqnarray*} T(q)&=& E(a'_n)\cdots E(a'_{i+2})E(a_i-u^{-1})T_{i-1}(p)\\ &=& D(u)^{(-1)^{n-i}}E(a_n)\cdots E(a_{i+2})ZE(a_i-u^{-1})T_{i-1}(p)\\ &=& D(u)^{(-1)^{n-i}}E(a_n)\cdots E(a_{i+2})E(u)E(a_i)T_{i-1}(p)\\ &=& D(u)^{(-1)^{n-i}}T(p). \end{eqnarray*} So \begin{eqnarray*} \Lambda(q)&=& \stand{T(q)}^{-1}\epsilon(a'_n)\cdots \epsilon(a'_{i+2})\epsilon(a_i-u^{-1})\cdots \epsilon(a_1)\stand{X}\\ &=& \stand{T(p)}^{-1}h(u)^{(-1)^{n-i+1}}h(u)^{(-1)^{n-i}}\epsilon(a_n)\cdots\epsilon(a_{i+2})\beta\epsilon(a_i-u^{-1})\cdots \epsilon(a_1)\stand{X}\\ &=& \stand{T(p)}^{-1}\epsilon(a_n)\cdots\epsilon(a_{i+2})\epsilon(u)\epsilon(a_i)\cdots \epsilon(a_1)\stand{X}\\ &=& \Lambda(p). \end{eqnarray*} \item Let $\beta$, $Z$ be as in (1), taking $i=n-1$. Then $\epsilon(u)\epsilon(a_{n-1})=\beta\epsilon(a_{n-1}-u^{-1})$ and $E(u)E(a_{n-1})=ZE(a_{n-1}-u^{-1})$. Thus \[ T(p)=E(u)E(a_{n-1})T_{n-2}(p)=Z E(a_{n-1}-u^{-1})T_{n-2}(q)=ZT(q) \] and so $\stand{T(p)}=\beta\stand{T(q)}$. Thus \begin{eqnarray*} \Lambda(p)&=& \stand{T(p)}^{-1}\epsilon(u)\epsilon(a_{n-1})\cdots\epsilon(a_1)\stand{X}\\ &=&\stand{T(q)}^{-1}\beta^{-1}\cdot\beta\epsilon(a_{n-1}-u^{-1})\cdots\epsilon(a_1)\stand{X}\\ &=& \Lambda(q). \end{eqnarray*} \end{enumerate} \end{proof} Lemmas \ref{lem:rel1}--\ref{lem:rel4} imply that $\Lambda$ defines a group anti-homomorphism $\pi_1(Y(A),\infty)\to U(A)$. We now proceed to construct an inverse, $\Theta$, to $\Lambda$: Let $\tilde{C}(A)$ denote the free group with generators $\tilde{\epsilon}(a), a\in A$. Let $\tilde{\psi}:\tilde{C}(A)\to \spl{2}{A}$ be the homomorphism sending $\tilde{\epsilon}(a)$ to $E(a)$ for all $a\in A$. Let $\tilde{U}(A):=\ker{\tilde{\psi}}$. We begin by defining a map of \emph{sets} $\tilde{\Theta}:\tilde{U}(A)\to \pi_1(Y(A),\infty)$. An element of $\tilde{C}(A)$ is an expression $\alpha=\tilde{\epsilon}(b_1)^{c_1}\cdots\tilde{\epsilon}(b_n)^{c_n}$ where $c_i\in \{ \pm 1\}$. Such an element $\alpha$ lies in $\tilde{U}(A)$ if and only if $\prod_{i=1}^nE(b_i)^{c_i}=1$ in $\spl{2}{A}$. Now, for $b\in A$, we let $s_1(b)=b$ and we define $s_{-1}(b)$ to be the string $0,-b,0$. Given $\alpha=\prod_{i=1}^n\tilde{\epsilon}(b_i)^{c_i}\in \tilde{U}(A)$ we define \[ \tilde{\Theta}(\alpha):= \an{1,(s_{c_n}(b_n),\ldots,s_{c_1}(b_1))}\in \pi_1(Y(A),\infty). \] We note that $\tilde{\Theta}$ is well-defined: \begin{lem}\label{lem:thetadef} Let $\alpha=\prod_{i=1}^n\tilde{\epsilon}(b_i)^{c_i}\in \tilde{U}(A)$ and suppose that for some $i<n$ we have $b_i=b_{i+1}$ and $c_i=-c_{i+1}$. Then $\alpha=\alpha':= \tilde{\epsilon}(b_1)^{c_1}\cdots\tilde{\epsilon}(b_{i-1})^{c_{i-1}}\tilde{\epsilon}(b_{i+2})^{c_{i+2}}\cdots \tilde{\epsilon}(b_n)^{c_n}$ in $\tilde{U}(A)$ and $\tilde{\Theta}(\alpha)=\tilde{\Theta}(\alpha')$. \end{lem} \begin{proof} We consider the case $c_i=1,c_{i+1}=-1$. The other case is entirely similar. \begin{eqnarray*} \tilde{\Theta}(\alpha)&=&\an{1,(\ldots, s_{c_{i+2}}(b_{i+2}), 0,\underbrace{-b_{i},0,b_i},s_{c_{i-1}}(b_{i-1}),\ldots)}\\ &=& \an{1,(\ldots, s_{c_{i+2}}(b_{i+2}),\underbrace{ 0,0,s_{c_{i-1}}(b_{i-1})},\ldots)}\\ &=& \an{1,(\ldots, s_{c_{i+2}}(b_{i+2}),s_{c_{i-1}}(b_{i-1}),\ldots)}=\tilde{\Theta}(\alpha')\\ \end{eqnarray*} using type (1) contractions in each of the last two lines.
\end{proof} $\tilde{\Theta}$ defines an anti-homomorphism of groups: \begin{lem}\label{lem:thetaprod} For all $\alpha,\beta\in \tilde{U}(A)$ we have $\tilde{\Theta}(\alpha\beta)=\tilde{\Theta}(\beta)\tilde{\Theta}(\alpha)$. \end{lem} \begin{proof} Let $\alpha=\prod_{i=1}^n\tilde{\epsilon}(b_i)^{c_i}$ and let $\beta=\prod_{i=1}^m\tilde{\epsilon}(d_i)^{e_i}$. Then \begin{eqnarray*} \tilde{\Theta}(\beta)\tilde{\Theta}(\alpha)&=&\an{1,(s_{e_m}(d_m),\ldots,s_{e_1}(d_1))}\an{1,(s_{c_n}(b_n),\ldots,s_{c_1}(b_1))}\\ &=&\an{1,(s_{e_m}(d_m),\ldots,s_{e_1}(d_1),s_{c_n}(b_n),\ldots,s_{c_1}(b_1))}=\tilde{\Theta}(\alpha\beta).\\ \end{eqnarray*} \end{proof} Now $U(A)$ is isomorphic to $\tilde{U}(A)$ modulo the normal subgroup generated by the following three families of elements: For $u\in A^\times$, let $\tilde{h}(u):=\tilde{\epsilon}(-u)\tilde{\epsilon}(-u^{-1})\tilde{\epsilon}(-u)$. \begin{enumerate} \item For $u,v\in A^\times$ \[ \alpha(u,v):=\tilde{h}(u)\tilde{h}(v)\tilde{h}(uv)^{-1} \] \item For $a,b\in A$ \[ \gamma(a,b):= \tilde{\epsilon}(a)\tilde{\epsilon}(0)\tilde{\epsilon}(b)\tilde{\epsilon}(a+b)^{-1}\tilde{h}(-1)^{-1}. \] \item For $u\in A^\times$, $a\in A$ \[ \delta(u,a):=\tilde{h}(u)\tilde{\epsilon}(a)\tilde{h}(u)\tilde{\epsilon}(u^2a)^{-1}. \] \end{enumerate} \begin{prop}\label{prop:theta} $\tilde{\Theta}$ induces a well-defined anti-homomorphism \[ \Theta:U(A)\to \pi_1(Y(A),\infty). \] \end{prop} \begin{proof} We must show that $\tilde{\Theta}$ vanishes on each of the three families -- $\alpha$, $\gamma$, $\delta$ -- of elements of $\tilde{U}(A)$. \begin{enumerate} \item Let $u,v\in A^\times$. Then $\tilde{\Theta}(\alpha(u,v))=$ \begin{eqnarray*} \an{1,(0,uv,0,0,(uv)^{-1},0,0,uv,0,-v,-v^{-1},-v,-u,-u^{-1},-u)}. \end{eqnarray*} Since a type (1) contraction replaces the string $0,0,a$ with $a$, this is equal to \[ \an{1,(0,uv,(uv)^{-1},uv,0,-v,-v^{-1},-v,-u,-u^{-1},-u)}. \] Since a type (2) contraction replaces the string $u,u^{-1},u$ with $0,0$ (and multiplies terms further to the right by powers of $u^2$), making three such contractions, starting on the right for convenience, gives the element \[ \an{1,(0,0,0,0,0,0,0,0)} \] which is trivial by further type (1) contractions. \item Let $a,b\in A$. Then $\tilde{\Theta}(\gamma(a,b))=$ \[ \an{1,(0,-1,0,0,-1,0,0,-1,0,0,-(a+b),0,b,0,a)}. \] Replacing $0,0,x$ with $x$ in three places, this is equal to \[ \an{1,(0,-1,-1,-1,-(a+b),0,b,0,a)}. \] A type (2) contraction replaces $-1,-1,-1$ with $0,0$ (and leaves terms to the right unaltered since $(-1)^2=1$). So our element becomes $\an{1,(0,0,0,-(a+b),0,b,0,a)}$, which is again trivial by a sequence of type (1) contractions. \item Let $u\in A^\times$, $a\in A$. Then $\tilde{\Theta}(\delta(u,a))=$ \[ \an{1,(0,-u^2a,0,-u,-u^{-1},-u,a,-u,-u^{-1},-u)}. \] Applying a type (2) contraction to the last three terms, this is equal to \[ \an{1,(0,-u^2a,0,-u,-u^{-1},-u,a,0,0)}. \] Applying a further type (2) contraction, this becomes $\an{1,(0,-u^2a,0,0,0,u^2a,0,0)}$, which is trivial by a sequence of type (1) contractions. \end{enumerate} \end{proof} \begin{thm}\label{thm:main} Let $A$ be a ring. The anti-homomorphisms $\Lambda:\pi_1(Y(A),\infty)\to U(A)$ and $\Theta:U(A)\to \pi_1(Y(A),\infty)$ are inverse to each other, and hence \[ \pi_1(Y(A),\infty)\cong U(A)^{\mathrm{op}}\cong\left(\frac{K_2(2,A)}{C(2,A)}\right)^{\mathrm{op}}. \] \end{thm} \begin{proof} Let $p=\an{X,(a_1,\ldots,a_n)}\in \pi_1(Y(A),\infty)$. Then $p=\an{1}\cdot p =\an{1,(b_1,\ldots,b_n)}$ for some $b_i\in A$ (by relation (1) of Theorem \ref{thm:pi1}, applied with connection matrix $Z=X\in B$).
Thus $\Lambda(p)=\epsilon(b_n)\cdots \epsilon(b_1)$ and hence $\Theta(\Lambda(p))=\an{1,(b_1,\ldots,b_n)}=p$. Let $\alpha\in U(A)$. Since $\epsilon(a)^{-1}=\epsilon(0)\epsilon(-a)\epsilon(0)$ in $C(A)$, we can assume without loss that $\alpha=\prod_{i=1}^n\epsilon(b_i)$ for some $b_1,\ldots,b_n\in A$. Then $\Theta(\alpha)=\an{1,(b_n,\ldots, b_1)}\in \pi_1(Y(A),\infty)$ and hence $\Lambda(\Theta(\alpha))=\prod_{i=1}^n\epsilon(b_i)=\alpha$, as required. The second stated isomorphism is Theorem \ref{thm:gamma} in Appendix \ref{sec:k2} below. \end{proof} Combining this with Proposition \ref{prop:euclid} (and recalling that $\mathrm{GL}_2(A)$ acts transitively on the set of path components of $|Y(A)|$) we deduce: \begin{cor}\label{cor:main} Let $A$ be a ring. Then $A$ is universal for $\mathrm{GE}_2$ if and only if any path component of the space $|Y(A)|$ is simply-connected. \end{cor} \subsection{Some examples} \begin{exa} \label{exa:pi1f} For any field $F$, the space $|Y(F)|$ is contractible (see Example \ref{exa:field} above) and hence $\pi_1(Y(F),\infty)=1$. The well-known fact that $K_2(2,F)$ is generated by symbols and the resulting presentation of $\spl{2}{F}$ follow from this. \end{exa} \begin{exa} \label{exa:pi1z} The space $|Y(\Bbb{Z})|$ is contractible (see Section \ref{sec:gammaz} above) and hence $\pi_1(Y(\Bbb{Z}),\infty)=1$. Thus $K_2(2,\Bbb{Z})$ is generated by symbols. The only possibly non-trivial symbol is $c(-1,-1)$. Thus we re-derive the well-known fact that $K_2(2,\Bbb{Z})$ is cyclic and generated by $c(-1,-1)$. (In fact, $c(-1,-1)\in K_2(2,\Bbb{Z})$ has infinite order.) \end{exa} \begin{exa}\label{exa:pi123} The calculations of Morita \cite{morita:k2zs} show that for the Euclidean domains $A=\Bbb{Z}[\frac{1}{2}]$ or $\Bbb{Z}[\frac{1}{3}]$, $K_2(2,A)$ is generated by symbols. \end{exa} \begin{exa}\label{exa:pi1m} Furthermore, by \cite[Proposition 2.13]{morita:k2zs}, if $A$ is a Dedekind domain and if $K_2(2,A)$ is generated by symbols, then the same is true of $K_2(2,A[\frac{1}{\pi}])$ for any prime element $\pi$ of $A$ for which the homomorphism $A^\times\to (A/\an{\pi})^\times$ is surjective. It follows that $K_2(2,\Bbb{Z}[\frac{1}{m}])$ is generated by symbols whenever $m$ can be expressed as a product of primes $m=p_1^{a_1}\cdots p_t^{a_t}$ ($a_i\geq 1$) with the property that $(\Bbb{Z}/p_i)^\times$ is generated by the residue classes $\{ -1, p_1,\ldots,p_{i-1}\}$ for all $i\leq t$. (In particular, $p_1\in\{ 2,3\}$). \end{exa} When $p\geq 5$ is a prime, however, the situation is quite different. The following result, as we explain in the proof, is essentially due to J. Morita (\cite{morita:braid},\cite{morita:mab}). \begin{lem}\label{lem:morita} Let $p\geq 5$ be a prime. Then $K_2(2, \Bbb{Z}[\frac{1}{p}])\not=C(2,\Bbb{Z}[\frac{1}{p}])$. More precisely, write $p=6k+\epsilon$ where $\epsilon\in \{ \pm1\}$. Furthermore, let $k=2^{\ell}m$ where $m$ is odd. Then the Dennis-Stein symbol $D(a,b):=D(-\epsilon\cdot 2^{\ell+1},3m)\in K_2(2,\Bbb{Z}[\frac{1}{p}])$ represents an element of infinite order in $K_2(2,\Bbb{Z}[\frac{1}{p}])/C(2,\Bbb{Z}[\frac{1}{p}])$. In fact, $D(a,b)$ represents an element of infinite order in $\left(K_2(2,\Bbb{Z}[\frac{1}{p}])/C(2,\Bbb{Z}[\frac{1}{p}])\right)^{\mathrm{ab}}$. \end{lem} \begin{proof} Let $\mathrm{St}(2,\Bbb{Z}[\frac{1}{p}])$ denote the rank one Steinberg group (see the appendix). For a group $G$, let $G^{\mathrm{mab}}$ denote $G/G^{(2)}$, where $G^{(1)}:=[G,G]$ and $G^{(2)}:=[G^{(1)},G^{(1)}]$.
For $p\geq 5$, let $M_p$ denote the group given by the presentation \[ M_p=\an{\sigma,\tau_1,\tau_2\ |\ \sigma^{p^2-1}=[\tau_1,\tau_2]=1, \sigma\tau_1\sigma^{-1}=\tau_1\tau_2^{-1}, \sigma\tau_2\sigma^{-1}=\tau_1} \] Morita (\cite{morita:braid}) has shown that there is an isomorphism $\mathrm{St}(2,\Bbb{Z}[\frac{1}{p}])^{\mathrm{mab}}\cong M_p$. Now, by definition, the subgroup of $M_p$ generated by $\tau_1$ and $\tau_2$ is isomorphic to $\Bbb{Z}\oplus \Bbb{Z}$ and the element $\sigma$ operates by conjugation as right multiplication by the matrix $ \left[ \begin{array}{cc} 1&1\\ -1&0\\ \end{array} \right]=E(1)$. Thus $\sigma^{3}$ is multiplication by $-1$ and the element $\sigma^6$ is central in $M_p$. Since $\sigma$ has no non-trivial fixed points in $\an{\tau_1,\tau_2}\cong \Bbb{Z}\oplus \Bbb{Z}$, it follows that the centre, $Z(M_p)$, of $M_p$ is the cyclic group of order $(p^2-1)/6$ generated by $\sigma^6$. However, in \cite[proof of Theorem 1]{morita:mab}, Morita shows that the kernel of the composite homomorphism \[ M_p \cong \mathrm{St}\left(2,\Bbb{Z}\left[\frac{1}{p}\right]\right)^{\mathrm{mab}} \to \spl{2}{\Bbb{Z}\left[\frac{1}{p}\right]}^{\mathrm{mab}} \] is contained in the abelian group \[ M_p^0:=\an{\tau_1,\tau_2,\sigma^{12}} \cong \Bbb{Z}\oplus\Bbb{Z}\oplus \left(\Bbb{Z}/\left((p^2-1)/{12}\right)\right). \] Thus the image of the natural map $K_2(2,\Bbb{Z}[\frac{1}{p}])\to M_p$ lies in $M_p^0$. Any central element of $\mathrm{St}(2,\Bbb{Z}[\frac{1}{p}])$ must have image in $Z(M_p)=\an{\sigma^6}$. Thus the image of \\ $C(2,\Bbb{Z}[\frac{1}{p}])\subset K_2(2,\Bbb{Z}[\frac{1}{p}])\cap Z(\mathrm{St}(2,\Bbb{Z}[\frac{1}{p}]))$ in $M_p$ lies in $M_p^0\cap \an{\sigma^6}=\an{\sigma^{12}}$. It follows that there is a well-defined homomorphism \[ \frac{K_2\left(2,\Bbb{Z}\left[\frac{1}{p}\right]\right)}{C\left(2,\Bbb{Z}\left[\frac{1}{p}\right]\right)}\to \left( \frac{K_2\left(2,\Bbb{Z}\left[\frac{1}{p}\right]\right)}{C\left(2,\Bbb{Z}\left[\frac{1}{p}\right]\right)}\right)^{\mathrm{ab}}\to \frac{M_p^0}{\an{\sigma^{12}}}\cong \an{\tau_1,\tau_2}\cong \Bbb{Z}\oplus \Bbb{Z}. \] Finally, the calculations of Morita \cite[p.~74]{morita:mab} show that the image of $D(a,b)$ under this homomorphism is nontrivial (and hence has infinite order). \end{proof} \begin{cor} For a prime $p\geq 5$, $\Bbb{Z}[\frac{1}{p}]$ is a Euclidean domain which is not universal for $\mathrm{GE}_2$. \end{cor} \begin{exa}\label{exa:pi15} Let $p\geq 5$ be a prime number. Write $p=6k+\epsilon$ where $\epsilon\in \{ \pm1\}$. Let $k=2^{\ell}m$ where $m$ is odd. By Lemma \ref{lem:morita}, the Dennis-Stein symbol $D(-\epsilon\cdot 2^{\ell+1},3m)$ corresponds to (the class of) a loop in $\pi_1(Y(\Bbb{Z}[\frac{1}{p}]),\infty)$ which has infinite order. We give an explicit formula for this loop in the next section. \end{exa} \subsection{The loop in $Y(A)$ corresponding to a Dennis-Stein symbol} Let $A$ be a ring and let $a,b\in A$ such that $u:=1-ab$ is a unit. Then these determine the \emph{ Dennis-Stein symbol} \[ D(a,b):=x_{21}(-bu^{-1})x_{12}(-a)x_{21}(b)x_{12}(au^{-1})h_{12}(u)^{-1}\in K_2(2,A). \] We now calculate the image of such an element under the composite map\\ $K_2(2,A)\to K_2(2,A)/C(2,A)\to \pi_1(Y(A),\infty)$: The map $\tilde{\alpha}$ (see Appendix \ref{sec:k2}) sends $D(a,b)$ to \[ y(-bu^{-1})\tilde{y}(-a)y(b)\tilde{y}(au^{-1})h(u^{-1})\in U(A), \] which, by definitions of the terms, is equal to \[ \epsilon(0)^3\epsilon(-bu^{-1})\epsilon(a)\epsilon(0)^3\epsilon(0)^3\epsilon(b)\epsilon(-au^{-1})\epsilon(0)^3h(u^{-1}).
\] Using the facts that $\epsilon(0)^2=h(-1)$, $h(-1)$ is central of order $2$ and $h(-1)h(u^{-1})=h(-u^{-1})$ in $C(A)$, this simplifies to: \[ \epsilon(0)\epsilon(-bu^{-1})\epsilon(a)\epsilon(b)\epsilon(-au^{-1})\epsilon(0)h(-u^{-1}). \] The map $\Theta$ in turn sends this to the element \[ \an{1, (u^{-1},u,u^{-1},0,-au^{-1},b,a,-bu^{-1},0)}\in \pi_1(Y(A),\infty). \] Now applying a type (2) contraction to $u^{-1},u,u^{-1},\ldots$, this is in the same path homotopy class as \[ \an{1, (0,0,0,-ua,u^{-2}b,u^2a,-bu^{-3},0)}. \] Applying two type (1) contractions, this is in the same class as $\an{1, (0,-ua,u^{-2}b,u^2a)}$. The corresponding loop in $Y(A)$ is \begin{eqnarray*} L_{a,b}&:=&(\infty,\infty\cdot E(0),\infty\cdot E(-ua)E(0),\infty\cdot E(u^{-2}b)E(-ua)E(0),\infty)\\ &=& (\infty,0,(ua)_-,(u^{-1}b)_+,\infty). \end{eqnarray*} When $A$ is an integral domain we thus have \[ L_{a,b}=(\infty,0,\frac{1}{ua},\frac{b}{u},\infty). \] Thus whenever $D(a,b)$ represents a nontrivial element of $K_2(2,A)/C(2,A)$, the loop $L_{a,b}$ represents a nontrivial element of $\pi_1(Y(A),\infty)$ (and conversely). \begin{exa}\label{exa:ds1} Let $p\geq 5$ and let $m$, $\ell$, $\epsilon$ be as in Example \ref{exa:pi15}. It follows that the loop \[ L_{-\epsilon\cdot 2^{\ell+1},3m}=\left(\infty,0,-\frac{1}{2^{\ell+1}p},\frac{3m}{\epsilon p},\infty\right) \] represents an element of infinite order in $\pi_1(Y(\Bbb{Z}[\frac{1}{p}]),\infty)$. For example, when $p=5$, we have $m=1$, $\ell=0$, $\epsilon=-1$. The loop $(\infty,0,-\frac{1}{10},-\frac{3}{5},\infty)$ represents an element of infinite order in $\pi_1(Y(\Bbb{Z}[\frac{1}{5}]),\infty)$. \end{exa} \section{Application: The complex $L_\bullet(A)$} For a ring $A$, $L_\bullet(A)$ is the following complex of abelian groups. $L_n(A)$ is the free abelian group with basis $X_n(A)$ where $X_n(A):= \{ (x_0,\ldots, x_n)\in Y_0(A)^{n+1}\ |\ \{ x_0,\ldots,x_n\}\in Y_n(A)\}$; i.e., a basis of $L_n(A)$ consists of \emph{ordered} $(n+1)$-cliques of $\Gamma(A)$. The boundary map is the standard simplicial boundary \[ d_n: L_n\to L_{n-1},\ d_n((x_0,\ldots, x_n)):=\sum_{i=0}^n(-1)^i(x_0,\ldots,\widehat{x_i},\ldots,x_n). \] More generally, for any simplicial complex $Y$ let us define the complex $L_\bullet(Y)$ as follows: $L_n(Y)$ is the free abelian group whose basis consists of ordered $n$-simplices of $Y$, together with the above simplicial boundary map. Thus $L_\bullet(A)=L_\bullet(Y(A))$ in this terminology. We wish to compare this complex with the \emph{ordered chain complex} $\Delta_\bullet(Y)$, defined as follows: A basis of $\Delta_n(Y)$ consists of ordered $(n+1)$-tuples $(y_0,\ldots,y_n)$ where $\{ y_0,\ldots,y_n\}$ is a simplex of $Y$ of dimension $\leq n$; i.e., we allow \emph{degenerate simplices} for which $y_i=y_j$ for some $i\not=j$. It is well known that $H_n(\Delta_\bullet(Y))\cong H_n(|Y|)$ for all $n\geq 0$ (Spanier \cite[Chapter 5]{spanier}\footnote{In \cite[Chapter 5, Section 3]{spanier} it is not clear from the definition that the complex $\Delta_\bullet(Y)$ includes degenerate simplices. However, they are certainly required in the proof of Theorem 5.3.6, which in turn is required implicitly in the proof, via acyclic models, that the homology groups of $\Delta_\bullet(Y)$ agree with those of the more standard \emph{oriented chain complex}, $C_\bullet(Y)$, of $Y$, which computes $H_\bullet(|Y|)$. }).
The obvious chain map $L_\bullet(Y)\to \Delta_\bullet(Y)$ does not generally induce an isomorphism on homology: Clearly $H_0(L_\bullet(Y))=H_0(\Delta_\bullet(Y))$ for any simplicial complex $Y$. However, if, for example, $Y$ is an $n$-simplex then $H_i(\Delta_\bullet(Y))=0$ for all $i>0$ but $H_{n}(L_\bullet(Y))$ is a free group of rank $(n+1)!\left(\frac{1}{0!}-\frac{1}{1!}+\cdots +\frac{(-1)^{n+1}}{(n+1)!}\right)$. As observed above in Section \ref{sec:clique}, any edge in $\Gamma(A)$ is contained in a $3$-clique; equivalently, any $1$-simplex in $Y(A)$ is contained in a $2$-simplex. \begin{lem}\label{lem:h1} Let $Y$ be a simplicial complex with the property that each $1$-simplex is contained in a $2$-simplex. Then the chain map $L_\bullet(Y)\to \Delta_\bullet(Y)$ induces an isomorphism \[ H_1(L_\bullet(Y))\cong H_1(\Delta_\bullet(Y)). \] \end{lem} \begin{proof} Let $f=(f_n)_n$ be the chain map $L_\bullet\to \Delta_\bullet$. We begin by observing that if $x$ is a $0$-simplex, then in $\Delta_\bullet$, we have $(x,x)=d_2(x,x,x)$. Thus if $z=\sum_{i}n_i(x_i,y_i)+\sum_jm_j(z_j,z_j)$ is a $1$-cycle in $\Delta_1$, where $x_i\not=y_i$, then $d_1(\sum_{i}n_i(x_i,y_i))=0$ and $z=f_1(\sum_{i}n_i(x_i,y_i))$ modulo boundaries, and thus $H_1(L_\bullet)$ maps onto $H_1(\Delta_\bullet)$. Now suppose that $z=\sum_in_i(x_i,y_i)\in L_1$ has the property that $z=d_2(w)$ for some $w\in \Delta_2$. We must prove that $z\in d_2(L_2)$. We can write \[ w=\sum_jm_j(a_j,b_j,c_j)+\sum_kt_k(r_k,s_k,r_k)+\sum_\ell w_\ell(u_\ell,u_\ell,v_\ell)+\sum_m c_m(d_m,e_m,e_m), \] where the $a_j,b_j,c_j$ are distinct from each other and the $r_k\not=s_k$. Since $d_2(u,u,v)=(u,u)$ and $d_2(d,e,e)=(e,e)$ we must have $z=d_2(w')$ where \[ w'=\sum_jm_j(a_j,b_j,c_j)+\sum_kt_k(r_k,s_k,r_k). \] Now $d_2(r_k,s_k,r_k)=((r_k,s_k)+(s_k,r_k))-(r_k,r_k)$. Comparing both sides of $z=d_2(w')$, it follows that $\sum_kt_k(r_k,r_k)=0$ and hence \[ d_2\left(\sum_kt_k(r_k,s_k,r_k)\right)=\sum_kt_k\left( (r_k,s_k)+(s_k,r_k)\right). \] Now for each $k$, choose $z_k$ such that $\{ r_k,s_k,z_k\}$ is a $2$-simplex in $Y$. Then \[ d_2\left( \sum_kt_k\left( (r_k,s_k,z_k)+(s_k,r_k,z_k)\right)\right)=\sum_kt_k\left( (r_k,s_k)+(s_k,r_k)\right). \] Thus $z=d_2(w'')$ where \[ w''=\sum_jm_j(a_j,b_j,c_j)+ \sum_kt_k\left( (r_k,s_k,z_k)+(s_k,r_k,z_k)\right)\in L_2 \] as required. \end{proof} \begin{thm}\label{thm:h1} Let $A$ be a ring. Then there are isomorphisms \[ H_1(L_\bullet(A))\cong U(A)^{\mathrm{ab}}\cong \left(\frac{K_2(2,A)}{C(2,A)}\right)^{\mathrm{ab}}. \] \end{thm} \begin{proof} By Lemma \ref{lem:h1}, we have \[ H_1(L_\bullet(A))\cong H_1(\Delta_\bullet(Y(A)))\cong H_1(|Y(A)|)\cong \pi_1(Y(A),\infty)^{\mathrm{ab}}. \] By Theorem \ref{thm:main}, $\pi_1(Y(A),\infty)^{\mathrm{ab}}\cong U(A)^\mathrm{ab} \cong \left(K_2(2,A)/C(2,A)\right)^{\mathrm{ab}}$. \end{proof} \begin{cor} If $A$ is a $\mathrm{GE}_2$-ring which is universal for $\mathrm{GE}_2$, then $H_0(L_\bullet(A))\cong \Bbb{Z}$ and $H_1(L_\bullet(A))=0$. Furthermore, if $K_2(2,A)$ is solvable, the converse holds. \end{cor} \begin{proof} If $A$ is a $\mathrm{GE}_2$-ring then $H_0(L_\bullet(A))\cong H_0(Y(A))\cong \Bbb{Z}[\pi_0(Y(A))]\cong \Bbb{Z}$, and if $A$ is universal for $\mathrm{GE}_2$ then $H_1(L_\bullet(A))\cong U(A)^{\mathrm{ab}}$, which is trivial. If $K_2(2,A)$ is solvable, then so is $K_2(2,A)/C(2,A)$, and hence so is $U(A)$. In this case $U(A)=1$ if and only if $U(A)^{\mathrm{ab}}=1$.
\end{proof} \begin{exa}\label{exa:lm} $\Bbb{Z}[\frac{1}{m}]$ is universal for $\mathrm{GE}_2$ if $m$ satisfies the condition described in Example \ref{exa:pi1m}. Thus \[ H_0(L_\bullet(\Bbb{Z}\left[\frac{1}{m}\right]))=\Bbb{Z}\mbox{ and } H_1(L_\bullet(\Bbb{Z}\left[\frac{1}{m}\right]))=0 \] for all such $m$. \end{exa} \begin{exa}\label{exa:pi1p} Let $p\geq 5$ be a prime and let $m$, $\ell$, $\epsilon$ be as in Example \ref{exa:pi15}. By Lemma \ref{lem:morita}, $\left(K_2(2,\Bbb{Z}[\frac{1}{p}])/C(2,\Bbb{Z}[\frac{1}{p}])\right)^{\mathrm{ab}}\not=0$ and hence $H_1(L_\bullet(\Bbb{Z}[\frac{1}{p}]))\not=0$. More precisely, the image of the loop $L_{-\epsilon\cdot 2^{\ell+1},3m}$ in $L_1(\Bbb{Z}[\frac{1}{p}])$ represents a homology class of infinite order. Thus, in particular, the cycle \[ \left(\infty,0\right)+\left(0,-\frac{1}{2^{\ell+1}p}\right)+\left(-\frac{1}{2^{\ell+1}p},\frac{3m}{\epsilon p}\right)+\left(\frac{3m}{\epsilon p},\infty\right)\in L_1\left(\Bbb{Z}\left[\frac{1}{p}\right]\right) \] is not a boundary in the complex $L_\bullet(\Bbb{Z}[\frac{1}{p}])$. \end{exa}
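\begin{exa} To make the last example fully explicit in the smallest case, take $p=5$, so that $\epsilon=-1$, $m=1$ and $\ell=0$ as in Example \ref{exa:ds1}: the cycle
\[
\left(\infty,0\right)+\left(0,-\frac{1}{10}\right)+\left(-\frac{1}{10},-\frac{3}{5}\right)+\left(-\frac{3}{5},\infty\right)\in L_1\left(\Bbb{Z}\left[\frac{1}{5}\right]\right)
\]
represents a homology class of infinite order in $H_1(L_\bullet(\Bbb{Z}[\frac{1}{5}]))$, and in particular is not a boundary. \end{exa}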
\section{Introduction} Deciding that two groups are isomorphic is a clear task: exhibit an invertible homomorphism between the groups. On the other hand, understanding why two groups are non-isomorphic can take many different forms, and in this paper we demonstrate how little we know about non-isomorphism. To illustrate the situation, we can prove that the dihedral group $D_{2^n}$ of order $2^n$ is non-isomorphic to the quaternion group $Q_{2^n}$ of order $2^n$ by checking that no mapping of generators for $D_{2^n}$ to generators for $Q_{2^n}$ extends to a homomorphism. Instead, we usually report on some group isomorphism invariant, e.g. that $D_{2^n}$ has many elements of order $2$ whereas $Q_{2^n}$ has only one. The latter is both informative and easier to prove. In this article, we produce a family of groups, each of order $p^n$, realizing $p^{O(n^2)}$ different isomorphism types, but for which no obvious isomorphism invariant presents itself to distinguish a pair of groups from the family. Yet, given a pair of groups from the family we can efficiently (in polynomial time) test if they are isomorphic. If the algorithm does not produce an isomorphism, then we have proved that the groups are non-isomorphic. Such a proof is as informative as a proof that $D_{2^n}\not\cong Q_{2^n}$ by exhausting all possible functions between them. Such ``zero-knowledge'' non-isomorphism tests rightfully raise suspicion. The family we produce is one of many, and it arose out of a larger study of Camina groups; we will say more about this in Section \ref{sec:closing}. Though our family is very simple to describe, it also lies within the class of groups for which isomorphism appears most difficult to understand. As a consequence, the group theory aspects of the proof are modest and straight-forward, but most of the proof is accomplished by use of bilinear maps, rings with involutions, and tensor products. As these are not yet common tools for groups, we survey in Section \ref{sec:survey} the main ideas of these tools. \subsection{Main results} A group $H$ is a {\em generalized Heisenberg group} if there is a field $K$ and an integer $m$ such that $H$ is isomorphic to \begin{align}\label{eq:gen-Hei} H_m(K) & = \left\{ \begin{bmatrix} 1 & u & s \\ 0 & I_m & v^t \\ 0 & 0 & 1 \end{bmatrix}: s\in K, u,v\in K^m\right\}. \end{align} When $m = 1$ we call $H$ a \emph{Heisenberg group}. The groups in which we are interested are the nonabelian quotients of $H$. First, a generalized Heisenberg group $H$ has an extraordinary number (compared to $|H|$) of nonisomorphic quotients of a fixed order. We prove: \begin{thm}\label{thm:main} For every prime $p>2$ and every integer $n\geq 12$, there is a generalized Heisenberg group (in fact a Heisenberg group) of order $p^{5n/4+O(1)}$ that has $p^{n^2/24+O(n)}$ isomorphism classes of quotient groups that have order $p^n$. \end{thm} It is not surprising that a group will have a large number of nonisomorphic quotients (consider free groups). For comparison, Higman \cite[Section 2]{Higman:enum} created groups $F_N$ having $N^{O(\log_p^2 N)}$ distinct isomorphism classes of quotients of size $N = p^n$; yet, $F_N$ has size $N^{O(\log_p N)}$. The surprise in \thmref{thm:main} is that we obtain $N^{O(\log_p N)}$ distinct isomorphism classes of groups of size $N$ from a group of size as small as $N^{5/4+O(1/\log_p N)}$.
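Unpacking the exponents in \thmref{thm:main} (with $N=p^n$, so that $n=\log_p N$):
\[
p^{5n/4+O(1)}=N^{5/4+O(1/\log_p N)} \qquad\mbox{and}\qquad p^{n^2/24+O(n)}=N^{(\log_p N)/24+O(1)}.
\]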
As these quotients are so large compared to the size of the parent group, they must have an extraordinary number of relations in common, yet they still display enormous diversity. Despite the great number of isomorphism classes guaranteed by \thmref{thm:main}, our second result claims that we can relatively simply determine when two quotients of a generalized Heisenberg group are isomorphic. Algorithms to test for an isomorphism between general groups of order $N$ return an answer in $N^{\log_p N+O(1)}$-time \cite{Miller:nlogn}, where $p$ is the smallest prime dividing $N$, and where \emph{time} indicates an upper bound on the number of steps a routine performs. It is an important open problem to determine if isomorphism testing of groups can be done in polynomial time in the order $N$ of the groups, but progress in this direction has been slow. Amongst the hardest cases are the groups of order $N = p^n$, $p$ a prime, having nilpotence class $2$, such as quotients of generalized Heisenberg groups. Indeed, for these groups, the most advanced method, known as the \emph{nilpotent quotient algorithm}, runs in time $N^{\log_c N} = p^{n^2/c'+O(n)}$, where $c$ and $c'$ depend only on $p$; see \remref{rem:nil-q-algo}. For a survey of group isomorphism algorithms see \cite{Babai:iso,CH:iso,OBrien:iso}. The algorithm in our next theorem works with groups given by generators (as permutations or matrices) and also with groups specified by black-box polycyclic presentations,\footnote{We say `black-box' here because multiplication in polycyclic groups is in the worst case exponential in the length of the presentation. However, in practice operating in polycyclic groups is amongst the most efficient means for working with $p$-groups. So we regard the cost of multiplication as an acceptable constant and measure efficiency in that setting in terms of number of group operations.} and so polynomial time in these contexts is a function of these very terse input methods. Hence, our algorithm represents an exponential improvement over all other known isomorphism tests that apply to these $p$-groups. We had originally proved it only in the context of permutation representations. We are indebted to L. Ronyai for an elegant adaptation (\lemref{lem:Ronyai-trick}) that extends our earlier algorithm to the remaining common input methods for groups. We prove: \begin{thm}\label{algo:main} There are algorithms that determine \begin{enumerate}[(i)] \item if a group $G$ (given by permutations, matrices, or a black-box polycyclic presentation) is an epimorphic image of an odd order generalized Heisenberg group, and, if so, return an epimorphism $H_m (K) \to G$ with $|H_m (K)|$ as small as possible, and \item if two groups that are epimorphic images of odd order generalized Heisenberg groups are isomorphic. \end{enumerate} The algorithms are deterministic polynomial-time in $\log |G|+p$ and Las Vegas\footnote{Las Vegas algorithms always return correct answers, but with a user-specified probability $\varepsilon>0$ they may abort without an answer.} polynomial-time in $\log |G|$ (owing to the implicit need to factor polynomials over finite fields of characteristic $p$). \end{thm} In our third and final result, we list our failures to distinguish the quotients of odd order generalized Heisenberg groups $H$ by traditional means.
In light of \thmref{thm:main}, one might expect that two quotients $G_1$ and $G_2$ of $H$ with the same order $p^n$ will be considerably distinct as groups, and in view of \thmref{algo:main} (ii), it would likely be straightforward to describe these differences. Unfortunately, the algorithm of \thmref{algo:main} (ii) does not appear to produce a group-theoretic property to characterize each isomorphism class. Because of \thmref{algo:main} (i), we are concerned only with the differences between quotients $G_1$ and $G_2$ of a common generalized Heisenberg group $H = H_m (K)$ for which $|H|$ is as small as possible. We say such quotients are \emph{indigenous} to $H$. So our effort is to find isomorphism invariants for indigenous quotients $G_1$ and $G_2$ of $H$. We also assume $|G_1| = |G_2|$, but amazingly that assumption appears to force a great number of typically discerning isomorphism invariants to be the same for both $G_1$ and $G_2$. Every non-trivial element of $G_1$ and $G_2$ has order $p$. Also, $G_1$ and $G_2$ have isomorphic character tables; indeed, the centralizer of every non-central element has the same size. Next, we consider recent advances on decompositions of $p$-groups as in \cite{Wilson:unique-cent}, but we find indigenous quotients are directly and centrally indecomposable and of the same `type' of indecomposability. With some modest constraints on the $|G_i|$ relative to $|H|$, we retain the large number of isomorphism types described in \thmref{thm:main} but also constrain the automorphism groups of the $G_i$ to have identical subgroups $C_i=C_{\Aut G_i}(G'_i)$; furthermore, $\Aut G_i/C_i$ can take at most $2d(K)$ different values, where $d(K)$ is the number of divisors of $\log_p |K|$. In fact, if $\log_p |K|$ is prime, we have at most $2$ types of automorphism groups possible. The isomorphism invariants just described are often quite powerful even in difficult contexts involving $p$-groups of class $2$, e.g. \cite[pp. 143--144]{Verardi} \& \cite[p. 99]{Elation}. Therefore, we found it startling to have no use for them on such a large family of groups. We hope we have illustrated the need for creative alternative structural properties that will apply to $p$-groups of class $2$. Ideally, these new properties would be easily computed (say in polynomial time) and would lead to isomorphism invariants that would help us understand isomorphism of $p$-groups in broader contexts. Admittedly, the interest in quotients of generalized Heisenberg groups is narrow, but we use these as an example of an entirely obvious family of groups for which the group isomorphism problem presents some of its most puzzling properties. \subsection{Survey}\label{sec:survey} Because of \thmref{algo:main}, we cannot assume that a group is specified in any manner relating to the natural definition of a Heisenberg group. Therefore, our first step is to uncover properties of a group $G$ that determine when it is a generalized Heisenberg group and when it is an epimorphic image of a generalized Heisenberg group. To obtain a usable algorithm, we also take care to involve properties of $G$ that can be computed efficiently. The first step uses the commutation map of a $p$-group of class $2$. This map $b =\Bi(G): G/Z(G) \times G/Z(G) \to G'$ assigns $b(Z(G)x,Z(G)y) = [x,y]$. Baer observed that $b$ is biadditive. Using this observation, we are able to translate our group questions to linear algebra and classical geometry.
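To make Baer's map concrete in the smallest case, consider $H=H_1(K)$ from \eqref{eq:gen-Hei} and write its elements as triples $(u,v,s)\in K\times K\times K$. Matrix multiplication gives
\[
(u_1,v_1,s_1)(u_2,v_2,s_2)=(u_1+u_2,\,v_1+v_2,\,s_1+s_2+u_1v_2),
\]
and a direct computation of commutators then shows that $\Bi(H)$ is the classical symplectic form
\[
b\bigl((u_1,v_1),(u_2,v_2)\bigr)=u_1v_2-u_2v_1
\]
on $K^2=H/Z(H)$.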
From this result, we can identify when $G$ is a generalized Heisenberg group by determining the largest commutative ring $K = \Cent (b)$ for which $b$ becomes $K$-bilinear. We show $G$ is a generalized Heisenberg group if and only if $K$ is a field and $b$ is an alternating nondegenerate $K$-form (\thmref{thm:rec-Hei}). To recognize epimorphic images $G$ of a generalized Heisenberg group $H$, we first remark that $b=\Bi(G)$ factors through $\Bi(H)$. To construct a suitable group $H$ from $G$, we construct $A = \Adj (b)$ as the largest ring over which $b$ factors through the tensor product $\otimes_A : G/Z(G) \times G/Z(G) \to (G/Z(G)) \otimes_A (G/Z(G))$ -- that requires that $A$ be defined to act on the right and left of $G/Z(G)$ and so $A$ is equipped with an anti-isomorphism of order at most $2$, i.e. an \emph{involution}. Using properties of simple rings with involutions and their representations, we show that for epimorphic images of Heisenberg groups, the tensor product $\otimes_{A}$ is a nondegenerate alternating $F$-form for the center $F$ of $A$. Indeed, $G$ is an epimorphic image of $H_m (F)$ where $2m=\dim_F (G/Z(G))$; in fact, $G$ is indigenous to $H_m(F)$ (\thmref{thm:rec-q-Hei}). Our tools so far are computable and rely mostly on linear algebra techniques and factoring polynomials. In particular, we have described enough already to prove \thmref{algo:main}(i). The next crucial step is to show that when $G_1$ and $G_2$ are indigenous quotients of a generalized Heisenberg group $H$, then every isomorphism $\phi : G_1 \to G_2$ lifts to an automorphism of $H$ (\thmref{thm:lift-iso}). This is done by using $\phi$ to induce a pseudo-isometry $(\varphi;\ct{\varphi})$ from $b_1=\Bi(G_1)$ to $b_2=\Bi(G_2)$ which is then extended to a pseudo-isometry $(\varphi;\ct{\Phi})$ between the tensors $\otimes_{\Adj (b_1)}$ and $\otimes_{\Adj (b_2)}$ (pseudo-isometry is the appropriate equivalence relation between alternating biadditive maps). As the $G_i$ are indigenous to $H$, $\Bi(H)$ is pseudo-isometric to both $\otimes_{\Adj (b_1)}$ and $\otimes_{\Adj (b_2)}$, and so, we can obtain an automorphism of $H$ from $(\varphi;\ct{\Phi})$. Finally, we prove our main theorems by considering the well-known structure of the automorphism group of a generalized Heisenberg group $H$. From the isomorphism lifting property, two epimorphic images of $H$ are isomorphic if and only if their kernels lie in the same $(\Aut H)$-orbit. As these kernels can be identified with $\mathbb{Z}/p$-subspaces of a finite field $K$, this amounts to understanding the $(\Gal(K)\ltimes K^{\times})$-orbits of the $\mathbb{Z}/p$-subspaces of $K$. Each of these orbits is small, and so there are many orbits. That explains the many isomorphism types in \thmref{thm:main}. We use Ronyai's modification to test when two subspaces lie in the same orbit and so produce a very efficient test of isomorphism (\thmref{algo:main} (ii)). \subsection{Outline} Section \ref{sec:back} gives background and Section \ref{sec:rec} deals with recognizing quotients of generalized Heisenberg groups. We prove our main theorems in Section \ref{sec:main}. Section \ref{sec:invariants} demonstrates a list of typically sensitive group isomorphism invariants which here are of no use. Section \ref{sec:closing} considers $2$-groups and a problem of Brauer. \section{Background}\label{sec:back} Throughout, $p$ will denote an odd prime. All our groups, rings, and modules will be finite unless context makes this obviously false.
We will use the following standard group theory notations. For elements $g,h\in G$, write $g^h = h^{-1}g h$, $[g,h] = g^{-1} g^h$, and $g^G = \{g^h:h\in G\}$. To fit these conventions, homomorphisms $\varphi : G \to H$ are evaluated as $g \varphi$, for $g \in G$, and all other functions are, as usual, on the left. Given subgroups $H,K \leq G$, set $[H,K] = \langle [h,k] : h \in H, k\in K\rangle$. Also, for a subset $S\subseteq G$, we write $C_G(S) = \{h\in G:\forall g\in S, [g,h] = 1\}$ to denote the \emph{centralizer} of $S$ in $G$. Call $G' = [G,G]$ the \emph{commutator} subgroup of $G$, and $Z (G) = C_G (G)$ the \emph{center} of $G$. We say that $G$ is \emph{nilpotent of class $2$} if $1 < G'\leq Z (G) < G$. A group $G$ has \emph{exponent $p$} if $G^p = \langle g^p : g\in G \rangle$ is trivial. \subsection{Bimaps} In this work, we will typically need $k$ to be a finite field, but for the moment we require only that $k$ be a commutative unital ring and that $U$, $V$, and $W$ be $k$-modules. We write $\End_k U$ for the ring of $k$-linear endomorphisms of $U$ and $\GL_k (U)$ for the group of $k$-linear automorphisms of $U$. In cases where $k$ is omitted from the notation, it should be assumed to be the integers, which in most contexts could further reduce to the appropriate prime subfield $\mathbb{Z}/p$. A \emph{$k$-bimap} is a function $b : U \times V \to W$ of $k$-modules $U$, $V$, and $W$ with \begin{align*} b (u+rx,v) & = b (u,v) + rb (x,v) & ( \forall u,x \in U,\forall v \in V,\forall r \in k)\\ b (u,v+rx) & = b (u,v) + rb (u,x) & ( \forall u \in U,\forall v,x \in V,\forall r \in k). \end{align*} We say $b$ is {\em alternating} if $U = V$ and $b (u,u) = 0$ for all elements $u \in V$. Every $k$-bimap is also a $\mathbb{Z}$-bimap (even a $\mathbb{Z}/e$-bimap where $e$ annihilates $U \times V \times W$). We say that $b$ is a \emph{$k$-form} if $W$ is a cyclic $k$-module. Given $X \subseteq U$ and $Y \subseteq V$, define $b (X,Y) = \langle b (x,y) : x \in X, y \in Y \rangle$. For a $k$-linear map $\varphi : W \to Z$, we use $b\varphi$ for the bimap $U \times V \to Z$ defined as follows: \begin{align*} (b\varphi) (u,v) & = b (u,v) \varphi & (\forall u \in U,\forall v \in V). \end{align*} In general we say a bimap $c:U\times V\to X$ \emph{factors through} $b$ if there is a $\phi:W\to X$ such that $c=b\phi$. The left and right \emph{radicals} of $b$ are the submodules $U^{\bot} = \{ v \in V : b (U,v) = 0 \}$ and $V^{\top} = \{u \in U : b (u,V) = 0 \}$. Say that $b$ is \emph{nondegenerate} if $U^{\bot} = 0$ and $V^{\top} = 0$. If $b$ is alternating, then $U^{\top} = V^{\bot}$. A pair $b : U \times V \to W$ and $b' : U' \times V' \to W'$ of $k$-bimaps are \emph{(strongly) $k$-isotopic} if there is a triple $(\lt{f} : U \to U', \rt{f} : V \to V'; \ct{f} : W \to W')$ of $k$-linear isomorphisms such that \begin{align*} b (u,v)\ct{f} & = b'(u\lt{f}, v\rt{f}) & (\forall u \in U,\forall v \in V). \end{align*} (There is a notion of weak isotopism which will not be needed here.) If $U = V$ and $U' = V'$, then we can consider a $k$-{\em pseudo-isometry} which is a $k$-isotopism $(\lt{f},\rt{f};\ct{f})$ where $\lt{f} = \rt{f} =: f$. We abbreviate $(\lt{f},\rt{f};\ct{f})$ by $(f;\ct{f})$ in that instance, but we remark that $\ct{f}$ is not completely determined by $f$ unless $W = b (V,V)$. Finally, if $W = W'$, then we define an \emph{isometry} as a pseudo-isometry $(f;\ct{f})$ with $\ct{f} = 1_W$.
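For example, if $b:V\times V\to W$ is a $k$-bimap and $\lambda\in k^{\times}$, then $(\lambda 1_V;\lambda^2 1_W)$ is a $k$-pseudo-isometry from $b$ to itself, since $b(\lambda u,\lambda v)=\lambda^2 b(u,v)$; it is an isometry only when $\lambda^2=1$.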
In particular, we have the following natural groups of pseudo-isometries and isometries for a $k$-bimap $b : V \times V \to W$: \begin{align*} \Psi\Isom_k(b) & = \{(f;\ct{f})\in \GL_k(V) \times \GL_k(W):\forall u,v\in V, b (uf,vf) = b (u,v)\ct{f}\}\\ \Isom_k(b) & = \{(f;\ct{f})\in \Psi\Isom_k(b): \ct{f} = 1\}\normaleq \Psi\Isom_k(b). \end{align*} \begin{remark}\label{rem:one-alt-form} For every alternating nondegenerate $K$-form $j:V \times V \to K$ there is a $K$-basis $\{e_1,\dots, e_m,f_1,\dots,f_m\}$ of $V$ such that $j(e_i,e_j) = 0 = j(f_i,f_j)$ and $j(e_i,f_j) = \delta_{ij}$, for all $i$ and $j$ in $\{1,\dots,m\}$. Hence, there is only one $K$-pseudo-isometry class of nondegenerate alternating $K$-forms in each dimension, and we take the bimap of \eqref{eq:alt} as a canonical representative from that class, defined by \begin{align}\label{eq:alt} j(u,v) & = u\begin{bmatrix} 0 & I_m\\ -I_m & 0 \end{bmatrix} v^t & (\forall u,v \in K^{2m}). \end{align} \end{remark} \subsection{Baer's correspondence}\label{sec:Baer} We work with odd $p$-groups by means of bimaps as introduced by Baer \cite{Baer:class-2}. This method is the first approximation of the now well-established use of the Mal'cev-Kaloujnine-Lazard correspondence (sometimes inadequately referred to as the Baker-Campbell-Hausdorff formula); see \cite[Section V.5]{Jacobson:Lie} and \cite[Section 10]{Khukhro} for details. In Section \ref{sec:2-groups}, we make a modest effort to extend this correspondence for use with Heisenberg $2$-groups. Associated to each group $G$ of nilpotence class $2$ (without restriction on its order) is a function $b = \Bi (G):G/Z(G) \times G/Z(G) \to G'$ where \begin{align} b (Z(G)x,Z(G)y) & = [x,y] & (\forall x,y \in G ). \end{align} Baer showed that $b$ is an alternating nondegenerate $\mathbb{Z}$-bimap, and we now write it additively. If the exponent of $G$ is a prime $p$ (or more generally, if $G^p\leq Z(G)$ and $(G')^p = 1$), then $b$ is a $\mathbb{Z}/p$-bimap. We say that groups $G_1$ and $G_2$ of nilpotence class $2$ are \emph{isoclinic} if $\Bi (G_1)$ and $\Bi (G_2)$ are $\mathbb {Z}$-pseudo-isometric. (This agrees with the usual broader meaning of isoclinism introduced by P. Hall.) When $G_1$ and $G_2$ are isomorphic, they are immediately isoclinic. Yet, $D_8$ and $Q_8$ are isoclinic but nonisomorphic groups. \begin{ex}\label{ex:j-Hei} If $H = H_m(K)$, then \begin{align*} H' = Z(H) = \left\{\begin{bmatrix} 1 & 0 & s \\ 0 & I_m & 0 \\ 0 & 0 & 1 \end{bmatrix} : s \in K\right\}, \end{align*} and $\Bi (H)$ is an alternating nondegenerate $K$-form. \end{ex} In particular, $\Bi (H)$ is $\mathbb {Z}$-pseudo-isometric to $j:K^{2m} \times K^{2m} \to K$ in \eqref{eq:alt}. (Later in Section \ref{sec:centroid} we show $\Bi (H)$ is a natural $K$-bimap and as such is $K$-pseudo-isometric to $j$, but for now $\Bi (H)$ is defined only as a $\mathbb {Z}$-bimap.) Baer's bimap (above) establishes a natural correspondence between certain nilpotent groups of class $2$ and alternating bimaps. If $b : V \times V \to W$ is an alternating $\mathbb{Z}[1/2]$-bimap, then define the corresponding Baer group $G=\Grp(b)$ for $b$ as the set $V \times W$ equipped with the product: \begin{align}\label{def:Baer-group} (u;s)(v;t) & = \left(u+v;s+t+\frac{1}{2}b (u,v)\right). \end{align} This is a group with familiar properties including: $\forall u,v\in V$, $\forall s,t\in W$, $\forall e\in\mathbb{Z}$, \begin{align} \label{eq:exp} (u;s)^e & = \left(eu; es \right), \textnormal{ and }\\ \label{eq:comm} [(u;s),(v;t)] & = (0; b (u,v) ).
\end{align} Hence, the center and commutator subgroups are as follows: \begin{align} G'& = 0 \times b (V,V)\leq 0 \times W \leq V^{\bot(b)} \times W = Z(G). \end{align} In particular, $G$ is nilpotent of class $2$. Notice that every $\mathbb{Z}$-pseudo-isometry $(\varphi;\hat{\varphi})$ from $b$ to another bimap $b':V' \times V' \to W'$ induces an isomorphism $(u;s)\mapsto (u\varphi;s\hat{\varphi})$ from $\Grp(b)$ to $\Grp(b')$. Hence, if $b$ is nondegenerate and $W = b (V,V)$, then \eqref{eq:comm} implies that $b$ and $\Bi (\Grp(b))$ are naturally pseudo-isometric (by identifying $W$ with $0 \times W = \Grp(b)' = Z(\Grp(b))$ and $V$ with $(V \times W)/(0 \times W)$). Also, for nilpotent groups $G$ of class $2$ for which $G/Z(G)$ and $G'$ have no $2$-torsion, it follows that $G$ is isoclinic to $\Grp(\Bi (G))$. When $G^p = 1$ (which implies $p>2$) and $G' = Z(G)$, it is possible to upgrade isoclinism to isomorphism. \begin{prop}[Baer, 1939]\label{prop:Baer-correspondence} If $G$ is a $p$-group where $1 = G^p < G' = Z(G) < G$ (so $p>2$), then every transversal $\ell:G/G' \to G$ with $0\ell = 1$ induces an isomorphism $\varphi_{\ell}:G \to \Grp(\Bi (G))$. Also, \begin{align*} \Aut G\cong \Psi\Isom_{\mathbb{Z}/p}(\Bi (G)) \ltimes_{\tau} \hom_{\mathbb{Z}/p}(G/Z(G),G'), \end{align*} where for each $f\in \hom_{\mathbb{Z}/p}(G/Z(G),G')$ and each $(\varphi;\ct{\varphi})\in\Psi\Isom_{\mathbb{Z}/p}(\Bi (G))$, $(f)(\varphi;\ct{\varphi})\tau = \varphi^{-1} f\ct{\varphi}$. Specifically, if $G = \Grp(b)$ for an alternating $\mathbb{Z}/p$-bimap $b:V \times V \to W$ with $W = b (V,V)$, then \begin{enumerate}[(i)] \item for all $(\varphi;\ct{\varphi})\in \Psi\Isom_{\mathbb{Z}/p}(\Bi (G))$ and all $(u;s)\in V \times W$, $(u;s)^{(\varphi;\ct{\varphi})} = (u\varphi;s\ct{\varphi})$, and \item after canonically identifying $V$ with $G/Z(G) = (V \times W)/(0 \times W)$ and $W$ with $G' = 0 \times W$, for all $f\in\hom(V,W)$ and all $(u;s)\in V \times W$, $(u;s)^{f} = (u;s+uf)$. \end{enumerate} \end{prop} \begin{proof} For the isomorphism of $G$ to $\Grp(\Bi (G))$ see \cite[Proposition 3.10]{Wilson:unique-cent}. For the remaining properties, observe $\Psi\Isom(\Bi (G))$ embeds in $\Aut \Grp(\Bi (G))$ as argued above. Since $G' = Z(G)$ is characteristic, $\Aut G \to \Psi\Isom(\Bi (G))$ by $\phi\mapsto (\phi|_{G/Z(G)};\phi|_{G'})$. The kernel is $C_{\Aut G}(G/Z(G))\cong \hom(G/Z(G),G')$ acting as described in (ii). Compare \cite[Proposition 3.8]{Wilson:unique-cent}. \end{proof} \begin{remark} A detour into abstraction explains a few subtle choices in our definitions. Baer's design for $\Bi$ is more clever than our treatment in that the role of $Z(G)$ can be replaced with a normal subgroup $M$ between $G'$ and $Z(G)$. This allows one to insist that $M$ be fully invariant, perhaps even $G'$. That choice makes $G \mapsto \Bi_M(G)$ a functor from the category of nilpotent groups of class at most $2$ to the category of alternating bimaps equipped with an appropriate set of morphisms. However, such bimaps can be degenerate. Instead, our choice of $M = Z(G)$ establishes a functor from the category of nilpotent groups of class at most $2$ equipped with isoclinisms into the category of nondegenerate alternating bimaps equipped with $\mathbb{Z}$-pseudo-isometries. \end{remark} \section{Recognizing quotients of Heisenberg groups}\label{sec:rec} In this section, we focus on determining when a group $G$ is an epimorphic image of a generalized Heisenberg group $H_m(K)$.
To be clear, we do not mean that $G$ should be specified by matrices over the field $K$; in fact, neither $K$ nor $m$ is known at the start, and the abstract group properties of $G$ must instead be used to reconstruct $K$ and $m$. This is necessary since we might only know a set of generators as permutations or matrices for an arbitrary representation of $G$, or a polycyclic presentation of $G$. In such instances, $K$ and $m$ are not provided. Indeed, one may even ask if the field $K$ is necessary to define a generalized Heisenberg group, which we affirm by proving that one may always recover an isomorphic copy of $K$ from the multiplication of a generalized Heisenberg group (\thmref{thm:rec-Hei}). Therefore, the representation of the group is irrelevant. We then generalize this technique to recognize abstract groups that are epimorphic images of generalized Heisenberg groups (\thmref{thm:rec-q-Hei}). The tools used to recognize these groups lead directly to the proofs of our main theorems in the following section. \subsection{Centroids}\label{sec:centroid} A \emph{centroid}\footnote{This definition is the generalization of centroids of non-associative rings \cite[pp. 147--153]{Kaplansky:rings}. For bimaps this appears for the first time in \cite{Myasnikov} under the name \emph{enrichment ring}, and in this general form in \cite[Section 5.2]{Wilson:direct-prod}.} of an alternating bimap $b:V \times V \to W$ is a ring $C$ over which $b$ is a $C$-bimap and $C$ is universal with that property. That is to say, if $b$ is also an $R$-bimap, then there is a unique homomorphism $\varphi:R \to C$ such that for all $r\in R$, all $v\in V$, and all $w\in W$, $vr = v(r\varphi)$ and $wr = w(r\varphi)$. As with non-associative algebras (cf. \cite[pp. 147--153]{Kaplansky:rings}), a centroid $C$ for $b$ always exists and it can be described as the ring: \begin{align*} \Cent(b) & = \{(f;h)\in\End V \times \End W: \forall u,v\in V, b (uf,v) = b (u,v)h = b (u,vf)\}. \end{align*} The universal property of a centroid for $b$ makes it unique to $b$, up to a canonical isomorphism. If $b$ is nondegenerate and $b (V,V) = W$, then $\Cent(b)$ is commutative: for all $(f;h),(f';h')\in \Cent(b)$ and all $u,v\in V$ \begin{align*} b (u(ff'),v) & = b (uf,vf') = b (u,vf')h = b (uf',v)h = b (u(f' f), v). \end{align*} As $b$ is nondegenerate, $f f' = f'f$.\footnote{The basic heuristic used here is a \emph{three-pile-shuffle}: given three piles of cards (the three places for the functions), by moving one card from the top of one pile to the top of another eventually every possible permutation of the three piles can be had. We argue similarly later without details.} If $(f;h), (f';h')\in \Cent(b)$ and $h = h'$, then \begin{align*} b (u(f-f'),v) & = b (uf,v) - b (uf',v) = b (u,v)h - b (u,v)h' = 0. \end{align*} Hence, $u (f - f') = 0$ for all $u \in V$ so that $f = f'$. In a similar fashion, it follows that $\Cent(b)$ is faithfully represented in its restriction to $W$. In particular, if $j:V \times V \to W$ is a nondegenerate $K$-bimap for a field $K$ with $\dim_K W = 1$ (i.e. a $K$-form), then $K$ embeds in $\Cent(j)$ and so $K\hookrightarrow \Cent(j)|_W\subseteq \End_K W\cong K$; thus, $\Cent(j)\cong K$. For more on centroids of bimaps see \cite[Section 5.2]{Wilson:direct-prod}. We use the centroid to recover $K$ from the multiplication of a generalized Heisenberg group $H$ over $K$. \begin{thm}\label{thm:rec-Hei} Let $H$ be a finite group with $1 = H^p < H' = Z(H) < H$.
Then $H$ is a generalized Heisenberg group if and only if $\Cent(\Bi (H))$ is a field and $Z(H)$ is $1$-dimensional over $\Cent(\Bi (H))$. \end{thm} \begin{proof} For the forward direction, let $H$ be a generalized Heisenberg group. By \exref{ex:j-Hei}, the map $\Bi (H) : V \times V \to W$, where $V = H/Z(H)$ and $W = H'$, is $\mathbb{Z}/p$-pseudo-isometric to a nondegenerate alternating $K$-form $j : K^{2m} \times K^{2m} \to K$, for some field $K$. As above, $\Cent(\Bi (H))\cong K$, and as $W$ is $1$-dimensional over $K$, $W$ is also $1$-dimensional over $\Cent(\Bi (H))$. Now, for the converse, suppose that $H$ is a finite group with $1 = H^p < H' = Z(H) < H$ and that $K := \Cent(\Bi (H))$ is a field with $H'$ a one-dimensional vector space over $K$. By \propref{prop:Baer-correspondence}, $H$ is isomorphic to $\Grp(\Bi (H))$. Since $H' = Z(H)$ and $H'$ is one-dimensional over $K$, the bimap $\Bi (H)$ is a nondegenerate alternating $K$-form. So, there is a $K$-pseudo-isometry $(\varphi;\hat{\varphi})$ from $\Bi (H)$ to $j:K^{2m} \times K^{2m} \to K$ as in \eqref{eq:alt} where $2m = \dim_K H/H'$ (\remref{rem:one-alt-form}). Hence, $\Grp(\Bi (H))\cong \Grp(j)\cong H_m(K)$ (the final isomorphism from \propref{prop:Baer-correspondence} and \exref{ex:j-Hei}). Therefore, $H$ is a generalized Heisenberg group over $K$. \end{proof} \subsection{Quotients of Heisenberg groups}\label{sec:q-Hei} In this section, we focus on quotients of generalized Heisenberg groups $H$ and derive their initial properties. Throughout this section, $H$ is a generalized Heisenberg group. \begin{lemma}\label{lem:Camina} If $H$ is a generalized Heisenberg group, then \begin{enumerate}[(i)] \item for all $g \in H-H'$, $[g,H] = H'$ (equivalently $g^H = gH'$), \item $H' = Z(H)$, and \item for all $N\leq H$, $N\normaleq H$ if and only if $N \leq H'$ or $H'\leq N$. \end{enumerate} \end{lemma} \begin{proof} As in \exref{ex:j-Hei}, $H' = Z(H)$, $\Bi (H)$ is a nondegenerate alternating $K$-form, for the field $K = \Cent(\Bi (H))$, and $H'$ is a $1$-dimensional $K$-vector space (\thmref{thm:rec-Hei}). In particular, for each $g \in H - H'$, $u = Z(H)g$ is non-zero so $[g,H] = j(u,H/Z(H)) = K = H'$, so (i) holds. Finally, for (iii) in the forward direction, if $g\in N - H'$, then $H'\leq [g,H]\leq N$. For the converse, observe that $H' = Z(H)$, so all its subgroups are normal in $H$. Likewise, all subgroups containing $H'$ are normal in $H$. \end{proof} Groups with the property of \lemref{lem:Camina}(i) are called {\em Camina groups}. Note that all Camina groups of nilpotence class $2$ satisfy conditions (ii) and (iii). These groups have many strong properties, some of which contribute to the similarities between the many quotients of $H = H_m(K)$, and so, we return to this point of view in Section \ref{sec:invariants}. For now, we simply note that the quotients of $H$ by normal subgroups containing $H'$ are elementary abelian and so unremarkable. Thus, we only consider the remaining normal subgroups -- those properly contained in $H'$. Fix a nonabelian group $G$ of class $2$ and an epimorphism $\phi:H \to G$. First, we obtain alternating bimaps $j' = \Bi (H)$ and $b = \Bi (G)$. As $G$ is nonabelian, by \lemref{lem:Camina}, $\ker \phi\leq H' = Z(H)$ and so $\phi$ induces the natural $\mathbb{Z}/p$-linear isomorphism $\varphi : H/H' \to G/G'$ and also a $\mathbb{Z}/p$-linear epimorphism $\ct{\varphi}:H' \to G'$ where $\ker \ct{\varphi} = \ker\phi$. It follows that $b(u\varphi,v\varphi)=j'(u,v)\ct{\varphi}$.
Indeed, $\varphi$ is invertible so we induce an alternating nondegenerate $K$-form $j:G/Z(G) \times G/Z(G) \to K$ by assigning $j(u,v) = j'(u\varphi^{-1}, v\varphi^{-1})$. We observe that $b = j\ct{\varphi}$. Thus, we have translated from epimorphisms of generalized Heisenberg groups over $K$ to alternating $\mathbb{Z}/p$-bimaps that factor through nondegenerate alternating $K$-forms. We can also reverse the above translation as follows. If $j : V \times V \to K$ is a nondegenerate alternating $K$-form on a $K$-vector space $V$ and $\pi:K\to W\neq 0$ is an epimorphism, then $(v;s) \mapsto (v;s\pi)$ is a group epimorphism from $\Grp (j)$ to $\Grp (j\pi)$. Notice $H=\Grp(j)$ is a generalized Heisenberg group and $\Grp(j\pi)$ is an epimorphic image of $H$. We conclude that to study epimorphic images of a generalized Heisenberg group it suffices to study the $\mathbb{Z}/p$-bimap $j\pi$. To study such bimaps, we introduce the ring of adjoints. \subsection{Adjoints} For a ring $R$, an \emph{$R$-mid-linear bimap} is a bimap $b:U \times V \to W$ where $U$ is a right $R$-module, $V$ is a left $R$-module, and $b$ factors through the $R$-tensor product $\otimes_R:U \times V \to U\otimes_R V$. An \emph{adjoint} ring of a bimap $b:U \times V \to W$ is a ring $A$ over which $b$ is $A$-mid-linear and $A$ is universal with that property. That is, whenever $b$ is $R$-mid-linear for some $R$, there is a unique homomorphism $\varphi:R \to A$ such that for all $r\in R$, all $u\in U$, and all $v\in V$, $ur = u(r\varphi)$ and $rv = (r\varphi)v$. As with centroids (cf. Section \ref{sec:centroid}), an adjoint ring $A$ for $b$ exists and, up to a unique isomorphism, we may assume $A$ is: \begin{align*} \Adj (b) = \{ (f,g)\in \End U \times (\End V)^{op} : \forall u\in U,\forall v\in V,~ b (uf,v) = b (u,vg)~\}. \end{align*} In general, if $A \subseteq \End_K U \times (\End_K V)^{op}$, then $U$ is a right $A$-module and $V$ is a left $A$-module by assigning the actions: for all $(a,a') \in A$, $u (a,a') = ua$, for all $u\in U$; and $(a,a')v = va'$, for all $v \in V$ (where we implicitly use that composition in $(\End_K V)^{op}$ is given by $(ab)^{op} = b^{op} a^{op}$, for $a,b \in \End_K V$). So indeed, we are able to form $U \otimes_{\Adj (b)} V$ from the above definition. The universal property follows immediately. Adjoint rings in this generality seem to have appeared first in the study of central products \cite[Section 4]{Wilson:unique-cent}, and we will return to those implications in Section \ref{sec:invariants}. \begin{ex}\label{ex:adj-j} Let $K$ be a field. If $j:K^{2m} \times K^{2m} \to K$ is the nondegenerate alternating $K$-form in \eqref{eq:alt}, then \begin{align} \label{eq:adj-j} \Adj (j) & = \left\{\left(\begin{bmatrix} A & B \\ C & D \end{bmatrix}, \begin{bmatrix} D^t & -B^t \\ -C^t & A^t \end{bmatrix}\right): A,B,C,D\in M_m(K)\right\}. \end{align} \end{ex} We have two important actions by $\GL_K(V)$. First, for each $x\in \GL_K(V)$ and each $\sum_i u_i \otimes v_i \in V \otimes_K V$, \begin{align*} \left( \sum_i u_i \otimes v_i \right)^x & = \sum_i (u_i x \otimes v_i x). \end{align*} Second, for each $x\in \GL_K(V)$ and each $(a,a') \in \End_K V \times (\End_K V)^{op}$, \begin{align*} (a,a')^x & = (x^{-1} a x, x^{-1} a'x) = (a^x, (a')^x). \end{align*} Hence, if $A \subseteq \End_K V \times (\End_K V)^{op}$ then $(V \otimes_A V)^x = V\otimes_{A^x} V$ and $x$ induces a $K$-pseudo-isometry $(x;\ct{x})$ from $\otimes_A$ to $\otimes_{A^x}$. Suppose $b$ is nondegenerate.
For all pairs $(f,g), (f',g') \in \Adj (b)$, if either $f = f'$ or $g = g'$ then $(f,g) = (f',g')$. Thus, the projection $\Adj (b)|_U$ of $\Adj (b) \subseteq \End U \times (\End V)^{op}$ to $\End U$ is faithful. As defined, the adjoint ring appears to involve $\mathbb{Z}$-linear endomorphisms. However, a three-pile-shuffle shows that if $b$ is a $K$-bimap and $(f,g)\in \Adj (b)$ then both $f$ and $g$ are $K$-linear. Hence, as $b$ is nondegenerate, $\Adj (b)|_U \subseteq \End_{\Cent (b)} U$. Observe $\Cent (b)$ embeds in the center of $\Adj (b)$, again argued by a three-pile-shuffle; however, there are instances where the center of $\Adj (b)$ is larger than the image of $\Cent (b)$. When $b$ is alternating, we must have $U = V$, and for every $(f,g) \in \Adj (b)$, it follows that $(g,f) \in \Adj (b)$. More generally, we say $b : V \times V \to W$ is \emph{Hermitian} if there is a $\theta \in \GL_{\mathbb{Z}/p} (W)$ such that for all $u, v \in V$, $b (v,u) = b (u,v)\theta$. When $b$ is Hermitian, we see that $(f,g) \in \Adj (b)$ if and only if $(g,f)\in \Adj (b)$. Hence, $*:(f,g) \mapsto (g,f)$ is an anti-isomorphism of order at most $2$ on $\Adj (b)$; that is, it is an \emph{involution}. When $b$ is nondegenerate and Hermitian, an involution is induced on $\Adj (b)|_V$, and we denote this involution by $f \mapsto f^*$ where $(f,f^*) \in \Adj (b)$. For further details, see \cite[Section 3]{Wilson:unique-cent}. We shall need the generality of Hermitian bimaps only long enough to prove that in our context every bimap we rely on remains alternating. In general, for a ring $A$, if $V$ is a right $A$-module and $*$ is an involution on $A$, then we may treat $V$ also as a left $A$-module under the action $av := va^*$. For added clarity, we sometimes express this module by $V^*$. Therefore, the map $\otimes_A : V \times V \to V \otimes_A V^*$ is defined. Indeed, if $A = \Adj (b)|_V$ for a Hermitian bimap $b:V \times V \to W$, then $V \otimes_{\Adj (b)}V$ (as explained by the definition of $\Adj (b)$) is nothing other than $V \otimes_A V^*$. First, we cite the following classic fact; cf. \cite[IX.10-11]{Jacobson} or \cite[Section 5.2]{Wilson:algo-cent}. \begin{thm}\label{thm:*-simples} Let $K$ be a finite field and $V$ a finite-dimensional $K$-vector space. If $A = \End_K V$ and $*$ is an involution on $A$, then there is a nondegenerate Hermitian $K$-form $d : V \times V \to K$ such that $A = \Adj (d)|_V$ with the involutions also equal. \end{thm} \thmref{thm:*-simples} allows us to invoke the classifications of nondegenerate Hermitian forms (which in our context include alternating and symmetric forms as well as the typical Hermitian forms). That classification will be used to prove the next theorem. \begin{thm}\label{thm:simple-adj-tensor} Let $K$ be a finite field and $V$ a finite-dimensional $K$-vector space. If $A = \End_K V$ and $*$ is an involution on $A$, then $V \otimes_{A} V^* \cong K$; in particular, $\otimes_A : V \times V \to V \otimes_A V^*$ is a nondegenerate $K$-form. Moreover, if $A$ is isomorphic to $\Adj (j)$ (as $*$-rings) for a nondegenerate alternating $K$-form $j : V \times V \to K$, then $j = \otimes_A \hat{\j}$ for a $K$-linear isomorphism $\hat {\j} : V \otimes_A V^* \to K$; indeed, $\otimes_A$ is an alternating nondegenerate $K$-form on $V$. \end{thm} Our proof of \thmref{thm:simple-adj-tensor} uses some vocabulary borrowed from \cite[Sections 3--4]{Wilson:unique-cent}. Suppose that $b : V \times V \to W$ is a nondegenerate Hermitian $k$-bimap.
A $\perp$-decomposition is a $\oplus$-decomposition $V = X_1 \oplus \cdots\oplus X_s$ where none of the $X_i$ are trivial and for all $1 \leq i < j \leq s$, we have $b (X_i,X_j) = 0$ (which implies $b (X_j,X_i) = 0$). We denote this by $b = (b|_{X_1}) \perp \cdots \perp (b|_{X_s})$. Observe that $b$ is conceptually an `orthogonal sum' in the following sense: \begin{align} b (x_1 + \cdots +x_s, x'_1 + \cdots + x'_s) & = b (x_1, x'_1) + \cdots + b (x_s, x'_s) \end{align} where for each $i$ satisfying $1 \le i \le s$, we have $x_i,x'_i\in X_i$. A Hermitian bimap $b$ is \emph{$\perp$-indecomposable} if it has exactly one $\perp$-decomposition. A $\perp$-decomposition is \emph{fully refined} if its constituents are $\perp$-indecomposable. \begin{ex}\label{ex:form-decomp} For a finite field $K$, every nondegenerate Hermitian $K$-form $d$ has a fully refined $\perp$-decomposition into \emph{hyperbolic lines} $\langle e,f\rangle$ (where $d(e,e)=0=d(f,f)$ and $d(e,f)=1$), and \emph{anisotropic} points $\langle u\rangle$ (where $d(u,u)\neq 0$). \end{ex} \begin{lemma}\label{lem:ext} Let $K$ be a finite field and $V$ and $W$ two $K$-vector spaces. If $d : V \times V \to W$ is a $\perp$-indecomposable nondegenerate Hermitian $K$-form, then $\dim_K (V \otimes_{\Adj (d)} V)=1$. Furthermore, if $\dim_K V = 2$ then $\otimes_{\Adj (d)} : V \times V \to V \otimes_{\Adj (d)} V$ is an alternating nondegenerate form. \end{lemma} \begin{proof} By \exref{ex:form-decomp}, $0 < \dim_K V \leq 2$. If $V = Kv$ for some $0 \neq v \in V$, then $\End_K V = \{ (v \mapsto sv) : s \in K \}$. As $d$ is a nondegenerate $K$-form, $\Cent(d) \cong K$. Also, $\End_K V = \Cent (d)|_V \subseteq \Adj (d)|_V \subseteq \End_K V$ so that $\Cent (d)|_V = \Adj (d)|_V$. As $\Adj (d)$ is faithfully represented as $K$-endomorphisms on $V$, $\Adj (d) = \{ (v \mapsto sv, v \mapsto sv) : s \in K \}$. It follows that $(\alpha v) \otimes (\beta v) \mapsto \alpha \beta$ determines an isomorphism $V \otimes_{\Adj (d)} V \cong K$ as $K$-vector spaces. Now, let $\dim_K V = 2$; that is, $V = \langle e,f \rangle$ where $d (e,e) = 0 = d (f,f)$ and $d (e,f) = 1$. If $u \in V$ is such that $d (u,u) \neq 0$, then $\langle u \rangle \cap u^{\perp} = 0$, and so, $d$ has a $\perp$-decomposition $V = \langle u \rangle \oplus u^{\perp}$. Yet, we are assuming that $d$ is $\perp$-indecomposable, and so, $d$ must be alternating. Hence, in the $e,f$ basis, $d (u,v) = u \begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix} v^t$. As $\left(\begin{bmatrix}b & b \\ -a & -a \end{bmatrix},\begin{bmatrix} -a & -b \\ a & b \end{bmatrix}\right)\in \Adj (d) =: A$, for all $[a,b] \in K^2$: \begin{align*} 0\otimes_A 0 & = [a,b]\begin{bmatrix} b & b \\ -a & -a \end{bmatrix} \otimes_A [0,1] = [a,b]\otimes_A [0,1]\begin{bmatrix} -a & -b \\ a & b\end{bmatrix} = [a,b]\otimes_A [a,b]. \end{align*} Thus, $K \cong K^2 \wedge_K K^2 := K^2 \otimes_K K^2/\langle u \otimes u : u \in K^2 \rangle$ maps $K$-linearly onto $K^2 \otimes_{A} K^2$; so, $\dim_K (K^2 \otimes_A K^2)\leq 1$. By the definition of the adjoint ring, $d$ factors through $K^2 \otimes_A K^2$ so there is a canonical non-trivial $K$-linear mapping $\hat {d}$ of $K^2 \otimes_A K^2$ into $K$. By considering dimensions, we see that $\hat {d}$ is a $K$-linear isomorphism. \end{proof} Now, we translate these geometric notions into ring theory so that we may prove \thmref{thm:simple-adj-tensor}. In a ring $A$ with involution $*$, we call an element $e\in A$ \emph{$*$-invariant} if $e^*=e$.
If $e^2=e\neq 0$, then we say $e$ is \emph{idempotent}. Two idempotents $e,f\in A$ are \emph{orthogonal} if $ef=0=fe$. We say $e$ is a \emph{$*$-invariant-primitive idempotent} if $e^*=e=e^2\neq 0$ and $e$ is not the sum of two orthogonal $*$-invariant idempotents (noting that in the convention of Curtis-Reiner we do not permit $0$ as an idempotent). Every finite $*$-ring $A$ has a set $\mathcal{E}$ of pairwise orthogonal $*$-invariant-primitive idempotents that sum to $1$. See \cite[Section 4]{Wilson:algo-cent}. In general, $\perp$-decompositions are difficult to recognize for arbitrary bimaps, and the key tool is to describe these decompositions through the ring $\Adj (b)$. When $A\subseteq \End V$ and $e\in A$ is an idempotent, we know that $V=Ve\oplus V(1-e)$. Now, if $e\in \Adj (b)$ is a $*$-invariant idempotent, then $b (Ve,V(1-e))=b (V,V(1-e)e)=0$ so that $b=(b|_{Ve})\perp (b|_{V(1-e)})$. This process can also be reversed. These are the mechanics that underpin the following tool. \begin{thm}\cite[Corollary 4.5]{Wilson:algo-cent}\label{thm:perp-idemp} The fully refined $\perp$-decompositions of a nondegenerate Hermitian bimap $b$ are in one-to-one correspondence with the sets of pairwise orthogonal $*$-invariant-primitive idempotents of $\Adj (b)$ that sum to $1$. \end{thm} \begin{proof}[Proof of \thmref{thm:simple-adj-tensor}] By \thmref{thm:*-simples}, there is a nondegenerate Hermitian $K$-form $d : V \times V \to K$, where $K$ is the center of $A$, such that $A = \Adj (d)|_V$ with associated involution. Using \exref{ex:form-decomp}, we obtain a fully refined $\perp$-decomposition of $V$ into hyperbolic lines and anisotropic points. Using \thmref{thm:perp-idemp}, there is a set $\mathcal{E}=\{e_1,\dots,e_m\}\subseteq A$ of pairwise orthogonal $*$-invariant-primitive idempotents whose $1$-eigenspaces on $V$ are $1$- or $2$-dimensional over $K$ according to whether the associated $\perp$-factor is anisotropic or hyperbolic. However, $d$ factors through $\otimes_A$ (as $A=\Adj (d)|_V$ as a $*$-ring), and so, $A\subseteq \Adj (\otimes_A)|_V\subseteq \Adj (d)|_V=A$. Furthermore, the involutions also agree. Applying \thmref{thm:perp-idemp} in the opposite direction to the nondegenerate Hermitian bimap $\otimes_A$, we find $\otimes_{A} = b_1 \perp \cdots \perp b_m$ where each $b_i = (\otimes_A)|_{Ve_i} = (\otimes_{e_i A e_i})|_{Ve_i}$ is a $\perp$-indecomposable nondegenerate Hermitian $K$-bimap. Therefore, $\otimes_{A}$ is a $K$-form so long as $1=\dim_K b_i(Ve_i,Ve_i)=\dim_K (Ve_i\otimes_{e_iAe_i} (Ve_i)^*)$, for all $i$ in $\{1,\dots,m\}$. Since $A=\Adj (d)|_V$, $e_i Ae_i=\Adj (d_i)|_{Ve_i}$, where $d_i=d|_{Ve_i}$. By \lemref{lem:ext}, we see that $\dim_K (Ve_i\otimes_{e_i A e_i} (Ve_i)^*)=\dim_K (Ve_i\otimes_{\Adj (d_i)|_{Ve_i}}Ve_i)=1$. Next, suppose that $\tau : A \cong \Adj (j)|_V$ is a $K$-linear $*$-ring isomorphism for an alternating nondegenerate $K$-form $j$ on $V$. Observe that $A=\End_K V=\Adj (j)|_V$ as rings so that $\tau$ is a $K$-linear ring automorphism of $\End_K V$. From the Skolem-Noether theorem \cite[IX.10-11]{Jacobson}, $\tau$ is an inner automorphism, and so, there is an invertible $x\in \GL_K(V)$ such that for all $a\in A$, $a\tau=x^{-1} ax$. As $\tau$ is $*$-preserving, it follows that $\Adj (d)^x=\{(a,a^*):a\in A\}^x=\Adj (j)$ and therefore, $\otimes_{A}=\otimes_{\Adj (\otimes_A)}$ is pseudo-isometric to $\otimes_{\Adj (j)}$. The latter is $K$-pseudo-isometric to $j$. In particular, $\otimes_A$ is an alternating nondegenerate $K$-form.
\end{proof} \thmref{thm:simple-adj-tensor} allows us to recognize quotients of Heisenberg groups. Recall from the end of Section \ref{sec:q-Hei} that our interest is to recognize nondegenerate $\mathbb{Z}/p$-bimaps that factor through an alternating nondegenerate $K$-form $j$. \begin{coro}\label{coro:quotient-Adj-1} Let $K$ be a finite field, $V$ a $K$-vector space, and $W\neq 0$ a $\mathbb{Z}/p$-vector space. If $j : V \times V \to K$ is a nondegenerate alternating $K$-form and $\pi : K \to W$ is a $\mathbb{Z}/p$-linear epimorphism, then $j\pi$ is alternating and nondegenerate, $\Adj (j\pi)$ is simple and acts irreducibly on $V$, and $\otimes_{\Adj (j\pi)}$ is an alternating nondegenerate $k$-form where $k$ is a subfield of $K$ isomorphic to the center of $\Adj (j\pi)$. \end{coro} We stress that \corref{coro:quotient-Adj-1} does not insist that $k$ is $K$. For example, a $\mathbb {Z}/p$-linear epimorphism $\pi : K \to \mathbb {Z}/p$ will have $\Adj (j\pi)|_V \cong M_{2me} (\mathbb{Z}/p)$ where $e = [K:\mathbb{Z}/p]$, so it is not possible in general to assume $k=K$. \begin{proof} Suppose, for some $0 \neq u \in V$, that for all $v \in V$ we have $j(u,v)\pi = 0$. As $j$ is nondegenerate, there is an element $v \in V$ such that $j (u,v) =: s \neq 0$. Now, for all $t \in K$, $t\pi = j (u,ts^{-1} v)\pi=0$, so $K\pi = 0$. This is excluded by the assumptions on $\pi$. Hence, $j\pi$ is nondegenerate. Next, observe that $(f,f^*) \in \Adj (j)$ implies that for all $u,v \in V$, $j (uf,v) = j (u,vf^*)$, and so, also $j (uf,v) \pi=j (u,vf^*) \pi$, showing that $(f,f^*) \in \Adj (j\pi)$. It follows that $\Adj (j)$ is contained in $\Adj (j\pi)$ as a $*$-subring. As both $j$ and $j\pi$ are nondegenerate, $\Adj (j)|_V$ and $\Adj (j\pi)|_V$ are faithful representations on $V$ and $\Adj (j)|_V \subseteq \Adj (j\pi)|_V$ with the involution on $\Adj (j)|_V$ the restriction of the involution on $\Adj (j\pi)|_V$. Because $j$ is a nondegenerate $K$-form, we have as rings $\End_K V=\Adj (j)|_V$ (cf. \exref{ex:adj-j}), and so, as rings \begin{align*} \End_K V=\Adj (j)|_V \subseteq \Adj (j\pi)|_V \subseteq \End_{\mathbb{Z}/p} V. \end{align*} Because $V$ is a simple $\Adj (j)$-module, it is also a simple $\Adj (j\pi)$-module; in particular, as a ring $\Adj (j\pi)|_V$ is a simple subring of $\End_{\mathbb{Z}/p} V$ (i.e. $\Adj (j\pi)|_V$ is a finite primitive ring so it is simple). Also, $\Adj (j\pi)$ contains a copy of $K$ (as scalar multiplication in $\End_K V$), and the center $k$ of $\Adj (j\pi)$ is a subfield of this copy of $K$. Every finite simple ring $R$ is isomorphic to the ring of endomorphisms of a finite-dimensional vector space over the center of $R$. So $\Adj (j\pi) \cong \End_k U$ where $k$ is the center of $\Adj (j\pi)$ and $U$ is a finite-dimensional $k$-vector space. As such, $U$ is an irreducible $\Adj (j\pi)$-module, but finite simple rings have one isomorphism type of simple module and so $U\cong V$ as $\Adj (j\pi)$-modules. In particular, $\Adj (j\pi)\cong \End_k U\cong \End_k V$. Since $\Adj (j\pi)|_{V}$ is a faithful representation of $\Adj (j\pi)$ in $\End_k V$, it follows that $\Adj (j\pi)|_V=\End_k V$. The hypotheses of \thmref{thm:simple-adj-tensor} are now satisfied by $\Adj (j\pi)|_V$, and so, $\otimes_{\Adj (j\pi)}$ is a nondegenerate $k$-form. Finally, we must show that $\otimes_{\Adj (j\pi)}$ is alternating. As $\Adj (j)\subseteq \Adj (j\pi)$, $\otimes_{\Adj (j\pi)}$ factors through $\otimes_{\Adj (j)}$. By the final implication of \thmref{thm:simple-adj-tensor}, $\otimes_{\Adj (j)}$ is alternating.
Therefore, $\otimes_{\Adj (j\pi)}$ is alternating as well. \end{proof} \begin{remark}\label{rem:tensor} The usual technique for studying the alternating $\mathbb{Z}/p$-bimaps $b:V \times V \to W$ on $V=K^{2m}$ is to pull back to the $\mathbb{Z}/p$-exterior square $\wedge:V \times V \to V\wedge_{\mathbb{Z}/p} V$. However, $\dim_{\mathbb{Z}/p} (V\wedge V)\in \Theta(m^2\dim^2_{\mathbb{Z}/p} K)$. In our context, $\dim_{\mathbb{Z}/p} W\leq \dim_{\mathbb{Z}/p} K$, and so, we have a very large gap between $\dim_{\mathbb{Z}/p} (V\wedge_{\mathbb{Z}/p} V)$ and $\dim_{\mathbb{Z}/p} W$. Using $\otimes_{\Adj (b)}$ allows us to pull back (in a canonical way) to an alternating $\mathbb{Z}/p$-bimap $V \times V \to V\otimes_{\Adj (b)} V$, where $\dim_{\mathbb{Z}/p} (V\otimes_{\Adj (b)}V) \leq \dim_{\mathbb{Z}/p} K$. \end{remark} \subsection{Recognizing quotients of Heisenberg groups} Interpreting \corref{coro:quotient-Adj-1} for generalized Heisenberg groups makes for a simple and computable test for when a group is isomorphic to a quotient of an odd order generalized Heisenberg group. \begin{thm}\label{thm:rec-q-Hei} Fix a group $G$ with $1=G^p < G'=Z(G) < G$, and a generalized Heisenberg group $H_{\ell}(K)$. The following are equivalent. \begin{enumerate}[(i)] \item $G$ is an epimorphic image of $H_{\ell}(K)$. \item $\Adj (\Bi (G))$ acts irreducibly on $G/Z(G)$ and is $*$-isomorphic to $\Adj (j)$ for a nondegenerate alternating $k$-form $j$ on $G/Z(G)$, for a subfield $k$ of $K$ isomorphic to the center of $\Adj (\Bi (G))$. \end{enumerate} \end{thm} \begin{proof} Let $\phi : H_{\ell}(K) \to G$ be an epimorphism. As discussed at the close of Section \ref{sec:q-Hei}, if we set $V = G/Z(G)$, $W = G'$ and $b = \Bi (G)$, then there is an alternating nondegenerate $K$-form $j : V \times V \to K$ induced from $H_{\ell}(K)$, and a $\mathbb{Z}/p$-linear epimorphism $\pi : K \to W$, such that $b = j\pi$. Thus, $\Adj (b) = \Adj (j\pi) = \Adj (\otimes_{\Adj (j\pi)})$ (with equality as $*$-rings). By \corref{coro:quotient-Adj-1}, $\otimes_{\Adj (j\pi)}$ is a nondegenerate alternating $k$-form (possibly different from $j$) where $k$ is a subfield of $K$ and isomorphic to the center of $\Adj (b)$. Furthermore, \corref{coro:quotient-Adj-1} also shows $\Adj (b)$ acts irreducibly on $V$. Since the $*$-isomorphism type and representation of $\Adj (b)$ is a $\mathbb{Z}/p$-pseudo-isometry invariant, it follows that the $*$-isomorphism type and representation of $\Adj (\Bi (G))$ is an isomorphism invariant of $G$. This proves that (i) implies (ii). Next, we show (ii) implies (i). We assume that $A = \Adj (\Bi (G))$ acts (faithfully) irreducibly on $V = G/Z(G)$, so that $A|_V = \End_k V$ for a field $k$ isomorphic to the center of $A$. Furthermore, $A$ is $*$-isomorphic to $\Adj (j)$ for a nondegenerate alternating $F$-form $j$ on $V$, for some subfield $F$ of $K$. The involution on $\Adj (j)$ (and therefore on $A$) preserves the center (cf. \exref{ex:adj-j}), and so, the isomorphism $A \to \Adj (j)$ induces an isomorphism $k \cong F$. Therefore, we treat $j$ as an alternating $k$-form, and $\Adj (j)|_V = \End_k V = A|_V$. We now apply \thmref{thm:simple-adj-tensor}, and we find that $j' := \otimes_{A}$ is an alternating nondegenerate $k$-form. This implies that $H := \Grp (j')$ is a generalized Heisenberg group (cf. \exref{ex:j-Hei}). By the universal properties of tensors, $\Bi (G) = j'\pi$ for a (unique) additive map $\pi : V \otimes_{A} V^* \to G'$. Letting $N=\ker \pi$, we have $\Grp (j'\pi) \cong H/N$.
Finally, by \propref{prop:Baer-correspondence}, we know that $G \cong \Grp (\Bi (G)) = \Grp (j'\pi) \cong H/N$. Therefore, $G$ is an epimorphic image of a generalized Heisenberg group. \end{proof} \begin{remark} We can also view \thmref{thm:rec-q-Hei} as stating that $G$, as in \thmref{thm:rec-q-Hei}, is an epimorphic image of $H_{\ell} (K)$ if and only if $\otimes_{\Adj (\Bi (G))}$ is an alternating nondegenerate $k$-form for a subfield $k$ of $K$ such that $\dim_k G/Z(G)=2\ell\cdot [K:k]$. This follows by translating condition (ii) using \corref{coro:quotient-Adj-1} and considering the associated requirements on dimensions. \end{remark} \subsection{Indigenous quotients} An implication of \thmref{thm:rec-q-Hei} is that every nonabelian quotient $H/N$ of a generalized Heisenberg group implicitly determines a smallest generalized Heisenberg group of which it is a quotient. Specifically, if $K$ is the center of $\Adj (\Bi (H/N))$ and $(H/N)/(H/N)'$ is $2m$-dimensional over $K$, then we write: \begin{align}\label{eq:floor} \lfloor H/N \rfloor & = H_m(K). \end{align} In the language of our introduction, we say $H/N$ is \emph{indigenous} to $H$ if $H\cong \lfloor H/N\rfloor$. As discussed in Section \ref{sec:q-Hei}, there is a natural $\mathbb{Z}/p$-isometry $\phi$ from $\Bi (H/N)$ to $\Bi (H) \pi$, for an appropriate epimorphism $\pi$. Thus, $\Adj (\Bi (H)) \subseteq \Adj (\Bi (H) \pi) = \Adj (\Bi (H/N))^{\phi}$. So we have proved: \begin{prop}\label{prop:indig} $H/N$ is indigenous to $H$ if and only if $$\Adj (\Bi (H)) = \Adj (\Bi (H) \pi) = \Adj (\Bi (H/N))^{\phi}$$ (where equality is as rings with involution). \end{prop} There are many indigenous quotients, but to guarantee that all quotients of a certain size are indigenous to a Heisenberg group, we use some elementary number theory. \begin{lemma}\label{lem:good-pair} For every integer $n \geq 12$, there is an integer $d=d_n$ such that \begin{enumerate}[(i)] \item $2d + 2 \le n \le 3d$, \item for all $i$ such that $n - 2d \leq i < d$, $i \ndivides d$, and \item $d - \frac {5}{12} n \in O(1)$ (as functions of $n$). \end{enumerate} \end{lemma} Note that \lemref{lem:good-pair}(ii) is satisfied whenever $d$ is prime. \begin{proof} Suppose first that $n \ge 60$ and write $n = 12q + r$ with $0 \leq r < 12$. Note that $q \ge 5$. Set $d = 5q + e$ where $e$ is an integer chosen between $1$ and $4$ so that $d$ is congruent modulo $30$ to one of $1$, $7$, $11$, $17$, $23$, or $29$. Immediately (iii) follows. Observe that $2d + 2 = 10 q + 2e + 2 \le 10 q + 10 < 12 q \le n$ since $q \ge 5$ and so $10 \le 2q$. Also, $3d = 15q + 3e > 15q > 12q + r$ since $3q \ge 12 > r$. Observe that $n - 2d = 12q + r - 2(5q + e) = 2q + r - 2e \ge 2q - 2e$. Notice that $2e \le 8$, so if $q \ge 8$, then $n - 2d \ge q$. Let $\ell$ be the smallest prime dividing $d$, and note that $\ell > 6$. We have $d/\ell < d/6 = 5/6 q + e/6 \le 5/6 q + 4/6 < 5/6 q + 1/6 q = q$. Hence, when $q \ge 8$, every $i$ with $n-2d \leq i < d$ satisfies $d/\ell < q \le n - 2d \le i$, and so $d < i\ell$. On the other hand, if $i$ divides $d$, then $d/i \ge \ell$, and so, $d \ge i\ell$. This is a contradiction, so $i \ndivides d$. If $q = 5$, then $d = 29$; if $q = 6$, then $d = 31$; and if $q = 7$, then $d = 37$. In each of these cases, $d$ is prime, and since $n - 2d \ge 2$, $(n,d)$ satisfies (ii). For $12 \le n \le 15$, take $d = 5$. For $16 \le n \le 21$, we take $d = 7$. For $22 \le n \le 23$, take $d = 8$. For $24 \le n \le 33$, take $d = 11$. For $34 \le n \le 39$, take $d = 13$. For $40 \le n \le 57$, take $d = 19$.
For $n = 58$ or $n = 59$, take $d = 23$. One can check by hand that each of these pairs $(n,d)$ satisfies (i) and (ii). \end{proof} First, we show that \lemref{lem:good-pair} (i) and (ii) guarantee that indigenous quotients exist. Later, we will use part (iii) to show that indigenous quotients are plentiful. \begin{prop}\label{prop:stable-q} Let $(n,d)$ be a pair as in \lemref{lem:good-pair} parts (i) and (ii). If $H$ is a Heisenberg group of order $p^{3d}$ and $N\leq H'$ with $[H:N]=p^n$, then $H\cong \lfloor H/N\rfloor$. \end{prop} \begin{proof} Let $b=\Bi (H/N):V \times V \to W$. By \corref{coro:quotient-Adj-1} and \eqref{eq:adj-j}, the ring $\Adj (\Bi (H/N))$ is isomorphic as a ring to $M_{2m} (F)$ for a subfield $F$ of $K$, where $K^2 \cong V \cong F^{2m}$. Furthermore, $H_2= \lfloor H/N \rfloor$ is a generalized Heisenberg group over $F$ of degree $m$; hence, define $f$ by $|H_2'| = |F| = p^f$. Let $H_2/M \cong H/N$ (and such an $M$ exists as $H_2 = \lfloor H/N \rfloor$). It follows that \begin{align} p^{n-2d} & = [H':N] = [H_2':M] = p^{n-2mf}. \end{align} Thus, $d = mf$, and furthermore, $n - 2d \leq f \leq d$ since $[H_2':M] \leq p^f$. By the assumptions that $(n,d)$ satisfies \lemref{lem:good-pair} (ii) and $f \divides d$, it follows that $f = d$. Thus, $F = K$ and $m = 1$. So $H_2 \cong H$. \end{proof} \section{Proof of main theorems}\label{sec:main} In this section we prove Theorems \ref{thm:main} and \ref{algo:main}. \subsection{Lifting isomorphisms} We begin with an observation which is likely well-known. \begin{thm}\label{thm:aut-Hei} If $H$ is a generalized Heisenberg group of degree $m$ over $K$ of characteristic $p$, then $\Aut H = \Psi\Isom(\Bi (H))\ltimes \hom_{\mathbb{Z}/p}(K^{2m},K)$ and \begin{align*} \Psi\Isom(\Bi (H)) & = \Gal(K)\ltimes (K^{\times}\ltimes \Sp(2m,K)). \end{align*} (Note that $\hom_{\mathbb{Z}/p}(K^{2m},K)$ corresponds to the inner automorphisms of $H$.) \end{thm} \begin{proof} The structure of $\Aut H$ is explained by \propref{prop:Baer-correspondence}; so we concentrate on $\Psi\Isom(\Bi (H))=\Psi\Isom(j)$ where $j:K^{2m} \times K^{2m} \to K$ is a nondegenerate alternating $K$-form. First, for all $(\phi;\ct{\phi}) \in \Psi\Isom(j)$, and all $(f,g) \in \Adj (j)$, $(f,g)^{(\phi;\ct{\phi})} := (f^{\phi}, g^{\phi})\in \Adj (j)$. Therefore, $\Psi\Isom(j)$ acts on the center $K$ of $\Adj (j)$ as a group of ring automorphisms. The action on the center induces a group homomorphism $\Psi\Isom(j) \to \Gal(K)$ denoted $s \mapsto s^{\phi}$. In particular, if $s \in K$ and $u \in K^{2m}$, then \begin{align*} (su)\phi & =u(s I_{2m}\cdot \phi)=u(\phi\cdot s^{\phi} I_{2m})=s^{\phi} (u\phi). \end{align*} Hence, elements of $\Psi\Isom(j)|_V$ are $K$-semilinear. Let $u,v\in V$ be such that $j(u,v)=s\neq 0$. For each $t\in K$, $t=j(u,ts^{-1} v)$ and so \begin{align} t\ct{\phi} & =j(u\phi, (ts^{-1} v)\phi)= j(u\phi, t^{\phi} (s^{-1}v\phi))= t^{\phi} j(u\phi, (s^{-1} v)\phi) = t^{\phi} (1\ct{\phi}). \end{align} In particular, $t\ct{\phi} = t^{\phi} \lambda_{\phi}$, where $\lambda_{\phi} := 1\ct{\phi}\in K^{\times}$, proving $\ct{\phi}\in \Gal(K)\ltimes K^{\times}\leq \GL_{\mathbb{Z}/p}(K)$. Consequently, $(\phi; \ct{\phi})\mapsto \ct{\phi}$ is a group homomorphism $\Psi\Isom(j) \to \Gal(K)\ltimes K^{\times}$ with kernel $\Isom(j)$. Now, for each $\tau\in \Gal(K)$, $(v\mapsto v^{\tau}; s\mapsto s^\tau)\in \Psi\Isom(j)$; hence, $\Gal(K)\hookrightarrow \Psi\Isom(j)$ and its image splits with the $K$-linear pseudo-isometries $\Psi\Isom_K(j)$.
The group $\Psi\Isom_K(j)$ contains $\Isom(j)=\Sp(2m,K)$ as well as $K^{\times}$ since \begin{align*} \left(v\mapsto v \begin{bmatrix} I_m & 0 \\ 0 & sI_m\end{bmatrix}; \alpha\mapsto s\alpha\right)&\in \Psi\Isom_K(j) & (\forall s\in K^{\times}). \end{align*} The image of $K^{\times}$ splits with $\Isom(j)$ in $\Psi\Isom_K(j)$. This completes the proof. \end{proof} We now turn to the question of lifting isomorphisms of quotients of $H$ to automorphisms of $H$. Throughout this discussion, $K/(\mathbb{Z}/p)$ is a finite field extension and $b: V \times V \to W$ is a $\mathbb{Z}/p$-bimap. Suppose that $\pi: W \to X\neq 0$ and $\tau: W \to Y\neq 0$ are $\mathbb{Z}/p$-linear epimorphisms. Set $c=b\pi$ and $d=b\tau$. If $(\phi; \ct{\phi}): c \to d$ is a $\mathbb{Z}/p$-pseudo-isometry, then $\Adj (c)^{\phi}=\Adj (d)$ and so there is an isomorphism $\ct{\Phi}:V\underset{\Adj (c)}{\otimes} V \to V\underset{\Adj (d)}{\otimes} V$ where \begin{align}\label{eq:Phi} (u \otimes v) \ct{\Phi} & = u \phi \otimes v \phi & (\forall u, v\in V). \end{align} So $(\phi;\ct{\Phi})$ is a $\mathbb{Z}/p$-pseudo-isometry from $\otimes_{\Adj (c)}$ to $\otimes_{\Adj (d)}$. Also, as $c$ factors through $\otimes_{\Adj (c)}$, there is an epimorphism $\hat{c}:V\otimes_{\Adj (c)} V \to X$ such that $c=\otimes_{\Adj (c)}\hat{c}$ and an isomorphism $\bar{c}:(V\otimes_{\Adj (c)} V)/(\ker \hat{c})\cong X$. The same construction is applied to $d$. Immediately, $\hat{c}\ct{\phi}=\ct{\Phi}\hat{d}$ and so $(\ker\hat{c})\ct{\Phi} = \ker\hat{d}$. Hence, $\ct{\Phi}$ induces an isomorphism $\gamma$ from $(V\otimes_{\Adj (c)} V)/(\ker \hat{c})$ to $(V\otimes_{\Adj (d)} V)/(\ker \hat{d})$ such that $\ct{\phi}=\bar{c}^{-1} \gamma \bar{d}$. So in that sense, $\ct{\Phi}$ induces $\ct{\phi}$, and so, we say that $(\phi;\ct{\Phi})$ induces $(\phi;\ct{\phi})$. Finally, if $A := \Adj (c)=\Adj (d)$, then $(\phi;\ct{\Phi})$ is a $\mathbb{Z}/p$-pseudo-isometry of $\otimes_{A}$ that induces the $\mathbb{Z}/p$-pseudo-isometry $(\phi;\ct{\phi})$. \begin{thm}\label{thm:lift-iso} Let $H$ be a generalized odd order Heisenberg group, and let $M$ and $N$ be proper subgroups of $H'$. If $H/M$ and $H/N$ are indigenous quotients of $H$, then every isomorphism $\varphi: H/M \to H/N$ is induced by an automorphism $\Phi$ of $H$ with $M\Phi=N$. \end{thm} \begin{proof} Choose $H=\Grp(j)$ for $j:K^{2m} \times K^{2m} \to K$ as in \eqref{eq:alt}, set $V=H/H'$, and fix the transversal $\ell:V \to K^{2m} \times 0\subseteq H$. Treat $M,N < H'=0 \times K$ as $\mathbb{Z}/p$-subspaces of $K$. Let $\pi_M:K \to K/M$ and $\pi_N:K \to K/N$ be the natural projections. There are also natural isomorphisms $$(H/M)/(H/M)'\overset{\tau_M}{\rightarrow} V \overset{\tau_N}{\leftarrow} (H/N)/(H/N)'.$$ We see that $(\tau_M; 1_{K/M})$ is an isometry from $\Bi (H/M)$ to $c=\Bi (H)\pi_M$ and $(\tau_N;1_{K/N})$ is an isometry from $\Bi (H/N)$ to $d=\Bi (H)\pi_N$. Now, fix an isomorphism $\varphi: H/M \to H/N$ of groups. Set $\phi=\tau_M^{-1} (\varphi|_{(H/M)/(H/M)'})\tau_N$, which is a $\mathbb{Z}/p$-linear automorphism of $H/H'$. Also, set $\ct{\phi}=\varphi|_{K/M}:K/M \to K/N$. Thus, $(\phi;\ct{\phi})$ is a $\mathbb{Z}/p$-pseudo-isometry from $c$ to $d$. Furthermore, $(\phi;\ct{\phi})$ induces an isomorphism $\Grp(\phi;\ct{\phi}):\Grp(\Bi (H/M)) \to \Grp(\Bi (H/N))$. At this point we have constructed the outer square in the commutative diagram of \figref{fig:lifting} where the vertical isomorphisms are given by the Baer correspondence with respect to the fixed transversal $\ell$; cf.
\propref{prop:Baer-correspondence}. Since we assume $H/M$ and $H/N$ are indigenous to $H$, $A=\Adj (c)=\Adj (\Bi (H))=\Adj (d)$. Therefore, \eqref{eq:Phi} determines a $\mathbb{Z}/p$-pseudo-isometry $(\phi;\ct{\Phi})$ of $\otimes_A$ that induces $(\phi;\ct{\phi})$. By \corref{coro:quotient-Adj-1}, $\otimes_{A}$ is an alternating nondegenerate $K$-form, and this leads to a $\mathbb{Z}/p$-pseudo-isometry $(\tau;\ct{\tau})$ from $j$ to $\otimes_A$ (as above). We obtain $(\gamma;\ct{\gamma})=(\phi;\ct{\Phi})^{(\tau;\ct{\tau})}\in \Psi\Isom_{\mathbb{Z}/p}(j)$ and $(\gamma;\ct{\gamma})$ induces $(\phi;\ct{\phi})$. Finally, $\Psi\Isom_{\mathbb{Z}/p}(j)$ embeds in $\Aut H$, and so, there is an automorphism $\Phi \in \Aut H$ such that $\Phi$ induces $(\gamma;\ct{\gamma})$, and so, it induces $\varphi$; in particular, $M\Phi=N$. This describes the inner square in the diagram \figref{fig:lifting}. \end{proof} \begin{figure} \begin{equation*} \xymatrix{ H/M \ar[rrr]^{\varphi}\ar[ddd]^{\cong} & & & H/N\ar[ddd]^{\cong}\\ & H\ar[d]^{\cong} \ar[r]^{\Phi}\ar[ul] & H\ar[ur]\ar[d]^{\cong} \\ & \Grp(\otimes_A)\ar[r]_{\Grp(\phi;\ct{\Phi})}\ar[dl] & \Grp(\otimes_A)\ar[dr]\\ \Grp(c) \ar[rrr]_{\Grp(\phi;\ct{\phi})} & & & \Grp(d). } \end{equation*} \caption{The diagram illustrating how to pass the isomorphism $\varphi$ to the isomorphism $\Grp(\phi;\ct{\phi})$, then lift to the automorphism $\Grp(\phi;\ct{\Phi})$, and finally to the automorphism $\Phi$.}\label{fig:lifting} \end{figure} \begin{remark} W. M. Kantor suggests that an alternative proof for \thmref{thm:lift-iso} might be obtained by considering the Schur multipliers. \end{remark} \thmref{thm:aut-Hei} implies the converse of \thmref{thm:lift-iso} and so we have proved: \begin{coro}\label{coro:orbits} If $H$ is a generalized odd order Heisenberg group and $M,N < H'$ are such that $H/M$ and $H/N$ are indigenous to $H$, then $H/M\cong H/N$ if and only if there is an automorphism $\Phi$ of $H$ with $M\Phi=N$. Thus, the isomorphism classes of the indigenous quotients of $H$ are in bijection with the $(\Aut H)$-orbits on the subgroups of $H'$. \end{coro} \subsection{Proof of \thmref{thm:main}} Let $(n,d)$ be a pair as in \lemref{lem:good-pair}, set $s=n-2d$, and fix $K$ to be a finite field of order $p^d$. Take $H$ to be a Heisenberg group over $K$, so $j=\Bi (H)$ is an alternating nondegenerate $K$-form on $K^{2}$. Following \thmref{thm:aut-Hei}, $\Aut H$ maps onto $\Psi\Isom(j)$ and $\Aut H$ acts on the subgroups of $H'$ as $\Gal(K) \ltimes K^{\times}$ acts on the $\mathbb{Z}/p$-subspaces of $K$. The number of subgroups of index $p^s$ in $H'$ is estimated by counting the number of $\mathbb{Z}/p$-subspaces of codimension $s$ in $K$, which is \begin{align} \begin{bmatrix} d \\ s \end{bmatrix}_p & = \prod_{i=1}^s \frac{p^d-p^{i-1}}{p^s-p^{i-1}} \geq p^{s(d-s)}. \end{align} The number of $(\Aut H)$-orbits on the subgroups of $H'$ of index $p^s$ is bounded below by $p^{s(d-s)}/(|\Gal(K)|(|K|-1))$. By \propref{prop:stable-q}, quotients of size $p^{2d+s}=p^n$ are indigenous to $H$. Hence, in light of \corref{coro:orbits}, the number of isomorphism classes of quotients of $H$ of order $p^{n}$ is at least: \begin{align*} \frac{p^{s(d-s)} }{ d (p^{d}-1)}\geq p^{-s^2 +(s-1)d-\log_p d}. \end{align*} When we optimize $f(s,d)=-s^2+(s-1)d-\log_p d$ over $s$ and $d$ subject to the constraint that $n=2d+s$, we find the maximum occurs for $d\in 5n/12+O(1)$ and $s\in n/6+O(1)$ and the number of orbits is at least $p^{n^2/24 + O(n)}$.
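In detail (a routine verification, recorded for completeness): each factor of the Gaussian binomial above is at least $p^{d-s}$, which gives the displayed lower bound, and eliminating $d = (n-s)/2$ yields \begin{align*} f\left(s,\tfrac{n-s}{2}\right) & = -s^2 + (s-1)\,\frac{n-s}{2} - \log_p \frac{n-s}{2} = \frac{-3s^2 + (n+1)s - n}{2} - \log_p \frac{n-s}{2}, \end{align*} whose quadratic main term is maximized at $s = (n+1)/6$. This gives $s\in n/6+O(1)$, $d=(n-s)/2\in 5n/12+O(1)$, and $f\in n^2/24+O(n)$ (the logarithmic term is absorbed into the $O(n)$).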
By \lemref{lem:good-pair}(iii) the pair $(n,d)$ attains this asymptotic maximum. Therefore, the Heisenberg group of order $p^{3d}=p^{5n/4+O(1)}$ over a field of order $p^d$ has $p^{n^2/24+O(n)}$ pairwise nonisomorphic quotients of order $p^n$.\hfill $\Box$ \subsection{Proof of \thmref{algo:main}} As we mentioned in the introduction, our original algorithm applied only to permutation groups, but using a result of L. Ronyai, we can extend this to more general settings. \begin{proof}[Proof of \thmref{algo:main} (i)] Using the standard polynomial-time algorithms (cataloged in \cite[pp. 4--6]{Seress:book} for permutation groups, in \cite{Luks:mat} for matrix groups, and in \cite{HoltEO} for polycyclic groups with a black-box multiplication), compute $G^p$, $Z(G)$, and $G'$, and then certify that $1 = G^p < G' = Z(G) < G$; otherwise, $G$ cannot be a nonabelian quotient of a generalized Heisenberg group. Next, use the algorithms of \cite[Section 5]{Wilson:algo-cent} to compute structure constants for $b = \Bi (G)$, a basis for $\Adj (b)$, and recognize whether or not $\Adj (b)$ is a simple ring acting irreducibly on $V = G/Z(G)$ and $*$-isomorphic to the adjoint ring of an alternating nondegenerate form. By \thmref{thm:rec-q-Hei}, at this point we have determined whether $G$ is an epimorphic image of a generalized Heisenberg group. If $G$ is an epimorphic image of a generalized Heisenberg group, then the algorithm creates $\otimes_{\Adj (b)}$ along with the canonical projection $\pi : V \otimes_{\Adj (b)} V \to G'$. Set $H = H_m (K)$ where $K$ is the center of $\Adj (b)$ and $2m = \dim_K V$. Finally, the algorithm computes a standard hyperbolic basis for $\otimes_{\Adj (b)}$ and a change of basis determines a pseudo-isometry $(\varphi;\ct{\varphi})$ from $j = \Bi (H)$ to $\otimes_{\Adj (b)}$. It follows that $(\varphi;\ct{\varphi})$ induces an isomorphism $\Phi : H \to \Grp (\otimes_{\Adj (b)})$ and $\pi$ determines an epimorphism $\Gamma: \Grp (\otimes_{\Adj (b)}) \to G$ so that $\Phi\Gamma : H \to G$ is the desired epimorphism. The algorithms cited have both a deterministic version that runs in time polynomial in $\log |G|+p$, and a non-deterministic version of the Las Vegas type with run time polynomial in $\log |G|$. In particular, the algorithms are honest deterministic polynomial time algorithms for both permutation groups and matrix groups in bounded characteristic. This gives us the stated complexity of \thmref{algo:main}. \end{proof} \begin{lemma}[Ronyai]\label{lem:Ronyai-trick} Let $K/k$ be a finite extension of a finite field $k$. There is a deterministic algorithm that, given $k$-subspaces $U$ and $V$ of $K$, determines a $c\in K^{\times}$ such that $Uc = V$ or proves that no such $c$ exists. The algorithm uses $O(\dim_k^6 K)$ operations in $k$. \end{lemma} \begin{proof} First, the algorithm decides whether $\dim_k U = \dim_k V$, and if not, then it reports that $U$ and $V$ cannot be in the same $K^{\times}$-orbit. Otherwise, the algorithm has $k$-bases $\{u_1,\dots,u_s\}$ and $\{v_1,\dots,v_s\}$ for $U$ and $V$ respectively. If there exists a field element $c\in K$ such that $Uc=V$, then for each integer $1 \leq i \leq s$, there are field elements $\alpha_{i1}, \dots, \alpha_{is}\in k$ such that \begin{align}\label{eq:solutionset} u_i c & = v_1 \alpha_{i1} + \cdots + v_s \alpha_{is}. \end{align} Observe that these equations are $k$-linear in the variables $c$ and $\alpha_{i1},\dots,\alpha_{is}$. To solve the system, we first fix a $k$-basis for $K$.
We then write $u_1, \dots, u_s$ and $v_1, \dots, v_s$ in this basis, and we write $c$ as a linear combination in the basis for $K$ with unknown coefficients in $k$. We then solve the equations determined by \eqref{eq:solutionset}. This can be done with $O((\dim^2_k K)^3)$ operations in $k$. \end{proof} \begin{proof}[Proof of \thmref{algo:main}(ii)] Using \thmref{algo:main} (i), we determine whether the groups are indigenous quotients of a common Heisenberg group $H = H_m (K)$ for a finite field $K$ of size $p^d$. This allows us to treat the input groups as quotients $H/M$ and $H/N$. Furthermore, we determine if $[H:M] = [H:N]$, and if not, then the groups are nonisomorphic. By \corref{coro:orbits}, the quotients $H/M$ and $H/N$ are isomorphic if and only if $N \in M^{\Aut H}$. Because $\Aut H/C_{\Aut H} (H') \cong \Gal(K) \ltimes K^{\times}$, we fix a generator $\sigma$ for $\Gal(K)$. Then, for each integer $1 \leq i\leq \dim_{\mathbb{Z}/p} K$, we use the algorithm of \lemref{lem:Ronyai-trick} to determine if there exists a field element $c\in K$ satisfying $(M\sigma^i)c=N$ (treating $M$ and $N$ as $\mathbb{Z}/p$-subspaces of $K$). If this fails for each $i$, then $H/M$ is not isomorphic to $H/N$. Otherwise, use the solution $(\sigma^i,c)\in \Gal(K)\ltimes K^{\times}$ to construct an automorphism $\Phi$ of $H$ with $M\Phi=N$, and so $\Phi$ induces an isomorphism $\phi=\Phi|_{H/M}:H/M\to H/N$. \end{proof} \begin{remark}\label{rem:nil-q-algo} Our original proof used the observation that the size of $M^{\Aut H}$ is a divisor of $d(p^d-1)$. The $(\Aut H)$-orbit of $M$ can be constructed from a basis for $M$, and $N$ can be tested for inclusion in $M^{\Aut H}$ by linear algebra at a cost of $O(d^3)$ for each of the $d(p^d-1)$ tests. Hence, the total work is at worst $d^{4} p^d \in O(|H|^{1/(m+1)}\log^c |H|)$ for a constant $c$. That was enough to obtain a polynomial bound on the algorithm's running time when the groups were specified by permutations. (That uses the observation that nonabelian quotients of Heisenberg groups have permutation representations of degree at least $p^{2d}$.) Our method still depends on exhausting over the elements in $\Gal(K)$, but this is a dramatic decrease in the work required to list all of $\Gal(K)\ltimes K^{\times}$ (our original approach). Both are substantial improvements over the traditional methods, which would list all of $\Aut H$ in this context. To see this, we give a small survey of the standard methods, some of which date back to work of Higman \cite[p. 10--12]{Higman:chic}. Higman defined a characteristic central series $\Phi^{(i)}$ for groups, now replaced by the lower exponent-$p$-central series. If $G$ and $J$ are $p$-groups and $G/\Phi^{(c)}(G)\cong J/\Phi^{(c)}(J)$, then there is a universal covering group $F$ mapping onto $G_{c+1} := G/\Phi^{(c+1)}(G)$ and $J_{c+1} := J/\Phi^{(c+1)}(J)$. Thus, $G_{c+1}$ and $J_{c+1}$ are isomorphic if and only if their kernels in $F$ are in the same $(\Aut F)$-orbit. Algorithms of this sort are collectively called \emph{nilpotent quotient algorithms} and have had many practical advances; for a survey see \cite{OBrien:iso}. Yet, for $p$-groups of nilpotence class $2$ and order $N=p^n$, the universal covering groups $F$ in use can have order $p^{n+\binom{n}{2}}=N^{\log_{c} N+O(1)}$, $c$ depending on $p$, and the size of the $(\Aut F)$-orbits can reach $N^{\log_{c'} N+O(1)}$, $c'$ depending on $p$.
Indeed, for quotients of order $N=p^n$ of a Heisenberg group of order $p^{5n/4+O(1)}$, the size of the orbits required by the general nilpotent quotient algorithms is: \begin{align*} \frac{[\Aut F:C_{\Aut F}(F/F')]}{[\Aut H:C_{\Aut H}(H/H')]} & \approx \frac{|\GL(5n/6,p)|}{\frac{5n}{12}\cdot p^{5n/12} |\Sp(2,p^{5n/12})|} \in p^{\Theta(n^2)}=N^{\log_{c''} N+\Theta(1)}, \end{align*} where $c''$ depends only on $p$. The aspect of \thmref{algo:main} that permits a polynomial-time algorithm is summarized in \remref{rem:tensor}, which shows we can use a much smaller covering group with much smaller orbits. Furthermore, as Ronyai astutely observed, the action of the relevant groups on these orbits is much simpler and so enables even better algorithms than we had thought. \section{Quotients of Heisenberg groups are indistinguishable} \label{sec:invariants} In this section, we run through a list of isomorphism invariants for finite $p$-groups of nilpotence class $2$ and determine what to expect of these isomorphism invariants in the family of quotients of Heisenberg groups. The isomorphism invariants that we select are independent in the sense that two groups with equal isomorphism invariants of one type are not forced to have equal isomorphism invariants of a different type. Hence, in combination, these isomorphism invariants would seem to have a chance to distinguish two generic $p$-groups of class $2$. \subsection{Consequences of the Camina property} In this section, we derive some isomorphism invariants for quotients of generalized Heisenberg groups by observing that these groups are special instances of Camina groups. Recall that a group $G$ is a \emph{Camina} group if for every $g\in G-G'$, $[g,G]=G'$. We saw in \lemref{lem:Camina} that generalized Heisenberg groups are Camina groups. This condition transfers to all quotients by proper subgroups of $G'$. Hence, nonabelian quotients of generalized Heisenberg groups are Camina groups. Camina groups have received recent attention; some interesting results include \cite{Dark:on}, \cite{Mac:some}, and \cite{Mann:on}. We use the Camina property to show that the complex character tables of quotients of a Heisenberg group are determined solely by their order. First, we give a brief overview of representation theory and character theory for non-experts. Let $V$ be a finite-dimensional complex vector space. A homomorphism $\rho : G \to \GL(V)$ is an \emph{irreducible representation} if, for all $v\in V-0$, $V = \langle v(g\rho) : g \in G \rangle$. The \emph{character} $\chi_{\rho} : \{ g^G : g \in G \} \to \mathbb{C}$ afforded by $\rho$ assigns to $g \in G$ the trace of $g\rho$ (i.e. for each $g\in G$, $\chi_{\rho} (g^G)$ is the sum of the eigenvalues of $g\rho$, with multiplicity). The \emph{character table}, $\Irr (G)$, of $G$ is the set of characters of all irreducible representations of $G$. Finally, for groups $G$ and $H$, an \emph{isomorphism of character tables} $\Irr (G) \to \Irr (H)$ is a pair $\phi : G \to H$ and $\hat{\phi} : \Irr (H) \to \Irr (G)$ of bijections such that \begin{align} (\chi\hat{\phi}) (g) & = \chi (g\phi) & (\forall \chi \in \Irr (H), \forall g \in G). \end{align} Isomorphic groups have isomorphic character tables. On the other hand, there are groups with isomorphic character tables that are not isomorphic. Nevertheless, there are incredibly deep properties of groups that can be inferred from character tables, but that expansive subject is not our objective; for details consider \cite{Isaacs:book}.
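For orientation, the smallest case can be made completely explicit (a standard computation for extraspecial groups, recorded here only as an illustration). Take $G = H_1(\mathbb{Z}/p)$, of order $p^3$. Then $\Irr(G)$ consists of the $p^2$ linear characters inflated from $G/G'\cong (\mathbb{Z}/p)^2$, together with $p-1$ characters of degree $p$, one for each nontrivial character of $Z(G)\cong \mathbb{Z}/p$, each vanishing off $Z(G)$. Consistently, \begin{align*} p^2\cdot 1^2 + (p-1)\cdot p^2 & = p^3 = |G|. \end{align*} In particular, the character table depends only on $p$, in agreement with the theorem below.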
\begin{thm}[\cite{Lewis:Camina}]\label{thm:Camina} If $G$ and $J$ are finite Camina $p$-groups of nilpotence class $2$, then $G$ and $J$ have isomorphic character tables if and only if $[G:G'] = [J:J']$ and $|G'| = |J'|$. \end{thm} Moreover, the characters in question are fully described in \cite{Lewis:Camina}. The implications of \thmref{thm:Camina} and other properties of Camina groups summarized in \cite{Lewis:Camina} give the following list of invariants (some of which might also follow upon direct inspection of quotients of Heisenberg groups). \begin{coro}\label{coro:Camina} If $G$ and $J$ have the same order and are quotients of a common odd order generalized Heisenberg group $H=H_m(K)$, then the following hold: \begin{enumerate}[(i)] \item $G' = Z(G)$ and $J' = Z (J)$ and both are the image of $Z(H) = H'$, \item $[G:G'] = [J:J']$ and $|G'| = |J'|$, \item the lattices of normal subgroups of $G$ and $J$ are isomorphic (the normal subgroups are precisely the subgroups contained in or containing the commutator subgroup), \item for every $g\in G - G'$ and every $h \in J - J'$, $|C_G (g)| = |C_J (h)|=[G:G']$, and \item the character table of $G$ is isomorphic to the character table of $J$, and if $H$ has odd order then the isomorphism of character tables also preserves power maps. \end{enumerate} \end{coro} \subsection{Consequences of centroids and adjoints} We can use the results on centroids and adjoints to determine when a quotient of a generalized Heisenberg group is directly or centrally indecomposable. The original use of centroids of bimaps for $p$-groups was to prove the following. \begin{thm}\cite[Theorem 1.2]{Wilson:direct-prod} \label{thm:direct-indecomp} A $p$-group $P$ with $P' \leq Z(P)$ is directly indecomposable if $\Cent (\Bi (P))$ is a local ring and $Z(P)$ is contained in $P'P^p$. \end{thm} \begin{coro}\label{coro:centroid} The centroid of a nonabelian quotient of a generalized Heisenberg group is a field. In particular, nonabelian quotients of generalized Heisenberg groups are directly indecomposable. \end{coro} \begin{proof} Let $H/N$ be a nonabelian quotient of a generalized Heisenberg group $H$. As $b=\Bi (H/N):V \times V \to W$ is nondegenerate and $b (V,V)=(H/N)'=W$, it follows that $\Cent(b)$ is faithfully represented by its restriction to $\End V$. Therefore, there is a natural embedding $\Cent(b)\hookrightarrow\Adj (b)$. Furthermore, centroid elements commute with adjoints, and so $\Cent(b)$ embeds in the center $K$ of $\Adj (b)$. By \corref{coro:quotient-Adj-1}, $\Adj (b)$ is central simple, and so $K$ is a field. Therefore, $\Cent(b)$ is a field, and so $\Cent(b)$ is local. Finally, by \eqref{eq:exp}, $1=H^p\leq H'=Z(H) < H$, and so it follows that $Z(H/N)\leq (H/N)' (H/N)^p$. By \thmref{thm:direct-indecomp}, $H/N$ is directly indecomposable. \end{proof} The use of adjoints for $p$-groups was originally designed to understand central decompositions. A set $\mathcal{H}$ of subgroups of a group $G$ is a \emph{central decomposition} of $G$ if $\mathcal{H}$ generates $G$ and for all $H\in\mathcal{H}$, $[H,\langle \mathcal{H}-\{H\}\rangle]=1$ and $G\neq \langle \mathcal{H}-\{H\}\rangle$. Say that $G$ is \emph{centrally indecomposable} if $\{G\}$ is the only central decomposition of $G$. Finally, a central decomposition is \emph{fully refined} if every member is centrally indecomposable.
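These conditions are concrete enough to check by machine in small cases. The following sketch is an illustration only (all conventions are ours): it models $H_2(\mathbb{Z}/3)$ on triples $(u,v;s)$ with $u,v\in (\mathbb{Z}/3)^2$, multiplied by $(u,v;s)(x,y;t) = (u+x,v+y;s+t+u\cdot y)$, and verifies that the two subgroups supported on the standard basis vectors (the subgroups $H_{e_1}, H_{e_2}$ of the example that follows) form a central decomposition.
\begin{verbatim}
# A toy check of the central decomposition conditions for H_2(Z/3).
p = 3

def add(w, z):
    return tuple((a + b) % p for a, b in zip(w, z))

def mult(g, h):
    (u, v, s), (x, y, t) = g, h
    dot = sum(a * b for a, b in zip(u, y)) % p
    return (add(u, x), add(v, y), (s + t + dot) % p)

rng = range(p)
H1 = [((a, 0), (b, 0), s) for a in rng for b in rng for s in rng]
H2 = [((0, a), (0, b), s) for a in rng for b in rng for s in rng]

# The members centralize one another ...
assert all(mult(g, h) == mult(h, g) for g in H1 for h in H2)
# ... together they give all p^5 elements of H_2(Z/3) ...
assert len({mult(g, h) for g in H1 for h in H2}) == p ** 5
# ... and neither alone suffices: each is a proper subgroup of order p^3.
assert len(H1) == len(H2) == p ** 3
\end{verbatim}
The family \eqref{eq:cent-decomp} below is the general form of this decomposition.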
For example, in a generalized Heisenberg group $H=H_m(K)$, for each $0\neq x \in K^m$, \begin{align} H_x & = \left\{ \begin{bmatrix} 1 & tx & s\\ 0 & I_m & t' x^t\\ 0 & 0 & 1 \end{bmatrix}: s,t,t'\in K\right\}\cong H_1(K) \end{align} is a centrally indecomposable subgroup of $H_m(K)$. If $\mathcal{X}$ is a basis for $K^m$, then \begin{align}\label{eq:cent-decomp} \mathcal{H}(\mathcal{X})=\{H_x : x \in \mathcal{X}\} \end{align} is a fully refined central decomposition of $H$. We now apply the following result. \begin{thm}{\rm \cite[Theorem 4.4]{Wilson:unique-cent} with \cite[Theorem 3.8]{Wilson:algo-cent}}\label{thm:indecomp} A $p$-group $P$ of class $2$ is centrally indecomposable if and only if $Z(P)\leq P' P^p$ and $\Adj (\Bi (P))/J(\Adj (\Bi (P)))$ is isomorphic as a $*$-ring to one of the following: for a field $K$, \begin{description} \item[Orthogonal] $(K,x\mapsto x)$, \item[Unitary] $(F,x\mapsto \bar{x})$ for a quadratic field extension $F/K$ along with the field automorphism of order $2$, \item[Exchange] $(K \times K,(x,y)\mapsto (y,x))$, or \item[Symplectic] $\left(M_2(K), \begin{bmatrix} a & b\\ c & d\end{bmatrix}\mapsto \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}\right)$. \end{description} \end{thm} When the degree $m$ of a generalized Heisenberg group $H$ is more than $1$, we know $H$ is centrally decomposable (see \eqref{eq:cent-decomp}). Because $H'$ is also the Frattini subgroup of $H$, if $N < H'$, then every central decomposition of $H$ induces a central decomposition of $H/N$. So nonabelian quotients of $H_m(K)$, $|K|=p^d$, are centrally decomposable whenever $m>1$. So suppose $m=1$, that is, that $H$ is a Heisenberg group. By \thmref{thm:rec-q-Hei}, for every $N < H'$, $\Adj (\Bi (H/N))$ is simple of Symplectic type. Therefore, by \thmref{thm:indecomp}, $H/N$ is centrally indecomposable if $H/N$ is indigenous to $H$. In fact, the converse of this is true. \begin{prop}\label{prop:cent-indecomp} Let $H/N$ be a nonabelian quotient of a Heisenberg group $H$ over $K$. The following are equivalent. \begin{enumerate}[(i)] \item $H/N$ is centrally indecomposable. \item $\Adj (\Bi (H/N))$ is $*$-isomorphic to $M_2(K)$ with the involution of \eqref{eq:adj-j}. \item $H/N$ is indigenous to $H$. \end{enumerate} \end{prop} \begin{proof} Suppose (i). By \corref{coro:quotient-Adj-1}, $\Adj (\Bi (H/N))$ is $*$-isomorphic to a central simple ring with the involution of \eqref{eq:adj-j}. Hence, by \thmref{thm:indecomp}, $\Adj (\Bi (H/N))$ is $*$-isomorphic to $M_2(L)$, for a field $L$, and $M_2(L)$ is equipped with the involution of \eqref{eq:adj-j}. As $V=(H/N)/(H/N)'\cong H/H'$, it follows that $\dim_L V=2$ while also $\dim_K V=2$. Hence, $K\cong L$. So (i) implies (ii). Assuming (ii), it follows from \eqref{eq:floor} that $\lfloor H/N\rfloor$ is a Heisenberg group over $K$. So (ii) implies (iii). Finally, if (iii) is true, then $\Adj (\Bi (H/N))=\Adj (\Bi (H))=M_2(K)$ with the involution of \eqref{eq:adj-j}. By \thmref{thm:indecomp}, $H/N$ is centrally indecomposable. \end{proof} Heisenberg groups can have quotients that are centrally decomposable (e.g. a Heisenberg group over a field of size $p^d$ has quotients isomorphic to $H_{d/e}(K)$, $|K|=p^e$, where $e|d$ -- these quotients are centrally decomposable unless $d=e$). It would seem that we could use the size of a fully refined central decomposition as an isomorphism invariant to distinguish some of the various quotients that could occur in \thmref{thm:main}. This requires a much deeper theorem than it may seem.
For example, there is a $2$-group of class $2$ that has fully refined central decompositions of different sizes. However, \cite[Theorem 1.1]{Wilson:unique-cent} implies that the size of a fully refined central decomposition of a quotient of a Heisenberg group is an isomorphism invariant.\footnote{Indeed, because the adjoints of quotients $Q$ of Heisenberg groups are of Symplectic type we can further claim that the automorphism group of $Q$ acts transitively on the set of fully refined central decompositions of $Q$; cf. \cite[Corollary 6.8]{Wilson:unique-cent}.} Nevertheless, we can dash that hope as well by arranging the orders of our groups to force them all to be centrally indecomposable, yet maintain the growth developed in \thmref{thm:main}. \begin{coro} Let $(n,d)$ be a pair as in \lemref{lem:good-pair}. If $H$ is a Heisenberg group of order $p^{3d}$ and $N\leq H'$ with $[H:N]=p^n$, then $H/N$ is centrally indecomposable of symplectic type. \end{coro} \begin{proof} This follows from \propref{prop:stable-q} followed by \propref{prop:cent-indecomp}. \end{proof} Finally, we turn to the automorphism groups of the quotients $G$ of a Heisenberg group. For most large families of groups, it is impossible to describe the entire automorphism group of every member, and here we have not succeeded in the fullest generality. However, we are able to describe a very large portion of the automorphism group of such a group $G$. \begin{thm}\label{thm:aut-G} If a group $G$ has order $p^n$ and is an indigenous quotient of a generalized Heisenberg group $H = H_m(K)$, $|K|=p^d$, then \begin{align*} C_{\Aut G}(G') & \cong \Sp(2m,K) \ltimes_{\tau} \hom_{\mathbb{Z}/p}(K^{2m},(\mathbb{Z}/p)^{n-2md}) \end{align*} where for each $f \in \hom_{\mathbb{Z}/p}(K^{2m},(\mathbb{Z}/p)^{n-2md})$ and each $\phi \in \Sp (2m,K)$, $f (\phi\tau) = \phi^{-1} f$. Also, taking $G = H/M$, for $M < H' \cong K$, it follows that \begin{align*} \Aut G/C_{\Aut G} (G') & \cong \mathbb{Z}_e\ltimes k^{\times} \end{align*} for some integer $e | d$ and a subfield $k$ of $K$ such that $|k|$ divides $p^n$. \end{thm} \begin{proof} By \propref{prop:Baer-correspondence}, $C_{\Aut G} (G') = \Isom (\Bi (G)) \ltimes_{\tau} \hom_{\mathbb{Z}/p} (G/Z(G),G')$. Let $V = G/Z(G)$ and $W = G'$. Since $G$ is an indigenous quotient of $H := H_m(K)$, $V$ is isomorphic to $K^{2m}$ and $W$ is a quotient of $K$ of $\mathbb{Z}/p$-dimension $n - 2md$. Furthermore, by Theorems~\ref{thm:rec-q-Hei}(ii) and \ref{thm:simple-adj-tensor}, $\otimes_{\Adj(\Bi (G))}$ is an alternating nondegenerate $K$-form of rank $2m$ and so $\Isom (\Bi (G))=\Isom (\otimes_{\Adj(\Bi (G))})=\Sp (2m,K)$. Next, assume $G\cong H/M$ where $H = H_1(K)$, $|G| = p^n$, $|K| = p^d$ and $(n,d)$ satisfy \lemref{lem:good-pair}(i) and (ii). For each $\varphi\in \Aut G$, as in \eqref{eq:Phi}, there is a $\Phi\in \Aut H$ such that $M\Phi = M$ and $\Phi|_{H/M} = \varphi$. By \thmref{thm:aut-Hei}, $\Aut H$ acts on $H' = K$ as $\Gal(K)\ltimes (K^{\times})$. If $\Phi|_{H'}\in K^{\times}$, then $M\Phi = Ms$ for some $s\in K^{\times}$. Evidently $\mathbb{Z}/p\subseteq \{s\in K : Ms\subseteq M\} = k$ is a subfield of $K$. We show $k^{\times}$ embeds in $\Aut G$. First, $(\Aut G)|_{G'}$ embeds in $\Gal(K)\ltimes k^{\times}$ (observing that $\Gal(K)$ acts on $k$ because subfields of finite fields are characteristic). In particular, $G'$ is a vector space over $k$.
Also, recall from \thmref{thm:aut-Hei} that the action of $K^{\times}$ on $H$ splits with $C_{\Aut G}(G')$ and that the prescribed representation on $G/G'\cong H/H'\cong K^{2m}$ was $\rho_s : s\mapsto \begin{bmatrix} 1&0\\ 0 & s\end{bmatrix}$. In particular, $\Sp(2m,K)$ contains $\begin{bmatrix} 0 & I_{m}\\ I_m & 0\end{bmatrix}$.\footnote{This involution interchanges two complementary maximal totally isotropic subspaces $K^m \times 0$ and $0 \times K^m$ of $V = K^{2m}$ with respect to the geometry of $j$ on $V$.} So $\Aut G|_V$ contains $\begin{bmatrix} sI_m & 0 \\ 0 &tI_m\end{bmatrix}$ for all $s,t\in k^{\times}$. In particular, $V$ and $W$ are both $k$-vector spaces. Indeed, we have that $|G| = [G:G']|G'|$ is a multiple of $|k|$ and that $k^{\times}$ embeds in $\Aut G$. \end{proof} Following \thmref{thm:aut-G}, if $G$ is a proper indigenous quotient of $H = H_m(K)$, $|K|=p^d$, and $|G|=p^n$, then $C_{\Aut G}(G')$ is determined completely by $(p,m,d,n)$. Also, the quotient $\Aut G/C_{\Aut G}(G')\cong \mathbb{Z}_e\ltimes k^{\times}\cong \mathbb{Z}_e\ltimes \mathbb{Z}_{p^f-1}$ where $e|d$ and $f|d$. Furthermore, $p^n$ is a multiple of $|k| = p^f$ and $f < d$ (as $G$ is not isomorphic to $H$). So $f|n$ and $f|d$. That severely restricts the possible outcomes. For example, we may simply have $n$ and $d$ relatively prime, or in fact, make $d$ prime. Therefore, it follows that $(n,d)$ satisfies \lemref{lem:good-pair} (i) and (ii), $e \in \{1,d\}$, and $f = 1$. In particular, we have only two possible outcomes for $\Aut G/C_{\Aut G}(G')$, and this is far too small a variation to help distinguish the vast number of isomorphism types that are possible for $G$. \section{Closing remarks}\label{sec:closing} \subsection{2-groups}\label{sec:2-groups} In our first version of this article, we included quotients $G$ of Heisenberg $2$-groups. Though some of the arguments are unchanged, there were technical flaws whose resolutions ultimately detracted from the goals set forth in our introduction. Also, it was well-known that the isomorphism types of quotients of Heisenberg $2$-groups are determined by the character tables together with power maps (cf. \cite{Nenciu:VZ}). For these reasons, we opted to focus on the odd prime case. Below we outline the different strategy needed for $2$-groups. A group of exponent $2$ is abelian, and so, we cannot use that assumption with quotients of Heisenberg groups. However, we can replace the need for exponent $2$ by assuming only that our group is generated by appropriate subgroups of exponent $2$. (Many definitions below apply to odd primes as well.) We say a group $G$ is \emph{hyperbolic} if it has abelian normal subgroups $E$ and $F$ such that $G = EF$ and $E\cap F = Z(G)$. (This name is motivated by the term hyperbolic as used with classical forms and has no intended relationship to hyperbolic groups in the sense of Gromov.) The pair $(E,F)$ is a \emph{hyperbolic pair} for $G$. If $Z(G)$ splits in $E$ and $F$, then we say that $G$ is \emph{split hyperbolic}. \begin{ex}\label{ex:Hei-hyper} Generalized Heisenberg groups (over \emph{any} field $K$) $H = H_m(K)$ are split hyperbolic groups, e.g. they have the following split hyperbolic pair: \begin{equation}\label{eq:Hei-pair} E = \left\{ \begin{bmatrix} 1 & u & s \\ 0 & I_m & 0 \\ 0 & 0 & 1 \end{bmatrix}: s\in K, u\in K^m\right\} \& F = \left\{ \begin{bmatrix} 1 & 0 & s \\ 0 & I_m & v^t \\ 0 & 0 & 1 \end{bmatrix}: s\in K, v\in K^m\right\}. \end{equation} \end{ex} We now show that creating hyperbolic groups is easy. 
The idea dates back to Brahana \cite{Brahana}. Let $c : U \times V \to W$ be a bimap, and define a group $\Grp_{Bra}(c)$ on $U \times V \times W$ with product \begin{align*} (u,v;s)(x,y;t) & = (u+x, v+y; s+t+c(u,y) ) & (\forall (u,v;s),(x,y;t)\in U \times V \times W). \end{align*} Note $G := \Grp_{Bra}(c)$ is a hyperbolic group of nilpotence class $2$ with hyperbolic pair $E = U \times 0 \times W$ and $F = 0 \times V \times W$. If $W = c(U,V)$ and $c$ is nondegenerate, then $G' = Z(G) = 0 \times 0 \times W$ and $(E,F)$ is a split hyperbolic pair. Observe that isotopic bimaps produce isomorphic groups. \begin{ex}\label{ex:bi-Hei} If $K$ is a field and $d : K^m \times K^m \to K$ is the dot-product (i.e.\ $d(u,v) = uv^t$, for all $u,v\in K^m$), then $\Grp_{Bra}(d)$ is isomorphic to the generalized Heisenberg group of degree $m$ over $K$. \end{ex} We still need to replace $\Bi$ from the Baer correspondence. A nilpotent group $G$ of class $2$ has a hyperbolic pair $(E,F)$ if and only if $G/Z(G) = E/Z(G)\oplus F/Z(G)$ and $b (E/Z(G),E/Z(G)) = 0 = b (F/Z(G),F/Z(G))$, for $b = \Bi (G)$. Assuming that $(E,F)$ is a hyperbolic pair for $G$, we may restrict $b$ to a second bimap: \begin{align*} c & = \Bi (G;E,F) :E/Z(G) \times F/Z(G) \to Z(G) \end{align*} where $c(u,v) = b ( u,v)$ for all $u\in E/Z(G)$ and all $v\in F/Z(G)$. As $(E,F)$ is a hyperbolic pair and $b$ is alternating, it follows for all $u,x \in E/Z(G)$ and all $v,y \in F/Z(G)$ that \begin{equation*} \begin{split} b (u+v,x+y) & = b (u,x) + b (u,y)+b (v,x) + b (v,y) = c(u,y) - c(x,v). \end{split} \end{equation*} Hence, $c$ determines $b$, and $c$ is nondegenerate. Unfortunately, this depends on the choice of hyperbolic pair $(E,F)$, and so, it introduces several ambiguities. In the special case of a group $G$ where $1 = Z(G)^2 < G^2\leq G' = Z(G) < G$ (as is the case for quotients of Heisenberg $2$-groups), we have a quadratic map $q := \Qd(G) : G/Z(G) \to G'$ where $q(Z(G)u) = u^2$ for all $u\in G$. We also observe that if $G = \Grp_{Bra}(c)$, then in characteristic $2$, \begin{align*} (u,v;s)^2 & = (2u,2v; 2s+c(u,v)) = (0,0;c(u,v)) & (\forall (u,v;s) \in U \times V \times W). \end{align*} In particular, the $c$ used to define $G$ can be recovered canonically from squares. \begin{prop}\label{prop:any-hyper} Let $1 = Z(G)^2 < G^2 \leq G' \leq Z(G) < G$. If $(E,F)$ is a split hyperbolic pair for a split hyperbolic group $G$, then $\Bi (G;E,F) = \Qd(G)$ as functions. In particular, $\Bi (G;E,F)$ does not depend on the choice of $(E,F)$. \end{prop} Notice that the use of a quadratic map means the role of the symplectic group in our proofs is now replaced by orthogonal groups; for example, \thmref{thm:aut-Hei} must be adapted. The following correspondence of Brahana \cite{Brahana} is perhaps the earliest version of a functorial relationship between nilpotent groups and bimaps. Unlike its later generalizations by Baer, Mal'cev, Kaloujnine, and Lazard, it applies to $p$-groups without restriction on $p$ (at the cost of specializing to hyperbolic groups). \begin{prop}[Brahana 1935]\label{prop:back-forth} A group $G$ is hyperbolic if and only if $G$ is isoclinic to $\Grp_{Bra}(c)$ for a bimap $c : U \times V \to W$. In particular, $c$ can be chosen to be nondegenerate and with $W = Z(G)$. If $G$ is split hyperbolic and $G' = Z(G)$, then the isoclinism can be selected to be an isomorphism. \end{prop} \begin{proof} The reverse direction is explained above, so we focus on the forward direction. Let $(E,F)$ be a hyperbolic pair for a hyperbolic group $G_1$.
Let $c = \Bi (G_1;E,F)$ and set $G_2 = \Grp_{Bra}(c) = E/Z(G_1) \times F/Z(G_1) \times G_1'$. As $G_2/Z(G_2)\cong E/Z(G_1)\oplus F/Z(G_1)$ and $G_2' = 0 \times 0 \times G_1'$, and since $G_1/Z(G_1) = E/Z(G_1)\oplus F/Z(G_1)$, there are isomorphisms $\varphi : G_1/Z(G_1) \to G_2/Z(G_2)$ and $\ct{\varphi} : G_1' \to G_2'$. It follows that $(\varphi;\ct{\varphi}) : \Bi (G_1) \to \Bi (G_2)$ is a pseudo-isometry and so $G_1$ and $G_2$ are isoclinic. If $G_1$ is split hyperbolic with split hyperbolic pair $(E,F)$, then there are subgroups $E_0\leq E$ and $F_0\leq F$ such that $E = E_0\oplus Z(G)$ and $F = F_0\oplus Z(G)$. Observe that $G_1 = E_0\ltimes F$. We have canonical isomorphisms $f : E_0 \to E/Z(G_1) \times 0 \times 0\leq G_2$ and $g : F \to 0 \times F/Z(G_1) \times Z(G_1)\leq G_2$. Also, $(u,v)\mapsto (uf, vg)$, for $u\in E_0$ and $v\in F$, induces an isomorphism $G_1 \to G_2$. \end{proof} \begin{remark} Brahana introduced his correspondence as between hyperbolic groups (our terminology) and trilinear $k$-forms, that is, functions $t : U \times V \times W \to k$ that are $k$-linear in each variable. Notice $t$ determines a $k$-bimap $b : U \times V \to \hom_k(W,k)$ by $b (u,v) = t(u,v,-)$. Also, given a monomorphism $\tau : W \to \hom_k(W,k)$, a $k$-bimap $b : U \times V \to W$ can be converted into a trilinear $k$-form $t : U \times V \times W \to k$ via $t(u,v,w) = w(b (u,v)\tau)$. Thus our treatment above is equivalent to Brahana's. \end{remark} Using these tools one can derive appropriate variants of our main theorems. However, as we mentioned at the start, these examples are not so satisfactory because there are well-known isomorphism invariants for such groups. What we would very much like to know is a family of $2$-groups with expansive growth, a polynomial-time isomorphism test, and no obvious isomorphism invariants. That is still an open problem. \subsection{Our results as a `converse' to Brauer's problem} A final consequence of our results concerns Brauer tuples. Two groups $G$ and $H$ form a \emph{Brauer pair} if they are nonisomorphic yet have an isomorphism between their character tables that preserves powers. Brauer asked if such pairs exist \cite[p. 138]{Brauer}, suggesting that perhaps the character table considered along with powers would determine the isomorphism class of a finite group. This was answered in the negative by Dade \cite{Dade}. Nenciu \cite{Nenciu:VZ} showed there are no Brauer pairs of Camina $2$-groups of nilpotence class $2$, and the second author describes conditions for odd Camina $p$-groups of nilpotence class $2$ to be Brauer pairs \cite{Lewis:Brauer}. Brauer pairs have since been generalized. Following Eick and M\"uller in \cite{EiMu} and Nenciu in \cite{Nenciu:tuple}, we say that the groups $(G_1, \dots, G_t)$ form a \emph{Brauer $t$-tuple} if for all $1\leq i < j\leq t$, $(G_i, G_j)$ is a Brauer pair. Eick and M\"uller proved the existence of Brauer $4$-tuples \cite{EiMu}, and Nenciu proved the existence of $t$-tuples for arbitrarily large $t$ in \cite{Nenciu:tuple}. \corref{coro:Camina}(v) and \thmref{thm:main} give Brauer $t$-tuples with $t$ of exponential size. These new $t$-tuples are quite different from previous examples. In fact, we see our result as a converse to Brauer's problem. We give a seemingly routine set of groups that are pairwise nonisomorphic. Should there not also be a routine explanation of why two members from the set are nonisomorphic?
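To make the product defining $\Grp_{Bra}(c)$ in Section~\ref{sec:2-groups} concrete, the following short Python sketch (ours, not part of the original development; the values $p=2$ and $m=2$ are assumed sample parameters) implements $\Grp_{Bra}(c)$ for the dot-product bimap over $K=\mathbb{F}_2$ and verifies the squaring identity $(u,v;s)^2=(0,0;c(u,v))$ observed there.
\begin{verbatim}
from itertools import product

p, m = 2, 2                                  # assumed small sample parameters

def c(u, v):                                 # the bimap: dot product over F_p
    return sum(a*b for a, b in zip(u, v)) % p

def mul(g, h):                               # the Brahana product on U x V x W
    (u, v, s), (x, y, t) = g, h
    return (tuple((a+b) % p for a, b in zip(u, x)),
            tuple((a+b) % p for a, b in zip(v, y)),
            (s + t + c(u, y)) % p)

vecs = list(product(range(p), repeat=m))
G = [(u, v, s) for u in vecs for v in vecs for s in range(p)]
assert len(G) == p**(2*m + 1)                # |Grp_Bra(c)| = |U| |V| |W|

# In characteristic 2, (u,v;s)^2 = (0,0; c(u,v)): the defining bimap is
# recovered canonically from squares, as observed above.
zero = (0,)*m
assert all(mul(g, g) == (zero, zero, c(g[0], g[1])) for g in G)
print("squaring identity verified for all", len(G), "elements")
\end{verbatim}
By Example~\ref{ex:bi-Hei}, the group built here is (isomorphic to) the generalized Heisenberg group of degree $m$ over $\mathbb{F}_2$.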
\section*{Acknowledgments} We are grateful to the referee whose comments both improved the writing and prompted us to notice our gaps for the $2$-group setting.
\section{Introduction} In recent years we have proposed the $\widetilde{U}(12)$-scheme\cite{Cov,u-12}, a relativistically covariant level-classification scheme of hadrons. In this scheme, the ground state (GS) of the light $q\bar{q}$ meson system is assigned to the ${\bf 12} \times {\bf {12}^{*}}={\bf 144}$-representation of the $U(12)_{SF}$-group at the rest frame. The ${U}(12)_{SF}$-group includes, in addition to the conventional non-relativistic $SU(6)_{SF}$-group, the new symmetry $SU(2)_{\rho}$ {\footnote{The new degree of freedom corresponding to the $SU(2)_{\rho}$-symmetry is called the $\rho$-spin, after the well-known $\rho\otimes\sigma$-decomposition of Dirac matrices. }}, which corresponds to the degree of freedom associated with negative-energy Dirac spinor solutions of confined quarks inside hadrons. The inclusion of this extra $SU(2)$ spin degree of freedom leads to the possible existence of extra multiplets, called {\it chiral states}, which do not exist in the ordinary non-relativistic quark model (NRQM). As an example, the light scalar $f_{0}(600)$/$\sigma$ meson, a controversial particle for a long time, is identified as an $S$-wave chiral state, as is the $\pi$ meson, and they mutually play the role of chiral partners in the $\widetilde{U}(12)$-scheme. As is well known, in the conventional level-classification scheme based on the NRQM, the lowest scalar meson is obliged to be assigned as an orbital $P$-wave excited state. As another example, the $a_{1}$ meson is possibly to be identified as a $q\bar{q}$ $S$-wave axial-vector meson in the $\widetilde{U}(12)$-scheme. The $S$-wave axial-vector mesons form a linear representation of chiral symmetry with the $S$-wave $\rho$ meson. Here it is notable that these $\sigma$ and $a_{1}$ mesons are expected to have light masses compared with the conventional case of the $P$-wave states. Furthermore, the ${\bf 144}$-representation includes another axial-vector meson state with $J^{PC}=1^{+-}$, to be identified with the $b_{1}$ meson. \\ In this work, we try to elucidate the properties of our new-type $S$-wave axial-vector mesons, $a_{1}$ and $b_{1}$, whose existence is predicted in the $\widetilde{U}(12)$-scheme, through analyses of their radiative and pionic decays. In the actual analyses, we identify our chiral $S$-wave $a_{1}$ and $b_{1}$ mesons with the experimentally well-known states, $a_{1}(1260)$ and $b_{1}(1235)$, respectively. Then, by using a simple decay interaction, their partial widths for the strong $a_{1}$ ($b_{1}$)$\to \rho$ ($\omega$)$\pi$ decays (with $D/S$-wave amplitude ratios) and the radiative transition widths of the $a_{1}$ ($b_{1}$) $\to \pi \gamma$ processes are calculated in comparison with the respective experimental values. \section{Wave functions of the \mbox{\boldmath $a_{1}$} and \mbox{\boldmath $b_{1}$} mesons as $S$-wave chiral states} In this section we collect the concrete expressions of the meson wave functions (WF) in our scheme necessary for the relevant applications \footnote{For more detail, see Refs. \cite{Cov,u-12,Dshep}}. \\ The basic framework of our level-classification scheme is what is called the boosted LS-coupling (bLS) scheme. In this scheme, the WF of $q\bar{q}$ GS mesons are given by the following (bi-local Klein--Gordon) field with one upper and one lower index {\footnote{ For simplicity, only the positive frequency part of the WF is shown here. }}, \begin{eqnarray} \Phi(X,x)^{(+)}_{A}{}^{B} = N e^{+iP \cdot X} \ W(v)^{(+)}_{\alpha, a}{}^{\beta, b} \ f_{G}(v,x).
\label{Eq1} \end{eqnarray} Here $A=(\alpha,a)$ ($B=(\beta,b)$) denotes the Dirac spinor and flavor indices, respectively, and $X_{\mu}$ ($x_{\mu}$) represents the center-of-mass (CM) (relative) coordinate of the composite meson. The ${P_{\mu}}$ ($v_{\mu}=P_{\mu}/M$, $M$ being the mass of the meson; $v_{\mu}^2=-1$, $v_{0}=+1$) denotes the 4-momentum (4-velocity) of the relevant meson. In the bLS scheme, the respective spin ($W(v)_{A}{}^{B}$) and space-time {\footnote{ We have adopted a definite-metric-type 4-dimensional oscillator function as $f_{G}(v,x)${\cite{MassCOQM}}. }} ($f_{G}(v,x)$) parts of the WF are, separately, made covariant by boosting from the corresponding parts of the NR ones. \\ An important feature of the $\widetilde{U}(12)$-scheme is that the spin WF contains an extra $SU(2)$ spin degree of freedom, called the $\rho$-spin. As expansion bases of the spinor WF, we use the Dirac spinors with hadron on-shell 4-velocity, \begin{eqnarray} \{ u_{+}(v), u_{-}(v) \}. \ \ \ \ (\rho_{3} \ u_{\pm}=\pm u_{\pm}) \end{eqnarray} Here, $u_{+}$ corresponds to the conventional constituent quark degree of freedom, while $u_{-}$ is indispensable for a covariant description of confined quarks {\footnote{They form chiral partners in the basic representation of the chiral group.}}. Accordingly, the expansion bases of the $q\bar{q}$ meson WF are given by direct products of the respective spinor WF corresponding to the relevant constituent quark and antiquark. They consist of 16 members in total in the $\tilde{U}(4)_{S}$-space as, \begin{eqnarray} W(v)_{\alpha}{}^{\beta} = u_{r}(v)_{\alpha}\bar{v}_{r'}(v)^{\beta}. \ \ \ \ (r,r^{'})=(\rho_{3},\bar{\rho}_{3}) \end{eqnarray} We show the specific form of the spin WF for the respective members of the $q\bar{q}$ $S$-wave mesons appearing in the relevant applications in Table {\ref{tab1}}. Here it should be noted that, in the actual application, based on the success{\cite{Oda}} of the $SU(6)_{SF}$-description for the $\rho(770)$-nonet, the $\rho(770)$ WF should be taken in the form containing only positive $\rho_{3}$-states. This is achieved by taking the equal-weight superposition of the two vector spin WF, which belong to different chiral representations. \begin{table}[htbp] \centering \caption{ {\it Spin wave functions of the $S$-wave mesons applied in this work, at their rest frame. Note that the physical $\rho$ meson WF is given by the sum of the two vector WF below, i.e.\ the one containing only $(\rho_{3},\bar{\rho}_{3})=(+,+)$. }~(i=1,2,3) } \vskip 0.1 in \begin{tabular}{|l|c|c|c|c|} \hline Mesons & $J^{PC}$ & $W(v=0)^{(+)}$& $SU(2)_{L}\otimes SU(2)_{R}$ &$(\rho_{3},\bar{\rho}_{3})$ \\ \hline \hline $a_{1}(1260)$& $1^{++}$ & $\frac{\gamma_{5}\gamma_{i}}{2}$&$(1_{L},0_{R})\oplus (0_{L},1_{R})$&$\frac{(-,+)+(+,-)}{\sqrt{2}}$\\ \cline{1-5} $b_{1}(1235)$& $1^{+-}$ & $\frac{i\gamma_{5}\sigma_{i4}}{2}$&$({\frac{1}{2}}_{L},{\frac{1}{2}}_{R})$&$\frac{i\left((-,+)+(+,-)\right)}{\sqrt{2}}$\\ \cline{1-5} $\rho(770)$& $1^{--}$ & $\frac{i\gamma_{i}}{2}$&$(1_{L},0_{R})\oplus (0_{L},1_{R})$&$\frac{(+,+)+(-,-)}{\sqrt{2}}$\\ \cline{2-5} $\rho(1250)${\cite{Yamauchi}}& $1^{--}$ & $\frac{\sigma_{i4}}{2}$&$({\frac{1}{2}}_{L},{\frac{1}{2}}_{R})$&$\frac{(+,+)-(-,-)}{\sqrt{2}}$\\ \cline{1-5} $\pi(140)$& $0^{-+}$ & $\frac{i\gamma_{5}}{2}$&$({\frac{1}{2}}_{L},{\frac{1}{2}}_{R})$&$\frac{(+,+)+(-,-)}{\sqrt{2}}$\\ \hline \end{tabular} \label{tab1} \end{table} \section{Radiative decays of the $a_{1}$ and $b_{1}$ mesons} First, we consider the radiative decays of the $a_{1}$ and $b_{1}$ mesons.
In this work, we focus on the radiative transitions among the GS mesons. Therefore we are able to adopt simply the effective spin-type interaction, \begin{eqnarray} H= \ \bar{q} \ \sigma_{\mu\nu}F_{\mu\nu}((iv\gamma)g + g') \ q \ . \label{em} \end{eqnarray} Here we introduce two independent coupling parameters $g$ and $g^{'}$. The $g$ term contributes only to quark-chirality-conserving transitions, while the $g^{'}$ term contributes to chirality-non-conserving ones. By applying the quark-photon interaction ({\ref{em}}), the effective meson current is given by the following formulas, \begin{eqnarray} J_{\mu}^{}(P,P^{'})=J_{1,\mu}^{}(P,P^{'}) +J_{2,\mu}^{}(P,P^{'}). \end{eqnarray} Here, the subscript $1$ ($2$) represents the coupling of the emitted single photon to the relevant meson system through the constituent quark (antiquark). The specific form of the current is represented by \begin{eqnarray} J_{1,\mu}^{}(P,P^{'})&=&e_{q} I_{G}^{(\gamma)}~\langle \bar{W}^{(-)}(v^{'}) [2 g i\sigma_{\mu\nu}q_{\nu}] iv\gamma W^{(+)}(v) iv\gamma\rangle \ , \\ J_{2,\mu}^{}(P,P^{'})&=&e_{\bar{q}} I_{G}^{(\gamma)}~\langle iv\gamma W^{(+)}(v){iv\gamma}[-2g^{}(-i\sigma_{\mu\nu}q_{\nu})] \bar{W}^{(-)}(v^{'})\rangle \ , \end{eqnarray} for the case of the chirality-conserving transitions; and similarly \begin{eqnarray} J_{1,\mu}^{'}(P,P^{'})&=&e_{q} I_{G}^{(\gamma)}~\langle \bar{W}^{(-)}(v^{'}) [2 g_{}^{'} i\sigma_{\mu\nu}q_{\nu}]W^{(+)}(v)iv\gamma\rangle \ , \\ J_{2,\mu}^{'}(P,P^{'})&=&e_{\bar{q}} I_{G}^{(\gamma)}~\langle iv\gamma W^{(+)}(v)[-2 g_{}^{'}(-i\sigma_{\mu\nu}q_{\nu})] \bar{W}^{(-)}(v^{'})\rangle \ , \end{eqnarray} for the case of the chirality-non-conserving transitions. Here $q_{\mu}=P_{\mu}-P^{'}_{\mu}$ denotes the 4-momentum of the emitted photon, and $I_{G}^{(\gamma)}$ is the overlapping integral (OI) of the space-time oscillator functions, which gives a Lorentz-invariant transition form factor as \begin{eqnarray} I_{G}^{(\gamma)}&=&\int d^4 x f_{G}^{*}(v^{'}, x) f_{G}(v,x) e^{-i\frac{1}{2}q_{\mu}x_{\mu}}\\ &=&(\frac{2MM^{'}}{M^2+M^{'2}}){\rm exp}[-\frac{1}{2\Omega} \frac{(M^2-M^{'2})^2}{M^{2}+M^{'2}}], \label{IGgamma} \end{eqnarray} where we introduce the parameter $\Omega$ corresponding to the Regge slope inverse. \\ In our scheme the relativistic covariance of the spin current, due to the inclusion of Dirac spinors with negative $\rho_{3}$-value, plays an important role in some radiative transition processes. To clarify this point, we rewrite the spin current vertex operator as \begin{eqnarray} \sigma_{\mu\nu} iq_{\nu} A_{\mu}=\sigma_{\mu\nu} F_{\mu\nu} = {\mbox{\boldmath{$\sigma$}}} \cdot {\bf{B}} - i\rho_{1} {\mbox{\boldmath$\sigma$}}\cdot{\bf{E}}~~. \end{eqnarray} In the case of transitions between two positive (negative) $\rho_{3}$ Dirac spinors, as is well known, the main contribution comes from the magnetic interaction. On the other hand, in the case of transitions between Dirac spinors with positive and negative $\rho_{3}$-values, the electric interaction, coming from the $\sigma_{i4}iq_{i} A_{4}$-term, becomes the dominant contribution. As a result, this {\it intrinsic electric dipole}{\cite{Dshad03}} transition plays an important role for transitions accompanied by a parity change, such as the $a_{1} (b_{1}) \to \pi \gamma$ processes. \\ In this work, we take the following values of the parameters in our scheme.
\begin{itemize} \item ($g$, $g_{}^{'}$)=($2.59$, $1.40$) \ from \ $\Gamma_{\rm EXP}( b_{1}^{+} \to \pi^{+} \gamma)$ and $\Gamma_{\rm EXP}(\rho^{+} \to \pi^{+} \gamma)$ \item $\Omega_{n\bar{n}} =1.13 \ {\rm GeV}^2$ from $\Omega=M({}^{3}P_{2})^2-M({}^{3}S_{1})^2 =M(a_{2}(1320))^2-M(\rho (770))^2$ \end{itemize} The masses of the respective mesons are taken from the PDG{\cite{PDG2007}}, except for that of the pion in the form factor, for which we take $M_{\pi}=0.78~{\rm GeV}$. The estimated widths are compared with experiment in Table {\ref{tab3}}. The results of this calculation are consistent with the experimental data. \begin{table}[t] \centering \caption{ {\it Radiative decay widths (keV) in comparison with experiment. Experimental data are taken from the PDG{\cite{PDG2007}}.} } \vskip 0.1 in \begin{tabular}{|l|c|c|} \hline Process & Our results & Experimental values \\ \hline \hline $\rho(770)\to\pi\gamma$ & 68~(input) & 68$\pm$7 \\ \hline $b_{1}(1235)\to\pi\gamma$ & 230~(input) & 230$\pm$60 \\ \hline $a_{1}(1260)\to\pi\gamma$ & 604 & 640$\pm$246 \\ \hline \end{tabular} \label{tab3} \end{table} \section{Pion emissions of $a_{1}$ and $b_{1}$ mesons} Next we consider the strong decays with one-pion emission. We simply adopt the following two types of effective quark-pion interactions; \begin{eqnarray} L_{ps}&=&g_{ps} \ \bar{q}(-i\gamma_{5})q \ \pi, \\ L_{pv}&=& g_{pv} \ \bar{q}(-i\gamma_{5}\gamma_{\mu})q \ \partial_{\mu} \pi. \end{eqnarray} Note that here the $\pi$ (and $\sigma$) meson is treated as an external local field. The resultant matrix elements are given as a sum of two terms; \begin{eqnarray} T &=& T_{ps} + T_{pv} \ , \\ T_{ps} &=& g_{ps} I_{G}^{(\pi)}~\langle W (v^{'}) (- i\gamma_{5} \pi) W (v) iv\gamma \rangle + c.c. \ \ , \\ T_{pv} &=& g_{pv} I_{G}^{(\pi)}~\langle W (v^{'}) (-\gamma_{5} \gamma_{\mu}q_{\mu}\pi) W (v) iv\gamma \rangle + c.c. \ \ . \end{eqnarray} In the above case, the OI of the space-time WF is given by \begin{eqnarray} I_{G}^{(\pi)}&=&\int d^4 x f_{G}^{*}(v^{'}, x) f_{G}(v,x) e^{-i\frac{1}{2}q_{\mu}x_{\mu}}\\ &=&(\frac{2MM^{'}}{M^2+M^{'2}-m_{\pi}^2}) {\rm exp}[- \frac{(M^2-M^{'2})^2-m_{\pi}^2(M^2 + M^{'2})} {2\Omega \left(M^{2}+M^{'2}-m_{\pi}^2\right)}], \end{eqnarray} where $q^2=-m_{\pi}^2$, $q_{\mu}=P_{\mu}-P^{'}_{\mu}$ being the 4-momentum of the emitted pion. The relevant decay amplitude is \begin{eqnarray} T=f_{1}\ \epsilon_{\mu}(v')\epsilon_{\mu}(v) \ + \ f_{2} \ (q_{\mu}\epsilon_{\mu}(v'))(q_{\nu}\epsilon_{\nu}(v)). \label{ampT} \end{eqnarray} The explicit forms of $f_{1}$ and $f_{2}$ are shown in Table {\ref{tab4}}. It may be worthwhile to note that at least two coupling types (expressed by $f_{1}$ and $f_{2}$ above) are required to reproduce the experimental data on the $D/S$-wave amplitude ratios.\\ Our decay interaction contains two independent coupling parameters, $g_{ps}$ and $g_{pv}$, which are commonly applied to all quark-pion vertices {\footnote{ As an example, it is applied to the study of the `extra' $\kappa$ meson{\cite{Yamada}}. }}. These are determined from the experimental data on the $D/S$-wave amplitude ratio and the total width of the $b_{1}$ meson as follows: \begin{itemize} \item $\frac{g_{ps}}{g_{pv}}=0.149 \ {\rm GeV}$ \ from \ $T_{D}/T_{S}|_{\rm EXP}(b_{1}^{+} \to \omega \pi^{+}) = + 0.277$ \item $g_{pv}=14.0$ \ from \ $\Gamma_{\rm EXP}( b_{1}^{+} \to \omega \pi^{+}) \approx \Gamma_{\rm EXP}( b_{1}^{+}{}_{{\rm total}})=142 \ {\rm MeV}$. \end{itemize} The masses of the relevant mesons are taken from the PDG{\cite{PDG2007}}.
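With the above parameter values fixed, the closed-form overlap integrals $I_{G}^{(\gamma)}$ and $I_{G}^{(\pi)}$ can be evaluated directly. The following short Python sketch (ours, not part of the original analysis; the meson masses in GeV are assumed, PDG-like, sample values) illustrates the evaluation:
\begin{verbatim}
from math import exp

Omega = 1.13                    # Regge-slope inverse in GeV^2, as fixed above

def I_gamma(M, Mp):
    """Overlap integral I_G^(gamma) for radiative transitions (see above)."""
    s = M*M + Mp*Mp
    return (2*M*Mp/s) * exp(-(M*M - Mp*Mp)**2 / (2*Omega*s))

def I_pion(M, Mp, m_pi=0.140):
    """Overlap integral I_G^(pi) for one-pion emission (see above)."""
    s = M*M + Mp*Mp - m_pi*m_pi
    return (2*M*Mp/s) * exp(-((M*M - Mp*Mp)**2 - m_pi*m_pi*(M*M + Mp*Mp))
                            / (2*Omega*s))

# Assumed masses (GeV): a_1(1260) ~ 1.23, rho(770) ~ 0.776; the effective
# pion mass 0.78 GeV is used in the radiative form factor, as stated above.
print(I_gamma(1.23, 0.78))      # a_1 -> pi gamma
print(I_pion(1.23, 0.776))      # a_1 -> rho pi
\end{verbatim}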
The numerical results are shown in Table {\ref{tab5}}. \begin{table}[htbp] \centering \caption{ \it Coefficients of the decay amplitude ({\ref{ampT}}) for the $a_{1}\to\rho\pi$ and $b_{1}\to\omega\pi$ processes. } \vskip 0.1 in \begin{tabular}{|l|c|c|} \hline & $b_{1}\to \omega \pi $ & $a_{1}\to \rho \pi$ \\ \hline \hline $f_{1}$ & $I_{G}\times ( -g_{ps}+(\omega M- M^{'})g_{pv})$ & $I_{G}\times ( -g_{ps}\omega+(M-\omega M^{'})g_{pv})$\\ $f_{2}$ &$I_{G}\times ( -g_{pv}\frac{1}{M^{'}})$ &$I_{G}\times ( g_{ps}\frac{1}{M M^{'}}+g_{pv}\frac{1}{M})$ \\ \hline \end{tabular} \label{tab4} \end{table} \begin{table}[htbp] \centering \caption{ \it Numerical results for the pion emissions of the $a_{1}$ and $b_{1}$ mesons. Experimental data are taken from the PDG{\cite{PDG2007}}. } \vskip 0.1 in \begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{2}{|c|}{$T_{D}/T_{S}$} & \multicolumn{2}{|c|}{Width (MeV)} \\ \cline{2-3}\cline{4-5} process &Our results&Experimental values&$\Gamma_{\rm partial}^{\rm theor.}$& $\Gamma_{\rm total}^{\rm Exp.}$\\ \hline \hline $b_{1}\to \omega \pi $ &0.277(input) & 0.277$\pm$ 0.027 &142(input)&142$\pm$9\\ $a_{1}\to \rho \pi$ &-0.344 & -0.108$\pm$ 0.016& 191& $250 \sim 600 $\\ \hline \end{tabular} \label{tab5} \end{table} \section{Concluding remarks} In this work, we investigate the decay properties of the $q\bar{q}$ $S$-wave $a_{1}$ and $b_{1}$ mesons in the $\widetilde{U}(12)$-scheme, by identifying them with the $a_{1}(1260)$ and $b_{1}(1235)$ mesons, respectively. \\ First, it is shown that the radiative decay widths of the $(a_{1},b_{1},\rho) \to \pi \gamma$ processes are consistently reproduced by using the simple spin-type quark-photon effective interaction in the framework of the $\widetilde{U}(12)$-scheme.\\ Secondly, for the strong one-pion emission decays, assuming the $ps$- and $pv$-type quark-pion effective interactions, the $D/S$-wave amplitude ratios and the partial widths of the $a_{1} (b_{1}) \to \rho (\omega)\pi $ decays are evaluated. As a result, by inputting the data for the $b_{1}$ meson, the sign of the $D/S$-wave amplitude ratio for the $a_{1} \to \rho \pi $ decay agrees with experiment, but its absolute value is about three times larger than the experimental one. The partial width of $a_{1} \to \rho \pi $ is predicted to be $\Gamma ( a_{1} \to \rho \pi)\sim 200~{\rm MeV} $.\\ The interaction adopted in this work for the radiative/strong decays should be tested by applying it to various other decay processes.
\section{Introduction} In a series of papers and in his 2009 book on configurations~\cite{Gru2009b}, Branko Gr\"unbaum described a sequence of operations to produce new $(n_{4})$ configurations from various input configurations. These operations were later called the ``Gr\"unbaum Incidence Calculus''~\cite[Section 6.5]{PisSer2013}. Some of the operations described by Gr\"unbaum are specific to producing 3- or 4-configurations. Other operations can be generalized in a straightforward way to produce $(n_{k})$ configurations from either smaller $(m_{k})$ configurations with certain properties, or from $(m_{k-1})$ configurations. Let $N_{k}$ be the smallest number such that for any $n \geq N_k$ there exists a geometric $(n_k)$ configuration. For $k = 2$ and $k = 3$, the exact value of $N_{k}$ is known, and for $k = 4$ it is known that $N_{4} = 20$ or $24$. We generalize two of the Gr\"unbaum Calculus operations in order to prove that for any integer $k$ there exists an integer $N_k$, and we give bounds on $N_{k}$ for $k \geq 5$. The existence of geometric 2-configurations is easily established. The only (connected) combinatorial configuration $(n_2)$ is an $n$-lateral. For each $n, n\geq 3$, an $n$-lateral can be realized as a geometric multilateral (for the definition of a \emph{multilateral}, see~\cite{Gru2009b}). As a specific example, an $(n_{2})$ configuration can be realized as a regular $n$-gon with sides that are extended to lines. (For larger values of $n$ it can also be realized as an $n$-gonal star-polygon, but the underlying combinatorial structure is the same.) Hence: \begin{prop}\label{thm:n2} A geometric $(n_2)$ configuration exists if and only if $n \geq 3$. In other words, $N_2 = 3$. \end{prop} For 3-configurations, $N_{3}$ is known to be 9 (see \cite[Section 2.1]{Gru2009b}); for example, Branko Gr\"{u}nbaum provides a proof (following that of Schr\"{o}ter from 1888, see the discussion in \cite[p. 65]{Gru2009b}) that the cyclic combinatorial configuration $\mc{C}_{3}(n)$, which has starting block $[0,1,3]$, can always be realized with straight lines for any $n \geq 9$. That is: \begin{prop}\label{thm:n3} A geometric $(n_3)$ configuration exists if and only if $n \geq 9$. In other words, $N_3 = 9$. \end{prop} Note that there exist two combinatorial 3-configurations, namely $(7_3)$ and $(8_3)$, that do not admit a geometric realization. For $k = 4$, the problem of determining the parameters for which 4-configurations exist is much more complex, and the exact value of $N_4$ is still not known. For a number of years, the smallest known 4-configuration was the $(21_{4})$ configuration which had been studied combinatorially by Klein and others, and whose geometric realization, first shown in 1990 \cite{GruRig1990}, initiated the modern study of configurations. In that paper, the authors conjectured that this was the smallest $(n_{4})$ configuration. In a series of papers \cite{Gru2000, Gru2000b, Gru2002, Gru2006} (summarized in \cite[Sections 3.1-3.4]{Gru2009b}), Gr\"unbaum showed that $N_{4}$ was finite and less than 43. In 2008, Gr\"{u}nbaum found a geometrically realizable $(20_{4})$ configuration \cite{Gru2008a}. In 2013, J\"{u}rgen Bokowski and Lars Schewe \cite{BokSch2013} showed that geometric $(n_{4})$ configurations exist for all $n \geq 18$ except possibly $n = 19, 22, 23, 26, 37, 43$.
Subsequently, Bokowski and Pilaud \cite{BokPil2015} showed that there is no geometrically realizable $(19_{4})$ configuration, and they found examples of realizable $(37_{4})$ and $(43_{4})$ configurations \cite{BokPil2016}. In 2018, Michael Cuntz \cite{Cun2018} found realizations of $(22_{4})$ and $(26_{4})$ configurations. However, the question of whether a geometric $(23_{4})$ configuration exists is currently still open. In this paper, $\bar{N}_k$ will denote any known upper bound for $N_k$ and $N^R_k$ will denote the currently best known upper bound for $N_k$. Summarizing the above results, we conclude: \begin{prop}\label{prop:N-4-results} A geometric $(n_4)$ configuration exists for $n = 18,20,21,22$ and $n \geq 24$. Moreover, either $N_{4} = 20$ or $N_{4} = 24$ (depending on whether or not a $(23_{4})$ configuration exists). In other words, $N^R_4 = 24.$ \end{prop} The main result of the paper is the following. \begin{theorem}\label{mainTheorem} For each integer $k \geq 2$ the numbers $N_k$ exist. \end{theorem} To simplify subsequent discussions, we introduce the notion of \emph{configuration-realizability}, abbreviated as \emph{realizability}, of numbers. A number $n$ is \emph{$k$-realizable} if and only if there exists a geometric $(n_k)$ configuration. We may rephrase Proposition \ref{prop:N-4-results} by stating that the numbers $n = 18, 20, 21, 22$ and $n \geq 24$ are $4$-realizable. Also note that the number $9$ is $2$- and $3$-realizable but not $k$-realizable for any $k \geq 4$. \section{Generalizing two constructions from the Gr\"{u}nbaum Incidence Calculus} \label{sect:Grunbaum} In this section, we generalize two constructions of the Gr\"unbaum Incidence Calculus which we will use to prove the existence of $N_{k}$ for any $k$. As input to examples of these constructions, we will often use the standard geometric realization of the $(9_{3})$ Pappus configuration $\mc{P}$, shown in Figure \ref{fig:pappus}. \begin{figure}[htbp] \begin{center} \includegraphics[width=.5\textwidth]{papcfg1.png} \caption{The standard geometric realization of the $(9_{3})$ Pappus configuration $\mc{P}$.} \label{fig:pappus} \end{center} \end{figure} The first, which we call \emph{affine replication} and denote $\mathrm{AR}(m, k)$, generalizes Gr\"unbaum's $\mathbf{(5m)}$ construction; it takes as input an $(m_{k-1})$ configuration and produces a $(((k+1)m)_{k})$ configuration with a pencil of $m$ parallel lines. The second, which we call \emph{affine switch}, is analogous to Gr\"unbaum's $\mathbf{(3m+)}$ construction. It takes as input a single $(m_{k})$ configuration with a set of $p$ parallel lines in one direction and a set of $q$ parallel lines in a second direction which are disjoint (in terms of configuration points) from the pencil of $p$ lines, and it produces a $(((k-1)m+r)_{k})$ configuration for any $r$ with $1 \leq r \leq p+q$. Applying a series of affine switches to a single starting $(m_{k})$ configuration with a pencil of $q$ parallel lines produces a consecutive sequence (or ``band'') of configurations \[(((k-1)m+1)_{k}), \ldots, (((k-1)m+q)_{k})\] which we will refer to as $\mathrm{AS+}(m,k,q)$. \subsection{Affine Replication} Starting from an $(m_{k-1})$ configuration $\mc{C}$ we construct a new configuration $\mc{D}$ which is a $(((k+1)m)_{k})$ configuration.
A sketch of the construction is that $k-1$ affine images of $\mc{C}$ are carefully constructed so that each point $P$ of $\mc{C}$ is collinear with the $k-1$ images of $P$, and each line of $\mc{C}$ and its images are concurrent at a single point. Then $\mc{D}$ consists of the points and lines of $\mc{C}$ and its images, the new lines corresponding to the collinearities from each point $P$, and the new points of concurrence corresponding to the lines of $\mc{C}$ and their images. The details of the construction are as follows: \begin{enumerate} \item Let $A$ be a line that (i) does not pass through the intersection of two lines of $\mc{C}$, whether or not that intersection point is a point of the configuration; (ii) is perpendicular to no line connecting any two points of $\mc{C}$, whether or not that line is a line of the configuration; (iii) intersects all lines of $\mc{C}$. \item Let $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{k-1}$ be pairwise different orthogonal axial affinities with axis $A$. Construct copies $\mc{C}_{1} = \alpha_{1}(\mc{C})$, $\mc{C}_{2} = \alpha_{2}(\mc{C})$,\ldots, $\mc{C}_{k-1} = \alpha_{k-1}(\mc{C})$ of $\mc{C} = \mc{C}_{0}$. \item Let $\ell$ be any line of $\mc{C}$. Since $A$ is the common axis of each $\alpha_i$, the point $A\cap \ell$ is fixed by all these affinities. This means that the $k$-tuple of lines $\ell, \alpha_{1}(\ell), \ldots, \alpha_{k-1}(\ell)$ has a common point of intersection lying on $A$. We denote this point by $F_{\ell}$. By condition (i) in (1), for different lines $\ell, \ell' \in \mc {C}$ the points $F_{\ell}, F_{\ell'}$ differ from each other; they also differ from each point of the configurations $\mc {C}_i $ $(i=1,2,\dots, k-1)$. We denote the set $\{F_{\ell}: \ell\in \mc{C}\}$ of points lying on $A$ by $\mc{F}$. \item Let $P$ be any point of $\mc{C}$. Since the affinities $\alpha_{i}$ are all orthogonal affinities (with the common axis $A$), the $k$-tuple of points $P, \alpha_{1}(P), \dots, \alpha_{k-1}(P)$ lies on a line perpendicular to $A$ (and avoids $A$, by condition (i)). We denote this line by $\ell_P$. Clearly, we have altogether $m$ such lines, one for each point of $\mc{C}$, with no two of them coinciding, by condition (ii). We denote this set $\{\ell_P: P\in \mc{C}\}$ of lines by $\mc L$. \item Put $\mc{D} = \mc{C}_0\cup \mc{C}_{1}\cup\dots \cup\mc{C}_{k-1}\cup \mc{F} \cup \mc L$. \end{enumerate} \begin{figure}[h] \begin{center} \includegraphics[width=.66\textwidth]{Quadrilaterals.png} \caption{Affine replication $\mathrm{AR}(4, 3)$ applied to a quadrilateral, i.e.\ a $(4_2)$ configuration; it results in a $(16_3)$ configuration. The corresponding ordinary quadrangles are shaded (the starting quadrangle is a parallelogram, hence so are all three). The axis $A$ is shown by a dashed line.} \label{fig:Quadrilaterals} \end{center} \end{figure} \noindent The conditions of the construction imply that $\mc{D}$ is a $(((k+1)m)_{k})$ configuration. Moreover, by construction, $\mc{D}$ has a pencil of $m$ parallel lines. Figures \ref{fig:Quadrilaterals} and \ref{fig:PappusExt} show two examples of affine replication, first starting with a $(4_{2})$ configuration to produce a $(16_{3})$ configuration, and then starting with the $(9_{3})$ Pappus configuration to produce a $(45_{4})$ configuration.
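As a quick numerical illustration (ours, not part of the original construction), the following Python sketch carries out $\mathrm{AR}(4,3)$ on an ad hoc generic quadrilateral, with the $x$-axis playing the role of the axis $A$ and two orthogonal axial affinities of assumed ratios $1.7$ and $2.3$; it then checks that the result is a $(16_{3})$ configuration, as in Figure~\ref{fig:Quadrilaterals}:
\begin{verbatim}
# Base (4_2) configuration: a generic quadrilateral (chosen ad hoc so that
# the axis A, here the x-axis, meets every side, no side is horizontal, and
# no line joining two vertices is vertical).
pts = [(0.0, 2.0), (3.0, 1.0), (5.0, 4.0), (1.0, 6.0)]
sides = [(0, 1), (1, 2), (2, 3), (3, 0)]

ratios = [1.0, 1.7, 2.3]          # identity copy plus k-1 = 2 affine images

def alpha(p, r):                  # orthogonal axial affinity with axis y = 0
    return (p[0], r*p[1])

def axis_meet(a, b):              # the fixed point F_l of a side on the axis
    t = a[1]/(a[1] - b[1])
    return (a[0] + t*(b[0] - a[0]), 0.0)

def collinear(a, b, q, eps=1e-6):
    return abs((b[0]-a[0])*(q[1]-a[1]) - (b[1]-a[1])*(q[0]-a[0])) < eps

# Points of D: the three copies of each vertex plus the four points F_l.
P = [alpha(v, r) for r in ratios for v in pts]
P += [axis_meet(pts[i], pts[j]) for i, j in sides]

# Lines of D: the three copies of each side (all concurrent at F_l), plus
# the line l_P joining each vertex to its images (perpendicular to A).
L = [(alpha(pts[i], r), alpha(pts[j], r)) for i, j in sides for r in ratios]
L += [(v, alpha(v, ratios[1])) for v in pts]

assert len(P) == 16 and len(L) == 16
for a, b in L:                    # every line must carry exactly 3 points
    assert sum(collinear(a, b, q) for q in P) == 3
print("AR(4,3) yields a (16_3) configuration")
\end{verbatim}
The base quadrilateral and the ratios must be chosen generically (in the sense of conditions (i)--(iii) above); otherwise accidental incidences may produce lines carrying more than $k$ points.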
\begin{remark} The orthogonal affinities used in the construction are just a particular case of the axial affinities called \emph{strains}~\cite{Cox1969}; they can be replaced by other types of axial affinities, namely, by oblique affinities (each with the same (oblique) direction), and even, by \emph{shears} (where the direction of affinity is parallel with the axis)~\cite{Cox1969}, while suitably adjusting conditions (i--iii) in (1). \end{remark} \begin{figure}[h!] \begin{center} \includegraphics[width=0.55\textwidth]{PappusTo45-4c.png} \caption{Affine replication $\mathrm{AR}(9,4)$ applied to the $(9_3)$ Pappus configuration, which yields a $(45_4)$ configuration. The starting figure is indicated by thick segments, while the first image is highlighted by red segments. The axis $A$ is shown by a dashed line. The construction is chosen so as to exemplify that ordinary mirror reflection can also be used. Note that the resulting configuration contains a pencil of 9 parallel lines arising from the construction, shown in green. } \label{fig:PappusExt} \end{center} \end{figure} We may summarize the above discussion as follows: \begin{lemma}\label{lemma:kp1} If affine replication $\mathrm{AR}(m, k)$ is applied to any $(m_{k-1})$ configuration, the result is a $(((k+1)m)_k)$ configuration with a pencil of $m$ parallel lines. \end{lemma} \subsection{Affine Switch} In our description of this construction, we are inspired by Gr\"unbaum~\cite[\S 3.3, pp. 177--180]{Gru2009b} but we have chosen a slightly different approach (in particular, we avoid using 3-space). At the same time, we generalize it from $(m_4)$ to $(m_k)$. A sketch of the construction is as follows: Suppose that $\mc{C}$ is an $(m_{k})$ configuration that contains a pencil $\mc{P}$ of $p$ parallel lines in one direction, and a pencil $\mc{Q}$ of $q$ parallel lines in a second direction, where the two pencils share no common configuration points; we say that the pencils are \emph{independent}. For each subpencil $\mc S$ of $\mc P$ and $\mc{T}$ of $\mc{Q}$ containing $s$ parallel lines and $t$ parallel lines respectively, with $1 \leq s \leq p$ and $0 \leq t \leq q$, we form the subfiguration $\hat{\mc C}$ by deleting $\mc S$ and $\mc T$ from $\mc C$ (here we use the term \emph{subfiguration} in the sense of Gr\"unbaum~\cite{Gru2009b}). We then carefully construct $k-2$ affine images of $\hat{\mc C}$ in such a way that for each (deleted) line $\ell$ in $\mc S$ and for the points $P_{1}, P_{2}, \ldots, P_{k}$ on $\ell$, the lines through each $P_{i}$ and its images all intersect in a single point $Y_{\ell}$, and simultaneously, for each line $\ell'$ in $\mc T$ and for the points $Q_{1}, Q_{2}, \ldots, Q_{k}$ on $\ell'$, the lines through each $Q_{i}$ and its images all intersect in a single point $X_{\ell'}$. Let $\mc D$ be the collection of all the undeleted points and lines of $\hat{\mc{C}}$ and its affine images, together with, for each of the deleted lines $\ell$ and $\ell'$, the new lines through each point $P_{i}$ or $Q_{i}$ and its images, the points $Y_{\ell}$, and the points $X_{\ell'}$; then $\mc{D}$ is a $(((k-1)m+s+t)_{k})$ configuration. As a preparation, we need the following two propositions. \begin{prop}\label{prop:pencil} Let $\alpha$ be a (non-homothetic) affine transformation that is given by a diagonal matrix with respect to the standard basis.
Note that in this case $\alpha$ can be written as a (commuting) product of two orthogonal affinities whose axes coincide with the $x$- and $y$-axis, respectively: $$ \begin{matr}{cc} a&0\\0&b \end{matr} = \begin{matr}{cc} a&0\\0&1 \end{matr} \begin{matr}{cc} 1&0\\0&b \end{matr} = \begin{matr}{cc} 1&0\\0&b \end{matr}% \begin{matr}{cc} a&0\\0&1 \end{matr}. $$ Let $P_0(x_0,0),P_1(x_0, y_1),\dots,P_k(x_0, y_k)$ be a range of $k+1$ different points on a line which is perpendicular to the $x$-axis and intersects it in $P_0$. Then the $k$ lines connecting the pairs of points $(P_1, \alpha(P_1)),\dots, (P_k, \alpha(P_k))$ form a pencil with center $C_x$ such that $C_x$ lies on the $x$-axis, and its position depends only on $\alpha$ and $x_0$. Likewise, let $Q_0(x_0,y_0),Q_1(x_1, y_0),\dots,Q_k(x_k, y_0)$ be a range of $k+1$ different points on a line which is perpendicular to the $y$-axis and intersects it in $Q_0$. Then the $k$ lines connecting the pairs of points $(Q_1, \alpha(Q_1)),\dots, (Q_k, \alpha(Q_k))$ form a pencil with center $C_y$ such that $C_y$ lies on the $y$-axis, and its position depends only on $\alpha$ and $y_0$. \end{prop} \begin{proof} An elementary calculation shows that $$ C_x=C_x\left(\displaystyle\frac{b-a}{b-1}\,x_0,\;0\right), \text{\;resp.\;\,} C_y=C_y\left(0,\displaystyle\frac{a-b}{a-1}\,y_0\right) $$ is the common point of intersection of any two, hence of all the lines in question. \end{proof} \medskip \begin{prop} \label{prop:series} Let $h\ge3$ be a positive integer, and for each $j$ with $j=1,\dots,h-1$, let the affine transformation $\alpha_j$ be given by the matrix \begin{equation} \label{matrix} M_j = \begin{matr}{cc} \displaystyle\frac{h-j}{h}&0\\0&\displaystyle\frac{h+j}{h} \end{matr}. \end{equation} Then for any point $P$, the points $P, \alpha_1(P),\dots,\alpha_{h-1}(P)$ are collinear. \end{prop} \begin{proof} Choose any $j'$ and $j''$, and form the difference matrices $M_{j'}-U$ and $M_{j''}-U$ with the unit matrix $U$. Observe that these matrices are such that one is a scalar multiple of the other. Hence the vectors $\overrightarrow{PP'}$ and $\overrightarrow{PP''}$ are parallel, where $P'=\alpha_{j'}(P)$ and $P''=\alpha_{j''}(P)$. This means that the points $P$, $P'$ and $P''$ lie on the same line. \end{proof} \medskip \begin{figure}[h!] \begin{center} \includegraphics[width=.9\textwidth]{Series_of_Affinities.png} \caption{Illustration for Propositions~\ref{prop:pencil} and~\ref{prop:series}. Affine transformations with parameters $h=8$ and $j=1,\dots, 5$ are applied to a square.} \label{fig:series} \end{center} \end{figure} Now we apply the following construction. Let $\mathcal C$ be an $(m_k)$ configuration such that it contains a pencil $\mc P$ of $p\ge1$ parallel lines and a pencil $\mc Q$ of $q\ge1$ parallel lines, too, such that these pencils are perpendicular to each other and are independent. Note that any configuration containing independent pencils in two different directions can be converted by a suitable affine transformation to a configuration in which these pencils will be perpendicular to each other. Choose a position of $\mathcal C$ (applying an affine transformation if necessary) such that these pencils are parallel to the $x$-axis and $y$-axis, respectively. \begin{enumerate} \item Remove lines $\ell_1,\dots, \ell_s$ $(s\le p)$ from the pencil $\mc P$ parallel to the $x$-axis and $\ell_{s+1},\dots, \ell_{s+t}$ $(0 \leq t\le q)$ from the pencil $\mc Q$ parallel to the $y$-axis.
Let $\widehat{\mathcal {C}}$ denote the substructure of $\mathcal C$ obtained in this way. \item \label{item:j} Let $h$ be a positive integer (say, some suitable multiple of $k$), and for each $j$, $j=1, \dots, k-2$, let $\alpha_j$ be an affine transformation defined in Proposition~\ref{prop:series}. Form the images $\alpha_j(\widehat{\mathcal {C}})$ for all $j$ given here. \item\label{item:ConnectingLine} Let $P$ be a point of $\widehat{\mathcal {C}}$ that was incident to one of the lines $\ell_i$ removed from $\mathcal C$. Take the images $\alpha_j(P)$ for all $j$ given in (\ref{item:j}). By Proposition~\ref{prop:series}, all the $k-1$ points $P, \alpha_j(P)$ are collinear. Let $c_i(P)$ denote this line. \item\label{item:centre} Take all the configuration points on $\ell_i$ and repeat (\ref{item:ConnectingLine}) for each of them. By Proposition~\ref{prop:pencil}, the $k$-set of lines $\{c_i(P): P\in \ell_i\}$ forms a pencil whose center lies on the $x$-axis or the $y$-axis according to which axis $\ell_i$ is perpendicular to. \item \label{item:NewElements} Let $r=(s+t)\in\{1,2, \dots, p+q\}$ be the number of lines removed from the pencils of $\mathcal C$ in the initial step of our construction. Repeat (\ref{item:centre}) for all these lines. Eventually, we obtain $rk$ new lines and $r$ new points such that the set of the new lines is partitioned into $r$ pencils, and the new points are precisely the centers of these pencils (hence they lie on the coordinate axes). Observe that there are precisely $k$ lines passing through each of the new points, and likewise there are precisely $k$ points lying on each of the new lines. \item Putting everything together, we form a $(((k-1)m+r)_k)$ configuration, whose \begin{itemize} \item points come from the $(k-1)m$ points of the copies of $\widehat{\mathcal {C}}$, completed with the $r$ new points considered in (\ref{item:NewElements}), \item lines come from the $(k-1)(m-r)$ lines of the copies of $\widehat{\mathcal{C}}$, completed with the $rk$ new lines considered in (\ref{item:NewElements}). \end{itemize} We use the notation $AS(m, k, r)$ to represent the $(((k-1)m+r)_k)$ configuration described above. \end{enumerate} Summarizing the discussion above, we conclude: \begin{lemma}\label{lemma:km1pr} Beginning with any $(m_k)$ configuration with independent pencils of $p\geq 0$ and $q \geq 1$ parallel lines, for each integer $r$ with $1 \leq r \leq p+q$, the affine switch construction produces an $(n_k)$ configuration, where $n = (k-1)m+r$. \end{lemma} Note that $p+q$ independent lines in an $(m_k)$ configuration cover $k(p+q)$ points, so that $k(p+q) \leq m$. This gives an upper bound $p+q \leq m/k$, where equality is attained only if $k$ divides $m$. In this paper we use the above Lemma \ref{lemma:km1pr} in connection with Lemma \ref{lemma:kp1} only for the case of a single pencil of parallel lines, such that $p = 0$. \begin{corollary}\label{lem:affineSwitch} From any starting $(m_{k})$ configuration that has a pencil of $q$ parallel lines, we apply a sequence of affine switches by removing $1, 2, \ldots, q$ lines in sequence, to construct a sequence of consecutive configurations \[ [ (((k-1)m+r)_{k}) ]_{r=1}^{q} = [ AS(m, k, r) ]_{r=1}^{q}. \] \end{corollary} This collection of consecutive configurations is represented by the notation $\mathrm{AS+}(m, k, q)$; that is, $\mathrm{AS+}(m,k,q) = [ AS(m, k, r) ]_{r=1}^{q}$.
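Since the affine switch rests on Propositions~\ref{prop:pencil} and~\ref{prop:series}, it may be useful to check them numerically. The following Python sketch (ours; the values $h=8$, $k=5$ and the sample points are assumed choices) does so in exact rational arithmetic:
\begin{verbatim}
from fractions import Fraction

h, k = 8, 5                        # assumed sample parameters
def M(j):                          # alpha_j as the diagonal pair (a, b)
    return (Fraction(h - j, h), Fraction(h + j, h))

def collinear(p, q, r):
    return (q[0]-p[0])*(r[1]-p[1]) == (q[1]-p[1])*(r[0]-p[0])

# Proposition (series): P and its k-2 images are collinear.
P = (Fraction(3), Fraction(2))
imgs = [(a*P[0], b*P[1]) for (a, b) in map(M, range(1, k - 1))]
assert all(collinear(P, imgs[0], Q) for Q in imgs[1:])

# Proposition (pencil): the lines P_i -> alpha(P_i), for points P_i on the
# vertical line x = x0, all pass through C_x = ((b-a)/(b-1)*x0, 0).
a, b = M(1)
x0 = Fraction(3)
Cx = ((b - a)/(b - 1)*x0, Fraction(0))
for y in (Fraction(1), Fraction(2), Fraction(5)):
    assert collinear((x0, y), (a*x0, b*y), Cx)
print("Propositions verified on sample data")
\end{verbatim}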
\begin{example} Figure~\ref{fig:(k-1)m+} illustrates an application of this construction to the Pappus configuration $\mc{P}$ (cf.\ Figure~\ref{fig:pappus}). Removing only one line from the horizontal pencil results in a $(19_3)$ configuration, shown in Figure \ref{fig:(k-1)m+}(a). Removing two or three lines results in a $(20_3)$ or $(21_3)$ configuration, respectively, shown in Figures \ref{fig:(k-1)m+}(b) and \ref{fig:(k-1)m+}(c). (Observe that since the Pappus configuration has 9 points, the maximal total number of lines in independent pencils is 3, since any three disjoint lines in the configuration contain all the points of the configuration.) Taking the three configurations together, we have: $[(19_{3}), (20_{3}), (21_{3})] = \mathrm{AS+}(9,3,3)$. \end{example} \begin{figure}[htbp] \begin{center} \hskip -10pt \subfigure[A $(19_{3})$ configuration] {\includegraphics[width=0.3\textwidth]{19_3.png}} \hskip 6pt \subfigure[A $(20_{3})$ configuration] {\includegraphics[width=0.3\textwidth]{20_3.png}} \hskip 6pt \subfigure[A $(21_{3})$ configuration] {\includegraphics[width=0.3\textwidth]{21_3.png}} \caption{Configurations $(19_{3})$, $(20_{3})$, and $(21_{3})$, constructed by applying the affine switch construction to the realization of the Pappus configuration with a pencil of 3 parallel lines, shown in Figure \ref{fig:pappus}, by deleting one, two, or three lines respectively. (The vertical axis of affinity, denoted by a dashed line, does not belong to the configuration.)} \label{fig:(k-1)m+} \end{center} \end{figure} Since axial affinities play a crucial role in the constructions described above, we recall a basic property. The proof of the following proposition is constructive, hence it provides a simple tool for a basically synthetic approach to these constructions, which is especially useful when using dynamic geometry software to construct these configurations. \begin{prop} \label{prop:determined} An axial affinity $\alpha$ is determined by its axis and the pair of points $(P,P')$, where $P$ is any point not lying on the axis, and $P'$ denotes the image of $P$, i.e.\ $P'=\alpha(P)$. \end{prop} \begin{proof} In what follows, for any point $X$, we denote its image $\alpha(X)$ by $X'$. Let $Q$ be an arbitrary point not lying on the axis and different from $P$. Take the line $PQ$, and assume that it intersects the axis in a point $F$ (see Figure~\ref{fig:AffineConstr}a). Thus $PQ=FP$. Take now the line $F'P'$, i.e., the image of $FP$. Since $F$ is a fixed point, i.e.\ $F'=F$, we have $F'P'=FP'$. This means that $Q'$ lies on $FP'$, i.e.\ $P'Q'=FP'$. To find $Q'$ on $FP'$, we use the basic property of axial affinities that for all points $X$ not lying on the axis, the lines $XX'$ are parallel with each other (we recall that the direction of these lines is called the \emph{direction} of the affinity). Accordingly, a line passing through $Q$ which is parallel with $PP'$ will intersect $FP'$ precisely in the desired point $Q'$. \begin{figure}[h!] \begin{center} \subfigure[] {\includegraphics[width=.5\textwidth]{AffineConstructionA.png}} \hskip 20pt \subfigure[] {\includegraphics[width=.35\textwidth]{AffineConstructionB.png}} \caption{Construction of the image of a point $Q$ under an axial affinity; the axis is the vertical red line, the direction of affinity is given by the blue line.
Here we use an oblique affinity, but the construction given in the proof is the same for any other type of axial affinity.} \label{fig:AffineConstr} \end{center} \end{figure} On the other hand, if $PQ$ is parallel with the axis, then clearly so is $P'Q'$. In this case $Q'$ is obtained as the fourth vertex of the parallelogram determined by $P'$, $P$ and $Q$ (see Figure~\ref{fig:AffineConstr}b). \end{proof} \begin{remark} In using integer parameters $h$ and $j$ above, we followed Gr\"unbaum's original concept~\cite{Gru2009b} (as mentioned explicitly at the beginning of this subsection). However, the theory underlying Propositions~\ref{prop:pencil} and~\ref{prop:series} makes it possible to use continuous parameters as well, so that the procedure becomes in this way much more flexible. In what follows we outline such a more general version, restricted to using only one pencil of lines to be deleted. \end{remark} Start again with a configuration $\mathcal C$, and assume that the pencil $\mathcal P$ is in horizontal position; accordingly, the axis that we use is in vertical position (see e.g.\ Figure~\ref{fig:(k-1)m+}). Choose a line $\ell$ in $\mathcal P$, and a configuration point $P_0$ on $\ell$; then, remove $\ell$. $P_0$ will be the initial point of our construction (e.g., in Figure~\ref{fig:(k-1)m+} the ``north-west'' (black) point of the starting configuration). Choose a point $C_{\ell}$ on the axis such that the line $C_{\ell}P_0$ is not perpendicular to the axis (in our example, this is the red point in Figure~\ref{fig:(k-1)m+}a). Now let $t\in\mathbb R$ be our \emph{continuous parameter}. Take the point \begin{equation} \label{eq:AffCombin} P=tC_{\ell}+(1-t)P_0; \end{equation} thus $P$ is a point on the line $C_{\ell}P_0$, and as $t$ changes, $P$ slides along this line. Moreover, by Proposition~\ref{prop:determined} we see that the pair of points $(P_0,P)$ determines two orthogonal affinities whose axes are perpendicular to each other. In particular, the axes are precisely the coordinate axes. These affinities act simultaneously, i.e.\ $P_0$ is sent to $P$ by their (commuting) product. Using coordinates, such as $P_0(x_0,y_0)$ and $P(x,y)$, we also see that the ratio of these affinities is $y/y_0$ (for that with horizontal axis), respectively $x/x_0$ (for that with vertical axis). (Note that these ratios, using the relation~(\ref{eq:AffCombin}), can also be expressed by the parameter $t$ and by the prescribed coordinates of $P_0$ and $C_{\ell}$. Furthermore, similarly, the matrix~(\ref{matrix}) above can also be parametrized by $t$; we omit the details.) It is easily checked that both Proposition~\ref{prop:pencil} and Proposition~\ref{prop:series} remain valid with this continuous parameter $t$. Hence, for any $P$, we can construct the corresponding affine image of $\mathcal C$ (or its substructures $\hat {\mathcal C}$ with any number $r$ of lines removed), together with the new lines (which are denoted by red in our example of Figure~\ref{fig:(k-1)m+}). In particular, in the case of $k$-configurations, we need to choose altogether $k-2$ points on the line $C_{\ell}P_0$ (note that for $t=0$ we recover the starting copy $\mathcal C$; for $t=1$ the image of $\mathcal C$ collapses to a segment within the $y$-axis; and for a third value, depending on the slope of $C_{\ell}P_0$, it collapses to a segment within the $x$-axis; these cases are thus to be avoided).
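The following small Python sketch (ours; the points $P_0$ and $C_{\ell}$ are ad hoc sample choices) illustrates the continuous parametrization: for a given $t$ it returns the ratios of the two commuting orthogonal affinities determined by the pair $(P_0,P)$, and it exhibits the degenerate parameter values mentioned above.
\begin{verbatim}
def affinity_pair(P0, C, t):
    # P = t*C + (1-t)*P0; returns the ratios (x/x0, y/y0) of the two
    # commuting orthogonal affinities (vertical axis, horizontal axis)
    # sending P0 to P.
    x0, y0 = P0
    x = t*C[0] + (1 - t)*x0
    y = t*C[1] + (1 - t)*y0
    return (x/x0, y/y0)

P0, C = (3.0, 2.0), (0.0, 1.0)    # C on the (vertical) axis, chosen ad hoc
for t in (0.0, 0.25, 0.5, 2.0):
    print(t, affinity_pair(P0, C, t))
# t = 0 gives (1, 1), the identity; t = 1 collapses onto the vertical axis
# (ratio x/x0 = 0); here t = 2 collapses onto the horizontal axis (y = 0).
\end{verbatim}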
\section{Proof of the Main Theorem}\label{sec:mainThms} In this section we prove the main theorem of our paper. For notational convenience, given integers $a < b$, let $[a:b]$ denote the range $\{a, a+1, \ldots, b\}$. Similarly, for an integer function $f(s)$ the range $\{f(a), f(a+1), \ldots, f(b)\}$ will be denoted by $[f(s)]^b_{s=a}$. The crucial step in the proof will be provided by the following Lemma. \begin{lemma}\label{mainLemma} Assume that for some $k \geq 3$, $N_{k-1}$ exists and that $\bar{N}_{k-1}$ is any known upper bound for it. Then $N_k$ exists, and $\bar{N}_k = (k^2-1)\max(\bar{N}_{k-1},k^2-2)$ is an upper bound for it. Moreover, if we have two upper bounds, say $\bar{N}_{k-1} < \tilde{N}_{k-1}$ for $N_{k-1}$, the better one will produce a better upper bound for $N_{k}$. \end{lemma} This Lemma will be proven with the tools from the previous section by applying affine replication and affine switch. More precisely, Lemma \ref{lemma:kp1} and Corollary \ref{lem:affineSwitch} will be used. \begin{proof}[Proof of Lemma \ref{mainLemma}] Let $\bar{N}_{k-1}$ denote any known upper bound for $N_{k-1}$. By definition, the consecutive numbers \begin{equation} \label{eq:sequence-1} a=\bar{N}_{k-1}, a+1, \ldots, a+s, \ldots \end{equation} are all $(k-1)$-realizable; in other words, for each $s$, $s=0,1,\dots,$ there exists a geometric $((a+s)_{k-1})$ configuration (recall the definition of realizability, given in the Introduction). Apply affine replication to these configurations; by Lemma~\ref{lemma:kp1}, the numbers \begin{equation} \label{eq:sequence-2} (k+1)a, (k+1)(a+1),\ldots, (k+1)(a+s), \dots \end{equation} are all $k$-realizable. Note that this is an arithmetic sequence with difference $(k+1)$. Furthermore, observe that for each $X \geq a$, the geometric $k$-configuration realizing the number $(k+1)X$ that was produced by affine replication has $X$ new parallel lines. Hence, we can apply a sequence of affine switch constructions to each of these configurations $(((k+1)X)_{k})$. By Corollary \ref{lem:affineSwitch}, the sequence $\mathrm{AS+}((k+1)X, k, X)$ of configurations is produced. It follows that the numbers in the ranges \begin{multline}\label{eq:sequence-3} [(k-1)(k+1)a+1:(k-1)(k+1)a+a], \\ [(k-1)(k+1)(a+1)+1:(k-1)(k+1)(a+1)+(a+1)], \\ [(k-1)(k+1)(a+2)+1:(k-1)(k+1)(a+2)+(a+2)],\ldots \end{multline} are all $k$-realizable. Observe that from the initial outputs of affine replication, $n = X(k+1)$ is $k$-realizable as long as $X \geq \bar{N}_{k-1}$. Thus, every ``band'' of consecutive configurations produced by affine switches can be extended back one step, so there exists a band of consecutive $k$-configurations \[[(k-1)(k+1)X:(k-1)(k+1)X + X] \] for each initial configuration $(X_{k-1})$. Another way to say this is that we can fill a hole of size 1 between the bands of configurations listed in equation \eqref{eq:sequence-3} using the output of the initial affine replications, listed in equation \eqref{eq:sequence-2}. To determine when we have either adjacent or overlapping bands, then, it suffices to determine when the last element of one band is adjacent to the first element of the next band; that is, when \[(k-1)(k+1)X+X + 1\geq (k-1)(k+1)(X+1).\] It follows easily that this holds if and only if $X \geq k^{2}-2$. Hence, as long as we are guaranteed that a sequence of consecutive configurations $(q_{k-1})$, $((q+1)_{k-1}), \ldots$ exists, it follows that we are guaranteed the existence of consecutive $k$-config\-u\-ra\-tions $(Q_{k}), ((Q+1)_{k}), \ldots,$ where $Q = (k^{2}-1)(k^{2}-2)$.
However, since we do not know whether that consecutive sequence exists, in the (extremely common) case where $\bar{N}_{k-1} > (k^{2}-1)(k^{2}-2)$, the best that we can do is to conclude that \[ N_{k} \leq (k^{2}-1) \max\{ \bar{N}_{k-1}, k^{2} - 2\}.\] \end{proof} This result gives rise to an elementary proof by induction for the main theorem. \begin{proof}[Proof of Theorem \ref{mainTheorem}] Let $s = 2$. The number $N_{s} = N_2 = 3$ exists. This is the basis of induction. Now, let $s = k -1$. By assumption, $N_{k-1}$ exists and some upper bound $\bar{N}_{k-1}$ is known. By Lemma \ref{mainLemma}, $\bar{N}_k = (k^2-1)\max(\bar{N}_{k-1},k^2-2)$ is an upper bound for $N_k$. Therefore $N_k$ exists and the induction step is proven. \end{proof} Recall that we let $N^{R}_{k}$ denote the best known upper bound for $N_{k}$. The same type of result follows if we start with the best known upper bound $N^R_s$ for some $s \geq 2$. However, the specific numbers for upper bounds depend on our starting condition. Table \ref{tab:theorem1bounds} shows the difference if we start with $s = 2,3,4$. The reason we are using only these three values for $s$ follows from the fact that only $N^R_s, 2 \leq s \leq 4$ have been known so far. \begin{table}[htp] \caption{Bounds on $N_{k}$ from iterative applications of Lemma \ref{mainLemma}. Different bounds are produced if the iteration is started with $N^R_{2} = N_{2} = 3, N^R_{3}= N_{3} = 9$ or with $N^R_{4} = 24$. Boldface numbers give best bounds using this method and current knowledge.} \begin{center} \begin{tabular}{c|r|r|r|r} $k$ & $\bar{N}_{k}$ with $N^R_{2} = 3$ &$\bar{N}_{k}$ with $N^R_{3} = 9$ & $\bar{N}_{k}$ with $N^R_{4} = 24$ &$N^R_{k}$ \\[3 pt] \hline $2$ &{\bf 3} & - & - & {\bf 3}\\ $3$ &56& {\bf 9} & - & {\bf 9}\\ $4$ &840& 210 & {\bf 24} & {\bf 24} \\ $5$ &\numprint{20160}&\numprint{5040} & {\bf 576}& {\bf 576}\\ $6$ &\numprint{705600} &\numprint{176400}& {\bf \numprint{20160}}& {\bf \numprint{20160}}\\ $7$ &\numprint{33868800} &\numprint{8467200}& {\bf \numprint{967680}}& {\bf \numprint{967680}}\\ $8$ & \numprint{2133734400}&\numprint{533433600}& {\bf \numprint{60963840}}& {\bf \numprint{60963840}}\\ $9$ &\numprint{170698752000} &\numprint{42674688000}& {\bf \numprint{4877107200}}&{\bf \numprint{4877107200}}\\ $10$ &\numprint{16899176448000} &\numprint{4224794112000}&{\bf \numprint{482833612800}}&{\bf \numprint{482833612800}}\\ \end{tabular} \end{center} \label{tab:theorem1bounds} \end{table}% The rightmost column of Table \ref{tab:theorem1bounds} summarises the information given in the other columns by computing the minimum in each row, and thereby gives the best bounds that are available using previous knowledge and direct applications of Lemma \ref{mainLemma}. If new knowledge about the best current values of $N^R_k$ for small values of $k$ becomes available, we may use similar applications of Lemma \ref{mainLemma} to improve the bounds of the last column. Since the values for $k=2$ and $k=3$ are optimal, the first candidate for improvement is $k = 4$. A natural question is what happens if someone finds a geometric $(23_4)$ configuration. In this case Lemma \ref{mainLemma} would give us for $k = 5$ the bound $(k^2-1)\max(N^{R}_{k-1},k^2-2) = (5^2-1)\max(20,5^2-2) = 24 \times 23 = 552$, an improvement over 576. An alternative feasible attempt to improve the bounds would be to use other methods in the spirit of Gr\"unbaum calculus to improve the current bound 576 for $k=5$. 
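As a quick sanity check, the recursion of Lemma \ref{mainLemma} is easy to iterate mechanically. The following Python sketch (our own bookkeeping aid, not part of the proofs; function and variable names are ours) reproduces the three columns of Table \ref{tab:theorem1bounds} from the starting values $N^R_2=3$, $N^R_3=9$ and $N^R_4=24$:

```python
# Iterate the bound of Lemma mainLemma: N_k <= (k^2 - 1) * max(N_{k-1}, k^2 - 2).
def iterate_bound(start_k, start_value, max_k=10):
    bounds = {start_k: start_value}
    for k in range(start_k + 1, max_k + 1):
        bounds[k] = (k**2 - 1) * max(bounds[k - 1], k**2 - 2)
    return bounds

# Starting data (N^R_2, N^R_3, N^R_4) as in Table 1.
for start_k, start_value in [(2, 3), (3, 9), (4, 24)]:
    print(start_k, iterate_bound(start_k, start_value))
```

Started at $N^R_4 = 24$, for instance, it returns $576$, $20160$, $967680,\ldots$ for $k = 5, 6, 7, \ldots$, matching the boldface column of the table.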
However, there is another approach that can improve the numbers even without introducing new methods. It is presented in the next section. \section{Improving the bounds} Recall that $N^{R}_{3} = N_{3} = 9$, and $N^{R}_{4} = N_{4} = 21$ or $24$, according to whether or not a $(23_{4})$ configuration exists. If we apply the procedure in Lemma \ref{mainLemma} using as input information $N_{3} = N^{R}_{3} = 9$ (that is, beginning with a sequence of $3$-configurations $(9_{3}), (10_{3}), (11_{3}), \ldots$), Lemma \ref{mainLemma} says that \[ N_{k}\leq (k^{2}-1)\max\{N^{R}_{k-1},k^{2} - 2\} \implies N_{4} \leq (15)\max\{9, 14\} = 210. \] However, we know observationally that $N_{4} = 21$ or $24$. Thus, we expect that Theorem \ref{mainTheorem} is likely to give us significant overestimates of the bound on $N_{k}$ for larger $k$. For $k = 5$, the best we can do at this step with these constructions is the bound given by Lemma \ref{mainLemma}, beginning with the consecutive sequence of $4$-configurations $((24_{4}), (25_{4}), (26_{4}), \ldots)$. In this case, Lemma \ref{mainLemma} predicts that $N_{5} \leq (24)\max(24, 23) = 576.$ In a subsequent paper, we will show that this bound can be significantly decreased by incorporating other Gr\"unbaum-calculus-type constructions and several ad hoc geometric constructions for 5-configurations. However, we significantly decrease the bound on $N_{k}$ for $k \geq 6$ by refining the construction sequence given in Lemma \ref{mainLemma}: instead of beginning with $N^{R}_{k-1}$ determined by iterative applications of the sequence in Lemma \ref{mainLemma}, we consider all possible sequences determined by applying a series of affine replications, followed by a final affine switch. First we introduce a function $N(k,t,a,d)$ with positive integer parameters $k,t,a,d$ and $t < k$. Define for $t < k-1$: \[N(k, t, a, d):= (k^{2} - 1)\left(\frac{k!}{(t+1)! }\right) \max\left\{ a, (k^{2}-1)d \right\}, \] and for $t = k-1$: \[N(k, k-1, a, d):= (k^{2} - 1) \max\left\{ a, (k^{2}-1)d - 1 \right\}. \] This value $N(k, t, a, d)$ is precisely the smallest $n$ after which we are guaranteed there exists a sequence of consecutive $k$-configurations produced by starting with an initial sequence of $t$-configurations $a, a+d, \ldots$ and sequentially applying affine replications followed by a final affine switch as described above. The following Lemma gives us a quite general and powerful tool for bound improvements without making any changes in constructions. \begin{lemma}\label{lemma:main2} Let $t \geq 2$ be an integer and let $a, a+d, a+2d, \ldots$ be an arithmetic sequence with integer initial term $a$ and integer difference $d$ such that for each $s = 0,1, \ldots$ geometric configurations $((a+sd)_t)$ exist. Then for any $k > t$ the value $ N(k, t, a, d)$ defined above is an upper bound for $N_k$; i.e., $N(k, t, a, d) \geq N_k$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:main2}] Beginning with an arithmetic sequence of $t$-configurations, we construct a consecutive sequence of $k$-configurations by iteratively applying a sequence of affine replications to go from $t$-configurations to $(k-1)$-configurations; a final affine replication to go from $(k-1)$-configurations to $k$-configurations with a known number of lines in a parallel pencil; and finish by applying affine switch on that final sequence of $k$-configurations to produce bands of consecutive configurations. We then analyze at what point we are guaranteed that the bands either are adjacent or overlap. 
Specifically, starting with a sequence of $t$-realizable numbers $a, a+d, a+2d, \ldots$ we successively apply $k-t$ affine replications to the corresponding sequence of configurations to form sequences of $s$-realizable numbers for $t \leq s \leq k$: \begin{align} a, a+d, a+2d, \ldots &\xrightarrow[(t+1)\text{-cfgs}]{ \mathrm{AR}(\cdot, t+1)} (t+2)a, (t+2)(a+d), (t+2)(a+2d), \ldots \nonumber \\ &\xrightarrow[(t+2)\text{-cfgs}]{ \mathrm{AR}( \cdot, t+2)} (t+3)(t+2)a, (t+3)(t+2)(a+d), (t+3)(t+2)(a+2d), \ldots \nonumber \\ &\vdots \nonumber\\ &\xrightarrow[k\text{-cfgs}]{ \mathrm{AR}( \cdot, k)} \frac{(k+1)!}{(t+1)!}a, \frac{(k+1)!}{(t+1)!}(a+d), \frac{(k+1)!}{(t+1)!}(a+2d), \ldots \label{eq:sequence-4} \end{align} By Lemma \ref{lemma:kp1}, each of the $k$-configurations corresponding to the realizable numbers in equation \eqref{eq:sequence-4} produced from a starting $t$-realizable number $X$ has a pencil of $\frac{k!}{(t+1)!}X$ parallel lines. To those configurations we apply the affine switch operation: \begin{multline} \frac{(k+1)!}{(t+1)!}a, \frac{(k+1)!}{(t+1)!}(a+d), \frac{(k+1)!}{(t+1)!}(a+2d), \ldots \\ \xrightarrow[k\text{-cfgs}]{ \mathrm{AS+}(\cdot, k, \cdot)} \left[(k-1)\frac{(k+1)!}{(t+1)!}a+1: (k-1)\frac{(k+1)!}{(t+1)!}a+\frac{k!}{(t+1)!}a\right], \\ \left[(k-1)\frac{(k+1)!}{(t+1)!}(a+d)+1: (k-1)\frac{(k+1)!}{(t+1)!}(a+d)+\frac{k!}{(t+1)!}(a+d)\right], \ldots \label{eq:finalBands} \end{multline} As in the proof of Theorem \ref{mainTheorem}, observe that the $(n_{k})$ configurations described in \eqref{eq:sequence-4} all have $n$ a multiple of $\frac{(k+1)!}{(t+1)!}$. That is, any $n$ divisible by $\frac{(k+1)!}{(t+1)!}$ is $k$-realizable as long as when $n = \frac{(k+1)!}{(t+1)!}X$, $X$ is a $t$-realizable term of the initial sequence. We thus can extend our band of consecutive realizable configurations back one step, to be of the form \[ \left[(k-1)\frac{(k+1)!}{(t+1)!}X: (k-1)\frac{(k+1)!}{(t+1)!}X+\frac{k!}{(t+1)!}X\right]\] for a starting $t$-realizable number $X$. Successive bands of this form are guaranteed to either exactly meet or to overlap when the end of one band, plus one, is greater than or equal to the beginning of the next, that is, when \begin{align} (k-1)\frac{(k+1)!}{(t+1)!}X+\frac{k!}{(t+1)!}X +1 &\geq (k-1)\frac{(k+1)!}{(t+1)!}(X+d) \implies \nonumber \\ X &\geq (k^{2}-1)d - \frac{(t+1)!}{k!}. \label{stupidInequality} \end{align} When $t = k-1$, $\frac{(t+1)!}{k!} = 1$, while when $t <k-1$, $\frac{(t+1)!}{k!} < 1$; moreover, inequality \eqref{stupidInequality} holds as long as $X$ is greater than the bound on $t$-realizable configurations. \end{proof} We refine and improve the upper bounds of Table \ref{tab:theorem1bounds} with Theorem \ref{thm:main2}. This proof proceeds by showing, given a starting arithmetic sequence of consecutive $t$-configurations, a construction method for producing a sequence of consecutive $k$-configurations. \begin{theorem}\label{thm:main2} Recursively define \[\hat{N}_{k} = \min_{3 \leq t < k}\{ N(k, t, \hat{N}_{t}, 1)\}\] with $\hat{N}_{3} = N_{3} = 9$ and $\hat{N}_{4}= N^{R}_{4}= 24$. Then $\hat{N}_{k}$ is an upper bound for $N_k$. \end{theorem} \begin{proof} Observe that by unwinding definitions, \[\hat{N}_{k} = (k^{2}-1) \min_{3 \leq t \leq k-1} \left\{ \frac{k!}{(t+1)!}\max\left\{\hat{N}_{t}, k^{2}-1\right\}\right\}\] (for $t = k-1$ the inner $k^{2}-1$ should read $k^{2}-2$, which makes no difference for the values computed here). By construction, since for each $\hat{N}_{k}$ we have shown that there exist consecutive $k$-configurations for each $n \geq \hat{N}_{k}$, it follows that $N_{k} \leq \hat{N}_{k}$, and the result follows. 
\end{proof} Applying Theorem \ref{thm:main2} results in the bounds for $N_{k}$ shown in Table \ref{tab:thm2bounds-large}. \begin{table}[htp] \caption{Bounds on $N_{k}$ produced from Theorem \ref{thm:main2}. The values for $N^{R}_{k}$ given in this table agree with the record values listed in Table \ref{tab:theorem1bounds} for all $k \leq 5$ (boldface), and are strictly better for $k \geq 6$.} \begin{center} \begin{tabular}{c | r | l | l} $k$ & $\hat{N}_{k} = N^{R}_{k}$ & formula & initial sequence \\ \hline 4 & {\bf 24} & - & -\\ 5 & ${\bf 576}$ & $(5^{2}-1)^{2}$& $t = 4$\\ 6 & $\numprint{7350}$ & $6(6^{2}-1)^{2}$ & $t= 4$\\ 7 & $\numprint{96768}$ & $7\cdot6 \cdot (7^{2}-1)^{2}$ & $t = 4$ \\ 8 & $\numprint{1333584}$ & $\frac{8!}{5!} (8^{2}-1)^{2}$ & $t = 4$ \\ 9 & $\numprint{19353600}$ & $\frac{9!}{5!} (9^{2}-1)^{2}$ & $t = 4$ \\ 10 & $\numprint{287400960}$ & $\frac{10!}{6!}\cdot \mathbf{ 576 }\cdot (10^{2}-1)$ & $\mathbf{t = 5}$ \\ 11 & $\numprint{3832012800}$ & $\frac{11!}{6!}\cdot 576 \cdot (11^{2}-1)$ & $t = 5$ \\ $\vdots$ & & & \\ 24 & $\approx 2.85 \times 10^{26}$ & $\frac{24!}{6!}\cdot 576 \cdot (24^{2}-1)$ & $t = 5$\\ 25 & $\approx 8.39 \times 10^{27}$ & $\frac{25!}{6!}\cdot \mathbf{(25^{2}-1)^{2}}$ & $t = 5$\\ 26 & $\approx 8.02 \times 10^{30}$ & $\frac{26!}{6!}\cdot (26^{2}-1)^{2}$ & $t = 5$\\ $\vdots$ & & & \\ 32 & $\approx 3.82 \times 10^{38}$ & $\frac{32!}{6!}\cdot (32^{2}-1)^{2}$ & $t = 5$ \\ 33 & $\approx 1.38 \times 10^{40}$ & $\frac{33!}{7!}\cdot \mathbf{7350} \cdot (33^{2}-1)$ & $\mathbf{t = 6}$ \\ $\vdots$ &&\\ 85 & $\approx 2.97 \times 10^{132}$ &$\frac{85!}{7!}\cdot \mathbf{7350} \cdot (85^{2}-1)$& $t = 6$\\ 86 & $\approx 2.63 \times 10^{134}$ &$\frac{86!}{7!} \cdot\mathbf{(86^{2}-1)^{2}}$& $t = 6$\\ $\vdots$ &&& \\ 109& $\approx 4.04 \times 10^{180}$& $\frac{109!}{7!} (109^{2}-1)^{2}$ &$t = 6$\\ 110 & $\approx 4.61 \times 10^{182}$ & $\frac{110!}{8!}\cdot \frac{7!}{5!}\cdot (7^{2}-1)^{2} \cdot (110^{2}-1)$ & $\mathbf{t = 7}$ \end{tabular} \end{center} \label{tab:thm2bounds-large} \end{table}% There are some interesting things to notice about the bounds from Theorem \ref{thm:main2} shown in Table \ref{tab:thm2bounds-large}. First, note that $t = 3$ is never used in determining $\hat{N}_{k}$. Second, for example, the bound $\hat{N}_{10}$ uses an initial sequence of $5$-configurations, rather than starting with $4$-configurations. To understand why, observe that \begin{align*}\hat{N}_{10} &= \min_{3 \leq t \leq 9} \{ N(10, t, \hat{N}_{t}, 1)\}\\ & = 99 \min\biggl\{\frac{10!}{4!} \max\{\hat{N}_{3} = 9, 99\}, \frac{10!}{5!} \max\{\hat{N}_{4} = 24, 99\}, \frac{10!}{6!} \max\{\hat{N}_{5} = 576, 99\}, \\ & \phantom{===}\;\,\, \qquad \frac{10!}{7!} \max\{\hat{N}_{6}=7350, 99\}, \ldots, \frac{10!}{10!} \max\{\hat{N}_{9}, 99\}\biggr\}\\ &= 99 \min\left\{\frac{10!}{4!} 99, \frac{10!}{5!} 99, \frac{10!}{6!} 576, \frac{10!}{7!} \hat{N}_{6}, \ldots, \hat{N}_{9}\right\} \end{align*} Since $ 6 \cdot 99 > 576$ (and the values $\hat{N}_{t}$ for $ 6\leq t \leq 9$ are much larger than either), the minimum of that list is actually $\frac{10!}{6!}576$, and the computation for $\hat{N}_{10}$ starts with the sequence of consecutive $5$-configurations $(576_{5}), (577_{5}), \ldots$ rather than with $(24_{4}), (25_{4}), \ldots$. Sequences with $t = 5$ begin to dominate when $6(k^{2}-1) > 576 = (5^{2}-1)^{2}$; that is, when $k \geq \lceil\sqrt{97} \rceil = 10$. 
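These crossovers, and indeed all of Table \ref{tab:thm2bounds-large}, can be reproduced mechanically. The sketch below (Python; our own aid, with function and variable names of our choosing) implements $N(k,t,a,d)$ and the recursion of Theorem \ref{thm:main2}, and reports which $t$ attains the minimum for each $k$:

```python
import math

def N(k, t, a, d=1):
    # The function N(k, t, a, d) defined before Lemma lemma:main2.
    if t == k - 1:
        return (k**2 - 1) * max(a, (k**2 - 1) * d - 1)
    return (k**2 - 1) * (math.factorial(k) // math.factorial(t + 1)) * max(a, (k**2 - 1) * d)

N_hat = {3: 9, 4: 24}   # starting values of Theorem thm:main2
for k in range(5, 34):
    candidates = {t: N(k, t, N_hat[t]) for t in range(3, k)}
    t_best = min(candidates, key=candidates.get)
    N_hat[k] = candidates[t_best]
    print(k, N_hat[k], 't =', t_best)
```

Running it confirms, for example, $\hat{N}_{6} = 7350$ with $t = 4$, the switch to $t = 5$ at $k = 10$, and the switch to $t = 6$ at $k = 33$.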
Sequences with $t = 6$ begin to dominate when $7(k^{2}-1) > 6(6^{2}-1)^{2} = 7350$, or $k \geq \left\lceil\sqrt{1051}\right\rceil = 33$. Sequences with $t = 7$ will dominate when $8(k^{2}-1) >7\cdot6 \cdot (7^{2}-1)^{2}$, that is $k \geq \lceil\sqrt{12097}\rceil = 110$. However, note that these bounds are absurdly large; $\hat{N}_{110} \approx 4.6 \times 10^{182}$. In addition, observe that since $k = 25$ is the smallest positive integer satisfying $k^{2}-1>576$, the bounds for $\hat{N}_{25}$ use the $25^{2}-1$ choice rather than $\hat{N}_{5}$ in taking the maximum, even though both $\hat{N}_{24}$ and $\hat{N}_{25}$ are starting with the same initial sequence of $5$-configurations, and there is a similar transition again at $k = 86$, when the function is using $6$-configurations to produce the maximum. At this position, since $85^{2} - 1 = 7224$ and $86^{2}-1 = 7395$, $\hat{N}_{85}$ uses $\hat{N}_{6} = 7350$, but $\hat{N}_{86}$ transitions to using $86^{2} - 1$ to compute the maximum. \section{Future work} With better bounds $N^R_{t}$ developed experimentally for small values of $t$, in the same way that $N^R_{4} = 24$ has been determined experimentally, we anticipate significantly better bounds $N^{R}_{k}$, for $k > t$, without changing the methods for obtaining the bounds. One obvious approach is to improve the bookkeeping even further. For instance, in Theorem \ref{thm:main2} we only used arithmetic sequences with $d = 1$ in $N(k,t,a,d)$, ignoring any existing configurations $(m_t)$ with $m < N_t$. In particular, for $t = 4$, we could have used $N(k,4,18,2)$, since $18,20,22,24, \ldots $ form an arithmetic sequence of $4$-realizable numbers. Our experiments indicate that this particular sequence has no impact on improving the bounds. However, by carefully keeping track of the existing $t$-configurations below $N^R_t$, other more productive arithmetic sequences may appear. Another approach is to sharpen the bounds for $N_k$, for general $k$. This can be achieved, for instance, by generalizing some other ``Gr\"unbaum Calculus'' operations, which we plan for a subsequent paper. We also plan to apply several ad hoc constructions for $5$- and $6$-configurations to further sharpen the bounds for $N_{5}$ and $N_{6}$, which will, in turn, lead to significantly better bounds for $N_{k}$ for higher values of $k$. However, based on the work involved in bounding $N_{4}$ and the fact that $N_{4}$ is not currently known (and on how hard it was to show the nonexistence of a $(19_{4})$ configuration), we anticipate that even determining $N_{5}$ exactly is an extremely challenging problem. Finally, very little is known about existence results on \emph{unbalanced} configurations, that is, configurations $(p_{q}, n_{k})$ where $q \neq k$. While some examples and families are known, it would be interesting to know any bounds or general results on the existence of such configurations. \section*{Acknowledgements} G\'abor G\'evay's research is supported by the Hungarian National Research, Development and Innovation Office, OTKA grant No.\ SNN 132625. Toma\v{z} Pisanski's research is supported in part by the Slovenian Research Agency (research program P1-0294 and research projects N1-0032, J1-9187, J1-1690, N1-0140, J1-2481), and in part by H2020 Teaming InnoRenew CoE. \bibliographystyle{plain}
\section{Introduction} \subsection{Notations and definitions} All rings are commutative with 1. We denote by $R^{[n]}=R[x_1,\ldots, x_n]$ the polynomial ring in $n$ variables over a ring $R$. A polynomial map (or polynomial endomorphism) $F$ with coefficients in a ring $R$ is a list of polynomials $(F_1,\ldots, F_n)$ where $F_i\in R^{[n]}$. Such a polynomial map provides an endomorphism of $R^{[n]}$ as well as a map $R^n\longrightarrow R^n$. Since $R$ can be a finite field/ring, we cannot identify these viewpoints. (A polynomial map can induce the identity map $R^n\longrightarrow R^n$ while not being the identity endomorphism.) We define $\operatorname{ME}_n(R)$ as the set of polynomial endomorphisms on $R^{[n]}$. This forms a monoid w.r.t. composition, and the subset of invertible elements in this monoid is denoted by $\operatorname{GA}_n(R)$ and is the group of polynomial automorphisms. We define $\deg(F)=\max(\deg(F_1),\ldots, \deg(F_n))$ for $F\in \operatorname{ME}_n(R)$. The set of affine automorphisms $\mathit{Aff}_n(R)$ is $\{F\in \operatorname{GA}_n(R) ~|~\deg(F)=1\}$. A polynomial map $F\in \operatorname{ME}_n(R)$ is triangular if $F_i\in R[x_i,\ldots, x_n]$. If $F$ is a triangular automorphism and $R$ is a domain, it turns out to be of the form $(r_1x_1+f_1,\ldots, r_nx_n+f_n)$ where $r_i\in R^*$ and $f_i\in R[x_{i+1},\ldots, x_n]$. The set of triangular automorphisms is denoted by $\operatorname{BA}_n(R)$. Both $\operatorname{BA}_n(R)$ and $\mathit{Aff}_n(R)$ turn out to be subgroups of $\operatorname{GA}_n(R)$. We define $\operatorname{TA}_n(R):=<\operatorname{BA}_n(R), \mathit{Aff}_n(R)>$, the tame automorphism group. We define $\operatorname{SA}_n(R)=\{ F\in \operatorname{GA}_n(R) ~|~\det(\operatorname{Jac}(F))=1\}$. Similarly, we define $\operatorname{STA}_n(R)=\operatorname{SA}_n(R)\cap \operatorname{TA}_n(R)$ etc. For each of these sets, we define $\operatorname{ME}^d_n(R)=\{F\in \operatorname{ME}_n(R) ~|~\deg(F)\leq d\}$, $\operatorname{GA}_n^d(R)=\operatorname{GA}_n(R)\cap \operatorname{ME}_n^d(R)$ etc. We use the notation $x^{\alpha}=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ if $\alpha\in \mb{N}^n$. \subsection{The Jacobian Conjecture} The Jacobian Conjecture is a quite notorious conjecture in the field of Affine Algebraic Geometry. One formulation is:\\ {\bf (JC(R,n))}: If $F\in \operatorname{ME}_n(R)$ where $R$ is a domain of characteristic zero, then $\det(\operatorname{Jac}(F))\in R^*$ implies that $F\in \operatorname{GA}_n(R)$. \\ For many details we can refer to the book \cite{E00}. The conjecture is widely open even in the case $n=2$ (and trivial in dimension 1). Proving $JC(R,n)$ for one domain $R$ of characteristic zero (for all $n$) yields $JC(R,n)$ for all domains $R$ of characteristic zero. Naively translating the Jacobian Conjecture into characteristic $p$ yields counterexamples, already in dimension 1: the map $x-x^p$ is not injective but has $\det(\operatorname{Jac}(x-x^p))=1$. Therefore, Adjamagbo defined in \cite{ADA92} a possible version of the Jacobian Conjecture for fields $k$ with characteristic $\operatorname{char}(k)=p$:\\ {\bf (AJC(n,p))}: Let $F=(F_1,\ldots, F_n)$ where $F_i\in k[x_1,\ldots, x_n]$ and $k$ a field of characteristic $p$. Assume that $\det(\operatorname{Jac}(F))\in k^*$ and additionally assume that $p$ does not divide $[ k(x_1,\ldots,x_n): k(F_1,\ldots, F_n)]$. Then $F$ has a polynomial inverse. 
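Before discussing this extra requirement, here is a minimal computational illustration (ours, not from \cite{ADA92}) of the pathology it is designed to exclude:

```python
# Over F_p, the map x |-> x - x^p has derivative 1 - p*x^(p-1) = 1,
# yet by Fermat's little theorem x^p = x for every x in F_p, so the
# induced map F_p -> F_p is identically zero, hence not injective.
p = 5
image = sorted({(x - x**p) % p for x in range(p)})
print(image)   # prints [0]
```

Note also that here $[k(x):k(x-x^p)]=p$, so this example is indeed excluded by the requirement that $p$ not divide the field extension degree.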
\\ The ``$\operatorname{char}(k)$ does not divide $[ k(x_1,\ldots,x_n): k(F_1,\ldots, F_n)]$'' requirement seems to exclude all pathological counterexamples to the Jacobian Conjecture, but adds another difficult requirement to the (deceptively simple looking but) difficult equation $\det(\operatorname{Jac}(F))\in k^*$. Adjamagbo showed that knowing $AJC(n,p)$ for all $p$ implies $JC(k,n)$ for all $k$. We approach the JC in characteristic $p$ from a different perspective: let us write down a generic polynomial automorphism of degree 2, having affine part identity: \[ F=(x+a_1x^2+a_2xy+a_3y^2, y+b_1x^2+b_2xy+b_3y^2)\] Then, in characteristic zero, the equation $1=\det(\operatorname{Jac}(F))$ yields several equations on the coefficients: \[ \begin{array}{rl} 1=&\det(\operatorname{Jac}(F))\\ =&1+\\ &(2a_1+b_2)x+\\ &(a_2+2b_3)y+ \\ &(2a_1b_2-2a_2b_1)x^2+\\ &(4a_1b_3-4a_3b_1)xy+\\ &(2a_2b_3-2a_3b_2)y^2\\ \end{array} \] Then apparently, the equations $2a_1+b_2=0,$ $a_2+2b_3=0$, etc. are exactly the equations one needs to ensure that $F$ is invertible in characteristic zero. However, in characteristic 2 the above equations are not enough to conclude that $F$ is invertible (in fact, some equations completely vanish), as an example $(x+x^2,y)$ shows. Therefore, one needs extra equations in characteristic $p$. In fact, thinking a little deeper, we {\em know} that such equations must exist. (Without going into detail: the extra equations must cut out the closure of the set of automorphisms having determinant of the Jacobian equal to 1 inside $\operatorname{ME}_n^d(k)$, where the closure is taken in a natural topology \cite{Shafarevich1,Shafarevich2, Stampfli}.) We only have to {\em find} them. In this article we claim that we have found them (at least conjecturally). In fact, what we are doing is refining the regular Jacobian Conjecture so that it makes sense in characteristic $p$ also. We make a remark on Adjamagbo's formulation w.r.t. the above considerations: note that if there exists at least one counterexample $F$ to the Jacobian Conjecture in characteristic zero, then $[k(x_1,\ldots,x_n):k(F_1,\ldots,F_n)]=d>1$. It might very well be that $F\mod{p}$ is an interesting map for any prime $p$. But, if $p|d$, then Adjamagbo's formulation excludes this example, while one could argue that a formulation of the JC in characteristic $p$ should not. One could say that in this case $p\nmid [k(x_1,\ldots,x_n):k(F_1,\ldots,F_n)]$ adds {\em too many} equations, or perhaps the {\em wrong} equations. \section{Initial considerations} Let us consider the degree 2 example of the previous section. One of the equations is $2a_1b_2-2a_2b_1$. In characteristic zero, this implies the equation $a_1b_2-a_2b_1$. Looking at it like this, it seems strange to exclude this latter equation in characteristic 2. Also, if we define the ideal \[ I=(2a_1+b_2,\ a_2+2b_3,\ 2a_1b_2-2a_2b_1,\ 4a_1b_3-4a_3b_1,\ 2a_2b_3-2a_3b_2) \] in the ring $\mb{Q}[a_1,a_2,a_3, b_1,b_2,b_3]$, then any invertible polynomial map of degree 2 over $\mb{Q}$ will have coefficients which satisfy {\em every} equation of $I$. Even more, they satisfy every equation appearing in $\operatorname{rad}(I)$. Again, in the same vein as before, we can argue that any equation appearing in $\operatorname{rad}(I)$ should also appear as an equation in characteristic $p$. Hence, in this way we can give some universal equations which should be the equations which work in any characteristic. 
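The generators of $I$ can be extracted mechanically. The following sympy sketch (our own verification aid; the variable names are ours) expands $\det\operatorname{Jac}(F)-1$ for the generic quadratic map and prints the coefficient of each monomial:

```python
from sympy import symbols, Matrix, Poly, expand

x, y = symbols('x y')
a1, a2, a3, b1, b2, b3 = symbols('a1 a2 a3 b1 b2 b3')

F1 = x + a1*x**2 + a2*x*y + a3*y**2
F2 = y + b1*x**2 + b2*x*y + b3*y**2

jac = Matrix([[F1.diff(x), F1.diff(y)],
              [F2.diff(x), F2.diff(y)]])
det_minus_one = expand(jac.det()) - 1

# Each printed coefficient is a generator of the ideal I.
for monom, coeff in Poly(det_minus_one, x, y).terms():
    print(monom, coeff)
```

Its output consists precisely of $2a_1+b_2$, $a_2+2b_3$, $2a_1b_2-2a_2b_1$, $4a_1b_3-4a_3b_1$ and $2a_2b_3-2a_3b_2$; the radical computation, however, still requires a dedicated computer algebra system such as Singular.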
This is essentially the formulation of the Jacobian Conjecture in characteristic $p$ we introduce in the next section, but we have to use more formal language. \section{A new formulation of the Jacobian Conjecture in characteristic $p$} Given $F=(F_1,\ldots,F_n)\in \operatorname{ME}_n(R)$ where $R$ is some ring, we can write $F_i=\sum_{\alpha\in \mb{N}^n} c_{i, \alpha}x^{\alpha}$. We can also form the infinitely generated ring $C_R:=R[c_{i,\alpha} ~|~1\leq i\leq n, \alpha\in \mb{N}^n]$ where the $c_{i,\alpha}$ are variables. One can now make the universal polynomial map of degree $d\in \mb{N}$ by taking the polynomial map in $ \operatorname{ME}_n(C_R)$ which has the variables $c_{i,\alpha}$ as coefficients. Let us say that $C_{R,d}$ is the finitely generated ring generated by the coefficients up to and including degree $d$. (I.e.\ $C_R$ is the union, or direct limit, of the rings $\ldots \subset C_{R,d}\subset C_{R,d+1}\subset \ldots$.) \begin{definition} Let $F[d]\in \operatorname{ME}_n(C_{\mb{Q},d})$ be the universal polynomial endomorphism of degree $d$ having affine part identity. Computing $\det\operatorname{Jac}(F[d])-1=\sum_{\alpha\in \mb{N}^n} E_{\alpha} x^{\alpha}$ yields a polynomial in $x_1,\ldots,x_n$ having coefficients $E_{\alpha}\in C_{\mb{Q},d}$. Define the ideal $I_{\mb{Q}}^d=(E_{\alpha} ~|~\alpha\in \mb{N}^n)$ in $C_{\mb{Q},d}$ generated by the equations found in the formula $\det\operatorname{Jac}(F[d])=1$. We define the ideal $I_{\mb{Q}}$ in $C_{\mb{Q}}$ as the inverse limit of the canonical chain $\ldots\longrightarrow I_{\mb{Q}}^{d+1}\longrightarrow I_{\mb{Q}}^d\longrightarrow \ldots$. It coincides with the equations found in the coefficients of $\det\operatorname{Jac}(F[\infty])=1$, where $F[\infty]$ is the power series with universal coefficients. \end{definition} \begin{definition} We define $J_{\mb{Z}}:=\operatorname{rad}(I_{\mb{Q}})\cap C_{\mb{Z}}$ as the ``ideal of integer Keller equations''. This ideal captures the universal equations described in the previous section. If $R$ is a ring, we define $J_R:=J_{\mb{Z}}\otimes R$ as an ideal in the ring $C_R$. It is the ideal in $C_R$ generated by those same equations (in characteristic zero) or by those equations modulo $p$ (in characteristic $p$). In particular, we define $J_p:=J_{\mb{F}_p}=J_{\mb{Z}} \mod p$ as an ideal in $C_{\mb{F}_p}$. Similarly we define $J_{\mb{Z}}^{d}:=\operatorname{rad}(I_{\mb{Q}}^d)\cap C_{\mb{Z},d},$ $J_R^d:=J_{\mb{Z}}^d\otimes R$ and $J_p^d:=J_{\mb{Z}}^d \mod p$. Let $N_d$ be the number of variables in $F[d]$ (i.e.\ the number of generators of the ring $C_{R,d}$). We say that $v\in R^{N_d}$ satisfies $J_R^d$ if $f(v)=0$ for all $f\in J_R^d$. We say that ``$v\in R^{N_d}$ satisfies $J_R$'' if $v\in R^{N_d}$ satisfies $J_R^d$. We can identify $F\in \operatorname{ME}_n(R)$ with the vector of coefficients $v(F)$ of $F$; in particular, if $F\in \operatorname{ME}_n(R)$ has degree $d$, we say that ``$F$ satisfies $J_R$ ($J_R^d$)'' if $v(F)$ satisfies $J_R$ ($J_R^d$). \end{definition} Throughout, we will write the elements of $J_{R}=J_{\mb{Z}}\otimes R$ as $\sum_i{e_i}h_i$ instead of $\sum_i{e_i}\otimes h_i$, where $e_i\in J_{\mb{Z}}$ and $h_i\in R$ for all $i$ (for simplicity, we will omit the tensor notation in this manuscript). \begin{definition} We say that $F\in \operatorname{ME}_n(R)$ is a {\bf strong Keller map} if $F$ satisfies $J_R$ (or equivalently, $F\in \operatorname{ME}_n(R)$ is a {\bf strong Keller map} if $F$ satisfies $J_R^d$ where $\deg(F)=d$). 
We denote the set of strong Keller maps by $\operatorname{SKE}_n(R)$, and the set of Keller maps (i.e.\ those $F\in\operatorname{ME}_n(R)$ with $\det\operatorname{Jac}(F)=1$) by $\operatorname{KE}_n(R).$ \end{definition} \begin{conjecture}\textbf{Jacobian conjecture over any field (in particular, in positive characteristic)} ({$\bf \mathcal{JC}(k,n)$})\\ Let $k$ be a field (of characteristic $p$) and $F\in \operatorname{ME}_n(k)$ be a strong Keller map. Then $F\in \operatorname{GA}_n(k)$. \end{conjecture} So, an alternative definition of $\mathcal{JC}(k,n)$ is ``$\operatorname{SKE}_n(k)=\operatorname{GA}_n(k)$''. We will use the curly letter notation $\mathcal{JC}$ to represent the Jacobian Conjecture in characteristic $p$. Of course, we still need to show that the above formulation coincides with the regular formulation in case the field is of characteristic zero. However, this will follow directly from lemma \ref{L3.8}. Note that we only defined the conjecture for fields of any characteristic, but with a slight modification one can define it for all domains (of any characteristic). However, we will stick with this formulation in this first encounter. Before we study the validity of this conjecture, we will introduce some facts and concepts we will use afterwards. \section{Basic facts} Some basic facts about the map $F\mod p$ for $F\in\operatorname{ME}_n(\mb{Z})$ are mentioned in the following remark. They are used in various places without mention. \begin{remark} Let $F\in \operatorname{ME}_n(\mb{Z})$. Then \[ (\det\operatorname{Jac}(F))\mod p=\det(\operatorname{Jac}(F)\mod p) = \det\operatorname{Jac}(F \mod p).\] In particular: \begin{itemize} \item $\det\operatorname{Jac}(F)=1\mod p \Longleftrightarrow \det\operatorname{Jac}(F\mod p)=1\mod p$. \item If $F\in\operatorname{ME}_n(\mb{Z})$ such that $F\mod p\in\operatorname{SKE}_n(\mb{F}_p)$, then $\det\operatorname{Jac}(F)=1+pH$ for some $H\in\operatorname{ME}_n(\mb{Z})$. \item $(F\circ G)\mod p = (F\mod p)\circ (G\mod p)$, and $\det\operatorname{Jac}(F\circ G)\mod p=\det\operatorname{Jac} (F\mod p\circ G\mod p).$ \end{itemize} \end{remark} \begin{proof} Writing out the equations $\det(\frac{\partial (F_i\mod p)}{\partial x_j})$ we see that checking the remark essentially comes down to checking that if $c_{\alpha} x^{\alpha}$ is a generic monomial where $\alpha \in \mb{N}^n$ and $c_{\alpha}\in \mb{Z}$, then \[ \frac{\partial c_{\alpha} x^{\alpha}} {\partial x_i} \mod p = \frac{\partial (c_{\alpha} x^{\alpha}\mod p)} {\partial x_i} \] which is true (just check the cases where $p$ divides $c_{\alpha}$ and where $p$ divides $\alpha_i$ separately). \end{proof} The interesting thing is that it is hard to grasp, for some $F\in \operatorname{ME}_n(\mb{Z})$, which conditions imply $F\mod p \in \operatorname{SKE}_n(\mb{F}_p)$; this is stronger than just $\det\operatorname{Jac}(F)=1 \mod p$. The following lemma will be used several times. \begin{lemma}\label{ideals} Let $F\in\operatorname{ME}_n(R),$ where $R$ is reduced (e.g.\ a domain). If $F$ satisfies the ideal $I_{\mb{Q}}$ then it satisfies the ideal $J_{\mb{Z}}.$ \end{lemma} \begin{proof} Consider $Q\in J_{\mb{Z}}.$ Note that $J_{\mb{Z}}=\operatorname{rad}(I_{\mb{Q}})\cap C_{\mb{Z}} =\operatorname{rad}(I_{\mb{Q}}\cap C_{\mb{Z}}),$ thus there exists $m\in\mb{N}$ such that $Q^m\in I_{\mb{Q}}\cap C_{\mb{Z}}\subset C_{\mb{Z}}.$ Since $F$ satisfies $I_{\mb{Q}}$ we get $Q^m(\nu(F))=0$, i.e.\ $Q(\nu(F))^m=0.$ Hence $Q(\nu(F))=0$, as $R$ is reduced. 
\end{proof} \begin{lemma} \label{L3.8} Let $R$ be a ring with $\operatorname{char}(R)=0$. Then $F\in\operatorname{SKE}_n(R)$ if and only if $F\in\operatorname{KE}_n(R).$ \end{lemma} \begin{proof} Let $F\in \operatorname{SKE}_n(R)$; then for all $Q\in J_R$ we have $Q(\nu(F))=0$. Since $f\otimes1\in J_R$ for every $f\in J_{\mb{Z}}$, we get $f(\nu(F))=0$ for all $f\in J_{\mb{Z}}.$ As $I_{\mb{Q}}\cap C_{\mb{Z}}\subseteq J_{\mb{Z}},$ we get $f(\nu(F))=0$ for all $f\in I_{\mb{Q}}\cap C_{\mb{Z}}.$ For any $e\in I_{\mb{Q}}$ we can find $f\in I_{\mb{Q}}\cap C_{\mb{Z}}$ such that $e=\frac{f}{m}$ for some nonzero $m\in\mb{Z};$ thus $e(\nu(F))=0$ for all $e\in I_{\mb{Q}}.$ Hence $F\in\operatorname{KE}_n(R).$ Conversely, suppose that $F$ is a Keller map; then $F$ satisfies $I_{\mb{Q}}.$ Thus by lemma \ref{ideals} we have $e(\nu(F))=0$ for all $e\in J_{\mb{Z}}.$ Now for any $\sum_{i}e_ir_i\in J_R$ we have $\sum_{i}e_ir_i(\nu(F))=\sum_{i}e_i(\nu(F))r_i(\nu(F))=0,$ where $r_i\in R$ and $e_i\in J_{\mb{Z}}$ for all $i.$ Thus $F$ is a strong Keller map. \end{proof} \begin{lemma}\label{rem1} $\operatorname{SKE}_n(k)\subset \operatorname{SKE}_n(\acute{k})$ for any fields $k\subset \acute{k}$ of positive characteristic. \end{lemma} \begin{proof} Let $F\in\operatorname{SKE}_n(k).$ Let $k_0$ be the subfield of $k$ generated by the coefficients of $F$; then $F\in\operatorname{SKE}_n(k_0)$ and so $F$ satisfies the ideal $J_{k_0}.$ Since $J_{k_0}\subset J_{\acute{k}},$ it is easy to see that $q(\nu(F))=0$ for any $q\in {J_{\acute{k}}\setminus J_{k_0}}$ (as $q$ does not involve any coefficient of $F$ by definition of $J_{\acute{k}}$). Thus $F$ satisfies the ideal $J_{\acute{k}}$ and so $F\in \operatorname{SKE}_n(\acute{k}).$ \end{proof} \section{Two surjectivity conjectures} Given $F\in \operatorname{GA}_n(\mb{Z})$ we can define $F\mod p$ for any prime $p$; since $\det\operatorname{Jac}(F)\in\mb{Z}^*$ is never divisible by $p$, this yields an element of $\operatorname{GA}_n(\mb{F}_p)$. This yields the natural map $\pi: \operatorname{SA}_n(\mb{Z})\longrightarrow \operatorname{SA}_n(\mb{F}_p)$. The following fact is not that difficult to prove: \begin{remark} $\pi(\operatorname{STA}_n(\mb{Z}))=\operatorname{STA}_n(\mb{F}_p)$. \end{remark} The reason for this is that (1) any affine or triangular map having determinant Jacobian 1 has a preimage under $\pi$, and (2) any tame automorphism of determinant Jacobian 1 can indeed be written as a composition of affine and triangular automorphisms of determinant Jacobian 1. (See \cite{MR14} lemma 3.4.) Now an obvious question is whether the map $\pi : \operatorname{SA}_n(\mb{Z})\longrightarrow \operatorname{SA}_n(\mb{F}_p)$ is surjective or not; this question is interesting as nonsurjectivity would yield non-tame maps due to the above remark. This is part of the topic of the papers \cite{MR14, M03, MW11}. \begin{definition}\label{def2} Let $R$ be a $\mb{Z}$-algebra and $k$ be a field such that we have a surjective ring homomorphism $R\longrightarrow k$. 
We can extend it naturally from polynomial maps over $R$ to polynomial maps over $k.$ We denote this extended map by $\pi.$ \end{definition} We notice that corresponding to each automorphism $F\in\operatorname{SA}_n(\mb{Z})$ we have $F\mod p\in\operatorname{SA}_n(\mb{F}_p),$ but there may exist automorphisms $f\in\operatorname{SA}_n(\mb{F}_p)$ having no preimage in $\operatorname{SA}_n(\mb{Z}).$ We conjecture the following for a $\mb{Z}$-algebra $R$ and a field $k$: \begin{conjecture} \label{conj1} Let $R$ be a $\mb{Z}$-algebra and $k$ be any field. If we have a surjective ring homomorphism $R\longrightarrow k,$ then we have \begin{enumerate} \item $\pi(\operatorname{SA}_n(R))=\operatorname{SA}_n(k)$. \item $\pi^{-1}(\operatorname{SA}_n(k))\cap\operatorname{KE}_n(R)=\operatorname{SA}_n(R).$ \end{enumerate} \end{conjecture} A similar conjecture is the following (see also lemma \ref{subset1} and corollary \ref{corsub1}): \begin{conjecture} \label{conj2} Let $R$ be a $\mb{Z}$-algebra and $k$ be any field of characteristic $p$. If we have a surjective ring homomorphism $R\longrightarrow k,$ then the map $\pi: \operatorname{KE}_n(R)\longrightarrow \operatorname{ME}_n(k)$ has $\operatorname{SKE}_n(k)$ in its image. \end{conjecture} If the above conjecture is {\em not} true, then it can mean various things: it could mean that $\mathcal{JC}(k,n)$ is not true (or should be reformulated), or that there exist non-tame automorphisms over $k$. Assuming $\mathcal{JC}(k,n)$ to be true, conjecture \ref{conj1} implies conjecture \ref{conj2}; but no other implications can be made, nor does $\mathcal{JC}(k,n)$ alone imply any of the above conjectures.\\ {\bf Justification of the above conjectures:} The above conjectures are not made to ``match exactly what we need in our proofs''. They capture the essence of whether characteristic $p$ is {\em truly} different from characteristic zero. If one or more of these conjectures is wrong, then characteristic $p$ is at its core different from characteristic zero (for example, there might exist $\mb{F}_p$-automorphisms of $\mb{F}_p^{[n]}$ which are of a completely different nature than one can find in characteristic zero), while if both of them are correct, then characteristic $p$ is not too dissimilar from characteristic zero and both are intricately linked. The tendency is to believe the conjectures (hence the name ``conjecture'' and not ``problem'' or ``question''): it would be really surprising if counterexamples existed but were not easily constructible in low degree and dimension (and hence already known), whereas it can be easily imagined that the conjectures are true but hard to prove. For example, due to the fact that we do not even have a (parametrized) list of generators for the automorphism group $\operatorname{GA}_n(k)$ (unlike for $\operatorname{GL}_n(k), \operatorname{TA}_n(k)$), we can understand that conjecture \ref{conj2}, if true, is very hard to prove. \footnote{Note that with a little change, {\em some people} would agree on the same text for the Jacobian Conjecture. } \section{Some computations indicating the correctness of conjecture $\mathcal{JC}(k,n)$} We should check this conjecture for some nontrivial cases, in order to give evidence that it does what it claims. Therefore, in this section we consider polynomial endomorphisms of degree $\leq3$ with coefficients in a field of characteristic $p$, having affine part the identity. We will check if $\mathcal{JC}(k,2)$ is true for these maps for fields $k$ of characteristic $p$. 
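(Before carrying out the computation by hand, we note that it can be reproduced mechanically. The following sympy sketch, our own setup with our variable names matching the map $T$ written out below, extracts the coefficient equations that generate $I_{\mb{Q}}$; the radical computations mentioned below were done in Singular, not in sympy.)

```python
from sympy import symbols, Matrix, Poly, expand

x, y = symbols('x y')
A, B, C, D, E, F, G = symbols('A B C D E F G')
A1, B1, C1, D1, E1, F1, G1 = symbols('A1 B1 C1 D1 E1 F1 G1')

T1 = x + A*x**2 + B*y**2 + C*x*y + D*x**3 + E*y**3 + F*x**2*y + G*x*y**2
T2 = y + A1*x**2 + B1*y**2 + C1*x*y + D1*x**3 + E1*y**3 + F1*x**2*y + G1*x*y**2

det_minus_one = expand(Matrix([[T1.diff(x), T1.diff(y)],
                               [T2.diff(x), T2.diff(y)]]).det()) - 1

# The coefficients of det_minus_one (as a polynomial in x, y) generate I_Q.
for monom, coeff in sorted(Poly(det_minus_one, x, y).terms()):
    print(monom, coeff)
```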
Let us write down such a polynomial map with generic coefficients: \[T=(x,y)+(Ax^2+By^2+Cxy+Dx^3+Ey^3+Fx^2y+Gxy^2,\] \[A_1x^2+B_1y^2+C_1xy+D_1x^3+E_1y^3+F_1x^2y+G_1xy^2).\] Let us take the determinant of the Jacobian and set it equal to 1: \[1=\det(\operatorname{Jac}(T))=1+(C_1+2A)x+(2B_1+C)y\]\[+(F_1+3D+2AC_1-2A_1C)x^2+(2G_1+2F+4AB_1-4A_1B)xy\]\[+(3E_1+G+2CB_1-2BC_1)y^2\]\[+(6AE_1-6A_1E+4B_1F-4BF_1+CG_1-C_1G)xy^2\]\[+(6DB_1-6D_1B+4AG_1-4A_1G+FC_1-F_1C)x^2y+\cdots\] (we omit the coefficients of $x^3$, $y^3$ and of the degree four terms, which are of a similar shape). This gives us generators of the ideal $I_{\mb{Q}}=(C_1+2A, 2B_1+C,\ldots)$ in the ring $\mb{Q}[A,B,\ldots, G_1]$. It is clear that the following equations are in $I_{\mb{Q}}$ also, by doing some elementary manipulations: \[F_1+3D, AC_1-A_1C, G_1+F, AB_1-A_1B, 3E_1+G, CB_1-BC_1, AE_1-A_1E, B_1F-BF_1,\]\[ CG_1-C_1G, DB_1-D_1B, AG_1-A_1G, FC_1-F_1C, DE_1-D_1E,FG_1-F_1G, FA_1-F_1A,\]\[DC_1-D_1C, CE_1-C_1E, B_1G-BG_1, DG_1-GD_1, FE_1-EF_1, DF_1-D_1F\] \begin{eqnarray}\label{equ} C_1+2A,C+2B_1,GE_1-EG_1 \in I_{\mb{Q}}.\end{eqnarray} Moreover it can be checked by any computer algebra package (we used Singular) that \begin{eqnarray}\label{radi} A^3{E_1}^2-B^3{D_1}^2,A^3{E}^2-B^3{D}^2\in \operatorname{rad}(I_{\mb{Q}}), \end{eqnarray} while these equations do not belong to $I_{\mb{Q}}$ itself.\\ As before, we define $J_{\mb{Z}}:=\operatorname{rad}(I_{\mb{Q}})\cap \mb{Z}[A,B,\ldots, G_1]$, and $J_p:=J_{\mb{Z}}\mod p$. It is now possible to use a computer algebra system to show that \ref{equ} and \ref{radi} generate $J_p$, but this can be quite a strain on the computer system, which we can avoid in this case: We will show that (Part 1) assuming these equations forces $T$ to be invertible for any $p$, and (Part 2) if $T$ is assumed to be invertible, then it satisfies the equations \ref{equ} and \ref{radi} (meaning we show that these equations might not generate $J_p$, but the radical of the ideal generated by them does). \\ {\bf Part 1: assuming the equations yields invertibility.}\\ We first assume that $A, A_1, E$ are all nonzero. Then solving the above equations yields \[C_1=-2A,C=-\frac{2A^2}{A_1},B_1=\frac{A^2}{A_1},B=\frac{A^3}{{A_1}^2},\]\[G=-\frac{3A_1E}{A},G_1=-\frac{3{A_1}^2E}{A^2},\]\[D=-\frac{{A_1}^3E}{A^3},D_1=-\frac{{A_1}^4E}{A^4},\]\[F_1=-3D,G=-3E_1,F=-G_1.\] Thus \[T=(x,y)+(Ax^2+\frac{A^3}{{A_1}^2}y^2-\frac{2A^2}{A_1}xy-\frac{{A_1}^3E}{A^3}x^3+Ey^3+\frac{3{A_1}^2E}{A^2}x^2y-\frac{3A_1E}{A}xy^2,\] \[A_1x^2+\frac{A^2}{A_1}y^2-2Axy-\frac{{A_1}^4E}{A^4}x^3+\frac{A_1E}{A}y^3+\frac{3{A_1}^3E}{A^3}x^2y-\frac{3{A_1}^2E}{A^2}xy^2).\] This can be rewritten as \[ T= \left( \begin{matrix} x+A(x-\frac{A}{A_1}y)^2-\frac{{A_1}^3E}{A^3}(x-\frac{A}{A_1}y)^3, \\ y+ A_1(x-\frac{A}{A_1}y)^2-\frac{{A_1}^4E}{A^4}(x-\frac{A}{A_1}y)^3 \end{matrix} \right) \] Regardless of the characteristic, $T$ is a tame map of the form \[ T=(x+\frac{A}{A_1}y, y)\, (x,y+A_1x^2-\frac{E{A_1}^4}{A^4}x^3)\, (x-\frac{A}{A_1}y,y) \] meaning that $T$ is invertible.\\ The cases where one or more of $A,A_1,E$ are zero are easier than the above case (many coefficients are forced to be zero in these cases), and we leave them to the reader. {\bf Part 2: assuming invertibility yields the equations.} Since we have an invertible map in dimension 2, it is tame, and we can use the Jung-van der Kulk theorem. Since the degree is three or less, it is a map of the form $\alpha (x, y+f(x)) \beta $ where $\alpha, \beta $ are affine invertible maps, and $\deg(f)\leq 3$. (There can only be one triangular map involved, as the degree is prime.) We can assume that $\beta=(ax+by+c, y)$ as we can put anything occurring in the second component in $f$. 
Also, we can assume that $f$ is of degree 2 or 3, and that $f(0)=0$, as we can absorb any additive constant into $\alpha$. Adding in the requirement that the affine part of $\alpha (x, y+f(x)) \beta $ must be the identity yields requirements on $\alpha$ given $\beta$ and $f(x)$. Working this out yields a generic map that is actually very similar to the formula of $T$ above; it can be easily checked that it satisfies the equations. \\ {\bf Remark:} It is very hard to check this conjecture for specific degrees, even in $n=2$, as there is no shortcut other than doing hard-core computations. In fact, en passant one is proving the conjecture, and the computations are very similar to proving the Jacobian Conjecture in characteristic zero, which is (we hope the reader agrees) a difficult task\ldots Of course, we also checked the conjecture for many specific examples (which we do not list here, though we specifically mention that we could rule out ``obvious'' examples like $(x+x^p,y)$), though it feels a bit like touching a wall at random spots in the dark, not finding a light button and then shouting ``this wall has no light button''. \section{Implications of the Jacobian Conjecture among various fields} In this section we will see, if $k,\acute{k}$ are two arbitrary fields of characteristic $p$, what the connection is between $\mathcal{JC}(k,n)$ and $\mathcal{JC}(\acute{k},n)$ for all $n\geq1.$ We denote the Jacobian conjecture over all domains having characteristic zero by $JC(n,0)$ and the Jacobian conjecture over all fields with characteristic $p$ by $\mathcal{JC}(n,p).$ In characteristic zero we have the following theorem (theorem 1.1.18 in \cite{E00}). \begin{theorem}\label{char0} Let $R,\acute{R}$ be commutative rings contained in a $\mb{Q}$-algebra. If $JC(R,n)$ is true for all $n\geq1,$ then $JC(\acute{R},n)$ is true for all $n\geq1$. \end{theorem} For the characteristic $p$ equivalent we have to assume part of our conjectures: \begin{theorem}\label{charp} Assume the conjectures \ref{conj1}(2), \ref{conj2} are true. Let $k,\acute{k}$ be two fields contained in an $\mb{F}_p$-algebra. If $\mathcal{JC}(k,n)$ is true for all $n\geq1,$ then $\mathcal{JC}(\acute{k},n)$ is true for all $n\geq 1$. In particular, it is enough to verify $\mathcal{JC}(\mb{F}_p,n)$. \end{theorem} It is very hard to prove theorem \ref{charp} without making any assumption. To mention the main hurdle: suppose $k$ is an infinite field, $k_1\subset k$ a subfield with $k/k_1$ a finite Galois extension, and $a_1,a_2,\dots,a_m$ a $k_1$-basis of $k$; denote by $\alpha:k_1^m\rightarrow k$ the map defined by $\alpha(y_1,\dots,y_m)=y_1a_1+\dots+y_ma_m.$ The obvious extension $(\alpha,\dots,\alpha):(k_1^m)^n\longrightarrow k^n$, which we also denote by $\alpha,$ is clearly bijective. Let $F=(F_1,\dots,F_n):k^n\longrightarrow k^n$ be a polynomial map. Conjugating $F$ with $\alpha$ we get the map $F^{\alpha}:=\alpha^{-1}F\alpha:k_1^{mn}\longrightarrow k_1^{mn}.$ Comparing with the characteristic zero proof of theorem \ref{char0}: there we know that $\det\operatorname{Jac}(F)\in k^*$ if and only if $\det\operatorname{Jac}(F^{\alpha})\in k_1^*$ (equation 1.1.26 in \cite{E00}). In characteristic $p$ we should have a similar statement, that $F$ satisfies $J_{k}$ if and only if $F^{\alpha}$ satisfies $J_{k_1},$ but the proof of this is very difficult. 
This property, that $F$ satisfies $J_{k}$ if and only if $F^{\alpha}$ satisfies $J_{k_1},$ is needed to prove theorem \ref{charp} if we do not assume that conjectures \ref{conj1}(2) and \ref{conj2} are true. The remaining part of this section is devoted towards the proof of theorem \ref{charp}. We begin with some definitions and lemmas. Let $\Omega$ be the algebraic closure of $\mb{F}_p(\{x_i\mid i\in\mb{N}\})$; then $\Omega$ is a field of infinite transcendence degree over $\mb{F}_p.$ \begin{definition}\label{def} Let $R,S$ be commutative rings. Let $\phi:R\rightarrow S$ be a ring homomorphism. If $F\in R[X]^n,$ then $F^{\phi}$ denotes the element of $S[X]^n$ obtained by applying $\phi$ to the coefficients of the $F_i$. \end{definition} We use the notation $X=(x_1,x_2,\ldots,x_n).$ The following proposition is taken from \cite{E00} (proposition 1.1.7); here $\eta$ denotes the nilradical of $R$. \begin{proposition}(\textbf{Invertibility under base change})\label{invbc} Let $\phi:R\rightarrow S$ be a ring homomorphism with $\ker\phi\subset\eta.$ Let $F\in R[X]^n$ with $\det JF(0)\in R^*.$ Then $F$ is invertible if and only if $F^\phi$ is invertible over $S.$ \end{proposition} \begin{lemma} (\textbf{Embedding lemma})\label{embd} Let $\mb{F}_p\subset \mb{F}_p(a_1,a_2,\dots,a_n)$ be a finitely generated field extension. Then there exists an isomorphism $\phi:\mb{F}_p(a_1,a_2,\dots,a_n)\simeq k\subset \Omega,$ where $k$ is a subfield of $\Omega.$ \end{lemma} To prove this lemma we will use the following lemma, which can be found in any standard textbook on algebra (theorem 2.8 on page 233 in \cite{L02}). \begin{lemma}\label{embd1} Let $K/k$ be an algebraic field extension and let $\phi: k \to C$ be a ring homomorphism where $C$ is an algebraically closed field. Then there exists a ring homomorphism $\sigma : K \to C$ which extends $\phi.$ \end{lemma} \begin{proof} (of embedding lemma) Choose a transcendence basis $t_1,\dots,t_r$ of $\mb{F}_p(a_1,a_2,\dots,a_n)$ over $\mb{F}_p$. Since $\Omega$ has infinite transcendence degree over $\mb{F}_p$, there is an embedding $\mb{F}_p(t_1,\dots,t_r)\to\Omega$ sending the $t_i$ to algebraically independent elements of $\Omega$. As $\mb{F}_p(a_1,a_2,\dots,a_n)$ is an algebraic extension of $\mb{F}_p(t_1,\dots,t_r)$ and $\Omega$ is algebraically closed, this embedding extends to $\mb{F}_p(a_1,a_2,\dots,a_n)$ by lemma \ref{embd1}. \end{proof} We can now use this to show that proving the $\mathcal{JC}$ for $\Omega$ is universal, in the sense that it proves the Jacobian Conjecture for all fields of the same characteristic. \begin{proposition} Let $n\geq1.$ If $\mathcal{JC}(\Omega,n)$ is true then $\mathcal{JC}(k,n)$ is true for any field $k$ of characteristic $p.$ \end{proposition} \begin{proof} Let $F\in k[X]^n$ satisfy $J_{\mb{Z}}\otimes k.$ Let $k_0$ be the subfield of $k$ generated over $\mb{F}_p$ by the coefficients of $F.$ Then $F$ satisfies $J_{\mb{Z}}\otimes k_0.$ By lemma \ref{embd} we get an embedding $\phi:k_0\to \Omega.$ Since $F$ satisfies $J_{\mb{Z}}\otimes k_0$ we get that $F^{\phi}$ satisfies $J_{\mb{Z}}\otimes \phi(k_0)$, and hence by lemma \ref{rem1} $F^{\phi}$ satisfies $J_{\mb{Z}}\otimes \Omega.$ Hence $F^{\phi}$ is invertible over $\Omega$, since we assume that $\mathcal{JC}(\Omega,n)$ is true. So by proposition \ref{invbc} $F$ is invertible over $k_0$ and hence over $k.$ \end{proof} \begin{corollary}\label{countable} (of lemma \ref{rem1})\\ Let $n\geq1$ and $k_0\subset k$ be fields of characteristic $p.$ If $\mathcal{JC}(k,n)$ is true then $\mathcal{JC}(k_0,n)$ is true for any subfield $k_0$ of $k$. 
\end{corollary} \begin{proof} Let $F\in\operatorname{SKE}_n(k_0)$; then by lemma \ref{rem1} we have $F\in\operatorname{SKE}_n(k).$ Since we assume that $\mathcal{JC}(k,n)$ is true, $F$ is invertible over $k.$ Hence $F$ is invertible over $k_0$ by proposition \ref{invbc}. \end{proof} Let $k$ be any countable field of characteristic $p.$ We can write $k=\{a_1,a_2,\dots\}$ where $a_i\neq a_j$ for all $i\neq j$; that is, we fix an enumeration of $k.$ Corresponding to each element $a_i$ in $k$ consider the indeterminate $x_i.$ Define a polynomial ring over $\mb{Z}$ by $\Lambda_k:=\mb{Z}[x_1,x_2,...].$ Define a map $\tau:\Lambda_k\rightarrow k$ by $x_i\mapsto a_i$ and $m\mapsto m\mod p$ for any $m\in\mb{Z}.$ Then $\tau$ is clearly a well-defined surjective ring homomorphism. Thus we have the following definition. \begin{definition}\label{def1} For each countable field $k$ of characteristic $p$, define a polynomial ring $\Lambda_k$ over $\mb{Z}$ such that $\tau:\Lambda_k\longrightarrow k$ is a surjective ring homomorphism, as above. Notice that we can naturally extend $\tau$ to a map from polynomial maps over $\Lambda_k$ to polynomial maps over $k$. We denote this extended map by $\pi$ as in definition \ref{def2}. \end{definition} Thus we have the following lemma. \begin{lemma}\label{subset1} Let $k$ be a countable field of characteristic $p.$ We have $\pi(\operatorname{KE}_n(\Lambda_k))\subseteq \operatorname{SKE}_n(k)$ for every $n\geq1.$ \end{lemma} \begin{proof} Let $F\in\operatorname{KE}_n(\Lambda_k)$; then $\det \operatorname{Jac}(F)=1$ and so $F$ satisfies $I_{\mb{Q}}$. Let $q\in J_k:=\bar{J}_{\mb{Z}}\otimes k$ (here $\bar{J}_{\mb{Z}}:=J_{\mb{Z}}\mod p=J_p$, which gives the same ideal $J_k$ as before, since $k$ has characteristic $p$); then $q=\sum_{i}\tilde{e_i}h_i$ for $\tilde{e_i}\in \bar{J}_{\mb{Z}}$ and $h_i\in k$ for all $i$ (here $\tilde{e}_ih_i=\tilde{e}_i\otimes h_i,$ but we omit the tensor notation). Since $h_i\in k$ there exist $H_i\in \Lambda_k$ such that $\tau(H_i)=h_i.$ We can define a surjective homomorphism $J_{\Lambda_k}:=J_{\mb{Z}}\otimes {\Lambda_k}\longrightarrow \bar{J}_{\mb{Z}}\otimes k\text{ by }a\otimes b\mapsto \tau(a)\otimes \tau(b)$ where $a\in J_{\mb{Z}}$ and $b\in {\Lambda_k}.$ Thus there exists $Q\in J_{\Lambda_k},$ defined by $Q=\sum_{i}{e_i}H_i,$ such that $q=\sum_{i}\tau(e_i) \tau(H_i)=\sum_{i}\tilde{e_i} h_i$, where $e_i\in J_{\mb{Z}}$ is such that $\tau(e_i)=\tilde{e_i}$ for all $i$. By lemma \ref{ideals} we have $e_i(\nu(F))=0$ for all $i$ (since $F$ satisfies $I_{\mb{Q}}$). If we identify $x_i$ with $a_i$ as in the definition of $\tau,$ then $\tilde{e_i}(\nu(\pi(F)))=e_i(\nu(F))\mod p=0\mod p$ for all $i.$ Thus $q(\nu(\pi(F)))=\sum_{i}\tilde{e_i}(\nu(\pi(F)))h_i(\nu(\pi(F)))=0\mod p.$ This shows that $\pi(F)$ satisfies $J_{k}.$ Hence $\pi(F)\in \operatorname{SKE}_n(k)$ which proves the lemma. \end{proof} Of course, the above lemma slightly reformulates conjecture \ref{conj2}: \begin{corollary}\label{corsub1} Assume conjecture \ref{conj2} is true and let $k$ be a countable field of characteristic $p.$ Then $\pi(\operatorname{KE}_n(\Lambda_k))=\operatorname{SKE}_n(k).$ \end{corollary} We are now ready to link $JC(n,0)$ to $\mathcal{JC}(n,p)$. \begin{proposition}\label{lemma1}\label{lemma2}~\\ (1) Assume conjecture \ref{conj2} is true. Then \[ JC(n,0) \ \forall n\in \mb{N}^* \Longrightarrow \mathcal{JC}(n,p)\ \forall n\in \mb{N}^*.\] (2) Assume the conjectures \ref{conj1}(2), \ref{conj2} are true. 
Then \[ JC(n,0) \ \forall n\in \mb{N}^* \Longleftrightarrow \mathcal{JC}(n,p)\ \forall n\in \mb{N}^*.\] In fact, it is enough to prove or disprove $JC(\mb{Z},n)$ for all $n$ to prove or disprove $\mathcal{JC}(k,n)$ for all $n$ and for any field $k$. \end{proposition} \begin{proof} (1) Let $K$ be an arbitrary field of characteristic $p$ and $f\in \operatorname{SKE}_n(K).$ Let $k$ be the subfield of $K$ generated over $\mb{F}_p$ by the coefficients of $f.$ Since $k$ is at most countable, we have a surjective ring homomorphism $\tau:{\Lambda_k}\rightarrow k$ (definition \ref{def1}). By corollary \ref{corsub1} there exists $F\in\operatorname{KE}_n({\Lambda_k})$ such that $\pi(F)=f$. Thus $F$ is invertible since we assume that $JC(n,0)$ is true, so there exists $G\in\operatorname{ME}_n(\Lambda_k)$ such that $F\circ G=I.$ Applying $\pi$ we have $\pi(F)\circ \pi(G)=\pi(I)=I\mod p.$ Thus $\pi(G)$ is an inverse of $f=\pi(F).$ This shows that $f$ is invertible over $k$ and hence over $K.$\\ (2) Let $K$ be an arbitrary field of characteristic $p$ and consider a countable subfield $k\subseteq K$ (if $K$ is itself countable then take $k=K$). By corollary \ref{corsub1} we have $\pi(\operatorname{KE}_n({\Lambda_k}))=\operatorname{SKE}_n(k)$ (where $\Lambda_k$ is defined in \ref{def1}). Let $F\in\operatorname{KE}_n({\Lambda_k})$; then $\pi(F)$ satisfies $J_{k}.$ Suppose $\mathcal{JC}(K,n)$ is true; then $\mathcal{JC}(k,n)$ is true by corollary \ref{countable}. Thus $\pi(F)\in \operatorname{SA}_n(k)$ and so $F\in\pi^{-1}(\operatorname{SA}_n(k)).$ By conjecture \ref{conj1}(2) we have $F\in\operatorname{SA}_n({\Lambda_k}).$ By theorem \ref{char0} we have that $JC(n,0)$ is true. \end{proof} \begin{proof}(of theorem \ref{charp})\\ This is a direct consequence of proposition \ref{lemma2}. \end{proof} \section{Some results related to $\mathcal{JC}(k,n)$ } In this section we present some basic results related to our formulation of the Jacobian conjecture in characteristic $p$. \subsection{Invertible polynomial maps and $\mathcal{JC}(k,n)$} In this subsection we will discuss a natural question which can come to mind when studying the above. If the characteristic of $k$ is zero, then we know that if $F\in\operatorname{SA}_n(k)$ then $F$ satisfies the Keller condition $\det\operatorname{Jac}(F)=1$ (the only condition in the Jacobian Conjecture $JC(k,n)$). This is due to the fact that the determinant of the Jacobian has the property $\det\operatorname{Jac}(G\circ F)=\det\operatorname{Jac}(F)\cdot(\det\operatorname{Jac}(G)\circ F)$. If the characteristic of $k$ is $p$, it is not easy to prove that if $F\in\operatorname{SA}_n(k)$ then $F$ satisfies $J_k$ (the universal equations). Nevertheless, assuming conjectures \ref{conj1}(1) and \ref{conj2} we can prove that $F\in\operatorname{SA}_n(k)$ implies $F\in\operatorname{SKE}_n(k).$ \begin{proposition} Assume conjectures \ref{conj1}(1) and \ref{conj2} are true and let $k$ be a field of characteristic $p.$ If $f\in\operatorname{SA}_n(k)$ then $f\in\operatorname{SKE}_n(k).$ \end{proposition} \begin{proof} Let $f\in\operatorname{SA}_n(k).$ Let $k_0\subset k$ be the subfield generated over $\mb{F}_p$ by the coefficients of $f.$ By conjecture \ref{conj1}(1) there exists some $F\in\operatorname{SA}_n(\Lambda_{k_0})$ such that $\pi(F)=f$, and thus $F\in\operatorname{KE}_n(\Lambda_{k_0}).$ Assuming conjecture \ref{conj2} we have $\pi(\operatorname{KE}_n(\Lambda_{k_0}))=\operatorname{SKE}_n(k_0)$ by corollary \ref{corsub1}. 
Thus $f\in \operatorname{SKE}_n(k_0)$ and hence $f\in\operatorname{SKE}_n(k)$ by lemma \ref{rem1}. \end{proof} \subsection{Closure property of $\operatorname{SKE}_n(k)$} The set $\operatorname{KE}_n(R)$ is closed under composition for any ring $R$, and also for $R=k$ a field of characteristic $p$, even though it does not only consist of automorphisms. One would expect that $\operatorname{SKE}_n(\mb{F}_p)$ is also closed under composition. However, trying to prove this turns out to be an incredibly difficult task: if $F\in \operatorname{SKE}_n(\mb{F}_p)$ then the coefficients of $F$ satisfy certain equations that can be found in $J_p$. If we compose two such maps $F,G\in \operatorname{SKE}_n(\mb{F}_p)$, then the coefficients of the resulting map $F\circ G$ (denoted $v(F\circ G)$) are polynomials in the coefficients of $F$ and $G$, i.e. $v(F\circ G)=P(v(F), v(G))$ for some polynomial map $P$. To check if $F\circ G$ is in $\operatorname{SKE}_n(\mb{F}_p)$ we need to see if $v(F\circ G)$ satisfies (the equations in) $J_p$; however, this turned out to be extremely difficult. Comparing with characteristic zero: there we know a priori, due to the ``magical'' equation $\det\operatorname{Jac}(F\circ G) = \det\operatorname{Jac}(G)\cdot (\det\operatorname{Jac}(F)\circ G)$, that $\operatorname{KE}_n(\mb{Z})$ is closed under composition. As a corollary, it gives that ``$v(F)$ satisfies $J_{\mb{Z}}$ and $v(G)$ satisfies $J_{\mb{Z}}$'' implies ``$v(F\circ G)$ satisfies $J_{\mb{Z}}$'', but exactly {\em how} is very complicated. Nevertheless, making an assumption we can prove that $\operatorname{SKE}_n(\mb{F}_p)$ is closed under composition. \begin{proposition} Assume conjecture \ref{conj2} is true. Then $\operatorname{SKE}_n(k)$ is closed under composition, where $k$ is any field of characteristic $p$. \end{proposition} \begin{proof} Let $f,g\in\operatorname{SKE}_n(k).$ Let $k_1$ be the subfield of $k$ generated over $\mb{F}_p$ by the coefficients of $f$ and $g$. The field $k_1$ is countable; thus by corollary \ref{corsub1} there exist $F,G\in\operatorname{KE}_n(\Lambda_{k_1})$ such that $\pi(F)=f$ and $\pi(G)=g$. Now $F\circ G\in\operatorname{KE}_n(\Lambda_{k_1})$ as $\operatorname{KE}_n(\Lambda_{k_1})$ is closed under composition. Thus by corollary \ref{corsub1} $f\circ g=\pi(F)\circ \pi(G)=\pi(F\circ G)\in\operatorname{SKE}_n(k_1).$ Hence $f\circ g\in\operatorname{SKE}_n(k)$ (lemma \ref{rem1}). \end{proof} \subsection{Connections between $\mathcal{JC}(\mb{F}_p,n)$ and $JC(\mb{Z},n)$.} In this subsection we will see how we can move back and forth between $\mathcal{JC}(\mb{F}_p,n)$ and $JC(\mb{Z},n).$ We quote theorem 10.3.13 from \cite{E00}. We will need this theorem to build the connection between $\mathcal{JC}(\mb{F}_p,n)$ and $JC(\mb{Z},n)$. \begin{theorem}\label{10.3.13} Let $F\in\mb{Z}[x_1,x_2,\dots,x_n]^n.$ If $F\mod p:{\mb{F}_p}^n\rightarrow{\mb{F}_p}^n$ is injective for all but finitely many primes $p$ and $\det\operatorname{Jac}(F)\in\mb{Z}\setminus\{0\},$ then $F$ is invertible over $\mb{Z}.$ \end{theorem} \begin{lemma} If $\mathcal{JC}(\mb{F}_p,n)$ is true for all but finitely many primes $p$, then $JC(\mb{Z},n)$ is true. \end{lemma} This is a slight variation on proposition \ref{lemma1} but without any requirements. \begin{proof} Let $F\in\operatorname{ME}_n(\mb{Z})$ be such that $\det(\operatorname{Jac}(F))=1.$ Then by lemma \ref{subset1}, $F\mod p$ satisfies $J_p.$ Thus $F\mod p$ is invertible (in particular injective) for almost all $p$ by the given assumptions. By theorem \ref{10.3.13} we conclude that $F$ is invertible. 
\end{proof} For the converse of this lemma we need to assume conjecture \ref{conj2} to be true. This in turn resembles proposition \ref{lemma2}. \begin{lemma} If conjecture \ref{conj2} and $JC(\mb{Z},n)$ are true, then $\mathcal{JC}(\mb{F}_p,n)$ is true. \end{lemma} \begin{proof} Let $f\in\operatorname{ME}_n(\mb{F}_p)$ be such that $f$ satisfies $J_p.$ By corollary \ref{corsub1} there exists $F\in\operatorname{KE}_n(\mb{Z})$ such that $F\mod p=f$. By assumption, $F$ is invertible, so there exists $G\in\operatorname{ME}_n(\mb{Z})$ such that $F\circ G=I.$ Thus $(F\mod p)\circ (G\mod p)=I\mod p$ and hence $g:=G\mod p$ is the inverse of $f$. \end{proof} \subsection{Boundedness} In this subsection we explore what happens if we assume that the degree, or the degree and the coefficients, of a polynomial map are small with respect to $p$. In some sense, the results say that if $p$ is ``large'' with respect to some constant depending on $d$ and $n$, then the situation is exactly the same as in characteristic zero. We fix $n$ in this section, but note that the constant $N_d$ below depends also on $n$. Let $\operatorname{ME}_n{({\mb{F}}_p)}^d$ be the set of polynomial endomorphisms of degree at most $d.$ Similarly we can define $\operatorname{KE}_n(\mb{F}_p)^d,$ $\operatorname{SKE}_n(\mb{F}_p)^d$ etc. \begin{lemma}\label{local} Let $F\in \operatorname{ME}_n{({\mb{F}}_p)}^d$ and $I_{\mb{Q}}^d=(E_1,\dots,E_m)$; then there exists a positive integer $N_{d}$ such that for $p>N_d$ we have $J_{\mb{Z}_{(p)}}^d=\operatorname{rad}(E_1,\dots,E_m).$ \end{lemma} \begin{proof} Consider the ideals $I_{\mb{Q}}^{d}=(E_1,\dots,E_m)$ and $I_{\mb{Q}}^d\cap C_{\mb{Z}_{(p)}}=(E_1,\dots,E_m,Q_1,\dots,Q_r),$ where $Q_i=\frac{P_i(E_1,\dots,E_m)}{n_i}$ and the $P_i(X)$ are polynomials with integer coefficients. Let $N_{d}=\operatorname{lcm}(n_1,n_2,\dots,n_r).$ Then for $p>N_d$ we have $I_{\mb{Q}}^d\cap C_{R}=(E_1,\dots,E_m)$ where $R:=\mb{Z}[\frac{1}{N_{d}}].$ Hence $J_{\mb{Z}_{(p)}}^d:=\operatorname{rad}(I_{\mb{Q}}^{d}\cap C_{R})=\operatorname{rad}(E_1,\dots,E_m).$ \end{proof} \begin{corollary}\label{pkeller} Let $F\in \operatorname{ME}_n{({\mb{F}}_p)}^d$ and $I_{\mb{Q}}^d=(E_1,\dots,E_m)$; then there exists a positive integer $N_{d}$ such that for $p>N_d$ we have $J_{p}^d=\operatorname{rad}(E_1\mod p,\dots,E_m\mod p).$ \end{corollary} \begin{proof} By definition \begin{align*} J_{p}^d&=J_{\mb{Z}}^d\mod p=J_{\mb{Z}}^d\otimes_{\mb{Z}}\mb{F}_{p}\\ &=J_{\mb{Z}}^d\otimes_{\mb{Z}}(\mb{Z}_{(p)}\otimes_{\mb{Z}_{(p)}}\mb{F}_{p})\\ &=(J_{\mb{Z}}^d\otimes_{\mb{Z}}\mb{Z}_{(p)})\otimes_{\mb{Z}_{(p)}}\mb{F}_{p}\\ &=J_{\mb{Z}_{(p)}}^d\otimes_{\mb{Z}_{(p)}}\mb{F}_{p}\\ &=J_{\mb{Z}_{(p)}}^d\mod p. \end{align*} By lemma \ref{local} we get $J_{p}^d=J_{\mb{Z}_{(p)}}^d\mod p=\operatorname{rad}(E_1\mod p,\dots,E_m\mod p).$ \end{proof} \begin{corollary}\label{pkeller1} There exists a positive integer $N_{d}$ such that $\operatorname{KE}_n(\mb{Z})^{d}\mod p\subset\operatorname{SKE}_n(\mb{F}_p)^{d}$ for $p>N_{d}.$ \end{corollary} \begin{proof} Direct consequence of corollary \ref{pkeller}. \end{proof} The following lemma is intuitively clear: if you have a polynomial map whose coefficients are (in $\mb{Z}$) small, then knowing that the map modulo $p$ is a (special) Keller map yields that it was a Keller map to start with. \begin{lemma}\label{pkC} Let $f\in\operatorname{SKE}_n(\mb{F}_p)^d$ have coefficients bounded by some constant $C$ (meaning here that for each coefficient a representative in $\mb{Z}$ can be picked in the interval $[-C,C]$).
If $p$ is large enough with respect to $d$ and $C$, then picking $F\in \operatorname{ME}_n(\mb{Z})^d$ such that $f=F\mod p$ and the coefficients of $F$ are in the interval $[-C,C]$, we have $F\in \operatorname{KE}_n(\mb{Z})^d.$ \end{lemma} \begin{proof} Consider the ideals $I_{\mb{Q}}^d=(E_1,\dots,E_m)$ and $I_{\mb{Q}}^d\cap C_{\mb{Z},d}=(Q_1,\dots,Q_r)$ such that $Q_i=\frac{P_i(E_1,\dots,E_m)}{n_i},$ where the $P_i(X)$ are polynomials with integer coefficients for all $i$. Let $N_{d}=\operatorname{lcm}(n_1,n_2,\dots,n_r)$; then for $p>N_d$ we have $I_{\mb{Q}}^d\cap C_{R,d}=(E_1,\dots,E_m)$ where $R=\mb{Z}[\frac{1}{N_{d}}].$ Let $f\in \operatorname{SKE}_n(\mb{F}_p)^d$; then $s(\nu(f))=0\mod p$ for all $s\in J_p^d.$ Consider $S\in J_{R}^d$ such that $S\mod p=s$ and $F\in \operatorname{ME}_n(\mb{Z})$ such that $F\mod p=f$; then $S(\nu(F))=0\mod p$ for all $S\in J_{R}^d.$ Since $I_{\mb{Q}}^d\cap C_{R,d}\subset J_{R}^d$, for $p>N_d$ we have $E_i(\nu(F))=0\mod p$ for all $i.$ Define $N_i:=\max\{|E_i(\eta)|:\eta\in[-C,C]^l\},$ where $l$ is the number of coefficients of the generic polynomial map $F$, and $N_d(C):=\max\{N_d,N_1,N_2,\dots,N_m\};$ then for $p>N_d(C)$ we have $|E_i(\nu(F))|<p$ for all $1\leq i\leq m.$ Thus $E_i(\nu(F))=0$ for all $1\leq i\leq m$ for $p>N_d(C).$ Hence $F\in \operatorname{KE}_n(\mb{Z})^d$ for $p>N_d(C).$ \end{proof} Under some {\em very stringent conditions} we can now show closedness under composition of some elements in $\operatorname{SKE}_n(\mb{F}_p)$. Let $\operatorname{ME}_n{({\mb{F}}_p)}^{d,C}$ be the set of polynomial endomorphisms of degree at most $d$ with coefficients bounded by $C$ (where, as above, a coefficient is bounded by $C$ if it has a representative in $\mb{Z}$ lying in $[-C,C]$). Similarly we can define $\operatorname{KE}_n(\mb{F}_p)^{d,C},$ $\operatorname{SKE}_n(\mb{F}_p)^{d,C}$ etc. \begin{corollary}\label{A} There exists a positive integer $N_{d^2}(C)$ such that if $f,g\in\operatorname{SKE}_n(\mb{F}_p)^{d,C}$ with $p>N_{d^2}(C)$ then $f\circ g\in\operatorname{SKE}_n(\mb{F}_p)^{d^2,C}.$ \end{corollary} \begin{proof} Let $f,g\in\operatorname{SKE}_n(\mb{F}_p)^{d,C}$ and pick $F,G\in\operatorname{ME}_n(\mb{Z})$ with coefficients in $[-C,C]$ such that $F\mod p=f$ and $G\mod p=g.$ By lemma \ref{pkC}, for $p>N_{d^2}(C)$ we have $F,G\in\operatorname{KE}_n(\mb{Z})^{d,C}.$ Since $\operatorname{KE}_n(\mb{Z})$ is closed under composition, $F\circ G\in\operatorname{KE}_n(\mb{Z})^{d^2,C}.$ Hence by corollary \ref{pkeller1} for $p>N_{d^2}(C)$ we have $f\circ g\in\operatorname{SKE}_n(\mb{F}_p)^{d^2,C}.$ \end{proof} The generic case eludes us: \begin{conjecture} Let $k$ be a field of characteristic $p$. Then $\operatorname{SKE}_n(k)$ is closed under composition. \end{conjecture}
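To see concretely what is being asked, consider the simplest illustrative case $n=1$ with generic quadratic maps; the symbols $a_1,a_2,b_1,b_2$ below are formal coefficients, not solutions of the equations in $J_p$: \[ F(x)=a_1x+a_2x^2,\qquad G(x)=b_1x+b_2x^2, \] \[ (F\circ G)(x)=a_1b_1\,x+(a_1b_2+a_2b_1^2)\,x^2+2a_2b_1b_2\,x^3+a_2b_2^2\,x^4. \] Here $v(F\circ G)=P(v(F),v(G))$ with $P=(a_1b_1,\ a_1b_2+a_2b_1^2,\ 2a_2b_1b_2,\ a_2b_2^2)$, and the conjecture asks that whenever $v(F)$ and $v(G)$ satisfy the equations in $J_p$, this polynomial image does as well; already in this toy case there is no analogue of the determinant identity available in characteristic zero.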
\section{Introduction} \IEEEPARstart{G}{estures} play a crucial role in human communication and are used together with speech to convey meaning. Gestures can be any form of visual action including hand motions, pose changes and facial expressions. Given the vital role of gestures in human communication, automated gesture recognition has been explored in a vast number of application areas including human-computer interaction, robotics, sign language recognition, gaming, and virtual reality control; and due to these diverse applications, automated gesture recognition has received much attention within computer vision research. Gestures, like speech, are continuous in nature, with the next gesture directly related to those that have occurred before. However, despite the temporal relationships that exist within a sequence of gestures, there is limited research that has considered continuous gesture recognition. Most gesture recognition approaches are based on recognising isolated gestures \cite{joze2020mmtm,molchanov2015hand,molchanov2015multi} where an input video is manually segmented into clips, each of which contains a single isolated gesture. In a real-world scenario where gestures are performed continuously, methods based on isolated gestures are not directly applicable, and thus do not translate to a natural setting. As such, recent approaches \cite{benitez2020ipn,hoang2019continuous,kopuklu2019real} aim to recognise gestures in the original continuous (i.e. unsegmented) video, where both gestures and non-gesture actions are present. These continuous gesture recognition approaches are formulated in two ways: two-stage \cite{hoang2019continuous,kopuklu2019real,zhu2018continuous} and single-stage \cite{gupta2016online} methods. The two-stage approach is built around using two models: one model to perform gesture detection (also known as gesture spotting), and another for gesture classification. In \cite{kopuklu2019real} the authors proposed a two-stage method where gestures are first detected by a shallow 3D-CNN and when a gesture is detected, it activates a deep 3D-CNN classification model. Another work \cite{hoang2019continuous} proposed using a Bidirectional Long Short-Term Memory (Bi-LSTM) to detect gestures while the authors use a combination of two 3D Convolutional Neural Networks (3D-CNNs) and a Long Short-Term Memory (LSTM) network to process multi-modal inputs for gesture classification. \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{Figures/problem_def.pdf} \caption{Single-stage Continuous Gesture Recognition: The model is fed with a multi-modal (RGB, Depth etc.) feature sequence and the ground truth label sequence. Input gestures can belong to a particular gesture class or a non-gesture (BG) class. During training, using the ground truth, the model learns to map the input frames to the correct gesture classes.} \label{fig:task} \end{figure} Single-stage approaches originate from the action recognition domain \cite{lea2017temporal,farha2019ms}, where frames that do not contain an action are labelled `background' (similar to the non-gesture class). In contrast to two-stage methods, single-stage methods use only a single model which directly performs gesture classification. Fig. \ref{fig:task} illustrates the typical structure of a single-stage approach where the recognition is performed by considering all the gesture classes together with the non-gesture class.
In addition to being simpler than two-stage methods, single-stage methods avoid the potential issue of errors being propagated between stages. For example, in a two-stage method, if the detector makes an error when estimating the start or end of a gesture sequence, this error is propagated through to the classification process. Hence in two-stage methods, classifier performance is highly dependent on the robustness of the detector. However, we observe that two-stage methods are a popular choice among researchers when performing continuous gesture recognition. This is largely due to the challenges that a single network must address when performing both gesture localisation and recognition concurrently. Several gesture recognition approaches have also exploited multi-modal data and have shown improved results through fusion \cite{joze2020mmtm,hoang2019continuous}. In \cite{joze2020mmtm}, the authors introduce a simple neural network module to fuse features from two modes for the isolated gesture recognition task. However, for continuous gesture recognition, any fusion scheme must consider that the input sequence may include multiple gestures that evolve temporally. Hence, using a simple attention layer to fuse domains restricts the learning capacity as model attention is applied to the complete sequence, ignoring the fact that there are multiple gesture sub-sequences, and potentially leading to some individual gestures being suppressed. In this paper we propose a novel single-stage method for continuous gesture recognition. By using a single-stage approach we expect the classification model to learn natural transitions between gestures and non-gestures. However, directly learning the gestures from a continuous unsegmented video is challenging as it requires the model to detect the transitions between gesture classes, and recognise gestures/non-gestures simultaneously. To improve performance we consider multiple modalities and introduce a novel fusion module that extracts multiple feature sub-sequences from the multi-modal input streams, considering their temporal order. The proposed fusion module preserves this temporal order and enables the learning of discriminative feature vectors from the available modalities. To aid model learning, we propose a novel mid-point based loss function, and perform additional experiments to demonstrate the effectiveness of the proposed loss. Figure \ref{fig:model} illustrates the architecture of our proposed Temporal Multi-Modal Fusion (TMMF) framework. In the first stage of the model, semantic features from each mode are extracted via a feature extractor, and are passed through a Unimodal Feature Mapping (UFM) block. We maintain separate UFM blocks for each stream. The outputs of all UFM blocks are used by the proposed fusion module which learns multi-modal spatio-temporal relationships to support recognition. The output of the fusion module is passed through the Multi-modal Feature Mapping (MFM) block which performs the final classification. The model is explicitly designed to handle variable length video sequences. Through evaluations on three continuous gesture datasets, the ChaLearn LAP Continuous Gesture Dataset (ConGD) \cite{wan2016chalearn}, EgoGesture \cite{zhang2018egogesture} and IPN hands \cite{benitez2020ipn}, we show that our proposed method achieves state-of-the-art results. We also perform extensive ablation evaluations, demonstrating the effectiveness of each component of the model.
Furthermore, we illustrate the scalability of our proposed TMMF model by performing continuous gesture recognition with two and three modalities, where the third modality is obtained using the generative model in \cite{isola2017image}. In summary, our contributions are threefold: \begin{itemize} \item We propose a novel single-stage temporal multi-modal fusion (TMMF) framework, that uses a single model to recognise gestures and their temporal transitions through multi-modal feature fusion, supported by our proposed fusion block algorithm. \item Our model automatically learns the natural transitions between gestures through the proposed mid-point-based loss function. \item We carry out experiments showing the model's ability to outperform the state-of-the-art on three challenging datasets, and ablation experiments emphasise the contribution of each component of the proposed TMMF architecture. \end{itemize} \begin{figure*}[ht] \centering \includegraphics[width=0.75\textwidth]{Figures/proposed_updated.pdf} \caption{Proposed single-stage Temporal Multi-Modal Fusion (TMMF) framework: The data from each mode is passed through a pre-trained feature extractor and subsequently through separate Unimodal Feature Mapping (UFM) blocks. The output of each UFM block is fused by the proposed fusion block which learns discriminative features from each mode, considering the temporal order of the data. This aids gesture classification which is performed by the Multi-modal Feature Mapping (MFM) block.} \label{fig:model} \end{figure*} \section{Related Works} Gestures are primarily the movements of the limbs or body that aid and emphasize speech, and they play a major role in communication between humans, and in human-computer interactions. Therefore, gesture recognition has been an extensively studied area in computer vision as it enables numerous applications that require human-computer collaboration. We discuss the related works in the areas of isolated gesture recognition (see Section \ref{subsec:litiso}), continuous gesture recognition (see Section \ref{subsec:litcont}), and multi-modal gesture recognition (see Section \ref{subsec:litmulti}). \subsection{Isolated Gesture Recognition} \label{subsec:litiso} Isolated gesture recognition uses segmented videos containing a single gesture per video, and is a naive and simplified way of performing gesture recognition which does not reflect the real-world challenge that gesture recognition presents. Early isolated gesture recognition methods are based on handcrafted features \cite{wan2015explore,shen2012dynamic,trinh2012hand,yang2014super}. For example, in \cite{wan2015explore} the authors proposed a spatio-temporal feature named Mixed Features around Sparse key-points (MFSK), which is extracted from RGB-D data. In \cite{shen2012dynamic} the authors propose to extract a visual representation for hand motions using motion divergence fields. Other methods are based on extracting Random Occupancy Pattern (ROP) features \cite{wang2012robust}, Super Normal Vectors (SNV) \cite{yang2014super}, and improved dense trajectories \cite{wang2013action}. However, these hand-crafted feature methods rely on human domain knowledge and risk failing to capture necessary information that may greatly contribute towards correct recognition. Subsequently, attention has shifted to deep network-based approaches \cite{simonyan2014two,gammulle2019predicting,teng2019deep,shou2016temporal,ji20123d} due to their ability to learn task-specific features automatically.
As such, most recent gesture recognition methods use deep networks \cite{zhang2019eleatt,liu2017continuous,kopuklu2019real,benitez2020ipn,joze2020mmtm,li2020one,su2017unsupervised} and have demonstrated superior results to their hand-crafted counterparts. In \cite{zhang2020gesture}, the authors proposed three variants of 3D CNNs which are able to learn spatio-temporal information through their hierarchical structure to recognise isolated gestures. The authors of \cite{joze2020mmtm} proposed a fusion unit to integrate and learn information that flows through two uni-modal CNN models to support isolated gesture recognition. \cite{abavisani2019improving} introduced a multi-modal training/uni-modal testing approach where the authors embed the knowledge from individual 3D-CNN networks, forcing them to collaborate and learn a common semantic representation to recognise isolated gestures. However, the simplicity of isolated gesture recognition methods prevents their direct real-world application, as input videos that contain multiple gestures must first be segmented. To address this, \cite{benitez2021improving} conducted experiments using light-weight semantic segmentation methods such that an isolated gesture-based method could be used in real time. An alternate approach is the development of methods to directly recognise gestures in unsegmented (i.e. continuous) video streams \cite{kopuklu2019real,liu2017continuous}. \subsection{Continuous Gesture Recognition} \label{subsec:litcont} Continuous gesture recognition is evaluated on unsegmented gesture videos, each of which contains more than one gesture. In such unsegmented videos, as there are sub-sequences containing both gestures and non-gestures, a typical model first detects gesture regions (also known as gesture spotting \cite{zhang2018egogesture}) prior to recognising each gesture. \cite{kopuklu2019real} formulated a two-stage framework to carry out the detection and classification of continuous gestures, where a detection model first spots gestures and the classification model is activated only when a gesture is detected. However, these two-stage methods require two separate networks to perform gesture detection and classification respectively. This limitation has motivated us to develop a single-stage method which requires only a single model to learn the gestures and their natural transitions. To the best of our knowledge \cite{gupta2016online, zhang2018egogesture} are the only existing single-stage continuous gesture recognition methods. In \cite{gupta2016online} the authors employ an RNN to predict the gesture labels for an input frame sequence. In \cite{cao2017egocentric} the authors use a C3D model to classify continuous gestures. The model sequentially slides over the input video and outputs a single gesture class representing the gesture within that input window, including the non-gesture class. They propose to further improve the gesture prediction method by employing a Spatio-Temporal Transfer Module (STTM) \cite{cao2017egocentric} and an LSTM network, where the LSTM predicts gesture labels based on the C3D features. However, these methods fail to achieve the accuracy of two-stage methods. We believe this is due to the simplistic nature of the architecture, which cannot handle the complexities of the single-stage formulation.
The related task of continuous action recognition (also known as temporal action segmentation) has been approached using various strategies \cite{lea2017temporal,farha2019ms,gammulle2019coupled,gammulle2020fine}. Unlike gesture recognition, most temporal action segmentation approaches are single-stage methods where detection and classification are performed by a single network. Single-stage methods offer advantages over two-stage methods in that there is only a single model and errors from the first stage (the detector) are not propagated to the second stage (the classifier). Furthermore, a single-stage model can learn not only a single gesture sequence, but also leverage information on how different types of gestures are sequentially related. \cite{lea2017temporal} introduced Temporal Convolutional Networks (TCNs) that use a hierarchy of temporal convolutions. In \cite{farha2019ms}, the authors have extended the ideas of \cite{lea2017temporal} and introduced a multi-stage model for action segmentation, where each stage is composed of a set of dilated temporal convolutions and generates its own predictions. We take inspiration from \cite{lea2017temporal, farha2019ms} and make use of temporal convolutions and residual dilated temporal convolutions when formulating our uni-modal and multi-modal feature mapping blocks (i.e. UFM and MFM blocks). However, the models in \cite{lea2017temporal, farha2019ms} are unsuitable for a multi-modal problem. Hence we design our gesture recognition model to exploit multi-modal data. \subsection{Multi-modal Gesture Recognition} \label{subsec:litmulti} Multi-modal methods have been investigated in multiple research areas \cite{vielzeuf2018centralnet,Reviewer1_sugg1,simonyan2014two}. These methods can either use multi-modal fusion (early, intermediate or late fusion) \cite{simonyan2014two,owens2018audio}, or learn uni-modal networks through multi-modal network training \cite{abavisani2019improving}. In \cite{chen2017multimodal}, an early fusion hard-gated approach is proposed for multi-modal sentiment analysis, while in \cite{simonyan2014two} a late fusion method (fusion at the prediction level) is proposed for action recognition. In the gesture recognition domain, the authors in \cite{joze2020mmtm} proposed a simple neural network unit to fuse information from the intermediate features of uni-modal networks. Their proposed unit can be added at different levels of the feature hierarchy with different spatial feature dimensions. In \cite{abavisani2019improving}, the authors utilise separate networks for each available modality and encourage the networks to share information and learn common semantic features and representations to improve the individual networks, and achieve better performance. However, the methods in \cite{joze2020mmtm, abavisani2019improving} have limited applicability to the single-stage recognition paradigm and are designed only to handle segmented actions or gestures. When sequences are segmented, all frames are part of the same gesture; hence, simple attention or concatenation of features can produce good results, as all the information relates to a single gesture. In contrast, in a single-stage model operating over continuous gestures, the input to the classifier contains multiple gesture sequences and non-gesture frames. Hence, the fusion strategy should understand how these sub-sequences are temporally related and filter the most relevant information considering this temporal order.
To this end, we introduce a fusion mechanism that preserves this temporal accordance, and that can be applied to two or more modalities for continuous gesture recognition. In \cite{Reviewer1_sugg1,Reviewer1_sugg2} the authors proposed multi-modal methods which share information through rich labels from the text domain, using the proposed Deep Transfer Networks (DTN) to handle the problem of insufficient image training data. Both models take a similar approach to ours, where the uni-modal information (text domain and image domain) is first learned through separate uni-modal networks, and a shared intermediate representation is then learned. However, in our proposed method we learn from both spatial and temporal information, and explicitly learn the temporal evolution of the spatial information across multiple modalities. \section{Method} We introduce a novel framework, the Temporal Multi-Modal Fusion (TMMF) model, to support multi-modal single-stage video-based classification of gestures. In the introduced framework, videos from each mode are first passed through feature extractors, and the extracted deep features are subsequently passed through individual uni-modal networks which we term Unimodal Feature Mapping (UFM) blocks. The output feature vector of each UFM block is used by the proposed fusion block to create a discriminative feature vector, which is passed to the Multi-modal Feature Mapping (MFM) block to perform classification. Figure \ref{fig:model} illustrates the overall model architecture. The task our approach seeks to solve can be defined as follows: given a sequence of video frames $X^i = \{x^i_{1}, x^i_{2},\dots, x^i_{T}\}$, where $i= 1, 2, \dots, M$ ($M$ is the number of modalities), we aim to infer the gesture class label for each time step $t$ (i.e. $\hat{y}_{1}, \hat{y}_{2}, \dots,\hat{y}_{T}$). Our TMMF framework can be used with segmented or unsegmented videos, which may be composed of one or more gesture classes, and supports the fusion of any number of modalities greater than 1. Each feature stream has its own UFM block that is used to obtain a domain-specific representation prior to fusion. In the following sections, we provide a detailed description of the models and the proposed loss formulation. \subsection{Unimodal Feature Mapping (UFM) Block} Video frames for a given modality are passed through a feature extractor (each mode has its own feature extractor to learn a mode-specific representation), and the extracted features are the input to the UFM block. Through the UFM block, we capture salient features related to a specific modality and learn a feature vector suitable for feature fusion. As shown in Figure \ref{fig:model}, this uni-modal network is composed of temporal convolution layers and multiple dilated residual blocks, where each dilated residual block is composed of a dilated convolution layer followed by a non-linear activation function and a $1\times1$ convolution-BatchNorm-ReLU \cite{isola2017image} layer. A dilated convolution can be defined as a convolution where the filter is applied over an area larger than its length by skipping input values at a defined interval. It is similar to a convolution with a larger filter where zeros are placed within the filter to achieve the dilation effect; however, dilated convolutions are more efficient \cite{oord2016wavenet}. We take inspiration from \cite{farha2019ms}, where the authors use residual connections to facilitate gradient flow.
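To make this structure concrete, the following is a minimal PyTorch sketch of one such dilated residual block (the class name, channel width and the 12-block stack are illustrative assumptions for this sketch, not the authors' released implementation):

\begin{verbatim}
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    # Dilated temporal conv -> ReLU -> 1x1 conv + BatchNorm + ReLU,
    # wrapped with a residual (skip) connection, as described above.
    def __init__(self, channels, dilation):
        super().__init__()
        self.dilated = nn.Conv1d(channels, channels, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
        self.norm = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):                 # x: (batch, channels, T)
        out = self.relu(self.dilated(x))  # dilated temporal convolution
        out = self.relu(self.norm(self.pointwise(out)))
        return x + out                    # residual connection

# Hypothetical UFM body: a stack of blocks with doubled dilation.
ufm = nn.Sequential(*[DilatedResidualBlock(64, 2 ** k) for k in range(12)])
\end{verbatim}

The dilation schedule used by such a stack is described next.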
As in \cite{oord2016wavenet,farha2019ms,lea2017temporal}, we use a dilation factor that is doubled at each layer, and each layer is composed of an equal number of convolution filters. Our UFM block has a similar architecture to the single-stage model utilised in \cite{farha2019ms}, but without the final prediction layer. In \cite{farha2019ms}, multiple single-stage models are stacked together in order to formulate the multi-stage architecture, while at each stage an action prediction is made, which is then refined by the next stage. It should be noted that we adopt the UFM block only to encode the features (no gesture predictions are made before feature fusion) and learn the temporal uni-modal data in a manner that supports the feature fusion and gesture segmentation tasks. We further illustrate the importance of the UFM block through ablation experiments in Section \ref{subsec:ablation_exp}. \subsection{Fusion Block} The output vectors of the UFM blocks are passed through the fusion block, which extracts temporal features from the uni-modal sequences, considering their temporal accordance with the current time step. Feature fusion is performed using the attention level parameter. This parameter defines the feature units that should be selected from the output vector of each UFM block at a given time. An illustration is given in Figure \ref{fig:attentions}. \subsubsection{Attention Level parameter ($A$)} Let $V^1_t, V^2_t, \dots, V^M_t$ be the output feature vectors from the UFM blocks representing the $M$ modalities, where $t \in \{1,2,\dots, T\}$. By considering the value set for the parameter $A$, the algorithm decides which feature units from each vector should be selected for fusion at time $t$. This selection criterion is based on the fact that the multi-modal feature streams are synchronised and the features from the temporal neighbours at a particular timestamp should carry knowledge informative for the gesture class of that frame, while distant temporal neighbours do not carry helpful information (as they are likely from different gesture classes). Based on whether $A$ is even or odd, we calculate the position increment ($i_{inc}$) and decrement ($i_{dec}$) values as shown below. Here, $i_{inc}$ defines the number of units ahead we should consider during the fusion, while $i_{dec}$ defines the number of units behind that should be selected. \begin{algorithm}[htb] \SetAlgoLined \KwIn{$A$: Attention Level} \KwOut{ $i_{inc}$ and $i_{dec}$} \uIf{$A$ is even (i.e. $A\%2 = 0$)}{ $i_{inc} = A/2$ and $i_{dec} = (A-2)/2$ \; } \uElseIf{$A$ is odd (i.e. $A\%2 = 1$)}{ $i_{inc} = i_{dec} = (A-1)/2$ } \Return $i_{inc}$, $i_{dec}$ \caption{Calculation of position increment ($i_{inc}$) and decrement ($i_{dec}$) values based on the Attention level parameter.} \label{alg:alg1} \end{algorithm} Once $i_{inc}$ and $i_{dec}$ are calculated, at $t$ the units from $t-i_{dec}$ to $t+i_{inc}$ are selected from each feature vector. This sub-feature vector is given by, \begin{equation} S^i_t = [V^i_{t-i_{dec}}, \dots, V^i_t, \dots, V^i_{t+i_{inc}}], \end{equation} where $i = 1, 2, \dots, M$. As shown in Figure \ref{fig:fusion}, when the attention level is 4, four feature units (from $t-1$ to $t+2$) are selected from each UFM output vector.
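This selection step can be transcribed directly into code; the sketch below follows Algorithm \ref{alg:alg1} (handling of frames near the sequence boundary, which the description above leaves open, is our assumption):

\begin{verbatim}
def split_attention(A):
    # Algorithm 1: units ahead (i_inc) and behind (i_dec) of time t.
    if A % 2 == 0:
        return A // 2, (A - 2) // 2        # i_inc, i_dec for even A
    return (A - 1) // 2, (A - 1) // 2      # i_inc = i_dec for odd A

def sub_vector(V, t, A):
    # V: (T, d) output of one UFM block; returns S_t of shape (A, d).
    # Frames near the sequence boundary would need padding or clamping.
    i_inc, i_dec = split_attention(A)
    return V[t - i_dec : t + i_inc + 1]
\end{verbatim}

For $A=4$ this selects the four units from $t-1$ to $t+2$, matching Figure \ref{fig:fusion}.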
\subsubsection{Feature Enhancer (FE)} \label{sec:fe} At each time step $t$, the feature enhancer receives the computed sub-vectors $S^i_t$ from each UFM, where $i$ indicates the modality, and concatenates these sub-vectors generating an augmented vector $\eta_t$, \begin{equation} \eta_t = [S^1_t, \ldots, S^i_t, \ldots, S^M_t]. \end{equation} If each feature unit is of dimension $d$ and the attention level is $A$, then $\eta_t$ will have shape $(d, A \times M)$. We then utilise the proposed Feature Enhancer (FE) block, which is inspired by the squeeze-and-excitation block architecture introduced in \cite{hu2019squeeze}, to allow the model to identify informative features from the fused multi-modal features, enhancing relevant feature units and suppressing uninformative features. However, the squeeze-and-excitation block of \cite{hu2019squeeze} considers the overall 2D/3D CNN layer output and enhances features considering their distribution across channels. In contrast, we propose to enhance features within the sub-feature vectors $S^i_t$ for each $t$. Through the FE block, features from each sub-feature vector are enhanced by explicitly modelling the inter-dependencies between channels, further supporting the multi-modal fusion. To exploit the sub-feature dependencies we first perform global average pooling to retrieve relevant information within each of the $d$ channels of the sub-feature vector. This can be defined by, \begin{equation} z_t = F^{GAP}(\eta_t(a,m)) = \frac{1}{A \times M}\sum_{a=1}^{A}\sum_{m=1}^{M} \eta_t(a,m). \end{equation} Then a gating mechanism implemented using sigmoid activations is applied to filter the informative features within the $d$ channels such that, \begin{equation} \beta_t = \sigma(W^2\times \mathrm{ReLU}(W^1 \times z_t)), \end{equation} where $W^1$ and $W^2$ are trainable weights of the gating mechanism. An augmented feature vector, $\tilde{\eta}_t$, is obtained by multiplying the feature vector $\eta_t$ by the respective weights $\beta_t$. This feature vector is also of shape $(d, A \times M)$; however, the informative components within it are enhanced by considering all modalities. \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{Figures/attention_levels.pdf} \caption{Illustration of the proposed Attention level parameter, $A$, and the associated attention scheme. This parameter determines the number of temporal neighbours that a particular frame is associated with, controlling the information flow to the fusion module. For instance, if $A=5$, two neighbouring feature units surrounding the current time step $t$ in each direction (i.e. from $t-2$ to $t$ and from $t$ to $t+2$) are selected and processed.} \label{fig:attentions} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{Figures/fusion.pdf} \caption{Illustration of the fusion process when $A = 4$. Features surrounding the current time step $t$ are passed to the proposed fusion block from each modality and it first concatenates them (see Sec. \ref{sec:fe}). Then it passes the concatenated feature vector through the proposed feature enhancement function, which identifies salient feature values in the concatenated vector to enhance, and components to suppress.
Utilising this scheme we identify the most informative feature units from the local temporal window for decision making at the current time step.} \label{fig:fusion} \end{figure} \subsection{Multi-modal Feature Mapping (MFM) block} The MFM block learns to generate the final gesture classification for frame $t$ using the fused feature vector $\tilde{\eta}_t$. Similar to the UFM, the MFM uses a series of temporal convolution layers and multiple dilated residual blocks to operate over the fused feature vector, $\tilde{\eta} = [\tilde{\eta}_1, \ldots, \tilde{\eta}_T]$. By considering sequential relationships, it generates the frame-wise gesture classifications, $\hat{y}_{1}, \dots,\hat{y}_{T}$. This can be written as, \begin{equation} \hat{y}_{1}, \dots,\hat{y}_{T} = F^{\mathrm{MFM}} ([\tilde{\eta}_1, \ldots, \tilde{\eta}_T]). \end{equation} \subsection{Loss Formulation} As the classification loss we utilise the cross-entropy loss, which is defined as, \begin{equation} \mathcal{L}_{ce} = \dfrac{1}{T} \sum_{t} -\log(\hat{y}_{t,c}), \label{eq:ce} \end{equation} where $\hat{y}_{t,c}$ is the predicted probability of the ground-truth class $c$ at time $t$. However, only using the frame-wise classification loss to learn gesture segmentation is insufficient and can lead to over-segmentation errors, even while maintaining high frame-wise accuracy. Hence, we also use the smoothing loss introduced by \cite{farha2019ms}. This smoothing loss uses the truncated mean squared error over the frame-wise log probabilities. The smoothing loss can be defined as, \begin{equation} \mathcal{L}_{sm} = \dfrac{1}{T \times C} \sum_{t,c} \tilde{\Delta}_{t,c}, \label{eq:sm} \end{equation} where, \begin{equation} \tilde{\Delta}_{t,c}= \begin{cases} \Delta_{t,c},& \text{if } \Delta_{t,c}\leq \tau\\ \tau, & \text{otherwise} \end{cases} \end{equation} and, \begin{equation} \Delta_{t,c} = \lvert \log \hat{y}_{t,c} - \log \hat{y}_{t-1,c} \rvert. \end{equation} Here, $T$, $C$ and $\hat{y}_{t,c}$ denote the number of frames per sequence, the number of classes and the probability of class $c$ at time $t$, respectively. In addition to the above two loss functions we further support the gesture segmentation task through our proposed mid-point loss. However, instead of merely smoothing the predictions, we calculate the distance between the smoothed ground truth and predictions, incorporating a smoothing effect when calculating the loss. Through the mid-point loss, we seek to avoid calculating the loss based on all frames in the sequence, and thus limit the complexity of the loss calculation and reduce the confusion that can occur during the learning process. Using this loss the model is able to capture the dominant classes within the input sequence and pay attention to those classes without being overwhelmed by the small fine-grained details within the windows. Let $w$ represent a sliding window with $N$ elements. First, we obtain the ground truth gesture class at the mid-point of the window $w$, \begin{equation} \bar{y} = F^{\mathrm{mid-point}}(y_n), \end{equation} where $n \in \{1,\dots,N\}$ indexes the frames within $w$. Similarly we obtain the predicted gesture class at the mid-point using, \begin{equation} \tilde{y} = F^{\mathrm{mid-point}}(\hat{y}_n). \end{equation} Then we define our mid-point smoothing loss, \begin{equation} \mathcal{L}_{mid} = \sum_{w}\lVert \bar{y} - \tilde{y} \rVert ^2, \label{eq:med} \end{equation} where the sum runs over all sliding windows $w$ in the sequence. As we are operating over the smoothed ground truth and predicted sequences instead of the raw sequences, we observe that this loss accounts for smooth alignment between ground truth and predictions.
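For clarity, one possible reading of this loss is sketched below in PyTorch; here the predicted class-probability vector at each window mid-point is compared against the one-hot ground truth at that frame (the window stride and this vectorised interpretation are our assumptions, as the text does not fix them):

\begin{verbatim}
import torch.nn.functional as F

def midpoint_loss(probs, labels, N, stride):
    # probs:  (T, C) frame-wise class probabilities \hat{y}.
    # labels: (T,)   ground-truth class indices.
    # For each sliding window of N frames, compare the prediction and
    # the one-hot ground truth at the window mid-point.
    C = probs.shape[1]
    loss = probs.new_zeros(())
    for start in range(0, probs.shape[0] - N + 1, stride):
        mid = start + N // 2
        target = F.one_hot(labels[mid], C).float()
        loss = loss + ((probs[mid] - target) ** 2).sum()
    return loss
\end{verbatim}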
Finally, all three loss functions are summed to form the final loss, \begin{equation} \mathcal{L} = \mathcal{L}_{ce} + \lambda_1 \mathcal{L}_{sm} + \lambda_2 \mathcal{L}_{mid}, \label{eq:overall} \end{equation} where $\lambda_1$ and $\lambda_2$ are model hyper-parameters that determine the contribution of the different losses. \subsection{Implementation details} The features are extracted from the flatten layer of ResNet50 \cite{resnet}. Prior to the feature extraction, ResNet50 is initialised with weights pre-trained on ImageNet \cite{russakovsky2015imagenet}, and fine-tuned by keeping the first 6 layers frozen. The UFM block is composed of $k=12$ dilated residual layers, while the MFM block contains $k'=10$ dilated residual layers. Similar to \cite{farha2019ms}, we double the dilation factor at each layer. We use the Adam optimiser with a learning rate of 0.0005. The proposed framework is implemented using PyTorch \cite{paszke2019pytorch}. \section{Experiments} \subsection{Datasets} We evaluate our TMMF model on three challenging public datasets: EgoGesture \cite{zhang2018egogesture}, IPN hand \cite{benitez2020ipn} and the ChaLearn LAP Continuous Gesture dataset \cite{wan2016chalearn}. All three datasets comprise unsegmented videos containing one or more gestures per video. \textbf{EgoGesture dataset \cite{zhang2018egogesture}} is the largest egocentric gesture dataset available for segmented and unsegmented (continuous) gesture classification and is composed of static and dynamic gestures. The dataset contains various challenging scenarios including a static subject with a dynamic background, a walking subject with a dynamic background, cluttered backgrounds, and subjects facing strong sunlight. In our work we utilise the unsegmented continuous gesture data, which is more challenging as it requires us to segment and recognise the gestures in a single pass. The dataset consists of 84 classes (83 gesture classes and the non-gesture class) recorded in 6 diverse indoor and outdoor settings. The dataset contains 1,239, 411 and 431 videos for training, validation and testing purposes respectively. The dataset provides RGB and depth videos. \textbf{IPN hand dataset \cite{benitez2020ipn}} is a recently released dataset that supports continuous gesture recognition. The dataset contains videos based on 13 static/dynamic gesture classes and a non-gesture class. The gestures are performed by 50 distinct subjects in 28 diverse scenes. The videos are collected under extreme illumination conditions, and static and dynamic backgrounds. The dataset contains a total of 4,218 gesture instances and 800,491 RGB frames. Compared to other publicly available hand gesture datasets, IPN Hand includes the largest number of continuous gestures per video, and has the most rapid transitions between gestures \cite{benitez2020ipn}. We utilise the IPN hand dataset specifically as it is provided with multiple modalities: RGB, optical flow and hand segmentation data; which enables us to demonstrate the scalability (Sec. \ref{scalability}) of the proposed framework. \textbf{ChaLearn LAP Continuous Gesture Dataset (ConGD) \cite{wan2016chalearn}} is a large-scale gesture dataset derived from the ChaLearn Gesture Dataset (CGD). The dataset includes 22,535 RGB-D gesture videos containing 47,933 gesture instances, and each video may contain one or more gestures. Overall the dataset is composed of 249 different gesture classes performed by 21 different subjects.
\subsection{Evaluation Metrics} \label{metrics} \textbf{Mean Jaccard Index (MJI)}: To enable state-of-the-art comparisons, we utilise the MJI to evaluate the model on the EgoGesture dataset as suggested in \cite{zhang2018egogesture,wan2016chalearn}. For a given input, the Jaccard index measures the average relative overlap between the ground truth and the predicted class label sequence. The Jaccard index for the $i^{th}$ class is calculated using, \begin{equation} J_{s,i} = \dfrac{\lvert G_{s,i} \cap P_{s,i} \rvert}{\lvert G_{s,i} \cup P_{s,i} \rvert}, \end{equation} where $G_{s,i}$ and $P_{s,i}$ represent the ground truth and predicted frames of the $i^{th}$ class label for sequence $s$ respectively. Then the Jaccard index for the sequence can be computed by, \begin{equation} J_s = \dfrac{1}{l_s} \sum_{i=1}^L J_{s,i}, \end{equation} where $L$ is the number of available gesture classes and $l_s$ is the number of unique true labels in sequence $s$. Then, the final mean Jaccard index over all $q$ testing sequences is calculated, \begin{equation} \bar{J} = \dfrac{1}{q} \sum_{j=1}^q J_{s_j}. \end{equation} \textbf{Levenshtein Accuracy (LA)}: In order to evaluate the IPN hand dataset we use the Levenshtein accuracy metric used by \cite{benitez2020ipn}. The Levenshtein accuracy is calculated by estimating the Levenshtein distance between the ground truth and the predicted sequences. The Levenshtein distance counts the number of item-level changes required to transform one sequence into the other. The Levenshtein accuracy is then calculated by averaging this distance, $l_d$, over the number of true target classes ($T_p$), subtracting the average from 1 (to obtain the closeness), and multiplying by 100 to obtain a percentage, \begin{equation} LA = \left(1 - \dfrac{l_d}{T_p}\right) \times 100 \%. \end{equation} \subsection{Evaluations} \subsubsection{Selection of Parameter Value, $A$} To determine the attention level parameter, $A$, we have evaluated the model on all three datasets with different values of $A$. Figure \ref{fig:attention_level_ego} illustrates the impact of the attention-level parameter on the MJI for the EgoGesture validation set, and the highest value is obtained when $A=8$. Note that $A=1$ is simple concatenation of features. For the IPN hand dataset, we calculated the LA metric and achieved the highest result at $A=8$ (as shown in Figure \ref{fig:attention_level_ipn}). On the ConGD dataset (see Figure \ref{fig:attention_level_congd}) we obtained the highest MJI when $A=10$. Based on these findings, we conducted the remaining experiments using the values of $A$ that achieved the highest MJI and LA results. \begin{figure*} \centering \subfigure[EgoGesture]{\includegraphics[width = 0.3\textwidth]{Figures/att_graph.pdf} \label{fig:attention_level_ego}} \subfigure[IPN Hand]{\includegraphics[width = 0.3\textwidth]{Figures/att_graph.pdf} \label{fig:attention_level_ipn}} \subfigure[ConGD]{\includegraphics[width = 0.3\textwidth]{Figures/att_graph.pdf} \label{fig:attention_level_congd}} \caption{Evaluation performance as $A$ changes for (a) EgoGesture, (b) IPN Hand, and (c) ConGD. For EgoGesture and ConGD, MJI is used to evaluate performance. For IPN Hand we use LA.} \end{figure*} \subsubsection{Impact of $\lambda_1$ and $\lambda_2$} The impact of the proposed losses is controlled through the hyper-parameters $\lambda_1$ and $\lambda_2$. In order to select the best values for the hyper-parameters we follow similar experiments to those in \cite{farha2019ms}.
For the evaluations we have considered the EgoGesture dataset. The impact of $\lambda_1$ is measured by keeping $\lambda_2$ fixed at $0.25$. Then we train the model at different $\lambda_1$ values from 0.05 to 0.25. As shown in Table \ref{tab:hyper_para}, when $\lambda_1=0.15$ the model achieves the highest result, while there is a slight drop when $\lambda_1=0.05$ and $\lambda_1=0.25$. The reason for this is that the smoothing loss tends to heavily penalize changes in frame-wise labels, which control the boundaries between action segments. To measure the impact of $\lambda_2$ we maintain a fixed $\lambda_1$ of 0.15 and train models with different $\lambda_2$ values ranging from 0.05 to 0.35. As shown in Table \ref{tab:hyper_para}, the highest result is obtained at $\lambda_2=0.25$. This indicates that our proposed mid-point based loss contributes more than the smoothing loss to the final gesture recognition result, and highlights the value of seeking to maintain natural gesture transitions. As $\lambda_2$ was increased further to $\lambda_2=0.35$, we noticed a slight drop in the overall result. Based on these findings, we use $\lambda_1=0.15$ and $\lambda_2=0.25$ for the remaining experiments. \begin{table}[ht!] \caption{Impact of $\lambda_1$ and $\lambda_2$ on the EgoGesture dataset with a window size of 16 and stride of 8.} \centering \resizebox{.75\linewidth}{!}{ \begin{tabular}{l|l} \hline \hline Impact of $\lambda_1$ & MJI \\ \hline TMMF $(\lambda_1=0.05, \lambda_2=0.25)$ & 0.786 \\ TMMF $(\lambda_1=0.15, \lambda_2=0.25)$ & 0.803 \\ TMMF $(\lambda_1=0.25, \lambda_2=0.25)$ & 0.791 \\ \hline Impact of $\lambda_2$ & MJI \\ \hline TMMF $(\lambda_1=0.15, \lambda_2=0.05)$ & 0.742 \\ TMMF $(\lambda_1=0.15, \lambda_2=0.15)$ & 0.777 \\ TMMF $(\lambda_1=0.15, \lambda_2=0.25)$ & 0.803 \\ TMMF $(\lambda_1=0.15, \lambda_2=0.35)$ & 0.797 \\ \hline \end{tabular}} \label{tab:hyper_para} \end{table} \subsubsection{State-of-the-art Comparison} In Table \ref{tab:egogesture1}, we compare the performance of the proposed method with the current state-of-the-art using the EgoGesture dataset. It should be noted that the comparison is done using continuous video streams containing multiple gestures in each video (not pre-segmented videos containing a single gesture per video). For the evaluations, the MJI (as described in Sec. \ref{metrics}) is used. Similar to the original works \cite{zhang2018egogesture,cao2017egocentric}, we use two settings in evaluating the results. In both settings we keep the sliding window length at 16 and we consider strides of 16 (l=16, s=16) and 8 (l=16, s=8). In Table \ref{tab:egogesture1}, the method introduced in \cite{wang2016large} (denoted QOM + IDMM) is based on a two-stage paradigm which handles the temporal segmentation and classification tasks separately using only the depth information. In order to detect the start and the end frames of each gesture, the Quantity of Movement (QOM) is used. Then each segmented gesture is encoded as an Improved Depth Motion Map (IDMM) and fed to a ConvNet in order to perform gesture classification. In \cite{zhang2018egogesture}, the authors obtain the label predictions by considering the class probabilities of each clip predicted by the C3D softmax layer. Here, the sliding window is applied over the whole sequence to generate video clips. For the setting where the sliding windows overlap (i.e.
l=16, s=8), frame label predictions are obtained by accumulating the classification scores of two overlapping windows, and the most likely class is chosen as the predicted label for each frame. The authors in \cite{zhang2018egogesture} further improved this result by utilising an LSTM network (denoted C3D + LSTM), where the class labels of each frame are predicted by an LSTM (with a hidden feature dimension of 256) based on the C3D features extracted at the current time slice. We also compare our results with the method introduced in \cite{cao2017egocentric}, where the authors proposed a gesture prediction method which employed a Spatio-Temporal Transfer Module (denoted by C3D + STTM). However, in both settings, our proposed TMMF model is able to outperform the current state-of-the-art methods for the EgoGesture dataset by a considerable margin. We also obtain a 1.9\% gain in MJI (see Sec. \ref{metrics}) using the second setting (i.e. l=16, s=8), where the sliding windows overlap. In addition to the results included in Table \ref{tab:egogesture1}, we have evaluated the results using the frame-wise accuracy metric and achieved 95.10\%. Compared to the highest recorded state-of-the-art result of 94.72\% in \cite{shi2019gesture}, we achieved a 0.38\% improvement in frame-wise accuracy. However, the limitation of frame-wise metrics is that they do not capture the segmentation behaviour of continuous data. Models achieving similar accuracy can have large variation in qualitative results \cite{lea2017temporal}. As such, we are unable to compare the segmentation abilities of our method with the method proposed in \cite{shi2019gesture}. \begin{table}[ht!] \caption{Comparison of our proposed TMMF model with the state-of-the-art methods on the EgoGesture dataset. Results are shown using the MJI metric (see Sec. \ref{metrics}). } \centering \resizebox{.75\linewidth}{!}{ \begin{tabular}{ll} \hline \hline Method & MJI \\ \hline QOM + IDMM \cite{wang2016large} & 0.430 \\ C3D (l=16, s=16) \cite{zhang2018egogesture} & 0.618 \\ C3D (l=16, s=8) \cite{zhang2018egogesture} & 0.698 \\ C3D + STTM (l=16, s=8) \cite{cao2017egocentric} & 0.709 \\ C3D + LSTM (l=16, s=8) \cite{zhang2018egogesture} & 0.718 \\ \hline TMMF (l=16, s=16) & 0.784 \\ TMMF (l=16, s=8) & \textbf{0.803} \\ \hline \end{tabular}} \label{tab:egogesture1} \end{table} We also evaluate our proposed TMMF model on the IPN Hand dataset \cite{benitez2020ipn} and Table \ref{tab:ipnhands_1} includes a comparison of our proposed fusion model with the state-of-the-art. The results use the Levenshtein accuracy metric (see Sec. \ref{metrics}). In the original work \cite{benitez2020ipn}, the authors performed continuous gesture recognition using a two-stage approach where at the first stage a separate detection model is used to detect gestures within a sequence. For this task binary classification is carried out to separate gestures from non-gestures using a ResLight-10 \cite{kopuklu2019real} model. In the second stage the detected gesture is classified by the classification model (ResNet50 or ResNeXt-101). For the overall process in \cite{benitez2020ipn}, the authors have considered different combinations of data modalities such as RGB-Flow and RGB-Seg, where 'Flow' and 'Seg' refer to optical flow and semantic segmentation respectively.
However, the authors gained the highest classification results using ResNeXt-101 with RGB-Flow data. In contrast to the two-stage approach introduced in \cite{benitez2020ipn}, we use a single-stage method which directly predicts the sequence of gesture class labels for each frame of the entire sequence. Even though such a direct approach is challenging and requires a high level of discriminating ability within the model to separate multiple gesture and non-gesture classes, our fusion model outperforms the state-of-the-art results on the IPN hand dataset by a significant margin. In Sec. \ref{scalability} we further evaluate the model using the three available modalities of RGB, optical flow and semantic segmentation outputs, illustrating the scalability of the proposed framework. We further compare our proposed TMMF model with the state-of-the-art on the ConGD dataset in Table \ref{tab:conGD_results}. The method introduced in \cite{pigou2017gesture} utilises deep residual networks fused with a bi-directional LSTM to learn spatio-temporal features, while in \cite{cihan2017particle} a probabilistic approach is used to segment gestures prior to the 3DCNN based recognition step. In \cite{liu2017continuous} a temporal segmentation method is proposed utilising the hand positions obtained through a Faster R-CNN based network, which is then followed by a classification step. A similar approach is taken in \cite{wang2017large} with a combination of a two-stream CNN based detection model followed by recognition models handling depth and RGB modalities. Compared to the previous methods, the models in \cite{zhu2018continuous,wan2020chalearn} are able to achieve the best results by a significant margin. \cite{zhu2018continuous} proposed a two-stage method where the authors introduce a temporal dilated Res3D network for the gesture detection task, which is followed by a classification module based on a combination of 3DCNN, convolutional LSTM and 2D-CNN networks. In \cite{wan2020chalearn}, a bi-directional LSTM is utilised to perform the temporal segmentation task. Our proposed TMMF method outperforms the state-of-the-art models of \cite{zhu2018continuous,wan2020chalearn} by 1.38\% and 1.22\% respectively (results based on the MJI metric). \begin{table}[ht!] \caption{Comparison of our proposed TMMF model with the state-of-the-art methods on the IPN Hand dataset. The results are shown in terms of Levenshtein accuracy (see Sec. \ref{metrics}). } \centering \resizebox{.8\linewidth}{!}{ \begin{tabular}{lll} \hline \hline Method & Modality & Results \\ \hline ResNet50 \cite{benitez2020ipn} & RGB-Seg & 33.27 \\ ResNet50 \cite{benitez2020ipn} & RGB-Flow & 39.47 \\ ResNeXt-101 \cite{benitez2020ipn} & RGB-Seg & 39.01 \\ ResNeXt-101 \cite{benitez2020ipn} & RGB-Flow & 42.47 \\ \hline TMMF & RGB-Flow & \textbf{68.12} \\ \hline \end{tabular}} \label{tab:ipnhands_1} \end{table} \begin{table}[ht!] \caption{Comparison of our proposed TMMF model with the state-of-the-art methods on the LAP ConGD dataset. Results are shown using the MJI metric (see Sec.
\ref{metrics}).} \centering \resizebox{.9\linewidth}{!}{ \begin{tabular}{ll} \hline \hline Method & MJI \\ \hline Temporal Residual Networks \cite{pigou2017gesture} & 0.3164 \\ C3D + Probabilistic Forced Alignment \cite{cihan2017particle} & 0.3744 \\ ConvNets + convLSTM \cite{wang2017large} & 0.5950 \\ Faster RCNN + C3D \cite{liu2017continuous} & 0.6103 \\ TD-Res3D \cite{zhu2018continuous} & 0.7163 \\ Bi-LSTM \cite{wan2020chalearn} & 0.7179 \\ \hline TMMF & \textbf{0.7301} \\ \hline \end{tabular}} \label{tab:conGD_results} \end{table} In addition to the quantitative results, we provide qualitative results (in Fig. \ref{fig:qualitative1} (a), (b) and (c)) where we visualise the temporal gesture predictions generated by the proposed method for different frame sequences from the ConGD, EgoGesture and IPN Hand datasets respectively. Please refer to the supplementary material for additional qualitative results. \begin{figure*}[ht!] \centering \subfigure[][ConGD]{\includegraphics[width=.79\textwidth]{Figures/ConGD_new_1.pdf}} \\ \subfigure[][EgoGesture]{\includegraphics[width=.8\textwidth]{Figures/EgoGesture_new.pdf}} \\ \subfigure[][IPN Hand]{\includegraphics[width=.8 \textwidth]{Figures/IPN_hand_new.pdf}} \\ \caption{Qualitative results of the proposed model's predictions on the ConGD, EgoGesture and IPN Hand datasets.} \label{fig:qualitative1} \end{figure*} \subsubsection{Impact of Loss Formulation} We investigate the impact of our proposed loss formulation, which enhances the overall learning of the introduced model. In Table \ref{tab:loss_results}, the MJI on the EgoGesture dataset obtained with different loss formulations is shown. In the table, $\mathcal{L}_{ce}$, $\mathcal{L}_{sm}$ and $\mathcal{L}_{mid}$ represent the cross-entropy loss (Eq. \ref{eq:ce}), the smoothing loss (Eq. \ref{eq:sm}) and the mid-point smoothing loss (Eq. \ref{eq:med}) respectively. Note that the combination of all three losses is the loss used by the proposed approach, $\mathcal{L}_{overall}$ (Eq. \ref{eq:overall}). \begin{table}[ht!] \caption{The impact of different loss formulations: we compare the MJI on the EgoGesture dataset with different loss formulations. Here, $\mathcal{L}_{ce}$, $\mathcal{L}_{sm}$ and $\mathcal{L}_{mid}$ represent the cross-entropy loss (Eq. \ref{eq:ce}), the smoothing loss (Eq. \ref{eq:sm}) and the mid-point smoothing loss (Eq. \ref{eq:med}) respectively.} \centering \resizebox{.55\linewidth}{!}{ \begin{tabular}{ll} \hline \hline Loss & MJI \\ \hline $\mathcal{L}_{ce}$ & 0.753 \\ $\mathcal{L}_{ce}+ \lambda_1 \mathcal{L}_{sm}$ & 0.781 \\ $\mathcal{L}_{ce}+ \lambda_2 \mathcal{L}_{mid}$ & 0.784 \\ $\mathcal{L}_{ce}+ \lambda_1 \mathcal{L}_{sm} + \lambda_2 \mathcal{L}_{mid}$ & \textbf{0.803} \\\hline \end{tabular}} \label{tab:loss_results} \end{table} From Table \ref{tab:loss_results} we observe that both losses, $\mathcal{L}_{mid}$ and $\mathcal{L}_{sm}$, have contributed to improving over the cross-entropy loss alone, with the proposed mid-point based smoothing mechanism showing a slightly higher improvement. However, we observe a significant improvement when utilising all losses together, illustrating the importance of the mid-point based comparison of predicted and ground-truth windows. \subsubsection{Scalability of the Fusion Block} \label{scalability} In order to illustrate the scalability of the proposed framework and fusion mechanism to different numbers of modalities, we make use of a third modality: the segmentation maps which are provided in the IPN Hand dataset \cite{benitez2020ipn}.
To make the feature extraction of hand segmentation maps more meaningful, we use the Pix2Pix GAN introduced in \cite{isola2017image}\footnote{We use the implementation provided at https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix}. We first train the GAN to generate hand segmentation maps that are similar to the ones provided with the IPN Hand dataset. We set the number of filters in the generator and the discriminator to 8 and train the GAN by following the original work. After GAN training, we use the trained generator model for feature extraction of the third modality, where features are obtained from the bottleneck layer of the generator. These extracted feature vectors (of dimension $18 \times 18 \times 8 = 2592$) are fed to the third UFM model alongside the UFM models used for the RGB and optical flow based feature vectors, as per the model evaluated in Table \ref{tab:ipnhands_1}. It should be noted that having varying feature vector dimensions (i.e., 2592 for segmentation map inputs and 2048 for RGB and optical flow features) does not affect the fusion, as the UFM block maps the feature vectors to the same dimensionality at the output head, which is the input to the fusion block. As expected, with the use of three modalities we were able to improve the overall Levenshtein accuracy by 1.8\% over the setting with only two modalities, achieving a Levenshtein accuracy of 69.92\% with the three feature modalities. With this evaluation we illustrate that the proposed method can seamlessly be extended to fuse data from a different number of modalities with different feature dimensions. \subsubsection{Ablation Experiment} \label{subsec:ablation_exp} \begin{figure*}[ht!] \centering \subfigure[][Only RGB/ Only Depth]{\includegraphics[width=.25 \linewidth]{Figures/single_stream.pdf}} \subfigure[][Simple Fusion]{\includegraphics[width=.4\linewidth]{Figures/simple_concat.pdf}} \caption{Ablation models: (a) The models that utilise a single modality (Only RGB/ Only Depth) are composed of a single UFM and MFM block, and the MFM block works as a second uni-modal network. (b) The Simple Fusion model is formulated by concatenating the sequences of RGB and depth features instead of utilising the proposed fusion block.} \label{fig:ablation_arc} \end{figure*} In order to demonstrate the importance of the proposed fusion mechanism we conducted an ablation experiment. In the experiment, we gradually remove components of the proposed framework and re-train and test the models with the EgoGesture dataset. In Table \ref{tab:ablation}, we report the evaluation results for six ablation models with the MJI metric. The ablation models are formulated as follows. \begin{itemize} \item \textbf{Only RGB:} A single UFM block is utilised for the RGB stream and the output of the uni-modal block is passed directly through the MFM block (see Fig. \ref{fig:ablation_arc} (a)). Here, the MFM block works as a second uni-modal network as only a single modality is used. Therefore, the proposed feature enhancer (FE) is not used. \item \textbf{Only Depth:} The model is similar to that of the `Only RGB' ablation model (see Fig. \ref{fig:ablation_arc} (a)). However, instead of the RGB input stream, the depth input stream is used. \item \textbf{Simple Fusion:} Our proposed framework is utilised without the introduced fusion block. The fusion is performed by concatenating the sequences of RGB and depth modalities. The model architecture is illustrated in Fig. \ref{fig:ablation_arc} (b).
\item \textbf{TMMF w/o UFM:} Our proposed framework without the UFM blocks to temporally map and encode the uni-modal features. The sequence of features extracted through the ResNet50 with dimension 2048 is fed directly to the fusion block. \item \textbf{TMMF w/o MFM:} The proposed TMMF model without the MFM block to temporally map the multi-modal features and classify the gestures. The output from the fusion block is directly used for classification. \item \textbf{TMMF w/o FE:} The proposed framework without the feature enhancer (FE) module. \end{itemize} \begin{table}[ht!] \caption{Evaluation results for the ablation models using the EgoGesture dataset.} \centering \resizebox{.55\linewidth}{!}{ \begin{tabular}{ll} \hline \hline Model & MJI \\ \hline Only RGB & 0.697 \\ Only Depth & 0.741 \\ Simple Fusion & 0.755 \\\hline TMMF w/o UFM & 0.640 \\ TMMF w/o MFM & 0.763 \\ TMMF w/o FE & 0.792 \\ \hline TMMF & \textbf{0.803} \\ \hline \end{tabular}} \label{tab:ablation} \end{table} A key observation based on the results presented in Table \ref{tab:ablation} is that naive concatenation of multi-modal features does not generate helpful information for continuous gesture recognition. We observe a performance drop of approximately 5\% when simple concatenation is applied in comparison to the proposed approach, and only a slight improvement for naive concatenation over the best individual mode (depth). We have also conducted experiments by removing the important components of the proposed TMMF model. It is clear that the UFM block plays a major role at the earlier stage of the framework, by learning the spatio-temporal information of the uni-modal data and encoding this into more discriminative semantic features to support the fusion block. The results of `TMMF w/o UFM' show that the performance is even lower than the `Only RGB' setting, which utilises a UFM model for the uni-modal feature mapping. We performed another evaluation without the MFM block (TMMF w/o MFM) to map the multi-modal features while keeping the UFM blocks. The evaluation result is slightly higher than the `Simple Fusion' setting, and we believe this mainly benefits from the UFM and fusion blocks. Furthermore, the proposed temporal fusion strategy, as well as the feature enhancement block (based on the results of the TMMF w/o FE setting), have clearly contributed to the superior results that we achieved. \section{Conclusion} We propose a single-stage continuous gesture recognition framework (TMMF) with a novel fusion method to perform multi-modal feature fusion. The proposed framework can be applied to varying length gesture videos, and is able to perform the gesture detection and classification in a single direct step without the help of an additional detector model. The proposed fusion model is introduced to handle multiple modalities without a restriction on the number of modes, and further experiments demonstrate the scalability of the fusion method and show how the multiple streams complement the overall gesture recognition process. With the proposed loss formulation our introduced single-stage continuous gesture recognition framework learns the gesture transitions with considerable accuracy, even with the rapid gesture transitions of the IPN Hand dataset. The ablation experiments further highlight the importance of the components of the proposed method, which outperformed the state-of-the-art systems on all three datasets by a significant margin.
Our model has applications to multiple real-world domains that require classification on continuous data, while the fusion model is applicable to other fusion problems where videos or signal inputs are present, and can be used with or without the UFM or MFM blocks. \section*{Acknowledgment} The research presented in this paper was supported by an Australian Research Council (ARC) Discovery grant DP170100632. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \par Location modeling is a branch of operations research with vast real-world applicability and thus has been studied for a number of decades. Location modeling typically considers the location and time of demand signals over a network and optimizes the corresponding location of a servicing asset, such as a factory or vehicle. The underlying spatiotemporal demand signal data points are thus instrumental to the quality of the resulting model. \par When considering large spatiotemporal datasets, there is frequently a need to aggregate demand points to make the problem more tractable for the solver, clearer for the analyst, and comprehensible for the end-user. Aggregation, while of practical use, is not a lossless compression, and introduces aggregation error into the model. When location data is aggregated, the resulting grouping's location is traditionally represented by an aggregated data point. The distances between the actual demand points and the aggregated data points depend on the size of the aggregated region and the manner of aggregation. Similarly, the magnitude of uncertainty in the aggregated demand volumes is influenced by the nature of the aggregation. Therefore, great consideration must be given to the aggregation technique used when solving location problems. \par The impact of aggregation becomes more pronounced when the geographic region expands in size and there is high variability in demand density across the region; this challenge is exemplified in the study of the United States Coast Guard (USCG) District 14's search and rescue (SAR) mission. The international community recognizes the need for global cooperation in responding to emerging crises around the world. Nations have entered into SAR agreements, dividing the globe into respective search and rescue regions (SSRs). Per the United States National Search and Rescue Supplement to the International Aeronautical and Maritime Search and Rescue Manual \citep{NatSAR}, the USCG is the federal SAR coordinator for SAR missions within the United States' maritime SSRs and the aeronautical SSRs that do not overlay the continental United States or Alaska. \par USCG District 14 is headquartered in Honolulu, Hawaii and is responsible for USCG statutory missions across the Pacific region. In particular, the district's SSR spans more than 12 million square nautical miles, though the preponderance of SAR emergencies occur in the vicinity of Guam and the Hawaiian Islands. Additionally, District 14 has among the fewest assets in the USCG fleet, increasing the necessity to optimally posture those assets across the Pacific. Given the time-sensitive nature of rescue operations, it is imperative the USCG be optimally postured to ensure rapid response. Over the past decade, researchers have partnered with Coast Guard units, both USCG and international, to solve these variations of the traditional facility location problem. These studies typically use historic SAR event data as the foundation of either a deterministic or simulation-based location model. \par This study quantifies the effects of the aggregation trade-off for spatiotemporal data over a large region, using District 14 SAR emergency data as a practical basis for consideration. Section 2 of this paper reviews previous works related to the aggregation of data for location models in general and Coast Guard SAR missions in particular.
In Section 3, we outline the methodology for implementing various aggregation techniques, both deterministic and stochastic, using a training data set. In Section 4, we evaluate the effectiveness of these techniques by quantifying the aggregation errors between the modelled demand and actual demand over a two-year period. In Section 5, we review our findings and provide recommendations for future research. \section{Related Works} \par Researchers have long been cognizant of a relationship between the methods used to aggregate location data and the resulting solutions generated by location models using this data. \cite{Gehlke1934} were among the first to note this problem, observing that the smoothing of census data inherent in aggregation resulted in a loss of valuable information and impacted the corresponding correlation coefficients of their models. \cite{Hillsman1978} laid a foundation for aggregation theory when they classified three sources of error (types A, B, and C) associated with representing individual demand points using aggregated demand points for solving factory location problems. Source A refers to the difference between the distance from the aggregated demand points to the placed factory and the sum of distances from the individual demand points to the factory. Source B is similar to Source A, but arises when the factory is required to be collocated with an aggregated demand point. Source C refers to the phenomenon where individual demand points are erroneously assigned to inefficient factories due to the zone in which they are aggregated. \par Several research teams have subsequently sought to quantify and minimize these aggregation errors. \cite{Papadimitriou1981} presents two heuristics for aggregating data points in a manner that reduces the worst-case aggregation error, and \cite{Zemel1984} produced a theorem for the worst-case bounds on Papadimitriou's honeycomb approach. \cite{Qi2010} note the underlying assumption in Zemel's work of uniformly distributed demand points, and propose a multi-pattern tiling approach for considering arbitrarily distributed demand. Works by \cite{Current1987, Current1990} outline methods for eliminating Source A and B error when solving P-Median, set covering, and maximal covering location problems. \cite{Francis2014} present a metric for measuring the error bounds for a P-Median problem, and \cite{Francis2004b} discuss formulations for minimizing the aggregation error using a penalty function approach. \par In the fields of geography and ecology, aggregation error of spatial data points is dubbed the modifiable areal unit problem (MAUP) \citep{Openshaw, Dark2007} or the zone definition problem \citep{Fotheringham1995, Curtis1996}. Research into MAUP typically decomposes the problem into two main effects: the scale effect and the zone effect. The scale effect refers to the impact on the spatial analysis results that is caused by the fidelity of the aggregation; for example, the impact of aggregating demand in a city using 200m $\times$ 200m grids versus 1km $\times$ 1km grids. Conversely, the zone effect refers to the impact caused by the way in which aggregation zones are bounded; for example, the impact of aggregating demand in a state using county lines versus city limits versus a grid overlay \citep{Openshaw, Dark2007}. \cite{Jelinsky&Wu} created a seminal contrived demonstration of these effects, which we replicate for completeness in Figure \ref{JelWU97}.
\begin{figure}[h] \centering \includegraphics[width=0.85\textwidth]{JelWu.PNG} \caption{(a)-(c) show the scale effect: as the scale of aggregation increases, the mean does not change but the variance declines. (d)-(e) show the zone effect: keeping the scale equal but changing the method of aggregation changes the variance. (c), (e), and (f) show that even when the number of zones is constant (4), the mean and variance can change.} \label{JelWU97} \end{figure} \par Previous research on MAUP has cautioned against arbitrary aggregation of spatial data and stressed its threat to the reliability of the resulting location analysis. \cite{Openshaw} was foundational in the study of MAUP and called for developing better methods for aggregating spatial data due to MAUP's impact on the reliability of geographic studies. \cite{Curtis1996} studied data for New York and concluded that researchers can bias the results of their analysis based on the means of aggregation, even if there appears to be a logical basis for the employed method of aggregation. \cite{Fotheringham1995} go so far as to question the accuracy of any location-based analysis conducted using aggregated data because of the effects of MAUP. \par In studies of MAUP, and aggregation theory in general, trends have emerged. Increases in the number of aggregated zones typically accompany decreases in distance-based aggregation error; distance-based aggregation error disappears when each distinct demand point is assigned to a unique zone (i.e., the number of aggregation zones equals the number of demand points). As any grouping introduces an associated level of distance-based error, it follows that reducing the amount of aggregation would subsequently reduce this error. \cite{Francis2004} notes, however, that the law of diminishing returns applies in this context: iterative reductions in the number of aggregate groups show diminishing improvements in error reduction. \cite{Francis1992} discuss the \textit{paradox of aggregation}, noting that solving formulations to minimize error can be more cumbersome than the original location problem being solved, which is counter-intuitive as aggregation is employed to simplify the resolution of these original location problems. \cite{Dark2007} consider the trends corresponding to both the scale effect and the zone effect. A known benefit of aggregation is tied to the scale effect; predictions of aggregated demand levels tend to be more accurate with fewer, larger aggregate zones. This is because when there are more demand points consolidated in each zone, the demand variance between zones decreases. The impact of the zone effect is less understood and tends to differ from problem to problem. \par The importance of careful aggregation has been thoroughly studied and is synthesized by \cite{Francis2008}. In their survey of previous literature regarding aggregation error associated with location problems, Francis et al. note that there is an inherent tradeoff when aggregating data points; although aggregation has a tendency to decrease computational requirements and statistical uncertainty within the grouped data, it increases the error within the model by introducing aggregation error. Thus there does not exist a singular ``best'' level of aggregation, and the tradeoffs inherent in aggregation must be considered. \par In addition to the theoretical work on this problem, there has been applied work specifically relating to Coast Guard SAR missions.
Although some research into this area was conducted in the late 1970s \citep{Armstrong}, the preponderance of studies relating to Coast Guard posturing has emerged in the past decade. Studies researching the allocation of SAR assets, or facilities, typically adopt a quadrat modeling technique for aggregating location data \citep{AkbariINFOR, Akbari, Karatas, Afshartous}. This technique consists of decomposing the region in question into square cells using a grid overlay. Notably, the quadrat method is frequently adopted in crime data analyses, which typically seek to quantify spatial trends in criminal activity across a city or state \citep{Anselin, Chainey}. \par \cite{Armstrong} constructed a goal-programming model for assigning SAR aircraft, incorporating probabilistic consideration for the time required by the aircraft to locate distress events in different areas of the corresponding region, using a grid overlay to create a collection of square zones. These zones were then assigned deterministic values, representing the average number of distress events per month. Similarly, \cite{Karatas} utilized a quadrat model for simulating the location and volume of distress calls for the Turkish Coast Guard in the Aegean Sea. They first determined the optimal resource allocation strategy using individual events as separate demand nodes, and then evaluated the effectiveness of this strategy using simulated demand. \par The incorporation of kernel density estimation with the quadrat model, popular in crime data analysis \citep{Anselin}, has been previously implemented in SAR location problems. The kernel density estimation method decomposes the region into grid cells and assigns a density function to each data point ($s_i$). Points that are within proximity to each other relative to a specified bandwidth ($\tau$) are grouped into a kernel ($k$) and their density functions are combined. The resulting image is a smooth heat map with greater densities illustrated over areas that have the most activity clustered closely together \citep{Anselin, Chainey}. \cite{Erdemir} utilized kernel density estimation when considering the problem of locating aeromedical bases across the state of New Mexico. Similarly, \cite{Akbari} implemented a kernel density estimation approach to approximate the intensity of distress calls received by the Canadian Coast Guard. They varied the size of the grid overlay based upon proximity to the shoreline. This decision was based upon the assumption that since most distress events occurred closer to shore, the analysis would benefit from greater fidelity in aggregation along the coastline. \par Though not specifically kernel density estimation, \cite{Afshartous} implemented an intensity function-based approach for solving the Coast Guard SAR location problem. They first constructed a non-parametric statistical simulation of distress calls within USCG District 7 (headquartered in Miami, Florida) and then utilized their simulation to model demand for a facility location problem. This simulation was constructed by overlaying the region with a relatively fine grid and estimating the intensity of distress calls for each cell. \par While most work regarding SAR posturing has incorporated quadrat techniques, \cite{Azofra} introduced an intuitive method that has been applied to maritime research. Instead of defaulting to grids, Azofra's zonal distribution model allows for flexibility in the definition of emergency zones, such as zones based upon subject matter expertise.
Once the zones are determined, the centroids of distress calls, dubbed \textit{superaccidents}, are computed for each zone. The zonal distribution model is a gravitational model, with the determination of the optimal SAR operational response based upon the distance to the superaccidents and their associated weights. They demonstrate the implementation of this model using a notional example involving three superaccidents and three ports. \par Since the introduction of the zonal distribution model, some researchers have opted to expand upon it by applying it to real-world problems. \cite{Ai} utilize this model for locating supply bases and positioning vessels for maritime emergencies for a portion of the coastline of China along the Yellow Sea. While not adhering to the strict grid cells of previous studies, their zones remained rectangular in shape and varied in size across the region. \cite{Razi} improved upon the zonal distribution model by utilizing a \textit{k}-means clustering algorithm for defining the zones and implementing a weighted approach for locating the superaccidents. By adopting this approach, Razi et al. define the aggregated zones and corresponding representative demand nodes based upon historical trends in distress calls in the Aegean Sea rather than arbitrary cells. \cite{Hornberger} propose an extension to the work of Razi and Karatas, which they dub the stochastic zonal distribution model. Their model implements a hierarchical \textit{k}-means clustering algorithm to define the aggregation zones, fits probability distributions to model the SAR demand for each zone, and then uses empirically constructed discrete distributions to model the corresponding rescue response for each emergency. \par A review of the existing literature regarding SAR asset posturing models finds a lack of explicit consideration regarding the impact of aggregation. Additionally, as SAR research expands to larger regions of consideration (e.g., oceans vs. seas or shorelines), it is necessary to more thoroughly consider the effects of various aggregation methods. Outside of SAR, and more generally emergency response asset modeling (e.g., \cite{araz2007fuzzy}), other transportation resource posturing problems which utilize massive demand data sets assume or require demand aggregation (e.g., taxi service areas \citep{li2019taxi, rajendran2019insights}), and should also be concerned with how such aggregation affects the associated location modeling. To provide such consideration, our study utilizes historic SAR data from across the Pacific Ocean to compare the effectiveness of a zonal aggregation technique with quadrats of varying fidelity. Additionally, we evaluate these tradeoffs in the aggregation as applied to deterministic and stochastic implementations. \section{Methodology} \par In this section, we consider two key characteristics that define a zonal aggregation of demand signals: dividing the region into zones, and modeling the demand level. Using these two characteristics as the framework, we model and compare the following methodologies: deterministic quadrat approaches of various fidelities, the \cite{Razi} zonal distribution model, and the \cite{Hornberger} stochastic zonal distribution model. \par These methodologies are compared using the District 14 SAR region, an interesting test case due to its large area and highly variable demand levels; Figure \ref{SAR_Region} depicts the Honolulu Maritime Search and Rescue Region \citep{SARPlan}.
Historic search and rescue demand data was obtained from the Marine Information for Safety and Law Enforcement (MISLE) database to form both a training set and a test set. The training set comprises SAR events from a 5-year span (January 2011 - December 2015) and is utilized to construct the models of spatiotemporal SAR demand. The accuracy of the aggregated demand methodologies is then evaluated using historic SAR data for the same region from January 2016 - December 2017. \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{SAR_Region.PNG} \caption{Honolulu Maritime SAR Region} \label{SAR_Region} \end{figure} The training and test data is scoped to only consider events that occurred within the District 14 area of responsibility (AOR). Additionally, demand points missing GPS coordinates were removed, as were data points classified as medical consultations, since these consultations only require a discussion with a medical professional over the phone and resources are not dispatched. The final training set contains 2629 demand points and the test set contains 1080 demand points. \subsection{Modeling Spatiotemporal Demand} \par The quadrat aggregation approach was implemented with 6 different quadrat scales to test the impact of the scale effect. These six grid-based decompositions of the region are labelled Aggregations A - F. Aggregation A considered the region of study as a singular zone, consolidating all demand points; see Figure \ref{AggA}. Aggregation B divided the region into two zones along the antimeridian; see Figure \ref{AggB}. Aggregations C, D, and E are iterative increases in fidelity, decomposing the region into eight, fifteen, and forty-three zones, respectively; see Figures \ref{AggC}, \ref{AggD}, and \ref{AggE}. Aggregation F adopts the approach employed by \cite{Akbari} and allows for smaller grid cells in sections of higher demand. Specifically, the two zones from Aggregation E with the greatest proportion of Guam and Hawaiian Island workloads are further decomposed into $1^{\circ} \times 1^{\circ}$ cells; Aggregation F results in 212 zones. Aggregation F is depicted in Figure \ref{AggF}. \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{AggAZones.PNG} \caption{Aggregation A (1 Zone)} \label{AggA} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{AggBZones.PNG} \caption{Aggregation B (2 Zones)} \label{AggB} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{AggCZones.PNG} \caption{Aggregation C (8 Zones)} \label{AggC} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{AggDZones.PNG} \caption{Aggregation D (15 Zones)} \label{AggD} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{AggEZones.PNG} \caption{Aggregation E (43 Zones)} \label{AggE} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{AggFZones.PNG} \caption{Aggregation F (212 Zones)} \label{AggF} \end{figure} \par Aggregation ZDM was constructed following the general implementation of the zonal distribution model of \cite{Razi}, dividing the AOR using a weighted \textit{k}-means clustering algorithm; see Figure \ref{ZDM}. Razi and Karatas defined the weight of each SAR event using an analytical hierarchy process based upon the level of fatality, material damage, response arduousness, and environmental impact.
Their weighting scheme was not viable for this study based on the available information in MISLE, so this implementation of Razi and Karatas's procedure utilizes \textit{total activities} as a weighting. The metric of total activities represents the number of resources assigned to a rescue operation, in addition to the instances when a significant change occurred in the course of the rescue operation; this metric of total activities serves as a proxy for the complexity of a SAR event. Razi and Karatas determine the number of zones to cluster demand points into based upon a \textit{rule of thumb method} proposed by \cite{Kodinariya}. This method suggests that the number of zones $|Z|$ is based upon the total number of events $|K|$, such that $|Z| \approx \sqrt{|K|/2}$. \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{AggZDM.PNG} \caption{Aggregation ZDM (36 Zones)} \label{ZDM} \end{figure} \par Aggregation SZDM was developed by implementing the stochastic zonal distribution model approach proposed by \cite{Hornberger}; see Figure \ref{SZDM}. Hornberger et al. utilized a hierarchical \textit{k}-means clustering algorithm to aggregate demand points into zones. All demand points are sorted into mutually exclusive groups based upon the unit that coordinated the response and the types of assets utilized in the response. District 14 is divided into Sector Guam and Sector Honolulu, which split the coverage of the AOR around longitude $160^{\circ}$ E. Current policy dictates that the mission range for USCG boats is 50 nautical miles from the shoreline of an island on which there exists a USCG boat station; District 14 has boat stations located on the islands of Guam, O'ahu, Kaua'i, and Maui. Hornberger et al. note that a reasonable approximation of asset utilization would be a combination of boats and helicopter aircraft responding to SAR events within the 50 nautical mile boundary of these islands, while a combination of cutters and airplanes responds to SAR events beyond these boundaries. Therefore, all demand points were sorted into the following mutually exclusive groups: Guam Boat/Helicopter Events, Guam Cutter/Airplane Events, Hawaii Boat/Helicopter Events, and Hawaii Cutter/Airplane Events. These groups are further decomposed into clusters based upon the geographic proximity of the data points by employing a \textit{k}-means clustering algorithm. The number of zones was determined by considering the relationship between the number of zones and the corresponding within-cluster variance. A plot of this relationship forms an \textit{elbow curve}, whose name is tied to the phenomenon that initial groupings account for a greater reduction in variance compared to subsequent groupings; the `elbow' of the curve occurs at the suggested number of zones for the data set (a brief illustrative sketch of this clustering step is given in the next subsection). \begin{figure}[h!] \centering \includegraphics[width=0.85\textwidth]{AggSZDM.PNG} \caption{Aggregation SZDM (15 Zones)} \label{SZDM} \end{figure} \subsection{Methods of Comparative Analysis} \par This study evaluates the effectiveness of various methods of aggregation when conducting spatiotemporal forecasting. Specifically, we seek to assess the merit of the \cite{Razi} deterministic zonal distribution model, and the \cite{Hornberger} stochastic zonal distribution model, comparing their effectiveness against traditional quadrat methods of varying fidelities. To conduct these comparisons, two metrics are considered: distance-based aggregation error and volume-based aggregation error.
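Before defining these metrics, we give the clustering sketch referenced in the previous subsection. This is a hedged illustration only, assuming scikit-learn (whose \texttt{KMeans.fit} accepts per-sample weights); all function names and parameters are ours, not the exact procedures of the cited works, and the sketch uses naive Euclidean distance on latitude/longitude pairs.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def elbow_inertias(coords, weights, max_zones=50):
    """coords: (n, 2) array of event (lat, lon); weights: total
    activities per event. Returns the within-cluster variance
    (inertia) for each candidate zone count, for an elbow plot.
    Note: antimeridian wrap-around is ignored in this sketch."""
    inertias = {}
    for z in range(1, max_zones + 1):
        km = KMeans(n_clusters=z, n_init=10, random_state=0)
        km.fit(coords, sample_weight=weights)
        inertias[z] = km.inertia_
    return inertias

def rule_of_thumb_zones(n_events):
    """Kodinariya rule of thumb: |Z| is approximately sqrt(|K|/2)."""
    return int(round(np.sqrt(n_events / 2)))
\end{verbatim}

The fitted \texttt{cluster\_centers\_} of the chosen model serve as the weighted zone centroids used in the metrics below.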
\par The distance-based aggregation error ($d_e$) represents the total distance between where events were modelled as occurring ($\hat{x}_j$) and the actual locations of their occurrence ($x_{i, j}$), for each event ($i \in I$) in each zone ($j \in J$). The anticipated event locations for all zones are the weighted centroids of each zone. In the quadrat models, the centroids are computed as an average of the latitudes/longitudes weighted by the events' corresponding total activities, for all events in the zone. In the zonal and stochastic zonal distribution models, the clustering algorithm yields a weighted centroid. The distance-based aggregation error metric is: \begin{equation} \label{DistMetric} d_e = \sum_{i \in I} \sum_{j \in J} |x_{i, j} - \hat{x}_j| \end{equation} \noindent where the Haversine formula, \begin{equation} \label{haversine} d = 2r \text{ arcsin} \left( \sqrt{\text{sin}^2\left( \frac{\phi_2 - \phi_1}{2} \right) + \text{cos}(\phi_1)\text{cos}(\phi_2)\text{sin}^2\left( \frac{\theta_2 - \theta_1}{2}\right)} \right) \end{equation} \noindent which, given latitudes $\phi$ and longitudes $\theta$, calculates the great-circle distance between two points, is used to calculate each individual distance. \par The weighted distance-based aggregation error ($d_{we}$) is the sum of the differences in distance between where individual assets are modelled as being deployed to ($\hat{x}_j$) and the actual locations assets are dispatched to ($x_{i, j}$). The weighting ($w_i$) is the number of assets assigned to the rescue operation. The difference between $d_e$ and $d_{we}$ is that the former treats individual SAR events as being equal in magnitude, whereas the latter incorporates the number of deployed assets. As with $d_e$, the individual distances in $d_{we}$ are calculated using the Haversine formula. \begin{equation} \label{WDistMetric} d_{we} = \sum_{i \in I} \sum_{j \in J} w_i|x_{i, j} - \hat{x}_j| \end{equation} The distance-based aggregation error ($d_e$) and the weighted distance-based aggregation error ($d_{we}$) are both computed for all Aggregations A-F, as well as for the ZDM and the SZDM (a short computational sketch of these metrics is given below). \par The volume-based aggregation error ($v_e$) represents the total difference between the predicted level of monthly demand for each zone ($\hat{l}_j$) and the actual level of monthly demand ($l_{j, k}$), for each month in the considered time frame ($k \in K$). The metric is computed as: \begin{equation} \label{VolMetric} v_e = \sum_{j \in J} \sum_{k \in K} |l_{j, k} - \hat{l}_j| \end{equation} \par Given that a primary difference between ZDM and SZDM is the integration of stochastic elements in the modeling of the demand, both deterministic and stochastic demand comparisons for volume-based aggregation error are conducted. For purposes of consistency, all frequency considerations are made on a \textit{per month} basis. \par Aggregations A-F are compared to the ZDM using a deterministic demand signal. This requires a singular, static value which represents the typical demand volume for each zone. Two methods are frequently used to identify these deterministic values: averages and medians. The average value is a common metric and is familiar to an end-user decision maker, but can be easily skewed by the presence of outliers. Median values tend to be more stable in the presence of outliers and thus more representative of the typical demand volume. As such, median values are implemented as the metric for deterministic demand volume in this study.
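For concreteness, the Haversine distance and the aggregation error metrics defined above can be sketched in a few lines of Python. This assumes \texttt{numpy}; all function and variable names are hypothetical illustrations, not our production code.

\begin{verbatim}
import numpy as np

EARTH_RADIUS_NM = 3440.065  # mean Earth radius in nautical miles

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance (the Haversine formula); degrees in,
    nautical miles out."""
    p1, p2, t1, t2 = map(np.radians, (lat1, lat2, lon1, lon2))
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin((t2 - t1) / 2) ** 2)
    return 2 * EARTH_RADIUS_NM * np.arcsin(np.sqrt(a))

def distance_error(events, centroids, weights=None):
    """d_e, or d_we when per-event asset counts are supplied.
    events: list of (lat, lon, zone); centroids: zone -> (lat, lon)."""
    total = 0.0
    for idx, (lat, lon, zone) in enumerate(events):
        w = 1.0 if weights is None else weights[idx]
        c_lat, c_lon = centroids[zone]
        total += w * haversine_nm(lat, lon, c_lat, c_lon)
    return total

def volume_error(actual, predicted):
    """v_e: actual[zone][month] vs. the static prediction per zone."""
    return sum(abs(actual[z][m] - predicted[z])
               for z in actual for m in actual[z])
\end{verbatim}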
\par The stochastic modelling approach utilized in SZDM considers the inherent uncertainty present in SAR events by fitting probability distributions to demand volumes in each zone. As noted by \cite{Afshartous} and \cite{Akbari}, SAR events can often be viewed as Poisson processes. In particular, \cite{Hornberger} found the emergence of SAR events in District 14's AOR could be modelled using Poisson and gamma-Poisson distributions. This study implements stochastic demand modeling in SZDM, and compares this to Aggregations C and D to assess the impact of the aggregation method on the simulation of future SAR demand. (Aggregations A and B were deemed too trivial to be of real interest, and stochastic models of Aggregations E and F proved intractable on the authors' hardware.) \par A modification of the volume-based aggregation error, $v_e' = \sum_{j \in J} \sum_{k \in K} ( l_{j, k} - \hat{l}_j)$, is also considered, providing a distinction between over- and under-forecasting events. Stochastic models are compared graphically, plotting the simulated output for each month of the 24-month test period against the actual demand volume observed. \section{Analysis} \subsection{Distance-Based Aggregation Error} \par The distances, in nautical miles, between the aggregated demand points and the subsequent demand nodes during 2016 - 2017 are shown in Table \ref{DeterDemandDist}. The resulting distance-based aggregation error for the quadrat models reflects the law of diminishing returns, as described by \cite{Francis2004}. The first division of the region of study, from Aggregation A to Aggregation B, results in an 82.3\% reduction in the locational aggregation error. This error was continuously diminished with additional divisions. These results support the trend of location error generally reducing with additional zones. \begin{table}[h!] \centering \small \caption{Distance-Based Aggregation Error} \label{DeterDemandDist} \begin{tabular}{| c | c | c | c |} \hline \textbf{Aggregation} & \textbf{Number of Zones} & $d_e$ & $d_{we}$ \\ \hline A & 1 & 1,471,479 & 2,195,276 \\ B & 2 & 251,042.3 & 312,118.6 \\ C & 8 & 171,531.3 & 225,615 \\ D & 15 & 158,119.1 & 208,812.7 \\ E & 43 & 86,745.88 & 119,741.5 \\ F & 212 & 51,553.33 & 66,668.67 \\ \textit{ZDM} & \textit{36} & \textit{80,165.06} & \textit{92,669.37} \\ \textit{SZDM} & \textit{15} & \textit{92,067.72} & \textit{97,425.77} \\ \hline \end{tabular} \end{table} \par Aggregations ZDM and SZDM perform very well compared to the quadrat models. The zonal distribution model has a lower associated location error than Aggregation E, despite only having 36 zones compared to Aggregation E's 43 zones. This runs counter to the general claim that more zones always improve the accuracy of the location model, suggesting instead that deliberate steps can be implemented to aggregate spatial demand points into fewer clusters while still achieving competitively low levels of location error. The stochastic zonal distribution model's results support this observation, achieving a 41.7\% reduction in distance-based aggregation error compared to Aggregation D despite using the same number of zones. \par Similar trends are observed when the attention is shifted from the error in SAR event distances to the error in resource dispatch distances. There is a steady improvement in accuracy as the number of zones is increased, with the exception of Aggregations ZDM and SZDM.
Additionally, the differences between $d_e$ and $d_{we}$ are notably larger for the quadrat models compared to Aggregations ZDM and SZDM; the stochastic zonal distribution model had the smallest increase in location error when weighting by the number of resources dispatched. These observations suggest that deliberate zoning of demand points can enhance the robustness of aggregate zones to weighted events, particularly when the zones are developed with consideration to both geographic proximity and the operational characteristics that are tied to the event weights. \subsection{Deterministic Volume-Based Aggregation Error} \par The total error in volume, based upon the median monthly demand for each zone compared to the actual demand volumes, is depicted in Table \ref{DeterDemandVol}. The phenomenon described by \cite{Francis2008} and \cite{Dark2007} is observed; there is a general increase in total volume-based aggregation error as the number of zones increases. \begin{table}[h!] \centering \small \caption{Volume-Based Aggregation Error for Deterministic Demand Modeling} \label{DeterDemandVol} \begin{tabular}{| c | c | c |} \hline \textbf{Aggregation} & \textbf{Number of Zones} & $v_e$ \\ \hline A & 1 & 139 \\ B & 2 & 189 \\ C & 8 & 288 \\ D & 15 & 306 \\ E & 43 & 372 \\ F & 212 & 584 \\ \textit{ZDM} & \textit{36} & \textit{458} \\ \hline \end{tabular} \end{table} \par Interestingly, implementing the zonal distribution model corresponds to a large volume-based aggregation error, second only to Aggregation F; see Figure \ref{Total_Vol}. This suggests deliberate clustering based on geographic proximity does not correspond to improvements in deterministic demand volume modeling. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Deter_Total_Vol2.png} \caption{Comparison of the Total Volume-Based Aggregation Error for Deterministic Demand Modeling} \label{Total_Vol} \end{figure} \par Additional analysis compared the tendency for different aggregation models to overpredict versus underpredict demand volume. A plot of this analysis is shown in Figure \ref{OverUnder}, color-coding the region of overprediction as red and underprediction as blue. For each month, Aggregations A and B perform equally well; the lines overlap in the plot. With the exception of Aggregation F, all methods adhere to similar trends in spikes and drops throughout the test timeframe. The general trend is for models to underpredict more consistently as they incorporate more aggregated zones. The exception to this trend is the zonal distribution model, which continues to have greater volume-based aggregation error compared to Aggregation E. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{Deter_Vol_Over_Under2.png} \caption{Comparison of Over- and Under-predictions for Deterministic Demand Modeling} \label{OverUnder} \end{figure} \subsection{Stochastic Volume-Based Aggregation Error} \par A comparison of stochastic demand models used probability distributions fitted to each zone in Aggregations C, D, and SZDM. The results from these simulations are compared to the actual observed demand levels for the two-year test period; see Figure \ref{Stoch_Vol}. Note that since the demand distributions were observed to be relatively stationary overall, each month's simulated volume from each model is determined by random draws from static probability distributions assigned to each zone (i.e., Poisson and gamma-Poisson distributions). \begin{figure}[h!]
\centering \includegraphics[width=1\textwidth]{Stoch_Vol.PNG} \caption{Comparison of Stochastic Demand Models and Observed Demand Levels} \label{Stoch_Vol} \end{figure} \par Since the results from Figure \ref{Stoch_Vol} are randomly generated, the emphasis is less on the specific results from month-to-month and more on whether the overall trend appears similar to the observed trend. This analysis shows similar trends for the three stochastic demand models, suggesting that they all could be used to effectively simulate the stochastic demand of the AOR. Aggregation C does show a notable spike in simulated SAR activity at the end of the test period, caused by the coincidence of multiple zones within the model simulating larger-than-normal demand volume. This phenomenon was investigated further. \par While the observed demand volume fluctuates from month-to-month, it stays within the bounds of 30 and 60 events per month. Using these levels as thresholds, a Monte Carlo simulation of 10,000 2-year models was constructed. For each of the 240,000 simulated months, Table \ref{ExtMonth} shows the number that were beyond the thresholds of 30 and 60 events per month. All models appear relatively stable compared to these bounds; Aggregation C, with the greatest number of `extreme months', only had approximately 4.6\% of the 240,000 months classified as `extreme'. The stochastic zonal distribution model appeared to be the most stable of the three considered models, having the fewest months classified as `extreme' on either side of the bound. These findings suggest that while extreme months are not likely to be a significant occurrence in a simulation of SAR demand, the stochastic zonal distribution model minimizes the likelihood this will occur. \begin{table}[h!] \centering \small \caption{Comparison of Extreme Months over 10,000 2-Year Simulations} \label{ExtMonth} \begin{tabular}{| c | c | c |} \hline \textbf{Aggregation} & \textbf{Below 30 Events} & \textbf{Above 60 Events} \\ \hline C & 6175 & 5056 \\ D & 5402 & 4660 \\ \textit{SZDM} & 4727 & 3854 \\ \hline \end{tabular} \end{table} \section{Conclusion} \par The method used to aggregate spatiotemporal demands affects the outcome of location models built using the aggregated data; thus an understanding of the impacts of aggregation methods is fundamental. We have presented a framework for comparison of both static and stochastic spatiotemporal aggregation models, utilizing a distance-based aggregation error metric, an event-magnitude-weighted distance-based aggregation error metric, and a volume-based aggregation error metric. We further applied this framework to test six quadrat aggregation models of varying fidelities, and two zonal-based models, using historical search and rescue data from a massive-scale region possessing highly variable demands. As expected, aggregations with greater fidelity tend to reduce the distance-based aggregation error. In addition, implementation of a deliberate zoning approach (e.g., ZDM and SZDM) further reduces this error while utilizing fewer zones. However, higher-fidelity aggregations with an increased number of zones have a detrimental effect on the modelling of demand volumes. Finally, stochastic representations of SAR demand appear to be effective at simulating actual SAR demand. \par Based on the results of our aggregation analysis we propose the following as potential exploratory efforts.
Zonal techniques based on hierarchies and clustering seem very promising; additional research on the impacts of different clustering techniques could be fruitful. Additionally, combining these zonal techniques, with their associated reduced location errors, with a lower-fidelity aggregation model to project region-level demands may be useful. Finally, a study examining possible nonlinear dynamic effects on the resulting output of location models as a result of changes in aggregation method may be informative.
\section{Introduction} Named Entity Recognition (NER) aims at identifying different types of entities, such as people's names, companies, locations, etc., within a given text. This information is useful for higher-level Natural Language Processing (NLP) applications such as information extraction, summarization, and data mining \cite{Chen:2004:CDM:987521.987556, Banko:2007:OIE:1625275.1625705, Aramaki:2009:TMT:1572364.1572390}. Learning Named Entities (NEs) from social media is a challenging task mainly because (i) entities usually represent a small part of limited annotated data, which makes the task hard to generalize, and (ii) they do not follow strict rules \cite{Ritter:2011:NER:2145432.2145595, Li:2012:TNE:2348283.2348380}. This paper describes a multi-task neural network that aims at generalizing the underlying rules of emerging NEs in user-generated text. In addition to the main category classification task, we employ an auxiliary but related secondary task called NE segmentation (i.e. a binary classification of whether a given token is a NE or not). We use both tasks to jointly train the network. More specifically, the model captures word shapes and some orthographic features at the character level by using a Convolutional Neural Network (CNN). For contextual and syntactical information at the word level, such as word and Part-of-Speech (POS) embeddings, the model implements a Bidirectional Long-Short Term Memory (BLSTM) architecture. Finally, to cover well-known entities, the model uses a dense representation of gazetteers. Once the network is trained, we use it as a feature extractor to feed a Conditional Random Fields (CRF) classifier. The CRF classifier jointly predicts the most likely sequence of labels, giving better results than the network itself. With respect to the participants of the shared task, our approach achieved the best results in both categories: 41.86\% F1-score for entities, and 40.24\% F1-score for surface forms. The data for this shared task is provided by \citet{wnutOrganizers17}. \section{Related Work} Traditional NER systems use hand-crafted features, gazetteers and other external resources to perform well \cite{Ratinov:2009:DCM:1596374.1596399}. \citet{luo-EtAl:2015:EMNLP2} obtain state-of-the-art results by relying on heavily hand-crafted features, which are expensive to develop and maintain. Recently, many studies have outperformed traditional NER systems by applying neural network architectures. For instance, \citet{glample2016} use a bidirectional LSTM-CRF architecture. They obtain a state-of-the-art performance without relying on hand-crafted features. \citet{limsopatham2016_wnut_ner}, who achieved first place in the WNUT-2016 shared task, use a BLSTM neural network to leverage orthographic features. We use a similar approach, but we employ CNN and BLSTM in parallel instead of forwarding the CNN output to the BLSTM. Nevertheless, our main contribution resides in Multi-Task Learning (MTL) and a combination of POS tag and gazetteer representations to feed the network. Recently, MTL has gained significant attention. Researchers have tried to correlate the success of MTL with label entropy, regularizers, training data size, and other aspects~\cite{martinezalonso-plank:2017:EACLlong,bingel-sogaard:2017:EACLshort}. For instance, \newcite{collobert2008unified} use a multi-task network for different NLP tasks and show that the multi-task setting improves generality among shared tasks.
In this paper, we take advantage of the multi-task setting by adding a more general secondary task, NE segmentation, along with the primary NE categorization task. \section{Methodology} This section describes our system\footnote{~\url{https://github.com/tavo91/NER-WNUT17}} in three parts: feature representation, model description\footnote{~The neural network is implemented using Keras (\url{https://github.com/fchollet/keras}) and Theano as backend (\url{http://deeplearning.net/software/theano/}).}, and sequential inference. \subsection{Feature Representation} \label{feature_rep} We select features to represent the most relevant aspects of the data for the task. The features are divided into three categories: character, word, and lexicons. \noindent{\bf Character representation}: we use an orthographic encoder similar to that of \citet{limsopatham2016_wnut_ner} to encapsulate capitalization, punctuation, word shape, and other orthographic features. The only difference is that we handle non-ASCII characters. For instance, the sentence \emph{``3rd Workshop !''} becomes \emph{``ncc Cccccccc p''} as we map numbers to `n', letters to `c' (or `C' if capitalized), and punctuation marks to `p'. Non-ASCII characters are mapped to `x'. This encoded representation reduces the sparsity of character features and allows us to focus on word shapes and punctuation patterns. Once we have an encoded word, we represent each character with a 30-dimensional vector \cite{DBLP:journals/corr/MaH16}. We account for a maximum length of 20 characters\footnote{~Different lengths do not improve results} per word, applying post-padding to shorter words and truncating longer words (a small illustrative sketch of this encoding is given below). \noindent{\bf Word representation}: we have two different representations at the word level. The first one uses pre-trained word embeddings trained on 400 million tweets, representing each word with 400 dimensions \cite{godin2015multimedia}\footnote{~\url{http://www.fredericgodin.com/software}}. The second one uses Part-of-Speech tags generated by the CMU Twitter POS tagger \cite{owoputi2013improved}. The POS tag embeddings are represented by 100-dimensional vectors. In order to capture contextual information, we account for a context window of 3 tokens on both words and POS tags, where the target token is in the middle of the window. We randomly initialize both the character features and the POS tag vectors using a uniform distribution in the range \(\left[-\sqrt{\frac{3}{dim}}, +\sqrt{\frac{3}{dim}}\right]\), where \(dim\) is the dimension of the vectors from each feature representation \cite{he2015delving}. \noindent{\bf Lexical representation}: we use gazetteers provided by \citet{mishra2016_wnut_ner} to help the model improve its precision for well-known entities. For each word we create a binary vector of 6 dimensions (one dimension per class). Each of the vector dimensions is set to one if the word appears in the gazetteers of the related class. \subsection{Model Description} \label{model} \noindent{\bf Character level CNN}: we use a CNN architecture to learn word shapes and some orthographic features at the character level representation (see Figure \ref{fig:cnn_char}). The characters are embedded into $\mathbb{R}^{d \times l}$, where $d$ is the dimension of the features per character and $l$ is the maximum length of characters per word. Then, we take the character embeddings and apply 2-stacked convolutional layers.
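As referenced in Section \ref{feature_rep}, the orthographic encoding can be written as a small Python helper. The sketch below is a hedged illustration only; the padding symbol in particular is our assumption, and this helper is not part of the system's released code.

\begin{verbatim}
def orthographic_encode(word, max_len=20):
    """Map a token to the reduced orthographic alphabet:
    digits -> 'n', letters -> 'c'/'C', punctuation -> 'p',
    non-ASCII -> 'x'; then post-pad or truncate to max_len."""
    out = []
    for ch in word:
        if ord(ch) > 127:
            out.append('x')                      # non-ASCII
        elif ch.isdigit():
            out.append('n')
        elif ch.isalpha():
            out.append('C' if ch.isupper() else 'c')
        else:
            out.append('p')                      # punctuation
    out = out[:max_len]                          # truncate long words
    out += ['<pad>'] * (max_len - len(out))      # post-padding
    return out

# orthographic_encode("3rd")[:3] == ['n', 'c', 'c']
\end{verbatim}

Returning to the convolutional encoder over these character embeddings: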
Following \citet{DBLP:journals/corr/ZhouKLOT15}, we perform a \textit{global average pooling}\footnote{~\citet{DBLP:journals/corr/ZhouKLOT15} empirically showed that \textit{global average pooling} captured more extensive information from the feature maps than \textit{max pooling}.} instead of the widely used \textit{max pooling} operation. Finally, the result is passed to a fully-connected layer using a Rectifier Linear Unit (ReLU) activation function, which yields the character-based representation of a word. The resulting vector is used as input for the rest of the network. \begin{figure} \centering \includegraphics[width=\linewidth,height=6cm]{cnn_char.png} \caption{ \small Orthographic character-based representation of a word (green) using a CNN with 2-stacked convolutional layers. The first layer takes the input from embeddings (red) while the second layer (blue) takes the input from the first convolutional layer. Global Average Pooling is applied after the second convolutional layer.} \label{fig:cnn_char} \end{figure} \noindent{\bf Word level BLSTM}: we use a Bidirectional LSTM \cite{DBLP:journals/corr/DyerBLMS15} to learn the contextual information of a sequence of words as described in Figure \ref{fig:blstm_word}. Word embeddings are initialized with pre-trained Twitter word embeddings from a Skip-gram model \cite{godin2015multimedia} using word2vec \cite{mikolov2013word2vec}. Additionally, we use POS tag embeddings, which are randomly initialized using a uniform distribution. The model receives the concatenation of both POS tags and Twitter word embeddings. The BLSTM layer extracts the features from both forward and backward directions and concatenates the resulting vectors from each direction ($[\vec{h};\quad\cev{h}]$). Following \citet{DBLP:journals/corr/MaH16}, we use 100 neurons per direction. The resulting vector is used as input for the rest of the network. \begin{figure} \centering \includegraphics[width=\linewidth,height=6cm]{blstm_word.png} \caption{ \small Word representation of POS-tag embeddings (blue) and Twitter word embeddings (red) using a BLSTM neural network. } \label{fig:blstm_word} \end{figure} \noindent{\bf Lexicon network}: we take the lexical representation vectors of the input words and feed them into a fully-connected layer. We use 32 neurons on this layer and a ReLU activation function. Then, the resulting vector is used as input for the rest of the network. \noindent{\bf Multi-task network}: we create a unified model to predict the NE segmentation and NE categorization tasks simultaneously. Typically, the additional task acts as a regularizer to generalize the model~\cite{goodfellow2016deep,collobert2008unified}. The concatenation of character, word and lexical vectors is fed into the NE segmentation and categorization tasks. We use a single-neuron layer with a sigmoid activation function for the secondary NE segmentation task, whereas for the primary NE categorization task, we employ a 13-neuron\footnote{~Using BIO encoding, each of the 6 classes will have a \textit{begin} and \textit{inside} version (e.g. B-product, I-product).} layer with a softmax activation function. Finally, we add the losses from both tasks and feed the total loss backward during training. \subsection{Sequential Inference} \label{crf} The multi-task network predicts probabilities for each token in the input sentence individually. Thus, those individual probabilities do not account for sequential information. 
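As a recap before the sequential step, a rough Keras-style sketch of the multi-task head described in Section \ref{model} is given below. The size of the common dense layer and of the encoder outputs are assumptions made for illustration, and the encoders themselves are omitted.

\begin{verbatim}
from keras.layers import Input, Dense, concatenate
from keras.models import Model

# Assumed encoder output sizes (illustrative only).
char_vec = Input(shape=(64,))    # character-level CNN output
word_vec = Input(shape=(200,))   # BLSTM output (100 per direction)
lex_vec  = Input(shape=(32,))    # lexicon dense-layer output

# Common dense layer whose weights later feed the CRF.
shared = Dense(100, activation='relu')(
    concatenate([char_vec, word_vec, lex_vec]))

seg = Dense(1, activation='sigmoid', name='segmentation')(shared)
cat = Dense(13, activation='softmax', name='categorization')(shared)

model = Model([char_vec, word_vec, lex_vec], [seg, cat])
# Keras sums the two losses during training (the joint objective).
model.compile(optimizer='adamax',
              loss={'segmentation': 'binary_crossentropy',
                    'categorization': 'categorical_crossentropy'})
\end{verbatim}

The per-token outputs of such a model are the individual probabilities referred to above.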
We exploit the sequential information by using a Conditional Random Fields\footnote{~Python CRF-Suite library: \url{https://github.com/scrapinghub/python-crfsuite}} classifier over those probabilities. This allows us to jointly predict the most likely sequence of labels for a given sentence instead of performing a word-by-word prediction. More specifically, we take the weights learned by the multi-task neural network and use them as features for the CRF classifier (see Figure \ref{fig:system_overview}). Taking weights from the common dense layer captures both the segmentation and categorization features. \begin{figure} \includegraphics[width=\linewidth]{system.png} \caption{ \small Overall system design. First, the system embeds a sentence into a high-dimensional space and uses CNN, BLSTM, and dense encoders to extract features. Then, it concatenates the resulting vectors of each encoder and performs the multi-task prediction. The top left single-node layer represents segmentation (red) while the top right three-node layer represents categorization (blue). Finally, a CRF classifier uses the weights of the common dense layer to perform a sequential classification. } \label{fig:system_overview} \end{figure} \section{Experimental Settings} We preprocess all the datasets by replacing the URLs with the token \textless URL\textgreater~before performing any experiment. Additionally, we use half of the development set as validation and the other half as evaluation. Regarding the network hyper-parameters, in the case of the CNN, we set the kernel size to 3 on both convolutional layers. We also use the same number of filters on both layers: 64. Increasing the number of filters and the number of convolutional layers yields worse results and takes significantly more time. In the case of the BLSTM architecture, we add dropout layers before and after the Bidirectional LSTM layers with dropout rates of 0.5. The dropout layers allow the network to reduce overfitting \cite{Srivastava:2014:DSW:2627435.2670313}. We also tried using a batch normalization layer instead of dropouts, but the experiment yielded worse results. The training of the whole neural network is conducted using a batch size of 500 samples and 150 epochs. Additionally, we compile the model using the AdaMax optimizer \cite{DBLP:journals/corr/KingmaB14}. Accuracy and F1-score are used as evaluation metrics. For sequential inference, the CRF classifier uses L-BFGS as a training algorithm with L1 and L2 regularization. The penalties for L1 and L2 are $1.0$ and $10^{-3}$, respectively. \begin{table} \resizebox{\columnwidth}{!}{% \centering \begin{tabular}{|l|r|r|r|} \hline \bf Classes & \bf Precision (\%)& \bf Recall (\%) & \bf F1 (\%) \\ \hline\hline corporation & 35.71 & 29.41 & 32.26 \\ creative-work & 60.00 & 5.26 & 9.68 \\ group & 30.00 & 12.00 & 17.14 \\ location & 65.71 & 56.10 & 60.53 \\ person & 83.98 & 62.04 & 71.36 \\ product & 39.29 & 15.71 & 22.45 \\ \hline \hline \bf Entity & 72.16 & 43.30 & 54.12 \\ \bf Surface & 68.38 & 95.05 & 79.54 \\ \hline \end{tabular} } \caption{\small \label{exp_dev_crf} This table shows the results from the CRF classifier at the class level. The classification is conducted using the development set as both validation and evaluation.} \end{table} \section{Results and Discussion} We compare the results of the multi-task neural network itself and the CRF classifier on each of our experiments. The latter always shows the best results, which emphasizes the importance of sequential information.
The results of the CRF, using the development set, are in Table \ref{exp_dev_crf}. Moreover, the addition of a secondary task allows the CRF to use more relevant features from the network, improving its results from an F1-score of 52.42\% to 54.12\%. Our finding that a multi-task architecture is generally preferable to a single-task architecture is consistent with prior research~\cite{sogaard2016deep,collobert2008unified,attia-EtAl:2016:CogALex-V,maharjan-EtAl:2017:EACLlong}. We also study the relevance of our features by performing multiple experiments with the same architecture and different combinations of features. For instance, removing gazetteers from the model drops the results from 54.12\% to 52.69\%. Similarly, removing POS tags gives worse results (51.12\%). Among many combinations, the feature set presented in Section \ref{feature_rep} yields the best results. \begin{table} \resizebox{\columnwidth}{!}{% \centering \begin{tabular}{|l|r|r|r|} \hline \bf Classes & \bf Precision (\%) & \bf Recall (\%) & \bf F1 (\%) \\ \hline\hline corporation & 31.91 & 22.73 & 26.55 \\ creative-work & 36.67 & 7.75 & 12.79 \\ group & 41.79 & 16.97 & 24.14 \\ location & 56.92 & 49.33 & 52.86 \\ person & 70.72 & 50.12 & 58.66 \\ product & 30.77 & 9.45 & 14.46 \\ \hline \hline \bf Entity & 57.54 & 32.90 & 41.86 \\ \bf Surface & 56.31 & 31.31 & 40.24 \\ \hline \end{tabular} } \caption{\small \label{test_results} This table shows the final results of our submission. The hardest class to predict is \textit{creative-work}, while the easiest is \textit{person}.} \end{table} The final results of our submission to the WNUT-2017 shared task are shown in Table \ref{test_results}. Our approach obtains the best results for the \textit{person} and \textit{location} categories. It is less effective for \textit{corporation}, and the most difficult categories for our system are \textit{creative-work} and \textit{product}. Our intuition is that the latter two classes are the most difficult to predict because their vocabularies grow faster and follow less restrictive patterns than the rest. For instance, products can have any combination of letters and numbers in their names, and creative works can contain as many words as their titles hold (e.g. names of movies, books, songs, etc.). Regarding the shared-task metrics, our approach achieves a 41.86\% F1-score for entities and 40.24\% for surface forms. Table \ref{all-scores} shows that our system yields similar results to the other participants on both metrics. In general, the final scores are low, which underscores the difficulty of the task and shows that the problem is far from solved. \begin{table} \small \centering \begin{tabular}{|l|r|r|} \hline \bf Participants & \bf F1 - E (\%) & \bf F1 - SF (\%) \\ \hline \hline MIC-CIS & 37.06 & 34.25 \\ Arcada & 39.98 & 37.77 \\ Drexel-CCI & 26.30 & 25.26 \\ SJTU-Adapt & 40.42 & 37.62 \\ FLYTXT & 38.35 & 36.31 \\ SpinningBytes & 40.78 & 39.33 \\ \bf{UH-RiTUAL} & \bf{41.86} & \bf{40.24} \\ \hline \end{tabular} \caption{\small \label{all-scores} The scores of all the participants in the WNUT-2017 shared task. The metrics of the shared task are entity and surface form F1-scores. Our results are highlighted. } \end{table} \section{Error Analysis} By evaluating the errors made by the CRF classifier, we find that NE boundaries are a problem. For instance, when an NE is preceded by an article starting with a capital letter, the model includes the article as if it were part of the NE.
This behavior may be caused by the capitalization features captured by the CNN network. Similarly, if an NE is followed by a conjunction and another NE, the classifier tends to join both NEs as if the conjunction were part of a single unified entity. Another common problem shown by the classifier is that fully-capitalized NEs are disregarded most of the time. This pattern may be related to the domain shift between the training and testing phases. For instance, some informal Twitter abbreviations\footnote{~E.g. \textit{LOL} is an informal social media expression that stands for \textit{Laughing Out Loud}, which is not an NE.} may appear fully capitalized but do not represent NEs, whereas on Reddit and Stack Overflow fully-capitalized words are more likely to describe NEs. \section{Conclusion} We show that our multi-task neural network is capable of extracting relevant features from noisy user-generated text. We also show that a CRF classifier can boost the neural network results because it uses the whole sentence to predict the most likely set of labels. Additionally, our approach emphasizes the importance of POS tags in conjunction with gazetteers for NER tasks. Twitter word embeddings and orthographic character embeddings are also relevant for the task. Finally, our ongoing work aims at improving these results by gaining a better understanding of the strengths and weaknesses of our model. We also plan to evaluate the current system on related tasks where noise and emerging NEs are prevalent.
\section{Introduction} Analytic approximations and perturbation methods are in common use in different branches of general relativity theory. The most common methods are the post-Newtonian approximations, able to deal with the gravitational field of any system in the non-relativistic limit, the post-Minkowskian approximations, which are appropriate for relativistic systems in the weak gravitational field regime, and the perturbation formalisms, which expand (at linear order in general) around some exact solution of the Einstein field equations. The B3 session {\em Analytic approximations, perturbation methods, and their applications} of the GR18 conference was dominated by issues more or less directly related to the problem of detecting gravitational waves with detectors that are currently operating or planned for the near future. Among the most important targets for gravitational-wave detectors are coalescing binary systems made of compact objects of different kinds. These sources produce ``chirps'' of gravitational radiation whose amplitude and frequency increase in time. For successful detection of the chirp signals and extraction of their astrophysically important parameters it is crucial to theoretically predict gravitational waveforms with sufficient accuracy. Among the important sources for the future space-based LISA detector are extreme-mass-ratio binaries, consisting of a small compact body (of stellar mass) orbiting a supermassive black hole. The need for accurate modelling of the orbital dynamics of such binaries motivates some recent work on the problem of calculating the gravitational self-force experienced by a point particle moving in the background space-time of a more massive body. Several talks in the session reported progress on different aspects of gravitational self-force computations. These were talks by {\em S~Detweiler}, {\em M~Favata}, {\em J~L~Friedman}, {\em W~Hikida}, {\em N~Sago}, and {\em B~F~Whiting}. For comparable-mass binaries, computations based on the post-Newtonian approximation of general relativity are useful for constructing gravitational-wave templates for data-analysis purposes. The post-Newtonian approximation describes with great accuracy the inspiral phase of these systems. The analytic post-Newtonian results are currently matched to recent numerical calculations of the merger and ring-down phases of black hole binaries \cite{Buonanno&Cook&Pretorius2007,Boyle&al2007}. The contributed talks by {\em L~Blanchet}, {\em B~R~Iyer}, and {\em M~Vas\'uth} presented recent results on incorporating spin-orbit effects (including corrections beyond the leading order) as well as on the generalization to eccentric orbits (most of the explicit analytic results concern circular orbits). Recently a new approach to the perturbative solution of the problem of motion and radiation in general relativity was developed. This is the approach pioneered by Goldberger and Rothstein \cite{Goldberger&Rothstein2006}, in which effective field theory methods are used to describe the dynamics of nonrelativistic extended objects coupled to gravity. {\em B~Kol} discussed this approach as well as some of its applications. {\em A~Tartaglia} presented a new semi-analytic method for computing the emission of gravitational waves by a compact object moving in the background of a more massive body. Several other topics were tackled in the contributed talks.
{\em C~L\"ammerzahl} discussed the influence of the cosmic expansion on the physics in gravitationally bound systems. {\em D~Singh} presented an analytic perturbation approach for classical spinning particle dynamics, based on the Mathisson-Papapetrou-Dixon equations of motion. {\em H~Sotani} studied gravitational radiation from collapsing magnetized dust. {\em A~S~Kubeka} remarked on the computation of the Ricci tensor for non-stationary axisymmetric space-times. In the rest of this article all talks contributed to the B3 session are sketched in more detail (in the order in which they were presented at the conference). \section{Contributed talks} \subsection{{\em Self-force analysis in extreme-mass-ratio inspiral} by S~Detweiler and I~Vega (reported by S~Detweiler)} The motion of a small object of mass $m$ orbiting a supermassive black hole of mass $M$ deviates slightly from a geodesic and has an acceleration that scales as the ratio $m/M$ of the masses. This acceleration includes the dissipative effects of radiation reaction and is said to result from the gravitational self-force acting on $m$ \cite{Poisson2004}. As an alternative, the effects of the self-force may be described as geodesic motion in an appropriately regularized metric of the perturbed spacetime \cite{Detweiler2005}. The LISA effort requires accurate gravitational wave templates for data analysis. For extreme-mass-ratio inspirals the templates should include both the dissipative and conservative effects of the self-force. The talk described a novel, efficient method for simultaneously calculating both the gravitational self-force as well as its effect on the gravitational waveform. The Authors replaced the usual singular point source with a distributed, abstract, analytically determined source. The resulting perturbation in the field from this special distributed source is guaranteed to be differentiable at the location of the particle and to provide the appropriate self-force effect on the motion of the particle. At the same time, the field from the distributed source is identically equal to the actual perturbed field in the wave zone. Thus, this abstract field simultaneously provides both the self-force acting on a point source and also the effect of the self-force on the waveform of the radiation. \subsection{{\em The adiabatic approximation and three-body effects in extreme-mass-ratio inspirals} by M~Favata} Extreme-mass-ratio inspirals (EMRIs) are an important class of LISA sources consisting of a compact object inspiralling into a supermassive black hole. The detection of these sources and the precision measurement of their parameters relies on the accurate modeling of their orbital dynamics. A precise description of the binary's orbit requires an evaluation of the compact object's self-force. The adiabatic approximation (more appropriately referred to as the {\em radiative approximation} \cite{Pound&Poisson2007a,Pound&Poisson2007b}), consists of computing the time-averaged rates of change of the three conserved quantities of geodesic motion. Its use greatly simplifies the computation of the orbital evolution. However, the adiabatic approximation ignores corrections to the conservative dynamics proportional to the mass $m$ of the compact object. These `post-adiabatic' corrections will affect the binary's positional orbital elements, for example, by causing $O(m)$ corrections to the pericenter precession rate. 
Using a toy model of an electric charge orbiting a central mass and perturbed by the electromagnetic self-force, Pound, Poisson, and Nickel \cite{Pound&Poisson&Nickel2005} have called into question the accuracy of the adiabatic approximation, especially for eccentric orbits. In order to estimate the size of the post-adiabatic phase errors in the gravitational case, the Author presented an analytical computation, accurate to second post-Newtonian order, of the small-eccentricity corrections to the gravitational-wave phase. These post-Newtonian eccentricity corrections to the phase can be significant not only for EMRIs but for other binary sources as well. Using this phase expansion it was found that the post-adiabatic phase errors are sufficiently small that waveforms based on the adiabatic approximation can be used for EMRI detection, but not for precise parameter measurements. The Author also discussed the effect of a third mass orbiting the EMRI. The analysis models the system as a hierarchical triple using the equations of motion of Blaes, Lee, and Socrates \cite{Blaes&Lee&Socrates2002}. To have a measurable effect on the EMRI's waveform, the distant mass must be sufficiently close to the compact object that both the inner and outer binaries would be detected as EMRIs. Such ``double-EMRI'' systems are rare. \subsection{{\em Extreme-mass-ratio binary inspiral in a radiation gauge} by~J~L~Friedman, T~S~Keidl, Dong-Hoon~Kim, E~Messaritaki and A~G~Wiseman (reported by~J~L~Friedman)} Gravitational waves from the inspiral of a stellar-size black hole of mass $m$ to a supermassive black hole of mass $M$ can be accurately computed by treating the smaller black hole as a point particle moving in a Kerr background. To find the particle's orbit to first order in the mass ratio $m/M$, one must compute the self-force. The computation requires a renormalization, but the well-known MiSaTaQuWa prescription \cite{Mino&Sasaki&Tanaka1997,Quinn&Wald1997} involves a harmonic gauge, a gauge that is not well suited to perturbations of the Kerr geometry. In a harmonic gauge, one solves ten coupled PDEs, instead of the single decoupled Teukolsky equation for the gauge-invariant components ($\psi_0$ or $\psi_4$) of the perturbed Weyl tensor. The talk reported progress in finding the renormalized self-force from $\psi_0$ or $\psi_4$. Following earlier work by Chrzanowski and by Cohen and Kegeles, a radiation gauge was adopted to reconstruct the perturbed metric from the perturbed Weyl tensor. The Weyl tensor component is renormalized by subtracting a singular part obtained using a recent Detweiler-Whiting version \cite{Detweiler&Whiting2003} of the singular part of the perturbed metric as a local solution to the perturbed Einstein equations. The Authors' method relies on the fact that the corresponding renormalized $\psi_0$ is a {\em source-free} solution to the Teukolsky equation. One can then reconstruct a nonsingular renormalized metric in a radiation gauge, a gauge that exists only for vacuum perturbations. More details can be found in Ref.\ \cite{Keidl&al2007}. \subsection{{\em Adiabatic evolution of three `constants' of motion in Kerr spacetime} by~W~Hikida, K~Ganz, H~Nakano, N~Sago and T~Tanaka (reported by~W~Hikida)} General orbits of a particle of small mass around a Kerr black hole are characterized by three parameters: the energy, the angular momentum, and the Carter constant.
For the energy and angular momentum, one can evaluate the change rates from the corresponding fluxes at infinity and through the event horizon, according to the balance argument. On the other hand, for the Carter constant, one cannot use the balance argument because the conserved current associated with it is not known. Recently Mino proposed a new method of evaluating the averaged change rate of the Carter constant by using the radiative field. The Authors developed a simplified scheme for practical evaluation of the evolution of the Carter constant based on Mino's proposal. In the talk this scheme was described in some detail, and a derivation of explicit analytic formulae for the change rates of the energy, the angular momentum, and the Carter constant was presented. Some numerical results for large eccentric orbits were also shown. For more details see Ref.\ \cite{Ganz&al2007}. \subsection{{\em Gravitational self-force on a particle orbiting a Schwarzschild black hole} by~L~Barack and N~Sago (reported by~N~Sago)} In the talk the calculation of the gravitational self-force acting on a pointlike particle moving around a Schwarzschild black hole was presented. The calculation was done in the Lorenz gauge. First, the Lorenz-gauge metric perturbation equations were solved directly using numerical evolution in the time domain. Then the back-reaction force from each of the multipole modes of the perturbation was computed. Finally, the {\em mode sum} scheme was applied to obtain the physical self-force. The temporal component of the self-force describes the rate of loss of orbital energy. As a check of their scheme, the Authors compared their result for this component with the total flux of gravitational-wave energy radiated to infinity and through the event horizon. The radial component of the self-force was also calculated. From their result for the self-force, the Authors computed the correction to the orbital frequency due to the gravitational self-force, taking into account both the dissipative and the conservative effects. More details can be found in Ref.\ \cite{Barack&Sago2007}. \subsection{{\em Mobile quadrupole as a semianalytic method for gravitational-wave emission} by~A~Tartaglia, M~F~De~Laurentis, A~Nagar and N~Radicella (reported by~A~Tartaglia)} The quadrupole formula is the simplest approximation for studying the gravitational-wave emission from a binary system. The formula gives its best performance for quasi-circular and quasi-stationary motion of the emitters. Whenever the motion is far from the quasi-circular approximation, other semi-analytic methods or numerical calculations of growing complexity must be used. In the talk a situation was studied where the gravitational wave is emitted by a concentrated object accelerating in the background field of a central mass. Provided one knows the background metric, the gravitational wave represents a first-order perturbation of it, so that the space-time trajectory of each object of the pair is almost geodesic. Once the equations of a geodesic are written, the motion can be thought of as an instantaneous rotation around an (instantaneously at rest) curvature centre for the space trajectory. In this situation the quadrupole formula is easily applicable at each place along the geodesic, after calculating the curvature and the equivalent angular velocity as the ratio between the three-speed and the curvature radius. Everything reduces to a problem of ordinary geometry.
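Schematically (our illustration of the leading-order ingredient, not the Authors' exact formulation): if $v$ is the three-speed and $R_c$ the local curvature radius of the space trajectory, the equivalent angular velocity is $\omega=v/R_c$, and the standard quadrupole luminosity for a mass $m$ in instantaneous circular motion, \[ P \simeq \frac{32}{5}\,\frac{G}{c^5}\,m^2 R_c^4\,\omega^6 , \] can then be evaluated locally at each point of the geodesic.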
The energy emission rate and the waveforms obtained in this way must simply be converted from the local time to the time of a faraway inertial observer. The approach was applied to the capture of a mass by a Kerr black hole. The method is computationally much lighter than numerical relativity, while giving comparable results. For the research results relevant to the presented approach see Refs.\ \cite{Dymnikova1977,Dymnikova&Popov1980,Schnittma&Bertschinger2004}. \subsection{{\em The non-radiated multipoles in the perturbed Kerr spacetime} by~L~R~Price and B~F~Whiting (reported by~B~F~Whiting)} For the self-force problem in general relativity, it has been shown that the perturbed metric produced by a finite-mass test point particle has a singular part which exerts no influence on the particle, while the self-force which the particle experiences arises entirely due to a metric perturbation which is smooth at the location of the particle \cite{Detweiler&Whiting2003}. However, metric reconstruction from the perturbed Weyl tensor is unable to yield perturbations for the non-radiated multipoles in Petrov type II spacetimes, such as that surrounding the Kerr black hole \cite{Whiting&Price2005}. In the talk a new form of the perturbed Einstein equations, developed by the Authors on the basis of the Newman-Penrose formalism, was presented. With its assistance, progress towards filling the low-multipole gap, which will contribute to the calculation of regularization parameters for the self-force problem, was discussed. \subsection{{\em Higher-order spin effects in the radiation field of compact binaries} by~L~Blanchet, A~Buonanno and G~Faye (reported by~L~Blanchet)} The talk discussed an investigation, motivated by the search for gravitational waves emitted by binary black holes, of the gravitational radiation field of compact objects with spins. The approach is based on the multipolar post-Newtonian wave generation formalism and on the formalism of point particles with spin (Papapetrou-Dixon-Bailey-Israel). The Authors computed: (i) the spin-orbit coupling effect in the binary's equations of motion one post-Newtonian (PN) order beyond the dominant effect (confirming a previous result by Tagoshi et al.\ \cite{Tagoshi&Ohashi&Owen2001}), (ii) the spin-orbit coupling effects in the binary's mass and current quadrupole moments at the same order, (iii) the spin-orbit contributions in the gravitational-wave energy flux, and (iv) the secular evolution of the binary's orbital phase up to 2.5PN order (for maximally rotating black holes). Previous results on the spin-orbit effect at the lowest order were computed in Refs.\ \cite{Kidder&Will&Wiseman1993,Kidder1995}. Crucial ingredients for obtaining the next-order 2.5PN contribution in the orbital phase are the binary's energy and the spin precession equations. These results provide more accurate gravitational-wave templates to be used in the data analysis of rapidly rotating Kerr-type black-hole binaries with the ground-based interferometric detectors and the space-based detector LISA. Details of the presented results were published in Refs.\ \cite{Faye&Blanchet&Buonanno2006,Blanchet&Buonanno&Faye2006}.
\subsection{{\em The 3PN gravitational wave luminosity from inspiralling compact binaries in~eccentric orbits} by~K~G~Arun, L~Blanchet, B~R~Iyer and M~S~S~Qusailah (reported by~B~R~Iyer)} Some details of the computation of the complete gravitational-wave luminosity of inspiralling compact binaries on quasi-elliptical orbits up to the third post-Newtonian (3PN) order using the multipolar post-Minkowskian formalism were presented. There are two types of contributions to the gravitational-wave luminosity at 3PN order: the instantaneous-type terms, which depend on the dynamics of the binary only at the retarded instant, and the hereditary terms, which are sensitive to the dynamics of the system over its entire past. The new inputs for the calculation of the 3PN instantaneous terms include the mass octupole and current quadrupole at 2PN for general orbits and the 3PN-accurate mass quadrupole. Using the 3PN quasi-Keplerian representation of elliptical orbits obtained recently \cite{Memmesheimer&Gopakumar&Schafer2004}, the flux is averaged over the binary's orbit. The hereditary terms have `tail', `tail of tail' and `tail-squared' contributions which are computed using a semi-analytic procedure extending the earlier 1.5PN work of Blanchet and Sch\"afer \cite{Blanchet&Schafer1993}. This semi-analytic extension uses the 1PN quasi-Keplerian parametrisation of the binary and exploits the doubly periodic nature of the orbital motion. The final 3PN-accurate energy flux averaged over the binary's orbit was presented in modified harmonic coordinates (which contain no logarithmic terms) and in ADM coordinates. A gauge-invariant expression for the flux was also provided in terms of the orbital frequency and the periastron precession constant. The results are consistent with those obtained by perturbation theory in the test-particle limit to order $e_t^2$ (where $e_t$ is the so-called time eccentricity) and with the 3PN circular-orbit results. These results form the starting input for the construction of templates for inspiralling binaries in quasi-eccentric orbits, an astrophysically possible class of sources for both ground-based and space-based gravitational-wave interferometers. \subsection{{\em On the influence of the cosmic expansion on the physics in gravitationally bound systems} by~C~L\"ammerzahl and H~Dittus (reported by~C~L\"ammerzahl)} It is an old question whether the cosmological expansion influences the dynamics of gravitationally bound systems \cite{Lammerzahl&Preuss&Dittus2007}. Though it has sometimes been claimed that the expansion will tear apart gravitationally bound systems, the majority of papers covering this issue derive no measurable influence. In the talk some additional arguments for the latter conclusion were given. It was shown that (i) the gravitational field created by an isolated body feels only a tiny influence, (ii) planetary orbits are also practically inert to the expansion, and (iii) Doppler tracking of satellites in deep space is also only marginally influenced by the cosmic expansion. \subsection{{\em Spin evolution in binary systems} by~M~Vas\'uth and J~Maj\'ar (reported~by~M~Vas\'uth)} Gravitational waves emitted by compact binary systems are characterized by different parameters of the binary. Among them, the effects of rotation of the orbiting bodies appear at 1.5 post-Newtonian (PN) order in both the dynamical description and the wave-generation problem.
In the talk the evolution of the individual spins of the bodies was discussed for compact binaries in circular and general eccentric orbits. For a 2PN description the spin precession equations were analyzed up to 0.5PN order. At the lowest order the angles between the total angular momentum and the spin vectors are constant, and the spin-spin interaction causes an additional harmonic dependence. The true anomaly parameterization proved to be useful in the description of eccentric orbits. In both the circular and the general case, linear and harmonic dependences of the angles describing the orientation of the spins were found. \subsection{{\em Matched asymptotic expansion as a classical effective field theory} by~B~Kol} The talk explained how the method of {\em classical effective field theory}, borrowed from {\em quantum field theory} by Goldberger and Rothstein \cite{Goldberger&Rothstein2006} in the context of the motion of a compact object within a background whose typical length scale is much larger, is equivalent to {\em matched asymptotic expansion}, and moreover how it offers additional insight. Feynman diagrams, divergences, (dimensional) regularization, counter-terms and the Feynman gauge all appear in this framework. The ideas were demonstrated for the case of caged black holes (black holes within a compact dimension). Within this method the source is replaced by a ``black box'' effective action. Another application of these ideas is to the inspiral problem of a binary system. The Author presented a computation, using high-energy-physics methods, of the radiation-reaction force for the case of a scalar field. \subsection{{\em An analytic perturbation approach for classical spinning particle dynamics} by~D~Singh} The Author presented a perturbation method to analytically describe the dynamics of a classical spinning particle, based on the Mathisson-Papapetrou-Dixon equations of motion. By a power series expansion with respect to the particle's spin magnitude, it was demonstrated how to obtain an analytic representation of the particle's kinematic and dynamical degrees of freedom that is formally applicable to infinite order in the expansion parameter. Within this formalism, it is possible to identify a classical analogue of radiative corrections to the particle's mass and spin due to the spin-gravity interaction. The robustness of this approach was demonstrated by showing how to explicitly compute the first-order momentum and spin tensor components for arbitrary particle motion in a general space-time background. Potentially interesting applications based on this perturbation approach were also considered. For more details see Ref.\ \cite{Singh2007}. \subsection{{\em Gravitational radiation from collapsing magnetized dust} by~H~Sotani, S~Yoshida and K~D~Kokkotas (reported by~H~Sotani)} The Authors studied the influence of magnetic fields on the axial gravitational waves emitted during the collapse of a homogeneous dust sphere. It was found that while the emitted energy depends weakly on the initial matter perturbations, it depends strongly on the strength and the distribution of the magnetic field perturbations. The gravitational-wave output of such a collapse can be up to an order of magnitude larger or smaller, calling for detailed numerical 3D studies of collapsing magnetized configurations. More details are given in Ref.\ \cite{Sotani&al2007}.
\subsection{{\em Gravitational waveforms for compact binaries} by~M~Vas\'uth and J~Maj\'ar (reported by~M~Vas\'uth)} Among the promising sources of gravitational radiation are binary systems of compact stars. The detectable signal is characterized by different parameters of the system, e.g., the rotation of the bodies and the eccentricity of the orbit. The Authors presented a method to evaluate the gravitational-wave polarization states for inspiralling compact binaries and considered eccentric orbits and the spin-orbit contribution in the case of two spinning objects up to 1.5 post-Newtonian order. In the circular-orbit limit the presented results are in agreement with existing results. For more details see Ref.\ \cite{Vasuth&Majar2007}. \subsection{{\em On the Ricci tensor for non-stationary axisymmetric space-times} by~A~S~Kubeka} The results on the Ricci tensor for non-stationary axisymmetric space-times determined by Chandrasekhar \cite{Chandrasekhar1975} have been found to be incorrect in both the linear and the non-linear regimes. However, the incorrectness of the Ricci tensor does not affect the well-known results on linear perturbations of the Schwarzschild black hole solution. \section*{Acknowledgments} The MV contribution was supported by OTKA grant No.\ F049429. The work of SD, LP, IV and BW was supported by NSF Grant No.\ PHY-0555484. \section*{References}
\section*{Methods} Experiments were performed on Bi$_2$Se$_3$ single crystals grown by directional slow solidification in an inclined ampoule and cleaved \textit{in-situ} along the (111) plane in a vacuum of $5\times 10^{-11}$ torr. The Au(111) surface was prepared by \textit{in-situ} evaporation on a clean W(110) substrate according to standard methods. High resolution spin-integrated ARPES data (Fig.~1(b,c)) were taken at beamline 4.0.3 at the Advanced Light Source with 35 eV linearly p-polarized photons, at a sample temperature of $<20$~K. The energy and momentum resolutions were $\sim$ 13 meV and 0.005 \AA$^{-1}$, respectively. Spin-resolved ARPES data were taken with 5.99 eV laser light and a high-efficiency spin-resolved spectrometer utilizing the time-of-flight (TOF) technique and low-energy exchange-scattering techniques.\cite{Jozwiak2010} These data were taken at a sample temperature of $\sim$ 80 K, with instrumental energy and momentum resolutions of $\sim$ 15 meV and 0.02 \AA$^{-1}$, respectively. The spectrometer acquires data as a function of binding energy in parallel, allowing high resolution full energy distribution curves (EDCs) to be acquired in 2-3 minutes, as opposed to several hours with conventional spin-resolved ARPES systems, thus precluding any surface aging effects (e.g. vacuum or laser exposure) during acquisition and enabling the wide coverage of experimental parameter space in the experiment. Full two-dimensional energy-momentum polarization maps (Figs.~3(b,c)~and~4(c,d,g,h)) are made up of 20-30 individual EDCs. Each pair of maps (e.g. Fig.~4(c,d)) is taken simultaneously, alternating photon polarization after each EDC, such that photon polarization dependence in a pair of maps cannot be due to surface aging. The momentum, or $\mathbf{k}$-vector, probed in an EDC is scanned by rotating the crystal about the $y$ or $x$ axes, while the photon beam, photoelectron collection angle, and spin analysis axis are all held fixed, as shown in Fig.~1(f). \section{Experimental setup} The spin-integrated data of Fig.~1(a,b) were taken at the Merlin beamline, BL4.0.3, of the Advanced Light Source (Berkeley, CA, USA), using a commercial Scienta R8000 hemispherical analyzer with a photon energy $h\nu=35$~eV and linear $p$-polarization. All other data, both spin-resolved and spin-integrated, were taken with the `spin-TOF' spectrometer with a laser-based light source in the geometry shown in Fig.~S1, as discussed below. The laser source is a cavity-dumped, mode-locked Ti:Sapphire oscillator pumped by a 6 W frequency-doubled Nd:YVO$_4$ laser. This oscillator generates $\sim$ 150 fs pulses, tuned to 828 nm at a repetition rate of 54.3/$n$~MHz, where $n$ is an integer. For the present work, $n$ was set to 10, for a repetition rate of $\sim$ 5 MHz -- this approaches the maximum repetition rate compatible with the `spin-TOF' spectrometer in typical conditions. The oscillator output is frequency-quadrupled through cascaded, type-I phase-matched second harmonic generation in two beta barium borate (BBO) crystals of 2 and 5~mm thicknesses, producing a 207 nm (5.99 eV) beam, with a measured bandwidth of $\sim$~4~meV and pulse lengths estimated to be several picoseconds. The polarization of the 6 eV beam is straightforwardly controlled with zero-order half-wave and quarter-wave plates, providing linearly and circularly polarized light, respectively. The beam is then focused into the main vacuum chamber through a UV-grade fused silica viewport and onto the sample surface.
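As a consistency check on these numbers, the quadrupled photon energy follows directly from $E_{h\nu}=hc/\lambda$ with $hc\approx1239.84$~eV\,nm: \[ E_{h\nu} \approx \frac{1239.84~\mathrm{eV\,nm}}{207~\mathrm{nm}} \approx 5.99~\mathrm{eV} , \] i.e., four times the $\approx$1.50~eV (828~nm) fundamental.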
The experimental geometry is schematically shown in Fig.~S1. The $x$, $y$, and $z$ axes reference a fixed coordinate system in the lab, with the origin located at the simultaneous intersection of the sample surface, the photon beam, and the electron-optical axis of the photoelectron spectrometer. The photon beam is incident in the $xz$ plane, shaded gray in the figure, at a fixed 45$^{\circ}$ angle from the $x$ axis. The photon beam, when linearly polarized, can have its polarization vector oriented at any angle $\alpha_0$ between $p$- and $s$-polarization geometries, as shown. The photon polarization vector is aligned within the $xz$ plane for $p$-polarization and along the $y$ axis for $s$-polarization, as defined in the present manuscript. Circular polarizations of either helicity can also be selected, with right-hand circularly polarized (RCP) light defined in the figure. Photoelectrons emitted along the $z$ axis are collected by the spectrometer, with an angular acceptance of $\sim \pm$1$^{\circ}$. This translates to a momentum resolution of $\sim$ 0.02 \AA$^{-1}$. Selecting the momentum to be probed requires rotation of the sample surface with respect to the fixed spectrometer. With the sample surface aligned to the fixed $xy$ plane as drawn in Fig.~S1, emission at $\Gamma$, or $(k_x,k_y) = (0,0)$, is probed. The value of $k_x$ is scanned by rotating the sample about the $y$ axis (the $\theta$ rotation in Fig.~S1), while the value of $k_y$ is scanned by rotating the sample about the $x$ axis (the $\beta$ rotation in Fig.~S1). \begin{figure*} \includegraphics[width=14cm]{Fig1_si} \caption{\label{fig:geo} \textbf{Experimental geometry.} Schematic diagram of the experimental geometry. The $x$, $y$, and $z$ axes reference a fixed coordinate system. The photon beam is incident within the $xz$-plane, at a fixed angle from the $x$ axis. Photoelectrons emitted along the fixed $z$ axis, shown by the black arrow, are collected by the spectrometer, which is sensitive to spin along the $y$ and $z$ axes. The photons can be linearly polarized with any orientation between $p$- and $s$-polarizations, defined by the angle $\alpha_0$. The photons can also be circularly polarized, with either helicity. Right-hand circularly polarized light (RCP) is shown. } \end{figure*} The `spin-TOF' spectrometer is a custom-built spin-resolved photoelectron spectrometer for high efficiency acquisition of spin-resolved photoelectron energy distribution curves (EDCs) with high energy and angular resolution. It is described in detail in Ref.~\onlinecite{Jozwiak2010}. Photoelectron kinetic energies are resolved in parallel through the `time-of-flight' (TOF) technique, in which the total photoelectron transit time from emission to detection is accurately measured, providing an energy resolution of $\sim$~15~meV. This technique requires the light source to be pulsed, and requires an adequate timing window between pulses, thus setting an upper limit on the laser repetition rate. In the present case, this limit is $\sim$ 10 MHz in standard conditions. Spin is resolved through differential measurement of the relative reflectivity of the photoelectrons scattered from the surface of a magnetic thin film. This is described in detail in Ref.~\onlinecite{Jozwiak2010} and references therein. Due to specific details of the scattering geometry used in the `spin-TOF' spectrometer, spin can be resolved along the fixed $y$ or $z$ axes, providing measurements of $P_y$ and $P_z$, as depicted in Fig.~S1.
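While the polarimeter details are given in Ref.~\onlinecite{Jozwiak2010}, the generic form of such a differential exchange-scattering measurement (our schematic summary, not a description specific to this instrument) is \[ P = \frac{1}{S_{\mathrm{eff}}}\,\frac{I_{+}-I_{-}}{I_{+}+I_{-}} \,, \] where $I_{\pm}$ are the reflected intensities recorded for the two (reversed) magnetization directions of the thin-film target and $S_{\mathrm{eff}}$ is the effective Sherman function of the scattering target.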
The directions of positive $P_y$ and $P_z$ as used in the manuscript are defined by the directions of the corresponding arrows. The overall efficiency of the spectrometer and laser allowed a high acquisition speed. This efficiency was critical for the current study, which included spin-resolved data across a wide parameter space -- spin was resolved along two axes (i.e. $P_y$ and $P_z$) and measured as a function of binding energy, momentum, and many photon polarizations. A high level of statistics was achieved for a single spin-resolved EDC in only 2-3 minutes, allowing spin-resolved data to be taken with multiple photon polarizations in rapid succession without the sample surface degrading or altering due to finite residual vacuum conditions (the vacuum chamber pressure was $\sim 6\times10^{-11}$~torr). Full spin-resolved maps were quickly acquired in $\sim$~1~hour, by measuring $\sim$~30 successive spin-resolved EDCs while scanning momentum (emission angle) in small steps. When acquiring such spin-resolved maps with different photon polarizations for direct comparison (i.e. Fig.~2(c,d,g,h) and Fig.~3(c,d)), the maps were acquired `interlaced', with an EDC at one $k$ (emission angle) being successively taken with each photon polarization before moving to the next $k$ (angle). This approach provides a better direct measure of the photon polarization dependence, free of any time-dependent effects that might be introduced if the maps were acquired separately, one after the other. \section{Extrinsic spin-polarization effects induced by spin-orbit coupling} Although often forgotten, it is known that spin polarization effects can occur in photoemission due to spin-orbit coupling, causing the spin polarization of the photoelectrons to be different from that of the corresponding initial states. This is most easily exemplified in cases where spin-polarized photoelectrons are measured from unpolarized initial states, such as unpolarized atoms or from spin-degenerate states in non-magnetic solids. The `Fano effect', in which photoelectrons from the $s$ orbitals of unpolarized atoms can be spin-polarized using circularly polarized light,\cite{Fano1969} is such an example. Less intuitively, it was also shown that photoelectrons emitted from spin-degenerate atomic subshells of orbital angular momentum $l > 0$ into well defined angular directions can also be spin-polarized, even when using linear and unpolarized light.\cite{Lee1974,Cherepkov1979} It should be noted that in these cases, the photoemission dipole operator considered does not actually change the orientation of the electron spin through the photoemission process.\cite{Cherepkov1979} Instead, the measured spin polarization results from effective spin-dependent photoemission matrix elements (SMEs) which effectively lead to selective emission of electrons with a particular spin orientation. As an example, the SME-induced photoelectron polarization vector for photoionization of atoms with linearly polarized light is\cite{Cherepkov1979} \begin{equation}\label{eqn:psme} \boldsymbol{\vec P}_{\mathrm{SME}} = \frac{2\xi \left( \boldsymbol{\hat{k}}_e \cdot \boldsymbol{\hat{\epsilon}} \right) }{1+\beta\left(\frac{3}{2}(\boldsymbol{\hat{k}}_e \cdot \boldsymbol{\hat{\epsilon}})^2 - \frac{1}{2}\right)}\left[ \boldsymbol{\hat{k}}_e \times \boldsymbol{\hat{\epsilon}} \right] \,, \end{equation} where $\boldsymbol{\hat{k}}_e$ and $\boldsymbol{\hat{\epsilon}}$ are the outgoing photoelectron and photon polarization unit vectors, respectively.
The denominator of Eqn.~(\ref{eqn:psme}) is due to the angular distribution of photoemission, where $\beta$ is the asymmetry parameter. The parameter $\xi$ reflects the interference between the possible $l+1$ and $l-1$ continuum photoelectron states and is the source of the spin dependence. Thus, this SME-induced polarization is due to the spin-orbit interaction. Like a great many other matrix-element-related effects in ARPES, $\boldsymbol{\vec P}_{\mathrm{SME}}$ is dependent on details of the initial and final photoelectron states, and therefore also on photon energy. Equation~(\ref{eqn:psme}) also shows that both the magnitude and orientation of $\boldsymbol{\vec P}_\mathrm{SME}$ are dependent on the orientations of the photon polarization and the outgoing photoelectrons. The geometrical terms in Eqn.~(\ref{eqn:psme}) ($\boldsymbol{\hat{k}}_e \cdot \boldsymbol{\hat{\epsilon}}$ and $\boldsymbol{\hat{k}}_e \times \boldsymbol{\hat{\epsilon}}$) are required by symmetry: parity conservation requires $\boldsymbol{\vec P}_\mathrm{SME}$ to be perpendicular to the reaction plane formed by $\boldsymbol{\hat{\epsilon}}$ and $\boldsymbol{\hat{k}}_e$, or more generally to any mirror planes of the complete system.\cite{Kesslerbook} For the case of circularly polarized light, the corresponding equation is more complicated, involving a component of $\boldsymbol{\vec P}_\mathrm{SME}$ perpendicular to the reaction plane formed by $\boldsymbol{\hat{k}}_e$ and the propagation vector of the photon flux, similar to above, and an additional component along the propagation vector of the photon flux that changes sign with the photon helicity.\cite{Cherepkov1979} In addition to being observed in atomic photoionization,\cite{Heinzmann1979} spin-polarized photoemission qualitatively described by Eqn.~(\ref{eqn:psme}) has been observed in solid-state photoemission from core levels of nonmagnetic systems, including the Cu $2p$ and $3p$ (Ref.~\onlinecite{Roth1994}), W $4f$ (Ref.~\onlinecite{Rose1996}), and Pt $4d$ and $4f$ levels (Ref.~\onlinecite{Yu2008}), as well as the Bi $5d$ levels in Bi$_2$Se$_3$.\cite{Jozwiak2011} Similar SMEs have been both predicted\cite{Tamura1987,Tamura1991,Tamura1991a,Henk1994} and observed\cite{Schmiedeskamp1988,Schmiedeskamp1991,Irmer1992,Irmer1994,Irmer1995,Irmer1996} in various forms in the valence bands of Pt and Au single crystals. We have previously shown that such SMEs induce spin-polarized photoemission from the bulk valence and conduction bands of Bi$_2$Se$_3$ as well,\cite{Jozwiak2011} which can significantly impact the spin-resolved ARPES data from this system. Figure~S2(a) shows an ARPES intensity map of Bi$_2$Se$_3$. Panel (b) shows the $y$ component of the measured photoelectron spin polarization curves, corresponding to the vertical line cuts marked in (a). Three curves are shown (corresponding to $k_x=0,\pm k_\textrm{F}$) at three different photon energies. Each data set in Fig.~S2(b) is taken in the geometry of Fig.~S1 with linear, $p$-polarized light. In the absence of SME-related effects, one expects to observe large values of $P_y$ due to the spin polarization of the helical spin texture of the topological surface state. This texture dictates a strongly $k_x$-dependent $P_y$, with $P_y$ at $k_x=+k_\textrm{F}$ reversed from $P_y$ at $k_x=-k_\textrm{F}$.
Following the symmetry requirements discussed above, any SME-induced spin polarization must be oriented along the $y$ axis, and is largely determined by the angle between the photoelectron emission direction and the photon polarization vector (from Eqn.~(\ref{eqn:psme}), which is qualitatively applicable here). As this angle stays fixed in the experiment while $k$ is scanned, any SME-induced spin polarization is effectively independent of $k$. \begin{figure*} \includegraphics[width=16cm]{Fig2_si} \caption{\label{fig:fig2} \textbf{The dependence of photoelectron background spin on photon energy and polarization.} (a) ARPES intensity map of Bi$_2$Se$_3$ as a function of binding energy and momentum, along $\Gamma$M. Taken with $h\nu=35$ eV. (b) The measured $y$ component of the photoelectron spin polarization, $P_y$, as a function of binding energy at a given momentum. Each panel contains curves corresponding to the momenta of the vertical cuts shown in (a), labeled by marker. Each panel corresponds to data taken with the specified photon energy, in the $p$-polarized light geometry. (c) Direct comparison of the photoelectron $P_y$ at $\Gamma$ ($k_x=0$) from (b) at each photon energy. (d) The photoelectron $P_y$ at $\Gamma$, measured with the laser ($h\nu=6$~eV), with various photon polarizations. } \end{figure*} Indeed, the $P_y$ curves in Fig.~S2(b) show both $k_x$-dependent and $k_x$-independent contributions. As the $k_x$-dependent contribution should have $P_y=0$ at $k_x=0$, the measured $P_y$ curves at $k_x=0$ (purple circles) can be taken as a measure of the $k_x$-independent contribution caused by SMEs. In the top panel, taken with $h\nu=36$~eV photons, the $k_x=0$ curve is non-zero at all binding energies, reflective of SMEs throughout the electronic structure, including the bulk. The $k$-dependent component of $P_y$, which is reflective of the surface state spin texture, can be seen in addition to this: the $P_y$ curve at $k_x=+k_\textrm{F}$ is the `inverse' of that at $k_x=-k_\textrm{F}$, approximately inverted about the $k_x=0$ curve. These two curves each exhibit two peaks, a maximum and a minimum, resulting from the opposite spin textures of the upper and lower halves of the Dirac cone. These features are discussed in detail in Ref.~\onlinecite{Jozwiak2011}. The $h\nu=36$~and~70~eV data were taken previously with synchrotron light\cite{Jozwiak2011}, while the $h\nu=6$~eV data were taken with the laser source as part of the current experiment. The three panels of Fig.~S2(b) show that while the characteristic $k$-dependent component of the $P_y$ curves remains at each photon energy, the $k$-independent SME-induced component is strongly photon-energy dependent -- it even changes sign between $h\nu=36$~and~70~eV. To illustrate this more clearly, the curves at $k_x=0$ in each panel of Fig.~S2(b) are plotted again for direct comparison in Fig.~S2(c). Furthermore, the SME-induced photoelectron spin polarization is strongly dependent on the photon polarization. Figure~S2(d) shows the same $P_y$ curves at $k_x=0$ measured with the 6~eV laser at several photon polarization geometries. As in Eqn.~(\ref{eqn:psme}), the SME-induced photoelectron spin polarization is maximum for $p$-polarized light ($\boldsymbol{\hat{k}}_e \cdot \boldsymbol{\hat{\epsilon}}$ is maximum), and is very close to zero for $s$-polarized light ($\boldsymbol{\hat{k}}_e \cdot \boldsymbol{\hat{\epsilon}}=0$).
Thus, the photoelectron $P_y$ in Bi$_2$Se$_3$ measured with the $h\nu=6$~eV laser in the current geometry exhibits a strong SME-induced component, particularly with $p$-polarized light, which is $k$-independent and distinct from the intrinsic surface state spin texture related to the topological ordering. Qualitatively, this induces an `offset' or `shift' of the measured photoelectron $P_y$ curves, visible in each panel in Fig.~S2(b). This `shift' is then also visible in corresponding full $P_y$ maps. For instance, Fig.~S3(a) shows a map of $P_y$ as a function of binding energy and $k_x$. This false color scale image is the result of taking $\sim$ 30 individual $P_y$ curves such as those shown in Fig.~S2(b), as a function of emission angle, and mapping the data to energy-momentum space. The map exhibits the momentum-dependent component of $P_y$ that reflects the underlying helical spin texture of the surface state: on the left there is a dark blue (very positive $P_y$) streak following the surface state dispersion, and on the right there is a slightly red (slightly negative $P_y$) streak following the other side of the surface state dispersion. However, since the data were taken with $p$-polarized light, there is a qualitative `shift' to positive $P_y$ (or `blue' in the figure). For instance, signal in between the dispersions (near $E_\textrm{F}$ and $k_x=0$), due to the bulk conduction band, which is assumed to be spin-degenerate in the crystal, appears blue (moderately positive $P_y$). \begin{figure*} \includegraphics[width=12cm]{Fig3_si} \caption{\label{fig:fig3} \textbf{Asymmetric color scale to counter induced polarization asymmetry.} (a) Photoelectron $P_y$ map as a function of binding energy and momentum along the $k_x$ axis ($\Gamma$M), measured with $p$-polarized photons with $h\nu=6$~eV. Here the color scale is symmetric, ranging from $P_y = -0.9$~to~+0.9, with `white' corresponding to $P_y=0$. The dashed lines are guides to the eye, marking the topological surface state dispersion. (b) Same as (a), but with an asymmetric color scale ranging from $P_y=-0.25$~to~+0.9. } \end{figure*} This SME-induced `shift' can be qualitatively removed simply by displaying the map with an asymmetric color scale, such that the color `white' approximately corresponds to the SME-induced component. Figure~S3(b) shows the same data as panel (a), but with an asymmetric color scale, resulting in a map that more readily displays the $P_y$ texture related to that of the helical Dirac surface state. This map is the one displayed in Fig.~2(c) of the main paper, as the $k$-independent SME-induced effects are not the primary focus of the current work. The similar map in Fig.~2(d) of the main paper was acquired with $s$-polarized light, and so the SME-induced $P_y$ goes to zero, as discussed above. Thus there is no `shift' to the map, and it displays the $k$-dependent $P_y$ quite well with a symmetric color scale. While this $s$-polarized light geometry is free from SME-induced effects, the photoelectron $P_y$ texture is opposite to the intrinsic helical spin texture of the surface state electrons, as discussed in the main text. With the present geometry and photon energy (6 eV), the Au(111) sample appears free of SME-induced $P_y$ even with the $p$-polarized light, and thus the $P_y$ maps of Fig.~2(g)~and~(h) are both shown with symmetric color scales.
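Schematically (in our own notation), the discussion above amounts to decomposing the measured polarization as \[ P_y^{\mathrm{meas}}(k_x,E) \;\approx\; P_y^{\mathrm{SME}}(E) \;+\; P_y^{\mathrm{tex}}(k_x,E) \,, \] where the $k_x$-independent first term is estimated from the $k_x=0$ curves and the second term reflects the intrinsic helical spin texture. Strictly speaking, the two contributions need not be simply additive in the polarization, so this decomposition is only qualitative.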
This SME-induced `shift' in $P_y$ is also present for $\pm sp$-polarized light geometries, used in Fig.~3(a,b) of the main paper, although quantitatively less than for $p$-polarized light. The SME effect on $P_y$ for both $+sp$- and $-sp$-polarizations should be the same, following Eqn.~(\ref{eqn:psme}); our measurement is in agreement with this reasoning (e.g. see Fig.~S2(d)). Just as for the above discussion with $p$-polarized light, this effect `shifts' the $P_y$ curves of Figs.~3(a,b) slightly to more positive values. On top of this, there remains a clear $k_y$ dependence consistent with a radial component of the photoelectron spin texture shown in the inset and discussed in the text. \section{Novel spin manipulation in photoemission} As discussed above, there are numerous observed and understood cases of photoemission inducing a spin polarization in the photoelectrons that is different from the spin polarization of the initial-state crystal electrons. These effects, however, are normally understood as being due to spin-dependent transition matrix elements of spin-conserving transitions, which preferentially excite electrons of a particular spin, resulting in photoelectron ensembles with altered spin polarization. The cases where these effects have been observed involved measuring slight spin polarizations in photoelectrons corresponding to unpolarized initial states.\cite{Kessler1970,Heinzmann1979,Roth1994,Rose1996,Yu2008,Starke1996,Schmiedeskamp1988,Schmiedeskamp1991,Irmer1992,Irmer1994,Irmer1995,Irmer1996} The present observations in Bi$_2$Se$_3$ of photoelectron spin-polarization flipping dependent on photon polarization are unique. In this case, the initial-state crystal electrons are in fact highly spin polarized to begin with and are emitted with different or even opposite spin polarization, dependent on photon polarization, suggesting a fundamental difference from previous spin effects seen in photoemission. This is an interesting demonstration of low-energy photoemission being dominated by spin-flip transitions, which are usually assumed to contribute negligibly compared to spin-conserving transitions.\cite{Feuchtwang1978a,Federbook} To our knowledge, the present results are the first observation of photoelectron spin flipping dependent on linear photon polarization and of such extensive control of the photoelectron spin in three dimensions through manipulation of the photon polarization alone. \section{Impact of $A_z$ component of incident light} Reference~\onlinecite{CheolHwan} presents the interaction Hamiltonian of the surface state electron and photon as \begin{equation}\label{eqn:fullHam} H_\textrm{int} \propto \left(A_y\sigma_x - A_x\sigma_y \right) + i\gamma A_z I ~ , \end{equation} where $\mathbf{A}$ is the vector potential of the photon field, $\sigma_x$ and $\sigma_y$ are the Pauli matrices, and $I$ is the $2\times 2$ identity matrix. This expression corresponds to equation (18) of Ref.~\onlinecite{CheolHwan}, ignoring an overall coefficient and using $\gamma$ for $2\beta/\alpha$. In this expression, the $x,y,z$ components reference the sample coordinate system where $x$ and $y$ are the horizontal and vertical axes in the sample plane, and $z$ is the axis along the sample surface normal. As discussed in Ref.~\onlinecite{CheolHwan}, since the last term above is proportional to the identity matrix, any $A_z$ component (i.e.
any out-of-surface-plane component) of the photon polarization contributes to spin-conserving photoemission and cannot alone alter the spin polarization of photoemitted electrons. Although the present experiment contains a finite $A_z$ component for all photon polarizations except for $s$-polarized light (see Fig.~S1), it is ignored in the main text in order to focus on the new physics contained in the spin-flip terms proportional to $A_y$ and $A_x$. Specifically, in the main text the interaction Hamiltonian of the surface state electron and photon is expressed in equation (1) of the main text as \begin{equation}\label{eqn:shortHam} H_\textrm{int} \propto \left( \vec{\sigma} \times \hat{z} \right) \cdot \hat{\epsilon} ~ , \end{equation} where $\vec{\sigma}$ is the spin Pauli matrix, $\hat{z}$ is the unit surface normal vector, and $\hat{\epsilon}$ is the photon polarization vector. This expression is equivalent to Eqn.~(\ref{eqn:fullHam}) assuming $A_z=0$. As presented in the main text, this assumption leads to the simple picture presented in Fig.~4 of the main text, which has good overall agreement with the measured photoelectron spin polarizations and is consistent with the observed photon polarization dependence. For more insight, however, it is worthwhile to also consider the impact of the $A_z$ component. Since to first order it contributes to spin-conserving photoemission, its presence is expected to dilute the effect of the spin-flip photoemission, and thus decrease the photon polarization dependence of the photoelectron spin polarization. For example, the magnitude of the photon polarization dependence shown in Fig.~3(a,b) of the main text is smaller than would be expected from Eqn.~(\ref{eqn:fullHam}) assuming $A_z = 0$, consistent with the finite $A_z$ present in the experiment reducing the photon polarization dependence. \begin{figure*} \includegraphics[width=10cm]{Fig4_si} \caption{\label{fig:fig4} \textbf{Large photoelectron $P_z$ induced by off-normal incidence of light.} Measured photoelectron $P_z$ curves as a function of binding energy at $(k_x,k_y)=(0,-k_F)$, $(0,0)$, and $(0,+k_F)$, corresponding to the locations shown by the color-coded circles in the Fermi surface diagram. The light was $p$-polarized, with the photon polarization component in the sample surface plane along the $k_x$ direction, as shown by the green arrow in the inset. The photon polarization also has a significant component along the direction normal to the sample surface plane. } \end{figure*} Perhaps more interestingly, there should also be effects due to interference of the spin-conserving $A_z$ term and the spin-flip terms in Eqn.~(\ref{eqn:fullHam}). These effects should be easiest to observe in cases where the spin polarization due to the individual terms is small. For instance, we consider the case of measuring the out-of-plane spin polarization component, or $P_z$, at momentum positions along the $k_y$ axis using $p$-polarized light. As the Fermi surface of the Bi$_2$Se$_3$ sample is mostly isotropic, without much of the hexagonal warping which can introduce out-of-plane spin polarization in the surface state \cite{Fu2009}, the surface state spin is primarily in-plane,\cite{Jozwiak2011} following the helical spin texture. Figure~S4 shows several $P_z$ measurements, taken with $p$-polarized light and at momenta along the $k_y$ axis at $k_y = -k_F$, $0$, and $+k_F$.
In contrast to the expected surface state spin texture, surprisingly high values of photoelectron $P_z$ are measured, reaching nearly $\pm70\%$ at $k_y=\pm k_F$, respectively. The interaction Hamiltonian in Eqn.~(\ref{eqn:fullHam}) is qualitatively consistent with these results as follows. Assuming only spin-conserving photoemission, the photoelectron spin is expected to be directed along the $\pm k_x$ direction at $k_y = \pm k_F$, and thus $P_z$ is expected to be zero. Likewise, as presented in Ref.~\onlinecite{CheolHwan} and Fig.~4 of the main text, assuming only the spin-flip terms above, the photoelectron spin polarization is expected to be oppositely directed along the $\pm k_x$ axis, and thus $P_z$ is still expected to be zero. However, consideration of finite contributions of both terms predicts large values of $P_z$ with reversed signs at $k_y>0$ and $k_y<0$. Specifically, calculations following Ref.~\onlinecite{CheolHwan} predict values of $P_z$ to exceed $\pm90\%$ at $k_y=\pm k_F$, respectively, for $\gamma^2=2.0$, and $\pm75\%$ at $k_y=\pm k_F$ for $\gamma^2=0.25$. Note these quantitative predictions do not take into account scattering effects which can reduce measured photoelectron spin polarizations. Note also that they do not take into account the extrinsic effects due to spin-orbit induced spin-dependent matrix elements (SMEs) discussed in Section SI II, which are also present due to the off-normal incident light. Additional insights can likely be gained by further investigation of the photoelectron spin polarization (in three dimensions) in a large variety of momenta and photon polarization geometries. Rigorous quantitative comparisons with calculations will require the SMEs, as well as other possible effects, to be taken into account. The following Sections (SI V and SI VI) discuss particular aspects of the data presented in the main text which are likely affected by the finite $A_z$ component in the experiment.
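The sign structure of this interference effect can be illustrated with a minimal single-spinor sketch. The following Python snippet applies Eqn.~(\ref{eqn:fullHam}) to an in-plane-polarized initial spinor at $k_y=\pm k_F$ and computes the resulting $P_z$. The assumed helicity convention, the decomposition $A_x=\cos\theta$, $A_z=\sin\theta$ for $p$-polarized light, and the choice $\theta=45^{\circ}$ are our own simplifications, so the numbers are only indicative; the exact values quoted above depend on the full treatment of Ref.~\onlinecite{CheolHwan}.
\begin{verbatim}
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def photoelectron_Pz(sign_ky, gamma2, theta_deg=45.0):
    # p-polarized light in the x-z plane: A = (cos(theta), 0, sin(theta))
    th = np.radians(theta_deg)
    Ax, Ay, Az = np.cos(th), 0.0, np.sin(th)
    # helical initial spin at k = (0, +/- k_F): along -/+ x (assumed convention)
    psi = np.array([1.0, -sign_ky], dtype=complex) / np.sqrt(2)
    # transition operator from Eqn. (fullHam)
    M = Ay * sx - Ax * sy + 1j * np.sqrt(gamma2) * Az * I2
    phi = M @ psi
    return float((phi.conj() @ sz @ phi).real / (phi.conj() @ phi).real)

for g2 in (2.0, 0.25):
    print(g2, photoelectron_Pz(+1, g2), photoelectron_Pz(-1, g2))
# gamma^2 = 2.0 gives |P_z| ~ 0.94 with opposite signs at +/- k_F,
# in line with the >90% figure quoted above
\end{verbatim}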
\section{Fit to the $P_y$ dependence on photon polarization rotation} Following the approaches presented in the main text and Ref.~\onlinecite{CheolHwan}, assuming the light is incident normal to the sample surface and the photon polarization vector is entirely within the surface plane, the $y$ component of the spin polarization ($P_y$) of photoelectrons emitted from the surface state of a topological insulator using linearly polarized light will be \begin{equation} P_y = \frac{n}{A_x^2 + A_y^2} \left[ \left( A_y^2 - A_x^2 \right) \cos{\theta_\mathbf{k}} - 2A_x A_y \sin{\theta_\mathbf{k}} \right] ~. \end{equation} In the above, $A_x$ and $A_y$ are the components of the photon polarization vector along the $x$ and $y$ axes, respectively, in the sample surface plane, $\theta_\mathbf{k}$ is the angle with respect to the $x$ axis of the in-plane momentum vector, $\mathbf{k}$, being probed, and $n=\pm1$ for the upper and lower cones of the Dirac dispersion, respectively. For the upper band at $E_F$, for $\theta_\mathbf{k}=\pi$ (corresponding to the measurement in Fig.~1(f) of the main paper), and for the geometry of the present experiment, this gives \begin{equation} P_y = \frac{\cos^2{\theta} \cos^2{\alpha_0} - \sin^2{\alpha_0}}{\cos^2{\theta} \cos^2{\alpha_0} + \sin^2{\alpha_0}}, \end{equation} where $\alpha_0$ is defined in Fig.~1(e)~and~S1, and $\theta$ is the angle between the incident light propagation vector and the sample normal. As drawn in Fig.~1(e)~and~S1, $\theta=45^{\circ}$; however, the sample must be rotated about the $y$-axis for the measurement to reach this point in momentum space, such that for $\theta_\mathbf{k}=\pi$, $\theta=36^{\circ}$. We found that the above equation does not fit the slight asymmetry of the data shown in Fig.~1(f) perfectly, likely due to the finite component of the photon polarization along the out-of-surface-plane direction ($A_z$), which is largest for $\alpha_0=0$, and vanishes for $\alpha_0=90^{\circ}$. According to Ref.~\onlinecite{CheolHwan}, the $A_z$ component can contribute to the total photoemission through purely spin-conserving emission. We can add the influence of this contribution to the total measured spin polarization by modifying the above equation following the definition of spin polarization, $P_y = (I_\uparrow - I_\downarrow)/(I_\uparrow + I_\downarrow)$, as follows, \begin{equation} P_y = \frac{\cos^2{\theta} \cos^2{\alpha_0} - \sin^2{\alpha_0} + \gamma^2 \sin^2{\theta}\cos^2{\alpha_0} }{\cos^2{\theta} \cos^2{\alpha_0} + \sin^2{\alpha_0} + \gamma^2 \sin^2{\theta}\cos^2{\alpha_0}}, \end{equation} where $A_z$ is given by $\sin{\theta}\cos{\alpha_0}$ in this geometry, and $\gamma^2$ is a fit parameter and a measure of the relative contribution of $A_z$ to the total photoemission intensity ($\gamma$ corresponds to $2\beta/\alpha$ if we use the variables in Ref.~\onlinecite{CheolHwan}). The final expression used to fit the data includes an overall coefficient (to account for finite resolution and other effects which can slightly reduce the measured polarizations in such spin-ARPES experiments) and a constant offset to $\alpha_0$ to allow for a slight misalignment of the photon polarization orientation. The fit shown in Fig.~1(f) is obtained with a value of about 0.72 for the overall coefficient, an offset of $\alpha_0$ of $3.2^{\circ}$, and a $\gamma^2$ parameter of 2.0.
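For reference, the final fit function is easy to reproduce numerically. The sketch below is our own illustration, not the original fitting code; the function and parameter names are ours, and the parameter values are the quoted fit results.
\begin{verbatim}
import numpy as np

def Py_fit(alpha0_deg, theta_deg=36.0, gamma2=2.0,
           coeff=0.72, offset_deg=3.2):
    # modified P_y expression including the spin-conserving A_z channel
    a = np.radians(alpha0_deg + offset_deg)
    th = np.radians(theta_deg)
    az2 = gamma2 * np.sin(th)**2 * np.cos(a)**2   # gamma^2 * A_z^2
    num = np.cos(th)**2 * np.cos(a)**2 - np.sin(a)**2 + az2
    den = np.cos(th)**2 * np.cos(a)**2 + np.sin(a)**2 + az2
    return coeff * num / den

# P_y sweeps from ~ +0.72 (p-polarized, alpha0 = 0)
# to ~ -0.72 (s-polarized, alpha0 = 90 degrees)
print(Py_fit(0.0), Py_fit(90.0))
\end{verbatim}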
\section{$P_z$ maps of Bi$_2$Se$_3$ taken with circularly polarized light} Again, following the approaches presented in the main text and Ref.~\onlinecite{CheolHwan}, circularly polarized light, at normal incidence to the surface of a topological insulator, will lead to a photoelectron spin texture oriented completely out-of-plane, with a fixed $|P_z|$ at all $\mathbf{k}$, and with the sign of $P_z$ determined by the handedness of the light. The experimentally measured $P_z$ maps shown in Figs.~3(c,d) of the main paper mostly agree with these predictions. However, streaks of reduced $P_z$ follow the surface state dispersion along the right hand side at positive $k_x$. This is likely due to the off-normal incidence of the light in the present experiment (see Fig.~S1), which will modify the expected spin polarization, including reducing values of $P_z$. Note that in the present geometry, negative values of $k_x$ (on the left side of Figs.~3(c,d)) are measured with the sample turned \textit{towards} the photons, and thus closer to normal incidence, while positive values of $k_x$ (on the right side of Figs.~3(c,d)) are measured with the sample turned \textit{away} from the photons, and thus further from normal incidence, with lower values of $P_z$ expected. Altering the experimental geometry, for instance to have the light at normal incidence, would provide direct insight into this issue. Reduced values of $P_z$ may also be due to the light not being fully circularly polarized. Imperfections in the photon polarization may be introduced by slight birefringence in the vacuum chamber's fused silica window, induced by mechanical and thermal stress associated with installation and bakeout. \section{Spin-degeneracy of final states} We note that the theory in Ref.~\onlinecite{CheolHwan} assumes spin-degenerate final states. Even if this assumption is modified, it is reasonable to expect that the spin polarization of the photoelectrons could still be different from that of the originating surface states and dependent on the photon polarization, albeit in a manner somewhat different from that predicted by the theory in Ref.~\onlinecite{CheolHwan}. The spin-degeneracy of the final states is not a strictly necessary condition for observing novel photoelectron spin-flipping such as that observed in the current experiment. On the other hand, because ARPES circular dichroism experiments on Bi$_2$Se$_3$ with similar photon energy are well described by theory based on the similar assumption of spin-degenerate final states,\cite{Wang2011b} the assumption is likely valid for the experimental results shown in the present work. \section{Circular dichroism in Bi$_2$Se$_3$} It was previously shown in Bi$_2$Se$_3$ that the total (spin-integrated) surface state photoemission intensity in ARPES data taken with circularly polarized light is sensitive to the relative alignment of photon helicity and surface state spin orientation.\cite{Wang2011b} This was argued to provide indirect experimental access to the spin texture of the helical Dirac surface states. Such studies, however, do not resolve the actual spin polarization of the photoexcited electrons. Our current observations constitute an entirely different facet of the physics involved. \section{Comparison of Bi$_2$Se$_3$ and Au(111)} The spin-resolved ARPES data from Bi$_2$Se$_3$ and Au(111) present an interesting juxtaposition. In the present geometries, the spin polarization of photoelectrons from the Bi$_2$Se$_3$ topological surface state shows an extremely strong dependence on the photon polarization. The top row of Fig.~S5 presents a couple of examples. Panel (b) shows the measured $y$ component of photoelectron spin polarization, $P_y$, corresponding to the three vertical cuts marked in panel (a). The $P_y$ curves are shown for both $p$- and $s$-polarized light. The data corresponds to the full maps shown in Figs.~2(c)~and~(d) of the main paper, but offers a different view by directly comparing a few individual curves. There is an enormous change in $P_y$ between the two photon polarizations. In line with the discussion of Section SI II, there are two components to the photon polarization dependence of the $P_y$ curves visible here. The $P_y$ curve corresponding to $\theta=0^{\circ}$ ($k_x=0$) shows moderately positive $P_y$ with $p$-polarized light, and nearly zero $P_y$ for $s$-polarized light. This is due to the SME-induced effect discussed above. As the SME effects are effectively $k$-independent, this contribution to the photon polarization dependence should be $k$ ($\theta$) independent. The $P_y$ curves at $\theta=\pm 9^{\circ}$ ($k_x=\pm k_\textrm{F}$), however, show opposing behaviors. At $\theta=-9^{\circ}$, $P_y$ near $E_\textrm{F}$ measured with $p$-polarized light is more positive than that measured with $s$-polarized light, while the opposite is true at $\theta=+9^{\circ}$. This $k$-dependent contribution to the photon polarization dependence reflects the behavior of helical Dirac fermions as presented in the main manuscript.
\begin{figure*} \includegraphics[width=15cm]{Fig5_si} \caption{\label{fig:fig5si} \textbf{Polarization dependence of photoelectron spin in Bi$_2$Se$_3$ and Au(111) surface states.} (a) ARPES intensity map of Bi$_2$Se$_3$ as a function of binding energy and momentum, along $\Gamma$M, taken with $h\nu=6$ eV. (b) The $y$ component of photoelectron spin polarization, $P_y$, as a function of binding energy at labeled emission angles, corresponding to the line cuts marked in (a). The $P_y$ curves at each emission angle are vertically offset by 2 for clarity. Corresponding $P_y$ curves measured with $p$- and $s$-polarized light are directly compared. (c) Same as (b), except the curves display the $z$ component of photoelectron spin polarization, $P_z$, and are measured with both helicities of circularly polarized light. (d-f) Same as (a-c) but measured from the Au(111) Shockley surface state. } \end{figure*} Figure~S5(c) shows similar data, displaying the $z$ component, $P_z$, taken with right- and left-hand circularly polarized light. This data corresponds to the full maps shown in Figs.~3(c)~and~(d) of the main paper. Again, there is a very large change with photon polarization, in line with Ref.~\onlinecite{CheolHwan}. The bottom row of Fig.~S5 presents the corresponding data for the Au(111) surface state. Although this surface state has a similar helical spin texture, also as a result of spin-orbit coupling, the data shows a nearly constant photoelectron spin polarization at each photon polarization, in striking contrast with Bi$_2$Se$_3$. This is true for $P_y$ (Figs.~S5(b)~and~(e), and Fig.~2 of the main paper) and $P_z$ (Figs.~S5(c)~and~(f)). Panel~(e) also shows that there is no $k$-independent component induced by SME effects ($P_y$ at $\theta=0$, or $k_x=0$, is zero for both $s$- and $p$-polarized light). This contrasting behavior may be related to the contrasting circular dichroism in standard spin-integrated ARPES experiments on Bi$_2$Se$_3$ and Au. In the case of Bi$_2$Se$_3$, a strong circular dichroism in the ARPES signal (difference in photoelectron intensity when illuminated with right- and left-hand circularly polarized light) was observed that has a texture in momentum space which closely matches that of the surface state's spin texture.\cite{Wang2011b,Park2012a} However, in the case of Au, a strong circular dichroism was observed that has a texture in momentum space that does not at all match that of the surface state's spin texture.\cite{Kim2012} Kim~\textit{et al.}\cite{Kim2012} discuss these results in terms of the relative strengths of two particular terms in the Hamiltonian: `$H_\textrm{SOC}$', a term due to spin-orbit coupling, and `$H_\textrm{ES}$', a term due to the inversion symmetry breaking electrostatic field at the surface. It is argued that in Bi$_2$Se$_3$, the $H_\textrm{SOC}$ term dominates, and leads to the circular dichroism behavior that mirrors the surface state spin texture. In Au, however, it is argued that the relatively weaker spin-orbit coupling allows the $H_\textrm{ES}$ term to dominate, which leads to the contrasting circular dichroism that does not mirror the surface state spin texture. This scenario may also apply to our current observations. The $H_\textrm{SOC}$ term is the spin-flip term in the photoemission interaction Hamiltonian, as described in the main text, and its dominance in Bi$_2$Se$_3$ is consistent with the observed photoelectron spin-flipping.
The $H_\textrm{ES}$ term, in contrast, leads to a spin-conserving term in the photoemission interaction Hamiltonian. Thus the weaker spin-orbit coupling and relative dominance of $H_\textrm{ES}$ in Au could explain the observed lack of photoelectron spin-flipping. Similar experiments on other systems with varying and intermediate spin-orbit coupling strengths, such as BiTl(S$_{1-\delta}$Se$_{\delta}$)$_2$\cite{Xu2011a} and the adsorbate-induced Rashba states on Bi$_2$Se$_3$,\cite{King2011,Bahramy2012a} would be helpful in investigating this picture. We note that, following Ref.~\onlinecite{Kim2012}, the $H_\textrm{ES}$ term is linearly dependent on $k$. In this scenario, this term apparently remains dominant over the spin-orbit term down to the smallest momentum values that could be experimentally resolved (in either our spin-resolved measurement, or the spin-integrated circular dichroism experiment\cite{Kim2012}). This overall picture would be supported if a reversal of behavior, from Bi$_2$Se$_3$-like to Au-like, were observed as a function of increasing $k$ in a system with intermediate spin-orbit coupling.
\section{Introduction} Confidence intervals are a natural and commonly used way to summarize a distribution over real numbers. In informal terms, a confidence interval is a concise way to express what values a given sample mostly contains. They are used widely, e.g., to denote ranges of data, to specify accuracies of parameter estimates, or in Bayesian settings to describe the posterior distribution. A confidence interval is given by two numbers, the lower and upper bound, and parametrized by the percentage of probability mass that lies within the bounds. They are easy to interpret, because they can be represented visually together with the data, and they convey information both about the location and about the variance of a sample. There is a plethora of work on how to estimate the confidence interval of a distribution based on a finite-sized sample from that distribution; see \cite{Hyndman96} for a summary. However, most of these approaches focus on describing a single univariate distribution over real numbers. Also, the precise definition of a confidence interval varies slightly across disciplines and application domains. In this paper we focus on \emph{confidence areas}: a generalization of univariate confidence intervals to multivariate data. All our intervals and areas are such that they {\em describe ranges of data and are of minimal width}. In other words, they contain a given fraction of the data within their bounds and are as narrow as possible. By choosing a confidence area with minimal size we essentially locate the densest part of the distribution. Such confidence areas are particularly effective for visually showing trends, patterns, and outliers. Considering the usefulness of confidence intervals, it is surprising how little work exists on applying confidence intervals to multivariate data \cite{KorpelaPG14}. In multivariate statistics {\em confidence regions} are a commonly used approach, see e.g.,~\cite{Guilbaud08}, but these methods usually require making assumptions about the underlying distribution. Moreover, unlike confidence areas, most confidence regions cannot be described simply with an upper and a lower bound; e.g., confidence regions for multivariate Gaussian data are ellipsoids. Thus, these approaches do not fully capture two useful characteristics of one-dimensional confidence intervals: a) easy interpretability and b) lack of assumptions about the data. \begin{figure*}[t!] \begin{center} \includegraphics[width=0.45\textwidth]{figs/toyplot_N100_Nb99_M80_cropped.pdf} \hspace{1em} \includegraphics[width=0.45\textwidth]{figs/toyplot_cross_N100_Nb99_M80_cropped.pdf} \caption{\label{fig:toy} Examples 1 and 2. Please see text for details. LEFT: Local anomalies in time series (solid black line) are easier to spot when computing the confidence area using the proposed method (orange lines) as opposed to existing approaches (green lines). RIGHT: Our approach results in a confidence area (orange ``cross'') that is a better representation of the underlying distribution than existing approaches (green rectangle).} \end{center} \end{figure*} The simplest approach to construct multivariate confidence intervals is to compute one-dimensional intervals separately for every variable. While this naive approach satisfies conditions a) and b) above, it is easy to see how it fails in general. Assume, e.g., that we have 10~independently distributed variables, and have computed for each variable a 90\% confidence interval.
This means that when considering every variable individually, only 10\% of the distribution lies outside of the respective interval, as desired. But taken together, the probability that an observation is outside at least one of the confidence intervals is as much as $1-0.9^{10}\approx 65\%$. This probability, however, depends strongly on the correlations between the variables. If the variables were perfectly correlated with a correlation coefficient of $\pm 1$, the probability of an observation being outside at least one of the confidence intervals would again be 10\%. In general, the correlation structure of the data affects, in a strong and hard-to-predict way, how the naive confidence intervals should be interpreted in a multivariate setting.
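A short simulation makes the above arithmetic concrete. This is an illustrative sketch of ours, not part of the experiments reported later in the paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n = 10, 200_000
lo, hi = 0.05, 0.95          # central 90% interval of U(0,1), per variable

# independent variables: P(outside at least one interval)
X = rng.random((n, m))
p_indep = (((X < lo) | (X > hi)).any(axis=1)).mean()

# perfectly correlated variables: every column is the same sample
z = rng.random(n)
p_corr = ((z < lo) | (z > hi)).mean()

print(p_indep)               # ~0.65, matching 1 - 0.9**10
print(p_corr)                # ~0.10
\end{verbatim}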
Ideally a multivariate confidence area should retain the simplicity of a one-dimensional confidence interval. It should be {\em representable by upper and lower bounds for each variable} and the semantics should be the same: {\em each observation is either inside or outside} the confidence area, and most observations of a sample should be inside. Particularly important is specifying when an observation is in fact within the confidence area, as we will discuss next. Confidence areas for time series data have been defined~\cite{Kolsrud2007, KorpelaPG14} in terms of the {\em minimum width envelope} (MWE) problem: a time series is within a confidence area if it is within the confidence interval of {\em every variable}. While this has desirable properties, it can result in very conservative confidence areas if there are local deviations from what constitutes ``normal'' behavior. The definition in \cite{KorpelaPG14} is in fact too strict, as it requires an observation to be contained in all variable-specific intervals. Thus, here we propose an alternative way to define the confidence area: {\em a data vector is within the confidence area if it is outside the variable-specific confidence intervals at most $l$ times}, where $l$ is a given integer. This formulation preserves easy interpretability: the user knows that any observation within the confidence area is guaranteed to violate at most $l$ variable-specific confidence intervals. In the special case $l=0$, this new definition coincides with the MWE problem~\cite{KorpelaPG14}. The following examples illustrate further properties and uses of the new definition. \subsection*{Example 1: local anomalies in time series.} The left panel of Fig.~\ref{fig:toy} presents a number of time series over $m = 80$ time points, shown in light gray, together with three different types of 90\% confidence intervals, shown in green, purple and orange, respectively. In this example, ``normal'' behavior of the time series is exemplified by the black dash-dotted curve that exhibits no noteworthy fluctuation over time. Two other types of time series are also shown in black: a clear outlier (dashed black line), and a time series that exhibits normal behavior most of the time, but strongly deviates from this for a brief moment (solid black line). In the situation shown, we would like the confidence interval to only show what constitutes ``normal'' behavior, i.e., not be affected by strong local fluctuations or outliers. Such artifacts can be caused, e.g., by measurement problems, or other types of data corruption. Alternatively, such behavior can also reflect some interesting anomaly in the data generating process, and this should not be characterized as ``normal'' by the confidence interval. In Fig.~\ref{fig:toy} (left) the confidence area based on MWE \cite{KorpelaPG14} is shown by the green lines; recall that the MWE area corresponds to setting $l = 0$. While it is unaffected by the outlier, it strongly reacts to local fluctuations. The naive confidence area, i.e., one where we have computed confidence intervals for every time point individually, is shown in purple. It is also affected by local peaks, albeit less than the $l = 0$ confidence area. Finally, the proposed method is shown in orange. The area is computed using $l = 25$, i.e., a time series is still within the confidence area as long as it lies outside in {\em at most 25 time points}. This variant focuses on what we would expect to be normal behavior in this case. Our new definition of a confidence area is thus nicely suited for {\em finding local anomalies in time series data}. \subsection*{Example 2: representing data distributions.} On the other hand, the right panel of Fig.~\ref{fig:toy} shows an example where we focus only on the time points $x$ and $y$ as indicated in the left panel. Time point $x$ resides in the region with a strong local fluctuation, while at $y$ there are no anomalies. According to our definition, an observation, in this case an $(x,y)$ point, is within the confidence area if it is outside the variable-specific confidence intervals at most $l$ times. We have computed two confidence areas using our method, one with $l=0$ (green), and another with $l=1$ (orange), as well as the naive confidence intervals (purple). For $l=0$, we obtain the green rectangle in Fig.~\ref{fig:toy} (right panel). The variable-specific confidence intervals have been chosen so that the green rectangle contains 90\% of the data and the sum of the widths of the confidence intervals (sides of the rectangle) is minimized. For $l=1$, we obtain the orange ``cross'' shaped area. The cross shape follows from allowing an observation to exceed the variable-specific confidence interval in $l=1$ dimensions. Again, the orange cross contains 90\% of all observations, and has been chosen by greedily minimizing the sum of the lengths of the respective variable-specific confidence intervals. It is easy to see that with $l = 0$, i.e., when using the MWE method \cite{KorpelaPG14}, the observations do not occur evenly in the resulting confidence area (green rectangle). Indeed, the top right and bottom right parts of the rectangle are mostly empty. In contrast, with $l=1$, the orange cross shaped confidence area is a much better description of the underlying data distribution, as there are no obvious ``empty'' parts. Our novel confidence area is thus {\em a better representation of the underlying data distribution} than the existing approaches. \subsection*{Contributions} The basic problem definition we study in this paper is straightforward: for $m$-dimensional data and the parameters $\alpha\in[0,1]$ and integer $l$, find a confidence interval for each of the variables so that a $1-\alpha$ fraction of the observations lies within the confidence area, defined so that the sum of the lengths of the intervals is minimized, and an observation can break at most $l$ of the variable-specific confidence intervals. We make the following contributions in this paper: \begin{itemize} \item We formally define the problem of finding a multivariate confidence area, where observations have to satisfy most but not all of the variable-specific confidence intervals.
\item We analyze the computational complexity of the problem, and show that it is NP-hard, but admits an approximation algorithm based on a linear programming relaxation. \item We propose two algorithms, which produce good confidence areas in practice. \item We conduct experiments demonstrating various aspects of multivariate confidence intervals. \end{itemize} The rest of this paper is organized as follows: Related work is discussed in Sec.~\ref{sec:related}. We define the \textsc{ProbCI} and \textsc{CombCI} problems formally in Sec.~\ref{sec:def}. In Sec.~\ref{sec:theory} we study theoretical properties of the proposed confidence areas, as well as the problem complexity. Sec.~\ref{sec:alg} describes the algorithms used in the experiments of Sec.~\ref{sect:experiments}. Finally, Sec.~\ref{sec:concl} concludes this work. \section{Related work} \label{sec:related} Confidence intervals have recently gained more popularity, as they convey information both about the statistical significance of the result {\em and about the effect size}. In contrast, p-values give information only about the statistical significance: it is possible to have statistically significant results that are meaningless in practice due to a small effect size. The problem has been long and acutely recognized, e.g., in medical research \cite{Gardner86}. Some psychology journals have recently banned the use of p-values \cite{Woolston15,Trafimow15}. The proposed solution is not to report p-values at all, but to use confidence intervals instead \cite{Nuzzo14}. Simultaneous confidence intervals for time series data have been proposed in \cite{Kolsrud2007, KorpelaPG14}. These correspond to the confidence areas in this paper when $l=0$. The novelty here is the generalization of the confidence areas to allow outlying dimensions ($l>0$), similarly to \cite{Wolf2015}, together with the related theoretical and algorithmic contributions, allowing for narrower confidence intervals and, in some cases, more readily interpretable results. Simultaneous confidence intervals with $l=0$ were considered in \cite[p. 154]{Davidson97}, and studied in \cite{Mandel08} by using the most extreme value within a data vector as a ranking criterion. Other examples of $l=0$ type procedures include \cite{Liu07,Schussler16}. In the field of information visualization and the visualization of time series, confidence areas have been used extensively; see \cite{Aigner11} for a review. An interesting approach is to construct simultaneous confidence regions by inverting statistical multiple hypothesis testing methods, see e.g.,~\cite{Guilbaud08}. The approach presented in this paper allows some dimensions of an observation to be partially outside the confidence area. This is in the same spirit as---but not equivalent to---the false discovery rate (FDR) in multiple hypothesis testing, which also allows a controlled fraction of positives to be ``false positives''. In comparison, the approach in \cite{KorpelaPG14} is more akin to the family-wise error rate (FWER), which controls the probability of at least one false discovery. \section{Problem definition} \label{sec:def} A {\em data vector} $x$ is a vector in ${\mathbb{R}}^m$ and $x(j)$ denotes the value of $x$ in the $j$th position. Let the matrix $X\in{\mathbb{R}}^{n\times m}$ be a dataset of $n$ data vectors $x_1,\ldots,x_n$, i.e., the rows of $X$. We start by defining the confidence area, its size, and the envelope for a dataset $X$.
\begin{Definition} Given $X\in{\mathbb{R}}^{n\times m}$, a {\em confidence area} for $X$ is a pair of vectors $(x_l,x_u)$, $x_l,x_u\in {\mathbb{R}}^m$, composed of lower and upper bounds satisfying $x_l(j)\le x_u(j)$ for all $j$, respectively. The {\em size} of the confidence area is $A=\sum_{j=1}^m{w(j)}$, where $w(j)=x_u(j)-x_l(j)$ is the {\em width} of the confidence interval w.r.t.\ the $j$th position. The {\em envelope} of $X$ is a confidence area denoted by $\operatorname{env}(X)=(x_l,x_u)$, where $x_l(j)=\min_{i=1}^n{x_i(j)}$ and $x_u(j)=\max_{i=1}^n{x_i(j)}$. \end{Definition} We define the error of a data vector given the confidence area as the number of its outlying dimensions, as follows. \begin{Definition} Let $x$ be a data vector in $\mathbb{R}^m$ and $(x_l,x_u)$ a confidence area. The {\em error} of $x$ given the confidence area is defined as \begin{equation} \nonumber V(x \mid x_u, x_l) = \sum_{j=1}^m \operatorname{I} \left[x(j)<x_l(j)\vee x_u(j)<x(j) \right]. \end{equation} \end{Definition} The indicator function $\operatorname{I}[\Box]$ is unity if the condition $\Box$ is satisfied and zero otherwise. The main problem in this paper is as follows. \begin{Problem}[\textsc{ProbCI}] \label{prob:pci} Given $\alpha \in [0,1]$, an integer~$l$, and a distribution $F$ over $\mathbb{R}^m$, find a confidence area $(x_l,x_u)$ that minimizes $\sum_{j=1}^m w(j)$ subject to the constraint \begin{equation} \Pr_{x \sim F}\left[ V(x \mid x_u, x_l) \le l \right] \ge 1-\alpha. \label{eq:p1} \end{equation} \end{Problem} For this problem definition to make sense, the variables, or at least their scales, must be comparable. Otherwise variables with high variance will dominate the confidence areas. Therefore, suitable preprocessing, such as normalization of the variables, should be applied before solving for the confidence area and interpreting it. The combinatorial problem is defined as follows. \begin{Problem}[\textsc{CombCI}] \label{prob:cci} Given integers $k$ and $l$, and $n$ vectors $x_1, \ldots, x_n$ in $\mathbb{R}^m$, find a confidence area $(x_l,x_u)$ by minimizing $\sum_{j=1}^m w(j)$ subject to the constraint \begin{equation} \label{eq:cciconstraint} \sum_{i=1}^n \operatorname{I} \left[ V(x_i \mid x_u, x_l ) \le l \right] \ge n-k. \end{equation} \end{Problem} Any confidence area satisfying \eqref{eq:cciconstraint} is called a \emph{$(k,l)$-confidence area}. In the special case with $l=0$, Problems~\ref{prob:pci} and \ref{prob:cci} coincide with the minimum width envelope problem from \cite{KorpelaPG14}. The problem definition with non-vanishing $l$ is, to the best of our knowledge, novel. The relation of Problems~\ref{prob:pci}~and~\ref{prob:cci} is as follows. \begin{Definition} Problems \ref{prob:pci} and \ref{prob:cci} {\em match} for given data from a distribution $F$ and parameters $\alpha$, $k$, and $l$, if a solution of Problem \ref{prob:cci} satisfies Eq. \eqref{eq:p1} with equality for previously unseen test data from distribution $F$. \label{def:match} \end{Definition} We can solve Problem \ref{prob:pci} by solving Problem \ref{prob:cci} with different values of $k$ to find a $k$ that matches the given $\alpha$, as done in Sec. \ref{sec:match} or \cite{KorpelaPG14}.
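The quantities above translate directly into a few lines of code. The sketch below is our own illustration of the definitions, not an algorithm from this paper: it computes the error $V$ for every row of a data matrix and checks whether a candidate pair of bound vectors is a $(k,l)$-confidence area.
\begin{verbatim}
import numpy as np

def error_counts(X, x_lo, x_up):
    # V(x | x_u, x_l) for every row of X: the number of
    # coordinates falling outside [x_lo(j), x_up(j)]
    return ((X < x_lo) | (X > x_up)).sum(axis=1)

def is_kl_confidence_area(X, x_lo, x_up, k, l):
    # Eq. (cciconstraint): at least n - k rows have error at most l
    return (error_counts(X, x_lo, x_up) <= l).sum() >= X.shape[0] - k

# toy usage: the envelope is always a (0, 0)-confidence area
X = np.random.default_rng(1).normal(size=(100, 5))
env_lo, env_up = X.min(axis=0), X.max(axis=0)
assert is_kl_confidence_area(X, env_lo, env_up, k=0, l=0)
\end{verbatim}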
\section{Theoretical observations} \label{sec:theory} \subsection{Confidence areas for uniform data} \label{sec:uniform} It is instructive to consider the behavior of Problem \ref{prob:pci} with the uniform distribution $F_{unif}=U(0,1)^m$. We show that a solution may contain very narrow confidence intervals, and we discuss the number of data vectors required to estimate a confidence area with a desired level of $\alpha$. Consider Problem \ref{prob:pci} in a simple case of two-dimensional data with $m=2$, $l=1$, and $\alpha=0.1$. In this case an optimal solution to Problem \ref{prob:pci} is given by confidence intervals with $w(1)=0.9$ and $w(2)=0$, resulting in a confidence area of size $\sum_{j=1}^2{w(j)}=0.9$. A solution with confidence intervals of equal width, i.e., $w(1)=w(2)=0.68$, would lead to a substantially larger area of $1.37$. As this simple example demonstrates, if the data is unstructured or if some of the variables have unusually high variance, we may obtain solutions where some of the confidence intervals are very narrow. In the case of the uniform distribution the choice of variables with zero width confidence intervals is arbitrary: e.g., in the example above we could as well have chosen $w(1)=0$ and $w(2)=0.9$. Such narrow intervals are not misleading, because they are easy to spot: for a narrow interval---such as the one discussed in this paragraph---only a small fraction of the values of the variable lie within it. In real data sets with non-trivial marginal distributions and correlation structure the confidence intervals are often of roughly similar width. Next we consider the behavior of the problem at the limit of high-dimensional data. \begin{Lemma} The solution with confidence intervals of equal width $w=w_j=x_u(j)-x_l(j)$ corresponds to $\alpha$ of \begin{equation} \alpha=1-BC(l,m,w), \label{eq:bc} \end{equation} where $BC(l,m,w)=\sum_{j=0}^l \binom{m}{j}(1-w)^jw^{m-j}$ is the cumulative binomial distribution. \label{lem:eq} \end{Lemma} \begin{Lemma} If $n$ vectors are sampled from $F_{unif}=U(0,1)^m$ then the expected width of the envelope for each variable is $w=\frac{n-1}{n+1}$. The probability of more than $l$ variables of a vector from $F_{unif}$ being outside the envelope is given by Eq. \eqref{eq:bc} with $w=\frac{n-1}{n+1}$. \label{lem:n} \end{Lemma} One implication of Lemma \ref{lem:n} is that there is a limit to the practical accuracy that can be reached with a finite number of samples. The smallest $\alpha$ we can hope to realistically reach is the one implied by the envelope of the data, unless we make some distributional assumptions about the shape of the distribution outside the envelope. Conversely, the above lemmas define a minimum number of samples needed for uniform data to find the confidence area for a desired level of $\alpha$. In the case of $l=0$, to reach the accuracy of $\alpha$ we have $w^m\approx 1-\alpha$, or $n\approx -2m/\log{\left(1-\alpha\right)}\approx 2m/\alpha$, where we have ignored higher order terms in $1/m$ and $\alpha$. For a typical choice of $\alpha=0.1$ and $m=100$ this would imply that at least $n\approx 2000$ data vectors are needed to estimate the confidence area; the number of data vectors needed increases linearly with the dimensionality $m$. On the other hand, for a given $\beta\in(0,1)$, if we let $l=\lfloor\beta m\rfloor$, a solution where the width of the envelope is $w\approx 1-\beta$ is asymptotically sufficient when the dimensionality $m$ is large, in which case the number of data vectors required is $n\approx 2/\beta$. For a value of $\beta=0.1$ only $n\approx 20$ data vectors would therefore be needed even for large $m$.
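Both the equal-width example and Lemma \ref{lem:eq} are easy to verify numerically. The following sketch is our own check, using the convention that $BC(l,m,w)$ is the binomial CDF at $l$ with $m$ trials and per-coordinate miss probability $1-w$; it reproduces the numbers quoted above.
\begin{verbatim}
from scipy.stats import binom

def alpha_equal_width(l, m, w):
    # Eq. (eq:bc): alpha = 1 - BC(l, m, w), with BC the CDF of
    # Binomial(m, 1 - w) evaluated at l
    return 1.0 - binom.cdf(l, m, 1.0 - w)

# equal-width example from the text: m = 2, l = 1, w ~ 0.684
print(alpha_equal_width(1, 2, 0.684))        # ~0.0999, i.e. alpha = 0.1

# sample-size estimate for l = 0, m = 100: n ~ 2m/alpha = 2000,
# giving an envelope of expected width w = (n - 1)/(n + 1)
n, m = 2000, 100
w = (n - 1.0) / (n + 1.0)
print(alpha_equal_width(0, m, w))            # ~0.095, close to alpha = 0.1
\end{verbatim}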
As hinted by the toy example in Fig.~\ref{fig:toy}, a non-vanishing parameter $l$ leads, at least in this example, to substantially narrower confidence intervals and, hence, makes it possible to compute the confidence intervals from smaller data sets. \subsection{Complexity and approximability} We continue by briefly addressing the computational properties of the \textsc{CombCI} problem. The proofs are provided in the appendix. \begin{Theorem} \label{thr:np} \textsc{CombCI} is NP-hard for all $k$. \end{Theorem} For $k>0$ the result directly follows from \cite[Theorem 3]{KorpelaPG14}, and for $k=0$ and $l>0$ a reduction from \textsc{Vertex-Cover} can be used. Now, there exists a {\em polynomial time approximation algorithm} for solving a variant of \textsc{CombCI} in the special case of $k=0$. In particular, we consider here a {\em one-sided} confidence area, defined only by the {\em upper bound} $x_u$; the lower bound $x_l$ is assumed to be fixed, e.g., at zero, or any other suitable value. This complements the earlier result that the {\em complement} of the objective function of \textsc{CombCI} is hard to approximate for $k>0$ and $l=0$ \cite[Theorem 3]{KorpelaPG14}. \begin{Theorem} \label{thr:app} There is a polynomial time $(l+1)$ approximation algorithm for the one-sided \textsc{CombCI} problem with $k=0$. \end{Theorem} The proof uses a linear programming relaxation of an {\em integer linear program} corresponding to the $k=0$ variant of the one-sided \textsc{CombCI}, with the approximation ratio obtained from the solution given by the relaxation. Finding a bound that does not depend on $l$ is an interesting open question, as is extending the proof to the two-sided \textsc{CombCI}. It is unlikely that the problem admits a better approximation bound than $2$, since this would immediately result in a new approximation algorithm for the \textsc{Vertex-Cover} problem. This is because in the proof of Theorem \ref{thr:app} we describe a simple reduction from \textsc{Vertex-Cover} to \textsc{CombCI} with $l=1$ and $k=0$. This reduction preserves approximability, as it maps the \textsc{Vertex-Cover} instance to an instance of our problem in a straightforward manner. For \textsc{Vertex-Cover} it is known that approximation ratios below $2$ are hard to obtain in the general case. Indeed, the best known bound is equal to $2 - \Theta(1/\sqrt{\log|V|})$ \cite{Karakostas2009}. \section{Algorithms} \label{sec:alg} We present two algorithms for computing $(k,l)$-confidence~areas. \subsection{Minimum intervals ({\sc mi})} Our first method is based on {\em minimum intervals}. A standard approach to define a confidence interval for univariate data is to consider the minimum length interval that contains a fraction $(1-a)$ of the observations. This can be generalized for multivariate data by treating each variable independently, i.e., for a given $a$, we set $x_l(j)$ and $x_u(j)$ equal to the lower and upper limits of such a minimum length interval for every $j$. The {\sc mi} algorithm solves the \textsc{CombCI} problem for given $k$ and $l$ by adjusting the parameter $a$ such that the resulting $(x_u, x_l)$ satisfies the constraint in Eq.~\eqref{eq:cciconstraint} in the training data set; a sketch is given below. The resulting confidence area may have a larger size than the optimal solution, since all variables use the same $a$. Note that the {\sc mi} solution differs from the naive solution mentioned in the introduction because the naive solution does not adjust $a$ but simply sets it to $a=k/n$.
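A minimal sketch of this procedure follows. It is our own simplified rendering of {\sc mi}; the exact tie-breaking and search strategy of the actual implementation may differ.
\begin{verbatim}
import numpy as np

def min_intervals(X, a):
    # per-variable shortest interval containing a (1 - a)
    # fraction of the n observations
    n, m = X.shape
    keep = max(1, int(np.ceil((1.0 - a) * n)))
    lo, up = np.empty(m), np.empty(m)
    for j in range(m):
        s = np.sort(X[:, j])
        widths = s[keep - 1:] - s[:n - keep + 1]
        i = int(np.argmin(widths))
        lo[j], up[j] = s[i], s[i + keep - 1]
    return lo, up

def mi(X, k, l, tol=1e-4):
    # shrink the common coverage parameter a as far as the
    # (k, l)-constraint of Eq. (cciconstraint) allows
    a_ok, a_bad = 0.0, 1.0   # a = 0 gives the envelope, always feasible
    while a_bad - a_ok > tol:
        a = 0.5 * (a_ok + a_bad)
        lo, up = min_intervals(X, a)
        bad = (((X < lo) | (X > up)).sum(axis=1) > l).sum()
        a_ok, a_bad = (a, a_bad) if bad <= k else (a_ok, a)
    return min_intervals(X, a_ok)
\end{verbatim}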
The time complexity of {\sc mi}, with binary search for the correct $a$, is $O(mn\log{k}\log{n})$. \begin{figure*}[t] \centering \includegraphics[width=0.999\textwidth]{figs/n_l_comparison_norm_m10} \caption{Comparison of the $k/n$ used when fitting the confidence interval on training data and the observed level of $\alpha$ in separate test data. Both data sets are normal with $m=10$, $n_{test} = 1000$, $n_{train} = \{250,1000\}$ and $l = \{0,2\}$. The dashed green line indicates $k/n = \alpha$. Shown are the averages over 25 independent trials over different randomly generated training and test instances. Note the log-scale on the vertical axis.} \label{fig:n_l_comparison} \end{figure*} \subsection{Greedy algorithm ({\sc gr})} Our second method is a greedy algorithm. The greedy search could be done either {\em bottom-up} (starting from an empty set of included data vectors and then adding $n-k$ data vectors) or {\em top-down} (starting from $n$ data vectors and excluding $k$ data vectors). Since typically $k$ is smaller than $n-k$, we consider here a top-down greedy algorithm. The idea of the greedy algorithm is to start from the envelope of the whole data and sequentially find $k$ vectors to exclude, by selecting at each iteration the vector whose removal reduces the envelope by the largest amount. In order to find the envelope w.r.t.\ the relaxed condition allowing $l$ positions from each vector to be outside, at each iteration the set of included data points needs to be (re)computed. This is done by implementing a greedy algorithm solving the \textsc{CombCI} problem for $k=0$. Here one removes individual points that result in a maximal decrease in the envelope size, so that at most $l$ points from each data vector are removed, thus obtaining the envelope w.r.t.\ the $l$ criterion. After this envelope has been computed, the data vector whose exclusion yields a maximal decrease in the size of the confidence area is excluded. For this, a slightly modified variant of the greedy MWE algorithm from \cite{KorpelaPG14} with $k=1$ is used. After $k$ data vectors have been excluded, the final set of points included in the envelope is returned. The time complexity of {\sc gr} is $O(mkn\log{n})$. \section{Empirical evaluation} \label{sect:experiments} We present here an empirical evaluation of the algorithms. In the following, {\sc mi} and {\sc gr} refer to the Minimum intervals and Greedy algorithms, respectively. \subsection{Datasets} We make experiments using various datasets that reflect different properties of interest from the point of view of fitting confidence areas. In particular, we want to study the effects of correlation (autocorrelation in the case of time series or regression curves), the number of variables, and distributional qualities. \squishlist \item {\bf Artificial data.} We use artificial multivariate (i.i.d.~variables) datasets sampled from the uniform distribution (in the interval $[0,1]$), the standard normal distribution, and the Cauchy distribution (location parameter $0$, scale parameter $\gamma = 2$), with varying $n$ and $m$ to study some theoretical properties of multivariate confidence areas. \item {\bf Kernel regression data.} We use the Ozone and South African heart disease (SA heart) datasets (see, e.g., \cite{eslbook}) to compute confidence areas for bootstrapped kernel regression estimates. We use a simple Nadaraya-Watson kernel regression estimate to produce the vectors $X$, and fit confidence intervals to these using our algorithms.
By changing the number of bootstrap samples we can vary $n$, and by altering the number of points in which the estimate is computed we can vary $m$. \item {\bf Stock market data.} We obtained daily closing prices of $n = 400$ stocks for the years 2011--2015 ($m = 1258$ trading days). The time series are normalized to reflect the absolute change in stock price with respect to the average price of the first five trading days in the data. The data is shown in Fig.~\ref{fig:stock2}. \squishend \vspace{-2ex} \subsection{Finding the correct $\mathbf{k}$} \label{sec:match} Our algorithms both solve the {\sc CombCI} problem (Problem~\ref{prob:cci}), in which we must specify the number of vectors $k$ that are allowed to violate the confidence area. To obtain a matching $\alpha$ in {\sc ProbCI} (Definition \ref{def:match}) we must choose the parameter $k$ appropriately. We study here how this can be achieved; a sketch of the procedure is given at the end of this subsection. \begin{figure}[t] \includegraphics[width=\columnwidth]{figs/l_vs_alpha_comparison} \caption{Effect of the value of $l_{\operatorname{test}}$ on the observed $\alpha$ for different values of $l_{\operatorname{train}}$. On the left $l_{\operatorname{train}}$ is equal to $0$ (black), $1$ (red), $2$ (green) and $3$ (blue), while on the right $l_{\operatorname{train}}$ is $0$ (black), $5$ (red), $10$ (green), and $15$ (blue). In every case the confidence area was trained to have $\alpha = 0.1$ for the given $l_{\operatorname{train}}$.} \label{fig:l_vs_alpha} \end{figure} Fig.~\ref{fig:n_l_comparison} shows $\alpha$ as a function of $k/n$ in data with $m = 10$ independent normally distributed variables for four combinations of $n$ and $l$. The dashed green line shows the situation with $k/n = \alpha$. (This is a desirable property, as it means that fine-tuning $k/n$ is not necessary to reach some particular value of $\alpha$.) We observe from Fig.~\ref{fig:n_l_comparison} that when the data is relatively small ($n=250$), for a given $k/n$ both {\sc gr} and {\sc mi} tend to find confidence areas that are somewhat too narrow for the test data, leading to $\alpha > k/n$. To obtain some particular $\alpha$, we must thus set $k/n$ to a lower value. For example, with $n=250$ and $l=0$, to have $\alpha = 0.1$ we must let $k/n = 0.05$. As the number of training examples is increased to $n=1000$, we find that both algorithms are closer to the ideal situation of $k/n = \alpha$. Interestingly, this happens also when $l$ increases from $l=0$ to $l=2$. The relaxed notion of confidence area we introduce in this paper is thus somewhat less prone to overfitting when compared against the basic confidence areas with $l=0$ of \cite{KorpelaPG14}. On the other hand, for $n=1000$ we also observe how {\sc gr} starts to ``underfit'' as $k/n$ increases, meaning that we have $\alpha < k/n$. Errors in this direction, however, simply mean that the resulting confidence area is conservative and will satisfy the given $k/n$ by a margin. The dependency between $k/n$ and $\alpha$ and the other observations made above are qualitatively identical for uniform and Cauchy distributed data (not shown).
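In practice this matching can be automated with a simple search over $k$, as in the following sketch. This is our own illustration; it assumes a fitting routine such as the {\sc mi} sketch above and that the observed $\alpha$ grows approximately monotonically with $k$.
\begin{verbatim}
import numpy as np

def observed_alpha(X_test, lo, up, l):
    # fraction of test vectors violating more than l intervals
    return (((X_test < lo) | (X_test > up)).sum(axis=1) > l).mean()

def calibrate_k(X_train, X_test, l, alpha_target, fit):
    # largest k whose fitted area still satisfies alpha <= alpha_target
    # on held-out data (cf. the matching of Definition `match')
    k_lo, k_hi = 0, X_train.shape[0] - 1
    while k_lo < k_hi:
        k = (k_lo + k_hi + 1) // 2
        lo, up = fit(X_train, k, l)
        if observed_alpha(X_test, lo, up, l) <= alpha_target:
            k_lo = k
        else:
            k_hi = k - 1
    return k_lo

# usage with the mi sketch above:
#   k = calibrate_k(X_train, X_test, l=2, alpha_target=0.1, fit=mi)
\end{verbatim}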
\subsection{Effect of $l$ on $\alpha$ in test data} Note that the value of $\alpha$ that we compute for a given confidence area also depends on the value of $l$ used when {\em evaluating} the area. We continue by showing how $\alpha$ depends on the value of $l$ used when evaluating the confidence area on test data. In this experiment we train confidence areas using the Ozone data (with $n=500$ and $m=10$ or $m=50$) and adjust $k/n$ so that we have $\alpha = 0.1$ for a given value of $l_{\operatorname{train}}$, where $l_{\operatorname{train}} \in \{ 0, 0.1m, 0.2m, 0.3m\}$ is the parameter given to our algorithms ({\sc mi}, {\sc gr}) to solve Prob.~\ref{prob:cci}. Then we estimate $\alpha$ for all $l_{\operatorname{test}} \in \{1, \ldots, m\}$ using Eq.~\eqref{eq:p1} and a previously unseen test data set. Results are shown in Fig.~\ref{fig:l_vs_alpha}. We find that in every case the line for a given value of $l_{\operatorname{train}}$ intersects the thin dashed (red) line (indicating $\alpha = 0.1$) at the correct value $l_{\operatorname{test}} = l_{\operatorname{train}}$. More importantly, however, we also find that $l_{\operatorname{test}}$ has a very strong effect on the observed $\alpha$. This means that if we relax the condition under which an example still belongs in the confidence area (i.e., increase $l_{\operatorname{test}}$), $\alpha$ drops at a very fast rate, meaning that a confidence area trained for a particular value of $l_{\operatorname{train}}$ will be rather conservative when evaluated using $l_{\operatorname{test}} > l_{\operatorname{train}}$. Obviously the converse holds as well, i.e., when $l_{\operatorname{test}} < l_{\operatorname{train}}$, the confidence area becomes much too narrow very fast. This implies that $l_{\operatorname{train}}$ should be set conservatively (to a low value) when it is important to control the false positive rate, e.g., when the ``true'' number of noisy dimensions is unknown. \subsection{Algorithm comparison} Next we briefly compare the {\sc mi} and {\sc gr} algorithms in terms of the confidence area size (given as $A/m$, where $m$ is the number of variables) and running time $t$ (in seconds). Results for the artificial data sets as well as the two regression model data sets are shown in Table~\ref{table:results_table}. The confidence level (in test data) was set to $\alpha = 0.1$ in every case, and all training data had $n = 500$ examples. All numbers are averages over 25 trials. We observe that {\sc mi} tends to produce confidence areas that are consistently somewhat smaller than those found by {\sc gr}. Also, {\sc mi} is substantially faster, albeit our implementation of {\sc gr} is by no means optimized. Finally, the $k/n$ column shows the confidence level that was used when fitting the confidence area to obtain $\alpha = 0.1$. Almost without exception, we have $k/n < \alpha$ for both algorithms, with {\sc mi} usually requiring a slightly larger $k$ than {\sc gr}. Also worth noting is that for extremely heavy-tailed distributions, e.g., Cauchy, the confidence area shrinks rapidly as $l$ is increased from zero. \begin{table} \begin{footnotesize} \input{results_table.tex} \end{footnotesize} \caption{Comparison between {\sc mi} and {\sc gr}.} \label{table:results_table} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.96\columnwidth]{figs/saheart_sbp_kreg}\\ \includegraphics[width=0.96\columnwidth]{figs/ozone_kreg} \caption{\label{fig:kreg_examples} Using {\sc gr} and {\sc mi} on bootstrapped kernel regression estimates ($n=250$, $m=30$, $l \in \{0, 5\}$). Top: Probability of coronary heart disease as a function of systolic blood pressure in the SA heart data.
Bottom: Ozone level as a function of observed radiation.} \end{figure} \subsection{Application to regression analysis} We can model the accuracy of an estimate given by a regression model by resampling the data points, e.g., by the bootstrap method, and then refitting the model for each of the resampled data sets \cite{Efron93}. The estimation accuracy, or the spread of values for a given independent variable, can be readily visualized using confidence intervals. Fig.~\ref{fig:kreg_examples} shows examples of different confidence intervals fitted to bootstrapped kernel regression estimates on two different datasets using both $l=0$ and $l=5$ ($n=250$ and $m=30$). In both cases $k$ was adjusted so that $\alpha = 0.1$ in separate test data. We find that qualitatively there is very little difference between {\sc mi} and {\sc gr} when $l=5$. For $l=0$, {\sc gr} tends to produce a somewhat narrower area. In general, this example illustrates the effect of $l$ on the resulting confidence area in visual terms. By allowing the examples to lie outside the confidence bounds for a few variables ($l=5$) we obtain substantially narrower confidence areas that still reflect very well the overall data distribution. \subsection{Stock market data} The visualization of the stock market data in Fig.~\ref{fig:stock2} has been constructed using the {\sc gr} algorithm with parameters $k=40$ and $l=125$. The figure shows in yellow the stocks that are clear outliers and among the $k=40$ anomalously behaving observations ignored in the construction of the confidence area. The remaining $n-k=360$ stocks (shown in blue) remain within the confidence area at least $\frac{m-l}{m}=90$\% of the time. However, they are allowed to deviate from the confidence area 10\% of the time. Fig.~\ref{fig:stock2} shows several such stocks, one of them highlighted in red. By allowing these excursions, the confidence area is smaller and these potentially interesting deviations are easy to spot. For example, the red line represents Mellanox Technologies; market analyses from fall 2012 reported the stock as being overpriced at that time. The black line in Fig.~\ref{fig:stock2} represents Morning\-star, an example staying inside the confidence area. If we do not allow any deviations outside the confidence intervals, i.e., if we set $l=0$, then the confidence area will be larger and such deviations might be missed. \begin{figure*}[t] \begin{center} \includegraphics[width=0.94\textwidth]{figs/stockdata_blw5_K40_L125_cropped.pdf} \caption{\label{fig:stock2} Visualization of the relative closing values of 400 stocks from Jan.~2011 to Dec.~2015 (1258 days) compared to the starting days. The confidence area with parameters $k=40$ and $l=125$ is shown with blue lines. An example of a valuation of a stock that temporarily deviates (in fewer than $l$ time points) from the confidence area is shown in red, and an example of a valuation for a stock exhibiting ``normal'' development is shown in black.} \end{center} \end{figure*} \section{Concluding remarks} \label{sec:concl} The versatility of confidence intervals stems from their simplicity. They are easy to understand and to interpret, and are therefore often used in the presentation and interpretation of multivariate data. The application of confidence intervals to multivariate data is, however, often done in a naive way, disregarding the effects of multiple variables. This may lead to false interpretations of results if the user is not careful.
In this paper we have presented a generalization of confidence intervals to multivariate data vectors. The generalization is simple and understandable and does not sacrifice the interpretability of one-dimensional confidence intervals. The confidence areas defined this way behave intuitively and offer insight into the data. The problem of finding confidence areas is computationally hard. We present two efficient algorithms to solve the problem and show that even a rather simple approach ({\sc mi}) can produce very good results. Confidence intervals are an intuitive and useful tool for visually presenting and analyzing data sets, and for spotting trends, patterns, and outliers. The advantage of confidence intervals is that they give at the same time information about both the statistical significance of the result and the size of the effect. In this paper, we have shown several examples demonstrating the usefulness of multivariate confidence intervals, i.e., confidence areas. In addition to visual tasks, the confidence intervals can also be used in automated algorithms as simple and robust distributional estimators. As the toy example of Fig. \ref{fig:toy} shows, the confidence areas with $l>0$ can be a surprisingly good distributional estimator if the data is sparse, i.e., if a majority of the variables is close to the mean and in each data vector only a small number of variables have significant deviations from the average. With p-values there are established procedures to deal with multiple hypothesis testing. Indeed, a proper treatment of the multiple comparisons problem is required, e.g., in scientific reporting. Our contribution to the discussion about reporting scientific results is to point out that it is indeed possible to treat multidimensional data with confidence intervals in a principled~way. \bibliographystyle{abbrv} \begin{small}
\section{Introduction}\label{sec1} A large number of papers have been devoted to the study of the asymptotic behavior of solutions of parabolic problems under various assumptions and in different contexts: for a review on classical results see \cite{f}, \cite{a}, \cite{sp}, and references therein. More recently, in \cite{pe} and \cite{lp}, the case of nonlinear monotone operators, and quasilinear problems with nonlinear absorbing terms having natural growth, have been considered; in particular, in \cite{pe}, we dealt with nonnegative measures $\mu$ absolutely continuous with respect to the parabolic $p$-capacity (the so called \emph{soft measures}). Here we analyze the case of linear operators with possibly singular general measures and no sign assumptions on the data. Let $\Omega\subseteq \mathbb{R}^{N}$ be a bounded open set, $N\geq 2$, $T>0$; we denote by $Q$ the cylinder $(0,T)\times\Omega$. We are interested in the study of the main properties and of the asymptotic behavior with respect to the time variable $t$ of the solution of the linear parabolic problem \begin{equation}\label{plin3} \begin{cases} u_t +L (u)=\mu& \text{in}\ (0,T)\times\Omega,\\ u(0)=u_0& \text{in}\ \Omega,\\ u=0& \text{on}\ (0,T)\times\partial\Omega, \end{cases} \end{equation} with $\mu\in \mathcal{M}(Q)$, the space of Radon measures with bounded total variation on $Q$, $u_0\in L^{1}(\Omega)$, and $$L(u)=-\mathrm{div}(M(x)\nabla u),$$ where $M$ is a matrix with bounded, measurable entries, satisfying the ellipticity assumption \begin{equation} \label{coercp} M(x)\xi\cdot\xi\geq \alpha |\xi|^2, \end{equation} for any $\xi\in\mathbb{R}^{N}$, with $\alpha >0$. In order to obtain uniqueness in the elliptic case, the notion of duality solution of the Dirichlet problem \begin{equation}\label{elin3} \begin{cases} -\mathrm{div}{(M(x)\nabla v)} =\mu & \text{in}\ \Omega,\\[1.5 ex] v=0 &\text{on}\ \partial\Omega, \end{cases} \end{equation} was introduced in \cite{s}. Following the idea of \cite{s} we can define a solution of problem \rife{plin3} in a duality sense as follows. \begin{definition}\label{dualdef} A function $u\in L^{1}(Q)$ is a \emph{duality solution} of problem \rife{plin3} if \begin{equation}\label{dualsense} -\int_{\Omega} u_{0}\, w(0)\ dx+\int_{Q} u\, g\ dxdt=\int_{Q} w\ d\mu, \end{equation} for every $g\in L^{\infty}(Q)$, where $w$ is the solution of the \emph{retrograde problem} \begin{equation}\label{retroc1} \begin{cases} -w_{t}- \mathrm{div} (M^{\ast}(x)\nabla w) =g & \text{in}\ (0,T)\times\Omega,\\ w(T,x)=0 & \text{in}\ \Omega,\\ w(t,x)=0 &\text{on}\ (0,T)\times\partial\Omega, \end{cases} \end{equation} where $M^{\ast}(x)$ is the transposed matrix of $M(x)$. \end{definition} \begin{remark} Notice that all terms in \rife{dualsense} are well defined thanks to standard parabolic regularity results (see \cite{lsu}, \cite{e}). Moreover, it is quite easy to check that any duality solution of problem \rife{plin3} actually turns out to be a distributional solution of the same problem. Finally, recall that any duality solution turns out to coincide with the renormalized solution of the same problem (see \cite{pe1}); this notion, introduced in \cite{dmop} for the elliptic case and then adapted to the parabolic case in \cite{pe1}, should be the right one to ensure uniqueness also in the nonlinear framework.
\end{remark} A unique duality solution for problem \rife{plin3} exists; in fact, we have the following \begin{theorem}\label{exiuni} Let $\mu\in \mathcal{M}(Q)$ and $u_0\in L^{1}(\Omega)$. Then there exists a unique duality solution of problem \rife{plin3}. \end{theorem} The main result of this paper concerns the asymptotic behavior of the duality solution of problem \rife{plin3} in the case where the measure $\mu$ does not depend on time. First observe that, by Theorem \ref{exiuni}, a unique solution is well defined for all $t>0$. We are interested in the asymptotic behavior of $u(t,x)$ as $t$ tends to infinity. We recall that by a duality solution of problem \rife{elin3} we mean a function $v\in L^{1}(\Omega)$ such that \begin{equation}\label{elldualsense} \int_{\Omega} v\, g\ dx=\int_{\Omega} z\ d\mu, \end{equation} for every $g\in L^{\infty}(\Omega)$, where $z$ is the variational solution of the dual problem \begin{equation}\label{dualell} \begin{cases} -\mathrm{div} (M^{\ast}(x)\nabla z) =g & \text{in}\ \ \Omega,\\ z(x)=0 &\text{on}\ \ \partial\Omega. \end{cases} \end{equation} As we will see later, a duality solution of problem \rife{plin3} turns out to be continuous with values in $L^{1}(\Omega)$. Let us state our main result: \begin{theorem}\label{asi} Let $\mu\in \mathcal{M}(Q)$ be independent of the variable $t$. Let $u(t,x)$ be the duality solution of problem \rife{plin3} with $u_0 \in L^{1}(\Omega)$, and let $v(x)$ be the duality solution of the corresponding elliptic problem \rife{elin3}. Then \[ \lim_{T\rightarrow +\infty} u(T,x)=v(x) \] in $L^{1}(\Omega)$. \end{theorem} \setcounter{equation}{0} \section{Existence and uniqueness of the duality solution} Let us prove Theorem \ref{exiuni}: \begin{proof} Let us first prove the result in the case $\mu\in L^{1}(Q)$ and $u_0$ smooth; let us fix $r,q\in\mathbb{R}$ such that \[ r, \,q>1,\ \ \ \ \ \frac{N}{q}+\frac{2}{r}<2\,, \] and let us consider $g\in \parl{r}{q}\cap L^{\infty}(Q)$. Let $w$ be the solution of problem \rife{retroc1}; standard parabolic regularity results (see again \cite{lsu}) imply that $w$ is continuous on $Q$ and \[ \|w\|_{L^{\infty}(Q)}\leq C \|g\|_{\parl{r}{q}}; \] therefore, the linear functional \[ \Lambda:\parl{r}{q}\rightarrow\mathbb{R}, \] defined by \[ \Lambda(g)=\int_{Q} w\ d\mu + \int_{\Omega} u_0\, w(0)\ dx\,, \] is well defined and continuous, since \[ |\Lambda (g)|\leq (\|\mu\|_{\mathcal{M}(Q)}+\|u_0\|_{L^{1}(\Omega)})\|w\|_{L^{\infty}(Q)} \leq C \|g\|_{\parl{r}{q}}. \] So, by \emph{Riesz's representation theorem} there exists a unique $u\in\parl{r'}{q'}$ such that \[ \Lambda(g)=\int_{Q} u\,g\ dxdt, \] for any $g\in\parl{r}{q}$. So we have that, if $\mu\in L^{1}(Q)$ and $u_0$ is smooth, then there exists a (unique by construction) duality solution of problem \rife{plin3}. A standard approximation argument shows that a unique solution also exists for problem \rife{plin3} if $\mu\in \mathcal{M}(Q)$ and $u_0\in L^{1}(\Omega)$.
In fact, via a standard convolution argument, we can approximate $u_0$ in $L^{1}(\Omega)$ with smooth functions $u_0^\varepsilon$, and $\mu$ with smooth functions $\mu^\varepsilon$ in the narrow topology of measures, that is, $$ \lim_{\varepsilon\to 0}\int_{Q}\varphi\ d\mu^\varepsilon = \int_{Q}\varphi\ d\mu, \ \ \ \forall \ \varphi\in C(\overline{Q}), $$ and $\|\mu^\varepsilon\|_{L^{1}(Q)}\leq C$. Hence, reasoning as in the proof of Theorem $1.2$ in \cite{bdgo}, one can show that there exists a function $u\in L^{1}(Q)$ such that $u^\varepsilon$, the duality solution corresponding to the data $\mu^\varepsilon$ and $u_0^\varepsilon$, converges to $u$ in $L^{1}(Q)$, and so we can pass to the limit in the duality formulation of $u^\varepsilon$ to obtain the result. \end{proof} \setcounter{equation}{0} \section{Asymptotic behavior} In this section we will prove Theorem \ref{asi}. From now on we will denote by $T_k (s)$ the function $\max(-k,\min(k,s))$, and $\Theta_k (s)$ will indicate its primitive function, that is, \[ \Theta_{k}(s)=\int_0^s T_k (\sigma)\ d\sigma . \] Let us prove the following preliminary result: \begin{proposition}\label{vsol} Let $\mu\in \mathcal{M}(Q)$ be independent of time and let $v$ be the duality solution of the elliptic problem \begin{equation}\label{dualasiee} \begin{cases} -\mathrm{div}(M(x)\nabla v)=\mu& \text{in}\ \Omega,\\ v=0 & \text{on}\ \partial \Omega. \end{cases} \end{equation} Then $v$ is the unique solution of the parabolic problem \begin{equation}\label{dualasiep} \begin{cases} w_t -\mathrm{div}(M(x)\nabla w)=\mu& \text{in}\ (0,T)\times\Omega,\\ w(0)=v(x) & \text{in}\ \Omega,\\ w(t,x)=0 & \text{on}\ (0,T)\times\partial\Omega, \end{cases} \end{equation} in the duality sense introduced in Definition \ref{dualdef}, for any fixed $T>0$. \end{proposition} \begin{proof} We have to check that $v$ is a solution of problem \rife{dualasiep}; to do that, let us choose $T_{k}(v)$ as test function in \rife{retroc1}. We obtain $$ -\int_0^T\langle w_t,T_{k}(v)\rangle \ dt+\int_{Q} M^{\ast} (x) \nabla w\cdot\nabla T_{k}(v)\ dxdt =\int_{Q} T_{k}(v)\, g\ dxdt. $$ Now, integrating by parts, we have \[ -\int_0^T\langle w_t,T_{k}(v)\rangle \ dt=\int_{\Omega} w(0)\, v(x)\ dx +\omega(k), \] where $\omega(k)$ denotes a quantity which vanishes as $k$ diverges, while \[ \int_{Q} T_{k}(v)\, g\ dxdt= \int_{Q} v\,g \ dxdt +\omega(k). \] Finally, using Theorem $2.33$ and Theorem $10.1$ of \cite{dmop}, we have \[ \int_{Q} M^{\ast}(x) \nabla w\cdot\nabla T_{k}(v)\ dxdt=\int_{Q} M(x)\nabla T_{k}(v)\cdot\nabla w\ dxdt=\int_0^T \int_{\Omega} w \ d\lambda_k (x)\, dt, \] where the $\lambda_k $ are measures in $\mathcal{M}(\Omega)$ which converge to $\mu$ in the narrow topology of measures; thus, recalling that $w$ is bounded and continuous, and using the dominated convergence theorem, we have \[ \int_{Q} M^{\ast} (x) \nabla w\cdot\nabla T_{k}(v)\ dxdt=\int_{Q} w\ d\mu +\omega(k). \] Gathering together all these facts, we have that $v$ is a duality solution of \rife{plin3} having itself as initial datum.
\end{proof} Proposition \ref{vsol} allows us to deduce that the duality solution $u$ of problem \rife{plin3} belongs to $C(0,T;L^{1}(\Omega))$ for any fixed $T>0$; indeed, $z=u- v$ uniquely solves the problem \begin{equation}\label{dualasiz} \begin{cases} z_t -\mathrm{div}(M(x)\nabla z)=0& \text{in}\ (0,T)\times\Omega,\\ z(0)=u_0 - v & \text{in}\ \Omega,\\ z=0 & \text{on}\ (0,T)\times\partial\Omega, \end{cases} \end{equation} in the duality sense, and so $z\in C(0,T;L^{1}(\Omega))$. This is due to a result of \cite{po}, since $z$ turns out to be an entropy solution in the sense of the definition given in \cite{p}. So, we have that $u$ satisfies \begin{equation}\label{dualdual} \int_{Q} u\, g\ dxdt=\int_{Q} w\ d\mu+\int_{\Omega} u_0 \, w(0)\ dx, \end{equation} for any $g\in L^{\infty}(Q)$, where $w$ is the unique solution of the retrograde problem \begin{equation}\label{retasi} \begin{cases} -w_{t}- \mathrm{div} (M^{\ast} (x)\nabla w) =g & \text{in}\ (0,T)\times\Omega,\\ w(T,x)=0 & \text{in}\ \Omega,\\ w(t,x)=0 &\text{on}\ (0,T)\times\partial\Omega. \end{cases} \end{equation} Therefore, as we said before, for fixed $\mu$ and $g\in L^{\infty}(Q)$ one can uniquely determine $u$ and $w$, solutions of the above problems, defined for any time $T>0$. Moreover, let us give the following definition: \begin{definition} A function $u\in L^{1}(Q)$ is a \emph{duality supersolution} of problem \rife{plin3} if \[ \int_{Q} u \, g\ dxdt \geq \int_{Q} w\ d\mu +\int_{\Omega} u_0\, w(0)\ dx, \] for any bounded $g\geq 0$, with $w$ the solution of \rife{retasi}, while $u$ is a \emph{duality subsolution} if $-u$ is a duality supersolution. \end{definition} \begin{lemma}\label{asilemma} Let $\overline{u}$ and $\underline{u}$ be, respectively, a duality supersolution and a duality subsolution for problem \rife{plin3}. Then $\underline{u}\leq\overline{u}$. \end{lemma} \begin{proof} Simply subtract the formulations for $\underline{u}$ and $\overline{u}$ to obtain \[ \int_{Q} (\underline{u}-\overline{u})g\ dxdt\leq 0, \] for any $g\geq 0$, and so $\underline{u}\leq\overline{u}$. \end{proof} \begin{remark}\label{perognit} Observe that, if the functions in Lemma \ref{asilemma} are continuous with values in $L^{1}(\Omega)$, then we actually have that $\underline{u}(t,x)\leq\overline{u}(t,x)$ for every fixed $t$, a.e. on $\Omega$. \end{remark} \begin{proof}[Proof of Theorem \ref{asi}] We split the proof into a few steps. {\it Step $1$.} Let us first suppose $u_0 =0$ and $\mu\geq 0$. If we consider a parameter $s>0$, we have that both $u(t,x)$ and $u_{s}(t,x)\equiv u(t+s,x)$ are duality solutions of problem \rife{plin3} with, respectively, $0$ and $u(s,x)\geq 0$ as initial datum; so, from Lemma \ref{asilemma} we deduce that $u(t+s,x)\geq u(t,x)$ for $t,s>0$. Therefore $u$ is a monotone nondecreasing function in $t$, and so it converges to a function $\tilde{v}(x)$ almost everywhere and, by monotone convergence, in $L^{1}(\Omega)$, since, thanks to Proposition \ref{vsol} and Lemma \ref{asilemma}, $u(t,x)\leq v(x)$.
\\ Now, recalling that $u$ is obtained as the limit of regular solutions with smooth data $\mu_\varepsilon$, we can define $u^{n}_{\varepsilon} (t,x)$ as the solution of \begin{equation}\label{dualasitn} \begin{cases} (u^{n}_{\varepsilon})_t -\mathrm{div}(M(x)\nabla u^{n}_{\varepsilon})=\mu_\varepsilon & \text{in}\ (0,1)\times\Omega,\\ u^{n}_{\varepsilon}(0,x)=u_\varepsilon (n,x) & \text{in}\ \Omega,\\ u^{n}_{\varepsilon}= 0 & \text{on}\ (0,1)\times\partial\Omega. \end{cases} \end{equation} On the other hand, if $g\geq 0$, we define $w^n (t,x)$ as the solution of \begin{equation}\label{retasin} \begin{cases} -w^{n}_{t}- \mathrm{div} (M^{\ast} (x)\nabla w^n) =g & \text{in}\ (0,1)\times\Omega,\\ w^n (1,x)=w(n+1,x) & \text{in}\ \Omega,\\ w^n =0 &\text{on}\ (0,1)\times\partial\Omega. \end{cases} \end{equation} Recall that, through the change of variable $s=T-t$, $w$ solves a similar linear parabolic problem, so that, if $g\geq 0$, by classical comparison results one has that $w(t,x)$ is decreasing in time. Moreover, by the comparison principle, we have that $w^n $ is monotone with respect to $n$ and, again by the comparison Lemma \ref{asilemma}, we have that, for fixed $t\in (0,1)$, \[ w^{n}(1,x)\leq w^{n}(t,x)=w(n+t,x)\leq w(n,x)=w^{n-1}(1,x), \] and so its limit $\tilde{w}$ does not depend on time and is the solution of \begin{equation}\label{retasie} \begin{cases} - \mathrm{div} (M^{\ast} (x)\nabla \tilde{w}) =g & \text{in}\ \Omega,\\ \tilde{w}(x)=0 &\text{on}\ \partial\Omega . \end{cases} \end{equation} An analogous argument shows that also the limit of $u^n$ (which exists thanks to standard compactness arguments, see for instance \cite{bdgo} again) does not depend on time. Thus, using $u^{n}_{\varepsilon}$ as test function in \rife{retasin} and $w^n$ in \rife{dualasitn}, integrating by parts, subtracting, and passing to the limit over $\varepsilon$, we obtain \[ \int_0^1 \int_{\Omega} u^n \, g\ dxdt -\int_0^1 \int_{\Omega} w^n\ d\mu\, dt +\int_{\Omega} u^n (0)\, w^n (0)\ dx -\int_{\Omega} u^n (1)\, w^n (1)\ dx=0. \] Hence, we can pass to the limit in $n$ using the monotone convergence theorem, obtaining \begin{equation}\label{dacita} \int_{\Omega} \tilde{v} \, g\ dx - \int_{\Omega} \tilde{w}\ d\mu=0, \end{equation} and so $v=\tilde{v}$. If $g$ has no sign, we can reason separately with $g^+$ and $g^-$, obtaining \rife{dacita}, and then use the linearity of \rife{dualdual} to conclude. If $v$ is the duality solution of problem \rife{elin3}, we proved in Proposition \ref{vsol} that $v$ is also the duality solution of the initial boundary value problem \rife{plin3} with $v$ itself as initial datum. Therefore, by the comparison Lemma \ref{asilemma}, if $0\leq u_{0}\leq v$, we have that the solution $u(t,x)$ of \rife{plin3} converges to $v$ in $L^{1}(\Omega)$ as $t$ tends to infinity; in fact, we proved it for the duality solution with homogeneous initial datum, while $v$ is a nonnegative duality solution with itself as initial datum. {\it Step $2$.} Now, let us take $u_\lambda (t,x)$, the solution of problem \rife{plin3} with $u_{0}= \lambda v$ as initial datum for some $\lambda>1$ and again $\mu\geq 0$. Hence, since $\lambda v$ does not depend on time, we have that it is a duality supersolution of the parabolic problem \rife{plin3}, and, observing that $v$ is a subsolution of the same problem, we can apply again the comparison lemma, finding that $v(x)\leq u_\lambda (t,x)\leq \lambda v(x) $ a.e. in $\Omega$, for all positive $t$.
Moreover, thanks to the fact that the datum $\mu$ does not depend on time, we can apply the comparison result also between $u_\lambda (t+s,x)$, the solution with $u_{0}=u_\lambda (s,x)$, with $s$ a positive parameter, and $u_\lambda (t,x)$, the solution with $u_{0}=\lambda v$ as initial datum; so we obtain $u_\lambda (t+s,x)\leq u_\lambda (t,x)$ for all $t,s>0$, a.e. in $\Omega$. So, by virtue of this monotonicity result, we have that there exists a function $\overline{v}\geq v$ such that $u_\lambda (t,x)$ converges to $\overline{v}$ a.e. in $\Omega$ as $t$ tends to infinity. Clearly $\overline{v}$ does not depend on $t$, and we can develop the same argument used before to prove that we can pass to the limit in the approximating duality formulation, and so, by uniqueness, we obtain that $\overline{v}=v$. So, we have proved that the result holds for the solution starting from $u_{0}= \lambda v $ as initial datum, with $\lambda >1$ and $\mu\geq 0$. Since we proved before that the result also holds true for the solution starting from $u_{0}=0$, then, again applying a comparison argument, we can conclude in the same way that the convergence to $v$ holds true for solutions starting from any $u_{0}$ such that $0\leq u_{0}\leq \lambda v$, for fixed $\lambda>1$. {\it Step $3$.} Now, let $u_0 \in L^{1}(\Omega)$ be a nonnegative function and $\mu\geq 0$, and recall that, thanks to a suitable Harnack inequality (see \cite{t}), if $\mu\neq 0$, then $v>0$ (which implies that $\lambda v$ tends to $+\infty$ on $\Omega$ as $\lambda$ diverges). Without loss of generality we can suppose $\mu\neq 0$ (the case $\mu\equiv 0$ is the easier one and can be proved as in \cite{pe}); let us define the monotone nondecreasing (with respect to $\lambda$) family of functions \[ u_{0,\lambda}=\min(u_{0},\lambda v). \] As we have shown above, for every fixed $\lambda>1$, $u_{\lambda}(t,x)$, the duality solution of problem \rife{plin3} with $u_{0,\lambda}$ as initial datum, converges to $v$ a.e. in $\Omega$ as $t$ tends to infinity. Moreover, using again standard compactness arguments, we also have that $T_{k}(u_{\lambda}(t,x))$ converges to $T_{k}(v)$ weakly in $H^1_0 (\Omega)$ as $t$ diverges, for every fixed $k>0$. Also, thanks to the \emph{Lebesgue theorem}, we can easily check that $u_{0,\lambda}$ converges to $u_{0}$ in $L^{1}(\Omega)$ as $\lambda$ tends to infinity. Therefore, using a stability result for renormalized solutions of the linear problem \rife{plin3} (see \cite{pe1}), we obtain that $T_{k}(u_{\lambda}(t,x))$ converges to $T_{k}(u(t,x))$ strongly in $L^2 (0,T;H^1_0 (\Omega))$ as $\lambda$ tends to infinity. On the other hand, since $z_\lambda=u-u_\lambda$ solves the problem \begin{equation} \begin{cases} (z_\lambda)_t -\mathrm{div}(M(x)\nabla z_\lambda)=0& \text{in}\ (0,T)\times\Omega,\\ z_\lambda (0)=u_0-u_{0,\lambda} & \text{in}\ \Omega,\\ z_\lambda =0 & \text{on}\ (0,T)\times\partial\Omega, \end{cases} \end{equation} in the duality sense, $z_\lambda$ turns out to be an entropy solution of the same problem, and so we have (see \cite{p}) \[ \int_{\Omega} \Theta_{k}(u-u_{\lambda})(t)\ dx\leq \int_{\Omega}\Theta_{k}(u_{0}-u_{0,\lambda})\ dx, \] for every $k,\ t>0$. Dividing the above inequality by $k$, and passing to the limit as $k$ tends to $0$, we obtain \begin{equation}\label{unif} \|u(t,x)-u_{\lambda}(t,x)\|_{L^{1}(\Omega)}\leq \|u_{0}(x)-u_{0,\lambda}(x)\|_{L^{1}(\Omega)}, \end{equation} for every $t>0$.
Hence, we have \[ \|u(t,x)-v(x)\|_{L^{1}(\Omega)}\leq \|u(t,x)-u_{\lambda}(t,x)\|_{L^{1}(\Omega)}+\|u_{\lambda}(t,x)-v(x)\|_{L^{1}(\Omega)}; \] then, thanks to the fact that the estimate in (\ref{unif}) is uniform in $t$, for every fixed $\epsilon$ we can choose $\bar{\lambda}$ large enough such that \[ \|u(t,x)-u_{\bar{\lambda}}(t,x)\|_{L^{1}(\Omega)}\leq \frac{\epsilon}{2}, \] for every $t>0$; on the other hand, thanks to the result proved above, there exists $\bar{t}$ such that \[ \|u_{\bar{\lambda}}(t,x)-v(x)\|_{L^{1}(\Omega)}\leq \frac{\epsilon}{2}, \] for every $t>\bar{t}$, and this concludes the proof of the result in the case of nonnegative data $\mu$ and $u_0\in L^{1}(\Omega)$. {\it Step $4$.} Let $\mu\in \mathcal{M}(Q)$ be independent of $t$ and $u_0 \in L^{1}(\Omega)$ with no sign assumptions. We consider again the function $z(t,x)= u(t,x)-v(x)$; thanks to Proposition \ref{vsol}, it turns out to solve the problem \begin{equation}\label{dualasize} \begin{cases} z_t -\mathrm{div}(M(x)\nabla z)=0& \text{in}\ (0,T)\times\Omega,\\ z(0)=u_0 - v & \text{in}\ \Omega,\\ z=0 & \text{on}\ (0,T)\times\partial\Omega, \end{cases} \end{equation} and so, if either $u_0 \leq v$ or $u_0\geq v$, then the result is true, since $z(t,x)$ tends to zero in $L^{1}(\Omega)$ as $t$ diverges thanks to what we proved above. Now, if $u^\oplus$ solves $$ \begin{cases} u^{\oplus}_{t} -\mathrm{div}(M(x)\nabla u^\oplus)=\mu & \text{in}\ (0,T)\times\Omega,\\ u^\oplus (0)=\max{(u_0 , v)} & \text{in}\ \Omega,\\ u^\oplus=0 & \text{on}\ (0,T)\times\partial\Omega, \end{cases} $$ and $u^\ominus$ solves $$ \begin{cases} u^{\ominus}_{t} -\mathrm{div}(M(x)\nabla u^\ominus)=\mu & \text{in}\ (0,T)\times\Omega,\\ u^\ominus (0)=\min{(u_0 , v)} & \text{in}\ \Omega,\\ u^\ominus=0 & \text{on}\ (0,T)\times\partial\Omega, \end{cases} $$ then by comparison we have $u^\ominus (t,x)\leq u(t,x)\leq u^\oplus (t,x)$ for any $t$, a.e. in $\Omega$, and this concludes the proof, since the result holds true for both $u^\oplus$ and $u^\ominus$. \end{proof}
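\begin{remark} For the reader's convenience, we record the elementary computation behind the last limit in Step $3$ (a straightforward consequence of the definitions of $T_k$ and $\Theta_k$): \[ \Theta_{k}(s)= \begin{cases} \dfrac{s^{2}}{2} & \text{if}\ |s|\leq k,\\[1.5 ex] k|s|-\dfrac{k^{2}}{2} & \text{if}\ |s|>k, \end{cases} \] so that $\Theta_{k}(s)/k$ converges to $|s|$ for every $s\in\mathbb{R}$ as $k$ tends to $0$; this is what turns the entropy estimate into the $L^{1}$ contraction \rife{unif}. \end{remark}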
1,477,468,750,257
arxiv
\section{Introduction} \label{sec:intro} One function of a Court of Law is to attempt to assign responsibility or blame for some undesirable outcome. In many such cases there will be relevant testimony about statistical or epidemiological evidence arising from studies done on specialized populations, but this evidence addresses the main issue only indirectly, at best. It has until now been unclear how to use such evidence to focus on the issue at hand, which involves specific individuals experiencing the undesirable outcome. Although there is a considerable literature on certain aspects of this problem---see for example \textcite{refepi}, which aims to assist US judges in managing cases involving complex scientific and technical evidence---we consider that there are important logical subtleties that have not as yet been accorded the appreciation they warrant. Here we show that, even in the (very rare) case that we have the best possible and most extensive data on the ``effects of causes'', and can accept certain very strong but necessary conditions, there will still remain irreducible uncertainty---which we can express as interval bounds---about the relevant ``probability of causation''. With less than fully perfect data, this interval uncertainty will be further compounded by statistical uncertainty. Such multiple forms of uncertainty raise subtle issues of interpretation and presentation. The structure of the paper is as follows. In \secref{frachon} we consider a high-profile case where serious side-effects led to the withdrawal of a drug from the market, and, in turn, to litigation against the manufacturer. Since the evidence in this litigation has yet to be presented formally at trial in court, we consider how general evidence of incidence of effects might or might not be relevant to a hypothetical tort action in which an affected patient sues the manufacturer for damages, and we relate this to the distinction we draw in \secref{coeeoc} between ``effects of causes'' and ``causes of effects''. After a brief consideration of inference from statistical data about effects of causes in \secref{stateoc}, the remainder of the paper focuses on inference about causes of effects, based on a ``probability of causation'' defined using counterfactual logic. Although this probability is typically impossible to pinpoint on the basis of epidemiological data, however extensive, in \secref{coe} we give bounds between which it must lie---bounds which, however, will themselves be subject to statistical uncertainty, which we discuss in \secref{statunc}. In \secref{best} and \secref{datanal} we illustrate our theory with a new analysis of a case study in child protection. Section~\ref{sec:conc} offers some concluding remarks. \section{Epidemiological Evidence in Litigation} \label{sec:frachon} \subsection{Epidemiological background} The drug Mediator, also known as benfluorex, was for many years marketed as an anti-diabetic drug. It was also widely used off-label as an appetite suppressant. In November 2010, however, following the publication of a popular book by Ir\`ene \textcite{frachon:book}, the French Health Agency CNAM announced its finding that around 500 deaths in France over a thirty-year period could be attributed to Mediator---see also \textcite{Hill:2011}.
This was based on extrapolation of results in two scientific studies, published at about the same time, focusing on the effects of benfluorex on valvular heart disease. \textcite{frachon:plos1}\ showed a significantly higher prevalence of unexplained valvular heart disease in patients taking benfluorex, as compared to controls. \textcite{Weill:2010}\ examined the records of over a million diabetic patients in a cohort study, and reported a higher hospitalisation rate for valvular heart disease in benfluorex takers. \subsection{Litigation} As the news about Mediator reverberated through the media, the French authorities withdrew the drug from sale. At the same time, hundreds of individuals jointly filed a criminal lawsuit against the manufacturer of Mediator, the French pharmaceutical giant Les Laboratoires Servier. The trial has been under way since May 2012, with initial aspects focused on whether the company was guilty of misconduct. At the time of preparation of this article, the issue of whether Mediator was in fact the cause of the heart disease in any of those who brought the lawsuit had yet to be addressed, and no expert scientific testimony had been presented to the court. As of September 2014, however, the company has agreed to compensate over 350 individual plaintiffs.\footnote{http://www.lejdd.fr/Societe/Sante/Mediator-350-victimes-indemnisees-685286\#new-reactions} In the US benfluorex was removed from the marketplace in the 1990s. The banning in 1997 of a related drug, Redux, led to a \$12 billion settlement, following a class action by thousands of individuals \cite{mnt}. Thus considerable attention both in France and elsewhere is focused on the case against Servier. \subsection{Scientific results} The matched case-control study of \textcite{frachon:plos1}\ involved 27 cases of valvular heart disease and 54 controls. Investigators determined whether the patients had or had not used benfluorex. We display the core data in \tabref{twobytwo}. The face-value odds ratio in this table is $(19\times51)/(3\times8) \approx 40.4$, but this could be misleading because of confounding factors.\footnote{See \textcite{HollandRubin1988} for when it is necessary and appropriate to adjust for covariate information in such a study.} A logistic regression analysis reported by \textcite{frachon:plos1} adjusted for body mass index, diabetes and dexfenfluramine use, and reduced the odds ratio to $17.1$ ($95\%$ CI $3.5$ to $83.0$), a value which is still a large and highly significant measure of positive association between benfluorex and valvular heart disease. In the same direction, \textcite{Weill:2010} computed a risk ratio (though with relatively crude adjustments) of $3.1$ ($95\%$ CI $2.4$ to $4.0$).
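For orientation only (a back-of-the-envelope sketch; the covariate-adjusted analyses just quoted are the ones that matter), the raw odds ratio of \tabref{twobytwo} can be given a rough Woolf-type $95\%$ interval: \[ \exp\left(\log\frac{19\times 51}{3\times 8}\ \pm\ 1.96\sqrt{\frac{1}{19}+\frac{1}{3}+\frac{1}{8}+\frac{1}{51}}\right)\approx (9.7,\,168), \] so even this crude calculation leaves little doubt that the raw association is strong and positive.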
\begin{table}[htdp] \begin{center} \begin{tabular}{|c|cc|c|}\hline Benfluorex Use & Cases & Controls & Totals\\ \hline Yes & 19 & 3 & 22\\ No & 8 & 51 & 59\\ \hline Totals & 27 & 54 & 81\\ \hline \end{tabular} \end{center} \caption{Raw results from the case-control study linking benfluorex and valvular heart disease. Source: {\protect\textcite{frachon:plos1}}.} \label{tab:twobytwo} \end{table} \subsubsection{Robustness of odds ratio} \label{sec:robust} While the risk ratio may be a more relevant and incisive measure of the strength of an effect than the odds ratio (and it will feature importantly in our analysis of \secref{stateoc} below), it faces a very serious problem: it is simply not possible to compute it from a retrospective study, such as that of \textcite{frachon:plos1}. In contrast, the odds ratio, whether raw or adjusted via a logistic regression, has the important property that it is simultaneously a meaningful measure of association and computable from retrospective data \cite{altham1970,BFH:1975,Farewell:1979}. Furthermore, under suitable adjustment for covariates it will be a good approximation to the risk ratio when the outcome is rare. \subsection{Toxic tort---A hypothetical case} \label{sec:hypo} Now consider a (currently purely hypothetical) case that might be brought on the basis of these scientific reports. A woman with unexplained valvular heart disease sues the manufacturer of benfluorex, claiming that it was this that caused her illness. An epidemiologist, testifying as expert witness for plaintiff, claims that, on the evidence of Dr.\ Frachon's and Dr.\ Weill's studies, the medication can cause valvular heart disease. Citing \textcite{Nicot:2011}, who argued that ``the probabilistic information, derived from the available epidemiological studies, needs to be considered as part of evidence to establish or refute a causal link between benfluorex and valvular disease for a given patient'', this witness goes on to claim that this is evidence for a causal link in the current case. The defendants in turn proffer their expert, who testifies that in the manufacturer's clinical trials there was no evidence of such a side effect. How should the court rule? The court needs to decide on the cause of this woman's heart disease. But the plaintiff's expert addresses something different, the general scientific question ``Can benfluorex be shown to cause heart disease?'' For an epidemiologist, the evidence for this would ideally be captured by the risk ratio, though, as we have seen, for the Frachon data we would have to be satisfied with the adjusted odds ratio instead. But even if we had perfect and unassailable statistical evidence in support of this general scientific hypothesis, that would still be only very indirectly relevant to the individual case at issue. We shall see below that the relationship between such a generalisation and the specific issue before the court is extremely subtle. For an extended discussion of the legal nexus of individual causation and the ``causes of effects'', see \textcite{fienberg2014}. \section{Causes of Effects and Effects of Causes} \label{sec:coeeoc} One might be tempted to assume that the ``effects of causes'' (henceforth EoC) and the ``causes of effects'' (CoE) are related probabilistically via Bayes theorem. After all, this was how \textcite{laplace1986}\ introduced the topic: ``If an event can be produced by a number $n$ of different causes, the probabilities of these causes given the event are to each other as the probabilities of the event given the causes,\ldots''\@ Later authors recognized the issue to be more complex.
John Stuart Mill distinguished between inferences about effects of causes and about causes of effects, and remarked ``{\ldots}as a general rule, the effects of causes are far more accessible to our study than the causes of effects\ldots'' \cite[Book~3, Chapter~10, \S8]{mill}. Although a similar distinction has sometimes been expressed in statistical contexts (see {\em e.g.\/}\xspace\ \textcite{pwh:jasa}), Mill's associated warning has largely gone unheeded. We consider that it deserves more careful attention. Though evidently related in some way, problems of CoE are distinct from problems of EoC; indeed, as Mill understood, they are considerably more subtle and difficult to handle. In this article, which builds on and extends \textcite{apd:aberdeen}\ and \textcite{fienberg2014}, we attempt to delineate both the differences and the connexions between these two distinct inferential enterprises. An understanding of these issues will clearly be crucial if generic retrospective observational EoC evidence, such as that of the \textcite{frachon:plos1}\ study, is to be brought to bear on an individual CoE case, such as the toxic tort case of \secref{hypo}. In particular we shall consider the possibilities of using statistical evidence to inform CoE inferences. \subsection{Aspirin trial} \label{sec:aspirin} As a simple concrete example, we contrast the following two questions: \begin{description} \item[Effects of Causes (EoC)] Ann has a headache. She is wondering whether to take aspirin. Would that cause her headache to disappear (within, say, 30 minutes)? \item[Causes of Effects (CoE)] Ann had a headache and took aspirin. Her headache went away after 30 minutes. Was that caused by the aspirin? \end{description} Note that---in a departure from previous related treatments---in both questions we have separated out the r\^oles of the subject (``Ann''), on whom we have some information, and the questioner or analyst (henceforth ``I''), who wants to interpret that information: these could be the same individual, but need not be. Any uncertainty about the answers to the above queries is my personal uncertainty, and is most properly regarded as a subjective probability, though informed by relevant data. This is somewhat analogous to the situation in court, where we distinguish between a witness, who supplies evidence ({\em e.g.\/}\xspace, on epidemiology), and the trier of fact, be it a judge or a jury, who has to assess the uncertainty to associate with the question of ultimate legal interest: the cause of the effect. What might be relevant data in the present instance? We suppose that a well-conducted (large, prospective, randomised, double-blind,\ldots) comparative clinical trial has indicated the following recovery rates: \begin{eqnarray} \label{eq:asp1} \Pr(R = 1\,|\, E = 1) & = & 30\%\\ \label{eq:asp0} \Pr(R = 1\,|\, E = 0) & = & 12\% \end{eqnarray} where $E=1$ [resp., $0$] denotes ``exposure to'' ( = treatment with) aspirin [resp., no aspirin], and $R=1$ [resp., $0$] denotes that the headache does [resp., does not] disappear (within 30 minutes). Here and throughout, we use $\Pr(\cdot)$ to denote probabilities (henceforth termed {\em chances\/}) underlying a population data-generating process. \section{Statistical Evidence for EoC} \label{sec:stateoc} \begin{quote} \bf Ann has a headache. She is wondering whether to take aspirin. Would that cause her headache to disappear (within, say, 30 minutes)?
\end{quote} Most of classical statistical experimental design and inference is geared to elucidating the effects of causes, and much careful attention over many years has gone into clarifying and improving methods for doing this, for example by the use of randomised comparative experiments \cite{fisher:doe,hill:clintrial} to control for potential confounding factors. Even when emphasis is specifically targeted on statistical causality \cite{dbr:jep,HollandRubin1988,pearl:book} this still mostly addresses EoC problems, albeit in observational rather than experimental settings. In order to highlight the major issue, we confine attention here to data from a study, such as the aspirin trial of \secref{aspirin}, that can be regarded as supporting genuinely causal inferences.\footnote{Some considerations relevant to the possibilities for causal inference in various data-collection settings can be found in \textcite{apd:aberdeen}.} In particular, for the aspirin trial this would mean that---so long as I can regard Ann as being comparable with the patients in the trial---if she takes aspirin I can expect her headache to disappear within 30 minutes with probability $30\%$, but with probability only $12\%$ if she does not. If I myself am Ann, then (other things being equal) taking the aspirin is my preferred option. In this case, the EoC causal inference is based on a simple contrast between the two ``prospective'' conditional probabilities, $\Pr(R = 1\,|\, E = 1)$ and $\Pr(R = 1\,|\, E = 0)$. In particular, the information needed for making EoC causal inferences---and so for guiding future decisions---is subsumed in the conditional probability distribution of the response $R$ given exposure $E$. In more complex situations we may have to make various modifications, {\em e.g.\/}\xspace\ adjustment for covariates, but the essential point remains that purely probabilistic knowledge, properly conditioned on known facts, is sufficient to address EoC-type questions.
\section{Statistical Evidence for CoE} \label{sec:coe} \begin{quote} \bf Ann had a headache and took aspirin. Her headache went away after 30 minutes. Was that caused by the aspirin? \end{quote} \subsection{How to Understand ``Causes of Effects''?} Addressing a CoE-type question is much more problematic---indeed, even to formulate the question clearly is a nontrivial enterprise. We can no longer base our approach purely on the probability distribution of $E$ and $R$ conditioned on known facts, since we know the values of both variables ($E=1$, $R=1$), and after conditioning on that knowledge there is no probabilistic uncertainty left to work with. One possible approach, popular in statistical circles, is based on the concept of the ``counterfactual contrast'', which in turn rests on the introduction of ``potential responses'' \cite{dbr:jep}. We proceed by splitting the response variable $R$ into two variables, $R_0$ and $R_1$, where we conceive of $R_1$ [resp., $R_0$] as a potential value of $R$, that will eventuate if in fact $E=1$ [resp., $0$]. Both these potential responses are regarded as existing prior to the determination of $E$. We thus now need to model the three variables $(E, R_0, R_1)$ together, rather than (as previously) just the two variables $(E, R)$.\footnote{The observed response $R$ is determined by these three variables as $R = R_E$.} We might now cast the CoE question as enquiring about the relationship between $R_0$ and $R_1$.
Thus ``$R_1 = 1, R_0 = 0$'' describes the situation where Ann's headache disappears if she takes the aspirin, but does not if she does not---a state of affairs that might reasonably be described as the disappearance of Ann's headache being {\em caused\/} by taking the aspirin. In particular, if Ann has taken the aspirin and her headache disappeared (thus $R_1 = 1$), these two events can be regarded as causally connected just in the case that $R_0 = 0$. \subsection{Science and Policy} \label{sec:policy} Although we shall follow through with the above formulation in the remainder of this article, we here turn aside to consider an objection to it: it simply might not be appropriate to regard, as the ``counterfactual foil'' to the factual response ($R_1$), what would have happened ($R_0$) if the exposure had not occurred ($E=0$) but all other prior circumstances were the same. For example, there has been a series of legal cases in which various administrations have sued tobacco companies on the basis that they had not properly informed the public of the dangers of smoking when they first had that evidence, and should therefore be liable for the increased costs that fell on health services due to that act of omission. But it could be argued that, since smokers tend to die earlier than non-smokers, encouraging (or at least not discouraging) smoking would in fact reduce the total burden on the health services. Such an attempted defence has, however, usually been ruled inadmissible. Instead, as a matter of policy, the relevant counterfactual comparator is taken to be a hypothetical universe in which everyone lives just as long as they do in fact, but they are healthier because they smoke less. Here we see Science and Policy as inextricably intertwined in formulating the appropriate CoE question. And the conceptual and implementational difficulties that we discuss below, which beset even the simplest case of inference about causes of effects, will be hugely magnified when we wish to take additional account of such policy considerations. \subsection{Statistical Evidence} \label{sec:statcoe} After the above detour, we return to our formulation of the CoE question, in terms of a contrast between $R_1$, the actually observed response (in this case, $R_1=1$) to the treatment actually taken ($E=1$), and $R_0$, the (necessarily unknown) counterfactual response that would have been observed had Ann in fact not taken the aspirin. If ``in counterfact'' $R_0 = 1$, then Ann's headache would have disappeared even if she had not taken the aspirin, so I must conclude that it was not the aspirin that cured her. Conversely, if $R_0=0$ then I can indeed attribute her cure to having taken the aspirin. In this way, we formulate the CoE causal question in terms of the contrast between the factual outcome $R_1$ and the counterfactual outcome $R_0$. To address the CoE question I thus need to query $R_0$. Since $R_0$ has not been observed, it retains a degree of uncertainty, which I could try to express probabilistically. However, not only have I not observed $R_0$, there is, now, no way I could ever observe it, since, once I have observed $R_1$, $R_0$ has become a counterfactual quantity, predicated on a condition ($E=0$) that is counter to known facts ($E=1$). This logical difficulty leads to a degree of unavoidable ambiguity affecting our ability to address the CoE question. In evaluating my probabilistic uncertainty, I should condition on all I know.
My full knowledge about Ann can be expressed as $(E=1, R_1 =1, H)$, where $H$ denotes all the background knowledge I have about Ann, and the other variables are likewise individualised to her. With this understanding, we formally define my PROBABILITY OF CAUSATION as the {\em conditional probability\/}: \begin{equation} \label{eq:pc00} {\sc pc}\xspace_A = \mbox{P}\xspace_A(R_0 = 0 \,|\, H, E=1, R_1 = 1) \end{equation} where $\mbox{P}\xspace_A$ denotes my probability distribution over attributes of Ann. But how can I go about evaluating ${\sc pc}\xspace_A$, and what other evidence could be used, and how, to inform this evaluation? In particular, how---if at all---could I make use of EoC probabilities such as \eqref{asp1} and \eqref{asp0} to assist my evaluation of the CoE probability \eqref{pc00}? \subsection{Bounding the probability of causation} \label{sec:ass} We note that \eqref{pc00} involves a joint distribution of $(R_0, R_1)$. Since, as a matter of definition, it is never possible to observe both $R_0$ and $R_1$ on the same individual, it is problematic to estimate such a joint distribution. We might however have a hope of assessing separate marginal probabilities for $R_0$ and $R_1$; and this information can be used to set bounds on {\sc pc}\xspace. Indeed it is straightforward to show ({\em cf.\/}\xspace \textcite{apd:aberdeen}): \begin{equation} \label{eq:rrA} \min\left\{1, \frac{\mbox{P}\xspace_A(R_0 = 0 \,|\, H, E=1)}{\mbox{P}\xspace_A(R_1 = 1 \,|\, H, E=1)}\right\} \geq {\sc pc}\xspace_A \geq \max\left\{0,1 - \frac{1}{\mbox{RR}\xspace_A}\right\}, \end{equation} where \begin{equation} \label{eq:exptrre} \mbox{RR}\xspace_A := \frac{\mbox{P}\xspace_A(R_1 = 1 \,|\, H, E=1)}{\mbox{P}\xspace_A(R_0 = 1 \,|\, H, E=1)}. \end{equation} Readers will recognize (\ref{eq:rrA}) as a version of the Bonferroni-Fr\'echet-Hoeffding bounds \cite{bonferroni1936,frechet1940,hoeffding1940}\ that play important r\^oles in other areas of statistics, such as in the study of copulas. The inequality (\ref{eq:rrA}) will yield a non-trivial lower bound so long as $\mbox{RR}\xspace_A > 1$, which we can interpret as saying that there is a positive causal effect of exposure on outcome: {\em cf.\/}\xspace the related argument in \textcite{robinsgreenland1989}. Whenever $\mbox{RR}\xspace_A$ exceeds $2$, we can deduce from \eqref{rrA}, without making any further assumptions, that ${\sc pc}\xspace_A$ must exceed $50\%$. In a civil legal case such as that of \secref{hypo}, causality might then be concluded ``on the balance of probabilities''.
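As a concrete illustration (a sketch only, anticipating the identifying conditions of \secref{estrr} under which the trial chances \eqref{asp1} and \eqref{asp0} can stand in for Ann's probabilities): the aspirin figures would give \[ \mbox{RR}\xspace_A=\frac{0.30}{0.12}=2.5, \qquad\mbox{whence}\qquad {\sc pc}\xspace_A\geq 1-\frac{1}{2.5}=0.60, \] comfortably above the civil standard; note, though, that this establishes only the interval $[0.6,\,1]$ for ${\sc pc}\xspace_A$, not the point value $0.6$.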
It is however important to note (see \textcite{robins1989}) that, when $\mbox{RR}\xspace_A < 2$, it would not be correct to conclude from this that ${\sc pc}\xspace_A < 50\%$ (which would lead to the case failing); rather, we can only say that we can not be sure that the probability of causation exceeds $50\%$. The upper bound in \eqref{rrA} is more subtle. It is less than $1$ when $\mbox{P}\xspace_A(R_0 = 1 \,|\, H, E=1) + \mbox{P}\xspace_A(R_1 = 1 \,|\, H, E=1) > 1$. This happens in general only when both Ann's potential outcomes have a substantial probability of taking value $1$. If $\mbox{P}\xspace_A(R_1 = 1 \,|\, H, E=1) = \mbox{P}\xspace_A(R = 1 \,|\, H, E=1)$ is only modest in size, {\em e.g.\/}\xspace, less than $1/2$, and $\mbox{RR}\xspace_A > 1$, then the upper bound is $1$. If $\mbox{RR}\xspace_A$ is large, {\em e.g.\/}\xspace, $\mbox{RR}\xspace_A>10$, the upper bound will again be $1$ unless $\mbox{P}\xspace_A(R = 1 \,|\, H, E=1)$ is close to $1$. For the remainder of the paper, for simplicity we proceed using an upper bound of $1$. Thus we work with the bounds \begin{equation} \label{eq:bounds} 1\geq {\sc pc}\xspace_A \geq \max\left\{0,1 - \frac{1}{\mbox{RR}\xspace_A}\right\} \end{equation} with $\mbox{RR}\xspace_A$ given by \eqref{exptrre}. \subsection{The risk ratio} \label{sec:estbounds} Expression \eqref{pc00} and the denominator of \eqref{exptrre} involve a counterfactual consideration: of $R_0$, Ann's potential response were she not to have taken the aspirin, in the situation that she is known to have taken aspirin ($E=1$). So it would seem problematic to attempt to identify these quantities from data. However, if my background knowledge $H$ of Ann (on which my distribution $\mbox{P}\xspace_A$ is being conditioned) is sufficiently detailed, then, at the point before Ann has decided whether or not to take the aspirin, it might seem appropriate to consider that my uncertainty, conditional on $H$, about the way her treatment decision $E$ will be made would not further depend on the (so far entirely unobserved) potential responses $(R_0,R_1)$. That is, in this case we might assume \begin{equation} \label{eq:ci} (R_0,R_1) \mbox{$\perp\!\!\!\perp$}_A E \,|\, H \end{equation} where $\mbox{$\perp\!\!\!\perp$}_A$ denotes conditional independence \cite{apd:CIST}\ in my distribution $\mbox{P}\xspace_A$ for Ann's characteristics. When \eqref{ci} holds we will term the background information $H$ {\em sufficient\/}. Then \eqref{pc00} becomes \begin{equation} \label{eq:pc01} {\sc pc}\xspace_A = \mbox{P}\xspace_A(R_0 = 0 \,|\, H, R_1 = 1) \end{equation} and in the lower bound in \eqref{bounds} we can replace \eqref{exptrre} by \begin{equation} \label{eq:rr} \mbox{RR}\xspace_A = \frac{\mbox{P}\xspace_A(R_1 = 1 \,|\, H)}{\mbox{P}\xspace_A(R_0 = 1 \,|\, H)}, \end{equation} my {\em causal risk ratio\/} for Ann.\footnote{We can derive \eqref{rr}--though not \eqref{pc01}--from the weaker condition that replaces the joint property \eqref{ci} by the two marginal properties $R_j \mbox{$\perp\!\!\!\perp$}_A E \,|\, H$, $j=0,1$.
Since we are only concerned with bounds in this paper, that weaker condition would be adequate for our purposes. However, we find it hard to imagine circumstances where we would be willing to accept the weaker but not the stronger condition, so will continue to use conditions like \eqref{ci}.} Sufficiency is a kind of ``no confounding'' requirement on my distribution $\mbox{P}\xspace_A$ for Ann (see for instance \textcite{Dawid2014}). It would fail if, for example, I thought that Ann might take the treatment if she felt really poorly, and not otherwise; but I did not initially have information as to how she felt. Then observing that she took the treatment ($E=1$) would inform me that she was feeling poorly, so decreasing the probability of a good response (whether actual, $R_1$, or counterfactual, $R_0$). Now if I myself am Ann, my $H$ will already include my own knowledge of my perceived state of health, so this argument does not apply, and sufficiency is an acceptable condition. If I am an external observer, however, the sufficiency condition is much more problematic, since I must be able to satisfy myself that my knowledge $H$ of Ann is complete enough to avoid the above possibility of confounding. If I can not assume sufficiency, I can not replace the counterfactual denominator of \eqref{exptrre} by anything even potentially estimable from data. Note that the ``no confounding'' property of sufficiency relates solely to Ann and my knowledge of her. It should not be confused with the superficially similar no confounding property of {\em exogeneity\/} described in \secref{estrr} below, which refers, not to Ann, but to the process whereby possibly relevant data on other individuals have been gathered. \subsection{Estimating the risk ratio} \label{sec:estrr} Henceforth we assume sufficiency, which at least gets us started, and we aim to see what further progress can be made, and under what conditions, to get a handle on the bounds on ${\sc pc}\xspace_A$ supplied by $\mbox{RR}\xspace_A$. It is important to be explicit about the assumptions required, which can be very strong and not easy to justify! It would be valuable if the probabilities featuring in \eqref{exptrre} could be related in some way to chances such as \eqref{asp1} and \eqref{asp0} that are estimable from data. Consider first the numerator, the Ann-specific probability $\mbox{P}\xspace_A(R_1 = 1 \,|\, H, E=1) = \mbox{P}\xspace_A(R = 1\,|\, H, E = 1)$. It is tempting to replace this by the analogous chance, $\Pr(R = 1\,|\, H, E = 1)$, which could be estimated from data as for \eqref{asp1}, based on the subset of treated trial subjects sharing the same $H$-value as Ann.\footnote{Alternatively, the estimate might be constructed from a model for the dependence of the response $R$ on $H$ and $E=1$, fitted to all the data, and applied with Ann's value of $H$.
We might also be able to reduce to a smaller information set $H$, if that is all that is relevant for prediction of the responses.} This would be justified if we could make the following {\em bold assumption\/} (where Bayesians can replace the intuitive term ``comparable'' with the more precise term ``exchangeable''): \begin{cond} \label{cond:treated} Conditional on my knowledge of the pre-treatment characteristics of Ann and the trial subjects, I regard Ann's potential responses as comparable with those of the treated subjects having characteristic $H$. \end{cond} Up to this point we have not needed the assumption that $H$ is sufficient. But consider now the denominator of \eqref{exptrre}. Because of its counterfactual nature, we can not argue directly as above. However, with sufficiency of $H$ we have $\mbox{P}\xspace_A(R_0 = 1 \,|\, H, E=1) = \mbox{P}\xspace_A(R_0 = 1\,|\, H, E=0) = \mbox{P}\xspace_A(R = 1\,|\, H, E=0)$; and we can estimate this from the clinical trial data, {\em e.g.\/}\xspace as the estimated chance ${\Pr}(R = 1\,|\, H, E=0)$, if we can assume: \begin{cond} \label{cond:untreated} Conditional on my knowledge of the pre-treatment characteristics of Ann and the trial subjects, I regard Ann's potential responses as comparable with those of the untreated subjects having characteristic $H$. \end{cond} Now if both \condref{treated} and \condref{untreated} are to hold, then (by Euclid's first axiom, ``Two things that are equal to the same thing are also equal to each other''), the groups of trial subjects with Ann's characteristics $H$ in both arms must be comparable with each other. This requires that $H$ be {\em exogenous\/}, in the sense that, conditional on $H$, the potential outcomes $(R_0,R_1)$ have the same distribution among treated and untreated study subjects. This will hold for a suitably randomised study, and also in certain observational studies where the possibility of further confounding factors can be discounted. Note however that we can not take, as $H$, just {\em any\/} exogenous set of variables. The full set of required conditions is: \begin{enumerate} \item \label{it:c1} $H$ is exogenous. \item \label{it:c2} $H$ is sufficient for Ann's response. \item \label{it:c3} Conditional on $H$, Ann's potential responses are comparable with those of the trial subjects. \end{enumerate} We will refer to this set of conditions as the {\em fundamental conditions\/}. When we can make good arguments for the acceptability of these fundamental conditions, equation~\eqref{pc00} becomes \begin{equation} \label{eq:pc000} {\sc pc}\xspace_A = \Pr(R_0 = 0 \,|\, H, R_1 = 1), \end{equation} and, in the lower bound in \eqref{rrA}, we can identify $\mbox{RR}\xspace_A$ with the population counterpart of \eqref{rr}, the {\em observational risk ratio\/}: \begin{equation} \label{eq:obsrr} \mbox{ORR}\xspace := \frac{\Pr(R = 1 \,|\, H, E=1)}{\Pr(R = 1 \,|\, H, E=0)}.
\end{equation} Now that we have made clear that the fundamental conditions can be expected to hold only in special circumstances, where they will require detailed justification, we shall henceforth confine ourselves to further consideration of just these special cases. In particular we shall accept \eqref{pc000}, and $\mbox{RR}\xspace_A = \mbox{ORR}\xspace$ as in \eqref{obsrr}. So we will use the bounds \begin{equation} \label{eq:ineqpc} 1\geq {\sc pc}\xspace \geq \max\left\{0,1 - \frac{1}{\mbox{ORR}\xspace}\right\} \end{equation} with $\mbox{ORR}\xspace$ given by \eqref{obsrr}. (Here and henceforth, unless the context requires otherwise, we drop the identifier $A$ on ${\sc pc}\xspace$: these bounds will apply to any individual for whom the fundamental conditions hold.) \subsection{An alternative approach} \label{sec:pearl} Our probability of causation, ${\sc pc}\xspace_A$ given by \eqref{pc00}, is essentially the same as what \textcite[Chapter~9]{pearl:book}\ terms the ``Probability of Necessity'' (PN). \textcite{tianpearl2000} take an alternative approach to supplying bounds for PN, based on data and assumptions different from ours. In particular, they drop our requirement that $H$ be sufficient for Ann's response, requiring instead the availability of two sets of data on individuals comparable to Ann: one set in which treatment was (or can be regarded as) randomized, and another in which it arose ``naturally'' in the same way as for Ann. Because of these differences it is not in general possible to compare their bounds and ours. See \textcite{pearlcomm:smr,dff:pearlresp} for further discussion of these issues. \subsection{Uncertain exposure} \label{sec:uncexp} So far we have supposed we know both the fact of exposure ($E=1$) and the fact of response ($R=1$), the only uncertainty being about whether there was a causal link between these two facts. There are other situations where we might observe the response, and wonder whether it was caused by exposure, without knowing with certainty whether or not that exposure had in fact taken place. In such cases we have to multiply the probability of causation ${\sc pc}\xspace_A$ by the probability of exposure, conditional on the known fact of a positive response, yielding a modified probability of causation: \begin{equation} \label{eq:pc*} {\sc pc}\xspace_A^* = {\sc pc}\xspace_A \times \mbox{P}\xspace_A(E=1 \,|\, H, R=1). \end{equation} In particular, under the fundamental conditions, combining this with \eqref{ineqpc} delivers the inequalities \begin{equation} \label{eq:ineqpc*} \Pr(E = 1 \,|\, H, R=1) \geq {\sc pc}\xspace^* \geq \max\left\{0,1 - \frac{\Pr(E = 0 \,|\, H, R = 1)}{\Pr( E=0\,|\, H)}\right\}. \end{equation} \section{Statistical Uncertainty} \label{sec:statunc} Our discussion so far has treated estimates, such as those in \eqref{asp1} and \eqref{asp0}, as if they were the true values of the chances.
Even so, we found that we obtain, at best, only partial CoE information, which confines ${\sc pc}\xspace$ or ${\sc pc}\xspace^*$ to an interval but does not yield a point value. In real applications our data will not be extensive enough to give us pinpoint estimates of even the bounds featuring in the\takep{se} inequality formulae\addp{ \eqref{ineqpc} or \eqref{ineqpc*}}, and so we have to take additional account of the resulting statistical uncertainty. The result of our inference is thus an {\em uncertain interval\/} within which a {\em probability\/} (${\sc pc}\xspace$ or ${\sc pc}\xspace^*$) must lie---thus compounding three different kinds of uncertainty. \takep{This is a novel form of inferential output, and it is far from clear how best to express and display it, and what use to make of it.} \notep{Here --- or somewhere --- I want to add some material relating to the alternative strategy of simply replacing probabilities featuring in the bounds by their posterior expectation, as discussed in my emails of around 20 August} \notes{I have seen people do this and then ignore the impact and so I am interested in seeing the details of how far you would take this.} Statistical uncertainty, at least, is well studied, and can be expressed and understood in a variety of different ways, as touted and debated by the various competing schools of statistical inference. \addp{The generic problem of inference for a quantity (like ${\sc pc}\xspace$) that, being only partly identified by the data, is subject to interval bounds, has been treated from both a classical perspective} \adds{\cite{manski:2003,manski:2007,stijn2006} and a Bayesian perspective \cite{greenland:2005,greenland2009,gustafson2005,gustafson2009}, but these approaches usually involve adding assumptions or data via model expansion and \takep{thus }are not directly applicable here.} We \takep{consider it most straightforward here to}\addp{here} take a Bayesian approach, \takep{which delivers }\addp{to derive} a joint \addp{posterior} probability distribution (which, following the helpful terminology of \textcite{best}, we henceforth term a {\em credence distribution\/}) for \takep{all the}\addp{the estimable} unknown chances in the problem\takep{, conditional on observed data}. One possible \addp{Bayesian} tactic would be to \takep{work with a joint credence}\addp{assign a prior} distribution to the \addp{multivariate parameter, $\phi$ say, comprising the} chances assigned to the four configurations of $(R_0,R_1)$\addp{ conditioned on $H$. Under the fundamental conditions, ${\sc pc}\xspace_A$ is a function of $\phi$, given by \eqref{pc000}, so that a Bayesian analysis, based on such a prior, would deliver a fully determined posterior distribution for ${\sc pc}\xspace_A$. However,} \takep{While this would deliver a seemingly comprehensible inference, in the form of a posterior credence distribution for ${\sc pc}\xspace^*$,} this is problematic: because $R_0$ and $R_1$ are never simultaneously observable, these joint chances cannot be consistently estimated from data, so that this ``inference'' remains highly sensitive to the specific prior assumptions made, however extensive the data. \addp{Alternatively put, the parameter $\phi$ describing the joint distribution of $(R_0,R_1)$ (given $H$) is not fully identifiable from data; at best, only the parameter, $\lambda$ say (a non-invertible function of $\phi$), determining the associated marginal distributions of $R_0$ and $R_1$, is identifiable.
Then $\lambda$ is a {\em sufficient parameter\/} \cite{barankin60}. For extensive data (and a non-dogmatic prior), the posterior distribution of $\lambda$ will converge to a point mass at its true value, but the conditional distribution of $\phi$ given $\lambda$ will be exactly the same in the posterior as in the prior \cite{kadane1974,apd:CIST}. In particular the marginal posterior distribution of any non-identifiable function of $\phi$ will be non-degenerate, and highly dependent on the form of the conditional prior for $\phi$ given $\lambda$ \cite{gustafson2005,gustafson2009,gustafson2012}.} \takep{Instead}\addp{For these reasons}, we prefer to \takep{work with}\addp{assign} a joint credence distribution for the (estimable) marginal chances \addp{alone}: given sufficient data, of sufficiently good quality, these will be well estimated and insensitive to prior assumptions. The price of this increased statistical precision, however, is logical imprecision, since from \addp{even perfect knowledge of} these chances we can at best derive \addp{interval} inequalities for ${\sc pc}\xspace$ or ${\sc pc}\xspace^*$. Thus our inference has the form of a {\em random interval\/} asserted to contain ${\sc pc}\xspace$ or ${\sc pc}\xspace^*$. \addp{ \subsection{Group or individual inference?} \label{sec:altview} In the above approach, we considered the probabilities featuring in the inequalities \eqref{ineqpc} and \eqref{ineqpc*} as ``objective chances'', which we might interpret as limiting relative frequencies computed in appropriate groups of exchangeable individuals. We focused on the \takep{non-degenerate} posterior joint credence distribution of these chances, given the available statistical data $D$---thus giving rise to a random uncertainty interval for the probability of causation, itself regarded as an objective chance. We refer to this as the ``group-focused'' approach. Another approach to using data to inform the inference about ${\sc pc}\xspace$ or ${\sc pc}\xspace^*$ is to regard these concepts, and the probabilities featuring in the bounds for them, themselves as credences, quantifying numerically the relevant uncertainty about attributes of the specific individual, Ann, on whom we are focusing. This is the ``individual-focused'' approach. For an interchange on these issues and ``group-to-individual'' inference, see the discussion by Dawid and the authors' rejoinder in \textcite{best}. In the individual-focused formulation, the term $\Pr(E = 0 \,|\, H, R = 1)$ in \eqref{ineqpc*}, for example, would be replaced by $\mbox{P}_A(E_A = 0 \,|\, H_A, R_A = 1, D)$, where the suffix $A$ refers to attributes of Ann. We \addp{now} obtain a non-random uncertainty interval for ${\sc pc}\xspace_A$ (or ${\sc pc}\xspace_A^*$)---but one that is computed in the light of the available evidence, and would be likely to change were further data to become available. To continue with this example, let $\psi$ denote the chance $\Pr(E=0 \,|\, H, R=1)$. If the individuals in $D$ can be regarded as exchangeable with Ann, and we interpret $\psi$ as a limiting relative frequency in this exchangeable setting, we will have: \begin{eqnarray} \nonumber \mbox{P}_A(E_A = 0 \,|\, H_A, R_A = 1, D) &=& {\mbox{E}}\left\{\mbox{P}_A(E_A = 0 \,|\, H_A, R_A = 1, \psi) \,|\, H_A, R_A = 1, D\right\}\\ \label{eq:psi} &=&{\mbox{E}}\left(\psi \,|\, H_A, R_A = 1, D\right).
\end{eqnarray} Often, given the data $D$, the further conditioning in \eqref{psi} on the Ann-specific information $(H_A, R_A=1)$ will have negligible effect---in which case the desired Ann-specific probability $\mbox{P}_A(E_A = 0 \,|\, H_A, R_A = 1, D)$ can be approximated by the posterior expectation ({\em i.e.\/}\xspace, conditioned on $D$ alone) of the conditional chance $\psi = \Pr(E=0 \,|\, H, R=1)$. A similar argument applies to any other required credences in the problem. } \subsection{Additional issues} \label{sec:issues} \notep{I think this section needs some substantial rethinking and rewriting, in the light of other things we have said} \takep{The}\addp{All our} above analysis is predicated on the causal relevance of the epidemiological data, assuming that we can use the study to obtain a sound estimate of the causal risk ratio \takep{\mbox{RR}\xspace\ }\addp{$\mbox{RR}\xspace_A$} that features in \eqref{exptrre}. For example, in a simple fully randomised study we could use \mbox{ORR}\xspace, as given by \eqref{obsrr}, as a proxy for $\mbox{RR}\xspace$. But such studies are the exception in epidemiology, so that the issues in real world settings where interest is focused on the causes of effects are typically much more complex. Thus in the benfluorex example of \secref{frachon}, using the frequencies in \tabref{twobytwo} for this purpose, by plugging them into the formula for \mbox{ORR}\xspace\ and interpreting this as \mbox{RR}\xspace, would be totally misleading, even if we attempted to account for statistical uncertainty as described above. Indeed, as \takes{the discussion} \adds{we noted} in Section~\ref{sec:frachon}\takes{noted}, there are additional problems in this case: because the study of \textcite{frachon:plos1} was retrospective, the frequencies in \tabref{twobytwo} could not \takes{even} be used to estimate \mbox{ORR}\xspace, even in the absence of confounding. And this problem remains when, admitting the likely existence of confounding, we conduct a more sophisticated analysis---such as the multiple logistic regression that produced the adjusted odds ratio---to try to account for it. Even when this ploy can be regarded as successful, still the best we can ever do with retrospective data is to estimate the causal odds ratio---which will approximate the desired causal risk ratio, as required for setting the lower bound on ${\sc pc}\xspace$, only when the outcome is rare. The judge in the hypothetical case we pose should therefore be doubly wary of the relevance of the epidemiological evidence when trying to assess whether the drug caused the plaintiff's heart disease. There are even more complex situations where the data are \takep{of retrospective sort}\addp{retrospective} and where there are multiple outcomes of interest and multiple time points for their assessment. A notable example comes from \takep{a}\addp{the} continuing effort in the United States to examine the long-term health effects of exposure to Agent Orange among US Vietnam veterans. From 1962 to 1971, the US military sprayed herbicides over Vietnam. In 1991 the US Congress passed the Agent Orange Act, requiring a comprehensive evaluation of scientific and medical information regarding the health effects of exposure to Agent Orange and other herbicides used in Vietnam: \textcite{national2011Veterans} is the eighth biennial update implementing this Congressional mandate.
The report examines epidemiological studies of the health status of veterans considering a multiplicity of deleterious effects, {\em e.g.\/}\xspace, different forms of cancer and early-onset peripheral neuropathy, and with limited information on exposure, both at the aggregate and individual level. A standard tool in the studies incorporated into this regularly-updated assessment is the use of adjusted odds ratios from retrospective logistic regression analyses. Identification of a substantial \mbox{RR}\xspace triggers compensation to veterans for health and disability outcomes associated with putative exposure. \section{Case Study} \label{sec:best} We illustrate our analysis with an example taken from \textcite{best}. The motivating real life case was the diagnosis of abuse in an infant child, $c$, presenting with an acute life threatening event (``\mbox{ALTE}\xspace'')\takep{ and nosebleed (``\mbox{bleed}\xspace'')}. So now we take exposure, $E=1$, to denote \mbox{abuse}\xspace, and response, $R=1$, to denote \takep{the combination of \mbox{ALTE}\xspace and \mbox{bleed}\xspace}\addp{\mbox{ALTE}\xspace}. \subsection{Three tasks} \label{sec:3tasks} We can distinguish three tasks that we might wish to address probabilistically concerning the relationship between exposure and response in this individual case; these are quite distinct and should not be confused---although there are of course relationships (far from trivial) between them. \begin{description} \item[Forecasting] If the child is abused, what is the probability the child will suffer \mbox{ALTE}\xspace \takep{ and nosebleed?} \takep{ --- $\mbox{P}_c(\mbox{ALTE}\xspace\ \&\ \mbox{bleed}\xspace \,|\, \mbox{abuse}\xspace)$}\addp{ --- $\mbox{P}_c(\mbox{ALTE}\xspace \,|\, \mbox{abuse}\xspace)$}? \item[Backcasting] If the child suffers \mbox{ALTE}\xspace\takep{ and nosebleed}, what is the probability the child was abused? \takep{ --- $\mbox{P}_c(\mbox{abuse}\xspace \,|\, \mbox{ALTE}\xspace\ \&\ \mbox{bleed}\xspace)$}\addp{ --- $\mbox{P}_c(\mbox{abuse}\xspace \,|\, \mbox{ALTE}\xspace)$}? \item[Attribution] If the child suffers \mbox{ALTE}\xspace\takep{ and nosebleed}, what is the probability \takep{these were}\addp{this was} caused by abuse? \end{description} In the above, we have used $\mbox{P}_c$ to indicate my probabilities for this child (implicitly conditioned on the background information $H$ I have about the child). Even so, \takep{can take}\addp{we have a choice between taking} a group-focused approach, in which $\mbox{P}_c$ is interpreted as an uncertain chance, \addp{relevant to a group of individuals of which this child is one;} or an individual-focused approach, with $\mbox{P}_c$ \takep{indicating a credence}\addp{denoting my credence about this specific child}. We start by taking the group-focused approach: the individual-focused approach will be considered in \secref{indiv}. \subsection{Attribution analysis} \label{sec:attrib} \textcite{best}\ focused on the backcasting task: of assessing whether or not abuse has in fact taken place, based on the data on the individual case and on relevant statistical studies.
Their analysis directly addressed the main substantive concern, since it was the occurrence of abuse---whether or not it in fact caused the observed signs---that was at issue. They did not need to enquire whether or not the observed signs were {\em caused\/} by abuse. That attribution question however will be our focus here. We note that, since the very fact of abuse is itself uncertain, we also need to consider the backcasting issue. This is done by taking, as the relevant probabilistic target of our inference, the modified probability of causation ${\sc pc}\xspace^*$, as given by \eqref{pc*}. We have described in \secref{statcoe} the many very strong assumptions that have to be made in order to justify using data to estimate even the weak interval bounds of \eqref{ineqpc*} for ${\sc pc}\xspace^*$. In the present example, the data used by \textcite{best}\ were gleaned from a search for relevant published studies. Those identified were of varying design and quality, and the data extracted from them can in no sense be regarded as supporting genuine causal inferences --- indeed, it is not easy to find real examples where the conditions supporting causal inference of this type could be regarded as satisfied. \takep{Nevertheless, purely}\addp{Purely} for illustration we shall proceed as if they are, so that we can use the inequalities of \eqref{ineqpc*}. As a further\takep{ highly unrealistic}\addp{---admittedly highly unrealistic---}assumption, we take the sufficient information $H$ to be trivial. \addp{All these imperfections in the data, and in our understanding of the context, mean that our analysis must not be taken as delivering a credible conclusion in this particular application; however, we hope that, by following it through in detail, we may help to clarify the points to which attention should be given when analysing any similar problem.} Using a Gibbs sampler implemented in the {\sc WinBUGS}\xspace\hspace{-.3em}$^\copyright$\ software \cite{bugsbook}, \textcite{best}\ find the posterior credence distributions for various conditional chances, based on the data.\footnote{\textcite{best}\ conduct several alternative analyses, with some of the less reliable data values being either included or excluded. Our own analysis is based on the predictive model and data in the combined {\sc WinBUGS}\xspace\ code of Appendices~B and D of their paper, as for their own Table~4. This analysis targets a case-specific chance, having greater relevance, but also more uncertainty, than the overall population-based chance.} In particular, they obtain the posterior credence distribution for the conditional chance $\Pr(E=1 \,|\, R=1)$, and thus for $\Pr(E=0 \,|\, R=1) = 1 - \Pr(E=1 \,|\, R=1)$, as needed on both sides of \eqref{ineqpc*}. For our purposes, however, we need more: the lower bound for ${\sc pc}\xspace^*$ in \eqref{ineqpc*} also involves the marginal prior chance $\Pr(E = 0)$---or, essentially equivalently, $\theta := \Pr(E = 1)$, the chance of abuse having taken place (in this individual case), before the evidence of \mbox{ALTE}\xspace\ \takep{and \mbox{bleed}\xspace\ }is taken into consideration. And there is no available statistical evidence relevant to this quantity. We therefore proceed by introducing our own prior credence distribution for $\theta$, and treating this chance as independent of all the others in the problem. We can expect considerable sensitivity to the specific choice made.
To begin to explore this, we try two different prior credence distributions for $\theta$, both beta distributions for simplicity and tractability: \begin{description} \item[Prior~1:] $\theta \sim \beta(0.1, 0.1)$.\\ This has mean $0.5$ and standard deviation $0.46$. It can be regarded as representing very substantial prior uncertainty about $\theta$. \item[Prior~2:] $\theta\sim \beta(1,9)$.\\ This has mean $0.1$ and standard deviation $0.09$. While still admitting uncertainty, it attempts to take into account the prior unlikelihood of abuse: its mean $0.1$ is the unconditional probability assigned to this event. \end{description} Density functions of these two prior credence distributions are displayed in \figref{priors}. \begin{figure}[htbp] \centering \hfill \subfigure[Prior~1: $\theta \sim\beta(0.1,0.1)$] {\includegraphics[scale=.3]{FIG1a.eps}}\hfill \subfigure[Prior~2: $\theta \sim\beta(1,9)$]{\includegraphics[scale=.3]{FIG1b.eps}}\hfill\hfill \caption{Two prior credence distributions for $\theta$\addp{, the prior chance that abuse has taken place.}} \label{fig:priors} \end{figure} \addp{We note that the lower bound in \eqref{ineqpc*} is $0$ if and only if $\mbox{RR}\xspace_A\leq 1$, which event is independent of $\theta$. Consequently the posterior probability that the lower bound is $0$ is unaffected by the assumptions made about the prior distribution for $\theta$---as similarly is the conditional posterior distribution of the upper bound, given that the lower bound is $0$.} \section{Data Analysis} \label{sec:datanal} We have conducted our own analysis of the data, based on the {\sc WinBUGS}\xspace\ code of \textcite{best}\ elaborated so as to incorporate $\theta$. \takem{A chain of length 500000 was generated after a burn-in phase of 500000 iterations. Interpreting the generated chain as an independent sample of size 500000 would give potentially misleading density estimates: for this reason we have based the estimates on thinned samples taking every 10th element of the chain, as suggested by an analysis (not reported) of autocorrelation estimates.} \addm{After a burn-in phase of 500000 iterations, to remove autocorrelation we have based the estimates on thinned samples, taking every 10th element of the chain; this yields a chain of length 50000.} \subsection{Bivariate distribution} \label{sec:biv} A complete inference would describe the posterior credence distribution of the \addp{uncertainty} interval \eqref{ineqpc*} for ${\sc pc}\xspace^*$, whose end-points are functions of random chances, and hence themselves have a bivariate distribution. Note that, whenever the inequality $\Pr(E = 1 \,|\, R = 1) \leq \Pr( E=1)$ between chances holds, which corresponds to negative association between exposure and outcome and will happen with positive probability in the posterior credence distribution, the lower bound of the \addp{uncertainty} interval is 0 and is thus entirely uninformative. Thus the posterior credence distribution is a mixture of a continuous bivariate distribution, and (with positive probability) a distribution for the upper bound alone. The probability that the lower bound of the \addp{uncertainty} interval is $0$ \addp{(which is independent of the prior distribution used)} is estimated as \takep{$0.627$ for Prior~1, and $0.677$ for Prior~2.}\addp{$0.65$.} \figref{lengths} displays, for the two different priors, samples from the bivariate posterior credence distribution (ordered by lower bound).
The plots report 100 \addp{uncertainty} intervals, obtained by selecting one iteration of the chain every \takem{5000} \addm{500}. \begin{figure}[htbp] \centering \hfill \subfigure[Prior~1] {\includegraphics[scale=.3]{FIG2a.eps}}\hfill \subfigure[Prior~2] {\includegraphics[scale=.3]{FIG2b.eps}}\hfill\hfill \caption{\addp{For each of the priors of \figref{priors}, 100 uncertainty intervals, randomly sampled from the bivariate posterior distribution of the lower and upper bounds, are displayed. The i}\takep{I}ntervals are ordered in increasing value of the lower bound.} \label{fig:lengths} \end{figure} \figref{contour} shows bivariate contour plots, for Priors~1 and 2, of the end-points of the random \addp{uncertainty} interval, excluding those cases where the lower bound is equal to zero. The full joint distribution is completed by specifying the distribution of the upper bound for these cases\takep{: these are shown in \figref{UB0}.}\addp{. This distribution, which is independent of the assumed prior for $\theta$, is shown in \figref{UB0}, which can also be interpreted as displaying the conditional distribution of the length of the \addp{uncertainty} interval for ${\sc pc}\xspace^*$, given that its lower bound is $0$.} \begin{figure}[htbp] \centering \hfill \subfigure[Prior~1] {\includegraphics[scale=.3]{FIG3a.eps}}\hfill \subfigure[Prior~2] {\includegraphics[scale=.3]{FIG3b.eps}}\hfill\hfill \caption{\addp{For each of the priors of \figref{priors}, a }contour plot of the joint posterior distribution of \addp{the} lower and upper bounds \takep{(for lower bound $>0$)}\addp{of the random uncertainty interval, conditional on the lower bound being positive.}} \label{fig:contour} \end{figure} \begin{figure}[htbp] \centering {\includegraphics[scale=.3]{FIG4.eps} \caption{\takep{Posterior density of upper bound (for lower bound $=0$)}\addp{The posterior density of the upper bound of the random uncertainty interval, conditional on the lower bound being $0$. This distribution is independent of the chosen prior for the parameter $\theta$.}} \label{fig:UB0} \end{figure} \subsection{Univariate summaries} \label{sec:univ} Useful univariate summaries of the overall bivariate inference are the marginal posterior credence distributions of the upper and lower bounds, and of the length of the \addp{uncertainty} interval. \subsubsection{Upper bound} \label{sec:upper} The upper bound $\Pr(E = 1 \,|\, R=1)$ in \eqref{ineqpc*} is the chance of abuse given the case evidence \addp{of \mbox{ALTE}\xspace}, as already considered by \textcite{best}. Its posterior credence distribution (which is unaffected by the choice of prior for $\theta$) is summarised in the \takep{first}\addp{second} row of Table~4 of \textcite{best}. We compute the posterior mean and standard deviation for this upper bound to be $0.043$ and $0.013$, respectively. Its posterior density is shown in \figref{upperdens}. \begin{figure}[htbp] \centering \includegraphics[scale=0.3]{FIG5.eps} \caption{\addp{The p}\takep{P}osterior credence density of the upper bound for ${\sc pc}\xspace^*$. \addp{This distribution is independent of the chosen prior for the parameter $\theta$.}} \label{fig:upperdens} \end{figure} \subsubsection{Lower bound} \label{sec:lower} The lower bound on ${\sc pc}\xspace^*$ in \eqref{ineqpc*}, $\max\{0,1 - {\Pr(E = 0 \,|\, R = 1)}/{\Pr( E=0)}\}$, depends also on $\theta = \Pr( E=1)$, and its posterior credence distribution could be sensitive to the prior credence distribution chosen for $\theta$.
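To make this dependence concrete before reporting results, the following simplified Monte Carlo sketch (in Python) propagates draws of $\psi = \Pr(E=0 \,|\, R=1)$ and $\theta$ through the lower bound; the Beta distribution standing in for the posterior of $\psi$ is invented for illustration only, and drawing $\psi$ and $\theta$ independently is a simplification of the full analysis reported below.
\begin{verbatim}
# Simplified sketch: prior sensitivity of the lower bound
# max{0, 1 - psi/(1 - theta)}.  The Beta(232, 10) stand-in for the
# posterior of psi = Pr(E=0 | R=1) is invented for illustration;
# the real analysis uses the Gibbs-sampler output.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
psi = rng.beta(232, 10, size=n)      # stand-in posterior, mean ~ 0.96

for name, (a, b) in {"Prior 1": (0.1, 0.1), "Prior 2": (1, 9)}.items():
    theta = rng.beta(a, b, size=n)   # prior draws of theta = Pr(E=1)
    lower = np.maximum(0.0, 1.0 - psi / (1.0 - theta))
    print(name,
          "P(lower = 0) ~", round(float(np.mean(lower == 0.0)), 2),
          "E(lower | > 0) ~", round(float(lower[lower > 0].mean()), 3))
\end{verbatim}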
We have already noted that the posterior credence probability that the lower bound is $0$ is \takep{$0.627$ for Prior~1, and $0.677$ for Prior~2.}\addp{$0.65$, independent of the prior for $\theta$.} \figref{low} displays the posterior densities for the lower bound, conditional on its being strictly positive, for \takep{these two priors}\addp{Prior~1 and Prior~2}; the means are $0.039$ and $0.025$, and the standard deviations are $0.015$ and $0.016$, respectively. We see that the effects of the differences between the priors are relatively minor. \begin{figure}[htbp] \centering \hfill \subfigure[Prior~1] {\includegraphics[scale=.3]{FIG6a.eps}}\hfill \subfigure[Prior~2] {\includegraphics[scale=.3]{FIG6b.eps}}\hfill\hfill \caption{\takep{Posterior credence density of lower bound for ${\sc pc}\xspace^*$. Data equal to zero have been excluded}\addp{For each of the priors of \figref{priors}, the posterior credence density of the lower bound for ${\sc pc}\xspace^*$, conditional on this being greater than $0$.}} \label{fig:low} \end{figure} \subsubsection{Length of interval} \label{sec:length} Another useful summary of the full inference is the posterior credence distribution of the length of the interval between the lower and upper bounds on ${\sc pc}\xspace^*$, as displayed in \figref{Int}. \begin{figure}[htbp] \centering \hfill \subfigure[Prior~1] {\includegraphics[scale=.3]{FIG7a.eps}}\hfill \subfigure[Prior~2] {\includegraphics[scale=.3]{FIG7b.eps}}\hfill\hfill \caption{\takep{Posterior credence density of length of interval for ${\sc pc}\xspace^*$}\addp{For each of the priors of \figref{priors}, the posterior credence density of the length of \addp{the uncertainty} interval for ${\sc pc}\xspace^*$.}} \label{fig:Int} \end{figure} The posterior mean and standard deviation based on Prior~1 are, respectively, $0.028$ and $0.022$, while for Prior~2 these quantities are $0.035$ and $0.016$. We see high sensitivity to the prior assumptions. This is particularly apparent when we exclude data with lower bound equal to zero (see \figref{Int1}). For cases with lower bound equal to $0$, the interval length is identical with the upper bound, as displayed in \figref{UB0}\takep{, where the two priors give very similar results.}\addp{, and is independent of the prior distribution for $\theta$.} These features are also \takep{clearly} visible in \figref{lengths}. \begin{figure}[htbp] \centering \hfill \subfigure[Prior~1] {\includegraphics[scale=.3]{FIG8a.eps}}\hfill \subfigure[Prior~2] {\includegraphics[scale=.3]{FIG8b.eps}}\hfill\hfill \caption{\addp{For each of the priors of \figref{priors}, the p}\takep{P}osterior credence density of \addp{the} length of \addp{the uncertainty} interval for ${\sc pc}\xspace^*$, \takep{for data with}\addp{conditional on the} lower bound \takep{different from zero}\addp{being greater than $0$.}} \label{fig:Int1} \end{figure} \subsubsection{Coverage probability} \label{sec:cover} Finally, for any probability value $p$, we can compute the posterior credence that this is included in the random interval \eqref{ineqpc*}---and thus is at least a candidate as a value for ${\sc pc}\xspace^*$. We graph this coverage measure, as a function of $p$, in \figref{cover} for both priors.
\begin{figure}[htbp] \centering \hfill \subfigure[Prior~1] {\includegraphics[scale=.3]{FIG9a.eps}}\hfill \subfigure[Prior~2] {\includegraphics[scale=.3]{FIG9b.eps}}\hfill\hfill \caption{\addp{For each of the priors of \figref{priors}, the p}\takep{P}osterior credence probabi\addp{l}ity that the \addp{random uncertainty} interval covers a\addp{ny} specific value\addp{.}} \label{fig:cover} \end{figure} \addp{ \subsection{Individual-focused inference} \label{sec:indiv} The individual-focused inference is much simpler in form: according to the analysis in \secref{altview} (and assuming the approximation mentioned there is valid), we simply replace the chances featuring in the bounds of \eqref{ineqpc*} by their posterior expectations. (Recall that we are taking $H$ to be trivial, so it can be omitted from the notation). The posterior expectation of the upper bound, $\Pr(E = 1 \,|\, R=1)$, is $0.043$, independent of the assumed prior distribution for $\theta$. As for the lower bound, the posterior expectation of $\Pr(E = 0 \,|\, R = 1)$ is $1-0.043 = 0.957$. Also, $\Pr( E=0)=1-\theta$, and since we have no data relevant to $\theta$ the posterior expectation of this quantity is the same as its prior expectation, namely $0.5$ for Prior~1, or $0.9$ for Prior~2. It is clear that there could be high sensitivity to the prior distribution assessed for $\theta$. However, in this case the lower bound is $0$ for both priors. Hence our individual-focused uncertainty interval for ${\sc pc}\xspace^*$ is $(0, 0.043)$ in both cases. } \notep{Needs further discussion} \section{Conclusions} \label{sec:conc} We have seen that statistical inference about ``causes of effects'' is particularly problematic from many points of view, and difficult to justify even in ideal circumstances. First, in order merely to formalise the question, we need to carefully specify, separately, both who is making the inference (in \secref{statcoe} we called that person ``I'') and who (there called ``Ann'') the inference relates to. Next, we need to be satisfied that my information $H$ about Ann is {\em sufficient\/}, in the sense of there being no confounding that could make Ann's treatment choice informative (for me) about her potential outcome variables. When all these conditions are satisfied we can begin to try to learn from relevant data about the two versions, ${\sc pc}\xspace$ and ${\sc pc}\xspace^*$, of the probability of causation. For that purpose we should have good experimental data from which we can get good estimates of the distribution of the outcome, conditional on exposure $E$ and $H$. And even with such ideal estimated probabilities, the resulting inferences are complex, compounding as they do three different kinds of uncertainty: interval bounds, for a probability, that are themselves random. We have made a start at exploring ways of understanding, describing and displaying such triple uncertainty (in an example that admittedly falls far short of the ideal situation), but much remains to be done.\\ \takep{In the case study of \secref{datanal} we addressed the question whether the \mbox{ALTE}\xspace and \mbox{bleed}\xspace were together caused by \mbox{abuse}\xspace. But we can formulate other CoE questions, such as whether the \mbox{ALTE}\xspace alone was caused by \mbox{abuse}\xspace. Since we have observed \mbox{bleed}\xspace, this would involve replacing the denominator of the lower bound in \eqref{ineqpc*} by $1-\Pr(\mbox{abuse}\xspace \,|\, \mbox{bleed}\xspace)$.
Again, we have no data directly relevant to $\Pr(\mbox{abuse}\xspace \,|\, \mbox{bleed}\xspace)$, and would need to assess a prior credence distribution for it. This might reasonably be taken to be higher and tighter than that for $\Pr(\mbox{abuse}\xspace)$ alone, but it would again be important to investigate sensitivity to a range of reasonable choices.} \noindent{\bf Acknowledgement.} We thank Catherine Laurent for valuable background about Ir\`ene Frachon and benfluorex. \bibliographystyle{oupvar}
\section{Introduction} \label{sec:intro} Currently the observations of electromagnetic radiation from astrophysical sources and high energy phenomena in the Universe are restricted to energies outside an observation gap extending from $\approx$ 10 GeV to $\approx$ 300 GeV. From the continuation of current measurements into this energy region we expect to find hints or answers to important physics questions in astrophysics, cosmology, and particle physics. The reason for the observation gap lies in the flux limitation of space-borne instruments for $\gamma$-astronomy due to very small collection areas ($\mathcal{O}$(0.1) m$^2$), which limits the measurements to energies \textit{below} 10 GeV. At the same time the not yet optimized technology of ground-based instruments for $\gamma$-astronomy, i.e., the existing Imaging Air \v{C}erenkov Telescopes (IACTs), limits the detection threshold of $\gamma$-showers to energies \textit{above} 300 GeV. In order to close this gap by the comparatively cheap ground-based technique, the 17 m $\oslash$ MAGIC Telescope has been designed and important components have been developed during the last 2 years \cite{magic}. \section{Physics Goals} \label{sec:phyics} Some of the physics goals of the MAGIC Telescope project can be summarized as follows: \begin{itemize} \item Most of the blazar type active galactic nuclei (AGN) that have been detected by the EGRET detector \cite{egret} onboard the Compton Gamma Ray Observatory (CGRO) below 10 GeV must exhibit cutoff features below 300 GeV. The reason is that of the more than 60 blazars detected by EGRET {\sl below} 10 GeV only 2 (+1) have been detected by the ground-based detectors, although the average fluxes of many of them would have been within the sensitivity range of the IACTs for the naive extrapolation of the spectra, i.e., assuming a continuation of the power law spectra as measured by EGRET. Both the coverage of the observation gap and an improvement of the sensitivity at current energies are therefore needed. \item The visible universe in high energy photons is limited because of pair production on low energy diffuse background photons. Because the low energy photon density varies strongly with energy, an instrument with a lower threshold than current IACTs will have access to a much larger fraction of the Hubble volume. Current IACTs view the universe out to z $\approx 0.1$ ($\approx$ 1.8 billion light years for H$_{0}=$ 50 km sec$^{-1}$ Mpc$^{-1}$). The MAGIC Telescope will be able to observe objects out to very early times, i.e., out to z $\approx 2.8$. \item Gamma-Ray Bursts (GRBs) will also be observable out to cosmological distances. Extrapolation of the GRBs detected by EGRET reveals that even medium strength bursts will yield very large $\gamma$ rates detectable by the MAGIC Telescope, i.e., rates up to the order of kHz. \item Supernova remnants (SNRs), the sites of cosmic ray acceleration in most models of the cosmic ray origin, seem to be more complex than previously believed \cite{jones}. Although three SNRs have been observed above 300 GeV (Crab nebula, Vela, and SN1006), the question of the origin of the cosmic rays is far from answered. More sensitive measurements at lower energies will be of great importance. \item Of the more than 800 known radio pulsars, EGRET has revealed 7 to emit pulsed $\gamma$-rays up to $\approx$ 10 GeV. To clarify the production mechanism, measurements in the 10 GeV to 100 GeV energy domain are crucial.
In some models no pulsed emission is expected beyond some tens of GeV. \item The dark matter in the universe, seen most prominently in the rotation curves of galaxies, may exist in the form of the lightest supersymmetric particle. In most astrophysical models of the dark halo of our Galaxy these particles would cluster in the centre of the Galaxy, opening up the possibility of a $\gamma$ annihilation line and a $\gamma$ continuum measurable by IACTs, with the preferred energy around 100 GeV. \end{itemize} \section{Basic Considerations} \label{sec:basic} As shown in fig.~1, the \v{C}erenkov light pool at 2200 m above sea level (asl) for $\gamma$ induced air showers is almost linearly dependent on the incident $\gamma$ energy. Current IACTs like the 10 m $\oslash$ Whipple telescope in Arizona \cite{whipple} have a photon sensitivity of about 35 photons/m$^2$ corresponding to a $\gamma$ energy threshold of about 300 GeV. As hadron induced air showers produce less \v{C}erenkov light than $\gamma$ induced ones, a natural $\gamma$/hadron separation at the threshold is provided. From fig.~1 one can deduce that this inherent hadron suppression factor rises with falling energy. A telescope that would be sensitive at $\mathcal{O}$(10) GeV therefore does not need to have excellent hadron rejection capabilities based on image analyses already at the threshold. Note that in general the different development of $\gamma$ and hadron induced air showers is exploited for the suppression of the hadronic component by an image analysis of the showers recorded with highly granular cameras. \begin{figure}[t] \begin{center} \psfig{figure=fig1.eps,height=2.5in} \end{center} \caption{Photon density (300-600 nm) at 2000 m asl as a function of incident energy and particle type. The photon density is averaged over an area of 50 000 m$^2$. Taken from ref.~{\protect\cite{chantel}}.} \label{fig:density} \end{figure} Table~1 shows a comparison of some existing air \v{C}erenkov detectors in terms of sensitivity and corresponding physics energy thresholds for $\gamma$-rays, and the minimum number of photoelectrons (ph.e.s) that have to be recorded for a successful image analysis. The trigger threshold energies usually are lower by 15 - 30\%. \begin{table}[t] \caption{Sensitivity of operating, to be upgraded, and planned \v{C}erenkov telescopes in terms of the minimum number of photons/m$^2$ in the \v{C}erenkov light pool. In addition the required number of photoelectrons for reconstruction of the image parameters is given. Note that physics energy thresholds are given. Trigger thresholds generally are lower by 15 to 30\%. The two energy values quoted for VERITAS and HESS correspond to a single telescope or the telescope system, respectively. } \label{tab:sens} \begin{center} \begin{tabular}{lllll} \hline & & & & \\ Telescope & Mirror size & Sensitivity & E$_{thres}$ & ph.e./image \\ (-Array) & (m$^2$) & (Ph./m$^2$) & after cuts & \\ \hline & & & & \\ \multicolumn{5}{c}{Operating Telescopes} \\ \hline HEGRA CT1 & 5 & 220 & 1.5 TeV & $\geq$ 100 \\ HEGRA CT3-6 & 8.4 & 150 & 700 GeV & $\geq$ 100 \\ CAT & 18 & 35 (?) & 300 GeV (?) & $\geq$ 30 \\ WHIPPLE & 74 & 35 & 300 GeV & $\geq$ 300 \\ \hline & & & & \\ \multicolumn{5}{c}{Upgraded Telescopes} \\ \hline WHIPPLE 98 & 74 & 16 (?) & 100 GeV & $\geq$ 100 \\ \hline & & & & \\ \multicolumn{5}{c}{Planned Telescopes} \\ \hline VERITAS & 9 x 74 & 16 (?) & 70 - 100 GeV & $\geq$ 100 \\ HESS & 16 x 74 (?) & 14 (?) & 70 - 100 GeV (?)
& $\geq$ 100 \\ MAGIC & 234 & 1.1 & 12 - 14 GeV & $\geq$ 80 \\ MAGIC (APD) & 234 & 0.6 & $\approx$ 7 GeV & $\geq$ 120 \\ \hline \end{tabular} \end{center} \vspace{-1cm} \end{table} Note that the minimum number of ph.e.s per image required for a successful image analysis is a function of the pixel size, the noise level, and the speed of the camera, which ultimately is limited by the degree of isochronicity of the mirrors. The first and, to a certain extent, the third influence have, e.g., been optimized by the CAT collaboration \cite{degrange} in order to achieve a low threshold with a comparatively small mirror area. In the case of the MAGIC Telescope, however, the very low photon densities cause the first and second influences to dominate; hence the requirement of at least 80 ph.e.s for a successful MAGIC Telescope image analysis. Note also that low noise avalanche photo diodes (APDs) that are required for $\gamma$-ray astronomy are not yet available. The development, however, is progressing fast \cite{lorenz,holl,pichler} and APDs, once they are available, will allow for a further lowering of the energy threshold as indicated in table~\ref{tab:sens}. \section{The Technical Realization} \label{sec:tech} Compared to the currently largest operating Whipple telescope with a mirror dish diameter of 10 m, the MAGIC Telescope will need a sensitivity that is better by a factor of $\approx$ 15 (see table~\ref{tab:sens}) in order to reach the $\mathcal{O}$(10) GeV threshold. In addition the sensitivity to the night sky background (NSB) has to be reduced. These goals will be met by MAGIC Telescope technology items that have either been newly developed or are existing technology adapted to $\gamma$-ray astronomy. The steps and the gains in ph.e.s connected with them are summarized in table~\ref{tab:gain}. \begin{table}[htb] \caption{Steps to lower the energy threshold by raising the gain in ph.e.s for image analysis. The gain in sensitivity for strong signals will be linear, for weak signals it will go like the square root of the quoted numbers. } \label{tab:gain} \begin{center} \begin{tabular}{lc} \hline & \\ Technology step & Gain \\ & in ph.e.s \\ \hline Enlarging the mirror area (10 m $\oslash$ $\rightarrow$ 17 m $\oslash$) & $\approx$ 3 \\ $\approx$ 100\% light collection efficiency in camera (Winston cones) & 1 - 1.5 \\ Application of \textbf{red sensitive} light sensors & $\approx$ 3 \\ Reduction of excessive noise factor & (1.3 - 2) \\ (not multiplicative) & \\ Improved ph.e. collection efficiency & $\approx$ 1.3 \\ Other small improvements & 1.1 - 1.3 \end{tabular} \end{center} \end{table} Note that the gain in sensitivity is linear as long as the signal is large compared to the NSB noise. If the signal and noise are of comparable strength, the gain will only be proportional to the square root of the gain factor. The reduction of the NSB influence will be facilitated by reducing the time spread of the photons arriving at the camera from different parts of the mirror dish with the help of an isochronous mirror dish, i.e., of paraboloid shape. In addition we will reduce the readout time to the intrinsic signal width by the use of a 300 MHz Flash-ADC readout, we shall minimize the read out image area by using small pixels, and we shall minimize background light incident under large angles by using optimized light guides.
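As a rough consistency check on the required overall improvement of $\approx$ 15, the following back-of-envelope sketch (in Python; mid-range values taken from table~\ref{tab:gain}, purely indicative) multiplies out the individual gain factors:
\begin{verbatim}
# Back-of-envelope product of the ph.e. gain factors of the table
# above, using mid-range values; illustrative only.
factors = {
    "mirror area 10 m -> 17 m": 3.0,
    "light collection (Winston cones)": 1.25,
    "red sensitive light sensors": 3.0,
    "ph.e. collection efficiency": 1.3,
    "other small improvements": 1.2,
}
gain = 1.0
for step, f in factors.items():
    gain *= f
print(f"combined ph.e. gain ~ {gain:.0f}")           # ~ 18
# For signals comparable to the NSB noise the improvement goes
# only like the square root of this factor:
print(f"weak-signal improvement ~ {gain ** 0.5:.1f}")  # ~ 4.2
\end{verbatim}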
\subsection{The MAGIC Telescope} \label{sec:magic} The steps necessary to raise the sensitivity as summarized in table~\ref{tab:gain} are realized by new technology or the adaptation of existing technology to $\gamma$-ray astronomy for the MAGIC Telescope (see fig.~2). These items are: \begin{figure}[t] \hspace{1cm} \psfig{figure=fig2.ps,height=3in} \caption{Sketch of the 17 m MAGIC Telescope. } \label{fig:sketch} \end{figure} \begin{itemize} \item A lightweight carbon-fibre space frame which will enable us to increase the mirror dish diameter to 17 m. At the same time the inertia will be kept low for rapid turning capability for GRB searches. \item Newly developed lightweight all-aluminium mirrors with internal heating. \item An active mirror control for reducing the remaining deformations in the mirror frame during telescope turning. \item For the camera we are considering three variants: \begin{itemize} \item a camera equipped with classical photo multiplier tubes (PMTs), i.e., a copy of the now operational 271 pixel camera of the HEGRA telescopes; \item a camera equipped with hybrid PMTs with high quantum efficiency (QE) also in the red part of the spectrum (QE $\approx$ 45\%); \item as a future option we anticipate the use of silicon avalanche photo diodes (APDs) with about 80\% QE. Here still further major developments are needed, however. \end{itemize} \item The analog signals from the camera PMTs (APDs) will be transported from the camera to the electronics container at ground level by optical fibres. This will result in a small camera weight and allow constant access to the electronics on the ground. \item The signals will be digitized by 8-bit Flash-ADCs with a sampling rate $\geq$ 300 MHz. Besides minimizing the noise this will give the precise shape of the signals, which can then be exploited for hadron background suppression. It will also provide buffering for higher level trigger decisions and will allow more telescopes to be added in the future in order to build the first large $\gamma$-ray observatory \cite{duo}. \end{itemize} \subsection{Some MAGIC Telescope technology elements in detail} The three-layer space frame will be made from carbon-fibre epoxy tubes which are both lightweight and rigid. A finite element analysis has shown that the residual deformations can be kept below 3.5 mm with respect to the nominal curvature at any position for a total weight of the frame and mirror elements of less than 9 tons. Fig.~3 shows a computer generated view of the space frame with the three layers of 1 m, $\approx$ 1.14 $\cdot \sqrt{2}$ m, and $\approx$ 2 m grid spacing. A circumferential ring of 1 m height is added to further stiffen the frame. \begin{figure}[htb] \leavevmode \centering \epsfxsize=8cm \epsffile{fig3.ps} \caption{Computer generated view of the space frame consisting of a 3-layer structure stiffened at the circumference by an additional 1 m high structure. The thicker lines correspond to the inset welded steel frame construction in the area of the axis of the dish.} \label{fig-mero_gitter2} \end{figure} \begin{figure}[htb] \leavevmode \centering \epsfxsize=8cm \epsffile{fig4.ps} \caption{Cross section of a hybrid photomultiplier with avalanche diode readout.} \label{fig-hybrid_tube} \end{figure} The mirror will be tessellated with a basic element size of 50 x 50 cm$^{2}$. These new lightweight elements are sandwich aluminium panels, equipped with internal heating to prevent dew and ice deposits.
By diamond turning, a high quality surface with a residual roughness below 10 nm is achieved, yielding a typical focal spot size of 6 mm at a focal length of 34 m. The preproduction series of these mirrors, installed on the HEGRA CT1 prototype telescope, has already shown the soundness of this design. The active mirror control has been newly developed and successfully tested in the laboratory. It works on panels of 4 preadjusted mirror elements which can be tilted by two stepping motors. A video camera will record the position of a laser pointer on the casing of the camera, and the steering commands will be derived from the comparison of the actual spot position with the nominal one. \begin{figure}[htb] \begin{center} \epsfig{file=fig5.ps,height=4.5cm} \caption{Pulse height spectrum of a modified Intevac IPC when illuminated by a fast blue LED pulser. Settings: $U_{{\rm c-a}}= 10$ kV; $U_{{\rm AP}}= 30.0$ V, $\tau = 50$ ns.} \label{fig-phe} \end{center} \end{figure} The camera will have a field of view with a diameter of 3.6$^{\circ}$ with a pixel size of 0.1$^{\circ}$ in the central region of 2.5$^{\circ}$ and a coarser pixelisation (0.2$^{\circ}$) in the outer part. The photon sensor we intend to use is of the hybrid PMT type (hybrid photon detector, HPD) with a GaAsP photocathode as, e.g., produced by INTEVAC (see fig.~4) \cite{daniel}. This type of HPD is characterized by a considerably higher QE of $\approx$ 45\% that extends into the red part of the spectrum. The QE in the blue will be enhanced to the same level by the application of a wavelength shifter dye. The second main element of these detectors is the readout diode, which in the original INTEVAC design was a GaAs PIN diode; for the MAGIC Telescope application it will have to be exchanged for a Si avalanche diode (AD) in order to achieve a gain of 30,000 - 50,000 already with a $U_{\rm cathode-anode}$ ($U_{c-a}$) of the order of 5 kV. Note that the connected loss in speed will not be crucial for our application but the low operation voltage will considerably ease the operation under harsh environmental conditions and will allow the use of cheaper and less complex transimpedance amplifiers compared to charge sensitive ones. The pulse height spectrum recorded with a prototype HPD using a blue LED pulser of 5 ns FWHM and $\langle n_{photon}\rangle \approx$ 6-8 is shown in fig.~5. The complete electronics setup for a single channel is shown in fig.~6. \begin{figure}[htb] \begin{center} \epsfig{file=fig6.eps,width=8cm} \caption{Basic block diagram of a single camera pixel readout chain.} \label{fig-block} \end{center} \end{figure} The transport of analog PMT pulses with optical fibres has been developed for the AMANDA collaboration \cite{karle}. For the MAGIC Telescope we are currently performing measurements aimed at optimizing the transmitter and receiver ends for our needs. \section{Performance} We have performed extensive Monte Carlo simulations of the MAGIC Telescope in order to optimize the design and to get performance estimates. The trigger threshold (defined as the position of the maximum of the differential counting rate) is slightly below 10 GeV. The effective collection area will reach $\approx$ 10$^5$ m$^2$ at about 100 GeV (for observations near the zenith) and will be as large as 6$\cdot$10$^6$ m$^2$ at very large zenith angles. The corresponding sensitivity is shown in fig.~7 together with the numbers for some current IACTs and for the EGRET detector.
Also shown is the sensitivity as quoted for the planned 9-telescope array VERITAS and the planned satellite detector GLAST. \begin{figure}[htb] \leavevmode \centering \epsfxsize=8cm \epsffile{fig7.eps} \caption{Comparison of the point-source sensitivity of the MAGIC Telescope at 0$^{\circ}$ zenith angle and at zenith angles of about 75$^{\circ}$ (denoted MAGIC (large Zenith Angles)) to the point-source sensitivity of existing (CELESTE, HEGRA CT system, MILAGRO, Whipple) or planned ground-based installations (VERITAS) and to the sensitivity within 1 month of observations for the existing (EGRET) and planned (GLAST) space-borne high energy $\gamma$-ray experiments.} \label{fig-sensitivity} \end{figure} \section{Conclusions} The 17 m diameter MAGIC Telescope has been designed to measure $\gamma$-rays with energies above 10 GeV. Most of the new technology for this telescope has been developed during the last 2 years \cite{magic}. Using innovative elements it will be possible to close the existing observation gap in the electromagnetic spectrum for about 1\% (!) of the cost of a satellite experiment, which until now was believed to be necessary in order to do measurements in this energy domain. At the same time the sensitivity in the energy region of current \v{C}erenkov telescopes will be improved by up to an order of magnitude. The innovative elements of the MAGIC Telescope technology will very likely be the basis for all IACTs of the next generation. We estimate the hardware price of the telescope to be around 3.5 M\$. The construction time will be 2.5 - 3.5 years. \section*{References}
\section{Introduction}\label{section: Introduction} \textit{Functional magnetic resonance imaging} (fMRI) is a non-invasive brain imaging technique used to estimate both brain regional activity and interactions between brain regions due to its ability to detect \textit{neuronal activity} in the entire brain simultaneously. In fMRI studies, brain signals are measured on 3D volume elements (\textit{voxels}) during a certain period of time, and each signal is observed at discrete time points. For example, if an fMRI study lasts for 20 minutes and data are collected every 2 seconds, that is, the time between successive brain scans referred to as \textit{repetition time} (TR, denoted as $\Delta$ hereafter) is 2 seconds, then we obtain observations at 600 time points. It is often of interest to investigate neuronal activity. However, since neuronal activity occurs in milliseconds, it is impossible to directly observe it using fMRI technology. It is implicitly captured by \textit{blood-oxygenation level-dependent} (BOLD) signals. When neuronal activity in a brain area occurs, it is followed by localized changes in metabolism, where the corresponding local oxygen consumption increases, and then oxygen-rich blood flows to this area. This process results in an increase in oxyhemoglobin and a decrease in deoxyhemoglobin. The BOLD signal value at each time point is the difference between the oxyhemoglobin and deoxyhemoglobin levels. The signal consisting of BOLD values across all time points measures the localized metabolic activity influenced by the local brain vasculature, and it indirectly measures the localized neuronal activity. FMRI captures BOLD signals while study subjects are either resting (resting-state fMRI) or performing tasks (task-fMRI). Our brains are networks that can be expressed as graphs. Dependencies between activation in any two regions of the brain are referred to as {\it functional connectivity} (FC, \cite{cribben2016functional}). These dependencies have been widely studied, and several approaches have been proposed for estimation of FC, especially at rest. Task-evoked FC is fundamentally different from resting-state FC. \cite{lynch2018task} show that existing estimates of task-evoked FC cannot explain differences between FC at rest and during a task. While brain FC in an individual subject is often of interest, referred to as subject-level FC, population-level estimates of FC have been proposed in the literature and used for obtaining more robust subject-level FC estimates (e.g., see \cite{mejia2018improved}). To overcome the limitations of existing task-evoked FC estimation approaches when investigating population-level FC, we propose a novel definition of \textit{population-level task-evoked FC} (ptFC). This definition is based on a correlation of subject-specific random effects capturing task-evoked neuronal activity. Importantly, our proposed ptFC framework takes into account the complex biological processes of the brain response to task stimuli that many existing estimators ignore. We are interested in modeling neuronal activity and BOLD signals of a set of \textit{subjects} $\omega$ from a population of interest. At each time point $t$, we denote by $Y_k(\omega;t)$ the BOLD signal value at the $k^{th}$ node of $\omega$'s brain. The word ``node'' herein refers to either a voxel or a \textit{region of interest} (ROI). Aggregated BOLD signals at a macro-area level are often of interest.
That is, for each subject, BOLD signals are spatially averaged within pre-selected ROIs, and the resulting ROI-specific signals are analyzed. The BOLD signals for subject $\omega$ are represented by a $K$-vector-valued function $\{\pmb{Y}(\omega;t)=\left(Y_1(\omega;t), \cdots, Y_K(\omega;t)\right)^T \vert t\in \mathcal{T}\}$ on $\mathcal{T}$, where $\mathcal{T}$ is the collection of time indices and $K$ is the number of nodes. Furthermore, we assume that $\mathcal{T}$ is a compact subset of $\mathbb{R}$. In this paper, we model three types of signals in task-fMRI studies: (i) \textit{stimulus signals} denoted by $N(t)$ representing the experimental designs of tasks, (ii) \textit{task-evoked neuronal activity signals} at the $k^{th}$ node $\Phi_k[\omega; N(t)]$ stemming solely from the task stimulus $N(t)$, where the ``stimulus-to-activity'' maps $\Phi_k[\omega;\bullet]: N(t)\mapsto \Phi_k[\omega; N(t)]$ depend on subjects $\omega$, and (iii) observed BOLD signals $Y_k(\omega;t)=\Psi_k\left\{\omega; \Phi_k\left[\omega; N(t)\right]\right\}$ that are associated with the task-evoked neuronal activity $\Phi_k[\omega; N(t)]$, where the ``activity-to-BOLD'' maps $\Psi_k\{\omega; \bullet\}:\Phi_k[\omega; N(t)]\mapsto Y_k(\omega;t)$ are $\omega$-dependent. In (ii) and (iii), map $\Phi_k[\omega;\bullet]$ characterizes how neurons at the $k^{th}$ node of $\omega$ react to stimulus $N(t)$, and map $\Psi_k\{\omega;\bullet\}$ describes how neuronal activity $\Phi_k[\omega;N(t)]$ induces BOLD signal $Y_k(\omega;t)$. We are interested in modeling $\Phi_k[\omega; N(t)]$. However, only BOLD signals $Y_k(\omega;t)$ are observable. The goal of many fMRI studies, including our work, is to recover $\Phi_k[\omega;\bullet]$ by analyzing $Y_k(\omega;t)$. In Section \ref{section: A New Definition of FC}, we provide nonparametric models for $\Phi_k[\omega;\bullet]$ and $\Psi_k\{\omega;\bullet\}$. Theoretically, time $t$ is continuous and $\mathcal{T}=[0,t^*]$, where $t^*<\infty$ denotes the end of the experiment. In applications, we obtain data only at discrete and finite time points in $\mathcal{T}=\{\tau \Delta\}_{\tau=0}^T$, where $\Delta$ is the predetermined TR and $T$ indicates that BOLD signals are observed at $T+1$ time points. A BOLD signal at the $k^{th}$ node of subject $\omega$ in task-fMRI consists of three components: (i) $P_k(\omega;t)$ denotes the one evoked solely by the experimental task $N(t)$, (ii) $Q_k(\omega;t)$ denotes the one stemming from spontaneous brain activity, e.g., the activity coordinating respiration and heartbeat, and the neuronal activity responding to stimuli that are not of interest, and (iii) random error $\epsilon_k(\omega;t)$. We assume that these components have an additive structure, and observed BOLD signals $Y_k(\omega;t)$ are of the following form. \begin{align}\label{eqn: BOLD signal decomposition} Y_k(\omega;t)=P_k(\omega;t) + Q_k(\omega;t) + \epsilon_k(\omega;t), \ \ k=1,2,\cdots,K,\ \ t\in\mathcal{T}, \end{align} where $P_k(\omega;t)$ are called \textit{task-evoked terms}. $P_k(\omega;t)$ are of primary interest and identifiable under some probabilistic conditions. The proof of identifiability of $P_k(\omega;t)$ is in Web Appendix C. Model (\ref{eqn: BOLD signal decomposition}) is equivalent to many existing approaches for modeling BOLD signals in task-fMRI (e.g., \cite{joel2011relationship}, \cite{zhang2013semi}, and \cite{warnick2018bayesian}). The relationship between (\ref{eqn: BOLD signal decomposition}) and these approaches is discussed in Web Appendix A.
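As a toy illustration of the decomposition (\ref{eqn: BOLD signal decomposition}), the following sketch (in Python) simulates one node's signal; every ingredient below (block onsets, smoothing kernel, nuisance oscillation, noise level) is an arbitrary assumption made only for illustration and is not one of the models cited above or proposed in this paper.
\begin{verbatim}
# Toy simulation of Y = P + Q + eps as in model (1); all choices
# below are arbitrary illustrations, not the proposed models.
import numpy as np

delta = 2.0                              # an illustrative TR (seconds)
t = np.arange(0.0, 300.0, delta)         # time grid
N = ((60 <= t) & (t < 90)) | ((180 <= t) & (t < 210))  # block stimulus

kernel = np.exp(-np.arange(0.0, 20.0, delta) / 5.0)    # crude smoothing
P = np.convolve(N.astype(float), kernel)[: t.size]     # task-evoked term
Q = 0.5 * np.sin(2 * np.pi * 0.25 * t)   # respiration-like nuisance term
eps = np.random.default_rng(1).normal(0.0, 0.3, t.size)  # random error
Y = P + Q + eps                          # observed BOLD signal, as in (1)
\end{verbatim}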
Similar to these existing methods, we assume no interaction between $P_k(\omega;t)$ and $Q_k(\omega;t)$ in (\ref{eqn: BOLD signal decomposition}). Inclusion of an interaction between these terms is discussed in Web Appendix H. We implement our proposed methods to estimate functional connectivity in a study investigating the human motor system. To investigate motor FC, we use a cohort of subjects from a task-evoked fMRI study publicly available at the Human Connectome Project (HCP). The \textit{block-design} motor task used in this study is adapted from experiments by \cite{buckner2011organization} and \cite{yeo2011organization}, while details on the HCP implementation are in \cite{barch2013function}. During the experiment, the subjects are asked to perform five tasks when presented with a cue: tap left/right fingers, squeeze left/right toes, and move their tongue. BOLD signals $Y_k(\omega;t)$ are collected from 308 participants, and each BOLD signal is obtained with TR $\Delta=0.72$ seconds. Each experiment lasts for about $204$ seconds, i.e., $T=283$. We focus on the brain functional connectivity for the task of \textit{squeezing right toes}. The onsets of these task blocks vary across subjects. Nevertheless, the corresponding onsets of any two participants differ by less than 0.1 seconds ($<\Delta$). Therefore, we assume that all subjects share the same stimulus signal $N(t)=\mathbf{1}_{[86.5, 98.5)}(t)+\mathbf{1}_{[162, 174)}(t)$. In statistical analysis of task-fMRI data $\{\pmb{Y}(\omega;t)\}_{t\in\mathcal{T}}$ in (\ref{eqn: BOLD signal decomposition}), two topics are primarily of interest: (i) identification of nodes exhibiting task-evoked neuronal activity, i.e., the indices $k$ such that $P_k(\omega;t)\ne 0$, and (ii) detection of task-evoked associations between brain nodes. \textit{General linear models} (GLM, \cite{friston1994analysis}) are commonly implemented to detect task-evoked nodes (\cite{lindquist2008statistical}). GLM analysis is conducted at the individual node level and is not informative for investigating associations between nodes. Associations between nodes captured by BOLD signals characterize FC (\cite{friston1993functional}). FC observed during a task experiment tends to differ from that observed in a resting-state experiment (\cite{lowe2000correlations}). The difference between task-fMRI and resting-state fMRI is represented by the task-evoked terms $P_k(\omega;t)$. In this paper, we investigate the associations between $\{P_k(\omega;t)\}_{k=1}^K$, instead of those between the BOLD signals $\{Y_k(\omega;t)\}_{k=1}^K$, i.e., the FC stemming solely from the task of interest $N(t)$. A considerable amount of work has been done to define and estimate FC. For example, \cite{friston1993functional} defines FC as the temporal Pearson correlation between a pair of BOLD signals across time.
Following this approach, one may measure task-evoked FC between $P_k(\omega;t)$ and $P_l(\omega;t)$ by the absolute value of the Pearson correlation as follows: \begin{align}\label{eq: raw Pearson correlation} \left\vert corr(\omega;P_k, P_l)\right\vert:=\left\vert\frac{\int_{\mathcal{T}} P_k^*(\omega;t)\times P_l^*(\omega;t)\mu(dt)}{\sqrt{ \int_{\mathcal{T}}\left\vert P_k^*(\omega;t)\right\vert^2 \mu(dt) \times \int_{\mathcal{T}}\left\vert P_l^*(\omega;t)\right\vert^2 \mu(dt)}}\right\vert, \end{align} where $P^*_{k'}(\omega;t)=P_{k'}(\omega;t)-\frac{1}{\mu(\mathcal{T})}\int_{\mathcal{T}}P_{k'}(\omega;s)\mu(ds)$ for $k'\in\{k,l\}$; if $\mathcal{T}=[0,t^*]$, then $\mu(dt)=dt$; if $\mathcal{T}=\{\tau \Delta\}_{\tau=0}^T$, then $\mu(dt)$ is the counting measure $\sum_{\tau\in\mathbb{Z}}\delta_{\tau \Delta}(dt)$, where $\delta_{\tau \Delta}$ is the point mass at $\tau \Delta$. There are many other approaches to defining and estimating FC; e.g., \textit{coherence analysis} (\cite{muller2001multivariate}) and \textit{beta-series regression} (\cite{rissman2004measuring}) are widely used in FC studies. These approaches have several limitations, discussed in Section \ref{section: advantages of the task-fc def}, that are addressed by our proposed approach. This paper is organized as follows. Section \ref{section: A New Definition of FC} proposes models for task-evoked neuronal activity and BOLD signals, respectively; based on these models, we propose a rigorous and interpretable definition of ptFC. Section \ref{section: Estimation} presents an algorithm for estimating ptFC. We compare the performance of our proposed algorithm with existing approaches using simulations in Section \ref{section: simulations}. In Section \ref{section: applications}, we apply the proposed approach to estimate the ptFC during a motor task using the publicly available HCP data set. Section \ref{section: Conclusions and Further Discussions} concludes the paper. \section{Population-level Task-evoked Functional Connectivity (ptFC)}\label{section: A New Definition of FC} We first propose models for neuronal activity and BOLD signals at individual nodes. Using these models, we provide the definition of ptFC. Lastly, we list advantages of ptFC compared to existing approaches. Hereafter, $\Omega$ denotes a study population of interest, i.e., the collection of subjects of interest; $\Omega$ is discrete and finite. Let $\mathbb{P}$ denote a study-dependent probability measure on $\Omega$. Then $(\Omega, 2^\Omega, \mathbb{P})$ is a probability space, where $2^\Omega$ is the power set of $\Omega$. $\mathbb{E}$ denotes the expectation with respect to $\mathbb{P}$. In the HCP experiment described in Section \ref{section: Introduction}, $\Omega$ represents the population of healthy adults. FMRI data are collected for each subject $\omega$ in a sample of size $n=308$ drawn from $\Omega$ according to the underlying distribution $\mathbb{P}$. We estimate $\mathbb{P}$ by the empirical distribution $\frac{1}{n}\sum_{\omega=1}^{n}\delta_\omega$. The expectation and correlation with respect to this empirical distribution are the sample average and sample correlation, respectively. \subsection{Definition of ptFC}\label{section: Definition of ptFC} Let $h_k(t)$ denote the \textit{hemodynamic response function} (HRF, \cite{lindquist2008statistical}) at the $k^{th}$ node corresponding to the task of interest $N(t)$ and shared by all subjects $\omega\in\Omega$.
For each $\omega\in\Omega$, the GLM approach essentially models the task-evoked terms in (\ref{eqn: BOLD signal decomposition}) by \begin{align}\label{eq: convolution model} P_k(\omega;t)= \beta_k(\omega)\times (N*h_k)\left(t-t_{0,k}\right),\ \ t\in\mathcal{T},\ \ \omega\in\Omega,\ \ k=1,2,\cdots, K, \end{align} where $\beta_k(\omega)$ is a GLM regression coefficient representing the subject-specific random effect, $t_{0,k}$ is the latency shared by all subjects, included because the reaction of the $k^{th}$ node to $N(t)$ at time $t$ is not instantaneous, and $*$ denotes convolution. Specifically, this is a GLM with response $\{P_k(\omega;t)\}_{t\in\mathcal{T}}$ and independent variable $\{(N*h_k)(t-t_{0,k})\}_{t\in\mathcal{T}}$. The error term of the GLM is absorbed by $\{\epsilon_k(\omega;t)\}_{t\in\mathcal{T}}$ in (\ref{eqn: BOLD signal decomposition}). The task-evoked terms $P_k(\omega;t)$ in (\ref{eq: convolution model}) model the neuronal activity evoked by task $N(t)$, and the subject-specific $\beta_{k}(\omega)$ measures the magnitude of this component. Visualizations of (\ref{eq: convolution model}) are in Web Figure 6. Theoretically, the latency depends on the subject $\omega$. However, in most studies, the latency is much shorter than the corresponding TR $\Delta$, and the difference between the latencies of any two subjects is negligible. Therefore, we model the latency as a quantity $t_{0,k}$ shared by all subjects. With model (\ref{eq: convolution model}), we rewrite the model (\ref{eqn: BOLD signal decomposition}) for BOLD signals as follows. \begin{align}\label{eq: BOLD time-course model} Y_k(\omega;t)=\beta_k(\omega)\times \left(N*h_k\right)\left(t-t_{0,k}\right)+R_k(\omega;t),\ \ t\in\mathcal{T},\ \ k=1, \cdots, K,\ \ \omega\in\Omega, \end{align} where $R_k(\omega; t)=Q_k(\omega;t)+\epsilon_k(\omega;t)$ are referred to as \textit{reference terms} for succinctness. The task-evoked terms $P_k(\omega;t)=\beta_k(\omega)\times (N*h_k)(t-t_{0,k})= \left[\{\beta_k(\omega)\times N(\cdot-t_{0,k})\}*h_k \right](t)$ correspond to neuronal activity evoked by $N(t)$, where the HRFs $h_k$ reflect metabolism and vasculature and do not characterize neuronal activity. Therefore, we model the neuronal activity signals $\Phi_k[\omega; N(t)]$ responding to $N(t)$ as $\beta_k(\omega)\times N(t-t_{0,k})$. Using model (\ref{eq: BOLD time-course model}), the ``stimulus-to-activity" map $\Phi_k[\omega; \bullet]$ and ``activity-to-BOLD" map $\Psi_k\{\omega; \bullet\}$ in Section \ref{section: Introduction} are $\Psi_k\left\{\omega; \bullet \right\}: \Phi_k\left[\omega; N(t)\right]\mapsto \left\{\Phi_k[\omega; N(\cdot)]*h_k \right\} (t) +Q_k(\omega;t)+\epsilon_k(\omega;t)=Y_k(\omega;t)$ and \begin{align}\label{eq: stimulus-BOLD maps} \Phi_k\left[\omega; \bullet\right]: N(t)\mapsto \beta_k(\omega)\times N\left(t-t_{0,k}\right). \end{align} FC is anticipated to characterize the mechanism of neuronal activity rather than metabolism or vasculature. Therefore, task-evoked FC is defined using the task-evoked neuronal activity signals $\Phi_k[\omega; N(t)]$. In model (\ref{eq: stimulus-BOLD maps}) of $\Phi_k[\omega; N(t)]$, the tasks $N(t)$ are fully determined by experimental designs. Additionally, the latency $t_{0,k}$ can be viewed as a parameter of the HRF $h_k$ since the task-evoked terms can be expressed as $\{[\beta_k(\omega)\times N]*[h_k(\cdot-t_{0,k})]\}(t)$. Therefore, task-evoked FC is expected to be determined by $\beta_k(\omega)$.
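To make models (\ref{eq: convolution model}) and (\ref{eq: BOLD time-course model}) concrete, the following minimal \texttt{R} sketch simulates a task-evoked term on the HCP time grid of Section \ref{section: Introduction}, using the circular convolution formalized later in Section \ref{section: Estimation}. The value of $\beta_k(\omega)$, the zero latency, and the use of the canonical HRF from the \texttt{neuRosim} package are illustrative assumptions of this sketch, not estimates.
\begin{verbatim}
# Minimal sketch of model (3): P_k(w;t) = beta_k(w) * (N * h_k)(t - t0k).
# beta, t0k = 0, and the canonical HRF are illustrative assumptions.
library(neuRosim)                    # provides canonicalHRF()
Delta <- 0.72; T_ <- 283
t     <- (0:T_) * Delta              # discrete time grid tau * Delta
N     <- as.numeric((t >= 86.5 & t < 98.5) | (t >= 162 & t < 174))
h     <- canonicalHRF(t, verbose = FALSE)      # shared HRF h_k
conv  <- function(a, b) {            # circular convolution on the grid
  n <- length(a)
  sapply(1:n, function(i) sum(a * b[((i - (1:n)) %% n) + 1]) / n)
}
beta  <- 1.5                         # hypothetical subject effect beta_k(w)
P_k   <- beta * conv(N, h)           # task-evoked term P_k(w;t) with t0k = 0
\end{verbatim}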
Since $\beta_k(\omega)$, for $k=1,\cdots,K$, measure the magnitude of the neuronal activity evoked by task $N(t)$, nodes $k$ and $l$ are functionally connected at the population level during the task if one of the following scenarios holds: (i) for subjects with strong reaction to $N(t)$ at their $k^{th}$ nodes, their $l^{th}$ nodes' reaction to $N(t)$ is strong as well and vice versa; (ii) for subjects with strong reaction to $N(t)$ at their $k^{th}$ nodes, their $l^{th}$ nodes' reaction to $N(t)$ is weak and vice versa; i.e., the reactions are either positively or negatively correlated. Therefore, we make the following assumptions on the distribution of $\beta_k(\omega)$ across $(\Omega,\mathbb{P})$: (1) if there exists a functional connection evoked by $N(t)$ between nodes $k$ and $l$, the corresponding $\beta_k(\omega)$ and $\beta_l(\omega)$ are approximately \textit{linearly} associated; (2) each variance $\mathbb{V}\beta_k=\mathbb{E}\left(\beta_k^2\right)-\left(\mathbb{E}\beta_k\right)^2>0$, i.e., the random variable $\beta_k(\omega)$ is not deterministic, where $\mathbb{E}\beta_k^\nu=\int_{\Omega}\{\beta_k(\omega)\}^\nu \mathbb{P}(d\omega)$ for $\nu=1,2$. Since $corr\left(\beta_k, \beta_l\right):=\frac{\mathbb{E}\left(\beta_k\beta_l\right)-\left(\mathbb{E}\beta_k\right)\left(\mathbb{E}\beta_l\right)}{\sqrt{\mathbb{V}\beta_k\times\mathbb{V}\beta_l}}$ measures the linear correlation between $\beta_k(\omega)$ and $\beta_l(\omega)$, we define ptFCs as follows. \begin{definition}\label{def: group-level task-evoked FC and EC} For each subject $\omega\in\Omega$, the task-evoked BOLD signals $\{Y_k(\omega; t)\}_{k=1}^K$ are of form (\ref{eq: BOLD time-course model}). The ptFC between the $k^{th}$ and $l^{th}$ nodes is defined as $\vert corr(\beta_k,\beta_l)\vert$. \end{definition} \noindent An advantage of ptFC is its scale-invariance. Since $\beta_k(\omega)\, (N*h_k)=\frac{\beta_k(\omega)}{c_1\times c_2} [(c_1 N)*(c_2 h_k)]$ for any $c_1, c_2\ne 0$, the scale of $\beta_k(\omega)$ changes if the scale of $h_k$ or $N(t)$ changes. But $\vert corr(\beta_k,\beta_l)\vert$ is invariant to the transform $\beta_{k'}(\omega)\mapsto c\beta_{k'}(\omega)$, for $k'\in\{k,l\}$ and any $c\ne0$. Furthermore, using this scale-invariance, we show in Web Appendix C that ptFC is identifiable under some probabilistic conditions. Additionally, $\vert corr(\beta_k,\beta_l)\vert$ is invariant to the transform $\beta_{k'}(\omega)\mapsto \beta_{k'}(\omega)+c$ for $k'\in\{k,l\}$ and any $c\in\mathbb{R}$; hence we may assume $\mathbb{E}\beta_{k'}=0$ for $k'\in\{k,l\}$ in Section \ref{section: Estimation}. \noindent\textbf{Interpretation of $\beta_k(\omega)$}: Each signal $\Phi_k\left[\omega; N(t)\right] = \beta_k(\omega)\times N\left(t-t_{0,k}\right)$ describes the neuronal activity evoked by task $N(t)$, excluding effects of local vasculature or metabolism. If the $k^{th}$ node of $\omega$ does not react to task $N(t)$, then $\beta_k(\omega)=0$. For given $N(t)$ and $h_k$, the magnitude $\vert\beta_k(\omega)\vert$ indicates the strength of the reaction of the $k^{th}$ node of $\omega$ to $N(t)$. \noindent\textbf{Interpretation of $\vert corr(\beta_k, \beta_l)\vert$}: The pair $\{\beta_k(\omega), \beta_l(\omega)\}_{\omega\in\Omega}$ quantifies the magnitude of neuronal activity in response to $N(t)$ for nodes $k$ and $l$. These two nodes have functional connectivity evoked by $N(t)$ if $\{\beta_k(\omega)\}_{\omega\in\Omega}$ and $\{\beta_l(\omega)\}_{\omega\in\Omega}$ are linearly associated across $(\Omega,\mathbb{P})$.
More explicitly, a perfectly linear relationship between $\{\beta_k(\omega)\}_{\omega\in\Omega}$ and $\{\beta_l(\omega)\}_{\omega\in\Omega}$ implies the strongest functional connectivity between nodes $k$ and $l$ evoked by $N(t)$. Finally, $\vert corr(\beta_k, \beta_l)\vert$ quantifies the strength of $N(t)$-induced connectivity. The proposed framework for estimation of population-level FC results in an interpretable quantification of FC based on a more realistic model of task-evoked neuronal activity relationships between brain nodes compared with the existing methods discussed in Section \ref{section: advantages of the task-fc def}. The Pearson correlation approach defines FC between two brain nodes by a correlation defined on the time index space $\mathcal{T}$ (see equation (\ref{eq: raw Pearson correlation})). In contrast, ptFC in Definition \ref{def: group-level task-evoked FC and EC} is of a correlation form defined on the population space $\Omega$. Although the $\mathcal{T}$-correlation and $\Omega$-correlation forms are different from the mathematical perspective, they model the same brain activity mechanism. Therefore, the correlations are comparable. Recently, an emerging consensus in the literature suggests that brain networks undergo temporal fluctuations corresponding to experimental tasks. \textit{Dynamic connectivity} (DC) is a collection of approaches investigating these fluctuations. Although ptFC in Definition \ref{def: group-level task-evoked FC and EC} does not change over time, the ptFC framework can be directly incorporated into \textit{sliding-window} based DC estimation (\cite{hutchison2013dynamic}). Specifically, for a preselected window of time, BOLD signals are extracted for that time interval, and ptFC is estimated using the algorithm proposed in this paper. Next, the window is shifted in time by a certain number of time points, and ptFC is estimated from the BOLD signals within the shifted window of the same length as in the first step. As a result, we obtain a sequence of ptFCs illustrating the dynamic structure of FC. \subsection{Advantages of ptFC}\label{section: advantages of the task-fc def} In this subsection, we present advantages of ptFC in Definition \ref{def: group-level task-evoked FC and EC} compared to existing approaches. We first discuss the limitations of the Pearson correlation approach (\ref{eq: raw Pearson correlation}) using model (\ref{eq: convolution model}). Plugging (\ref{eq: convolution model}) into (\ref{eq: raw Pearson correlation}), we obtain the following association between $P_k(\omega;t)$ and $P_l(\omega;t)$. \begin{align}\label{eq: Pearson correlation with the convolution model} \left\vert corr(\omega;P_k, P_l)\right\vert=\left\vert \int_{\mathcal{T}} \phi_k(t) \times \phi_l(t)\mu(dt)\Bigg/\sqrt{ \int_{\mathcal{T}}\left\vert \phi_k(t)\right\vert^2 \mu(dt)\times \int_{\mathcal{T}}\left\vert \phi_l(t)\right\vert^2 \mu(dt)}\right\vert, \end{align} where $\phi_{k'}(t)=(N*h_{k'})(t-t_{0,k'})-\frac{1}{\mu(\mathcal{T})}\int_{\mathcal{T}}(N*h_{k'})(s-t_{0,k'})\mu(ds)$ for $k'\in\{k,l\}$. Equation (\ref{eq: Pearson correlation with the convolution model}) reveals the following limitations of the Pearson correlation approach (\ref{eq: raw Pearson correlation}). \noindent\textbf{Only nuisance parameters:} Based on the reasoning in Section \ref{section: Definition of ptFC}, task-evoked neuronal activity is modeled by $\beta_k(\omega)$.
Hence, it is counterintuitive that (\ref{eq: Pearson correlation with the convolution model}) does not depend on $\beta_k(\omega)$ and depends only on the nuisance parameters $N(t)$ and $h_k(t)$ when neuronal activity is of interest. For example, if the $k^{th}$ node does not react to $N(t)$, then the task-evoked term $\beta_k(\omega)\times (N*h_k)(t-t_{0,k})$ is expected to be zero, i.e., $\beta_k(\omega)=0$, and there should be no task-evoked interaction between the $k^{th}$ and other nodes. However, (\ref{eq: Pearson correlation with the convolution model}) can still be very large, as the non-reaction information represented by $\beta_k(\omega)= 0$ vanishes from this quantity. In contrast with (\ref{eq: Pearson correlation with the convolution model}), Definition \ref{def: group-level task-evoked FC and EC} is based on $\{(\beta_k(\omega), \beta_l(\omega))\vert\omega\in\Omega\}$. \noindent\textbf{Variation in latency:} $t_{0,k}$ may vary across nodes, e.g., see \cite{miezin2000characterizing} for an investigation of the left and right visual and motor cortices and the corresponding difference between response onsets. If $t_{0,k}\ne t_{0,l}$, for example $t_{0,k}< t_{0,l}$, then it is likely that node $k$ reacts to task $N(t)$ first, and then the neuronal activity at node $k$ causes that at node $l$. Because of this potential causality represented by $t_{0,k}< t_{0,l}$, it is natural to expect that nodes $k$ and $l$ are likely functionally connected. However, measurement (\ref{eq: Pearson correlation with the convolution model}) can be very small if $t_{0,k} \ne t_{0,l}$ and may not reveal the true interaction between the neuronal activity of the two nodes. An example of this issue is illustrated in Web Appendix F. Since our proposed ptFCs do not involve latency, the variation in latency does not influence ptFCs in Definition \ref{def: group-level task-evoked FC and EC}. \noindent\textbf{Variation in HRFs:} HRFs can vary heavily across brain nodes (\cite{miezin2000characterizing}). Since task-evoked FC is not expected to depend on HRFs, the variation of HRFs across brain regions should not influence task-evoked FC. However, (\ref{eq: Pearson correlation with the convolution model}) can be small if $h_k\ne h_l$, as illustrated in Web Figure 6 (c). Given that ptFCs do not involve HRFs, they are invariant to the variation in HRFs across brain nodes. Coherence analysis for FC is not influenced by any of the issues discussed above. In this approach, FC between BOLD signals $Y_k(\omega;t)$ and $Y_l(\omega;t)$ is measured by the \textit{coherence} evaluating the extent of the \textit{linear time-invariant relationship} between these two signals via Fourier frequencies. However, there is no guarantee that two BOLD signals are related in a linear time-invariant fashion if the corresponding two nodes are functionally connected. Last but not least, the Pearson correlation and coherence analysis approaches are designed to measure FC evoked by all stimuli, both the tasks of interest and nuisance stimuli. Hence, they are not interpretable from the task-evoked FC viewpoint. In contrast, beta-series regression and ptFCs are designed to measure the FC evoked by an experiment's specific task of interest. Additionally, our simulation studies in Section \ref{section: simulations} show that beta-series regression performs worse than our proposed ptFC approach in many cases.
\section{ptFC Estimation (ptFCE) Algorithm}\label{section: Estimation} To estimate $\vert corr(\beta_k, \beta_l)\vert$, one may want to estimate $\{\beta_k(\omega), \beta_l(\omega)\}_{\omega\in\Omega}$ in (\ref{eq: BOLD time-course model}) using linear models. However, only the task-evoked terms $P_k(\omega;t)$ are of a linear form. Linear models would need further linearity assumptions on the reference terms $R_k(\omega;t)$, which are not made herein. In this section, we propose a ptFC estimation (ptFCE) algorithm based on the Fourier transform and the AMUSE algorithm (\cite{tong1991indeterminacy}). Since BOLD signals are observed at discrete and finite time points, we assume $\mathcal{T}=\{\tau\Delta\}_{\tau=0}^T$ and $t=\tau\Delta$ for $\tau\in\{0,1,\cdots,T\}$. First, we provide notation for the Fourier transform. For any function $f:\mathcal{T} \rightarrow\mathbb R$, we extend $f$ as follows to be a periodic function on $\{\tau'\Delta\}_{\tau'\in\mathbb{Z}}$. \begin{align}\label{eq: periodic extension} f(\tau'\Delta)=f(\tau\Delta),\ \ \ \tau'\equiv\tau\mbox{ (mod $T+1$)}\ \ \mbox{for }\tau=0,1,\cdots,T\mbox{ and all }\tau'\in\mathbb{Z}. \end{align} Hereafter, all functions on $\mathcal{T}$ are implicitly extended using (\ref{eq: periodic extension}) to be periodic functions on $\{\tau'\Delta \}_{\tau' \in \mathbb{Z}}$. The convolution is defined as $\left(N*h_k\right)\left(\tau\Delta\right):=\frac{1}{T+1}\sum_{\tau'=0}^T N(\tau'\Delta)h_k\left((\tau-\tau')\Delta\right)$, for $\tau=0,1,\cdots,T$. We define the Fourier transform of $f$ as $\widehat{f}(\xi):=\frac{1}{T+1}\sum_{\tau=0}^{T} f(\tau\Delta) e^{-2\pi i \xi (\tau\Delta)}$ for $\xi\in\mathbb R$; $\widehat{f}(\xi)$ is a periodic function with period $1/\Delta$. Throughout this paper, $\widehat{(\cdot)}$ denotes the Fourier transform. Additionally, we assume $\mathbb{E}\beta_k=\mathbb{E}R_k(t)=0$, for all $k$ and $t$, motivated by the centralization $Y_k(\omega;t)-\mathbb{E}Y_k(t)=\left\{\beta_k(\omega)-\mathbb{E}\beta_k\right\}(N*h_k)(t-t_{0,k}) + \left\{R_k(\omega;t)-\mathbb{E}R_k(t)\right\}$. Using $\left\{\beta_k(\omega)-\mathbb{E}\beta_k\right\}$ and $\left\{R_k(\omega;t)-\mathbb{E}R_k(t)\right\}$ as $\beta_k(\omega)$ and $R_k(\omega;t)$, respectively, we model the demeaned signals $Y_k(\omega;t)-\mathbb{E}Y_k(t)$. Because of the invariance $corr(\beta_k, \beta_l)=corr((\beta_k-\mathbb{E}\beta_k), (\beta_l-\mathbb{E}\beta_l))$, the assumption $\mathbb{E}\beta_k=0$ does not prevent the detection of the ptFC $\vert corr(\beta_k, \beta_l) \vert$. Our proposed ptFC depends on neither the latency $t_{0,k}$ nor the reference terms $R_k(\omega;t)$. We note that, applying the periodic extension (\ref{eq: periodic extension}) to $(N*h_k)$, one may verify that the two stochastic processes $(N*h_k)(t-t_{0,k}-U(\omega))$ and $(N*h_k)(t-U(\omega))$, where the random variable $U:\Omega\rightarrow\mathcal{T}$ is uniformly distributed on $\mathcal{T}$, are identically distributed. Since the distribution of $(N*h_k)(t-U(\omega))$ does not involve $t_{0,k}$, neither does the distribution of $(N*h_k)(t-t_{0,k}-U(\omega))$. Therefore, we investigate the time-shifted signals $Y_k\left(\omega;t-U(\omega)\right)=\beta_k(\omega) (N*h_k)\left(t-t_{0,k}-U(\omega)\right)+R_k\left(\omega;t-U(\omega)\right)$, for $k=1,\cdots,K$. Based on the discussion above, the distributions of $Y_k\left(\omega;t-U(\omega)\right)$ do not depend on $t_{0,k}$.
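For concreteness, the Fourier transform above can be realized in \texttt{R} as follows; this is a sketch of the notation only (the function \texttt{conv} from the earlier sketch implements the convolution, with its modular indexing encoding the periodic extension (\ref{eq: periodic extension})), and the example call on \texttt{P\_k} reuses the simulated signal from Section \ref{section: A New Definition of FC}.
\begin{verbatim}
# Fourier transform on the grid t = tau * Delta, tau = 0, ..., T:
# f_hat(xi) = (1/(T+1)) * sum_tau f(tau*Delta) * exp(-2 pi i xi tau Delta)
ft <- function(f, xi, Delta = 0.72) {
  tgrid <- (seq_along(f) - 1) * Delta
  sapply(xi, function(x) mean(f * exp(-2i * pi * x * tgrid)))
}
# f_hat is periodic in xi with period 1/Delta, so xi in [0, 1/Delta) suffices.
ft(P_k, xi = c(0.05, 0.10))   # e.g., transform of the simulated P_k above
\end{verbatim}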
Then, we consider the autocovariance $\mathbb{E}\left\{ Y_k(t-U) Y_l(t+s-U) \right\}$ when estimating $\vert corr(\beta_k,\beta_l)\vert$, where $s=\underline{s}\Delta$ with $\underline{s}\in\mathbb{Z}$. Additionally, $\vert corr(\beta_k,\beta_l)\vert$ depends only on the task-evoked terms $\{Y_{k}(\omega;t-U(\omega))-R_k(\omega;t-U(\omega))\}$ (see (\ref{eq: BOLD time-course model})). Hence, we investigate the autocovariance difference $\mathbb{E}\left\{ Y_k(t-U) Y_l(t+s-U) \right\} -\mathbb{E}\left\{R_k(t-U)R_l(t+s-U)\right\}$. If $U(\omega)$, $\{\beta_k(\omega)\}_{k=1}^K$, and $\{R_k(\omega;t)\vert t\in\mathcal{T}\}_{k=1}^K$ are independent, Web Theorem 1 implies that this difference depends only on $s$, rather than $t$. Therefore, we denote it as follows. \begin{align}\label{eq: expected difference} \mathcal{A}_{kl}(s) = \mathbb{E}\left\{ Y_k(t-U) Y_l(t+s-U) \right\} -\mathbb{E}\left\{ R_k(t-U)R_l(t+s-U)\right\}, \ \ k,l=1,\cdots,K. \end{align} We propose $\mathcal{C}_{kl}(\xi):= \vert \widehat{\mathcal{A}_{kl}}(\xi)\vert \Big/ \sqrt{ \vert \widehat{\mathcal{A}_{kk}}(\xi) \widehat{\mathcal{A}_{ll}}(\xi) \vert }$, for $\xi\in\mathbb{R}$, by incorporating a normalization multiplier in (\ref{eq: expected difference}). Web Theorem 1 implies the \textit{substitute equation} $\mathcal{C}_{kl}(\xi)=\left\vert corr(\beta_k, \beta_l)\right\vert$, for $\xi\in\mathbb{R}$, i.e., $\mathcal{C}_{kl}(\xi)$ as a new estimand is a substitute for $\vert corr(\beta_k, \beta_l)\vert$. The substitute equation motivates the ptFCE algorithm. Instead of estimating $\vert corr(\beta_k, \beta_l)\vert$ directly, we propose estimating $\mathcal{C}_{kl}(\xi)$. The definition of $\vert corr(\beta_k, \beta_l)\vert$ depends on the explicit form $\beta_k(\omega)\times (N*h_k)(t-t_{0,k})$, which requires modeling the HRFs $h_k(t)$ and latencies $t_{0,k}$. The substitute $\mathcal{C}_{kl}(\xi)$ circumvents modeling $h_k(t)$ and $t_{0,k}$. The main steps of our proposed ptFCE algorithm estimate the factors $\mathcal{A}_{kl}(s)$ of $\mathcal{C}_{kl}(\xi)$, for $k,l\in\{1,\cdots,K\}$. The BOLD signals $Y_k(\omega;t)$ are observed in experiments, and $U(\omega)$ is an auxiliary variable artificially generated in our estimation procedure (step 1 of Algorithm \ref{algorithm: ptFCE algorithm}). Hence, the first term of (\ref{eq: expected difference}) can be estimated using the method of moments. However, the reference signals $R_k(\omega;t)$ implicitly contained in $Y_k(\omega;t)=P_k(\omega;t)+R_k(\omega;t)$ are not observable. Section \ref{section: The Estimation of Reference Signals} provides the estimation of $R_k(\omega;t)$. \subsection{The Estimation of Reference Signals}\label{section: The Estimation of Reference Signals} In this subsection, we derive an approximation $\tilde{R}_k(\omega;t)\approx R_k(\omega;t)$ from the observed $Y_k(\omega;t)$. We start by introducing the following concept: a stochastic process $\pmb{G}(\omega;t)=(G_1(\omega;t), \cdots, G_K(\omega;t))^T$ is called \textit{weakly stationary with mean zero} (WSMZ) if $\mathbb{E}G_k(t)=0$, for all $t\in\mathcal{T}$, and $\mathbb{E}\{G_k(t)G_l(t+s)\}$ depends only on $s$, rather than $t$, for all $k,l\in\{1,\cdots,K\}$. First, we note that $C_k:=\mathbb{E}\{(N*h_k)(t-t_{0,k}-U)\}$ is a constant depending on neither $t$ nor $t_{0,k}$. Define stochastic processes $J_k(\omega;t)=\beta_k(\omega) \{(N*h_k)(t-t_{0,k}-U(\omega))-C_k\}$, for $k=1,\cdots,K$.
For each fixed $k$, one can verify that the scalar-valued stochastic process $J_k(\omega;t)$ is WSMZ conditionally on $\beta_k$; hence $\mathbb{E}\left\{J_k(t) J_k(t+s)\big\vert\beta_k\right\}$ depends only on $s$. The following theorem gives the foundation for the estimation of the reference terms $R_k(\omega;t)$. \begin{theorem}\label{thm: 1st thm for AMUSE-ptFC} For each $k\in\{1,\cdots,K\}$, suppose that BOLD signals $Y_k(\omega;t)$ are of form (\ref{eq: BOLD time-course model}), the scalar-valued stochastic process $\{R_k(\omega;t)\}_{t\in\mathcal{T}}$ is WSMZ, and the random variable $U:\Omega\rightarrow\mathcal{T}$ is uniformly distributed. Additionally, for each $k$, $\beta_k(\omega)$, $\{R_k(\omega;t)\}_{t\in\mathcal{T}}$, and $U(\omega)$ are independent, and there exist $t^*\in\mathcal{T}$ and $\beta^*\in\mathbb{R}$ so that $ \frac{\mathbb{E}\{J_k(t) J_k(t-t^*)\vert\beta_k=\beta^*\}}{\mathbb{E}\left\{J_k(t)^2\vert\beta_k=\beta^*\right\}} \ne \frac{\mathbb{E}\{R_k(t-U) R_k(t-t^*-U)\}}{\mathbb{E}\left\{R_k(t-U)^2\right\}}$. If a nonsingular matrix $\pmb{A}=(a_{ij})_{1\le i, j \le 2}$ and a stochastic process $\{\pmb{s}(\omega;t)=(s_1(\omega;t), s_2(\omega;t))^T\}_{t\in\mathcal{T}}$ satisfy (i) given $\beta_k(\omega)=\beta^*$, $\pmb{s}(\omega;t)$ is WSMZ, (ii) $s_1(\omega;t_1)$ and $s_2(\omega;t_2)$ are uncorrelated for $t_1,t_2\in\mathcal{T}$, (iii) $\frac{\mathbb{E}\{s_1(t)s_1(t-t^*)\vert\beta_k=\beta^*\}}{\mathbb{E}\{s_1(t)^2\vert\beta_k=\beta^*\}} \ne \frac{\mathbb{E}\{s_2(t)s_2(t-t^*)\vert\beta_k=\beta^*\}}{\mathbb{E}\{s_2(t)^2\vert\beta_k=\beta^*\}}$, and (iv) $\pmb{A}(s_1(\omega;t), s_2(\omega;t))^T=(Y_k(\omega;t-U(\omega)) - \beta^* C_k, (N*h_k)(t-t_{0,k}-U(\omega))-C_k)^T$ for $t\in\mathcal{T}$, then there exist a non-singular diagonal matrix $\pmb{\Lambda}$ and a permutation matrix $\pmb{P}$, such that \begin{align}\label{AMUSE thm formula 2} \begin{pmatrix} 1 & 1 \\ \frac{1}{\beta^*} & 0 \end{pmatrix} = \pmb{A} \pmb{\Lambda}^{-1} \pmb{P}^{-1} \mbox{ and } \begin{pmatrix} J_k(\omega;t) \\ R_k(\omega;t-U(\omega)) \end{pmatrix} = \pmb{P}\pmb{\Lambda}\pmb{s}(\omega;t), \mbox{ for all }t\in\mathcal{T}. \end{align} \end{theorem} \noindent Theorem \ref{thm: 1st thm for AMUSE-ptFC} herein is a straightforward consequence of Theorem 2 in \cite{tong1991indeterminacy}. For each fixed $k$ and $\omega$, setting $\beta^*=\beta_k(\omega)$, the pair $(\pmb{A}, \pmb{s}(\omega;t))$ in Theorem \ref{thm: 1st thm for AMUSE-ptFC} is derived by the AMUSE algorithm with the 2D signal $\left\{\left( Y_k(\omega;t-U(\omega)) - \beta^* C_k, (N*h_k)(t-t_{0,k}-U(\omega))-C_k\right)^T \right\}_{t\in\mathcal{T}}$ as its input. However, the coefficient $\beta^*=\beta_k(\omega)$ is unknown. Since $\mathbb{E}\beta_k=0$, we ignore $\beta^* C_k$ and apply the following vector-valued signal as the input of the AMUSE algorithm. \begin{align}\label{eq: AMUSE input} \left\{\left( Y_k(\omega;t-U(\omega)), (N*h_k)(t-t_{0,k}-U(\omega))-C_k\right)^T \big\vert t\in\mathcal{T}\right\}. \end{align} The choice of the latency $t_{0,k}$ and the HRF $h_k$ in (\ref{eq: AMUSE input}) is of importance.
In our implementation of (\ref{eq: AMUSE input}) in the HCP analysis, we choose $t_{0,k}=0$, for all $k$, based on the following considerations: (i) latencies are usually much smaller than the corresponding experimental time unit $\Delta$ (e.g., see \cite{zhang2013semi}, p.~138); (ii) the latency $t_{0,k}$ can be incorporated into the corresponding HRF $h_k$ as its parameter, i.e., we view $h_k(\cdot-t_{0,k})$ as an HRF, so the choice of $t_{0,k}$ is equivalent to the choice of the corresponding HRF; (iii) the HCP data are from a block design, and if the latencies $t_{0,k}$ are much shorter than the length of the task blocks, the $t_{0,k}$ are negligible. Additionally, in our analysis of the HCP data, we choose $h_k$ in (\ref{eq: AMUSE input}) to be the canonical HRF (the \texttt{R} function \texttt{canonicalHRF} with default parameters in package \texttt{neuRosim}), for all $k$. In block-design experiments, the influence of biased HRFs on the ptFCE algorithm is moderate, as we illustrate using simulations in Section \ref{section: simulations}; this holds even if the support of the biased HRFs is longer than the task blocks. Finally, $t_{0,k}$ and $h_k$ can be estimated using data-adaptive approaches, e.g., the spline-based method by \cite{zhang2013semi}. In general, given an algorithm for estimating $t_{0,k}$ and $h_k$, one may simply use the estimated $t_{0,k}^{est}$ and $h_k^{est}$ as input in (\ref{eq: AMUSE input}). Since the distribution of $U(\omega)$ is known, the constants $C_k$ can be estimated. To approximately recover $R_k(\omega;t)$ from $(\pmb{A}, \pmb{s}(\omega;t))$, we show the following result. \begin{theorem}\label{thm: 2nd thm for AMUSE-ptFC} For each $k\in\{1,\cdots,K\}$ and $\omega\in\Omega$, suppose the pair $(\pmb{A}, \pmb{s}(\omega;t))$ satisfies (\ref{AMUSE thm formula 2}). Then there exists $i'\in\{1,2\}$ such that $a_{1i'} s_{i'}(\omega;t)=J_k(\omega;t)$. \end{theorem} \noindent The proof of Theorem \ref{thm: 2nd thm for AMUSE-ptFC} is in Web Appendix B. The index $i'$ in Theorem \ref{thm: 2nd thm for AMUSE-ptFC} can be computed as $i'=\argmax_{i\in\{1,2\}}\left\{corr\left(s_i(\omega;t),\, (N*h_k)(t-t_{0,k}-U(\omega))-C_k\right) \mbox{ across }t\right\}$. The AMUSE algorithm and Theorem \ref{thm: 2nd thm for AMUSE-ptFC} recover $J_k(\omega;t)=\beta_k(\omega)(N*h_k)(t-t_{0,k}-U(\omega))-\beta_k(\omega)C_k$. Again, we ignore $\beta_k(\omega)C_k$ as $\mathbb{E}\beta_k=0$, that is, $J_k(\omega;t)\approx \beta_k(\omega)(N*h_k)(t-t_{0,k}-U(\omega))$. Then we estimate $R_k(\omega;t-U(\omega))$ by $\tilde{R}_k(\omega;t-U(\omega)):= Y_k(\omega;t-U(\omega))-J_k(\omega;t)\approx R_k(\omega;t-U(\omega))$, for all $k$ and $\omega$. In applications, the $U(\omega)$ are artificially generated and known. Then we have the approximation $\tilde{R}_k(\omega;t)\approx R_k(\omega;t)$. We summarize the derivation of $\tilde{R}_k(\omega;t)$ from $Y_k(\omega;t)$ as follows: Step 1, each observed signal $Y_k(\omega;t)$ provides the input (\ref{eq: AMUSE input}) for the AMUSE algorithm; Step 2, the AMUSE algorithm computes $(\pmb{A}, \pmb{s}(\omega;t))$; Step 3, Theorem \ref{thm: 2nd thm for AMUSE-ptFC} indicates $J_k(\omega;t)=a_{1i'}s_{i'}(\omega;t)$; Step 4, compute $\tilde{R}_k(\omega;t)$ using $J_k(\omega;t)$ and the artificially generated $U(\omega)$.
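Steps 1--4 can be sketched in \texttt{R} as follows for one subject and one node $k$. Rather than relying on a particular AMUSE implementation, the sketch spells out a minimal two-dimensional AMUSE (whitening followed by the eigendecomposition of a symmetrized lagged autocovariance), under stated assumptions: \texttt{Yk\_obs} is a hypothetical observed BOLD series, the lag is set to 1, and \texttt{N}, \texttt{h}, \texttt{conv}, and \texttt{T\_} come from the earlier sketches.
\begin{verbatim}
# Steps 1-4 (sketch) for one subject and one node k; Yk_obs is a hypothetical
# length-(T+1) observed BOLD series.
shift <- function(x, u) x[(((0:T_) - u) %% (T_ + 1)) + 1]  # periodic time shift
U_tau    <- sample(0:T_, 1)                 # Step 1: U(w), in grid units
Yk_shift <- shift(Yk_obs, U_tau)            # Y_k(w; t - U(w))
G        <- shift(conv(N, h), U_tau)        # (N*h_k)(t - U(w)), t0k = 0
G        <- G - mean(G)                     # subtract the estimate of C_k
amuse2 <- function(X, lag = 1) {            # minimal 2D AMUSE
  X  <- X - rowMeans(X)
  E0 <- eigen(X %*% t(X) / ncol(X), symmetric = TRUE)       # whitening step
  W  <- E0$vectors %*% diag(1 / sqrt(E0$values)) %*% t(E0$vectors)
  Z  <- W %*% X
  n  <- ncol(Z)
  C1 <- Z[, 1:(n - lag)] %*% t(Z[, (1 + lag):n]) / (n - lag)
  V  <- eigen((C1 + t(C1)) / 2, symmetric = TRUE)$vectors   # lagged autocov.
  list(S = t(V) %*% Z, A = solve(W) %*% V)  # sources s(w;t), mixing matrix A
}
fit     <- amuse2(rbind(Yk_shift, G))       # Step 2
i_prime <- which.max(cor(t(fit$S), G))      # Step 3: index i' as in the text
Jk      <- fit$A[1, i_prime] * fit$S[i_prime, ]
Rk_tilde_shift <- Yk_shift - Jk             # Step 4: reference signal estimate
\end{verbatim}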
\subsection{The Estimator for $\mathcal{C}_{kl}(\xi)$ and The ptFCE Algorithm}\label{section: estimator and the ptFCE algorithm} With the observed signals $\{Y_{k'}(\omega;t)\vert k'=k,l\}_{\omega=1}^n$ and the signals $\{\tilde{R}_{k'}(\omega;t)\vert k'=k,l\}_{\omega=1}^n$ derived from them, we propose the following estimator for $\mathcal{C}_{kl}(\xi)=\vert corr(\beta_k,\beta_l)\vert$. \begin{align}\label{eq: main formula for data} \mathcal{C}_{kl}^{est,n}(\xi):= \left\vert \widehat{\mathcal{A}_{kl}^{est,n}}(\xi)\right\vert \Bigg/ \sqrt{ \left\vert \widehat{\mathcal{A}_{kk}^{est,n}}(\xi) \widehat{\mathcal{A}_{ll}^{est,n}}(\xi) \right\vert } ,\ \ \ \mbox{for all } \xi\in\mathbb{R}, \mbox{ where } \end{align} $\mathcal{A}_{kl}^{est,n}(s):=\underline{Y}(s)-\underline{\tilde{R}}(s)$, $\underline{Y}(s):=\frac{1}{T+1}\sum_{t\in\{\tau\Delta\}_{\tau=0}^T}\left[ \frac{1}{n}\sum_{\omega=1}^n \left\{Y_k\left(\omega;t-U(\omega)\right)Y_l(\omega;t+s-U(\omega))\right\} \right]$, and $\underline{\tilde{R}}(s):= \frac{1}{T+1}\sum_{t\in\{\tau\Delta\}_{\tau=0}^T}\left[ \frac{1}{n}\sum_{\omega=1}^n \{\tilde{R}_k(\omega;t-U(\omega)) \tilde{R}_l(\omega;t+s-U(\omega))\} \right]$. The periodicity property of the Fourier transform implies that $\mathcal{C}_{kl}^{est,n}(\xi)$ is a periodic function of $\xi$ with period $1/\Delta$. Web Lemma 1 implies that (\ref{eq: main formula for data}) is well-defined for all $\xi\in[0,1/\Delta]$ except for at most finitely many points $\xi$ such that $\widehat{\mathcal{A}_{kk}^{est,n}}(\xi) \widehat{\mathcal{A}_{ll}^{est,n}}(\xi)=0$. $\mathcal{A}_{kl}^{est,n}(s)$ has a method of moments form, except that $R_k$ in (\ref{eq: expected difference}) is replaced with the estimated $\tilde{R}_k$. This has an effect on the consistency of $\mathcal{C}_{kl}^{est,n}(\xi)$ as an estimator for $\mathcal{C}_{kl}(\xi)$ and results in the asymptotic bias $\vert\lim_{n\rightarrow\infty}\mathcal{C}_{kl}^{est,n}(\xi)-\mathcal{C}_{kl}(\xi)\vert$. The bias can be small for properly chosen $\xi$. Specifically, we derive a formula for the bias as a function of $\xi$ and choose $\xi$ to minimize the bias. Since $\tilde{R}_k(\omega;t)$ approximates $R_k(\omega;t)$, we model the difference $W_k(\omega;t):=R_k(\omega;t)-\tilde{R}_k(\omega;t)$ as random noise and assume that $\pmb{W}(\omega;t):=(W_1(\omega;t), \cdots,W_K(\omega;t))^T$ satisfies (i) $\pmb{W}(\omega;t_1)$ is independent of $\pmb{W}(\omega;t_2)$ if $t_1\ne t_2$, (ii) $\pmb{W}(\omega;t)$ is WSMZ, and (iii) $\Sigma_{kl}:=\mathbb{E}[W_k(t)W_l(t)]$ for $t\in\mathcal{T}$. Because of $\Sigma_{kl}\le\sqrt{\Sigma_{kk}\Sigma_{ll}}$ and $\mathbb{E}\beta_k=\mathbb{E}\beta_l=0$, under the assumptions on $\pmb{W}(\omega;t)$, Web Theorem 2 implies $\lim_{n\rightarrow\infty}\mathcal{C}^{est,n}_{kl}(\xi)= \frac{ \left\vert \mathbb{E}(\beta_k\beta_l)\frac{\overline{\widehat{h}_k(\xi)}\widehat{h}_l(\xi)}{\left\vert \widehat{h}_k(\xi) \widehat{h}_l(\xi) \right\vert} e^{2\pi i \xi (t_{0,k}-t_{0,l})} \right\vert }{ \sqrt{ \mathbb{E}(\beta_k^2) \mathbb{E}(\beta_l^2) } } = \frac{\vert\mathbb{E}(\beta_k \beta_l)\vert}{ \sqrt{ \mathbb{E}(\beta_k^2) \mathbb{E}(\beta_l^2) } } = \vert corr(\beta_k, \beta_l)\vert$ \textit{almost surely} at frequencies $\xi$ such that \begin{align}\label{eq: zero contamination} \Sigma_{kk}\Big/[(T+1)\vert\widehat{N}(\xi)\widehat{h}_k(\xi)\vert^2]\approx 0,\ \ \ \Sigma_{ll}\Big/[(T+1)\vert\widehat{N}(\xi)\widehat{h}_l(\xi)\vert^2]\approx 0.
\end{align} Frequencies $\xi$ at which $\vert\widehat{h}_k(\xi)\vert$ and $\vert\widehat{h}_l(\xi)\vert$ are sufficiently large for (\ref{eq: zero contamination}) to hold reduce the estimation bias. From the signal processing perspective, large $\vert \widehat{h}_k(\xi)\vert$ and $\vert \widehat{h}_l(\xi)\vert$ filter out the random noise in $W_k(\omega;t)$, and HRFs act as band-pass filters (typically $0$--$0.15$ Hz, \cite{aguirre1997empirical}). Hence, we are interested in $\xi\in(0,0.15)$. Therefore, motivated by (\ref{eq: zero contamination}), $\vert corr(\beta_k,\beta_l)\vert$ is approximated by the median of $\mathcal{C}_{kl}^{est,n}(\xi)$ across $\xi\in(0,0.15)$, since the median is more stable than the mean (see Web Figure 7). One typical curve of $\{\mathcal{C}_{kl}^{est,n}(\xi)\vert\xi\in\mathbb{R}\}$ is in Web Figure 7. The estimator $\mathcal{C}_{kl}^{est,n}(\xi)$ in (\ref{eq: main formula for data}) and the choice of $\xi$ complete the estimation of $\mathcal{C}_{kl}(\xi)=\vert corr(\beta_k, \beta_l)\vert$. The estimation procedure is summarized in the ptFCE algorithm (Algorithm \ref{algorithm: ptFCE algorithm}). \begin{algorithm}[h] \caption{ptFCE Algorithm}\label{algorithm: ptFCE algorithm} \begin{algorithmic}[1] \INPUT \noindent(i) Task-fMRI BOLD signals $\{(Y_k(\omega;\tau\Delta), Y_l(\omega;\tau\Delta))\}_{\tau=0}^T$ for sampled subjects $\omega\in\{1,\cdots,n\}$; (ii) repetition time $\Delta$; (iii) the stimulus signal $\{N(\tau\Delta)\}_{\tau=0}^T$; (iv) latencies $t_{0,k'}=\tau_{0,k'}\Delta$ with some $\tau_{0,k'}\in\mathbb{Z}$ for $k'\in\{k,l\}$, with default values $t_{0,k'}=0$ for $k'\in\{k,l\}$; (v) HRFs $h_{k'}$ for $k'\in\{k,l\}$, with default $h_k=h_l=$ the canonical HRF given by the \texttt{R} function \texttt{canonicalHRF} with its default parameters. \OUTPUT An estimate of the ptFC $\vert corr(\beta_k, \beta_l)\vert$ between the $k^{th}$ and $l^{th}$ nodes. \STATE Generate i.i.d. $\{U(\omega)\}_{\omega=1}^n$ from the uniform distribution on $\mathcal{T}=\{\tau\Delta\}_{\tau=0}^T$. \STATE For each $\omega\in\{1,\cdots,n\}$ and $k'\in\{k,l\}$, apply the AMUSE algorithm to input (\ref{eq: AMUSE input}) and obtain the estimated reference signals $\{\tilde{R}_{k'}(\omega;\tau\Delta)\}_{\tau=0}^T$. \STATE Compute the estimator $\mathcal{C}_{kl}^{est,n}(\xi)$ in (\ref{eq: main formula for data}) and the median of $\{\mathcal{C}_{kl}^{est,n}(\xi)\vert\xi\in(0, 0.15)\}$. The median is the output of this algorithm. \end{algorithmic} \end{algorithm} \section{Simulations}\label{section: simulations} Direct comparisons of FC estimation methods are difficult because the estimates are defined on different scales (\cite{cisler2014comparison}). In the context of FC analysis, we may compare the methods by measuring their accuracy in correctly identifying the relative values of FC among pairs of nodes, i.e., the performance of each algorithm in detecting underlying ``weak vs. strong" patterns instead of the estimated values themselves. Specifically, suppose we investigate two pairs of nodes. The connectivity between one pair is weak while the connectivity between the other pair is strong; we are interested in whether the estimated value for the weak connectivity is smaller than that for the strong connectivity. Therefore, in this section, we only compare ptFCE with other approaches in detecting underlying ``weak vs. strong" patterns. Simulation studies showing the performance of ptFCE in terms of estimation bias and variance are presented in Web Appendix D.
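Before turning to the data-generating mechanisms, we illustrate step 3 of Algorithm \ref{algorithm: ptFCE algorithm} with a condensed \texttt{R} sketch of the estimator (\ref{eq: main formula for data}). The matrices \texttt{Yk}, \texttt{Yl}, \texttt{Rk}, and \texttt{Rl} (each $n\times(T+1)$, holding $Y_{k'}(\omega;t-U(\omega))$ and $\tilde{R}_{k'}(\omega;t-U(\omega))$ row-wise per subject) and the frequency grid are assumptions of this sketch; \texttt{ft} is the Fourier transform sketched in Section \ref{section: Estimation}.
\begin{verbatim}
# Step 3 of Algorithm 1 (sketch): compute C_kl^{est,n}(xi) and its median.
acov_diff <- function(Yk, Yl, Rk, Rl) {      # A_kl^{est,n}(s), s = 0, ..., T
  n <- ncol(Yk)
  sapply(0:(n - 1), function(s) {
    idx <- (((0:(n - 1)) + s) %% n) + 1      # periodic shift t -> t + s
    mean(Yk * Yl[, idx]) - mean(Rk * Rl[, idx])
  })
}
C_est <- function(Yk, Yl, Rk, Rl, xi) {
  Akl <- acov_diff(Yk, Yl, Rk, Rl)
  Akk <- acov_diff(Yk, Yk, Rk, Rk)
  All <- acov_diff(Yl, Yl, Rl, Rl)
  abs(ft(Akl, xi)) / sqrt(abs(ft(Akk, xi) * ft(All, xi)))
}
xi_grid  <- seq(0.005, 0.145, by = 0.005)    # frequencies in (0, 0.15) Hz
ptfc_hat <- median(C_est(Yk, Yl, Rk, Rl, xi_grid))   # estimated ptFC
\end{verbatim}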
For the comparative analysis, we use synthetic data generated by two different mechanisms designed to mimic the properties of the HCP motor data. We design the simulations by using non-canonical HRFs in generating synthetic data to illustrate the robustness of our algorithm, which uses the canonical HRF as input. We show that the algorithm can still detect underlying task-evoked FC patterns. We compare the ptFCE algorithm with the following methods: beta-series regression, \textit{naive Pearson correlation}, \textit{task Pearson correlation}, and coherence analysis, using two data-generating mechanisms: the first is based on model (\ref{eq: BOLD time-course model}) underlying the definition of ptFCs, while the second is motivated by Pearson correlations. We use the second data-generating mechanism to illustrate the comparable performance of our proposed approach with others even when the data-generating mechanism is not compatible with the ptFC framework. \noindent\textbf{Mechanism 1}: \textbf{Step 1}, generate coefficients $\left(\beta_{1}(\omega), \beta_{2}(\omega), \beta_{3}(\omega)\right)^T\sim N_3\left(\pmb{0}, (\sigma_{ij})_{1\le i,j \le 3}\right)$ for $\omega=1,\cdots,n$. Assume that $\rho_{ij}:=\vert\sigma_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}}\vert$, for $(i,j)\in\{(1,2), (2,3)\}$, are the true underlying ptFCs between nodes $1$ and $2$ and between nodes $2$ and $3$, respectively. \textbf{Step 2}, generate reference signals $\{R_{k'}(\omega;\tau\Delta)=Q_{k'}(\omega;\tau\Delta)+\epsilon_{k'}(\omega;\tau\Delta)\}_{\tau=0}^T$, for all $\omega$ and $k'\in\{1,2,3\}$, where the $3n(T+1)$ noise values $\{\epsilon_{k'}(\omega;\tau\Delta) \vert k'=1,2,3; \omega=1,\cdots,n; \tau=0,\cdots,T\}\sim_{iid} N(0,V)$. \textbf{Step 3}, compute synthetic BOLD signals $\{Y_{k'}(\omega;\tau\Delta)\vert k'=1,2,3\}_{\tau=0}^T$, for $\omega=1,\cdots,n$, by $Y_{k'}(\omega; \tau\Delta)=9000+\beta_{k'}(\omega)\times \left(N * h_{k'}\right)(\tau\Delta) + R_{k'}(\omega; \tau\Delta)$, and set $\pmb{Y}(\omega;t):=\{Y_{k'}(\omega;\tau\Delta)\}_{k'=1}^3$. \noindent\textbf{Mechanism 2}: \textbf{Step 1}, for each $\omega\in\{1, \cdots, n\}$, generate independent normal vectors $\{\pmb{\varepsilon}(\omega;\tau\Delta)=(\varepsilon_1(\omega;\tau\Delta), \varepsilon_2(\omega;\tau\Delta), \varepsilon_3(\omega;\tau\Delta))^T\}_{\tau=0}^T$, where $\pmb{\varepsilon}(\omega;\tau\Delta)\sim N_3(\pmb{0}, \pmb{\Sigma}_{task})$ if $N(\tau\Delta)=1$, and $\pmb{\varepsilon}(\omega;\tau\Delta)\sim N_3(\pmb{0}, \pmb{\Sigma}_{resting})$ if $N(\tau\Delta)=0$; the matrices $\pmb{\Sigma}_{task}$ and $\pmb{\Sigma}_{resting}$ share the same diagonals, the off-diagonal elements of $\pmb{\Sigma}_{resting}$ are $0$, and $\{\varrho_{ij}\}_{i,j=1}^3$ denote the correlations deduced from the covariance matrix $\pmb{\Sigma}_{task}$. \textbf{Step 2}, compute $Y_{k'}(\omega;\tau\Delta)=(N*h_{k'})(\tau\Delta)+\sum_{j=1}^4 (N_{j}*h_{k',j})(\tau\Delta)+\varepsilon_{k'}(\omega;\tau\Delta)$, for $k'\in\{1,2,3\}$, where $\{N_j(t)\}_{j=1}^4$ are nuisance task stimuli and $\{h_{k',j}(t)\}_{j=1}^4$ are the corresponding HRFs. \textbf{Step 3}, repeat Steps 1 and 2 for all $\omega=1,\cdots,n$. Details of Mechanisms 1 and 2 are in Web Appendix E. In Mechanism 2, the zero off-diagonal elements of $\pmb{\Sigma}_{resting}$ indicate no task-evoked connectivity when the task of interest is absent. When $N(\tau\Delta)=1$, the correlation $\varrho_{ij}$ in $\pmb{\Sigma}_{task}$ measures the connectivity between nodes $i$ and $j$ evoked by the task.
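A minimal \texttt{R} sketch of Mechanism 1 is given below. The variance parameters, the noise level \texttt{V}, and the use of a single shared HRF are illustrative simplifications of this sketch (Web Appendix E gives the exact settings, including the non-canonical, node-specific HRFs); \texttt{N}, \texttt{h}, \texttt{conv}, and \texttt{T\_} come from the earlier sketches.
\begin{verbatim}
# Mechanism 1 (sketch): synthetic BOLD signals for n subjects and 3 nodes.
library(MASS)                                # provides mvrnorm()
n <- 308; rho12 <- 0.4; rho23 <- 0.6; V <- 1
Sigma <- matrix(c(1,     rho12, 0,
                  rho12, 1,     rho23,
                  0,     rho23, 1), 3, 3)    # Step 1: Cov(beta_1, beta_2, beta_3)
beta  <- mvrnorm(n, mu = rep(0, 3), Sigma = Sigma)
Nh    <- conv(N, h)                          # (N * h_k')(tau * Delta)
Y <- lapply(1:3, function(k) {
  R <- matrix(rnorm(n * (T_ + 1), sd = sqrt(V)), n, T_ + 1)   # Step 2
  9000 + outer(beta[, k], Nh) + R                             # Step 3
})
\end{verbatim}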
We are interested in estimating the connectivity between nodes $1,2$ and nodes $2,3$. With the synthetic signals $\{\pmb{Y}(\omega;t)\}_{\omega=1}^n$ generated by Mechanism 1 or 2, the existing methods are implemented as follows (a condensed sketch of the two Pearson estimators is given after this discussion). \noindent\textbf{Naive Pearson correlation:} For each subject $\omega$ and underlying $\rho_{ij}$ or $\varrho_{ij}$, compute the Pearson correlation between $\{Y_i(\omega;\tau\Delta)\}_{\tau=0}^T$ and $ \{Y_j(\omega;\tau\Delta)\}_{\tau=0}^T$ across all $\tau$ and denote the absolute value of this correlation by $\hat{\rho}^{naiveCorr}_{ij,\omega}$. Let $\hat{\rho}^{naiveCorr}_{ij, mean}$ and $\hat{\rho}^{naiveCorr}_{ij, median}$ denote the mean and median, respectively, of $\{\hat{\rho}^{naiveCorr}_{ij, \omega}\}_{\omega=1}^{n}$ across all $\omega$. \noindent\textbf{Task Pearson correlation:} For each $\omega$ and underlying $\rho_{ij}$ or $\varrho_{ij}$, compute the Pearson correlation between $\{Y_i(\omega;\tau\Delta)\vert N(\tau\Delta)=1\}$ and $ \{Y_j(\omega;\tau\Delta)\vert N(\tau\Delta)=1\}$ across $\tau$ such that $N(\tau\Delta)=1$ and denote its absolute value by $\hat{\rho}^{taskCorr}_{ij, \omega}$. Compute the mean and median of $\{\hat{\rho}^{taskCorr}_{ij, \omega}\}_{\omega=1}^{n}$ across $\omega$ and denote them by $\hat{\rho}^{taskCorr}_{ij, mean}$ and $\hat{\rho}^{taskCorr}_{ij, median}$, respectively. \noindent Details of implementing the beta-series regression and coherence analysis to obtain the estimates $(\hat{\rho}^{betaS}_{ij,mean}, \hat{\rho}^{betaS}_{ij,median})$ and $(\hat{\rho}^{Coh}_{ij, mean}, \hat{\rho}^{Coh}_{ij, median})$ are presented in Web Appendix G. Let $(\rho_{12}, \rho_{23})=(\varrho_{12}, \varrho_{23})=(0.4, 0.6)$, indicating weak ($0.4$) and strong ($0.6$) connectivity for the two node pairs. We use sample sizes $n=50$ and $308$ to illustrate the performance for different sample sizes, where 308 is the size of the HCP data set. PtFCs estimated by the ptFCE algorithm are denoted by $(\widehat{\rho}_{12}, \widehat{\rho}_{23})$. For each simulated data set, applying all FC estimation methods, we obtain the estimates $\{(\widehat{\rho}_{ij}, \hat{\rho}^{betaS}_{ij, mean}, \cdots, \hat{\rho}^{Coh}_{ij, median})\vert i<j\}_{i,j=1}^3$. For each method, we evaluate whether it can identify the ``weak vs. strong" pattern. For example, our proposed ptFCE algorithm is effective if $\widehat{\rho}_{12} < \widehat{\rho}_{23}$. We repeat this procedure 500 times. The rates of correct identification of the connectivity patterns across the 500 simulations are presented in Table \ref{table: identification rate}. When the synthetic BOLD signals are from the mechanism compatible with the definition of ptFCs (Mechanism 1), our ptFCE algorithm performs substantially better than the other methods. If the synthetic signals are from the mechanism motivated by Pearson correlations (Mechanism 2), our proposed algorithm still provides a good identification rate and is better than coherence analysis. Importantly, it is unlikely that the true data-generating process in task-fMRI follows Mechanism 2, which implies that the effect of the task disappears as soon as the task is absent. However, it is known that there is a delay in the brain response associated with the time needed by the brain vasculature to respond to the decrease in oxygen (\cite{lindquist2008statistical}). Even in this unrealistic scenario, our proposed method is comparable with the others.
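For reference, the naive and task Pearson estimators described above amount to the following \texttt{R} computations, with \texttt{Y} and \texttt{N} as generated in the Mechanism 1 sketch (\texttt{Y[[i]]} is the $n\times(T+1)$ matrix for node $i$):
\begin{verbatim}
# Naive and task Pearson correlation estimates for one node pair (i, j).
pearson_est <- function(Yi, Yj, N = NULL) {
  keep <- if (is.null(N)) rep(TRUE, ncol(Yi)) else N == 1   # task time points
  r <- sapply(1:nrow(Yi), function(w) abs(cor(Yi[w, keep], Yj[w, keep])))
  c(mean = mean(r), median = median(r))
}
pearson_est(Y[[1]], Y[[2]])      # naive Pearson, nodes 1 and 2
pearson_est(Y[[1]], Y[[2]], N)   # task Pearson, restricted to N = 1
\end{verbatim}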
Naive Pearson correlation and coherence analysis are designed to measure FC evoked by both the task of interest and nuisance tasks. In contrast, beta-series regression, task Pearson correlations, and ptFCE are developed for quantifying FC evoked by specific tasks of interest. Hence, only these three approaches are suitable for estimating task-evoked FC. Additionally, the task Pearson correlation approach assumes that the effect of the task of interest disappears immediately when the task is absent. Hence, because of the delay in brain response, task Pearson correlations may be biased. Lastly, Table \ref{table: identification rate} shows that ptFCE performs better than beta-series regression for data from Mechanism 1. PtFCE is computationally efficient and scalable. On a PC with a \texttt{2.4 GHz 8-Core Intel Core i9} processor, the approximate computational time of ptFCE is 3.5 seconds for 50 subjects and 11.5 seconds for 1000 subjects. \begin{table} \caption{Identification rates $\widehat{p}:=n'/500$ of different methods for the ``weak vs. strong" pattern, where $n'$ is the number of correct identifications among the 500 synthetic data sets. The correct identification rates in the table are presented in $95\%$-confidence interval form $\widehat{p}\pm z_{0.025}\cdot\sqrt{\widehat{p}(1-\widehat{p})/500}$, where $z_{0.025}$ is the $0.975$-quantile of the standard normal distribution $N(0,1)$.}\label{table: identification rate} \begin{tabular}{lllllllll} \hline & & Rates & & ($n=50$) & & Rates & & ($n=308$) \\ \cline{3-5} \cline{7-9} Methods & & Mech. 1 & & Mech. 2 & & Mech. 1 & & Mech. 2 \\ \hline ptFCE & & $84.0(\pm 3.2)\%$ & & $70.2(\pm 4.0)\%$ & & $99.8(\pm 0.4)\%$ & & $87.2(\pm 2.9)\%$ \\ naive Pearson (mean) & & $58.2(\pm 4.3)\%$ & & $93.8(\pm 2.1)\%$ & & $66.6(\pm 4.1)\%$ & & $100(\pm 0)\%$ \\ naive Pearson (median) & & $55.6(\pm 4.4)\%$ & & $88.0(\pm 2.8)\%$ & & $60.0(\pm 4.3)\%$ & & $100(\pm 0)\%$ \\ task Pearson (mean) & & $47.6(\pm 4.4)\%$ & & $100(\pm 0)\%$ & & $49.2(\pm 4.4)\%$ & & $100(\pm 0)\%$ \\ task Pearson (median) & & $47.8(\pm 4.4)\%$ & & $100(\pm 0)\%$ & & $51.4(\pm 4.4)\%$ & & $100(\pm 0)\%$ \\ beta-series (mean) & & $49.4(\pm 4.4)\%$ & & $96.6(\pm 1.6)\%$ & & $46.6(\pm 4.4)\%$ & & $100(\pm 0)\%$ \\ beta-series (median) & & $48.2(\pm 4.4)\%$ & & $97.8(\pm 1.3)\%$ & & $45.4(\pm 4.4)\%$ & & $100(\pm 0)\%$ \\ coherence (mean) & & $51.0(\pm 4.4)\%$ & & $55.8(\pm 4.4)\%$ & & $59.8(\pm 4.3)\%$ & & $72.0(\pm 3.9)\%$ \\ coherence (median) & & $51.6(\pm 4.4)\%$ & & $57.6(\pm 4.3)\%$ & & $55.6(\pm 4.4)\%$ & & $68.0(\pm 4.1)\%$ \\ \hline \end{tabular} \end{table} \section{Analysis of HCP Motor Data}\label{section: applications} In this section, we present the estimation of FC in a task-evoked functional MRI study using data from the HCP. For comparison, we apply the ptFCE algorithm and existing methods to measure FC in the database of 308 subjects from the HCP performing motor tasks. A detailed description of the HCP data is provided in Section \ref{section: Introduction}. We model squeezing right toes as the task of interest. Before the estimation step, we compute region-specific time courses using the AAL atlas (\cite{tzourio2002automated}), which consists of 120 brain regions. For each region, we extract the voxel-specific time series in that region and compute their spatial average at each time point. As a result, we obtain 120 time courses corresponding to the 120 regions of interest.
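The ROI-level time courses are obtained by simple spatial averaging; a generic \texttt{R} sketch follows, in which the voxel-by-time matrix \texttt{vox} and the vector of AAL region labels \texttt{lab} are hypothetical objects standing in for the preprocessed HCP data.
\begin{verbatim}
# ROI-level time courses by spatial averaging (generic sketch).
# vox: (number of voxels) x (T+1) matrix of voxel time series;
# lab: AAL region label of each voxel (one label per row of vox).
roi_tc <- rowsum(vox, group = lab) / as.vector(table(lab))
# roi_tc: one row per region; row r is the spatial mean signal of region r.
\end{verbatim}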
We select the left precentral gyrus (\texttt{PreCG.L}) as the seed region since it is located in the primary motor cortex and the motions of the right toes are associated with the left brain hemisphere. We measure FC induced by $N(t)$ at the population level between the seed region and the other regions using the following five approaches: the ptFCE algorithm, naive and task Pearson correlations, beta-series regression, and coherence analysis. In Section \ref{section: simulations}, the means $\hat{\rho}_{ij, mean}^{naiveCorr}$, $\hat{\rho}_{ij, mean}^{taskCorr}$, $\hat{\rho}_{ij, mean}^{betaS}$, and $\hat{\rho}^{Coh}_{ij, mean}$ tend to perform better than the corresponding medians. Hence, we show only the mean results in the data analysis. Because of numerical issues, we omit the regions \texttt{CB7.L}, \texttt{CB7.R}, and \texttt{CB10.L} and investigate the remaining 117 regions. The detailed procedures for applying these approaches are described in Section \ref{section: simulations}. \begin{figure} \centering \includegraphics[scale=0.55, angle=270]{dataanalysis.eps} \caption{Illustration of estimation results from five FC estimation methods. The horizontal axis indicates the $116$ regions compared to \texttt{PreCG.L}. The abbreviations of region names are provided in the data set \texttt{aal2.120} in the \texttt{R} package \texttt{brainGraph}. The vertical axis presents the standardized connectivity measurements $\mathcal{X}_{r,j}^{(st)}$ between each region and the seed region \texttt{PreCG.L}. The red dotted horizontal line indicates the threshold 0.5 implemented for determining the connected/nonconnected relationships between \texttt{PreCG.L} and other regions.} \label{fig: Data Analysis Comparison} \end{figure} Suppose $\mathcal{X}_{r,j}$, for $r=1,\cdots,116$ and $j=1,\cdots,5$, are the estimated FC values between \texttt{PreCG.L} and the other 116 regions computed using the five approaches above, indexed by $j$. Since these approaches use different scales, we standardize the estimates by $\mathcal{X}_{r,j}^{(st)}=\frac{\mathcal{X}_{r,j}-\min\{\mathcal{X}_{r',j}\}_{r'=1}^{116}}{\max\{\mathcal{X}_{r',j}\}_{r'=1}^{116}-\min\{\mathcal{X}_{r',j}\}_{r'=1}^{116}}$, for all $r$ and $j$, to enable comparisons. The standardized versions of the estimates are presented in Figure \ref{fig: Data Analysis Comparison}. To quantify the agreement between the estimation approaches implemented in this study, we use 0.5 as the threshold for the standardized $\mathcal{X}_{r,j}^{(st)}$ to determine whether region $r$ is connected to \texttt{PreCG.L} according to method $j$. Specifically, if $\mathcal{X}_{r,j}^{(st)}>0.5$, then method $j$ estimates that region $r$ is functionally connected to \texttt{PreCG.L}. Define $\vartheta_{r,j}:=\mathbf{1}(\mathcal{X}_{r,j}^{(st)}>0.5)$. As a result of applying each method $j$, we obtain the connectivity pattern $\{\vartheta_{r,j}\}_{r=1}^{116}$. Agreement between methods $j_1$ and $j_2$, for all $j_1,j_2=1,\cdots,5$, is measured by Cohen's kappa statistic, denoted as $\kappa_{j_1, j_2}$ (\cite{cohen1960coefficient}), using the classification results $\{\vartheta_{r,j_1}\}_{r=1}^{116}$ and $\{\vartheta_{r,j_2}\}_{r=1}^{116}$. The $\kappa_{j_1, j_2}$, along with p-values testing the null hypothesis that the extent of agreement between each pair of methods is the same as random ($\kappa_{j_1, j_2}=0$), are presented in Table \ref{table: data comparison}. Using $\alpha=0.05$ and applying a multiple comparison correction, we find that all methods have significant agreement with each other.
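The standardization, thresholding, and agreement computations above can be sketched in \texttt{R} as follows, using \texttt{Kappa.test} from the \texttt{fmsb} package as in Table \ref{table: data comparison}; the matrix \texttt{X} of FC estimates (rows: regions, columns: methods) is an assumption of this sketch.
\begin{verbatim}
# Standardization, thresholding, and pairwise agreement (sketch).
# X: hypothetical 116 x 5 matrix of FC estimates between PreCG.L and
# the other regions, one column per method.
library(fmsb)                                 # provides Kappa.test()
X_st  <- apply(X, 2, function(x) (x - min(x)) / (max(x) - min(x)))
theta <- X_st > 0.5                           # connectivity patterns theta_{r,j}
Kappa.test(table(theta[, 1], theta[, 2]))     # e.g., ptFCE vs. naive Pearson
\end{verbatim}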
The left postcentral gyrus (\texttt{PoCG.L}) is identified by all five methods as the region with the strongest functional connectivity with the seed region. The estimated ptFCs indicate that the neuronal activity of the left precentral gyrus and that of the left postcentral gyrus corresponding to squeezing right toes are highly correlated (either negatively or positively): the magnitudes of the neuronal activity corresponding to the task of interest in these two regions tend to be linearly dependent across the entire population. In addition to the modeling advantages of our proposed ptFC approach described in Section \ref{section: advantages of the task-fc def}, we obtain differences between the estimates of ptFCE and those of competitor methods. We identify high connectivity between several regions and the seed region that is missed by the competitor methods. Specifically, we obtain high task-evoked FC between the seed region and the left/right thalamus (passing motor signals to the cerebral cortex), left/right paracentral lobule (motor nerve supply to the lower extremities), left superior temporal gyrus (containing the auditory cortex), and left Heschl gyrus (in the area of the primary auditory cortex). These regions are related to the motor function or auditory cortex and can add to the existing results on functional connectivity between the precentral gyrus and the rest of the brain. To further visualize ptFCs induced by the task of interest, we apply the MNI space coordinates of the 117 regions from the AAL atlas, where the three-dimensional coordinates are obtained from the \texttt{aal2.120} data set in the \texttt{R} package \texttt{brainGraph}. The regions are depicted by their MNI coordinates in Figure \ref{fig: 6plots12}. The grayscale shade of the edges connecting each region to the seed region illustrates the estimated ptFC between the corresponding region and the seed region (Figure \ref{fig: 6plots12}, left panel), while the edges in the right panel of Figure \ref{fig: 6plots12} present the 30 highest ptFC values. Figure \ref{fig: 6plots12} shows that most of the large estimated ptFC values are in the left brain. This is expected, since it is known that movements of the extremities are functionally associated with contralateral brain regions (\cite{nieuwenhuys2014central}). \begin{figure}[ht] \centering \includegraphics[scale=0.45]{mniplots.eps} \caption{Each dot denotes a region from the AAL atlas, located using its corresponding MNI coordinates. The abbreviated region names are given next to each dot. We apply the ptFCE algorithm to estimate the ptFCs between region \texttt{PreCG.L} and each of the remaining 116 regions. In the left panel, we use grayscale coloring of the edges to indicate the magnitude of the ptFC between the corresponding two vertices; specifically, the larger a ptFC, the darker the line segment connecting the corresponding region pair. In the right panel, the presented blue line segments indicate the 30 largest ptFCs estimated by the ptFCE algorithm among all 116 regions.} \label{fig: 6plots12} \end{figure} \begin{table} \caption{The Kappa statistics between each pair of methods are computed using the \texttt{R} function \texttt{Kappa.test} in the package \texttt{fmsb}. The p-value of a Kappa statistic is presented in the parentheses below the statistic.
}\label{table: data comparison} \begin{tabular}{l|lllllllll} \hline Methods & ptFCE & & naive Pearson & & task Pearson & & beta series & & coherence \\ \hline ptFCE & $*$ & & $0.696$ & & $0.503$ & & $0.260$ & & $0.277$ \\ & & & $(<0.0001)$ & & $(<0.0001)$ & & $(0.00636)$ & & $(0.00402)$ \\ naive Pearson & $*$ & & $*$ & & $0.732$ & & $0.391$ & & $0.367$ \\ & & & & & $(<0.0001)$ & & $(0.00022)$ & & $(0.00051)$ \\ task Pearson & $*$ & & $*$ & & $*$ & & $0.478$ & & $0.497$ \\ & & & & & & & $(<0.0001)$ & & $(<0.0001)$ \\ beta series & $*$ & & $*$ & & $*$ & & $*$ & & $0.961$ \\ & & & & & & & & & $(<0.0001)$ \\ coherence & $*$ & & $*$ & & $*$ & & $*$ & & $*$ \\ \hline \end{tabular} \end{table} \section{Conclusions}\label{section: Conclusions and Further Discussions} In this paper, we propose a rigorous random-effects type model for task-evoked BOLD signals in fMRI studies. Based on this model, we define a measure, ptFC, of task-evoked FC at the population level. We discuss shortcomings of several existing methods for evaluating task-evoked population-level FC that are addressed by our proposed framework. Thorough theoretical results on the properties of our proposed estimation procedure are presented. We develop the ptFCE algorithm for estimating ptFC and show that it is computationally efficient. The superior performance of our proposed ptFCE algorithm when data are generated using a block-design task fMRI data-generating mechanism is illustrated using simulation studies. We use ptFCE to estimate ptFCs in a motor-task data set publicly available from the HCP. The ptFCE algorithm yields FC patterns similar to those of the widely used existing methods. Additionally, ptFCE discovers connections between nodes that are not identified by existing methods. Extensions of our proposed work can include the development of a subject-level task-evoked FC definition and estimation procedure, as well as consideration of the inclusion of interaction terms $P_k(\omega;t)Q_k(\omega;t)$ (see Web Appendix H). \section*{Acknowledgements} The project herein was supported by Grant Number 5P20GM103645 from the National Institute of General Medical Sciences. \noindent\textbf{Data Availability Statement} \noindent The data supporting the findings in Section \ref{section: applications} are publicly available on the HCP website: \begin{center} {\small {\url{https://protocols.humanconnectome.org/HCP/3T/task-fMRI-protocol-details.html}} (\cite{barch2013function})}. \end{center} \noindent The \texttt{R} code for simulation studies and data analyses is available at \begin{center} {\small \url{https://github.com/KMengBrown/Population-level-Task-evoked-Functional-Connectivity.git}}. \end{center} \newpage \begin{center} \huge {\textbf{Supporting Information for ``Population-level Task-evoked Functional Connectivity''}} \end{center} \section{Web Appendix A: Relationship between the proposed BOLD signal model and some existing models} In this paper, we implement the following model for observed task-fMRI BOLD signals $Y_k(\omega;t)$, for $\omega\in\Omega$ and $k\in\{1,\cdots,K\}$. \begin{align}\label{eq: BOLD model in our paper} & Y_k(\omega;t)=P_k(\omega;t)+Q_k(\omega;t)+\epsilon_k(\omega;t),\ \ \mbox{with}\\ \notag & P_k(\omega;t)=\beta_k(\omega)\times N*h_k(t-t_{0,k}), \end{align} where $P_k(\omega;t)$ denotes the BOLD signal component stemming solely from the task $N(t)$ of interest, $Q_k(\omega;t)$ denotes the component comprising spontaneous neuronal activity and neuronal activity responding to nuisance tasks, and $\epsilon_k(\omega;t)$ is the random error.
Additionally, $t_{0,k}$ denotes the population-shared latency, which is usually smaller than the experimental time unit; see, e.g., \cite{zhang2013semi} (p. 138). The HRF shared by the whole population is $h_k(t)$. Here, we explore three existing models for BOLD signals. We show that these models are special cases of our proposed BOLD signal model (\ref{eq: BOLD model in our paper}). \noindent {\bf Example 1:} Using the notation of our paper, the BOLD signal model implemented by \cite{zhang2013semi} can be represented as follows. \begin{align}\label{eq: BOLD in Zhang (2013)} Y_k(\omega;t)=X_k(\omega;t)^T d(\omega) + \sum_{\gamma=1}^\Gamma \beta_{k,\gamma}(\omega)\times \left(N_\gamma * h_{k,\gamma}\right)(t)+\epsilon_k(\omega;t), \end{align} where $N_\gamma(t)$ are task stimulus signals, $h_{k, \gamma}(t)$ are the corresponding HRFs, and $X_k(\omega;t)^T d(\omega)$ characterizes the BOLD signal components stemming from other known sources, e.g., respiration and heartbeat. The BOLD signal model proposed by \cite{zhang2013semi} includes participant-dependent latency times. These latency times are essentially modeled as zero, since the values are usually much smaller than the corresponding experimental time unit. Suppose we are interested in the first experimental task $N_1(t)$. Define the following. \begin{align}\label{eq: notional changes} & \beta_k(\omega):=\beta_{k,1}(\omega),\ \ N(t):=N_1(t),\ \ P_k(\omega;t):=\beta_k(\omega)\times \left(N*h_k\right)(t), \\ \notag & Q_k(\omega;t):=X_k(\omega;t)^T d(\omega) + \sum_{\gamma=2}^\Gamma \beta_{k,\gamma}(\omega)\times \left(N_\gamma * h_{k,\gamma}\right)(t). \end{align} Then (\ref{eq: BOLD in Zhang (2013)}) is equivalent to the model (\ref{eq: BOLD model in our paper}) with latency $t_{0,k}=0$ and without an interaction term $P_k(\omega;t)Q_k(\omega;t)$. \noindent {\bf Example 2:} Using the notation of our paper, the BOLD signal model implemented by \cite{warnick2018bayesian} is represented as follows (see equations (1) and (2) in \cite{warnick2018bayesian}). \begin{align}\label{eq: BOLD in Warnick (2018)} Y_k(\omega;t)=\mu_k(\omega)+\sum_{\gamma=1}^\Gamma \beta_{k,\gamma}(\omega)\times \left(N_\gamma*h_{k,\gamma}\right)(t)+\epsilon_k(\omega;t), \end{align} where $N_\gamma(t)$ are task stimulus signals, $h_{k,\gamma}(t)$ are the corresponding HRFs, $\epsilon_k(\omega;t)$ is the random error, and $\mu_k(\omega)$ represents the baseline. Suppose we are interested in task $N_1(t)$. We apply the transform (\ref{eq: notional changes}) and define $Q_k(\omega;t)=\mu_k(\omega)+\sum_{\gamma=2}^\Gamma \beta_{k,\gamma}(\omega)\times \left(N_\gamma * h_{k,\gamma}\right)(t)$. Then model (\ref{eq: BOLD in Warnick (2018)}) is equivalent to our proposed model (\ref{eq: BOLD model in our paper}) with latency $t_{0,k}=0$ and without an interaction term $P_k(\omega;t)Q_k(\omega;t)$. \noindent {\bf Example 3:} Motivated by the \textit{independent component analysis} framework, \cite{joel2011relationship} implemented the following model for task-fMRI BOLD signals.
\begin{align}\label{eq: BOLD model Joel (2011)} \notag Y_k(\omega;t)=& M_{m,k}(\omega)\left\{\beta_{tm}(\omega)\times N*h_k(t)+\beta_{im}(\omega)\iota_{m}(t)\right\} \\ \notag & + M_{v,k}(\omega)\left\{\beta_{tv}(\omega)\times N*h_k(t)+\beta_{iv}(\omega)\iota_{v}(t)\right\} \\ & +M_b(\omega) \beta_b(\omega)\gamma(\omega;t), \end{align} where $M_{(\cdot),k}$ is the spatial mask at the $k^{th}$ node for the visual cortex ($M_{v,k}$), the motor cortex ($M_{m,k}$), or the whole brain ($M_{b}$), $N(t)$ is the stimulus signal corresponding to the task of interest, $h_k(t)$ is an HRF, $\iota_{(\cdot)}$ is the intrinsic activity of the corresponding cortex, $\gamma(\omega;t)$ represents random noise, and $\beta_{tm}(\omega), \beta_{im}(\omega), \beta_{tv}(\omega), \beta_{iv}(\omega), \beta_{b}(\omega)$ are the weights of the motor task, intrinsic motor activity, visual task, intrinsic visual activity, and noise, respectively. Since the task of interest in the HCP data is a motor task, we define the following notation. \begin{align*} & \beta_k(\omega):=M_{m,k}(\omega)\times \beta_{tm}(\omega), \\ & Q_k(\omega;t):=M_{m,k}(\omega)\beta_{im}(\omega)\iota_m(t) + M_{v,k}(\omega)\left\{\beta_{tv}(\omega)\times N*h_k(t)+\beta_{iv}(\omega)\iota_{v}(t)\right\}, \\ & \epsilon_k(\omega;t)= M_b(\omega) \beta_b(\omega)\gamma(\omega;t). \end{align*} Using the notation above, model (\ref{eq: BOLD model Joel (2011)}) is the same as model (\ref{eq: BOLD model in our paper}) with latency $t_{0,k}=0$ and without an interaction term $P_k(\omega;t)Q_k(\omega;t)$. \section{Web Appendix B: Lemmas, Theorems, and Their Proofs} In this section, we provide proofs of theorems and lemmas presented in our paper. Throughout this section, the time index collection $\mathcal{T}$ denotes $\{\tau\Delta\}_{\tau=0}^T$, and $\Omega$ is a discrete and finite set. \begin{lemma}\label{lemma: zero points} If $f: \mathcal{T}\rightarrow\mathbb R$ is not identically zero, its Fourier transform $\widehat{f}(\xi)$ has at most finitely many zero points, i.e., points $\xi\in\mathbb R$ such that $\widehat{f}(\xi)=0$, in any compact subset of $\mathbb R$. \end{lemma} \begin{proof} Define the complex function $\Phi_f(z):=\frac{1}{T+1}\sum_{\tau=0}^T f(\tau\Delta)e^{-2\pi z (\tau\Delta)}$, for all $z=\eta+i\xi\in\mathbb{C}$. Since it is straightforward that $\Phi_f(z)$ satisfies the \textit{Cauchy-Riemann} equation $\frac{\partial}{\partial\overline{z}}\Phi_f(z)=0$, where $\frac{\partial}{\partial\overline{z}}=\frac{1}{2}(\frac{\partial}{\partial \eta}+ i \frac{\partial}{\partial \xi})$ is a Wirtinger derivative, the Looman–Menchoff theorem implies that $\Phi_f(z)$ is a holomorphic function. Then the zero points of $\Phi_f(z)$ are isolated, i.e., every zero point has a neighbourhood that does not contain any other zero point (Theorem 3.7 of \cite{conway1973functions}). Therefore, $\widehat{f}(\xi)=\Phi_f(i\xi)$ implies the desired result. \end{proof} \begin{theorem}\label{thm: estimation theorem} Suppose signals $\{Y_k(\omega;t)\vert t\in\mathcal{T}\}_{k=1}^K$, for $\omega\in\Omega$, are defined as in (\ref{eq: BOLD model in our paper}) with reference signals $R_k(\omega;t):=Q_k(\omega;t)+\epsilon_k(\omega;t)$, $\mathbb{E}\beta_k=\mathbb{E}R_k(t)=0$, and $t_{0,k}=\tau_{0,k}\Delta$ with some $\tau_{0,k}\in\mathbb{Z}$ for all $k=1,2,\cdots,K$ and $t\in\mathcal{T}$. Let the random variable $U:\Omega\rightarrow\mathcal{T}$ be uniformly distributed on $\mathcal{T}$. Furthermore, we assume that $U(\omega)$, $\{\beta_k(\omega)\}_{k=1}^K$, and $\{R_k(\omega;t)\vert t\in\mathcal{T}\}_{k=1}^K$ are independent.
Then the autocovariance differences $\mathcal{A}_{kl}(s) := \mathbb{E}\left\{ Y_k(t-U) Y_l(t+s-U) \right\} -\mathbb{E}\left\{ R_k(t-U)R_l(t+s-U)\right\}$ depend only on $s$, and the Fourier transforms of $\mathcal{A}_{kl}(s)$ are \begin{align*} \widehat{\mathcal{A}}_{kl}(\xi)=\mathbb{E}\left( \beta_k \beta_l\right) \times \left\vert \widehat{N}(\xi) \right\vert^2 \overline{\widehat{h}_k(\xi)} \widehat{h}_l(\xi) e^{2\pi i (t_{0,k}-t_{0,l})\xi}. \end{align*} Furthermore, we have the following identity. \begin{align*} \mathcal{C}_{kl}(\xi):= \vert \widehat{\mathcal{A}}_{kl}(\xi)\vert \Big/ \sqrt{ \vert \widehat{\mathcal{A}}_{kk}(\xi) \widehat{\mathcal{A}}_{ll}(\xi) \vert } = \left\vert corr\left(\beta_k, \beta_l\right)\right\vert, \mbox{ for all }\xi\in\mathbb R. \end{align*} \end{theorem} \begin{proof} The independence between $U(\omega)$, $\{\beta_k(\omega)\}_{k=1}^K$, and $\{R_k(\omega;t)\vert t\in\mathcal{T}\}_{k=1}^K$ implies \begin{align}\label{eq: convolution calculation} \notag & \mathbb{E}[Y_k(t-U) Y_l(t+s-U)] - \mathbb{E}[R_k(t-U) R_l(t+s-U)]\\ \notag & = \mathbb{E}(\beta_k\beta_l)\times \mathbb{E}\left[ (N*h_k)(t-t_{0,k}-U) \times (N*h_l)(t+s-t_{0,l}-U)\right]\\ \notag & = \mathbb{E}(\beta_k \beta_l) \times \frac{1}{T+1} \sum_{u=0}^T \Big\{(N*h_k)\left((\tau-\tau_{0,k}-u)\Delta\right)\times (N*h_l)\left((\underline{s}+\tau_{0,k}-\tau_{0,l})\Delta+(\tau-\tau_{0,k}-u)\Delta\right)\Big\}\\ \notag & = \mathbb{E}(\beta_k \beta_l) \times \frac{1}{T+1} \sum_{v=-(\tau-\tau_{0,k})}^{T-(\tau-\tau_{0,k})} \Big\{(N*h_k)\left(-v\Delta\right)\times (N*h_l)\left((\underline{s}+\tau_{0,k}-\tau_{0,l})\Delta-v\Delta\right)\Big\}\\ & = \mathbb{E}(\beta_k \beta_l) \times \left[N*h_k(-\cdot)\right]*\left[N*h_l\right]\left(s+(t_{0,k}-t_{0,l})\right), \end{align} which depends only on $s$ and not on $t$. Here, $t=\tau\Delta$ and $\underline{s}:=s/\Delta\in\mathbb{Z}$, and the last equality follows from the periodic extension and the definition of convolution. Then, we have the following Fourier transform, \begin{align*} \widehat{\mathcal{A}}_{kl}(\xi)& = \mathbb{E}(\beta_k \beta_l) \times \overline{\widehat{(N*h_k)}(\xi)} \widehat{(N*h_l)}(\xi)e^{2\pi i (t_{0,k}-t_{0,l})\xi}\\ & =\mathbb{E}(\beta_k \beta_l) \times \left\vert\widehat{N}(\xi)\right\vert^2\overline{\widehat{h_k}(\xi)}\widehat{h_l}(\xi) e^{2\pi i (t_{0,k}-t_{0,l})\xi}, \end{align*} for all $\xi\in\mathbb R$, which implies $\mathcal{C}_{kl}(\xi) = \left\vert corr\left(\beta_k, \beta_l\right)\right\vert$ for all $\xi\in\mathbb R$. \end{proof} \begin{lemma}\label{lemma: stationarity with U} Suppose the vector-valued stochastic process $\pmb{Z}(\omega;t)=\{(Z_1(\omega;t), Z_2(\omega;t), \cdots, Z_K(\omega;t))^T\}_{t\in\mathcal{T}}$, defined on the probability space $(\Omega, 2^{\Omega}, \mathbb{P})$, is weakly stationary with mean zero, $U:\Omega\rightarrow\mathcal{T}$ is uniformly distributed, and $U(\omega)$ and $\pmb{Z}(\omega;t)$ are independent. Then the vector-valued stochastic process $\pmb{Z}(\omega;t-U(\omega))=\{(Z_1(\omega;t-U(\omega)), Z_2(\omega;t-U(\omega)), \cdots, Z_K(\omega;t-U(\omega)))^T\}_{t\in\mathcal{T}}$ is weakly stationary with mean zero as well. Additionally, $\mathbb{E}[Z_k(t-U)Z_l(t+s-U)]=\mathbb{E}[Z_k(0)Z_l(s)]$, for all $k,l=1,2,\cdots,K$. \end{lemma} \begin{proof} Let $\mu$ be the probability measure on $\mathcal{T}$ associated with the uniform random variable $U:\Omega\rightarrow\mathcal{T}$, i.e., $\mu=\mathbb{P}\circ U^{-1}$. Then we have $\mathbb{E}\{Z_k(t-U)\}=\int_{\mathcal{T}}\mathbb{E}\{Z_k(t-U)\vert U=u\}\mu(du)$.
The independence between $U$ and $\pmb{Z}$ implies $\mathbb{E}\{Z_k(t-U)\vert U=u\}=\mathbb{E}\{Z_k(t-u)\}=0$ for all $t$, which results in $\mathbb{E}\{Z_k(t-U)\}=0$ for all $t\in\mathcal{T}$. Similarly, we have \begin{align*} \mathbb{E}\left\{Z_k(t-U)Z_l(t+s-U)\right\}&=\int_{\mathcal{T}} \mathbb{E}\left\{Z_k(t-U)Z_l(t+s-U)\vert U=u\right\}\mu(du)\\ &=\int_{\mathcal{T}} \mathbb{E}\left\{Z_k(t-u)Z_l(t+s-u)\right\}\mu(du)\\ &=\int_{\mathcal{T}} \mathbb{E}\left\{Z_k(0)Z_l(s)\right\}\mu(du)\\ &=\mathbb{E}\left\{Z_k(0)Z_l(s)\right\}. \end{align*} This completes the proof of the desired result. \end{proof} \begin{theorem}\label{theorem: asymptotic bias} Suppose we have the following random vectors and stochastic processes. \begin{itemize} \item $\mathfrak{B}:=\left\{(\beta_k(\omega), \beta_l(\omega))\right\}_{\omega=1}^n$ are $n$ i.i.d. random vectors. \item $\mathfrak{R}:=\left\{\left(\tilde{R}_k(\omega;t), \tilde{R}_l(\omega;t)\right)\vert t\in\mathcal{T}\right\}_{\omega=1}^n$ are $n$ i.i.d. stochastic processes. \item $\mathfrak{W}:=\left\{\left(W_k(\omega;t), W_l(\omega;t)\right)\vert t\in\mathcal{T}\right\}_{\omega=1}^n$ are $n$ i.i.d. stochastic processes satisfying (i) random vectors $\left(W_k(\omega;t_1), W_l(\omega;t_1)\right)$ and $\left(W_k(\omega;t_2), W_l(\omega;t_2)\right)$ are independent whenever $t_1\ne t_2$, (ii) the stochastic process $\{\left(W_k(\omega;t), W_l(\omega;t)\right)\vert t\in\mathcal{T}\}$ is weakly stationary with mean zero, and (iii) $\Sigma_{kl}=\mathbb{E}\left\{W_k(t)W_l(t)\right\}$ for all $t\in\mathcal{T}$. \end{itemize} The collections $\mathfrak{B}, \mathfrak{R}, \mathfrak{W}$ are independent. Furthermore, we have the following task-fMRI BOLD signals. \footnote{Observed task-fMRI BOLD signals are $Y_k(\omega;t)=\beta_k(\omega)\times N*h_k(t-t_{0,k})+R_k(\omega;t)$, where $R_k(\omega;t)$ are true reference signals. However, the true reference signals are not observable in applications and should be estimated by $\tilde{R}_k(\omega;t)$ using the AMUSE algorithm. The corresponding estimation bias $R_k(\omega;t)-\tilde{R}_k(\omega;t)$ is denoted by $W_k(\omega;t)$. Hence, we have $Y_k(\omega;t)=\beta_k(\omega)\times N*h_k(t-t_{0,k})+\tilde{R}_k(\omega;t)+W_k(\omega;t)$, which is just a representation of $Y_k(\omega;t)=\beta_k(\omega)\times N*h_k(t-t_{0,k})+R_k(\omega;t)$.} \begin{align*} Y_{k'}(\omega;t)=\beta_{k'}(\omega)\times N*h_{k'}(t-t_{0,{k'}})+\tilde{R}_{k'}(\omega;t)+W_{k'}(\omega;t), \end{align*} for $k'\in\{k,l\}$ and $\omega=1,\cdots,n$. We define the following estimator for $\mathcal{C}_{kl}(\xi)$. \begin{align*} \mathcal{C}_{kl}^{est,n}(\xi):= \left\vert \widehat{\mathcal{A}_{kl}^{est,n}}(\xi)\right\vert \Bigg/ \sqrt{ \left\vert \widehat{\mathcal{A}_{kk}^{est,n}}(\xi) \widehat{\mathcal{A}_{ll}^{est,n}}(\xi) \right\vert } ,\ \ \ \mbox{for all } \xi\in\mathbb{R}, \end{align*} where $\widehat{(\cdot)}$ denotes the Fourier transform and \begin{align*} \notag \mathcal{A}_{kl}^{est,n}(s):=& \frac{1}{T+1}\sum_{t\in\{\tau\Delta\}_{\tau=0}^T}\left[ \frac{1}{n}\sum_{\omega=1}^n \left\{Y_k\left(\omega;t-U(\omega)\right)Y_l(\omega;t+s-U(\omega))\right\} \right] \\ & - \frac{1}{T+1}\sum_{t\in\{\tau\Delta\}_{\tau=0}^T}\left[ \frac{1}{n}\sum_{\omega=1}^n \left\{\tilde{R}_k(\omega;t-U(\omega)) \tilde{R}_l(\omega;t+s-U(\omega))\right\} \right]. \end{align*} Then we have the following asymptotic behavior of $\mathcal{C}_{kl}^{est,n}(\xi)$ as $n\rightarrow\infty$.
\begin{align}\label{eq: asymptotic behavior} \mathbb{P}\left\{\lim_{n\rightarrow\infty}\mathcal{C}^{est,n}_{kl}(\xi) = \frac{ \left\vert \mathbb{E}(\beta_k\beta_l)\frac{\overline{\widehat{h}_k(\xi)}\widehat{h}_l(\xi)}{\left\vert \widehat{h}_k(\xi) \widehat{h}_l(\xi) \right\vert} e^{2\pi i \xi (t_{0,k}-t_{0,l})} + \frac{\Sigma_{kl}}{(T+1)\left\vert\widehat{N}(\xi)\right\vert^2\left\vert\widehat{h}_k(\xi) \widehat{h}_l(\xi)\right\vert} \right\vert }{ \sqrt{ \left(\mathbb{E}(\beta_k^2)+\frac{\Sigma_{kk}}{(T+1)\left\vert\widehat{N}(\xi)\widehat{h}_k(\xi)\right\vert^2}\right) \left(\mathbb{E}(\beta_l^2)+\frac{\Sigma_{ll}}{(T+1)\left\vert\widehat{N}(\xi)\widehat{h}_l(\xi)\right\vert^2}\right) } }\right\}=1. \end{align} \end{theorem} \begin{proof} For each pair $(s,t)\in\mathcal{T}^2$, the \textit{strong law of large numbers} implies that there exists $\mathcal{N}_{s,t}\in 2^{\Omega}$ depending on the pair $(s,t)$ such that $\mathbb{P}(\mathcal{N}_{s,t})=0$ and \begin{align*} & \frac{1}{n}\sum_{\omega=1}^n \left[ \left\{Y_k\left(\omega;t-U(\omega)\right)Y_l(\omega;t+s-U(\omega))\right\}-\left\{\tilde{R}_k\left(\omega;t-U(\omega)\right)\tilde{R}_l(\omega;t+s-U(\omega))\right\}\right]\\ & \rightarrow\mathbb{E}\left\{Y_k(t-U)Y_l(t+s-U)\right\}-\mathbb{E}\left\{\tilde{R}_k(t-U)\tilde{R}_l(t+s-U)\right\}=:\mathcal{D}_{kl}(t,s), \end{align*} in $\Omega-\mathcal{N}_{s,t}$ as $n\rightarrow\infty$. Then we have \begin{align}\label{eq: limit of A} \lim_{n\rightarrow\infty}\mathcal{A}_{kl}^{est,n}(s)=\frac{1}{T+1}\sum_{t\in\{\tau\Delta\}_{\tau=0}^T} \mathcal{D}_{kl}(t,s) \end{align} for all $s\in\mathcal{T}$ in $\Omega-\mathcal{N}$, where $\mathcal{N}:=\bigcup_{{s,t}\in\mathcal{T}}\mathcal{N}_{s,t}$. The limit (\ref{eq: limit of A}) implies the following in $\Omega-\mathcal{N}$. \begin{align*} \lim_{n\rightarrow\infty}\widehat{\mathcal{A}_{kl}^{est,n}}(\xi)&=\lim_{n\rightarrow\infty}\left\{\frac{1}{T+1}\sum_{s\in\{\tau\Delta\}_{\tau=0}^T} \mathcal{A}_{kl}^{est,n}(s) e^{2\pi i \xi s}\right\}\\ &=\frac{1}{T+1}\sum_{s\in\{\tau\Delta\}_{\tau=0}^T} \lim_{n\rightarrow\infty}\left\{\mathcal{A}_{kl}^{est,n}(s)\right\} e^{2\pi i \xi s}\\ &=\frac{1}{T+1}\sum_{t\in\{\tau\Delta\}_{\tau=0}^T} \left\{\frac{1}{T+1}\sum_{s\in\{\tau\Delta\}_{\tau=0}^T} \mathcal{D}_{kl}(t,s) e^{2\pi i \xi s}\right\} \end{align*} for all $\xi\in\mathbb{R}$. Since $\frac{1}{T+1}\sum_{s\in\{\tau\Delta\}_{\tau=0}^T} \mathcal{D}_{kl}(t,s) e^{2\pi i \xi s}$ is the Fourier transform of $\mathcal{D}_{kl}(t,s)$ with respect to $s$, repeating the calculation strategy in (\ref{eq: convolution calculation}), we have \begin{align*} \frac{1}{T+1}\sum_{s\in\{\tau\Delta\}_{\tau=0}^T} \mathcal{D}_{kl}(t,s) e^{2\pi i \xi s}=\mathbb{E}(\beta_k\beta_l)\frac{\overline{\widehat{h}_k(\xi)}\widehat{h}_l(\xi)}{\left\vert \widehat{h}_k(\xi) \widehat{h}_l(\xi) \right\vert} e^{2\pi i \xi (t_{0,k}-t_{0,l})} + \frac{\Sigma_{kl}}{(T+1)\left\vert\widehat{N}(\xi)\right\vert^2\left\vert\widehat{h}_k(\xi) \widehat{h}_l(\xi)\right\vert}, \end{align*} which does not depend on $t$. Then we have \begin{align}\label{eq: limit of A hat} \lim_{n\rightarrow\infty}\widehat{\mathcal{A}_{kl}^{est,n}}(\xi)=\mathbb{E}(\beta_k\beta_l)\frac{\overline{\widehat{h}_k(\xi)}\widehat{h}_l(\xi)}{\left\vert \widehat{h}_k(\xi) \widehat{h}_l(\xi) \right\vert} e^{2\pi i \xi (t_{0,k}-t_{0,l})} + \frac{\Sigma_{kl}}{(T+1)\left\vert\widehat{N}(\xi)\right\vert^2\left\vert\widehat{h}_k(\xi) \widehat{h}_l(\xi)\right\vert} \end{align} for all $\xi\in\mathbb{R}$ in $\Omega-\mathcal{N}$. 
The limit (\ref{eq: limit of A hat}) implies \begin{align*} \lim_{n\rightarrow\infty}\mathcal{C}_{kl}^{est,n}(\xi)&= \left\vert \lim_{n\rightarrow\infty} \widehat{\mathcal{A}_{kl}^{est,n}}(\xi)\right\vert \Bigg/ \sqrt{ \left\vert \lim_{n\rightarrow\infty} \widehat{\mathcal{A}_{kk}^{est,n}}(\xi)\times \lim_{n\rightarrow\infty} \widehat{\mathcal{A}_{ll}^{est,n}}(\xi) \right\vert }\\ &=\frac{ \left\vert \mathbb{E}(\beta_k\beta_l)\frac{\overline{\widehat{h}_k(\xi)}\widehat{h}_l(\xi)}{\left\vert \widehat{h}_k(\xi) \widehat{h}_l(\xi) \right\vert} e^{2\pi i \xi (t_{0,k}-t_{0,l})} + \frac{\Sigma_{kl}}{(T+1)\left\vert\widehat{N}(\xi)\right\vert^2\left\vert\widehat{h}_k(\xi) \widehat{h}_l(\xi)\right\vert} \right\vert }{ \sqrt{ \left(\mathbb{E}(\beta_k^2)+\frac{\Sigma_{kk}}{(T+1)\left\vert\widehat{N}(\xi)\widehat{h}_k(\xi)\right\vert^2}\right) \left(\mathbb{E}(\beta_l^2)+\frac{\Sigma_{ll}}{(T+1)\left\vert\widehat{N}(\xi)\widehat{h}_l(\xi)\right\vert^2}\right) } } \end{align*} for all $\xi\in\mathbb{R}$ in $\Omega-\mathcal{N}$. Since $\mathbb{P}(\mathcal{N})=\mathbb{P}\left(\bigcup_{s,t\in\mathcal{T}}\mathcal{N}_{s,t}\right)\le\sum_{s,t\in\mathcal{T}}\mathbb{P}(\mathcal{N}_{s,t})=0$, the desired result (\ref{eq: asymptotic behavior}) follows. \end{proof} \textbf{Proof of Theorem 2 in the manuscript:} In the following equation provided in Theorem 1, $\pmb{P}$ is a permutation matrix and $\pmb{\Lambda}$ is a diagonal matrix. \begin{align}\label{eq: thm 1 equation 1} \begin{pmatrix} 1 & 1 \\ \frac{1}{\beta^*} & 0 \end{pmatrix} = \pmb{A} \pmb{\Lambda}^{-1} \pmb{P}^{-1}. \end{align} Then equation (\ref{eq: thm 1 equation 1}) implies $\beta^*=\beta_k(\omega)=a_{1i'}/a_{2i'}$ for some $i'\in\{1,2\}$ and $a_{2j}=0$ when $j\ne i'$. Furthermore, the pair $(\pmb{A}, \pmb{S}(\omega;t))$ also satisfies the following equation, which is provided in Theorem 1 as well. \begin{align}\label{eq: thm 1 equation 2} \pmb{A} \begin{pmatrix} s_1(\omega;t) \\ s_2(\omega;t) \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ \frac{1}{\beta^*} & 0 \end{pmatrix} \begin{pmatrix} J_k(\omega;t) \\ R_k(\omega;t-U(\omega)) \end{pmatrix} = \begin{pmatrix} Y_k(\omega;t-U(\omega)) - \beta^* C_k \\ (N*h_k)(t-t_{0,k}-U(\omega))-C_k \end{pmatrix}, \end{align} and equation (\ref{eq: thm 1 equation 2}) implies $a_{2i'}s_{i'}(\omega;t)=(N*h_k)(t-t_{0,k}-U(\omega))-C_k$. Therefore, we have \begin{align*} a_{1i'}s_{i'}(\omega;t)=\frac{a_{1i'}}{a_{2i'}}\times a_{2i'}s_{i'}(\omega;t)=\beta_k(\omega)\times\left\{(N*h_k)(t-t_{0,k}-U(\omega))-C_k\right\}=J_k(\omega;t). \end{align*} \section{Web Appendix C: The Identifiability of Task-evoked Terms} Let $U: \Omega\rightarrow\mathcal{T}$ be a uniformly distributed random variable. Identifying the task-evoked terms $P_k(\omega;t)$ from $Y_k(\omega;t)=P_k(\omega;t)+R_k(\omega;t)$ is equivalent to identifying $P_k(\omega;t-U(\omega))$ from $Y_k(\omega;t-U(\omega))$ given the auxiliary random variable $U(\omega)$, which can be assumed to be known since it is artificially generated. Our proposed BOLD signal model $Y_k(\omega;t-U(\omega))=P_k(\omega;t-U(\omega))+R_k(\omega;t-U(\omega))$ can be represented in the following ``mixing'' form. \begin{align}\label{eq: our proposed BOLD signal model} Y_k\left(\omega;t-U(\omega)\right)=\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} P_k(\omega;t-U(\omega))\\ R_k(\omega;t-U(\omega)) \end{pmatrix}, \end{align} where the identity matrix mixes the source signals $P_k(\omega;t-U(\omega))$ and $R_k(\omega;t-U(\omega))$.
In this section, we provide the identifiability of the task-evoked terms $P_k(\omega;t-U(\omega))=\beta_k(\omega)N*h_k(t-t_{0,k}-U(\omega))$ as well as reference terms $R_k(\omega;t-U(\omega))$ in (\ref{eq: our proposed BOLD signal model}), up to deterministic coefficients, among the family of \textit{mixing forms} defined as follows. \begin{align}\label{eq: linear mixing forms} \mathcal{M}:=\left\{ \begin{pmatrix} s_1(\omega;t)\\ s_2(\omega;t) \end{pmatrix} \Bigg\vert \exists \pmb{A}\in\mathbb{R}^{2\times 2}\mbox{ such that }Y_k\left(\omega;t-U(\omega)\right)=\pmb{A}\begin{pmatrix} s_1(\omega;t)\\ s_2(\omega;t) \end{pmatrix}\right\}, \end{align} where all matrices $\pmb{A}$ are deterministic 2-by-2 matrices mixing \textit{source signals} $s_1(\omega;t)$ and $s_2(\omega;t)$. Specifically, we will show that, under some probabilistic conditions, all the stochastic signals $\left(s_1(\omega;t), s_2(\omega;t)\right)$ in (\ref{eq: linear mixing forms}) are proportional to $(P_k(\omega;t-U(\omega)), R_k(\omega;t-U(\omega)))$ or $(R_k(\omega;t-U(\omega)), P_k(\omega;t-U(\omega)))$. Then the task-evoked terms $P_k(\omega;t-U(\omega))$ are identifiable in $\mathcal{M}$ in the following sense: there exist $i'\in\{1,2\}$ and $\lambda\ne0$ such that $s_{i'}(\omega;t)=\lambda P_k(\omega;t-U(\omega))=\{\lambda\beta_k(\omega)\}N*h_k(t-t_{0,k}-U(\omega))$ for all $t\in\mathcal{T}$. Since the deterministic coefficient $\lambda$ is canceled out in computing $corr(\beta_k,\beta_l)$, the identifiability of $P_k(\omega;t)$ up to a deterministic coefficient implies the exact identifiability of ptFC $\vert corr(\beta_k,\beta_l)\vert$. The following theorem implies the identifiability of $P_k(\omega;t)$ and provides a formal foundation for the discussion above. \begin{theorem}\label{thm: identifiability theorem} Let $U:\Omega\rightarrow\mathcal{T}$ be uniformly distributed. Suppose $U(\omega)$, $\beta_k(\omega)$, and $\{R_k(\omega;t)\vert t\in\mathcal{T}\}$ are independent, and $R_k(\omega;t)$ is WSMZ. Furthermore, we assume that there exists a $t^*\in\mathcal{T}$ such that \begin{align*} \frac{ \mathbb{E}\left\{P_k(t-U)P_k(t-t^*-U)\right\} }{\mathbb{E}\left\{P_k(t-U)\right\}^2} \ne \frac{ \mathbb{E}\left\{R_k(t-U)R_k(t-t^*-U)\right\} }{\mathbb{E}\left\{R_k(t-U)\right\}^2}. \end{align*} If there exists a WSMZ stochastic process $(s_1(\omega;t), s_2(\omega;t))$ such that $\{s_1(\omega;t)\}_{t\in\mathcal{T}}$ and $\{s_2(\omega;t)\}_{t\in\mathcal{T}}$ are uncorrelated, \begin{align*} & \frac{\mathbb{E}\left\{s_1(t)s_1(t-t^*)\right\}}{\mathbb{E}\left\{s_1(t)\right\}^2} \ne \frac{\mathbb{E}\left\{s_2(t)s_2(t-t^*)\right\}}{\mathbb{E}\left\{s_2(t)\right\}^2}, \mbox{ and}\\ & Y_k(\omega;t-U(\omega))=P_k(\omega;t-U(\omega))+R_k(\omega;t-U(\omega))=\pmb{A}\begin{pmatrix} s_1(\omega;t)\\ s_2(\omega;t) \end{pmatrix} \end{align*} for some deterministic 2-by-2 matrix $\pmb{A}$, then there exists a permutation matrix $\pmb{P}$ and a non-singular diagonal matrix $\pmb{\Lambda}$ such that \begin{align}\label{eq: the last identifiability equation} \begin{pmatrix} P_k(\omega;t-U(\omega))\\ R_k(\omega;t-U(\omega)) \end{pmatrix}=\pmb{P\Lambda}\begin{pmatrix} s_1(\omega;t)\\ s_2(\omega;t) \end{pmatrix}. \end{align} \end{theorem} \noindent\textbf{Remark}: (i) Our discussion of identifiability presented at the beginning of this section is based on (\ref{eq: the last identifiability equation}). 
(ii) Since $\mathbb{E}\beta_k=\mathbb{E}R_k(t)=0$ for all $t$, we have $\mathbb{E}\left\{P_k(t_1)R_k(t_2)\right\}=0$ for all $t_1, t_2\in\mathcal{T}$, i.e., $P_k(\omega;t_1)$ and $R_k(\omega;t_2)$ are uncorrelated. (iii) One can verify that $P_k(\omega;t-U(\omega))=\beta_k(\omega)N*h_k(t-t_{0,k}-U(\omega))$ is WSMZ. (iv) Since $R_k(\omega;t)$ is WSMZ, Lemma \ref{lemma: stationarity with U} in the preceding section shows that $R_k(\omega;t-U(\omega))$ is WSMZ as well. Based on these remarks, Theorem \ref{thm: identifiability theorem} is a straightforward result following from Theorem 2 of \cite{tong1991indeterminacy}. \section{Web Appendix D: The Performance of ptFCE in Estimating ptFCs} In this section, we analyze the performance of the ptFCE algorithm in terms of estimation bias and variance from both theoretical and simulation perspectives. We first describe the estimation-bias mechanism of the ptFCE algorithm, motivated by \begin{align}\label{eq: estimation formula with noise} \mathbb{P}\left\{\lim_{n\rightarrow\infty}\mathcal{C}^{est,n}_{kl}(\xi) = \frac{ \left\vert \mathbb{E}(\beta_k\beta_l)\frac{\overline{\widehat{h}_k(\xi)}\widehat{h}_l(\xi)}{\left\vert \widehat{h}_k(\xi) \widehat{h}_l(\xi) \right\vert} e^{2\pi i \xi (t_{0,k}-t_{0,l})} + \frac{\Sigma_{kl}}{(T+1)\left\vert\widehat{N}(\xi)\right\vert^2\left\vert\widehat{h}_k(\xi) \widehat{h}_l(\xi)\right\vert} \right\vert }{ \sqrt{ \left(\mathbb{E}(\beta_k^2)+\frac{\Sigma_{kk}}{(T+1)\left\vert\widehat{N}(\xi)\widehat{h}_k(\xi)\right\vert^2}\right) \left(\mathbb{E}(\beta_l^2)+\frac{\Sigma_{ll}}{(T+1)\left\vert\widehat{N}(\xi)\widehat{h}_l(\xi)\right\vert^2}\right) } }\right\}=1, \end{align} which is from Web Theorem \ref{theorem: asymptotic bias}. Because the quantities in \begin{align*} \Sigma_{kk}\Big/[(T+1)\vert\widehat{N}(\xi)\widehat{h}_k(\xi)\vert^2]\approx 0,\ \ \ \Sigma_{ll}\Big/[(T+1)\vert\widehat{N}(\xi)\widehat{h}_l(\xi)\vert^2]\approx 0 \end{align*} are approximately but not exactly zero, the asymptotic estimation bias \begin{align*} \left\vert \lim_{n\rightarrow\infty}\mathcal{C}^{est,n}_{kl}(\xi) - \mathcal{C}_{kl}(\xi) \right\vert = \left\vert \lim_{n\rightarrow\infty}\mathcal{C}^{est,n}_{kl}(\xi) - \frac{\left\vert\mathbb{E}(\beta_k \beta_l)\right\vert}{\sqrt{\mathbb{E}(\beta_k^2)\mathbb{E}(\beta_l^2)}} \right\vert \end{align*} always exists. We further assume that the approximation residuals $W_{k'}(\omega;t)=R_{k'}(\omega;t)-\tilde{R}_{k'}(\omega;t)$ at different nodes are independent, implying $\Sigma_{kl}=0$ in (\ref{eq: estimation formula with noise}). Then the following inequality, as a result of (\ref{eq: estimation formula with noise}), indicates that our ptFCE algorithm tends to underestimate ptFC. \begin{align}\label{eq: underestimation inequality} \notag\lim_{n\rightarrow\infty}\mathcal{C}_{kl}^{est,n}(\xi)&=_{a.s.}\frac{ \left\vert \mathbb{E}(\beta_k\beta_l) \right\vert }{ \sqrt{ \left(\mathbb{E}(\beta_k^2)+\frac{\Sigma_{kk}}{(T+1)\left\vert\widehat{N}(\xi)\widehat{h}_k(\xi)\right\vert^2}\right) \left(\mathbb{E}(\beta_l^2)+\frac{\Sigma_{ll}}{(T+1)\left\vert\widehat{N}(\xi)\widehat{h}_l(\xi)\right\vert^2}\right) } }\\ & \le \frac{ \left\vert \mathbb{E}(\beta_k\beta_l) \right\vert }{ \sqrt{ \mathbb{E}(\beta_k^2) \mathbb{E}(\beta_l^2) } } = \vert corr(\beta_k, \beta_l)\vert.
\end{align} Furthermore, the larger the variance of the noise $\epsilon_k(\omega;t)$ in \begin{align*} Y_k(\omega;t)=\beta_k(\omega)\times \left(N*h_k\right)\left(t-t_{0,k}\right)+R_k(\omega;t),\ \ t\in\mathcal{T},\ \ k=1, \cdots, K,\ \ \omega\in\Omega, \end{align*} the less accurate the approximation $R_{k'}(\omega;t)\approx\tilde{R}_{k'}(\omega;t)$ and the larger $\Sigma_{kk}$ and $\Sigma_{ll}$ in inequality (\ref{eq: underestimation inequality}). Hence, the larger the variance of the noise $\epsilon_k(\omega;t)$, the larger the underestimation bias. The formula above also indicates that the larger the underlying ptFC, the larger the underestimation bias. Subsequent simulations confirm all the conclusions on bias presented herein. We also show that the underestimation bias is generally moderate in estimating ptFCs. We present the performance of ptFCE in estimating ptFC by investigating the estimation bias and variance using simulated data. Specifically, suppose the true ptFC in our simulation mechanism is $\rho$ and the estimated ptFC from this algorithm is $\widehat{\rho}$; we then investigate the bias $\widehat{\rho}-\rho$ and the variance of $\widehat{\rho}$. We implement the following data-generating mechanism, compatible with the definition of ptFC. The main steps of this mechanism are as follows, while the details are provided in Web Appendix E. \noindent \textbf{Mechanism 0}: \textbf{Step 1}, we generate random coefficients $\left(\beta_{1}(\omega), \beta_{2}(\omega)\right)^T\sim N_2\left(\pmb{0}, (\sigma_{ij})_{1\le i,j \le 2}\right)$ for $\omega=1,\cdots,n$, and $\rho:=\vert\sigma_{12}/\sqrt{\sigma_{11}\sigma_{22}}\vert$ is the true underlying ptFC between synthetic nodes $1$ and $2$. \textbf{Step 2}, we generate reference signals $\{R_{k'}(\omega;\tau\Delta)=Q_{k'}(\omega;\tau\Delta)+\epsilon_{k'}(\omega;\tau\Delta)\}_{\tau=0}^T$ for $\omega=1,\cdots,n$ and $k'\in\{1,2\}$, where the $2n(T+1)$ noise values $\{\epsilon_{k'}(\omega;\tau\Delta) \vert k'=1,2; \omega=1,\cdots,n; \tau=0,\cdots,T\}\sim_{iid} N(0,V)$. \textbf{Step 3}, we compute the signals $\{Y_{k'}(\omega;\tau\Delta)\vert k'=1,2\}_{\tau=0}^T$, for $\omega=1,\cdots,n$, by $Y_{k'}(\omega; \tau\Delta)=9000+\beta_{k'}(\omega)\times \left(N * h_{k'}\right)(\tau\Delta) + R_{k'}(\omega; \tau\Delta)$. We investigate sample sizes $n\in\{50, 100, 308, 1000\}$, where $308$ is the sample size of the HCP dataset used to obtain the results in our paper. We apply the ptFCE algorithm to estimate the underlying $\rho$ from the synthetic signals. The corresponding estimate is denoted by $\widehat{\rho}$. For each $\rho\in\{0.25, 0.5, 0.75\}$, we repeat this procedure 500 times. The resulting estimates $\hat{\rho}$ are summarized in Web Table \ref{table: simulation 1 summary} and Web Figure \ref{fig: Boxplots_estimation}. \begin{table} \caption{Summaries of the estimated $\widehat{\rho}$ under different underlying ptFC $\rho$ scenarios.
The percentage in parentheses after each mean shows the corresponding relative bias $(\widehat{\rho}-\rho)/\rho$, where minus signs indicate underestimation.}\label{table: simulation 1 summary} \begin{tabular}{llllllllll} \hline & & $\rho=0.25$ & & & $\rho=0.5$ & & & $\rho=0.75$ & \\ \cline{3-4} \cline{6-7} \cline{9-10} & & mean & sd & & mean & sd & & mean & sd \\ \hline $n=50$ & & 0.255 (2$\%$) & 0.120 & & 0.456 (-8.8$\%$) & 0.114 & & 0.670 (-10.7$\%$) & 0.081 \\ $n=100$ & & 0.246 (-1.6$\%$) & 0.093 & & 0.460 (-8$\%$) & 0.079 & & 0.671 (-10.5$\%$) & 0.055 \\ $n=308$ & & 0.248 (-0.8$\%$) & 0.055 & & 0.457 (-8.6$\%$) & 0.047 & & 0.670 (-10.7$\%$) & 0.030 \\ $n=1000$ & & 0.246 (-1.6$\%$) & 0.028 & & 0.462 (-7.6$\%$) & 0.026 & & 0.675 (-10$\%$) & 0.018 \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[scale=0.65]{EstimationPurpose.eps} \caption{Boxplots summarizing the estimated $\widehat{\rho}$ under different underlying ptFC $\rho$ and sample size $n$ scenarios.} \label{fig: Boxplots_estimation} \end{figure} Additionally, we investigate the influence of the random noise $\{\epsilon_{k'}(\omega;\tau\Delta)\}_{k',\tau,\omega}\sim_{iid} N(0, V)$ on the ptFCE algorithm. Specifically, we fix the sample size $n=308$; for each $\lambda\in[0.5, 5]$, we conduct the 500-run simulation study described above, except that we change the random noise to $\{\epsilon_{k'}(\omega;\tau\Delta)\}_{k',\tau,\omega}\sim_{iid} N(0, \lambda V)$. For each $\lambda$ and simulation run $r$, the estimated ptFC between the synthetic nodes is denoted by $\widehat{\rho}_{\lambda}^{(r)}$; we thereby obtain 500 curves $\{\widehat{\rho}_{\lambda}^{(r)}\vert\lambda\in[0.5,5]\}_{r=1}^{500}$, presented in Web Figure \ref{fig: Influence of noise}. \begin{figure} \centering \includegraphics[scale=1, angle=270]{Influence_of_noise.eps} \caption{The red, orange, and green curves present the collections $\{\widehat{\rho}_{\lambda}^{(r)}\vert\lambda\in[0.5,5]\}_{r=1}^{500}$ corresponding to the underlying ptFCs $\rho=0.25, 0.5, 0.75$. The three blue solid curves present the mean curves $\{\frac{1}{500}\sum_{r=1}^{500} \widehat{\rho}_{\lambda}^{(r)}\vert\lambda\in[0.5,5]\}$ in the three underlying ptFC scenarios, and the three black dotted lines present the three underlying ptFCs.} \label{fig: Influence of noise} \end{figure} As expected from the theoretical analysis of the bias mechanism, our proposed ptFCE algorithm tends to underestimate true ptFCs; see Web Table \ref{table: simulation 1 summary} and Web Figures \ref{fig: Boxplots_estimation} and \ref{fig: Influence of noise}. However, Web Table \ref{table: simulation 1 summary} shows that the bias is moderate. The simulations also confirm the other conclusions on estimation bias presented in the theoretical analysis above. Additionally, Web Figure \ref{fig: Influence of noise} shows that the magnitude of the noise $\epsilon_k(\omega;t)$ does not influence the variance of the ptFCE estimates. Increasing the sample size reduces the estimation variance. \section{Web Appendix E: Data-generating Mechanisms for Simulations}\label{section: data-generating mechanisms} In this section, we provide the details of Mechanisms 0, 1, and 2 for generating synthetic signals in the simulation studies in our paper. To mimic the HCP data of interest, we apply the following parameters in all three data-generating mechanisms. \begin{itemize} \item Repetition time $\Delta=0.72$ (seconds). \item Number of observation time points $T=283$.
\item The task of interest (squeezing the right toes) is represented by the stimulus signal $N(t)=\mathbf{1}_{[86.5, 98.5)}(t)+\mathbf{1}_{[162, 174)}(t)$. \item The tasks that are not of interest are represented by the stimulus signals $\tilde{N}_1(t)=\mathbf{1}_{[71.35,83.35)}(t)+\mathbf{1}_{[177.125, 189.125)}(t)$, $\tilde{N}_2(t)=\mathbf{1}_{[11, 23)}(t)+\mathbf{1}_{[116.63, 128.63)}(t)$, $\tilde{N}_3(t)=\mathbf{1}_{[26.13, 38.13)}(t)+\mathbf{1}_{[146.88, 158.88)}(t)$, and $\tilde{N}_4(t)=\mathbf{1}_{[56.26, 68.26)}(t)+\mathbf{1}_{[131.75, 143.75)}(t)$. They correspond to the tasks of squeezing the left toes, squeezing the left/right fingers, and moving the tongue. \item HRF functions $h_k$ and $\tilde{h}_{k,\gamma}$, for $k\in\{1,3\}$ and $\gamma\in\{1,2,3,4\}$, are the double-gamma variate functions implemented in the \texttt{R} function \texttt{canonicalHRF} with parameters \texttt{a1}$=4$, \texttt{a2}$=10$, \texttt{b1}$=0.8$, \texttt{b2}$=0.8$, \texttt{c}$=0.4$; HRF functions $h_2$ and $\tilde{h}_{2,\gamma}$, for $\gamma\in\{1,2,3,4\}$, are the same function with parameters \texttt{a1}$=8$, \texttt{a2}$=14$, \texttt{b1}$=1$, \texttt{b2}$=1$, \texttt{c}$=0.3$. It is important to emphasize that none of the HRFs herein is canonical. The curves of all HRF functions used in our simulation studies are presented in Web Figure \ref{fig: Different HRFs}. \end{itemize} \begin{figure} \centering \includegraphics[scale=1]{Different_HRF.eps} \caption{The black solid curve presents the canonical HRF (the \texttt{R} function \texttt{canonicalHRF} with default parameters). The blue dashed curve presents the \texttt{R} function \texttt{canonicalHRF} with parameters \texttt{a1}$=4$, \texttt{a2}$=10$, \texttt{b1}$=0.8$, \texttt{b2}$=0.8$, \texttt{c}$=0.4$. The red dotted curve presents the \texttt{R} function \texttt{canonicalHRF} with parameters \texttt{a1}$=8$, \texttt{a2}$=14$, \texttt{b1}$=1$, \texttt{b2}$=1$, \texttt{c}$=0.3$.} \label{fig: Different HRFs} \end{figure} \subsection{Mechanism 0} This mechanism is based on the task-fMRI BOLD signal model proposed in our paper. Specifically, we implement the following model to generate synthetic data. \begin{align}\label{eq: data-generating model 1} & Y_{k'}(\omega;t)=9000+\beta_{k'}(\omega)\times N*h_{k'}(t) + \left\{\sum_{\gamma=1}^4 \beta_{k',\gamma}(\omega)\times \tilde{N}_{\gamma}*\tilde{h}_{k',\gamma}(t)\right\}+\epsilon_{k'}(\omega;t),\\ \notag& \mbox{where } t\in\mathcal{T}=\{\tau\Delta\}_{\tau=0}^T \mbox{ and }k'\in\{1,2\}. \end{align} We generate the signals $\{(Y_1(\omega;\tau\Delta), Y_2(\omega;\tau\Delta))\}_{\tau=0}^T$, for $\omega=1,\cdots,n$, by the following steps. \begin{itemize} \item \textbf{Step 1}: Generate bivariate normal random vectors $\left(\beta_{1}(\omega), \beta_{2}(\omega)\right)^T\sim_{i.i.d.} N_2\left((0,0)^T, (\sigma_{ij})_{1\le i,j \le2}\right)$ for $\omega=1,\cdots,n$, where $\rho:=\vert\sigma_{12}/\sqrt{\sigma_{11}\sigma_{22}}\vert$ is the true underlying ptFC, and \begin{align}\label{eq: underlying rho} \sigma_{11}=2,\ \ \sigma_{22}=3,\ \ \sigma_{12}=\rho\times\sqrt{\sigma_{11}\sigma_{22}}. \end{align} \item\textbf{Step 2}: For each $\gamma\in\{1,2,3,4\}$, generate bivariate normal random vectors \begin{align*} \begin{pmatrix} \beta_{1,\gamma}(\omega)\\ \beta_{2,\gamma}(\omega) \end{pmatrix} \sim_{i.i.d.} N_2\left(\begin{pmatrix} 0\\ 0 \end{pmatrix}, \begin{pmatrix} 2 & 0.3\times\sqrt{2\times3}\\ 0.3\times\sqrt{2\times3} & 3 \end{pmatrix}\right), \end{align*} for $\omega=1,\cdots,n$.
The four collections $\left\{\left(\beta_{1,\gamma}(\omega), \beta_{2,\gamma}(\omega)\right)^T\right\}_{\omega=1}^n$, for $\gamma\in\{1,2,3,4\}$, are independently generated. \item\textbf{Step 3}: For each fixed $\omega\in\{1,\cdots,n\}$, generate (white) noise as follows. \begin{align*} \begin{pmatrix} \epsilon_1(\omega;\tau\Delta)\\ \epsilon_2(\omega;\tau\Delta) \end{pmatrix}\sim_{i.i.d.} N_2\left(\begin{pmatrix} 0\\ 0 \end{pmatrix}, \begin{pmatrix} 30 & 0\\ 0 & 30 \end{pmatrix}\right), \mbox{ for }\tau=0,\cdots,T. \end{align*} The $n$ collections $\left\{\left(\epsilon_1(\omega;\tau\Delta), \epsilon_2(\omega;\tau\Delta)\right)\right\}_{\tau=0}^T$, for $\omega\in\{1,\cdots,n\}$, are independently generated. \item\textbf{Step 4}: Compute the signals $\{(Y_1(\omega;\tau\Delta), Y_2(\omega;\tau\Delta))\}_{\tau=0}^T$, for $\omega=1,\cdots,n$, by model (\ref{eq: data-generating model 1}). \end{itemize} A pair of simulated signals $\{(Y_1(\omega;\tau\Delta), Y_2(\omega;\tau\Delta))\}_{\tau=0}^T$ generated using Mechanism 0 is presented in Web Figure \ref{fig: Signals from Mechanism 1}. \begin{figure} \centering \includegraphics[scale=0.6]{Signals_mech_1.eps} \caption{Left panels present signals generated for node 1, and right panels present signals generated for node 2. The underlying ptFCs for the pairs in the upper, middle, and lower rows are 0.25, 0.5, and 0.75, respectively. } \label{fig: Signals from Mechanism 1} \end{figure} \subsection{Mechanism 1} Mechanism 1 is very similar to Mechanism 0 and is based on (\ref{eq: data-generating model 1}) as well, except that $k'\in\{1,2,3\}$. We generate the signals $\{(Y_1(\omega;\tau\Delta), Y_2(\omega;\tau\Delta), Y_3(\omega;\tau\Delta))\}_{\tau=0}^T$, for $\omega=1,\cdots,n$, using the following steps. \begin{itemize} \item \textbf{Step 1}: Generate trivariate normal random vectors $\left(\beta_{1}(\omega), \beta_{2}(\omega), \beta_{3}(\omega)\right)^T\sim_{i.i.d.} N_3\left(\pmb{0}, (\sigma_{ij})_{1\le i,j \le 3}\right)$ for $\omega=1,\cdots,n$, where \begin{align}\label{eq: underlying rho mechanism 1} \left(\sigma_{ij}\right)_{1\le i,j \le3}=\begin{pmatrix} 2 & \rho_{12}\times\sqrt{2\times3} & 0\\ \rho_{12}\times\sqrt{2\times3} & 3 & \rho_{23}\times\sqrt{2\times3} \\ 0 & \rho_{23}\times\sqrt{2\times3} & 2 \end{pmatrix}, \end{align} and $\rho_{ij}:=\vert\sigma_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}}\vert$ for $(i,j)\in\{(1,2), (2,3)\}$ are the true underlying ptFCs. \item\textbf{Step 2}: For each $\gamma\in\{1,2,3,4\}$, generate trivariate normal random vectors \begin{align*} \begin{pmatrix} \beta_{1,\gamma}(\omega)\\ \beta_{2,\gamma}(\omega)\\ \beta_{3,\gamma}(\omega) \end{pmatrix} \sim_{i.i.d.} N_3\left(\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}, \begin{pmatrix} 2 & 0.3\times\sqrt{2\times3} & 0\\ 0.3\times\sqrt{2\times3} & 3 & 0.3\times\sqrt{2\times3} \\ 0 & 0.3\times\sqrt{2\times3} & 2 \end{pmatrix}\right), \end{align*} for $\omega=1,\cdots,n$. The four collections $\left\{\left(\beta_{1,\gamma}(\omega), \beta_{2,\gamma}(\omega), \beta_{3,\gamma}(\omega)\right)\right\}_{\omega=1}^n$, for $\gamma\in\{1,2,3,4\}$, are independently generated. \item\textbf{Step 3}: For each fixed $\omega\in\{1,\cdots,n\}$, generate (white) noise as follows. \begin{align*} \begin{pmatrix} \epsilon_1(\omega;\tau\Delta)\\ \epsilon_2(\omega;\tau\Delta)\\ \epsilon_3(\omega;\tau\Delta) \end{pmatrix}\sim_{i.i.d.} N_3\left(\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}, \begin{pmatrix} 30 & 0 & 0\\ 0 & 30 & 0\\ 0 & 0 & 30 \end{pmatrix}\right), \mbox{ for }\tau=0,\cdots,T.
\end{align*} The $n$ collections $\left\{\left(\epsilon_1(\omega;\tau\Delta), \epsilon_2(\omega;\tau\Delta), \epsilon_3(\omega;\tau\Delta)\right)\right\}_{\tau=0}^T$, for $\omega\in\{1,\cdots,n\}$, are independently generated. \item\textbf{Step 4}: Compute the signals $\{(Y_1(\omega;\tau\Delta), Y_2(\omega;\tau\Delta), Y_3(\omega;\tau\Delta))\}_{\tau=0}^T$, for $\omega=1,\cdots,n$, by model (\ref{eq: data-generating model 1}). \end{itemize} \subsection{Mechanism 2} This mechanism is motivated by the Pearson correlation approach and is implemented by the following steps. \begin{itemize} \item \textbf{Step 1}: For each fixed $\omega\in\{1,\cdots,n\}$, we independently generate $(\epsilon_1(\omega;\tau\Delta), \epsilon_2(\omega;\tau\Delta), \epsilon_3(\omega;\tau\Delta))^T$, for $\tau\in\{0,\cdots,T\}$, using the following distributions: \noindent (i) if $N(\tau\Delta)=1$, we apply \begin{align}\label{eq: underlying varrho} \begin{pmatrix} \epsilon_1(\omega;\tau\Delta)\\ \epsilon_2(\omega;\tau\Delta)\\ \epsilon_3(\omega;\tau\Delta) \end{pmatrix}\sim N_3\left( \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}, \begin{pmatrix} 30 & \varrho\times30 & 0\\ \varrho\times30 & 30 & \varrho\times30\\ 0 & \varrho\times30 & 30 \end{pmatrix} \right); \end{align} (ii) if $N(\tau\Delta)=0$, we apply \begin{align*} \begin{pmatrix} \epsilon_1(\omega;\tau\Delta)\\ \epsilon_2(\omega;\tau\Delta)\\ \epsilon_3(\omega;\tau\Delta) \end{pmatrix}\sim N_3\left( \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}, \begin{pmatrix} 30 & 0 & 0\\ 0 & 30 & 0\\ 0 & 0 & 30 \end{pmatrix} \right). \end{align*} Here, $\varrho$ is the underlying correlation between the noise components at nodes 1 and 2 and at nodes 2 and 3 during task blocks. The $n$ collections $\left\{\left(\epsilon_1(\omega;\tau\Delta), \epsilon_2(\omega;\tau\Delta), \epsilon_3(\omega;\tau\Delta)\right)\right\}_{\tau=0}^T$, for $\omega\in\{1,\cdots,n\}$, are independently generated. \item\textbf{Step 2}: Compute the signals $\{(Y_1(\omega;\tau\Delta), Y_2(\omega;\tau\Delta), Y_3(\omega;\tau\Delta))\}_{\tau=0}^T$, for $\omega=1,\cdots,n$, by the following model. \begin{align}\label{eq: Mechanism 2 model} & Y_{k'}(\omega;t)=9000+ N*h_{k'}(t) + \left\{\sum_{\gamma=1}^4 \tilde{N}_{\gamma}*\tilde{h}_{k',\gamma}(t)\right\}+\epsilon_{k'}(\omega;t),\\ \notag& \mbox{where } t\in\mathcal{T}=\{\tau\Delta\}_{\tau=0}^T \mbox{ and }k'\in\{1,2,3\}. \end{align} \end{itemize} Simulated signals $\{(Y_1(\omega;\tau\Delta), Y_2(\omega;\tau\Delta), Y_3(\omega;\tau\Delta))\}_{\tau=0}^T$ generated using Mechanism 2 are presented in Web Figure \ref{fig: Signals from Mechanism 2}. \begin{figure} \centering \includegraphics[scale=0.9, angle=270]{Signals_mech_2.eps} \caption{Signals generated for nodes 1, 2, and 3 using Mechanism 2.} \label{fig: Signals from Mechanism 2} \end{figure} \section{Web Appendix F: Supplementary Figures} \subsection{Limitations of the Pearson correlation approach} Web Figure \ref{fig: HRF Pearson} illustrates the limitations of the Pearson correlation approach, defined as follows, in terms of the influence of latency/HRF variation and noise. \begin{align}\label{eq: Pearson correlation with the convolution model} \left\vert corr(P_k, P_l)\right\vert=\left\vert \int_{\mathcal{T}} \phi_k(t) \times \phi_l(t)\mu(dt)\Bigg/\sqrt{ \int_{\mathcal{T}}\left\vert \phi_k(t)\right\vert^2 \mu(dt)\times \int_{\mathcal{T}}\left\vert \phi_l(t)\right\vert^2 \mu(dt)}\right\vert.
\end{align} \begin{figure} \centering \includegraphics[scale=0.55]{HRFPearson.eps} \put(-460, -10){\small{(a): $\vert corr(P_k, P_l)\vert=0.571$}} \put(-295, -10){\small{(b)}} \put(-225, -10){\small{(c): $\vert corr(P_k, P_l)\vert=0.445$}} \put(-105, -10){\small{(d): $\vert corr(P_k, P_l)\vert=0.422$}} \caption{An example illustrating the limitations of the Pearson correlation approach in (\ref{eq: Pearson correlation with the convolution model}). Let $N(t)=\sum_{m=1}^4\mathbf{1}_{(20m-10, 20m]}(t)$ be the stimulus signal of a block design, $\mathcal{T}=[0,80]$, and $h_k$ be the \texttt{canonicalHRF} function with default parameters in the \texttt{R} package \texttt{neuRosim}. $h_k$ is illustrated by the solid blue curve in (b). Panel (a) shows the influence of the variation in latency on (\ref{eq: Pearson correlation with the convolution model}), where $h_l(t)=h_k(t+3)$, i.e., $t_{0,k}=-3$; the task-evoked terms at the $k^{th}$ and $l^{th}$ nodes are presented by the blue and red curves, respectively. In (b), $h_l$ is replaced by the \texttt{canonicalHRF} function with parameters $\texttt{a1}=10, \texttt{a2}=15, \texttt{b1}=\texttt{b2}=0.9, \texttt{c}=0.35$, and is presented by the dashed red curve. Panel (c) shows the influence of variation in the HRF on (\ref{eq: Pearson correlation with the convolution model}), and the task-evoked terms at the $k^{th}$ and $l^{th}$ nodes are presented by the blue and red curves, respectively, where $h_l$ is the red curve defined in (b). Panel (d) is a noise-contaminated version of (c), i.e., curves of $P_k(t)+\epsilon_k(t)$ (blue) and $P_l(t)+\epsilon_l(t)$ (red) with $\epsilon_k(t), \epsilon_l(t)\sim_{iid} N(0, 0.1)$ for each $t$. Panel (d) shows that random noise can further influence (\ref{eq: Pearson correlation with the convolution model}). In panels (a,c,d), the presented $\vert corr(P_k, P_l)\vert$ is computed by (\ref{eq: Pearson correlation with the convolution model}).} \label{fig: HRF Pearson} \end{figure} \subsection{A typical curve of the estimator $\mathcal{C}_{kl}^{est,n}(\xi)$} Web Figure \ref{fig: curve of C as a function of xi} presents the estimator $\mathcal{C}_{kl}^{est,n}(\xi)$ as a function of the Fourier frequency $\xi$ using synthetic BOLD signals. \begin{figure} \centering \includegraphics[scale=1]{Fourier_frequencies.eps} \caption{To generate this figure, we first generate synthetic BOLD signals using Mechanism 0 in Web Appendix E with sample size $n=308$ and underlying $\rho=0.25$ (presented by the black dashed line). Then we apply our proposed ptFCE algorithm to these synthetic BOLD signals. The solid blue curve presents the estimator $\mathcal{C}_{kl}^{est,n}(\xi)$ as a function of the Fourier frequencies $\xi\in(0, \frac{1}{2\Delta})$, and the solid orange curve presents $\{\mathcal{C}_{kl}^{est,n}(\xi)\vert\xi\in(0,0.15)\}$, from which the desired median is taken. The unreasonably large values in part of this orange curve motivate us to use the median instead of the mean of $\{\mathcal{C}_{kl}^{est,n}(\xi)\vert\xi\in(0,0.15)\}$.
The dotted red line presents the median of $\mathcal{C}_{kl}^{est,n}(\xi)$ across $\xi\in(0, 0.15)$, i.e., the output of the ptFCE algorithm for the synthetic BOLD signals.} \label{fig: curve of C as a function of xi} \end{figure} \section{Web Appendix G: Details of Beta-series Regression and Coherence Analysis} \noindent\textbf{Beta-series regression:} We apply the procedure described in Chapter 9.2 of \cite{ashby2019statistical} to estimate task-evoked FC using beta-series regression, except that we ignore the ``nuisance term'' therein. For each subject $\omega$ and underlying $\rho_{ij}$ or $\varrho_{ij}$ with $i<j$, we apply beta-series regression to the signals $\{Y_i(\omega;\tau\Delta), Y_j(\omega;\tau\Delta)\}_{\tau=0}^T$, estimate the FC evoked by the task of interest $N(t)$, and denote the estimated quantity by $\hat{\rho}^{betaS}_{ij,\omega}$. Then we compute the mean and median of $\{\hat{\rho}^{betaS}_{ij,\omega}\}_{\omega=1}^{n}$ across all $\omega=1,\cdots,n$ and denote them by $\hat{\rho}^{betaS}_{ij,mean}$ and $\hat{\rho}^{betaS}_{ij,median}$. \noindent\textbf{Coherence analysis:} For each $\omega$ and $\rho_{ij}$ or $\varrho_{ij}$, compute the coherence between the signals $\{Y_i(\omega;\tau\Delta)\}_\tau$ and $\{Y_j(\omega;\tau\Delta)\}_{\tau}$ by the \texttt{R} function \texttt{coh} in the package \texttt{seewave}. The coherence is a function $coh_{ij, \omega}(\xi)$ of $\xi$. Since HRFs act as band-pass filters ($0-0.15$ Hz, \cite{aguirre1997empirical}), compute the median of $coh_{ij, \omega}(\xi)$ across all $\xi\in(0,0.15)$ and denote it by $\hat{\rho}^{Coh}_{ij, \omega}$. The mean and median of $\{\hat{\rho}^{Coh}_{ij, \omega}\}_{\omega=1}^{n}$ across all $\omega$ are denoted by $\hat{\rho}^{Coh}_{ij, mean}$ and $\hat{\rho}^{Coh}_{ij, median}$, respectively. An \texttt{R} sketch of this coherence recipe is given at the end of this document. \section{Web Appendix H: Future Research} An interesting extension of our proposed model is the inclusion of an interaction term between $P_k(\omega;t)$ and $Q_k(\omega;t)$ in the model $Y_k(\omega;t)=P_k(\omega;t) + Q_k(\omega;t) + \epsilon_k(\omega;t)$. In order to experimentally validate whether the inclusion of such an interaction term would be biologically relevant, novel experimental designs would be needed. In particular, we may consider an experiment where the subjects are at rest for a certain period during the scanning session followed by the performance of the task. Using this design, we may obtain estimates of $P_k$ and $Q_k$ under the model with and without an interaction term and discuss the biological relevance of the results. Future work may investigate this issue and develop theoretical conditions for the identifiability of terms in a model with an interaction term, as well as estimation algorithms. In many applications, task-evoked FC at the subject level instead of the population level is of interest. In our subsequent research, we will propose a framework parallel to the ptFC one at the subject level. Additionally, applying the \textit{persistent homology} (PH) framework to FC estimates is an effective way of circumventing the choice of threshold for FC measurements; see, e.g., \cite{lee2001relative}. Applying the PH approach to our proposed ptFCs is left for future research as well.
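As referenced in Web Appendix G, the following \texttt{R} sketch illustrates the coherence-analysis recipe for one node pair $(i,j)$; it is a minimal sketch under stated assumptions, not the exact analysis code. The array \texttt{Y} (BOLD signals with dimensions $n\times(T+1)\times K$) and the loop structure are hypothetical placeholders; \texttt{coh} from the package \texttt{seewave} is the function named in the appendix, and we assume \texttt{seewave}'s convention of reporting frequencies in kHz.
\begin{verbatim}
library(seewave)  # provides coh

Delta <- 0.72        # repetition time in seconds
fs    <- 1 / Delta   # sampling frequency in Hz

## Y: hypothetical n x (T+1) x K array of BOLD signals; i, j: node pair.
rho_coh <- numeric(n)
for (omega in 1:n) {
  co <- coh(Y[omega, , i], Y[omega, , j], f = fs, plot = FALSE)
  xi <- co[, 1] * 1000          # seewave reports kHz; convert to Hz
  keep <- xi > 0 & xi < 0.15    # HRFs act as 0-0.15 Hz band-pass filters
  rho_coh[omega] <- median(co[keep, 2])  # per-subject coherence summary
}
rho_coh_mean   <- mean(rho_coh)    # population-level mean summary
rho_coh_median <- median(rho_coh)  # population-level median summary
\end{verbatim}
The per-subject median over $(0,0.15)$ Hz mirrors the band-pass argument of \cite{aguirre1997empirical}, and averaging across subjects yields the population-level summaries used in the comparisons.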
\section{INTRODUCTION} \label{Introduction} Owing to their versatility and high mobility, unmanned aerial vehicles (UAVs) have been considered a promising new paradigm to facilitate scenarios including communication relay, hazardous exploration, traffic control, and disaster management \cite{ChengX19JCIN,FeiZS19IoTJ,ZhongWZ19ChCom}. Compared with conventional vehicle-to-vehicle communication, UAV-to-vehicle (U2V) communication has some unique channel characteristics due to the UAV flight, e.g., the three-dimensional (3D) scattering space, 3D arbitrary trajectory, 3D antenna arrangement, and 3D fuselage posture \cite{ZhuQM19WCL}. These new features cause non-stationarity different from that of traditional communication channels, which conventional channel models cannot describe appropriately. To better design and evaluate future U2V communication systems, it is essential to establish a realistic and reliable U2V channel model \cite{Ullah20TCCN,GuanK19MAP,ZhuQM20IWCMC}. In the past decades, there has been a considerable amount of research on UAV channel modeling. These works can be classified into two categories, i.e., small-scale fading models and large-scale fading models \cite{Khawaja19CST}. The former study the fast time-variant parameters like the Doppler shift and channel fading, while the latter focus on the slowly time-variant parameters such as the path loss and shadowing \cite{FengW20WC,FengW19IoTJ,AlHourani18WCL}. Due to the fast time-variant and non-stationary characteristics of U2V communication scenarios, the small-scale fading becomes more severe. Among the existing small-scale fading channel models, the geometry-based stochastic model (GBSM) has been widely accepted \cite{WangWM20IJEC,ZhaoXW19ChCom,ZhuQM20EuCAP}, as it has moderate complexity and accuracy compared with deterministic and other stochastic models. For example, a basic 3D stationary GBSM was proposed in \cite{Gulfam16AS} to study UAV channels. The authors in \cite{JiangLG17CL} considered the channel non-stationarity by adding the dynamic evolution of scatterers into the model. These models assumed that the terminals were fixed, which limits the versatility of such models. Different from these basic UAV channel models, U2V channel models focus on the movements of both the UAV and the ground terminal, and have become a significant research topic. Some existing U2V channel models assumed that the UAV or the vehicle moves at a uniform speed in a straight line \cite{ZhangZC18CL,GuanK19VTC, WangCX17TWC}. However, in a practical environment, the vehicle may experience velocity changes and arbitrary trajectories. The non-stationary U2V channel models in \cite{Borhani17TVT,ZhuQM20Access} assumed that both the UAV and the vehicle moved with fixed velocities, which is not consistent with practice. Some modified non-stationary channel models were proposed by taking the 3D speed variation of the UAV into consideration \cite{WangCX20IoTJ,ChengX19IETCom1,ChengX19IETCom2}, but the movement of the ground terminal remained fixed. By setting fixed trajectories, the authors in \cite{GuanK19AWPL,GuanK19ISAP,HeDP18EuCAP} proposed more realistic UAV channel models and studied the related simulations. Moreover, the authors in \cite{ZhuQM18ChCom,ZhuQM19MAP} proposed a U2V multiple-input multiple-output (MIMO) channel model in which the transceivers moved along 3D trajectories. However, the time-variant fuselage posture rotation, which is inevitable in realistic scenarios, was not considered.
Note that the aforementioned U2V channel models considered the linear motion, curved motion, or 3D arbitrary motion of UAVs. However, the posture rotation (including roll, pitch, and yaw) of the UAV has not been considered. Recently, the impact of drone pitch was investigated in \cite{AiB20TVT}, and some corresponding statistical properties were derived. However, to the best of the authors' knowledge, a thorough study of the 3D posture and a realistic U2V channel model incorporating the fuselage posture are still missing. This paper aims to fill this research gap. The main contributions and innovations of this paper are summarized as follows: 1) A realistic non-stationary U2V MIMO channel model supporting 3D scattering space, 3D arbitrary trajectory, 3D antenna arrangement, and 3D fuselage posture is proposed. It pays special attention to the UAV's 3D posture rotation, i.e., pitch, yaw, and roll. Thus, the non-stationarity caused by the fuselage posture is taken into account. In order to support the 3D trajectory and rotation of the UAV, this paper introduces rotation matrices to model the parameters related to the time-variant velocity and posture. 2) The expressions of some key statistical properties, i.e., the temporal autocorrelation function (ACF) and the spatial cross-correlation function (CCF), are obtained and verified by analytical and simulation results. Besides, the different influences of the UAV posture on the channel statistical properties are simulated and discussed. The observations and conclusions can serve as a reference for the system design and performance analysis of U2V MIMO communication systems. 3) The generality of the proposed non-stationary U2V channel model is validated by comparing its statistical properties with measurement results in different scenarios. The simulated results show good consistency with the measured results. Therefore, the presented model can be adapted to diverse UAV communication scenarios by adjusting the model parameters. The remainder of this paper is organized as follows. In Section~\ref{sec:U2V Channel Model}, a new 3D non-stationary channel model for U2V communications incorporating the fuselage posture is presented. Section~\ref{sec:Statistical Property Analysis of Proposed Model} studies the spatial-temporal correlation and the rotation matrix with the effective phase, along with two typical statistical properties of the proposed model. Section~\ref{sec:Numerical Results and Discussions} compares and discusses the analytical and simulation results. Finally, conclusions are drawn in Section~\ref{sec:Conclusions}. \section{U2V CHANNEL MODEL INCORPORATING FUSELAGE POSTURE} \label{sec:U2V Channel Model} \subsection{Channel Models Comparison} To illustrate the contribution of this paper, the proposed model, the model in \cite{ZhuQM18ChCom,ZhuQM19MAP}, and the standardized models, i.e., ITU-R \cite{A}, METIS \cite{B}, mmMAGIC \cite{D}, IEEE 802.11ad \cite{C}, 3GPP \cite{3GPP2020}, and QuaDRiGa \cite{F}, are investigated and compared in Table~\ref{table1}. The supported frequency bands and scenarios are listed explicitly. Features like MIMO, 3D propagation, dual mobility, non-stationarity, and fuselage posture are indicated with a tick or a cross. Note that most standardized channel models cannot support both the non-stationarity and the fuselage posture; the proposed model aims to fill this gap.
\begin{table*}[htb] \caption{Comparison with different channel models.} \setlength{\tabcolsep}{3pt} \begin{tabular}{|p{100pt}|p{70pt}|p{80pt}|p{40pt}|p{40pt}|p{40pt}|p{40pt}|p{40pt}|} \hline \makecell[c]{Standards}& \makecell[c]{Frequency}&\makecell[c]{Scenario}&\makecell[c]{MIMO}&3D propagation&Dual mobility&Non-stationary&Fuselage posture\\ \hline \makecell[c]{ITU-R M \cite{A}}&\makecell[c]{Up to 6 GHz}&\makecell[c]{Dense urban}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\times$}&\makecell[c]{$\times$}&\makecell[c]{$\times$}\\ \hline \makecell[c]{METIS \cite{B}}&\makecell[c]{2-60 GHz}&\makecell[c]{Outdoor/indoor}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\times$}&\makecell[c]{$\times$}&\makecell[c]{$\times$}\\ \hline \makecell[c]{mmMAGIC \cite{D}}&\makecell[c]{6-100 GHz}&\makecell[c]{Outdoor/indoor}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\times$}&\makecell[c]{$\times$}&\makecell[c]{$\times$}\\ \hline \makecell[c]{IEEE 802.11ad \cite{C}}&\makecell[c]{Up to 60 GHz}&\makecell[c]{Indoor}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\times$}&\makecell[c]{$\times$}\\ \hline \makecell[c]{3GPP TR 38.901 \cite{3GPP2020}}&\makecell[c]{0.5-100 GHz}&\makecell[c]{Outdoor/indoor}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\times$}&\makecell[c]{$\times$}\\ \hline \makecell[c]{QuaDRiGa \cite{F}}&\makecell[c]{Up to 100 GHz}&\makecell[c]{Indoor/satellite}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\times$}\\ \hline \makecell[c]{Model in \cite{ZhuQM18ChCom,ZhuQM19MAP}}&\makecell[c]{0.5-100 GHz}&\makecell[c]{Outdoor}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\times$}\\ \hline \makecell[c]{The proposed model}&\makecell[c]{0.5-100 GHz}&\makecell[c]{Outdoor}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}&\makecell[c]{$\surd$}\\ \hline \end{tabular} \label{table1} \end{table*} \subsection{Channel Impulse Response of the Proposed Model} The proposed non-stationary MIMO GBSM for U2V communication systems is shown in Fig.~\ref{fig1}. The mobile transmitter (Tx), i.e., the UAV, is equipped with $P$ antennas, while the mobile receiver (Rx), i.e., the vehicle on the ground, is equipped with $Q$ antennas. The proposed model consists of a line-of-sight (LoS) component and non-line-of-sight (NLoS) components, which are equivalent to single-bounce components in this case. It should be noted that the Tx and Rx belong to two different coordinate systems, marked as the $\widetilde{x}\text{ - }\widetilde{y}\text{ - }\widetilde{z}$ coordinate system and the $x\text{ - }y\text{ - }z$ coordinate system, respectively. The two origins are the centers of the Tx and the Rx, respectively. To describe the posture of the fuselage, three posture angles, i.e., roll, pitch, and yaw, are introduced in the coordinate system. The detailed definitions of the parameters in Fig.~\ref{fig1} are listed in Table~\ref{table2}.
\begin{figure}[htbp] \centering{\includegraphics[width=0.5\textwidth]{fig1.eps}} \caption{3D non-stationary GBSM for U2V channels.} \label{fig1} \end{figure} \begin{table}[htb] \caption{Parameter definitions in the proposed model} \setlength{\tabcolsep}{3pt} \begin{tabular}{|p{80pt}|p{150pt}|} \hline ${{\mathbf{v}}^{\text{Tx}}}\left( t \right)$, ${{\mathbf{v}}^{\text{Rx}}}\left( t \right)$, $\mathbf{v}_{n}^{\text{Scatt}}\left( t \right)$& 3D velocity vectors of the Tx, Rx, and scatterers\\ \hline & \\[-10pt] $\alpha _{\text{LoS}}^{\text{Tx}}\left( t \right)$, $\alpha _{\text{LoS}}^{\text{Rx}}\left( t \right)$& Azimuth angle of departure and arrival for Tx and Rx in the LoS path\\ \hline & \\[-10pt] $\beta _{\text{LoS}}^{\text{Tx}}\left( t \right)$, $\beta _{\text{LoS}}^{\text{Rx}}\left( t \right)$& Elevation angle of departure and arrival for Tx and Rx in the LoS path\\ \hline & \\[-10pt] $\alpha _{n,m}^{\text{Tx}}\left( t \right)$, $\alpha _{n,m}^{\text{Rx}}\left( t \right)$& Azimuth angle of departure and arrival for Tx and Rx in the NLoS paths\\ \hline & \\[-10pt] $\beta _{n,m}^{\text{Tx}}\left( t \right)$, $\beta _{n,m}^{\text{Rx}}\left( t \right)$& Elevation angle of departure and arrival for Tx and Rx in the NLoS paths\\ \hline & \\[-10pt] ${{\phi }^{\text{Tx}}}\left( t \right)$, ${{\phi }^{\text{Rx}}}\left( t \right)$& Azimuth angle of the velocity vector of Tx and Rx, respectively\\ \hline & \\[-10pt] ${{\theta }^{\text{Tx}}}\left( t \right)$, ${{\theta }^{\text{Rx}}}\left( t \right)$& Elevation angle of the velocity vector of Tx and Rx, respectively\\ \hline & \\[-10pt] $\omega \left( t \right)$, $\gamma \left( t \right)$, $\varphi \left( t \right)$& Roll, pitch, and yaw angle of the UAV posture, respectively\\ \hline \end{tabular} \label{table2} \end{table} The U2V MIMO channel between the UAV equipped with $P$ antenna elements and the vehicle equipped with $Q$ antenna elements can be defined as a complex matrix \cite{ZhuQM18TC}, i.e., \begin{equation}\text{H}\left( t,\tau \right)\!=\!{{\left[ \begin{matrix} {{h}_{1,1}}\left( t,\tau \right)\! & \!{{h}_{1,2}}\left( t,\tau \right)\! & \!\cdots \! & \!{{h}_{1,P}}\left( t,\tau \right) \\ {{h}_{2,1}}\left( t,\tau \right)\! &\! {{h}_{2,2}}\left( t,\tau \right)\! &\! \cdots\! &\! {{h}_{2,P}}\left( t,\tau \right) \\ \vdots \! & \!\vdots \! & \!\ddots \!& \!\vdots \\ {{h}_{Q,1}}\left( t,\tau \right)\! &\! {{h}_{Q,2}}\left( t,\tau \right) \!& \!\cdots \! & \!{{h}_{Q,P}}\left( t,\tau \right) \\ \end{matrix} \right]}_{Q\!\times\! P}}\label{eq1}\end{equation} where ${{h}_{qp}}\left( t,\tau \right)$ denotes the complex channel impulse response (CIR) between the $p$-th transmitting antenna element and the $q$-th receiving antenna element.
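As a bookkeeping aid, (\ref{eq1}) simply stacks the per-link CIRs into a $Q\times P$ array; the following minimal sketch shows this data structure (array shapes and the per-link generator are illustrative placeholders, not part of the model).
\begin{verbatim}
# Minimal bookkeeping sketch for the channel matrix in (1): H is a
# Q x P array of per-link CIRs h_qp(t, tau), sampled on a common
# time/delay grid. Shapes and the per-link generator are assumptions.
import numpy as np

Q, P = 2, 4            # receive / transmit antenna elements
T, L = 1000, 32        # time samples and delay bins (assumed)

def cir_qp(q, p):      # placeholder for the model of h_qp(t, tau)
    return np.zeros((T, L), dtype=complex)

H = np.empty((Q, P, T, L), dtype=complex)
for q in range(Q):
    for p in range(P):
        H[q, p] = cir_qp(q, p)
\end{verbatim}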
Moreover, each CIR can be modeled as a superposition of a LoS path component and several NLoS path components, i.e., \begin{equation}\begin{aligned} & {{h}_{qp}}\left( t,\tau \right)\text{=}\sqrt{\frac{K}{K+1}}{{B}^{\text{LoS}}}\left( t \right)h_{qp}^{\text{LoS}}\left( t \right)\delta \left( \tau -{{\tau }^{\text{LoS}}}\left( t \right) \right) \\ & +\sqrt{\frac{1}{K+1}}\sum\limits_{n=1}^{N\left( t \right)}{B_{n}^{\text{NLoS}}\left( t \right)h_{qp,n}^{\text{NLoS}}\left( t \right)\delta \left( \tau -\tau _{n}^{\text{NLoS}}\left( t \right) \right)} \end{aligned}\label{eq2}\end{equation} where $K$ denotes the Ricean factor of the LoS path, ${{B}^{\text{LoS}}}\left( t \right)$ and $B_{n}^{\text{NLoS}}\left( t \right)$ are variables reflecting the birth-death process of the clusters in the NLoS paths, ${{\tau }^{\text{LoS}}}\left( t \right)$ and $\tau _{n}^{\text{NLoS}}\left( t \right)$ denote the path delays, $N\left( t \right)$ denotes the number of valid scattering paths, and the complex channel coefficients $h_{qp}^{\text{LoS}}\left( t \right)$ and $h_{qp,n}^{\text{NLoS}}\left( t \right)$ can be expressed as (\ref{eq3}) and (\ref{eq4}), \begin{figure*}[htb] \small \begin{equation} h_{qp}^{\text{LoS}}\left( t \right)={{\left[ \begin{matrix} F_{p,\text{V}}^{\text{Tx}}\left( \alpha _{\text{LoS}}^{\text{Tx}}\left( t \right),\beta _{\text{LoS}}^{\text{Tx}}\left( t \right) \right) \\ F_{p,\text{H}}^{\text{Tx}}\left( \alpha _{\text{LoS}}^{\text{Tx}}\left( t \right),\beta _{\text{LoS}}^{\text{Tx}}\left( t \right) \right) \\ \end{matrix} \right]}^{T}}\left[ \begin{matrix} 1 & 0 \\ 0 & -1 \\ \end{matrix} \right]\left[ \begin{matrix} F_{q,\text{V}}^{\text{Rx}}\left( \alpha _{\text{LoS}}^{\text{Rx}}\left( t \right),\beta _{\text{LoS}}^{\text{Rx}}\left( t \right) \right) \\ F_{q,\text{H}}^{\text{Rx}}\left( \alpha _{\text{LoS}}^{\text{Rx}}\left( t \right),\beta _{\text{LoS}}^{\text{Rx}}\left( t \right) \right) \\ \end{matrix} \right]{{e}^{\text{j}\left( \Phi _{\text{I}}^{\text{LoS}}\left( t \right)\text{+}\Phi _{\text{D}}^{\text{LoS}}\left( t \right)+\Phi _{{{\text{A}}_{qp}}}^{\text{LoS}}\left( t \right) \right)}} \label{eq3}\end{equation} \end{figure*} \begin{figure*}[htb] \footnotesize \begin{equation} h_{qp,n}^{\text{NLoS}}\!\left( t \right)\text{=}\!\underset{M\to \infty }{\mathop{\lim }}\,\!\!\!\sqrt{\!\frac{1}{M}}\!\!\!\sum\limits_{m=1}^{M}{{{\!\!\left[ \begin{matrix} F_{p,\!\text{V}}^{\text{Tx}}\!\left( \alpha _{\text{n,m}}^{\text{Tx}}\!\left( t \right)\!,\beta _{\text{n,m}}^{\text{Tx}}\!\left( t \right) \right) \\ F_{p,\!\text{H}}^{\text{Tx}}\!\left( \alpha _{\text{n,m}}^{\text{Tx}}\!\left( t \right)\!,\beta _{\text{n,m}}^{\text{Tx}}\!\left( t \right) \right) \\ \end{matrix} \right]}^{T}}\!\!\left[ \begin{matrix} {{e}^{\text{j}\Phi _{n,m}^{\text{VV}}}} \!\!&\!\!\!\! \sqrt{\!{{\kappa }_{n\!,\!m}}^{\!\text{-}1}}{{e}^{\text{j}\Phi _{n\!,\!m}^{\text{VH}}}} \\ \sqrt{{{\kappa }_{n\!,\!m}}^{\!\text{-}1}}{{e}^{\text{j}\Phi _{n\!,\!m}^{\text{HV}}}}\!\!\! &\!\!\!\!\! {{e}^{\text{j}\Phi _{n,m}^{\text{HH}}}} \\ \end{matrix} \right]\!\!\!\left[ \begin{matrix} F_{q,\!\text{V}}^{\text{Rx}}\!\left( \alpha _{\text{n,m}}^{\text{Rx}}\!\left( t \right)\!,\beta _{\text{n,m}}^{\text{Rx}}\!\left( t \right) \right) \\ F_{q,\!\text{H}}^{\text{Rx}}\!\left( \alpha _{\text{n,m}}^{\text{Rx}}\!\left( t \right)\!,\beta _{\text{n,m}}^{\text{Rx}}\!\left( t \right) \right) \\ \end{matrix} \right]{{e}^{\text{j}\left( \!\Phi _{{{\text{I}}_{n\!,\!m}}}^{\text{NLoS}}\!\left(\! t \!\right)\text{+}\Phi _{{{\text{D}}_{n\!,\!m}}}^{\text{NLoS}}\!\left(\! t \!\right)\text{+}\Phi _{{{\text{A}}_{q\!p\!,\!n\!,\!m}}}^{\text{NLoS}}\!\left( \!t\! \right) \!\right)}}} \label{eq4}\end{equation} \end{figure*} where $M$ is the number of sub-paths in a path, and $F_{p,\text{V}}^{\text{Tx}}$, $F_{p,\text{H}}^{\text{Tx}}$, $F_{q,\text{V}}^{\text{Rx}}$, and $F_{q,\text{H}}^{\text{Rx}}$ denote the vertically and horizontally polarized field components of the transmitter and receiver, respectively. For the $m$-th sub-path in the $n$-th path, $\Phi _{n,m}^{\text{VV}}$, $\Phi _{n,m}^{\text{VH}}$, $\Phi _{n,m}^{\text{HV}}$, and $\Phi _{n,m}^{\text{HH}}$ are the random initial phases of the four polarization combinations, which are uniformly distributed within $\left(-\pi,\pi\right)$, and ${{\kappa }_{n,m}}$ is the cross-polarization power ratio. Besides, $\Phi _{{{\text{I}}_{n,m}}}^{\text{LoS/NLoS}}\left( t \right)$ is a random initial phase obeying the uniform distribution over $\left( 0,\text{ }2\pi \right]$, and $\Phi _{{{\text{D}}_{n,m}}}^{\text{LoS/NLoS}}\left( t \right)$ is the time-variant Doppler phase caused by the Doppler frequency variation, which can be derived as \begin{equation} \Phi _{\text{D}}^{\text{LoS}}\left( t \right)\!\text{=}k\!\!\int_{0}^{t}{\!\left(\! {{\mathbf{v}}^{\text{Tx}}}\left( t' \right)\!\cdot\! \mathbf{s}_{\text{LoS}}^{\text{Tx}}\left( t' \right)\text{+}{{\mathbf{v}}^{\text{Rx}}}\left( t' \right)\!\cdot\! \mathbf{s}_{\text{LoS}}^{\text{Rx}}\left( t' \right)\!\right)\!\text{ }}\text{d}t' \label{eq5}\end{equation} \begin{equation}\Phi _{{{\text{D}}_{n,m}}}^{\text{NLoS}}\left( t \right)\!=\!k\!\int_{0}^{t}{\left[ \begin{aligned} & \left( {{\mathbf{v}}^{\text{Tx}}}\left( t' \right)\!-\!\mathbf{v}_{n}^{\text{Scatt}}\left( t' \right) \right)\!\cdot\! \mathbf{s}_{n,m}^{\text{Tx}}\left( t' \right) \\ & \text{+}\left( {{\mathbf{v}}^{\text{Rx}}}\left( t' \right)\!-\!\mathbf{v}_{n}^{\text{Scatt}}\left( t' \right) \right)\!\cdot\! \mathbf{s}_{n,m}^{\text{Rx}}\left( t' \right) \\ \end{aligned} \right]}\text{ d}t'\label{eq6}\end{equation} where $k=2\pi {{f}_{0}}/{{c}_{0}}$ denotes the wave number, and ${{f}_{0}}$ and ${{c}_{0}}$ represent the carrier frequency and the propagation speed of the electromagnetic wave, respectively. $\mathbf{s}_{\text{LoS}}^{\text{Tx}}\left( t \right)$, $\mathbf{s}_{\text{LoS}}^{\text{Rx}}\left( t \right)$, $\mathbf{s}_{n,m}^{\text{Tx}}\left( t \right)$, and $\mathbf{s}_{n,m}^{\text{Rx}}\left( t \right)$ are the unit vectors of the departure and arrival angles of the LoS path and of the $m$-th sub-path within the $n$-th NLoS path, respectively. Furthermore, they can be defined by \begin{equation} \mathbf{s}_{\text{LoS}/n,m}^{\text{Tx}/\text{Rx}}\!\left( t \right)\!=\!\left[ \begin{matrix} \cos \beta _{\text{LoS/}n,m}^{\text{Tx}/\text{Rx}}\left( t \right)\cos \alpha _{\text{LoS/}n,m}^{\text{Tx}/\text{Rx}}\left( t \right) \\ \cos \beta _{\text{LoS/}n,m}^{\text{Tx}/\text{Rx}}\left( t \right)\sin \alpha _{\text{LoS/}n,m}^{\text{Tx}/\text{Rx}}\left( t \right) \\ \sin \beta _{\text{LoS/}n,m}^{\text{Tx}/\text{Rx}}\left( t \right) \\ \end{matrix} \right]. \label{eq7}\end{equation} Note that the term $\Phi _{{{\text{A}}_{qp,n,m}}}^{\text{LoS/NLoS}}\left( t \right)$ in (\ref{eq3})--(\ref{eq4}) is the time-varying spatial phase related to the movement direction and the UAV posture, which can be expressed as \begin{equation}\begin{aligned} & \Phi _{{{\text{A}}_{qp,n,m}}}^{\text{LoS/NLoS}}\left( t \right)=k\left( \mathbf{r}_{p}^{\text{Tx}}\cdot {{\mathbf{R}}^{\text{Tx}}}\left( t \right)\cdot {{\mathbf{R}}^{\text{P}}}\left( t \right)\cdot \mathbf{s}_{\text{LoS/}n,m}^{\text{Tx}}\left( t \right) \right) \\ & +k\left( \mathbf{r}_{q}^{\text{Rx}}\cdot {{\mathbf{R}}^{\text{Rx}}}\left( t \right)\cdot \mathbf{s}_{\text{LoS/}n,m}^{\text{Rx}}\left( t \right) \right) \end{aligned} \label{eq8}\end{equation} where $\mathbf{r}_{p}^{\text{Tx}}$ and $\mathbf{r}_{q}^{\text{Rx}}$ are the position vectors of the $p$-th transmitting antenna and the $q$-th receiving antenna, respectively, and the rotation matrix ${{\mathbf{R}}^{i}}\left( t \right)$, $i\in \left\{ \text{Tx},\text{Rx} \right\}$, which modifies the position vector of the Tx or Rx in real time \cite{ZhuQM20Sensors}, can be expressed as \begin{equation} \begin{aligned} & {{\mathbf{R}}^{i}}\left( t \right) \\ & \!\!=\!\!\left[ \begin{matrix}\! \cos {{\theta }^{i}}\left( t \right)\cos {{\phi }^{i}}\left( t \right) \!\!& \!-\!\sin {{\phi }^{i}}\left( t \right) \!\!&\!\! \!-\!\sin {{\theta }^{i}}\left( t \right)\cos {{\phi }^{i}}\left( t \right) \\ \cos {{\theta }^{i}}\left( t \right)\sin {{\phi }^{i}}\left( t \right)\!\! & \!\!\cos {{\phi }^{i}}\left( t \right)\!\! &\!\! \!-\!\sin {{\theta }^{i}}\left( t \right)\sin {{\phi }^{i}}\left( t \right) \\ \sin {{\theta }^{i}}\left( t \right) \!\!&\!\! 0 \!\!&\!\! \cos {{\theta }^{i}}\left( t \right) \\ \end{matrix} \right] \\ \end{aligned} \label{eq9} \end{equation} To take the posture variation of the UAV into account, a specific matrix ${{\mathbf{R}}^{\text{P}}}\left( t \right)$ is introduced in this paper. The time-variant roll angle $\omega \left( t \right)$, yaw angle $\varphi \left( t \right)$, and pitch angle $\gamma \left( t \right)$ are the Euler angles used when converting the world coordinate system $x\text{-}y\text{-}z$ to the fuselage coordinate system $\tilde{x}\text{-}\tilde{y}\text{-}\tilde{z}$, with rotations about the $z$ axis, the $y$ axis, and the $x$ axis, respectively. To simplify the formulas, these time-variant parameters are written as $\omega $, $\varphi $, and $\gamma $. Note that $\omega \in \left[ -\pi ,\pi \right]$, $\varphi \in \left[ 0,2\pi \right)$, and $\gamma \in \left[ -\pi ,\pi \right]$.
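Before introducing the posture matrices, note that in practice the Doppler phases (\ref{eq5}) and (\ref{eq6}) are accumulated numerically along the trajectories. The following minimal sketch illustrates this for the LoS case; the velocities, LoS unit vectors, and time grid are illustrative assumptions, not the simulation settings used later.
\begin{verbatim}
# Numerical sketch of the LoS Doppler phase in (5): the cumulative
# integral of the velocity projections on the LoS unit vectors of
# (7). Velocities and directions below are illustrative assumptions
# and are kept constant, although they are time-variant in general.
import numpy as np
from scipy.integrate import cumulative_trapezoid

f0, c0 = 2.4e9, 3e8
k = 2 * np.pi * f0 / c0                  # wave number
t = np.linspace(0.0, 2.0, 2001)          # time grid (s)

v_tx = np.tile([50.0, 0.0, 0.0], (t.size, 1))   # assumed Tx velocity
v_rx = np.tile([20.0, 0.0, 0.0], (t.size, 1))   # assumed Rx velocity
s_tx = np.tile([0.6, 0.0, -0.8], (t.size, 1))   # assumed LoS unit vectors
s_rx = np.tile([-0.6, 0.0, 0.8], (t.size, 1))

integrand = np.sum(v_tx * s_tx, axis=1) + np.sum(v_rx * s_rx, axis=1)
phi_d_los = k * cumulative_trapezoid(integrand, t, initial=0.0)
\end{verbatim}
With the Doppler phase accumulated in this way, the remaining time-variant factors in (\ref{eq8}) are the rotation matrices, which are constructed next.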
Then, the transfer matrices from the fuselage coordinate system to the world coordinate system are defined as \begin{equation}{{\mathbf{R}}_{x}}=\left[ \begin{matrix} 1 & 0 & 0 \\ 0 & \cos \left( \gamma \right) & -\sin \left( \gamma \right) \\ 0 & \sin \left( \gamma \right) & \cos \left( \gamma \right) \\ \end{matrix} \right]\label{eq10}\end{equation} \begin{equation}{{\mathbf{R}}_{y}}=\left[ \begin{matrix} \cos \left( \varphi \right) & 0 & \sin \left( \varphi \right) \\ 0 & 1 & 0 \\ -\sin \left( \varphi \right) & 0 & \cos \left( \varphi \right) \\ \end{matrix} \right]\label{eq11}\end{equation} \begin{equation}{{\mathbf{R}}_{z}}=\left[ \begin{matrix} \cos \left( \omega \right) & -\sin \left( \omega \right) & 0 \\ \sin \left( \omega \right) & \cos \left( \omega \right) & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right].\label{eq12}\end{equation} \newcounter{TempEqCnt} \setcounter{TempEqCnt}{\value{equation}} \setcounter{equation}{12} \begin{figure*}[htbp] \normalsize \begin{equation}\begin{aligned} & {{\mathbf{R}}^{\text{P}}}\left( t \right)={{\mathbf{R}}_{z}}{{\mathbf{R}}_{y}}{{\mathbf{R}}_{x}} \\ & =\left[ \begin{matrix} \cos \left( \omega \right)\cos \left( \varphi \right) & \cos \left( \omega \right)\sin \left( \varphi \right)\sin \left( \gamma \right)-\sin \left( \omega \right)\cos \left( \gamma \right) & \cos \left( \omega \right)\sin \left( \varphi \right)\cos \left( \gamma \right)+\sin \left( \omega \right)\sin \left( \gamma \right) \\ \sin \left( \omega \right)\cos \left( \varphi \right) & \sin \left( \omega \right)\sin \left( \varphi \right)\sin \left( \gamma \right)+\cos \left( \omega \right)\cos \left( \gamma \right) & \sin \left( \omega \right)\sin \left( \varphi \right)\cos \left( \gamma \right)-\cos \left( \omega \right)\sin \left( \gamma \right) \\ -\sin \left( \varphi \right) & \cos \left( \varphi \right)\sin \left( \gamma \right) & \cos \left( \varphi \right)\cos \left( \gamma \right) \\ \end{matrix} \right] \\ \end{aligned}\label{eq13}\end{equation} \end{figure*} \setcounter{equation}{\value{TempEqCnt}} Then the rotation matrix between the fuselage coordinate system and the world coordinate system can be calculated: the composite posture matrix is given by (\ref{eq13}), and the coordinate transformation can be obtained by \setcounter{equation}{13} \begin{equation} \left[ \begin{matrix} x \\ y \\ z \\ \end{matrix} \right]={{\mathbf{R}}^{\text{P}}}(t)\left[ \begin{matrix} {\tilde{x}} \\ {\tilde{y}} \\ {\tilde{z}} \\ \end{matrix} \right]. \label{eq14} \end{equation} Based on the posture matrix, the new model takes the posture variation of the UAV into account and can thus describe arbitrary UAV posture rotations by setting the 3D rotational angles $\omega $, $\varphi $, and $\gamma $. \section{STATISTICAL PROPERTY ANALYSIS OF PROPOSED MODEL} \label{sec:Statistical Property Analysis of Proposed Model} Due to the randomness of the channel, statistical characteristics are generally used to evaluate its quality. When the channel model incorporates the fuselage posture, some typical channel statistical properties change, and thus their expressions need to be modified.
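Since all statistical properties derived below enter through the effective phase (\ref{eq8}), it is useful to fix the posture machinery of (\ref{eq10})--(\ref{eq14}) numerically first. The following minimal sketch (all numerical values are illustrative assumptions) composes the posture matrix of (\ref{eq13}) and applies the transformation (\ref{eq14}):
\begin{verbatim}
# Minimal sketch of (10)-(14): compose the posture matrix
# R_P = Rz(omega) Ry(varphi) Rx(gamma) and map a fuselage-frame
# antenna position to the world frame. All values are assumptions.
import numpy as np

def rx(g):   # pitch, rotation about the x axis, eq. (10)
    return np.array([[1, 0, 0],
                     [0, np.cos(g), -np.sin(g)],
                     [0, np.sin(g),  np.cos(g)]])

def ry(v):   # yaw, rotation about the y axis, eq. (11)
    return np.array([[ np.cos(v), 0, np.sin(v)],
                     [ 0,         1, 0        ],
                     [-np.sin(v), 0, np.cos(v)]])

def rz(w):   # roll, rotation about the z axis, eq. (12)
    return np.array([[np.cos(w), -np.sin(w), 0],
                     [np.sin(w),  np.cos(w), 0],
                     [0,          0,         1]])

def posture(w, v, g):                  # eq. (13)
    return rz(w) @ ry(v) @ rx(g)

R_P = posture(0.0, 0.0, np.pi / 2)     # assumed pitch of 90 degrees
r_fuselage = np.array([0.0, 0.5, 0.0]) # assumed antenna offset (m)
r_world = R_P @ r_fuselage             # eq. (14)
assert np.allclose(R_P @ R_P.T, np.eye(3))  # rotations are orthogonal
\end{verbatim}
Composing the factors in the order ${{\mathbf{R}}_{z}}{{\mathbf{R}}_{y}}{{\mathbf{R}}_{x}}$ matches (\ref{eq13}); since each factor is orthogonal, the product satisfies ${{\mathbf{R}}^{\text{P}}}{{({{\mathbf{R}}^{\text{P}}})}^{T}}=\mathbf{I}$, which the final assertion checks.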
First of all, the channel transfer function can be calculated by applying the Fourier transform to the corresponding time-variant CIR, i.e., \begin{equation} \begin{aligned} & {{H}_{qp}}\left( \mathbf{r},f,t \right)=\int_{-\infty }^{\infty }{{{h}_{qp}}\left( t,\tau \right){{e}^{-\text{j}2\pi f\tau }}}\text{d}\tau \\ & =\sqrt{\frac{K}{K+1}}{{B}^{\text{LoS}}}\left( t \right)h_{qp}^{\text{LoS}}\left( t \right){{e}^{-\text{j}2\pi f{{\tau }^{\text{LoS}}}\left( t \right)}} \\ & +\sqrt{\frac{1}{K+1}}\sum\limits_{n=1}^{N\left( t \right)}{B_{n}^{\text{NLoS}}\left( t \right)h_{qp,n}^{\text{NLoS}}\left( t \right){{e}^{-\text{j}2\pi f\tau _{n}^{\text{NLoS}}\left( t \right)}}} \end{aligned} \label{eq15}\end{equation} where $\mathbf{r}=\left\{ \Delta {{\mathbf{r}}^{\text{Tx}}},\Delta {{\mathbf{r}}^{\text{Rx}}} \right\}$ is the space lag, and $\Delta {{\mathbf{r}}^{\text{Tx}}}=\mathbf{r}_{{{p}_{2}}}^{\text{Tx}}-\mathbf{r}_{{{p}_{1}}}^{\text{Tx}}$ and $\Delta {{\mathbf{r}}^{\text{Rx}}}=\mathbf{r}_{{{q}_{2}}}^{\text{Rx}}-\mathbf{r}_{{{q}_{1}}}^{\text{Rx}}$ denote the antenna element spacings of the Tx and the Rx, respectively. Then, by setting the frequency variation to $\Delta f=0$, the spatial-temporal correlation function (ST-CF) of two specific CIRs ${{h}_{{{q}_{1}}{{p}_{1}}}}\left( t,\tau \right)$ and ${{h}_{{{q}_{2}}{{p}_{2}}}}\left( t,\tau \right)$ can be defined as \begin{equation}\begin{aligned} & {{\rho }_{{{q}_{1}}{{p}_{1}},}}_{{{q}_{2}}{{p}_{2}}}\left( t,\mathbf{r};\Delta t,\Delta \mathbf{r} \right) \\ & =E\left\{ H_{{{q}_{1}}{{p}_{1}}}^{*}\left( t,\mathbf{r} \right){{H}_{{{q}_{2}}{{p}_{2}}}}\left( t+\Delta t,\mathbf{r}+\Delta \mathbf{r} \right) \right\} \\ & \text{=}\rho _{{{q}_{1}}{{p}_{1}},{{q}_{2}}{{p}_{2}}}^{\text{LoS}}\left( t;\Delta t,\left\{ \Delta {{\mathbf{r}}^{\text{Tx}}},\Delta {{\mathbf{r}}^{\text{Rx}}} \right\} \right) \\ & \text{+}\rho _{{{q}_{1}}{{p}_{1}},{{q}_{2}}{{p}_{2}}}^{\text{NLoS}}\left( t;\Delta t,\left\{ \Delta {{\mathbf{r}}^{\text{Tx}}},\Delta {{\mathbf{r}}^{\text{Rx}}} \right\} \right) \end{aligned}\label{eq16}\end{equation} where ${{\left( \cdot \right)}^{*}}$ denotes the complex conjugation operation. Here, the local ST-CF can be calculated as the sum of the LoS and NLoS components, which are assumed to be independent of each other. The detailed CFs can be expressed as (\ref{eq17}) and (\ref{eq18}), where the effective phase terms $\Phi _{qp}^{\text{LoS}}\left( t \right)$ and $\Phi _{qp,n,m}^{\text{NLoS}}\left( t \right)$ can be rewritten as \begin{figure*}[htbp] \normalsize \begin{equation} \rho _{{{q}_{1}}{{p}_{1}},{{q}_{2}}{{p}_{2}}}^{\text{LoS}}\left( t;\Delta t,\left\{ \Delta {{\mathbf{r}}^{\text{Tx}}},\Delta {{\mathbf{r}}^{\text{Rx}}} \right\} \right)=\frac{K}{K+1}{{e}^{\text{j}\left( \Phi _{{{q}_{2}}{{p}_{2}}}^{\text{LoS}}\left( t+\Delta t \right)-\Phi _{{{q}_{1}}{{p}_{1}}}^{\text{LoS}}\left( t \right) \right)}}{{e}^{\text{j}2\pi f\left( {{\tau }^{\text{LoS}}}\left( t \right)-{{\tau }^{\text{LoS}}}\left( t+\Delta t \right) \right)}}\label{eq17}\end{equation}\end{figure*} \begin{figure*}[htbp] \footnotesize \begin{equation} \rho _{{{q}_{1}}{{p}_{1}},{{q}_{2}}{{p}_{2}}}^{\text{NLoS}}\left( t;\Delta t,\left\{ \Delta {{\mathbf{r}}^{\text{Tx}}},\Delta {{\mathbf{r}}^{\text{Rx}}} \right\} \right)=\frac{1}{M(K+1)}E\left\{ \sum\limits_{n=1}^{N(t)\cap N(t+\Delta t)}{\sum\limits_{m=1}^{M}{{{e}^{\text{j}\left(\! \Phi _{{{q}_{2}}{{p}_{2,n\!,\!m}}}^{\text{NLoS}}\left( t+\Delta t \right)-\Phi _{{{q}_{1}}{{p}_{1,n\!,\!m}}}^{\text{NLoS}}\left( t \right) \!\right)}}{{e}^{\text{j}2\pi f\left( \tau _{n}^{\text{NLoS}}\left( t \right)-\tau _{n}^{\text{NLoS}}\left( t+\Delta t \right) \right)}}}} \right\} \label{eq18}\end{equation} \end{figure*} \begin{equation}\begin{aligned} \Phi _{qp}^{\text{LoS}}\left( t \right)=\Phi _{\text{D}}^{\text{LoS}}\left( t \right)+\Phi _{{{\text{A}}_{qp}}}^{\text{LoS}}\left( t \right) \end{aligned}\label{eq19}\end{equation} \begin{equation} \Phi _{qp,n,m}^{\text{NLoS}}\left( t \right)=\Phi _{{{\text{D}}_{n,m}}}^{\text{NLoS}}\left( t \right)+\Phi _{{{\text{A}}_{qp,n,m}}}^{\text{NLoS}}\left( t \right). \label{eq20}\end{equation} \subsection{Time-variant Auto-correlation Functions} The temporal ACF is usually used to evaluate the time correlation of U2V channels, and the fluctuation of the ACF indicates how sensitive the channel fading is to the time lag. The effect of the survival probability from $t$ to $t+\Delta t$ should be taken into account, and the ACF can be obtained by assuming $\Delta {{\mathbf{r}}^{\text{Tx}}}\text{=}0$ and $\Delta {{\mathbf{r}}^{\text{Rx}}}\text{=}0$, i.e., by substituting ${{q}_{1}}\text{=}{{q}_{2}}\text{=}q$ and ${{p}_{1}}\text{=}{{p}_{2}}\text{=}p$ into (\ref{eq16}). The detailed expressions are \begin{equation}{{\rho }_{qp}}\left( t;\Delta t \right)\text{=}\rho _{qp}^{\text{LoS}}\left( t;\Delta t \right)\text{+}\rho _{qp}^{\text{NLoS}}\left( t;\Delta t \right)\label{eq21}\end{equation} where \begin{equation} \begin{aligned} & \rho _{qp}^{\text{LoS}}\left( t;\Delta t \right) \\ & =\frac{K}{K+1}{{e}^{\text{j}\left( \Phi _{qp}^{\text{LoS}}\left( t+\Delta t \right)-\Phi _{qp}^{\text{LoS}}\left( t \right) \right)}}{{e}^{\text{j}2\pi f\left( {{\tau }^{\text{LoS}}}\left( t \right)-{{\tau }^{\text{LoS}}}\left( t+\Delta t \right) \right)}} \\ \end{aligned} \label{eq22}\end{equation} and (\ref{eq23}), in which $N(t)\cap N(t+\Delta t)$ denotes the set of paths shared at times $t$ and $t+\Delta t$. \begin{figure*}[htbp] \small \begin{equation} \rho _{qp}^{\text{NLoS}}\left( t;\Delta t \right)=\frac{1}{M\left( K+1 \right)}E\left\{ \sum\limits_{n=1}^{N(t)\cap N(t+\Delta t)}{\sum\limits_{m=1}^{M}{{{e}^{\text{j}\left( \Phi _{q{{p}_{,n,m}}}^{\text{NLoS}}\left( t+\Delta t \right)-\Phi _{q{{p}_{,n,m}}}^{\text{NLoS}}\left( t \right) \right)}}{{e}^{\text{j}2\pi f\left( \tau _{n}^{\text{NLoS}}\left( t \right)-\tau _{n}^{\text{NLoS}}\left( t+\Delta t \right) \right)}}}} \right\} \label{eq23} \end{equation} \end{figure*} \subsection{Time-variant Cross-correlation Functions} The spatial CCF reflects the spatial correlation of the channel, especially the influence of the antenna spacing on the U2V channel. By substituting $\Delta t\text{=}0$ into (\ref{eq16}), the normalized CCF between two different channel coefficients of the proposed model can be obtained as \begin{equation} \begin{aligned} & {{\rho }_{{{q}_{1}}{{p}_{1}},}}_{{{q}_{2}}{{p}_{2}}}\left( \left\{ \Delta {{\mathbf{r}}^{\text{Tx}}},\Delta {{\mathbf{r}}^{\text{Rx}}} \right\};t \right) \\ & \!\!=\!\rho _{{{q}_{1}}\!{{p}_{1}},{{q}_{2}}\!{{p}_{2}}}^{\text{LoS}}\!\left( \!\left\{ \Delta {{\mathbf{r}}^{\text{Tx}}}\!,\!\Delta {{\mathbf{r}}^{\text{Rx}}} \right\}\!;\!t \right)\!\text{+}\rho _{{{q}_{1}}\!{{p}_{1}},{{q}_{2}}\!{{p}_{2}}}^{\text{NLoS}}\!\left( \!\left\{ \Delta {{\mathbf{r}}^{\text{Tx}}}\!,\!\Delta {{\mathbf{r}}^{\text{Rx}}} \right\}\!;\!t \right)\! \\ \end{aligned} \label{eq24}\end{equation}
where \begin{equation} \rho _{{{q}_{1}}{{p}_{1}},{{q}_{2}}{{p}_{2}}}^{\text{LoS}}\left( \left\{ \Delta {{\mathbf{r}}^{\text{Tx}}},\Delta {{\mathbf{r}}^{\text{Rx}}} \right\};t \right)\text{=}\frac{K}{K+1}{{e}^{\text{j}\left( \Phi _{{{q}_{2}}{{p}_{2}}}^{\text{LoS}}\left( t \right)-\Phi _{{{q}_{1}}{{p}_{1}}}^{\text{LoS}}\left( t \right) \right)}}\label{eq25}\end{equation} \begin{equation} \begin{aligned} & \rho _{{{q}_{1}}{{p}_{1}},{{q}_{2}}{{p}_{2}}}^{\text{NLoS}}\left( \left\{ \Delta {{\mathbf{r}}^{\text{Tx}}},\Delta {{\mathbf{r}}^{\text{Rx}}} \right\};t \right) \\ & \text{=}\frac{1}{M\left( K+1 \right)}E\left\{ \sum\limits_{n=1}^{N(t)}{\sum\limits_{m=1}^{M}{{{e}^{\text{j}\left( \Phi _{{{q}_{2}}{{p}_{2,n,m}}}^{\text{NLoS}}\left( t \right)-\Phi _{{{q}_{1}}{{p}_{1,n,m}}}^{\text{NLoS}}\left( t \right) \right)}}}} \right\} \\ \end{aligned} \label{eq26}\end{equation} \section{NUMERICAL RESULTS AND DISCUSSIONS} \label{sec:Numerical Results and Discussions} In this section, some key statistical properties of the proposed U2V channel model are studied. In particular, we investigate the impact of the UAV posture rotations and verify the analytical statistical properties. The flight trajectory of the UAV used for the simulations is shown in Fig.~\ref{fig2}. It should be mentioned that the proposed model mainly focuses on the velocity and trajectory of the ground terminal and does not consider its shape. Since vehicles experience the most complicated movement on the ground, other types of terminals, e.g., base stations or pedestrians, can be seen as special cases of the vehicle. Therefore, we take the vehicle as an example in Fig.~\ref{fig2}. The proposed model is compatible with most air-to-ground communication scenarios through parameter modification. The movement of the UAV includes not only 3D arbitrary motion but also rotation. At the initial moment, the UAV is at an altitude of 150 m with a speed of 50 m/s, without any rotation. Then, it rotates about an axis drawn through the UAV body from tail to nose with an angular velocity of ${\pi }/{2}\;\text{rad/s}$ until the pitch angle reaches 90 degrees at $t$ = 1 s. From the moment $t$ = 1 s, the UAV also begins to rotate upward relative to the horizontal plane with an angular velocity of ${\pi }/{2}\;\text{rad/s}$ until the roll angle reaches 90 degrees at $t$ = 2 s. The vehicle on the ground moves with a speed of 20 m/s. Besides, the antenna pattern affects the small-scale fading drastically. In the simulation, the radiation pattern of the antenna array is assumed to obey the 3D antenna model in the 3GPP TR standard \cite{3GPP2020} to ensure that it is practical. \begin{figure}[H] \centering{\includegraphics[width=0.45\textwidth]{fig2.eps}} \caption{The trajectories of the UAV and the vehicle.} \label{fig2} \end{figure} It should be noticed that we take the LoS path component and the first scattering component as examples to analyze the influence of the time-variant fuselage posture on the channel characteristics. To highlight the impact of the posture instead of the multipath propagation, the birth-death process of the clusters in the NLoS paths and the delay variations caused by the motion of the clusters are omitted. The simulated U2V communication system operates at 2.4 GHz, and the path and sub-path angles are assumed to follow the 3GPP 38.901 standard definition \cite{ZhuQM21Wiley,3GPP2020}.
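To make the posture schedule above concrete, the following minimal sketch generates the time-variant pitch and roll angles used in the simulation; the time grid is an illustrative assumption.
\begin{verbatim}
# Sketch of the posture schedule described above: the pitch angle
# ramps to 90 degrees during the first second and the roll angle
# during the second one, both at pi/2 rad/s. The time grid is an
# illustrative assumption.
import numpy as np

t = np.linspace(0.0, 2.0, 201)
gamma = np.clip(t, 0.0, 1.0) * np.pi / 2          # pitch angle (rad)
omega = np.clip(t - 1.0, 0.0, 1.0) * np.pi / 2    # roll angle (rad)
varphi = np.zeros_like(t)                         # yaw unchanged

# gamma(t = 1 s) = pi/2 and omega(t = 2 s) = pi/2 reproduce the two
# posture snapshots discussed below at t = 1 s and t = 2 s.
\end{verbatim}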
The mean value of the Ricean K-factor is set to 7 and its variation to 4, which means that the LoS component is also time-variant. For the LoS case, the temporal ACFs of the proposed model, as well as those of the model in \cite{ZhuQM19MAP}, which does not consider the UAV posture, are shown in Fig.~\ref{fig3}. During the simulation, the moving directions of the Tx and Rx and the fuselage posture change with time. From the figure, the values of the ACFs are quite different at the three moments; therefore, the non-stationarity of the U2V channel can be observed directly. The analytical results are obtained from (\ref{eq22}), and the simulation results are obtained by calculating the correlation of the channel CIRs. When $t$ = 0 s, the ACFs of the reference model and the proposed model are precisely the same, because the UAV posture has not yet changed. When $t$ = 1 s and $t$ = 2 s, the ACF of the proposed model shows an obvious difference from that of the model in \cite{ZhuQM19MAP}. In addition, it can be seen that the simulated ACFs show good consistency with the corresponding analytical results, which proves the correctness of the derivations. \begin{figure}[htb] \centering{\includegraphics[width=0.45\textwidth]{fig3.eps}} \caption{ACFs of the LoS component with/without the UAV posture.} \label{fig3} \end{figure} Furthermore, the change of the UAV posture has certain impacts on the ACF. Different pitch and roll angles result in different trends of the ACFs, which leads to the deviation between the reference model and the proposed model. When $t$ = 1 s, the pitch angle of the UAV reaches 90 degrees. The posture matrix component ${{\mathbf{R}}_{x}}$ in (\ref{eq10}) changes predictably as \begin{equation}{{\mathbf{R}}_{x}}\left| _{\gamma =0} \right.=\left[ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right]\xrightarrow{\gamma \to \pi /2}\left[ \begin{matrix} 1 & 0 & 0 \\ 0 & 0 & \text{-1} \\ 0 & 1 & 0 \\ \end{matrix} \right]\label{eq27}\end{equation} which leads to a time-variant rotation of the posture matrix ${{\mathbf{R}}^{\text{P}}}$ and of the effective phase term $\Phi _{qp}^{\text{LoS}}\left( t \right)$ in (\ref{eq19}), and eventually affects the channel ACFs. Likewise, when $t$ = 2 s, the posture matrix component ${{\mathbf{R}}_{z}}$ changes as \begin{equation}{{\mathbf{R}}_{z}}\left| _{\omega =0} \right.=\left[ \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right]\xrightarrow{\omega \to \pi /2}\left[ \begin{matrix} 0 & \text{-1} & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{matrix} \right].\label{eq28}\end{equation} Moreover, the coherence time, i.e., the minimum time lag at which the ACF declines by half, can also be obtained directly from Fig.~\ref{fig3}. With increasing pitch and roll angles, the coherence time becomes shorter. It can be found that the variation of the pitch angle changes the coherence time from approximately 16 ms to 14 ms (at $t$ = 1 s), while the variation of the roll angle changes the coherence time from 19 ms to 17 ms (at $t$ = 2 s). In terms of physical meaning, the additional posture rotation complicates the scattering environment, which contributes to the decrease of the coherence time. \begin{figure}[htb] \centering{\includegraphics[width=0.45\textwidth]{fig4.eps}} \caption{ACFs of the NLoS component with/without the UAV posture.} \label{fig4} \end{figure} Fig.~\ref{fig4} shows the temporal ACFs of the proposed model and of the model in \cite{ZhuQM19MAP} for the NLoS case.
It can be found that the variation of the pitch angle changes the coherence time from approximately 31 ms to 57 ms (at $t$ = 1 s), while the variation of the roll angle changes the coherence time from 34 ms to 66 ms (at $t$ = 2 s). Compared with the LoS case, the time-variant ACF is affected more drastically and the coherence time becomes larger, since the fuselage posture aggravates the randomness of the angles of departure and arrival. Under the same simulation scenario and trajectories, the spatial CCFs of the proposed model as well as of the reference model are shown in Fig.~\ref{fig5}. For comparison purposes, we also give the analytical results, which can be obtained from (\ref{eq25}). It can be observed that the simulated CCFs provide a good match with the corresponding analytical results, which confirms the correctness of the derivations. Furthermore, the CCFs of the reference model in \cite{ZhuQM19MAP} and those of the proposed model follow a highly consistent trend, with only minor amplitude differences at some antenna spacings. There is no evident deviation between the reference model and the proposed model. This might be because the time-variant posture matrix ${{\mathbf{R}}^{\text{P}}}(t)$ is predominantly time-dependent and varies only slightly while the fuselage posture rotates. As a result, the effective phase terms $\Phi _{qp}^{\text{LoS}}\left( t \right)$ and $\Phi _{qp,n,m}^{\text{NLoS}}\left( t \right)$ of different antenna pairs are stable with respect to the antenna spacing. Thus, it can be inferred that the UAV posture rotation has only a slight impact on the values of the CCFs; different pitch and roll angles of the UAV result in similar trends of the channel CCFs. \begin{figure}[htb] \centering{\includegraphics[width=0.45\textwidth]{fig5.eps}} \caption{CCFs of the LoS component with/without the UAV posture.} \label{fig5} \end{figure} As a supplementary argument to the inference above, the CCFs of the proposed model and of the reference model for the NLoS scenario are shown in Fig.~\ref{fig6}. It is obvious that the CCFs considering the fuselage posture have the same trends as the ones in \cite{ZhuQM19MAP}. The difference between the two models is smaller than that in the LoS case, and the CCF values of the LoS component vary less than those of the NLoS one. The reason might be that, in the NLoS paths, the angles of arrival after scattering by the clusters are no longer mainly determined by the transmitting antenna on the UAV. The results reflect the relatively small angular spread and more stationary conditions in the LoS case. There are larger fluctuations in the NLoS case, which result in larger correlation variations. In conclusion, the rotation of the fuselage posture has only a minor effect on the CCFs. \begin{figure}[htb] \centering{\includegraphics[width=0.45\textwidth]{fig6.eps}} \caption{CCFs of the NLoS component with/without the UAV posture.} \label{fig6} \end{figure} To illustrate the compatibility of the proposed model, we apply the channel parameters from the measurement campaign in \cite{Simunek13TAP}. The comparison of the ACFs is shown in Fig.~\ref{fig7}. The measured ACF was obtained as the flown distance changed, and the analytical curve is obtained from the proposed model with the following parameter settings. The carrier frequency is 2.5 GHz, and the speeds of the UAV and the vehicle are 40 m/s and 10 m/s, respectively. The LoS path between the Tx and Rx is 1000 m long at the beginning. The elevation angle of the LoS path is ${\pi }/{3}\;$ and the azimuth angles of the velocity vectors are $\pi $ and ${\pi }/{4}\;$, respectively.
It can be found that the ACF of the proposed model matches the measurement data well. Under the conditions of the measurement campaign in \cite{Payami12EuCAP}, we match the analytical CCF with the measurement data, as shown in Fig.~\ref{fig8}. The analytical result is obtained with the following simulation parameters. The carrier frequency is 2.6 GHz, and the LoS path is 500 m long at the beginning. The elevation angle of the LoS path is ${\pi }/{6}\;$ and the azimuth angles of the velocity vectors are $\pi $ and ${\pi }/{4}\;$, respectively. As shown in the figure, the proposed model also matches the measurement results well, which testifies to the generality of the proposed non-stationary U2V channel model. \begin{figure}[htb] \centering{\includegraphics[width=0.45\textwidth]{fig7.eps}} \caption{ACF of the proposed model and measurement data.} \label{fig7} \end{figure} \begin{figure}[ht] \centering{\includegraphics[width=0.45\textwidth]{fig8.eps}} \caption{CCF of the proposed model and measurement data.} \label{fig8} \end{figure} \section{CONCLUSIONS} \label{sec:Conclusions} In this paper, a realistic 3D non-stationary U2V channel model incorporating the fuselage posture has been proposed. The rotational movement caused by different fuselage postures has been considered by introducing the posture matrix. The analytical expressions of the ACF and CCF have been derived and verified by simulation results. The analysis and simulation results have also shown that the UAV posture has significant impacts on the ACF. The CCF is less affected because the posture matrix is mainly time-dependent. Moreover, the proposed GBSM can be applied to diverse UAV communication scenarios by adjusting the model parameters, which is useful for the design, optimization, and evaluation of realistic UAV MIMO communication systems. \section*{ACKNOWLEDGEMENT} \label{ACKNOWLEDGEMENT} This work was supported in part by the Fundamental Research Funds for the Central Universities (No.~NS2020026 and No.~NS2020063), in part by the Aeronautical Science Foundation of China (No.~201901052001), and in part by the National Key Scientific Instrument and Equipment Development Project (No.~61827801).
\section{Introduction} The cores of active galaxies, most likely powered by accretion onto supermassive black holes (BH), are surrounded by two emission-line regions: the broad-line region (BLR) in proximity to the BH, and the narrow-line region (NLR) at larger distances from the nucleus. The study of both BLR and NLR provides us with important information on the nature and origin of these cloud systems, on their link with the host galaxy, and on their cosmological evolution. Determination of their velocity field and distance from the nucleus also enables us to estimate BH masses. While the BLR is too close to the nucleus to be resolved, the NLR of many AGN is spatially resolved, and we can thus extract information on the NLR properties by performing spatially resolved spectroscopy. This method is a powerful approach to measure the physical conditions in the NLR and surrounding regions [e.g.~\citet{wil89, rob94, rad98, sch99, fra00, bar01, sos01, tem03, cir05}]. While [\ion{O}{iii}]\,$\lambda$5007\AA~(hereafter [\ion{O}{iii}]) narrow-band imaging is commonly used to study the NLRs of active galaxies, we have shown in \citet{ben06a} and \citet{ben06b} (hereafter paper I \& II) that this emission can be contaminated by contributions from star formation and that different sensitivities can lead to different size measurements of the NLR. Using long-slit spectroscopy, we developed methods to probe the AGN photoionisation of the NLR and thus its ``real'' size. From spatially resolved spectral diagnostics, we find for two objects a transition between central line ratios falling into the AGN regime and outer ones falling into the \ion{H}{ii}-region regime. Applying \texttt{CLOUDY} photoionisation models \citep{fer98}, we show that the observed distinction between \ion{H}{ii}-like and AGN-like ratios represents a true difference in ionisation source and cannot be explained by variations of physical parameters such as the ionisation parameter, electron density, or metallicity. We interpret it as a real border between the NLR, i.e.~the central AGN-photoionised region, and the surrounding \ion{H}{ii} regions. In addition, several physical parameters of the NLR such as reddening, ionisation parameter, electron density, and velocity can be directly accessed and analysed as a function of distance from the nucleus. We find that both the electron density and the ionisation parameter decrease with radius. The differences between the reddening distributions determined from the continuum slope and from the Balmer decrement argue in favour of dust intrinsic to the NLR clouds with varying column density along the line of sight. The NLR and stellar velocity fields are similar and indicate that the NLR gas is distributed in a disk rather than a sphere. Here, we apply the same methods to a sample of six Seyfert-1 galaxies to probe the size of the NLR and to derive physical properties such as reddening, ionisation parameter, electron density, and velocity in type-1 AGNs. We discuss their variations with distance from the nucleus and compare the results for Seyfert 1s and Seyfert 2s, allowing us to test facets of the unified model of AGNs. A detailed comparison of our results with literature data is given for each object [Appendix; see also \citet{ben05}]. \section{Observations, Reduction, and Analysis} The spectra were obtained with FORS1@VLT and EMMI@NTT. Relevant information on the sample and observations is summarised in Tables~\ref{objsy1} and~\ref{obssy1}. The [\ion{O}{iii}] images with the slit position overlaid are shown in Fig.~\ref{galaxies1}.
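For reference, the linear scales quoted in Table~\ref{objsy1} follow from the small-angle relation $d = 4.848 \cdot 10^{-6} \cdot D$, with $d$ the projected size per arcsecond and $D$ the distance, both in pc (1\arcsec~corresponds to $4.848 \cdot 10^{-6}$\,rad); a distance of $D = 100$\,Mpc, for example, corresponds to $d \approx 485$\,pc per arcsecond.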
As the observations, reduction, and analysis were already described in detail in paper II, we discuss here only the special treatment of the Seyfert-1 spectra. \begin{table*} \begin{minipage}{180mm} \caption[]{\label{objsy1} Properties of the sample\footnote{Unless stated otherwise, the properties were taken from the NASA/IPAC Extragalactic Database (NED).}} \begin{center} \begin{tabular}{lcccccc} \\[-2.3ex] \hline \hline\\[-2.3ex] & Fairall\,51 & NGC\,6860 & Mrk\,915 & NGC\,526a & MCG\,-05-13-017 & MCG\,-6-30-15\\[0.25ex] \hline\\[-2.3ex] altern. name & ESO\,140-G043 & ESO\,143-G009 & MCG\,-02-57-023 & MCG\,-06-04-019 & ESO\,362-G018 & ESO\,383-G035\\ $\alpha$ (J2000) & 18\h44\m54\fs0 & 20\h08\m46\fs9 & 22\h36\m46\fs5 & 01\h23\m54\fs4 & 05\h19\m35\fs8 & 13\h35\m53\fs8\\ $\delta$ (J2000) & -62\degr21\arcmin53\arcsec & -61\degr06\arcmin01\arcsec & -12\degr32\arcmin43\arcsec & -35\degr03\arcmin56\arcsec & -32\degr39\arcmin28\arcsec & -34\degr17\arcmin44\arcsec\\ i. (\degr)\footnote{Host galaxy inclination [\citet{vau91}; RC3]} & 64 & 61 & 80 & 55 & 54 & 57\\ p.a. (\degr)\footnote{Position angle of host galaxy major axis (RC3)}& 162 & 34 & 166 & 112 & 160 & 116\\ $v_{\rm hel}$ (km\,s$^{-1}$) & 4255$\pm$10 & 4462$\pm$24 & 7228$\pm$2 & 5725$\pm$39 & 3790$\pm$30 & 2323$\pm$15\\ $v_{\rm 3k}$ (km\,s$^{-1}$)\footnote{Velocity relative to the 3K background using the NED velocity calculator}& 4228 & 4377 & 6863 & 5446 & 3620 & 2595\\ dist. (Mpc)\footnote{Distance $D$ in Mpc, using $v_{\rm 3K}$ and $H_0$ = 71\,km\,s$^{-1}$\,Mpc$^{-1}$}& 60 & 62 & 98 & 78 & 52 & 37\\ lin. scale (pc/\arcsec)\footnote{Linear scale $d$ using distance $D$ and $d$ = 4.848 $\cdot$ 10$^{-6}$ $\cdot$ $D$}& 283 & 293 & 454 & 362 & 243 & 175\\ morphology & (R'\_2)SB(rs)b & (R')SB(r)ab & Sb & S0pec? & S0/a & E-S0\\ AGN type (NED) & Sy1 & Sy1 & Sy1 & Sy1.5 & Sy1.5 & Sy1.2\\ AGN type (our spectra) & Sy1 & Sy1.5 & Sy1.5 & Sy1.9 & Sy1.5 & Sy1.2\\ $E_{\rm (B-V),G}$ (mag)\footnote{Foreground Milky Way reddening used for reddening correction \citep{sch98}} & 0.108 & 0.041 & 0.063 & 0.028 & 0.017 & 0.062\\ $M_B$ (mag) & 14.7 & 13.68 & 14.82 & 14.5 & 12.5 & 13.7\\[0.1ex] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table*} \begin{figure*} \begin{center} \includegraphics[width=16cm]{fig1.eps} \caption[]{\label{galaxies1} \small HST [\ion{O}{iii}] images of Fairall\,51, NGC\,6860, Mrk\,915, MCG\,-05-13-017, and MCG\,-6-30-15 taken from \citet{sch03a} (WF chip: $\sim$0\farcs1\,pix$^{-1}$). Contours start at the 3$\sigma$ level above the background \citep[their Table 2]{sch03a} and increase in powers of 2 times 3$\sigma$ (3$\sigma$ $\times$ $2^n$). For NGC\,526a, a ground-based image taken from \citet{mul96a} is shown. The position of the long slit is shown as a dashed line. North is up, east to the left. } \end{center} \end{figure*} \begin{table*} \begin{minipage}{180mm} \caption[]{\label{obssy1} Observations of the sample} \begin{center} \begin{tabular}{lcccccc} \\[-2.3ex] \hline \hline\\[-2.3ex] & Fairall\,51 & NGC\,6860 & Mrk\,915 & NGC\,526a & MCG\,-05-13-017 & MCG\,-6-30-15\\[0.25ex] \hline\\[-2.3ex] telescope & NTT & NTT & NTT & NTT & NTT & VLT\\ date (beg.) & 14/15-Sep-04 & 15/16-Sep-04 & 14/15-Sep-04 & 16-Sep-04 & 17-Sep-04 & 25-Feb-04 \\ exp. time blue (s)\footnote{Total integration time. At the VLT, the blue and red spectral range were covered in one exposure.} & 3000 & 6000 & -\footnote{The spectra taken in the blue wavelength range were corrupted due to instrumental problems.} & 3000 & 3600 & 1800\\
exp. time red (s)$^a$ & 3600 & 3600 & 3000 & 2400 & 3000 & 1800\\ seeing & $<$ 1\arcsec & $<$ 1\arcsec & $<$ 1\arcsec & $<$ 1\arcsec & $<$ 1\arcsec & $\sim$1\farcs5\\ slit width & 1\arcsec & 1\arcsec & 1\arcsec & 1\arcsec & 1\arcsec& $\sim$1\arcsec\\ FWHM$_{\rm instr}$ (km\,s$^{-1}$) & 250 & 250 & 250 & 250 & 250 & 590\\ p.a.~(\degr)\footnote{Position angle of the slit} & 160 & 85 & 5 & 123 & 140 & 115\\ hel. corr. (km\,s$^{-1}$)\footnote{This heliocentric correction was added to the measured radial velocities.} & -2 & 0 & -4 & +17 & +29 & +12\\ average (pixel)\footnote{Number of pixel rows which were averaged} & 3 & 3 & 3 & 3 & 3 & 7\\ scale\footnote{Formal spatial resolution of final extracted spectra}& 1\farcs1 $\times$ 1\arcsec& 1\farcs1 $\times$ 1\arcsec & 1\farcs1 $\times$ 1\arcsec & 1\farcs1 $\times$ 1\arcsec & 1\farcs1 $\times$ 1\arcsec & 1\farcs4~$\times$ 1\arcsec\\[0.1ex] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table*} \subsection{Subtracting the stellar population} \label{stellarpop} As discussed in paper I \& II, removing the contribution of the stellar population is one of the first and most critical steps in the analysis of AGN emission-line spectra, at least in Seyfert-2 galaxies. For Seyfert-1 galaxies, the procedure described in paper I \& II may not be simply applicable: the AGN featureless continuum can be very strong, especially in the central parts where the broad emission lines are seen. Thus, a stellar template cannot simply be scaled to the continuum value in these regions, as the contribution of the underlying stellar population would be overestimated. However, for type-1 AGNs, the AGN continuum and the broad and narrow emission lines often completely dominate the spectrum. We did not find signs of strong underlying Balmer absorption lines. In some cases, faint absorption is visible in \ion{Ca}{ii} H\&K and in \ion{Na}{i} D. Some of the \ion{Na}{i} D absorption may be of interstellar origin. We consider the underlying stellar absorption as negligible compared to the central emission-line fluxes. Moreover, it was difficult to derive a suitable high-S/N template. Only for MCG\,-6-30-15 was a correction of the stellar population both necessary and possible. We were able to obtain a suitable template free of contaminating emission lines (Fig.~\ref{figtemplate}). Thus, a correction of underlying stellar absorption lines in the Seyfert-1 galaxies was applied only to MCG\,-6-30-15. We scaled the template to the continuum as we do not know the contribution of a featureless continuum. The reddening measure using the continuum slope variation relative to the stellar template (see paper I \& II) was, among the Seyfert-1 galaxies, only determined for MCG\,-6-30-15. \subsection{\ion{Fe}{ii} contamination} When studying optical spectra of type-1 AGNs, another issue that needs to be taken into account is the contribution of broad \ion{Fe}{ii} emission. To probe the contribution of \ion{Fe}{ii} to the observed type-1 spectra, we used the \ion{Fe}{ii} template of \citet{ver04}. It was rebinned to the same resolution and shifted to the object's redshift. We used several scaling factors and subtracted the template. The residual continuum was searched for signs of remaining \ion{Fe}{ii} emission. However, in all our type-1 objects, the contribution of \ion{Fe}{ii} seems to be negligible, and for most scalings, we artificially induced ``\ion{Fe}{ii} absorption lines'', indicating that the scaling was too high.
Thus, no \ion{Fe}{ii} template was finally subtracted, as we believe that the \ion{Fe}{ii} contribution is negligible in our Seyfert-1 galaxies. \begin{figure*} \includegraphics[width=18cm]{fig2.eps} \caption[Central spectra] {\label{figtemplate} \small Spectra of the six Seyfert-1 galaxies in our sample. For MCG\,-6-30-15, we show the template subtraction: the template was obtained at 12\arcsec~north-west of the nucleus, averaged over 2\arcsec and median-filtered over three pixels to increase the S/N. The observed (upper; at 2\farcs8 south-east from the centre), the template (middle) and the template-subtracted spectrum (lower spectrum) are shown. In this plot, both upper spectra are shifted vertically by an arbitrary amount. Strong emission lines are truncated in the difference spectrum. The template matches the stellar absorption lines seen in the NLR spectrum fairly well.} \end{figure*} \subsection{Emission-line fluxes and reddening} To determine the fluxes of the pure emission-line spectra, the same general procedure as for the Seyfert-2 galaxies described in paper I \& II was applied. However, for the Seyfert-1 galaxies discussed here, the fitting procedure is more difficult, due to the additional broad lines of the BLR underlying all permitted emission of the NLR. Broad H$\alpha$ lines are observed in the central spectra of all our type-1 objects (according to their classification as Sy1, Sy1.2 or Sy1.5). Broad H$\beta$ emission is seen in all type 1s with the exception of NGC\,526a, classifying it as Sy1.9 (see also Section~\ref{ngc526a}). A common approach to disentangle the narrow and broad permitted lines is to use the profile of the forbidden narrow lines such as [\ion{O}{iii}] as a template for the permitted narrow line, scaled to the appropriate height. A second (broad) Gaussian is additionally used to fit the permitted broad-line profile. During the fitting procedure, we found that the use of two Gaussians, a broad and a narrow one, was in most cases not sufficient to fit the broad wings. For all lines with underlying broad emission, we added a third Gaussian: we fitted a narrow Gaussian, one with an intermediate width, and a broad one for an optimal total fit to the observed emission-line profiles. Three Gaussians have already been used by other authors to fit the H$\alpha$ and H$\beta$ lines in Seyfert-1 galaxies [e.g.~\citet{rey97,sul02}]. Emission-line profiles represent line-of-sight integrations of several kinematic components, and even for ``narrow'' lines, considerable profile structure is measured at sufficient resolution [e.g.~\citet{vrt85,whi85,schu03}]. Gaussian fits or Lorentz fits are commonly used when single-component fits fail. \citet{whi85} already describe the non-Gaussian nature of observed [\ion{O}{iii}] line profiles ``which revealed a stronger base relative to the core than Gaussians''. A Lorentz profile, which has broader wings than a Gaussian, seems to be better suited, as has been shown by \citet{ver01} for the broad emission lines in narrow-line Seyfert-1 galaxies and by \citet{schu03} for narrow emission lines in Seyfert-2 galaxies. \citet{ben04} suggest the use of $d$-Lorentzians, which allow one to fit both permitted and forbidden lines by adjusting an additional parameter $d$.
\citet{sul02} studied the broad H$\beta$ line in several AGN types and found that objects with a full width at half maximum (FWHM) $<$ $\sim$4000\,km\,s$^{-1}$ are well fitted by a Lorentz function, while AGNs with FWHM $>$ $\sim$4000\,km\,s$^{-1}$ are better fitted if two broad-line components are used: a ``classical'' broad-line component and a very broad/redshifted component. Our results are in agreement with this trend: all objects with broad emission lines in both H$\alpha$ and H$\beta$ have FWHM $>$ $\sim$4000\,km\,s$^{-1}$ and had to be fitted with two broad-line components. To conclude, we used single Gaussians to fit the narrow lines and three Gaussians to fit narrow lines with underlying broad emission, which yields very good results given the low resolution of our spectra. In all but one case (MCG\,-6-30-15), the permitted profiles show a clear separation between the broad underlying emission and a narrow ``peak''. Thus, for most Seyfert-1 galaxies, we were able to distinguish between the broad and narrow emission using three Gaussians, with one resembling the shape of the forbidden narrow lines. For MCG\,-6-30-15, the only type-1 object observed with the lower resolution of VLT/FORS1, the profile fitting of the permitted Balmer lines could not successfully disentangle the broad and narrow lines. Thus, we applied Gaussian fits to the forbidden lines only (except for the [\ion{N}{ii}]\,$\lambda\lambda$6548,6583\,\AA~lines, which are blended with H$\alpha$). In the central spectra, the broad emission of H$\beta$ and H$\alpha$ even affects the adjacent [\ion{O}{iii}] and [\ion{S}{ii}]\,$\lambda\lambda$6716,6731\,\AA~lines. In those cases, we subtracted the broad underlying wing by extrapolation. As a consequence, the only emission-line ratio we were able to derive directly is that of the two forbidden sulphur lines, used to measure the electron density. The narrow H$\alpha$ and H$\beta$ emission-line fluxes are needed to plot diagnostic line-ratio diagrams; thus, we cannot present these results for MCG\,-6-30-15. Moreover, the ionisation parameter strongly depends on the reddening value. As we cannot estimate it from the narrow H$\alpha$/H$\beta$ ratio, we used as a first guess the reddening slope determined by matching the stellar template to the NLR spectra. \section{Results and Discussion} \subsection{Nuclear spectra} The central spectra of the galaxies in our sample are shown in Fig.~\ref{figtemplate}. Table~\ref{lineratio1} lists the observed and reddening-corrected line-intensity ratios relative to H$\beta$ from the nuclear spectrum (uncorrected for slit losses). For pairs of lines ([\ion{O}{iii}], [\ion{O}{i}], and [\ion{N}{ii}]) with a fixed line ratio ($\sim$3:1), only the brighter line is used. (Note that all ratios correspond to narrow lines.) Emission-line ratios of the strongest (narrow) lines as a function of distance from the centre can be found online for each individual galaxy (excluding MCG\,-6-30-15, as we were not able to disentangle the broad and narrow Balmer lines). \begin{table*} \begin{minipage}{180mm} \caption[]{\label{lineratio1} Observed and reddening-corrected narrow emission line intensity ratios relative to H$\beta$\footnote{All narrow emission line ratios were derived from the nuclear spectra. After reddening correction, other Balmer line ratios such as H$\gamma$/H$\beta$ and H$\delta$/H$\beta$ are consistent with the recombination values within the errors.
No ratios are given for MCG\,-6-30-15 as we were not able to disentangle the broad and narrow Balmer lines in the central spectra. The uncertainties are in the range of $\sim$1-15\%.}} \begin{center} \begin{tabular}{lcccccccccccc} \\[-2.3ex] \hline \hline\\[-2.3ex] \multicolumn{1}{c}{Line} & \multicolumn{2}{c}{\rm Fairall\,51} & \multicolumn{2}{c}{\rm NGC\,6860} & \multicolumn{2}{c}{\rm Mrk\,915} & \multicolumn{2}{c}{\rm NGC\,526a} & \multicolumn{2}{c}{\rm MCG\,-05-13-017}\\ & \multicolumn{1}{c}{$F_{\rm obs}$} & \multicolumn{1}{c}{$F_{\rm dered}$} & \multicolumn{1}{c}{$F_{\rm obs}$} & \multicolumn{1}{c}{$F_{\rm dered}$} & \multicolumn{1}{c}{$F_{\rm obs}$} & \multicolumn{1}{c}{$F_{\rm dered}$} & \multicolumn{1}{c}{$F_{\rm obs}$} & \multicolumn{1}{c}{$F_{\rm dered}$} & \multicolumn{1}{c}{$F_{\rm obs}$} & \multicolumn{1}{c}{$F_{\rm dered}$}\\[0.25ex] \hline\\[-2.3ex] $[\ion{O}{ii}]\,\lambda3727$\,\AA & 0.94 & 1.21 & 1.60 & 2.04 & --\footnote{Not covered by wavelength range} & --$^b$ & 3.20 & 4.20 & 1.84 & 2.62 \\*[0.01cm] $[\ion{Ne}{iii}]\,\lambda3869$\,\AA & 0.86 & 1.32 & 1.18 & 1.45 & --$^b$ & --$^b$ & 1.80 & 2.27 & 2.54 & 3.43 \\*[0.01cm] $[\ion{Ne}{iii}]\,\lambda$3967\,\AA & 0.08 & 0.11 & 0.24 & 0.29 & --$^b$ & --$^b$ & --\footnote{Underlying absorption lines} & --$^c$ & 0.28 & 0.37 \\*[0.01cm] $[\ion{O}{iii}]\,\lambda$4363\,\AA & 0.40 & 0.49 & 0.53 & 0.59 & --$^b$ & --$^b$ & 0.65 & 0.73 & 1.26 & 1.46\\*[0.01cm] $\ion{He}{ii}\,\lambda$4686\,\AA & 0.41 & 0.44 & 0.29 & 0.30 & --$^b$ & --$^b$ & 0.33 & 0.34 & 0.48 & 0.50\\*[0.01cm] $[\ion{O}{iii}]\,\lambda$5007\,\AA & \hspace*{-0.2cm}15.58 & \hspace*{-0.2cm}14.47 & 8.84 & 8.53 & \hspace*{-0.2cm}12.72 & \hspace*{-0.2cm}11.82 & \hspace*{-0.2cm}16.35 & \hspace*{-0.2cm}15.71 & \hspace*{-0.2cm}17.15 & \hspace*{-0.2cm}16.29\\*[0.01cm] $[\ion{Fe}{vii}]\,\lambda$5721\,\AA & 0.49 & 0.34 & 0.14 & 0.12 & 0.15 & 0.11 & 0.11 & 0.09 & 0.37 & 0.29\\*[0.01cm] $[\ion{Fe}{vii}]\,\lambda$6087\,\AA & 0.83 & 0.49 & 0.20 & 0.16 & 0.27 & 0.16 & 0.09 & 0.07 & 0.47 & 0.33\\*[0.01cm] $[\ion{O}{i}]\,\lambda$6300\,\AA & 0.92 & 0.51 & 0.71 & 0.53 & 0.81 & 0.45 & 0.74 & 0.54 & 1.19 & 0.79\\*[0.01cm] $[\ion{Fe}{x}]\,\lambda$6375\,\AA & 0.81 & 0.43 & 0.08 & 0.06 & 0.17 & 0.09 & 0.06 & 0.04 & 0.38 & 0.25\\*[0.01cm] H$\alpha$ & 5.60 & 2.87 & 3.95 & 2.87 & 5.60 & 2.87 & 4.11 & 2.87 & 4.57 & 2.87\\*[0.01cm] $[\ion{N}{ii}]\,\lambda$6583\,\AA & 5.57 & 2.84 & 3.38 & 2.44 & 3.70 & 1.88 & 3.14 & 2.18 & 3.02 & 1.89\\*[0.01cm] $[\ion{S}{ii}]\,\lambda$6716\,\AA & 1.47 & 0.73 & 1.28 & 0.92 & 1.29 & 0.64 & 1.18 & 0.81 & 1.01 & 0.62\\*[0.01cm] $[\ion{S}{ii}]\,\lambda$6731\,\AA & 1.68 & 0.84 & 1.15 & 0.82 & 1.31 & 0.65 & 1.16 & 0.80 & 1.19 & 0.73\\*[0.01cm] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table*} In Table~\ref{result}, we give the reddening-corrected H$\beta$ luminosity and summarise the results from dereddened line ratios such as the electron temperature $T_{\rm e, obs}$\footnote{Derived from the [\ion{O}{iii}]($\lambda$4959\,\AA+$\lambda$5007\,\AA)/$\lambda$4363\,\AA~emission-line ratio}, the reddening value $E_{B - V}$, the electron density $n_{\rm e, obs}$, and the ionisation parameter $U_{\rm obs}$ for the nuclear spectra of all objects. The parameters represent an average over the central several hundred parsecs. The temperature was, in most objects, only determined for the nuclear spectrum due to the faintness of the involved [\ion{O}{iii}]\,$\lambda$4363\,\AA~emission line in the outer spectra. 
In two objects, we were able to derive the electron temperature in the inner few arcseconds (NGC\,526a, MCG\,-05-13-017), where it stays roughly constant within the errors or scatters without showing a clear dependence on radius. The central temperature was used to apply a correction to the electron density. In those cases in which no temperature was measured, we used $T = 10000$\,K or an average temperature derived from the other galaxies instead. \begin{table*} \begin{minipage}{180mm} \caption[]{\label{result} Reddening-corrected narrow H$\beta$ flux and luminosity and results from dereddened narrow emission line ratios of the nuclear spectra.} \begin{center} \begin{tabular}{lcccccc} \\[-2.3ex] \hline \hline\\[-2.3ex] & \rm{Fairall\,51} & \rm{NGC\,6860} & \rm{Mrk\,915} & \rm{NGC\,526a} & \rm{MCG\,-05-13-017}& \rm{MCG\,-6-30-15} \\[0.25ex] \hline\\[-2.3ex] $F_{\rm H\beta}$ (10$^{-14}$\,erg\,s$^{-1}$\,cm$^{-2}$) & 21$\pm$2 & 8$\pm$0.5 & 36$\pm$3 & 3$\pm$0.1 & 8$\pm$0.6 & --\footnote{No deconvolution of underlying broad Balmer line possible}\\ $L_{\rm H\beta}$ (10$^{39}$\,erg\,s$^{-1}$) & 93$\pm$9 & 37$\pm$2 & 416$\pm$30 & 18$\pm$1 & 25$\pm$2& --$^a$ \\ $T_{\rm e, obs}$ (K) & \hspace{-2mm}22200$\pm$400 & \hspace{-2mm}36325$\pm$250 & \hspace{+4mm}--\footnote{Not covered by wavelength range} & 23330$\pm$1700 & 52500$\pm$3000& --$^a$ \\ $E_{B - V}$ (mag)\footnote{Note that this central value is not necessarily representative for the reddening within the NLR; for more details on reddening see Table~\ref{reddening}} & \hspace{+2mm}0.59$\pm$0.03 & \hspace{+2mm}0.28$\pm$0.02 & \hspace{+2mm}0.59$\pm$0.02 & \hspace{+2mm}0.32$\pm$0.03 & \hspace{+2.5mm}0.41$\pm$0.03& \hspace{+2mm}0.3$\pm$0.02\footnote{Determined from reddening of continuum slope relative to template} \\ $n_{\rm e, obs}$ (cm$^{-3}$) & \hspace{-2mm}1430$\pm$40 & \hspace{-2mm}1015$\pm$50 & 570 (1045)$\pm$35\footnote{[\ion{S}{ii}]\,$\lambda$6731\,\AA~is slightly truncated by telluric absorption bands.}$^,$\footnote{Using $T_e$ = 10000\,K and, in brackets, $<$$T_{e}$$>_{\rm 4 Sy1s}$ $\sim$ 33590, respectively} & \hspace{+2mm}835$\pm$70$^e$ & \hspace{-2mm}2460$\pm$55& 300 (550)$\pm$40$^f$ \\ $U_{\rm log (n_e) = 3, obs}$ (10$^{-3}$) & 9.25$\pm$0.9 & \hspace{+2mm}2.73$\pm$0.04 & \hspace{+4mm}--$^b$ & \hspace{+2mm}2.89$\pm$0.05 & 4.28$\pm$0.1& \hspace{-2mm}2.95$\pm$0.04 \\[0.1ex] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table*} \subsubsection{Comparison of Sy1 and Sy2 properties} \label{longdiff12} Comparing the results for the central spectra of type-1 and type-2 Seyferts (paper II) shows that the line ratios are similar in all objects. There are no significant differences between type-1 and type-2 galaxies with the exception of the emission lines of oxygen and iron, which are on average stronger in the Seyfert-1 galaxies: [\ion{O}{iii}]\,$\lambda$4363\,\AA/H$\beta$ $\sim$ 0.82$\pm$0.2 (4 Sy1s) versus 0.19$\pm$0.02 (4 Sy2s); [\ion{O}{iii}]\,$\lambda$5007\,\AA/H$\beta$ $\sim$ 13.4$\pm$1.4 (5 Sy1s) versus 10.3$\pm$0.6 (6 Sy2s); [\ion{Fe}{vii}]\,$\lambda$5721\,\AA/H$\beta$ $\sim$ 0.19$\pm$0.05 (5 Sy1s) versus 0.14$\pm$0.1 (2 Sy2s); [\ion{Fe}{vii}]\,$\lambda$6087\,\AA/H$\beta$ $\sim$ 0.24$\pm$0.07 (5 Sy1s) versus 0.14$\pm$0.06 (4 Sy2s); [\ion{Fe}{x}]\,$\lambda$6375\,\AA/H$\beta$ $\sim$ 0.17$\pm$0.07 (5 Sy1s) versus 0.03$\pm$0.01 (4 Sy2s).
The reddening of the nuclear spectrum is on average higher in the Seyfert 2s in our sample ($<$$E_{B - V}$$>_{\rm 6 Sy2s}$ $\sim$ 0.55$\pm$0.07\,mag versus $<$$E_{B - V}$$>_{\rm 6 Sy1s}$ $\sim$ 0.42$\pm$0.06\,mag). The electron densities are comparable in both types ($<$$n_{e}$$>_{\rm 6 Sy2s} \sim$ 1070$\pm$180\,cm$^{-3}$ versus $<$$n_{e}$$>_{\rm 6 Sy1s} \sim 1100\pm315$\,cm$^{-3}$). The ionisation parameter is on average higher in Seyfert-1 galaxies [$<$$U_{\rm log (n_e) = 3}$$>_{\rm 5 Sy1s} \sim (4.42$$\pm$1.2)$\cdot 10^{-3}$ versus $<$$U_{\rm log (n_e) = 3}$$>_{\rm 5 Sy2s} \sim$ (2.66$\pm$0.2)$\cdot 10^{-3}$], also when excluding the exceptionally high value of $U$ seen in Fairall\,51 [$<$$U_{\rm log (n_e) = 3}$$>_{\rm 4 Sy1s} \sim (3.21$$\pm$0.4)$\cdot 10^{-3}$], but the distributions overlap. Moreover, the comparison shows that higher temperatures occur in type-1 objects ($<$$T_{e}$$>_{\rm 4 Sy1s}$ $\sim$ 33590$\pm$7070\,K versus $<$$T_{e}$$>_{\rm 4 Sy2s}$ $\sim$ 14470$\pm$440\,K). Note that the difference in the flux ratio of [\ion{O}{iii}]\,$\lambda$4363\,\AA/[\ion{O}{iii}]\,$\lambda$5007\,\AA~seen between Seyfert-1 and Seyfert-2 galaxies has been interpreted by \citet{ost78} as a difference in densities ($n_{\rm H} \sim 10^{6 - 7}$\,cm$^{-3}$ for Seyfert-1 galaxies and $n_{\rm H} < 10^5$\,cm$^{-3}$ for Sy2s). However, we interpret it as a difference in temperature, in agreement with the suggestions by \citet{hec79} and \citet{coh83} ($T_e > 20000$\,K for Sy1s; $T_e \sim 10000$\,K for Sy2s). Differences between the NLRs in Seyfert-1 and Seyfert-2 galaxies are known from both imaging and spectroscopy and have been discussed by various authors on the basis of the unified model [e.g.~\citet{mul96b, schm98, nag01,sch03b}]. Statistics have shown that high-ionisation emission lines as well as those with high critical densities tend to be stronger in Seyfert-1 galaxies than in type 2s [e.g.~\citet{shu81, schm98, nag00}]. One explanation is that the highly-ionised gas clouds are located close to the nucleus and can be hidden by the dust torus \citep{mur98a, mur98b, nag00}. In contrast, \citet{schm98} proposed that the NLR sizes in Seyfert-1 galaxies are (intrinsically) smaller than those of type 2s (and not only due to projection effects): If the torus of Seyfert-1 galaxies is more likely to be aligned with the galaxy plane (but has random orientations in Sy2s) and the ionisation cone in type-1 AGNs is thus perpendicular to the galaxy plane, there is a smaller number of ionisation-bounded clouds in Seyfert-1 galaxies. Based on a sample of 355 Seyfert galaxies, \citet{nag01} favour the first explanation. Compared to the Seyfert-2 galaxies, the Seyfert 1s in our sample show on average higher iron emission line fluxes such as [\ion{Fe}{vii}] and [\ion{Fe}{x}] relative to H$\beta$, i.e.~high-ionisation lines (upper ionisation potentials of 125\,eV and 262.1\,eV, respectively), as well as higher [\ion{O}{iii}]\,$\lambda$4363\,\AA~intensities, a line with a rather low ionisation potential compared to these iron lines (upper ionisation potential of 54.9\,eV) but a high critical density (3.3 $\cdot 10^7$\,cm$^{-3}$; Table~\ref{lineratio1}), in agreement with the results of \citet{nag01}. Moreover, we find that Seyfert-1 galaxies tend to have higher electron temperatures and ionisation parameters in their nuclear spectra.
While the central electron densities are comparable, taking into account the large scatter of electron densities within the individual Seyfert 1 and Seyfert 2 galaxies, the nuclear reddening is on average higher in the six Seyfert-2 galaxies. The higher average central ionisation parameter is related to the observation of stronger fluxes of high-ionisation lines in Seyfert-1 galaxies and can be explained likewise: If the high-ionisation lines, along with the Balmer and [\ion{O}{iii}] emission lines, originate in gas clouds close to the BLR, they may be partly hidden by the dust torus in Seyfert-2 galaxies. Our observations of higher nuclear reddening in Seyfert-2 galaxies argue in favour of this scenario proposed by \citet{nag01}. It is reasonable to assume that lines with high ionisation potential arise closer to the photoionising source, leading to a stratification of emission lines. This is comparable to what has been found for the BLR using reverberation mapping: Different lines have different time lags, with lines from highly ionised gas responding earlier, showing that the ionisation structure is radially stratified [e.g. \citet{pet93}]. The reason that we observe comparable nuclear densities in both type-1 and type-2 Seyferts may lie in the correction for the central electron temperature: When comparing the measured electron densities directly, i.e.~not correcting for the temperature, we get on average slightly lower densities for Seyfert-1 galaxies (which have the higher central temperatures): $n_{e,{\rm ave, 6 Sy1s}} \sim 825\pm170$\,cm$^{-3}$ versus $n_{e,{\rm ave, 6 Sy2s}} \sim 950 \pm$ 160\,cm$^{-3}$. Taking into account the critical densities of the involved forbidden emission lines, we cannot rule out that the temperature we measure corresponds to a region closer to the centre than the electron density (if the density increases towards the centre): While the critical densities of the [\ion{O}{iii}] lines are high [$3.3 \cdot 10^7$\,cm$^{-3}$ for $\lambda$4363\,\AA, $7 \cdot 10^5$\,cm$^{-3}$ for $\lambda$5007\,\AA], they are significantly lower for the [\ion{S}{ii}]\,$\lambda\lambda$6716,6731\,\AA~lines (1500-3900\,cm$^{-3}$). This implies that, while the [\ion{O}{iii}] lines are still emitted in a dense central region with e.g.~$n_e \sim 10000$\,cm$^{-3}$, allowing us to measure the temperature close to the nucleus, both [\ion{S}{ii}] lines are collisionally de-excited there. Thus, the flux we measure in these lines comes from regions with lower densities further out along our line-of-sight. The galaxies of the present sample underscore the so-called temperature problem [e.g. \citet{sto96}] as it is known in photoionisation modelling. This generally refers to the problem that photoionisation models underpredict the temperature in the NLR clouds, as measured by the ratio [\ion{O}{iii}]\,$\lambda\lambda$4363/5007\,\AA. Solutions include the reduction of oxygen (metal) abundances (leading to increased heating) and/or the presence of dust within the NLR clouds \citep{kom97}, or the presence of a significant fraction of matter-bounded clouds within the NLR \citep{bin96}. The presence of an inner high-density component to solve the temperature problem was rejected by \citet{kom97} because such a component would strongly boost [\ion{O}{i}]\,$\lambda$6300\,\AA. Indeed, inspecting the radial dependence of [\ion{O}{i}] for our sample, we do not find evidence for strongly increased [\ion{O}{i}] emission in the core or at a particular radius.
The three proposed solutions appear to be consistent with our data, even though we do not directly measure the metal abundances or the fraction of matter-bounded clouds. \subsection{Black hole masses} BH masses can be estimated using several methods. First, we estimated BH masses from the luminosity at 5100\AA~using the empirical formula found by \citet{pet04} (in Table~\ref{bh} denoted as $M_{\rm BH, Peterson\,et\,al.}$): \begin{eqnarray*} \log \frac{M_{\rm BH}}{10^8 M_{\odot}} = (-0.12\pm0.07) + (0.79\pm0.09) \cdot \log \frac{\lambda L_{\lambda} (5100 \AA)}{10^{44} \,{\rm erg\,s^{-1}}} \hspace{0.2cm} . \end{eqnarray*} To obtain $5100 \cdot L_{5100}$, we multiplied 5100\AA~by the monochromatic flux at the (redshifted) 5100\AA~continuum of the nuclear spectrum. We used the broad H$\alpha$ to H$\beta$ ratio to correct for the reddening of the luminosity (except for MCG\,-6-30-15, for which we used the continuum reddening instead). A second estimate of the central BH mass is obtained by first estimating the radius of the BLR \citep{kas05} (eq. 2): \begin{eqnarray*} \frac{R_{\rm BLR}}{10\,{\rm lt-days}} &=& (2.23\pm0.21) \cdot \left[\frac{\lambda L_{\lambda} (5100 \AA)}{10^{44} \,{\rm erg\,s^{-1}}}\right]^{0.69\pm0.05} \hspace{0.2cm} , \end{eqnarray*} and then calculating the virial reverberation mass, correcting $v_{\rm FWHM, \,H\beta}$ by the empirical factor of $\sqrt{5.5}/2$ [derived by normalising the AGN $M_{\rm BH} - \sigma_{\star}$ relationship to the $M_{\rm BH} - \sigma_{\star}$ relationship for quiescent galaxies, \citet{pet04}]: \begin{eqnarray*} \frac{M_{\rm BH}}{10^8 M_{\odot}} &=& 0.02685 \cdot \frac{R_{\rm BLR}}{10\,{\rm lt-days}} \cdot \left(\frac{{v}_{\rm FWHM,\,H\beta}}{10^3\,{\rm km\,s^{-1}}}\right)^2 \end{eqnarray*} (in Table~\ref{bh} denoted as $M_{\rm BH, Kaspi\,et\,al.}$). The FWHM of the broad H$\beta$ emission line was determined using a two-component fit to the observed H$\beta$ line, one Gaussian for the narrow H$\beta$ component and one for the broad one. For MCG\,-6-30-15, where we could not disentangle the broad and the narrow H$\beta$ component, we used a fit to the total line profile; thus, the BH mass estimate ($M_{\rm BH, Kaspi\,et\,al.}$) can be considered a lower limit (as the line is dominated by the broad component, such an approach is reasonable). Note that using the empirical factor of $\sqrt{5.5}/2$ instead of $\sqrt 3/2$ [as previously assumed for an isotropic velocity dispersion and $\sigma_{\rm line}$ = FWHM/2; e.g. \citet{net90}] results in a $\sim$1.8 times higher BH mass \citep{pet04}. Third, for comparison, we estimated BH masses from the stellar velocity dispersion [$\sigma_{\star}$; e.g. \citet{mer01}] using the formula from \citet{tre02}: \begin{eqnarray*} \log \frac{M_{\rm BH}}{M_{\odot}} = (8.13\pm0.06) + (4.02\pm0.32) \cdot \log \frac{\sigma_{\star}}{200\,{\rm km\,s^{-1}}} \hspace{0.2cm} . \end{eqnarray*} The stellar velocity dispersions for five of our six Seyfert-1 galaxies were derived by taking the average of the $\sigma_{\star}$ values obtained by \citet{gar05} through two different methods (direct fitting and cross-correlation). Note that their $\sigma_{\star}$ was measured within an aperture of $\sim$2\arcsec$\times$2\farcs5. However, we did not correct for the aperture size [see \citet{tre02} for an extensive discussion on this topic]. The results are summarised in Table~\ref{bh}. The difference in the derived BH masses for the first two methods can be as high as a factor of 3, which is within the 3$\sigma$ error.
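For illustration, the three mass estimates can be evaluated numerically as follows; this is a minimal sketch assuming the coefficients quoted above, with $\lambda L_{\lambda}$ in erg\,s$^{-1}$ and velocities in km\,s$^{-1}$, and without propagation of the quoted parameter uncertainties.
\begin{verbatim}
import numpy as np

def mbh_peterson(lam_L):          # lam_L = lambda*L_lambda(5100 A) [erg/s]
    return 1e8 * 10 ** (-0.12 + 0.79 * np.log10(lam_L / 1e44))

def mbh_kaspi(lam_L, fwhm_hb):    # fwhm_hb = broad H-beta FWHM [km/s]
    r_blr = 2.23 * (lam_L / 1e44) ** 0.69      # in units of 10 light-days
    return 1e8 * 0.02685 * r_blr * (fwhm_hb / 1e3) ** 2

def mbh_sigma(sigma_star):        # stellar velocity dispersion [km/s]
    return 10 ** (8.13 + 4.02 * np.log10(sigma_star / 200.0))

# Fairall 51: lambda*L_lambda = 1.53e44 erg/s, FWHM(H-beta) = 3330 km/s
print(mbh_peterson(1.53e44))      # ~1.1e8 M_sun
print(mbh_kaspi(1.53e44, 3330.0)) # ~0.9e8 M_sun
\end{verbatim}
For Fairall\,51, this sketch reproduces the values listed in Table~\ref{bh} to within rounding.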
$M_{\rm BH, Tremaine\,et\,al., \sigma_{\star}}$ and $M_{\rm BH, Peterson\,et\,al.}$ are in agreement to within 1$\sigma$ for most galaxies. For NGC\,526a, $M_{\rm BH, Tremaine\,et\,al., \sigma_{\star}}$ is significantly larger (by a factor $\sim$8) than $M_{\rm BH, Peterson\,et\,al.}$. On the other hand, for MCG\,-6-30-15, $M_{\rm BH, Tremaine\,et\,al., \sigma_{\star}}$ is smaller by a factor of 3 than $M_{\rm BH, Peterson\,et\,al.}$ but agrees with $M_{\rm BH, Kaspi\,et\,al.}$ within the errors. However, all these values were derived from statistical formulae from which individual galaxies may deviate considerably. We searched the literature for other BH mass estimates for our target galaxies but were successful only for MCG\,-6-30-15, for which the results agree within the errors (see Appendix~\ref{mcg6}). \begin{table*} \begin{minipage}{180mm} \caption[]{\label{bh} BH masses\footnote{The errors given for the BH masses reflect the errors of our measurements of $L_{5100}$ and FWHM$_{\rm H\beta}$, the error in $\sigma_{\star}$ as well as the statistical errors of the three different fitting relations (Peterson et al., Kaspi et al., Tremaine et al.).}} \begin{center} \begin{tabular}{lcccccccc} \\[-2.3ex] \hline \hline\\[-2.3ex] \multicolumn{1}{c}{Galaxy} & $5100 \cdot L_{5100}$ & FWHM$_{\rm H\beta}$\footnote{Broad H$\beta$ of the two-component fit. The FWHMs of the corresponding narrow H$\beta$ component of the five galaxies (excluding MCG\,-6-30-15) in the order of listing are: 444, 480, 427, 406, and 445\,km\,s$^{-1}$, respectively.} & $M_{\rm BH, Peterson\,et\,al.}$ & $M_{\rm BH, Kaspi\,et\,al.}$ & $M_{\rm BH, Tremaine\,et\,al., \sigma_{\star}}$\\[0.25ex] & (10$^{44}$ erg\,s$^{-1}$) & (km\,s$^{-1}$) & (10$^8$ $M_{\odot}$) & (10$^8$ $M_{\odot}$) & (10$^8$ $M_{\odot}$)\\[0.25ex] \hline\\[-2.3ex] Fairall\,51 & 1.53$\pm$0.2 & 3330$\pm$300 & 1.1 (+0.4-0.3) & 0.9 (+0.4-0.3) & --\footnote{No $\sigma_{\star}$ measurement available in literature}\\ NGC\,6860 & 0.44$\pm$0.05 & 5920$\pm$600 & 0.4$\pm$0.1 & 1.2 (+0.6-0.4) & 0.4 (+0.5-0.2)\\ Mrk\,915 & 1.68$\pm$0.2 & 4560$\pm$500 & 1.1 (+0.4-0.3) & 1.8 (+0.9-0.6) & 0.6 (+0.9-0.4)\\ NGC\,526a & 0.17$\pm$0.02 & --\footnote{No broad H$\beta$ line} & 0.19 (+0.09-0.07) & --$^d$ & 1.6 (+1.2-0.8)\\ MCG\,-05-13-017 & 0.41$\pm$0.04 & 5240$\pm$500 & 0.37 (+0.13-0.11) & 0.9 (+0.4-0.3) & 0.24 (+0.15-0.11)\\[0.1ex] MCG\,-6-30-15 & 0.27$\pm$0.03 & 1990$\pm$200 & 0.27 (+0.11-0.09) & 0.1 (+0.05-0.04)\footnote{This is a lower limit for the BH mass since the narrow H$\beta$ component was not removed.} & 0.08 (+0.07-0.05)\\[0.1ex] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table*} \subsection{Reddening distribution} The reddening was derived by comparing the narrow H$\alpha$/H$\beta$ emission-line ratio with its recombination value, except for MCG\,-6-30-15, where we could not disentangle the broad and narrow Balmer lines (in the central $\sim$3\arcsec). Instead, we show for this object the reddening distribution of the continuum with respect to the stellar template (see also paper I \& II). Conversely, the continuum-slope reddening is not available for the other type 1s, as no stellar template was fitted. While the nuclear reddening is given in Table~\ref{result}, we give in Table~\ref{reddening} the highest reddening value within the NLR, the distance from the centre at which it occurs, as well as the global reddening, i.e.~that derived from the total H$\alpha$ and H$\beta$ fluxes within the NLR.
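For reference, the conversion from the narrow Balmer decrement to $E_{B - V}$ can be sketched as follows. The extinction coefficients $k({\rm H}\beta) = 3.61$ and $k({\rm H}\alpha) = 2.53$ assumed here correspond to a standard Galactic $R_V = 3.1$ curve and are not necessarily identical to the curve adopted in paper I; with these coefficients the Fairall\,51 nuclear ratio yields $\approx$0.67\,mag rather than the tabulated 0.59\,mag, the difference reflecting the choice of extinction curve.
\begin{verbatim}
# Sketch: E(B-V) from the narrow Balmer decrement; the k-values assume
# a standard R_V = 3.1 curve and may differ from the one used in paper I.
import numpy as np

K_HB, K_HA, R_INT = 3.61, 2.53, 2.87   # recombination ratio 2.87

def ebv_from_balmer(f_ha_over_f_hb):
    return 2.5 / (K_HB - K_HA) * np.log10(f_ha_over_f_hb / R_INT)

print(ebv_from_balmer(5.60))   # Fairall 51 nucleus -> ~0.67 mag
\end{verbatim}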
While the highest reddening value within the NLR is on average slightly higher in Seyfert 2s (0.75$\pm$0.06) than in Seyfert 1s (0.57$\pm$0.07), we find that the reddening derived from the global Balmer decrement is comparable in Seyfert 1s and 2s (see Table~\ref{reddening} and paper II, Table 6): $<$$E_{B - V}$$>_{\rm 5 Sy1s}$ $\sim$ 0.37$\pm$0.04\,mag (excluding MCG-6-30-15) and $<$$E_{B - V}$$>_{\rm 6 Sy2s}$ $\sim$ 0.40$\pm$0.04\,mag. When excluding NGC\,526a, which can be considered a transient Seyfert-type 1.9 galaxy, the average reddening value for four Seyfert-1 galaxies is indeed the same as that for Sy2s: $<$$E_{B - V}$$>_{\rm 4 Sy1s}$ $\sim$ 0.41$\pm$0.03\,mag. This finding is contrary to the results of \citet{rhe05}, who concluded that Sy 1s have much lower (or zero) reddening than Sy 2s, based on near-IR line ratios. They speculate that the difference could be caused either by a large-scale ($>$100\,pc) torus or by intrinsically different grain-size distributions in Sy 1s and 2s. Our values rather agree with previous measurements [e.g. \citet{coh83, gas84, tsv89}]: Although these authors find slightly larger values of reddening in Sy2s, substantial reddening is present in Sy 1s as well. In Fig.~\ref{reddening1}, we show radial profiles of the reddening (for Sy 2s, see Fig.~5 in paper I and Fig.~3 in paper II). Among Sy1s, there are clear spatial gradients of the reddening, with $E_{B - V}$ peaking at or near the photometric centre in Mrk\,915 and MCG\,-6-30-15 (note that the latter was determined from the continuum); in other Sy 1s, the reddening is more even or patchy within the NLR. Among Sy 2s, the reddening clearly peaks at or near the photometric centre in IC\,5063, ESO\,362-G008 and NGC\,5643; there are also systematic spatial gradients in NGC\,1386, NGC\,7212, and NGC\,3281, though the maximum reddening does not coincide with the photometric centre. In the (online) appendix, we give the reddening of the BLR derived from the broad Balmer decrement (when discussing the objects individually). \begin{figure*} \includegraphics[width=18cm]{fig3.eps} \caption[]{\label{reddening1} \small Reddening distributions of Fairall\,51, NGC\,6860, Mrk\,915, NGC\,526a, MCG\,-05-13-017, and MCG\,-6-30-15. The edge of the NLR as determined from the diagnostic diagrams is indicated by dotted lines (NGC\,6860 and MCG\,-05-13-017).} \end{figure*} \begin{table} \begin{minipage}{80mm} \caption[] {\label{reddening} Maximum and global reddening within the NLR\footnote{We excluded MCG\,-6-30-15 as we do not have a measure of the reddening from the H$\alpha$ to H$\beta$ ratio.}} \begin{center} \begin{tabular}{lccc} \\[-2.3ex] \hline \hline\\[-2.3ex] \multicolumn{1}{c}{Galaxy} & max. $E_{B - V}$\footnote{Highest reddening value within the NLR} & Distance\footnote{Distance from the centre of highest reddening value} & global $E_{B - V}$\footnote{Derived by adding the H$\alpha$ and H$\beta$ flux within the NLR}\\ & (mag) & (\arcsec) & (mag) \\[0.25ex] \hline\\[-2.3ex] Fairall\,51 & 0.72$\pm$0.2 & 5.55 & 0.39$\pm$0.06\\ NGC\,6860 & 0.53$\pm$0.3 & -4.44 & 0.36$\pm$0.04\\ Mrk\,915 & 0.74$\pm$0.02 & -1.11 & 0.50$\pm$0.05\\ NGC\,526a & 0.34$\pm$0.07 & -5.55 & 0.22$\pm$0.02\\ MCG\,-05-13-017 & 0.54$\pm$0.1 & -3.33 & 0.38$\pm$0.04\\[0.1ex] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table} \subsection{Spatially resolved spectral diagnostics} \label{2ddiag} In paper I \& II, we described the use of diagnostic line-ratio diagrams of the three types pioneered by \citet{bal81} not only to distinguish bet\-ween emission-line object classes (e.g. Seyfert galaxies, LINERs, starbursts, transition objects), but also to probe the ``real'' NLR size, i.e. the central region which is photoionised by the AGN, and to discriminate the contribution from starbursts. Such an approach has already been adopted by other authors to study the ionisation mechanism in the circumnuclear and extranuclear regions of Seyfert galaxies [e.g.~\citet{rad98, tem03, cir05}]. It often reveals that emission-line ratios at larger distances from the central AGN change towards \ion{H}{ii} region-like ones due to an increasing contribution to the ionisation by surrounding star-forming regions. The high S/N ratio of our spectra enables us to measure line ratios for all three diagrams (``first'': [\ion{O}{iii}]/H$\beta$ versus [\ion{S}{ii}]/H$\alpha$; ``second'': [\ion{O}{iii}]/H$\beta$ versus [\ion{O}{i}]/H$\alpha$; ``third'': [\ion{O}{iii}]/H$\beta$ versus [\ion{N}{ii}]/H$\alpha$) out to several arcseconds from the nucleus (Figs.~\ref{diag1},~\ref{diag3}). The symbols are chosen such that ``O'' refers to the central spectrum, the small letters mark regions corresponding to ``-'' arcseconds from the nucleus, and the capital ones mark regions corresponding to ``+'' arcseconds from the nucleus (Table~\ref{tablediag}). In the second diagnostic diagram, the data points of the outer regions are upper limits, due to the faintness of the [\ion{O}{i}]\,$\lambda$6300\,\AA~line involved. As for NGC\,1386 and NGC\,5643 (paper I \& II), we find a clear transition between line ratios falling in the AGN regime and those typical for \ion{H}{ii} regions in two Seyfert-1 galaxies of our sample (NGC\,6860 and MCG\,-05-13-017). We present all three diagnostic diagrams of these objects in Fig.~\ref{diag1}. For the remaining four galaxies, no such transition is observed, but all emission-line ratios are typical for gas ionised by an AGN power-law continuum. As the distributions in the three diagnostic diagrams are comparable, we present only the third diagnostic diagram for these objects in Fig.~\ref{diag3}. (We do not show the diagnostic diagram for MCG\,-6-30-15 as we could not disentangle the broad and narrow Balmer emission lines in the central $\sim$3\arcsec.) We use the diagnostic diagrams to determine the NLR size. The results are summarised in Table~\ref{tablediag}.
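As a schematic example of how this classification step can be automated, the sketch below tests a line-ratio pair against the maximum-starburst demarcation of Kewley et al. (2001) in the third diagnostic diagram; this particular boundary is an assumption of the example and not necessarily the demarcation adopted in papers I \& II.
\begin{verbatim}
# Sketch: AGN/H II classification in the third diagnostic diagram,
# using the Kewley et al. (2001) maximum-starburst line (an assumed
# demarcation, not necessarily the one adopted in papers I & II).
import numpy as np

def agn_like(nii_ha, oiii_hb):
    x, y = np.log10(nii_ha), np.log10(oiii_hb)
    if x >= 0.47:                # right of the asymptote: AGN-like
        return True
    return y > 0.61 / (x - 0.47) + 1.19

# dereddened nuclear ratios of NGC 6860 (Table of line ratios):
# [N II]/H-alpha ~ 2.44/2.87, [O III]/H-beta ~ 8.53
print(agn_like(2.44 / 2.87, 8.53))   # True -> AGN regime
\end{verbatim}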
For those objects which show a transition of emission-line ratios from the central AGN region to \ion{H}{ii} regions, this method gives a measure of the NLR size without [\ion{O}{iii}] contamination from circumnuclear starbursts: Although \ion{H}{ii} regions may be present over the entire emission-line region, the AGN ionisation dominates in the innermost arcseconds, determining the size of the NLR. For both objects with such a transition, the determined NLR size is about twice as large as that measured from the HST snapshot survey of \citet{sch03a}, reflecting the low sensitivity of that survey. On the other hand, some authors have attributed all [\ion{O}{iii}] emission to the extended NLR: For MCG\,-05-13-017, \citet{fra00} give a size of $\sim$17\arcsec~for the extended NLR, while our diagnostic diagrams reveal that only the central $\pm$3\arcsec~consist of gas ionised by the central AGN. From emission line ratios, \citet{lip93} classify NGC\,6860 as a transitional object between Seyfert galaxies and starbursts. However, we can show that NGC\,6860 is a Seyfert galaxy with the NLR extending out to $r \sim 5$\arcsec~and surrounding starbursts, giving rise to [\ion{O}{iii}] emission out to $r \sim 10$\arcsec. To conclude, compared to the spatially resolved spectral diagnostics measuring the ``real'' NLR size, the apparent NLR size determined by [\ion{O}{iii}] images can be either smaller, in the case of low sensitivity, or larger, in the case of contributions from circumnuclear starbursts. For the remaining four objects, the estimated NLR size is a lower limit, pointing out the limitations of this method (see paper II for discussion). \begin{figure*} \includegraphics[width=18cm]{fig4.eps} \caption[] {\label{diag1} \small All three diagnostic diagrams for spatially-resolved emission-line ratios in NGC\,6860 ({\it left panels}) and MCG\,-05-13-017 ({\it right panels}).} \end{figure*} \begin{figure*} \includegraphics[width=18cm]{fig5.eps} \caption[] {\label{diag3} \small Emission-line ratios in the third diagnostic diagram for Fairall\,51, Mrk\,915, and NGC\,526a. All line ratios fall in the AGN regime. } \end{figure*} In paper I, we used \texttt{CLOUDY} photoionisation modelling to show that the observed distinction between \ion{H}{ii}-like and AGN-like line ratios represents a true difference in ionisation source, and that our method to measure the NLR radius is valid. These results can also be applied here. The second diagnostic diagram, including the [\ion{O}{i}] emission line, is essential to reach this conclusion, since our photoionisation calculations showed that a combination of outwards decreasing ionisation parameter and metal abundances could mimic \ion{H}{ii}-like line ratios despite an intrinsic AGN ionisation source in the [\ion{O}{iii}]/H$\beta$ versus [\ion{N}{ii}]/H$\alpha$ and the [\ion{O}{iii}]/H$\beta$ versus [\ion{S}{ii}]/H$\alpha$ diagrams. \begin{table*} \begin{minipage}{180mm} \caption[] {\label{tablediag} Results from diagnostic diagrams\footnote{The second column gives the distance from the centre to the first spectra (marked with the letters ``a'' and ``A'' in the diagnostic diagrams). In the third column, the orientation of the small and capital letters is listed. The maximum [\ion{O}{iii}] radius (S/N $>$ 3) at the same p.a.~taken from the literature is given in the fourth column. We also give the [\ion{O}{iii}] radius (S/N $>$ 3) observed from our spectra (column 5). In the sixth column, we give the radius out to which we were able to plot line ratios in the diagnostic diagrams.
In the last column, the radius of the NLR as determined from the diagnostic diagrams is given in \arcsec~and, in brackets, pc, respectively. The two objects with a clear transition between NLR and \ion{H}{ii} region are marked in bold.}} \begin{center} \begin{tabular}{lcr@{/}lcccc} \\[-2.3ex] \hline \hline\\[-2.3ex] \multicolumn{1}{c}{Galaxy} & ``a/A'' & \multicolumn{2}{c}{``a/A''} & $R_{\rm [OIII]}$ & $R_{\rm [OIII]}$ & $R_{\rm line-ratios}$ & $R_{\rm NLR}$\\ & Distance (\arcsec) & \multicolumn{2}{c}{Orientation} & Literature (\arcsec) & Our Data (\arcsec) & Our data (\arcsec) & Our Data (\arcsec, pc)\\[0.25ex] \hline\\[-2.3ex] Fairall\,51 & 1 & SE & NW & 2\footnote{ Taken from HST image of \citet{sch03a}} & \hspace{+1.5mm}9 & \hspace{+1.5mm}8 & $>$8 (2260)\\ {\bf NGC\,6860} & 1 & E & W & 3$^b$ & 10 & 10 & \hspace{+1.5mm}{\bf 5 (1465)}\\ Mrk\,915 & 1 & N & S & 2$^b$ & 12 & \hspace{+1.5mm}6 & $>$6 (2720)\\ NGC\,526a & 1 & SE & NW & \hspace{-2mm}11\footnote{Taken from groundbased image of \citet{mul96a}} & 20 & \hspace{+1.5mm}9 & $>$9 (3260)\\ {\bf MCG\,-05-13-017} & 1 & NW & SE & 1$^b$ & 17 & 11 & \hspace{+0.0mm}{\bf 3 (730)}\\ MCG\,-6-30-15 & 1.4 & NW & SE & 2$^b$ & 12 & \hspace{+1.5mm}4 & \hspace{+2.mm}4? (700)\footnote{In the central 3\arcsec~of MCG\,-6-30-15, we cannot disentangle the broad and narrow Balmer components and therefore do not determine the line ratios. In the outer region to a distance of $\pm$4\arcsec, they fall in the AGN regime.}\\[0.1ex] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table*} \subsection{Surface-brightness distribution} \label{longsur} The spatially varying luminosities in the [\ion{O}{iii}] and (narrow) H$\alpha$ emission lines as well as the continuum (at 5450-5700\,\AA) were calculated and divided by the corresponding area in square parsecs at the galaxy to allow a comparison among all galaxies in our sample (Fig.~\ref{lum1}). The surface-brightness distributions are similar to each other, centrally peaked and decreasing with distance from the nucleus. For comparison, the [\ion{O}{iii}] surface-brightness distributions from the HST images of \citet{sch03a} are shown for those objects included in the HST snapshot survey. They were derived by averaging three vectorplots along the major axis of the NLR emission (see also paper I \& II). In all objects, they clearly show the higher spatial resolution of the HST image (0\farcs05 - 0\farcs1 pix$^{-1}$) compared to the 1-2\arcsec~spatial sampling of our spectral data. However, they also reveal the low sensitivity of the HST images compared to our spectroscopy: The [\ion{O}{iii}] emission at an S/N of 3 ends significantly earlier than what can be seen in our spectral data. In some cases, the HST [\ion{O}{iii}] surface-brightness distributions reveal several subpeaks of possibly individual NLR clouds, as can already be seen in the [\ion{O}{iii}] images (Fig.~\ref{galaxies1}). These substructures are smoothed out in our $\sim$10-20 times lower spatial resolution spectra but are nevertheless still visible as a secondary or tertiary peak, mostly in emission lines. We fitted a power-law function $L = L_{0} (\frac{R}{R_0})^{\delta}$ (with projected radius $R$) to the surface-brightness distributions of [\ion{O}{iii}], H$\alpha$, and the continuum. The fitting parameters are shown in Table~\ref{fitlum} (with $L_0$ referring to $R_0$ = 100 pc from the nucleus). Only data points within the NLR were included, and the central point was excluded from the fit.
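The fit quoted in the footnote to Table~\ref{fitlum} is a linear least-squares fit in log-log space; a minimal sketch, with hypothetical data points, is:
\begin{verbatim}
# Sketch of the power-law fit log L = delta*log(R/R0) + log L0
# (R0 = 100 pc); the radii and surface brightnesses are hypothetical.
import numpy as np

R = np.array([200.0, 400.0, 800.0, 1600.0])   # projected radius [pc]
L = np.array([3e37, 4e36, 6e35, 1e35])        # [erg s^-1 pc^-2]

delta, logL0 = np.polyfit(np.log10(R / 100.0), np.log10(L), 1)
print(delta, logL0)    # slope delta and log L(100 pc)
\end{verbatim}
Fitting in log-log space effectively weights the fractional deviations of all data points equally, which is appropriate for a quantity spanning several orders of magnitude.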
The [\ion{O}{iii}] surface brightness falls faster with radius than the H$\alpha$ surface brightness and also faster than the continuum ($<$$\delta_{\rm [OIII]}$$> \sim -2.95\pm0.3$; $<$$\delta_{\rm H\alpha}$$> \sim -2.58\pm0.5$; $<$$\delta_{\rm cont}$$> \sim -1.63\pm0.2$). The average slope for both the [\ion{O}{iii}] and H$\alpha$ surface brightness gets even steeper when excluding NGC\,526a, which can be considered a transient Seyfert-type 1.9 galaxy ($<$$\delta_{\rm [OIII]}$$> \sim -3.19\pm0.2$; $<$$\delta_{\rm H\alpha}$$> \sim -2.9\pm0.4$). For all three surface-brightness distributions ([\ion{O}{iii}], H$\alpha$, continuum), Seyfert-1 galaxies show a steeper radial slope than Seyfert 2s (see paper II) ($<$$\delta_{\rm [OIII]}$$>_{\rm 6 Sy1} \sim -2.95\pm0.3$ versus $<$$\delta_{\rm [OIII]}$$>_{\rm 5 Sy2} \sim -2.24\pm0.2$; $<$$\delta_{\rm H\alpha}$$>_{\rm 5 Sy1} \sim -2.58\pm0.5$ versus $<$$\delta_{\rm H\alpha}$$>_{\rm 5 Sy2} \sim -2.16\pm0.2$; $<$$\delta_{\rm cont}$$>_{\rm 6 Sy1} \sim -1.63\pm0.2$ versus $<$$\delta_{\rm cont}$$>_{\rm 5 Sy2} \sim -1.19\pm0.1$), a difference that is even more pronounced when excluding NGC\,526a (see above). We point out that the continuum slope for the Seyfert-1 galaxies may be boosted by the AGN: we excluded only the nuclear data point but no other data points within the seeing range (1-2\arcsec), which may still be contaminated by the unresolved AGN contribution; excluding these data points would leave us with too few data points in most cases. However, to estimate this effect, we calculated the average continuum slope excluding the central 2 arcseconds for four Seyfert-1 galaxies for which 3-7 data points remain in the fit. It is still steeper than that for the Seyfert-2 galaxies: $<$$\delta_{\rm cont}$$>_{\rm 4 Sy1} \sim -1.58\pm0.2$ versus $<$$\delta_{\rm cont}$$>_{\rm 5 Sy2} \sim -1.19\pm0.1$. \begin{table*} \begin{minipage}{180mm} \caption[]{\label{fitlum} Fitting parameters of surface-brightness distributions\footnote{A linear least-squares fit was applied with $\log L = \delta \cdot \log R/R_0 + \log L_{0}$. $L_0$ corresponds to $R_0$ = 100 pc from the nucleus. The number of data points included in the fit is given in column 2 (= half the number of averaged values from both sides of the nucleus). For those objects which show a transition between line ratios typical for AGNs and \ion{H}{ii}-region like ones in the diagnostic diagrams, determining the size of the NLR, only data points within the NLR were included (NGC\,6860 and MCG\,-05-13-017).
For MCG\,-6-30-15, no deconvolution of the broad and narrow H$\alpha$ was possible.}} \begin{center} \begin{tabular}{lccccccc} \\[-2.3ex] \hline \hline\\[-2.3ex] \multicolumn{1}{c}{Galaxy} & Data Points & \hspace{+3mm}$\delta_{\rm [OIII]}$ & $\log L_{\rm [OIII], 0}$ & \hspace{+3mm}$\delta_{\rm H\alpha}$ & $\log L_{\rm H\alpha, 0}$ & \hspace{+3mm}$\delta_{\rm cont}$ & $\log L_{\rm cont, 0}$\\ & & & (erg\,s$^{-1}$\,pc$^{-2}$) & & (erg\,s$^{-1}$\,pc$^{-2}$) & & (erg\,s$^{-1}$\,pc$^{-2}$)\\[0.25ex] \hline\\[-2.3ex] Fairall\,51 & 6 & $-$3.55$\pm$0.25 & 38.14 & $-$3.16$\pm$0.48 & 37.36 & $-$2.15$\pm$0.30 & 35.79 \\ NGC\,6860 & 4 & $-$3.06$\pm$0.12 & 37.69 & $-$2.59$\pm$0.45 & 36.85 & $-$1.62$\pm$0.43 & 36.27 \\ Mrk\,915 & 5 & $-$3.92$\pm$0.32 & 39.58 & $-$3.88$\pm$0.33 & 38.89 & $-$1.72$\pm$0.45 & 35.74 \\ NGC\,526a & 8 & $-$1.72$\pm$0.19 & 37.1 & $-$1.28$\pm$0.19 & 36.13 & $-$1.71$\pm$0.06 & 34.99 \\ MCG\,-05-13-017 & 3 & $-$2.90$\pm$0.07 & 37.61 & $-$1.98$\pm$0.48 & 36.4 & $-$1.66$\pm$0.02 & 34.9 \\[0.1ex] MCG\,-6-30-15 & 3 & $-$2.52$\pm$0.41 & 36.58 & \hspace{+3mm}-- & -- & $-$0.94$\pm$0.10 & 34.49 \\[0.1ex] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table*} \begin{figure*} \includegraphics[width=18cm]{fig6.eps} \caption[] {\label{lum1} \small Surface-brightness distributions of Fairall\,51, NGC\,6860, Mrk\,915, NGC\,526a, MCG\,-05-13-017, and MCG\,-6-30-15 in [\ion{O}{iii}] (open diamonds), narrow H$\alpha$ (filled diamonds), and continuum (at 5450-5700\,\AA, stars). The [\ion{O}{iii}] surface-brightness distribution from the HST image is shown as small open squares connected by a line (HST pixel scale $\sim$ 0\farcs1\,pix$^{-1}$). Only data points with S/N $>$ 3 were included. Error bars are smaller than the symbol size. The HST image has a 10 to 20 times higher spatial resolution but a significantly lower sensitivity, which does not allow measuring the outer parts of the NLR. The edge of the NLR as determined from the diagnostic diagrams is indicated by dotted lines (NGC\,6860 and MCG\,-05-13-017). Note that NGC\,526a is not included in the HST snapshot survey by \citet{sch03a}. } \end{figure*} \subsection{Electron-density distribution} \label{longdens} Applying the classical methods outlined in \citet{ost89}, we derive the electron density as a function of distance from the nucleus using the ratio of the [\ion{S}{ii}]\,$\lambda$$\lambda$6716,6731\,\AA~pair of emission lines. We used the observed central temperature to correct for the dependence of electron density on temperature\footnote{$n_e (T) = n_e ({\rm [S\,II]\ ratio}) \cdot \sqrt{T/10000}$}. Due to the faintness of the involved [\ion{O}{iii}]\,$\lambda$4363\,\AA~emission line, we were not able to measure the temperature in the outer parts. For those objects for which no temperature was determined, we assumed $T = 10000$\,K. In all objects, the electron density is highest at the nucleus and decreases outwards down to the low-density limit (assumed to be 50\,cm$^{-3}$; Fig.~\ref{density1}). In some cases, it reveals a secondary or tertiary peak on one or both sides of the optical centre. A characteristic structure with a central peak and a smaller peak on both sides of the nucleus can be identified in four objects (Fairall\,51, NGC\,6860, NGC\,526a, MCG\,-05-13-017). The outer peaks are often close to the boundary of the NLR. These density enhancements may indicate shocks occurring at the edge of the NLR.
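As a sketch of this procedure: the densities in this work are derived with the classical methods of \citet{ost89}, but the same quantities can be obtained, for example, with the PyNeb package (an assumption of this example, not a tool used here), followed by the temperature scaling quoted in the footnote.
\begin{verbatim}
# Sketch: n_e from the [S II] 6716/6731 doublet via PyNeb (an assumed
# package, not used by the authors), then the temperature correction
# quoted above: n_e(T) = n_e([S II] ratio at 10^4 K) * sqrt(T/10^4).
import pyneb as pn

S2 = pn.Atom('S', 2)

def n_e(ratio_6716_6731, T_central=None):
    ne = S2.getTemDen(ratio_6716_6731, tem=1.0e4,
                      wave1=6716, wave2=6731)
    if T_central is not None:
        ne *= (T_central / 1.0e4) ** 0.5
    return ne

# Fairall 51 nucleus: dereddened ratio 0.73/0.84, T_e ~ 22200 K
print(n_e(0.73 / 0.84, T_central=2.22e4))   # ~1.4e3 cm^-3
\end{verbatim}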
\begin{figure*} \includegraphics[width=18cm]{fig7.eps} \caption[] {\label{density1} \small Electron density obtained from the [\ion{S}{ii}]\,$\lambda$6716\,\AA/$\lambda$6731\,\AA~ratio as a function of the distance from the nucleus for Fairall\,51, NGC\,6860, Mrk\,915, NGC\,526a, MCG\,-05-13-017, and MCG\,-6-30-15. Open symbols indicate locations where $n_{\rm e, obs}$ is in the low-density limit (assumed $\le$ 50\,cm$^{-3}$). The edge of the NLR as determined from the diagnostic diagrams is indicated by dotted lines (NGC\,6860 and MCG\,-05-13-017).} \end{figure*} In Table~\ref{fitden}, we give the results of fitting a power-law function $n_{\rm e, obs} = n_{e, 0} (\frac{R}{R_0})^{\delta}$ to the observed electron densities (with $n_{\rm e, 0}$ at $R_0$ = 100 pc from the nucleus). Note that we included only data points within the NLR. $\delta$ ranges between -0.9 and -2.3. On average, the density decreases with $R^{-1.46 \pm 0.2}$. Seyfert-1 galaxies thus tend to show a steeper slope than Seyfert-2 galaxies ($<$$\delta$$>_{\rm 5 Sy2} \sim -1.14\pm0.1$; paper II). However, the individual scatter is rather large. The temperature can be a function of distance from the central AGN. Unfortunately, we are not able to determine the temperature as a function of distance from the nucleus. In those objects where we are able to trace the electron temperature in the inner few arcseconds, it remains roughly constant. One may expect the temperature to decrease if the AGN is the only heating source. In that case, correcting with the central temperature overestimates the electron density in the outer parts. The observed decreasing slope can therefore not be artificially introduced by a wrong temperature correction. On the other hand, some authors report an increasing temperature with distance from the nucleus [e.g.~\citet{ber83}] and explain it by a decrease in electron density faster than $n_e \propto r^{-2}$. However, the average decrease of the electron density $n_{\rm e, obs}$ we observe, $\delta \sim -1.5$, is slower than that. Note that the critical densities for [\ion{S}{ii}]\,$\lambda\lambda$6716,6731\,\AA~are $\sim$1500\,cm$^{-3}$ and 3900\,cm$^{-3}$, respectively. Thus, these lines can only be used to measure the density in an environment with densities below $\sim$1500\,cm$^{-3}$. For objects in which we measure central densities in this regime, the central density may thus be underestimated. \begin{table} \begin{minipage}{80mm} \caption[] {\label{fitden} Fitting parameters of electron-density distribution\footnote{A linear least-squares fit was applied with $\log n_{\rm e, obs} = \delta \cdot \log R/R_0 + \log n_{e, 0}$. $n_{e, 0}$ corresponds to the value at $R_0$ = 100 pc distance from the centre. The number of data points included in the fit is given in column 2 (= half the number of averaged values from both sides of the nucleus).
For those objects which show a transition between line ratios typical for AGNs and \ion{H}{ii}-region like ones in the diagnostic diagrams, determining the size of the NLR, only data points within the NLR were included (NGC\,6860, MCG\,-05-13-017).}} \begin{center} \begin{tabular}{lccc} \\[-2.3ex] \hline \hline\\[-2.3ex] \multicolumn{1}{c}{Galaxy} & Data Points & $\delta$ & $\log n_{e, 0}$ (cm$^{-3}$)\\[0.25ex] \hline\\[-2.3ex] Fairall\,51 & 6 & -2.10$\pm$1.50 & 4.1 \\ NGC\,6860 & 4 & -1.06$\pm$0.22 & 3.4 \\ Mrk\,915 & 3 & -1.20$\pm$0.40 & 3.5 \\ NGC\,526a & 8 & -1.15$\pm$0.50 & 3.1 \\ MCG\,-05-13-017 & 3 & -0.94$\pm$0.14 & 3.5\\[0.1ex] MCG\,-6-30-15 & 3 & -2.32$\pm$1.42 & 3.4\\[0.1ex] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table} \subsection{Ionisation-parameter distribution} \label{longioni} \begin{figure*} \includegraphics[width=18cm]{fig8.eps} \caption[] {\label{ioni1} \small Ionisation parameter derived from the [\ion{O}{ii}]/[\ion{O}{iii}] ratio as a function of the distance from the nucleus for Fairall\,51, NGC\,6860, NGC\,526a, MCG\,-05-13-017, and MCG\,-6-30-15 (open symbols: $n_H$ = 100\,cm$^{-3}$, filled ones: $n_H$ = 1000\,cm$^{-3}$). The edge of the NLR as determined from the diagnostic diagrams is indicated by dotted lines (NGC\,6860 and MCG\,-05-13-017).} \end{figure*} The line ratio [\ion{O}{ii}]$\lambda$3727\,\AA/[\ion{O}{iii}]\,$\lambda$5007\,\AA~can be used to estimate the value of the ionisation parameter $U$ [e.g.~\citet{pen90, kom97}]. Here, we followed the method described in paper I. The ionisation parameter peaks at the optical nucleus and decreases with distance. In NGC\,6860, a distinct secondary peak is visible. We fitted a power-law function $U_{\log (n_e) = 2, obs} = U_{0} (\frac{R}{R_0})^{\delta}$ to the observed ionisation parameter (with $R_0$ = 100 pc from the nucleus; Table~\ref{fitioni}). We included only data points within the NLR. $\delta$ ranges between -0.6 and -1. As for the electron density, Seyfert-1 galaxies tend to show a steeper slope than Seyfert-2 galaxies ($\delta_{\rm 5 Sy1} \sim -0.81\pm0.07$ versus $\delta_{\rm 2 Sy2} \sim -0.51 \pm 0.08$; paper II). However, the individual scatter is rather large, and only two Seyfert-2 galaxies were included in this comparison. \begin{table} \begin{minipage}{80mm} \caption[]{\label{fitioni} Fitting parameters of ionisation-parameter distribution\footnote{A linear least-squares fit was applied with $\log U_{\log (n_e) = 2, obs} = \delta \cdot \log R/R_0 + \log U_{0}$. $U_{0}$ corresponds to the value at $R_0$ = 100 pc distance from the centre. The number of data points included in the fit is given in column 2 (= half the number of averaged values from both sides of the nucleus). For those objects which show a transition between line ratios typical for AGNs and \ion{H}{ii}-region like ones in the diagnostic diagrams, determining the size of the NLR, only data points within the NLR were included (NGC\,6860 and MCG\,-05-13-017). For Mrk\,915, the [\ion{O}{ii}] line was not covered by the observations.
}} \begin{center} \begin{tabular}{lccc} \\[-2.3ex] \hline \hline\\[-2.3ex] \multicolumn{1}{c}{Galaxy} & Data Points & $\delta$ & $\log U_{0}$\\[0.25ex] \hline\\[-2.3ex] Fairall\,51 & 6 & -0.81$\pm$0.12 & -1.9 \\ NGC\,6860 & 4 & -0.62$\pm$0.25 & -2.2 \\ NGC\,526a & 8 & -0.69$\pm$0.10 & -2.1 \\ MCG\,-05-13-017 & 3 & -1.01$\pm$0.26 & -1.9 \\ MCG\,-6-30-15\footnote{Correction with reddening determined from continuum slope} & 3 & -0.90$\pm$0.20 & -2.7\\[0.1ex] \hline\\[-2.3ex] \end{tabular} \end{center} \end{minipage} \end{table} \subsection{Velocities} \label{longvel} We derived the NLR line-of-sight velocity curve by taking the average of velocity centroids derived by fitting Gaussians to the H$\alpha$ and [\ion{N}{ii}] as well as the [\ion{O}{iii}] emission lines. In addition, given the high S/N ratio of our spectra, we were able to trace the stellar rotation curves from Gaussian fits to the stellar absorption line \ion{Ca}{ii} K for two objects (before subtraction of the stellar template) throughout the whole region, as this line is not blended with emission lines. The results (with spectral lines used for individual objects indicated) are shown in Fig.~\ref{vel1}. We estimated the uncertainty in determining the velocity peaks to be $\sim$20\,km\,s$^{-1}$ for both the emission and absorption lines. Note that for Fairall\,51, the [\ion{N}{ii}] emission line is blended with the strong H$\alpha$ emission, and we show the velocity curve derived from the H$\alpha$ peak alone. For MCG\,-6-30-15, the H$\alpha$ and [\ion{N}{ii}] lines are strongly blended, and no separation is possible. As pointed out in paper II, the interpretation of the NLR velocity curves can be quite complex and requires modelling of the 3D structure which is beyond the scope of this paper. Here, we limit ourselves to pointing out that all the galaxies show large-scale velocity gradients across their NLR. Based on our preliminary modelling, we believe that, to zeroth order, they can be explained by rotation in at least four galaxies: Fairall\,51, NGC\,6860, Mrk\,915, and MCG\,-05-13-017. The situation is more complex in NGC\,526a and MCG\,-6-30-15. We will present detailed modelling of velocity fields in a separate paper. \begin{figure*} \includegraphics[width=18cm]{fig9.eps} \caption[] {\label{vel1} \small Velocity fields of Fairall\,51, NGC\,6860, Mrk\,915, NGC\,526a, MCG\,-05-13-017, and MCG\,-6-30-15. The velocities of the NLR were derived from the average value of the peak wavelengths of the H$\alpha$ and [\ion{N}{ii}] emission lines (filled diamonds), with the exceptions of Fairall\,51 and MCG\,-6-30-15, where H$\alpha$ and [\ion{O}{iii}] were used, respectively. The [\ion{O}{iii}] velocities are also shown for all objects (open squares). The stellar velocities were determined from the \ion{Ca}{ii}\,K absorption line ``peak wavelength'' as seen in the ``raw'' spectrum (open diamonds) if visible at a good S/N. The edge of the NLR as determined from the diagnostic diagrams is indicated by dotted lines (NGC\,6860 and MCG\,-05-13-017).} \end{figure*} \section{Conclusions} We use high-sensitivity spatially-resolved spectra, obtained along the extended [\ion{O}{iii}] emission with the VLT and the NTT, to study the BLR and NLR of six Seyfert-1 galaxies. The nuclear spectra reveal the typical strong NLR emission from oxygen at different ionisation states, lines from ionised nitrogen and sulphur, as well as Balmer lines. In addition, broad H$\alpha$ emission is seen in all objects, and broad H$\beta$ emission in all but NGC\,526a, classifying the latter as Sy1.9.
In most objects, high-excitation iron lines are seen in the central spectra, originating from the powerful and hard ionisation source in the centre. High-ionisation emission lines as well as those with high critical densities tend to be stronger in Seyfert-1 galaxies. We determine the electron temperature and ionisation parameter in the optical nucleus and find that they are in general higher in type-1 Seyferts than in type 2s. From the continuum luminosity at 5100\AA~as well as the FWHM of the broad H$\beta$ line, we estimate BH masses and compare them to those derived from $\sigma_\star$ (as taken from the literature). The Seyfert-1 galaxies in our sample cover a BH mass range of $\sim$1 $\cdot$ 10$^{7}$ to $\sim$1 $\cdot$ 10$^{8}$ M$_{\odot}$. In addition to the Seyfert-2 galaxies NGC\,1386 and NGC\,5643 already discussed in paper I \& II, we observe a transition of emission-line ratios from the central AGN region to \ion{H}{ii} region-like ratios in two objects (NGC\,6860 and MCG\,-05-13-017), when plotting line ratios from our spatially resolved spectra in diagnostic diagrams. This transition occurs at a distance of several arcseconds on both sides of the optical nucleus and is observed in all three diagnostic diagrams, i.e.~including the second diagnostic diagram involving the [\ion{O}{i}] emission line. The most probable explanation for this transition is that the stellar ionisation field starts to dominate that of the AGN. This conclusion is supported by \texttt{CLOUDY} photoionisation modelling presented in paper I. We are thus able to determine the radius of the NLR in these objects to 700-1500\,pc, independent of sensitivity and excluding [\ion{O}{iii}] contamination from circumnuclear starbursts. In former spectroscopic studies, the observed [\ion{O}{iii}] has often been attributed to the extended NLR. We can show that at least part of this ``extended NLR'' emission is actually predominantly powered by \ion{H}{ii} regions and that only the gas in the central few arcseconds is indeed photoionised by the AGN. For the other four objects, all line ratios fall in the AGN regime in all three diagnostic diagrams. Thus, the determined NLR size (700-3300\,pc) is a lower limit, limited by either the S/N of our data or the lack of a strong surrounding stellar ionisation field. We derive reddening, surface brightness, electron density, and ionisation parameter within the NLR as a function of projected distance from the nucleus. Both electron density and ionisation parameter decrease with radius. In general, the decrease is faster in Seyfert-1 galaxies than in type 2s. We discuss the results for each object individually and compare them to literature data (Appendix). Comparing the results presented here to those of six Seyfert-2 galaxies from paper II shows that both types have in general similar NLR properties. However, there are differences in emission-line strength as well as in the observed slope of spatially varying parameters. The origin of these differences will be discussed in a subsequent paper on the basis of the unified model taking into account line-of-sight integrations. Applying the methods presented here to a larger sample of Seyfert galaxies will help to measure the NLR size and thus verify the NLR size-luminosity relation \citep{ben02}.
Our results have shown that although [\ion{O}{iii}] imaging is less time intensive than the spectroscopic method, it often yields an NLR size that is either too small, in the case of low sensitivity, or too large, in the case of circumnuclear \ion{H}{ii} regions contributing to the [\ion{O}{iii}] emission. \begin{acknowledgements} We thank the anonymous referee for valuable suggestions. N.B. is grateful for financial support by the ``Studienstiftung des deutschen Volkes''. B.J. acknowledges the support of the Research Training Network ``Euro3D-Promoting 3D Spectroscopy in Europe'' (European Commission, Human Potential Network Contract No. HPRN-CT-2002-00305) and of the Institutional Research Plan No. AV0Z10030501 of the Academy of Sciences of the Czech Republic. M.H. is supported by the ``Nordrhein-Westf\"alische Akademie der Wissenschaften''. We thank Pierre Ferruit for providing and helping us with the \texttt{fit/spec} line-fitting tool. Henrique Schmitt kindly provided the continuum-subtracted HST [\ion{O}{iii}] images of several Seyfert galaxies in this sample. This research has made use of the NASA/IPAC Extragalactic Database (NED), operated by the Jet Propulsion Laboratory, Caltech, under contract with NASA. \end{acknowledgements}
\section*{Abstract} \noindent\normalsize{Entropy stabilization has garnered significant attention as a new approach to designing novel materials. Much of the work in this area has focused on bulk ceramic processing, leaving entropy-stabilized thin films relatively underexplored. Following an extensive multi-variable investigation of polycrystalline (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O thin films deposited via pulsed laser deposition (PLD), it is shown here that substrate temperature and deposition pressure have strong and repeatable effects on film texture and lattice parameter. Further analysis shows that films deposited at lower temperatures and under lower oxygen chamber pressure are $\sim$40x more electrically conductive than otherwise identical films grown at higher temperature and pressure. This electronic conductivity is hypothesized to be the result of polaron hopping mediated by transition metal valence changes which compensate for oxygen off-stoichiometry. \newline \textbf{Keywords:} Entropy Stabilized, Resistivity, Lattice, SEM, TEM, Morphology } \end{tabular} \end{@twocolumnfalse} \vspace{0.6cm} ] \renewcommand*\rmdefault{bch}\normalfont\upshape \rmfamily \section*{} \vspace{-1cm} \footnotetext{\textit{$^{a}$~National Renewable Energy Laboratory, 16000 Denver West Pkwy, Golden, Colorado, United States. E-mail: andriy.zakutayev@nrel.gov }} \footnotetext{\textit{$^{b}$~Colorado School of Mines, 1500 Illinois St, Golden, Colorado, United States. E-mail: geoff.brennecka@mines.edu}} \section{Introduction} High entropy alloys are a well-established field of work with applications in metallic property optimization\cite{HEA,HEAfracRes,HEAtensile}. In 2015, entropy stabilization was first applied to ceramic oxides\cite{Rost2015}, and a five-cation material system, (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O, was found to crystallize reversibly into a uniform rock salt structure when heated to a sufficiently high temperature ($>$875$^{\circ}$C for the equimolar composition); when quenched to room temperature, the material system retains the rock salt structure. Cation distribution within the entropy stabilized rock salt structure has been shown to be truly random and homogeneous over long range, with some local distortions in the oxygen anion sublattice in order to accommodate the different cation sizes\cite{EXAFS} (Fig.~\ref{fgr:BeachBalls}). This entropy stabilized oxide has since been the focus of many studies, and versions of it have been reported to work well as a Li-ion conductor\cite{IonCon}, an anode for Li-ion batteries\cite{LiStorage}, and a redox material for thermochemical water splitting\cite{WaterSplit}. Extremely high dielectric constants have also been reported\cite{DConst}. \newline The majority of these studies have been carried out on samples processed as bulk ceramics\cite{bulk1,Rost2015,DConst,JTdistort}; thin film studies are less common. The seminal work on these oxides\cite{RostThesis} showed that both pO$_2$ and substrate temperature are inversely related to the out-of-plane tetragonal lattice parameter in epitaxial thin films of (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O. Another thin film study used pulsed laser deposition (PLD) to show that entropy stabilized thin films of (Mg$_{x}$Co$_{x}$Ni$_{x}$Cu$_{x}$Zn$_{x}$)O can be tailored to engineer long-range magnetic order and show promise for enhancing exchange coupling\cite{ExchCoupling}.
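For context, the driving force behind the stabilization described above can be quantified by the ideal configurational entropy of the equimolar five-cation sublattice (neglecting anion-sublattice and vibrational contributions): \begin{equation*} \Delta S_{\mathrm{config}} = -R\sum_{i=1}^{5} x_i \ln x_i = R \ln 5 \approx 13.4\ \mathrm{J\,mol^{-1}\,K^{-1}}, \end{equation*} so that at the $\approx$875$^{\circ}$C ($\approx$1148\,K) transition temperature, the entropic contribution to the free energy is $-T\Delta S_{\mathrm{config}} \approx -15.4$\,kJ\,mol$^{-1}$.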
PLD has also been used to explore structural stability of the (Mg$_{x}$Co$_{x}$Ni$_{x}$Cu$_{x}$Zn$_{x}$)O system as a function of growth parameters\cite{EpiTFs}. Depositing films using high-energy methods like PLD provides access to much higher effective temperatures (on the order of 30,000 K) than bulk processing allows. Note that the term ``effective temperature'' is used here to describe the estimated temperatures of the plasma generated by laser ablation and not as a measure of cation disorder\cite{Ndione}, as cations are presumably fully disordered in this material system. \newline In the current study, PLD is used to fabricate non-epitaxial, chemically uniform, single-phase rock salt thin films of (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O on non-lattice-matched substrates. Changing substrate temperature and chamber partial pressure of oxygen enabled tailoring of film texture, lattice parameter, and morphology as well as the resulting through-thickness electrical conductivity. Varying deposition temperature and pressure affects the growth of the deposited thin film, favoring (111) or (002) texture depending on pressure, and reducing the lattice constant and changing the morphology at lower deposition temperature. Electrical current density measurements through the thicknesses of the films indicate that when (111) crystallographic texturing is coupled with a reduction in lattice constant, the conductivity of the films is significantly higher than under all other conditions. \begin{figure}[h] \centering \includegraphics[height=4cm]{BeachBalls.png} \caption{{The films in this study exhibit a rock salt crystal structure in which all five cations are randomly distributed on the cation sublattice with oxygen ions filling the anion sublattice.}} \label{fgr:BeachBalls} \end{figure} \section{Experimental Details} \subsection{Targets} Stoichiometric (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O targets were fabricated by mixing equimolar amounts of MgO [Alfa Aesar, 99.99\%], CoO [Alfa Aesar, 99.7\%], NiO [Sigma Aldrich, 99.995\%], CuO [Alfa Aesar, 99.995\%], and ZnO [Alfa Aesar, 99.99\%]. These were roller milled with Y-stabilized ZrO$_2$ milling media for 6 hours. The powder mix was pressed in a 1'' die to 80 MPa using a Carver hydraulic press without binder. Resulting pellets were sintered in air using the following profile: $5^\circ$C/min ramp to $900^\circ$C, hold 6 hrs, 10$^\circ$C/min ramp to $1100^\circ$C, hold 8 hrs. After the 8 hour hold, each pellet was removed from the 1100$^{\circ}$C furnace and air quenched to room temperature. This sintering profile resulted in sufficiently dense ($\approx$87\%) and uniform targets that were confirmed by x-ray diffraction to be polycrystalline single-phase rock salt without preferential orientation (Fig. \ref{fgr:T3XRD}). A CuO target was fabricated by pressing and sintering 99.995\% pure powder purchased from Alfa Aesar in a 1'' die to 80 MPa using a Carver hydraulic press without binder. The resulting pellet was sintered in air at $900^\circ$C for 6 hours with a ramp rate of $5^\circ$C per minute. The resulting CuO target was approximately 88\% dense. A PLD target of ZnO (99.9\%) was purchased from Plasma Materials.
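For illustration, the equimolar batching described above amounts to weighing each oxide in proportion to its molar mass. The following minimal Python sketch (illustrative only, not part of the original experimental workflow; molar masses are standard handbook values) computes the powder masses for a batch of a chosen total mass:
\begin{verbatim}
# Hypothetical batching helper for an equimolar five-oxide target.
# Molar masses (g/mol) are standard handbook values.
MOLAR_MASS = {"MgO": 40.30, "CoO": 74.93, "NiO": 74.69,
              "CuO": 79.55, "ZnO": 81.38}

def equimolar_batch(total_mass_g=20.0):
    """Split a total powder mass into equimolar amounts of each oxide.
    With equal mole fractions, each oxide's mass fraction is
    proportional to its molar mass."""
    total_molar = sum(MOLAR_MASS.values())
    return {oxide: total_mass_g * m / total_molar
            for oxide, m in MOLAR_MASS.items()}

for oxide, grams in equimolar_batch(20.0).items():
    print(f"{oxide}: {grams:.2f} g")
\end{verbatim}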
\begin{figure}[h] \centering \includegraphics[height=7cm]{T3XRD.png} \caption{XRD confirms the polycrystalline rock salt structure of the (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O PLD target with a lattice constant of 4.30 \r{A}. The five precursor oxide reference patterns\cite{MgO,CoO,NiO,CuO,ZnO} are also shown to confirm that no residual precursor phases are observed.} \label{fgr:T3XRD} \end{figure} \subsection{Pulsed Laser Deposition} Single-phase films were deposited using a KrF (248 nm) excimer laser with pulse rates between 10 Hz and 30 Hz and laser energies ranging from 100 to 350 mJ. Film structure was found to be rather insensitive to laser parameters, so a pulse rate of 20 Hz and a laser energy of 200 mJ were selected as standard for continued study. Both Zn and Cu from the (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O target ablated in slightly lower concentrations than the other cations in the target, so separate CuO and ZnO targets were used to supplement depositions and grow stoichiometric films, with ZnO requiring approximately 1.2\% more laser pulses and CuO requiring approximately 3.5\% more. Substrate temperature and chamber (oxygen) pressure combinations of 200$^\circ$C, 300$^\circ$C, or 450$^\circ$C and 50 mT or 100 mT were explored. The rest of this report focuses on samples deposited with a substrate temperature of $200^\circ$C or $450^\circ$C and oxygen pressure set to 50 mT or 100 mT; trends reported here are consistent across the rest of the deposition space. Substrates used in this study were borosilicate [Eagle2000] glass (EXG) and Pilkington NSG TEC 15 glass with approximately 340 nm of fluorinated tin oxide coating (FTO, 13-15 $\Omega$/sq). \subsection{Sample Characterization} Samples were characterized using a variety of methods to validate their structure and composition before analyzing properties. Each thin film was measured in 44 different locations in order to confirm consistency of the film across the 2'' x 2'' substrate. Crystal structure and phase purity of PLD targets were confirmed using a PANalytical PW3040 X-ray Diffractometer (XRD) and Cu-k$\alpha$ radiation. XRD data for films were collected at the Stanford Linear Accelerator on Beamline 1-5 (measurements were calibrated with a LaB$_6$ standard; however, BL 1-5 does not correct for sample tilt, and the data presented here are approximately 0.08 Q low) and with a Bruker D8 Discover with Cu-k$\alpha$ radiation. Composition was confirmed using a Fischer XUV x-ray fluorescence tool (XRF) in addition to energy dispersive spectroscopy (EDS) data collected using an FEI Talos F200X transmission electron microscope (TEM), which was also used for bright field and high angle annular dark field imaging. Top electrical contacts were deposited through a shadow mask using a Temescal FC2000 e-beam tool, with 10 nm of Ti and 50 nm of Pt. Current density measurements were taken with a Keithley 2400 source meter. Top-down micrographs of the microstructure were taken with an FEI Nova 630 SEM, while cross sections were imaged using a Hitachi 4800 SEM in order to manage sample charging. The open-source COMBIgor package for commercial Igor Pro\cite{Combigor} was used for data analysis and visualization. \section{Results and Discussion} \subsection{Crystal structure} Arrays of XRF measurements (Table \ref{tbl:XRF}) confirm that cations are homogeneously distributed in these thin films, at least on a mesoscopic scale.
On average, samples are all slightly lower in Cu than the other measurable cations; Mg could not be measured using XRF because its atomic number is too low for the instrument to detect. The slightly lower amounts of Cu do not alter the structure and are consistent across all depositions, so this deficiency does not appear to be the cause of the property trends reported here. \begin{table}[h] \small \caption{\ Average cation concentration from 44 points across each sample per XRF. } \label{tbl:XRF} \begin{tabular*}{0.48\textwidth}{@{\extracolsep{\fill}}llllll} \hline Sample Type & Co & Ni & Cu & Zn & Mg\\ (at\%) &&&&&\\ \hline HTLP & 26 $\pm$ 0.4 & 25 $\pm$ 0.4 & 23 $\pm$ 0.4 & 25 $\pm$ 0.4 & -- \\ HTHP & 26 $\pm$ 0.5 & 24 $\pm$ 0.5 & 23 $\pm$ 0.5 & 26 $\pm$ 0.5 & -- \\ LTLP & 25 $\pm$ 0.4 & 25 $\pm$ 0.5 & 23 $\pm$ 0.5 & 26 $\pm$ 0.4 & -- \\ \hline \end{tabular*} \end{table} XRD confirms that the rock salt structure of the films matches that of the desired rock salt ESO (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O (Fig. \ref{fgr:SLACpolyxtal}), similar to the target (Fig. \ref{fgr:T3XRD}), and suggests that all constituent cations are most likely randomly and homogeneously dispersed throughout the cation FCC sublattice \cite{EXAFS,RostThesis}. Films grown on EXG at $200^\circ$C under 50 mT pO$_2$ exhibit moderate levels of texturing in the (111) growth direction (Fig. \ref{fgr:SLACpolyxtal}), where the corrected cubic lattice constant \emph{a} is 4.22 \r{A}. It is interesting to note the difference between the thin film lattice constant and that of the (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O target (4.30 \r{A}), indicating that thin film growth on an amorphous substrate exhibits some lattice compression relative to the target material in all directions. Figure~\ref{fgr:PvTXRD} shows changes in film texture with substrate temperature and chamber pressure of oxygen. Note that all of the lattice constants for samples shown here are smaller than the bulk ESO target lattice constant (4.3 \r{A}, Fig. \ref{fgr:T3XRD}). The samples in this study do not show the specific out-of-plane lattice compression identified in earlier work on this material system\cite{RostThesis} because the films here are not epitaxially constrained by the substrates; rather, they show a cubic compression where both the \emph{a} and \emph{c} lattice constants are slightly compressed relative to the bulk ceramic. \newline \begin{figure}[h] \centering \includegraphics[height=11cm]{SLAC.png} \caption{\textbf{Top:}\ High resolution XRD data collected at SLAC on BL 1-5 show polycrystalline growth of a cubic rock salt with moderate (111) crystallographic texture on a borosilicate glass substrate. \textbf{Bottom:}\ Integrated counts across approximately 4 degrees of Chi through the center of the measured range emphasize the (111) texture of the sample.} \label{fgr:SLACpolyxtal} \end{figure} \begin{figure}[h] \centering \includegraphics[height=12cm]{PvsTXRDreduced.png} \caption{Crystallographic texturing was found to change as a result of deposition temperature and partial pressure of oxygen. Overall, samples grown at 450$^\circ$C have a tendency to grow with an (002) texture when grown on either EXG or FTO substrates and at both 50 mT and 100 mT pO$_2$. Samples grown at 300$^\circ$C show (111) texturing at both 50 mT and 100 mT and on either EXG or FTO substrates.
Samples grown at 200$^\circ$C with 50 mT or 100 mT all show (111) texturing along with peaks at higher $2\theta$, indicating a smaller lattice constant for these films. Samples grown on EXG glass versus FTO-coated glass are designated by ``EXG'' and ``FTO''.} \label{fgr:PvTXRD} \end{figure} Figure~\ref{fgr:3types} illustrates the different types of growth observed in this study and their respective definitions. Three distinct growth types have been identified based on texture and lattice constant and will be the focus for the remainder of this paper. Each of these three types of films is grown on FTO substrates; they will be referred to as ``HTHP'' (Higher Temperature, Higher Pressure), ``LTLP'' (Lower Temperature, Lower Pressure), and ``HTLP'' (Higher Temperature, Lower Pressure) and are defined as follows: HTHP and HTLP both have a lattice constant of 4.22 \r{A}, while LTLP samples have a lattice constant of 4.19 \r{A}. The Lotgering factor\cite{LF,JonesLF} is used here to quantify the degree of crystallographic texturing; LF = 0 is purely random, LF = 100\% is perfectly textured. All LF values in this paper were calculated from an integration across 4 degrees of Chi from 2D detectors. Films designated as ``HTHP'' show a (111) texture with an LF = 31\% and are grown at 100 mT pO$_2$ and 450$^\circ$C; ``LTLP'' films show (111) texturing as well, but have an LF = 41\% and are grown at 50 mT pO$_2$ and 200$^\circ$C; ``HTLP'' films show (002) texture with an LF = 36\% and are grown at 50 mT pO$_2$ and 450$^\circ$C. This information is summarized in Table \ref{tbl:Summary}.\newline The definition of lattice strain\cite{LNOeprops} is \( \frac{(c-c_{bulk})}{c_{bulk}} \), where $c$ is the out-of-plane lattice constant and $c_{bulk}$ is the bulk lattice constant associated with strain-free polycrystalline growth (here we use the lattice constant from the ESO target of 4.30 \r{A} as our bulk value). We can apply this model to ESO data as a way to quantify the degree of compression happening in these films relative to bulk samples. The comparative lattice constant for individual thin film samples is calculated for a cubic crystal lattice and derived from individual XRD scans using the 2$\theta$ positions of the centers of at least two peaks from different crystal plane families, as sketched below. The lattice compression of the LTLP samples indicates some fundamental difference between these samples and the HTHP and HTLP samples. All LTLP samples grown on FTO have an isotropic compression greater than $2.7\%$, while HTLP and HTHP samples have an isotropic compression no greater than $2.1\%$. \begin{figure}[ht] \centering \includegraphics[height=5.3cm]{threeTypes.png} \caption{Two dimensional XRD data collected using a Bruker D8 Discover shows the lattice shift in the ESO rock salt peaks relative to the FTO peaks. ESO peaks are identified by solid grey lines and FTO peaks by dotted grey lines. The LTLP samples show a shift to higher 2$\theta$ for ESO peaks, verifying a smaller lattice constant.} \label{fgr:3types} \end{figure}
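To make the preceding metrics concrete, the following Python sketch (an illustration, not the analysis code used in this work; the peak positions and intensity fractions below are hypothetical values consistent with a 4.22 \r{A} cubic cell) derives a cubic lattice constant from indexed Cu-k$\alpha$ peaks via Bragg's law and evaluates the lattice strain and Lotgering factor:
\begin{verbatim}
import math

WAVELENGTH = 1.5406  # Cu-K-alpha1 wavelength, Angstroms

def lattice_constant(two_theta_deg, hkl):
    """Cubic lattice constant from one indexed reflection (Bragg's law)."""
    h, k, l = hkl
    d = WAVELENGTH / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))
    return d * math.sqrt(h**2 + k**2 + l**2)

def lattice_strain(a_film, a_bulk=4.30):
    """Strain relative to the bulk ESO target (negative = compression)."""
    return (a_film - a_bulk) / a_bulk

def lotgering_factor(p_film, p_random):
    """LF = (P - P0)/(1 - P0), where P is the fraction of diffracted
    intensity in the textured peak family and P0 is the same fraction
    for a randomly oriented reference."""
    return (p_film - p_random) / (1.0 - p_random)

# Hypothetical peaks from two plane families, averaged as in the text:
peaks = [(36.86, (1, 1, 1)), (42.83, (2, 0, 0))]
a = sum(lattice_constant(tt, hkl) for tt, hkl in peaks) / len(peaks)
print(f"a = {a:.3f} A, strain = {lattice_strain(a):+.2%}")
print(f"LF = {lotgering_factor(0.55, 0.25):.0%}")  # made-up intensities
\end{verbatim}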
\subsection{Microstructure} SEM images revealed notably different microstructures among these three growth types. The HTLP textured samples, grown at $450^\circ$C and 50 mT O$_2$, have almost circular grains growing in a largely columnar fashion, as seen in Fig.~\ref{fgr:SEM3t}a,d. HTHP samples, grown at $450^\circ$C and 100 mT O$_2$, also show columnar grains; however, these grains exhibit a triangular shape at the surface consistent with the cube corners of a (111)-textured rock salt, shown in Fig.~\ref{fgr:SEM3t}b,e. LTLP samples, grown at 200$^\circ$C and 50 mT O$_2$, do not exhibit columnar growth, and grains show no consistent morphology, as seen in Fig.~\ref{fgr:SEM3t}c,f. It is not unusual for films grown at higher temperatures to exhibit more regular columnar growth, as the higher substrate temperature provides additional mobility for atoms to settle into a somewhat lower energy configuration before being fully ``quenched''. The HTHP films are grown under a higher partial pressure of oxygen and a higher substrate temperature. Higher pressure leads to more oxidizing conditions during growth, while the higher temperature leads to more reducing conditions according to the ideal gas law, but may also give the material the kinetic energy needed to incorporate oxygen into the lattice. The kinetic limitations of the lower substrate temperature coupled with a lower growth pO$_2$ suggest that oxygen incorporation into the lattice may be different in such samples, leading to different oxygen off-stoichiometry. \begin{figure}[h] \centering \includegraphics[height=5.2cm]{SEM3textures.png} \caption{\textbf{a-c)} SEM shows different surface morphologies for each of the three crystallographic orientations. \textbf{d-f)} Cross-sectional SEM shows that the HTLP and HTHP samples have a fairly columnar grain structure, while the LTLP textured samples have overlapping grains and less uniformity across the film thickness. *Note: LTLP films were grown with 30\% more laser pulses, making them thicker, as seen above.} \label{fgr:SEM3t} \end{figure} Uniformity of the cation distributions is shown with an EDS map across a HAADF image in Fig. \ref{fgr:TEMsp}. The microstructures and z-contrast seen in Fig. \ref{fgr:TEMsp} are consistent throughout the thickness of each film, suggesting that any variation in electrical response between these three types of samples arises from something other than cation segregation. A TEM and atom probe tomography study on bulk samples of the same composition\cite{Diercks} showed that annealing could lead to Cu segregation, but no evidence of such segregation is observed here. \newline \begin{figure}[h] \centering \includegraphics[height=5cm]{HAADFcompare.png} \caption{Film microstructure as shown in HAADF images is consistent with the columnar grains seen in SEM. \textbf{a)} Samples with an HTLP texture show a higher degree of surface roughness, in addition to a lower degree of collimation than the HTHP samples but a higher degree of collimation than the LTLP samples. \textbf{b)} HTHP samples show a high degree of collimation in grain structure; \textbf{c)} LTLP samples show grains to be less ordered than in the HTHP and HTLP samples. \textbf{d-f)} Cation distributions for all three types of crystallographic texturing are uniform and homogeneous as shown by EDS mapping.} \label{fgr:TEMsp} \end{figure} It has been established\cite{RostThesis} that phase decomposition of bulk samples begins in an annealing process around $600^\circ$C, which may suggest some likelihood of phase segregation for film samples grown at higher temperatures. Such segregation is accompanied by XRD peak broadening\cite{JTdistort}. Films in this study exhibit consistent full width at half maximum values for XRD peaks across all deposition temperatures (Fig.
\ref{fgr:noPhaseDecomp}). Thus, neither XRD nor TEM (imaging and EDS) shows any evidence of cation segregation in these films. \newline \begin{figure}[h] \centering \includegraphics[height=5cm]{noPhaseDecomp.png} \caption{None of the three sample types shows a change in XRD peak FWHM, indicating that cation segregation at higher temperatures is unlikely to be the underlying cause of the observed differences in resistivity.} \label{fgr:noPhaseDecomp} \end{figure} \subsection{Electrical Properties} Films of approximately 1 $\mu$m thickness were grown on FTO-coated borosilicate glass in order to enable through-thickness current density measurements. Measuring the conductivity of these films as a function of applied voltage reveals electrical behavior for the LTLP samples that differs from the other two types of films. Measurements collected on HTLP samples reveal a nonlinear resistivity on the order of 3.42$\pm$0.175 M$\Omega$cm. HTHP samples show a nonlinear resistivity on the order of 1.2$\pm$0.05 M$\Omega$cm. Samples with the LTLP textured structure are much more conductive, with an average measured resistivity of 51$\pm$0.65 k$\Omega$cm, as seen in Fig. \ref{fgr:IVdata}. The nonlinearity was initially hypothesized to be a result of self-heating of the sample during the measurement. After a series of repeated current density measurements over a range of different frequencies and durations, this appears unlikely to be the cause because the measurements proved consistent across all acquisition timing parameters. \newline \begin{figure}[] \centering \includegraphics[height=6cm]{threeIV.png} \caption{The resistivity of the HTLP and HTHP samples is much higher than that of the LTLP textured samples. Multiple lines for each sample type are the result of each of the 44 points mapped across each film. Reported values are an average of at least 30 measurements for each sample type. Inset: Schematic of measurement configuration. } \label{fgr:IVdata} \end{figure} Samples deposited at the lowest substrate temperature ($200^\circ$C) and lowest oxygen pressure (50 mT) were the most electrically conductive. Because these films were grown at the lowest substrate temperature, they are the most kinetically limited sample set, and it is likely that they grew with an oxygen off-stoichiometry, which could be compensated by a change in average cation valence. In this case, the most likely mechanism of electrical conductivity is polaron hopping; the hopping distance is shorter for polarons in the compressed lattice, making the hop more likely to occur and increasing electrical conductivity. There is also a clear correlation between higher electrical conductivity (Fig. \ref{fgr:IVdata}) and non-columnar microstructure (Fig. \ref{fgr:SEM3t}), but the exact origin of this correlation is not clear. The results presented here are consistent with previously published data, as summarized in Table \ref{tbl:Summary}. One publication has reported the electrical resistivity\cite{DConst} of bulk samples to be 5 M$\Omega$cm, which is consistent with the resistive samples in the present study. Presumably, the bulk samples from this previous publication\cite{DConst} also have a random microstructure similar to that of the LTLP samples, but the improved density of a thin film over a bulk processed sample would increase electrical conductivity and explain the lower resistivity values measured in our thin film samples.
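For reference, the resistivity values quoted above follow from the measured I--V response and the contact geometry via $\rho = R\,A/t$. A minimal Python sketch (illustrative only; the pad diameter and the example current are assumed values, not taken from the experimental section) is:
\begin{verbatim}
import math

def through_thickness_resistivity(voltage_V, current_A,
                                  pad_diameter_cm=0.05,
                                  thickness_cm=1.0e-4):
    """Resistivity (Ohm*cm) for vertical transport between the FTO
    back contact and a circular Ti/Pt top pad: rho = R * A / t."""
    resistance = voltage_V / current_A            # Ohms, one I-V point
    area = math.pi * (pad_diameter_cm / 2.0)**2   # pad area, cm^2
    return resistance * area / thickness_cm

# Hypothetical example: 1 V driving 5.8 uA through a 0.5 mm pad on a
# 1 um film gives rho ~ 3.4e6 Ohm*cm, i.e. the HTLP (MOhm*cm) range.
print(f"{through_thickness_resistivity(1.0, 5.8e-6):.3g} Ohm*cm")
\end{verbatim}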
Other publications do not present resistivity data; however, they do show a consistent lattice constant across non-epitaxial samples. The epitaxial samples grown on MgO$_{(001)}$ substrates are reported to have a much smaller lattice constant (4.15 \AA). Nanopowder samples show some tetragonal distortion not seen in other bulk studies.\newline \section{Conclusions} Single phase rock salt (Mg$_{0.2}$Co$_{0.2}$Ni$_{0.2}$Cu$_{0.2}$Zn$_{0.2}$)O samples can be deposited efficiently using PLD, and temperature and partial pressure of oxygen can be used to control crystallographic texturing. Thin film samples grown on FTO with a lattice constant of 4.22~\AA\ and either (111) or (002) texturing (HTHP or HTLP, respectively) demonstrate a nonlinear electrical resistivity between 1 and 3 M$\Omega$cm, while (111) textured films with a lattice constant of 4.19~\AA\ (LTLP) are significantly more conductive, with resistivity values on the order of 50 k$\Omega$cm. The primary difference in microstructure for resistive versus conductive films is that the conductive films have a more random grain structure with nucleation occurring throughout the films, while the resistive films show columnar grains with nucleation occurring primarily at the substrate. It is unclear whether this is correlation or causation. The smaller lattice constant also indicates a shorter hopping distance for any induced charge in the material system. This, coupled with a likely oxygen off-stoichiometry during growth, makes polaron hopping a likely candidate for the conduction mechanism. \newline Ultimately, this work indicates that thin film samples grown on FTO at 50 mT pO$_2$ and a substrate temperature of 200$^\circ$C are approximately 40x more electrically conductive than samples grown at higher substrate temperatures, which have a more organized microstructure. The data presented here are consistent with previous reports on this material, but also show that manipulating the microstructure and crystal lattice can have dramatic effects on electrical conductivity. \begin{table*}[t] \small \caption{\ Summary of growth conditions and characteristics for all samples grown on FTO substrates, compared to previous works. *Note: LF and microstructure information for previous works have been approximated based on data provided in the articles.} \label{tbl:Summary} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lllllllll} \hline Sample & Pressure & Temperature & Crystal & Lattice & Lotgering & Average Lattice & Microstructure & Resistivity \\ Type & (mT pO$_2$) & ($^\circ$C) & Texture & Constant (\AA) & Factor & Compression (\%) & &(k${\Omega}$cm) \\ \hline HTLP & 50 & 450 & (002) & 4.22 & 36\% & 1.81 $\pm$ 0.24 & Columnar & 3400 $\pm$ 175 \\ HTHP & 100 & 450 & (111) & 4.22 & 31\% & 1.59 $\pm$ 0.13 & Columnar & 1200 $\pm$ 50 \\ LTLP & 50 & 200 & (111) & 4.19 & 41\% & 2.75 $\pm$ 0.06 & Random & 51 $\pm$ 0.65 \\ Bulk Polyxtal\cite{DConst} & 20\% & 1000 & N/A & 4.22 & 0\% & 1.86 & Random & 5000 \\ Nanoparticles\cite{LiStorage} & 20\% & 1000 & N/A & a=4.17; c=4.2 & 0\% & 2.33 & Random & --\\ MgO(001) epi\cite{ExchCoupling} & 50 & 300 & (002) & 4.15 & 100\% & 3.49 & Epitaxial & -- \\ TF on Al$_2$O$_3$\cite{RostThesis} & 50 & 500 & N/A & 4.25 & 0\% & 1.16 & Random & -- \\ \end{tabular*} \end{table*} \section*{Conflicts of interest} There are no conflicts to declare. \section*{Acknowledgements} This work was authored in part by the National Renewable Energy Laboratory (NREL), operated by Alliance for Sustainable Energy LLC, for the U.S.
Department of Energy (DOE) under contract no. DE-AC36-08GO28308. Work at NREL by A.Z. was funded by the Office of Energy Efficiency and Renewable Energy (EERE), under the Fuel Cell Technologies Office (FCTO), as a part of the HydroGEN Energy Materials Network (EMN) consortium. Work at CSM by G.B. and V.J. was partially supported by the National Science Foundation (DMR-1555015 and DMREF-1534503). The use of the Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, was supported by DOE-SC-BES under Contract No. DE-AC02-76SF00515. We would like to thank K. Talley and A. Mis for assistance in data processing with COMBIgor; P. Walker for helping to streamline the FIB liftout process; K. Gann for technical support; A. Mehta and M.S. Perera for support on SLAC's BL 1-5; and B. Gorman for TEM instruction. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government. \balance
\usepackage{amssymb} \usepackage{times} \usepackage{graphics,latexsym} \usepackage{graphicx} \usepackage{amsmath} \usepackage{tabularx} \usepackage{color} \newcommand{\am}[1]{\mbox{{\em \rm #1}}} \newcommand{\derpar}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bfm}[1]{\mbox{\boldmath $ #1 $}} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \journal{Mech. Res. Comm.} \begin{document} \begin{frontmatter} \title{Effects of non-linear rheology on \\ the electrospinning process: a model study} \author[*]{Giuseppe Pontrelli\footnote{Corresponding author.}} \author[*]{Daniele Gentili} \author[*]{Ivan Coluzza} \vspace{20mm} \author[**]{Dario Pisignano} \author[*]{Sauro Succi} \address[*]{Istituto per le Applicazioni del Calcolo - CNR, Via dei Taurini 19-00185, Rome, Italy \\ tel. +39 0649270927, \;\; fax +39 064404306 \\ {\tt Email: giuseppe.pontrelli@gmail.com} \bigskip \\} \address[**]{Dipartimento di Matematica e Fisica ``E. De Giorgi'', University of Salento \& National Nanotechnology Laboratory of Istituto Nanoscienze - CNR, Via Arnesano-73100 Lecce, Italy} \begin{abstract} We develop an analytical bead-spring model to investigate the role of non-linear rheology on the dynamics of electrified jets in the early stage of the electrospinning process. Qualitative arguments and parameter studies, as well as numerical simulations, show that the elongation of the charged jet filament is significantly reduced in the presence of a non-zero yield stress. This may have beneficial implications for the optimal design of future electrospinning experiments. \end{abstract} \begin{keyword} Electrospinning, Herschel-Bulkley, viscoelasticity, stable jet \end{keyword} \end{frontmatter} \section{Introduction} The dynamics of charged polymers in external fields is an important problem in non-equilibrium thermodynamics, with many applications in science and engineering \cite{doshi1995electrospinning, andra}. In particular, such dynamics lies at the heart of electrospinning experiments, whereby charged polymer jets are electrospun to produce nanosized fibers; these are used for several applications, as reinforcing elements in composite materials and as building blocks of non-wetting surface layers on ordinary textiles, of very thin polymeric separation membranes, and of nanoelectronic and nanophotonic devices ~\cite{pisi, agar, arin, mann}. In a typical electrospinning experiment, a charged polymer liquid is ejected at the nozzle and is accelerated by an externally applied electrostatic field until it reaches down to a charged plate, where the fibers are finally collected. During the process, two different regimes take place: an initial stable phase, where the steady jet is accelerated by the field in a straight path away from the spinneret (the ejecting apparatus), and a second stage, in which an electrostatically driven bending instability arises before the jet reaches down to a collector (most often a grounded or biased plane), where the fibers are finally deposited.
In particular, any small disturbance misaligning the jet axis, either a mechanical vibration at the nozzle or hydrodynamic perturbations within the experimental apparatus, would lead the jet into a region of chaotic bending instability \cite{reneker2000bending}. The stretching of the electrically driven jet is thus governed by the competition between electrostatics and fluid viscoelastic rheology. The prime goal of electrospinning experiments is to minimize the radius of the collected fibers. By a simple argument of mass conservation, this is tantamount to maximizing the jet length by the time it reaches the collecting plane. Consequently, the bending instability is a desirable effect, as long as it can be kept under control in experiments. By the same argument, it is therefore of interest to minimize the length of the initial stable jet region. Analyzing such stable region is also relevant for an effective comparison with results coming from electrospinning experiments studied in real-time by means of high-speed cameras \cite{camp} or X-ray phase-contrast imaging \cite{green}. \par In the last years, with the upsurge of interest in nanotechnology, electrospinning has been the object of comprehensive studies, from both modelling~\cite{carroll2006electrospinning} and experimental viewpoints~\cite{theron2005multiple} (for a review see~\cite{carroll2008nanofibers}). Two families of models have been developed: the first treats the jet filament as obeying the equations of continuum mechanics ~\cite{spivak2000model,feng2002stretching,feng2003stretching, hohman2001electrospinningI, hohman2001electrospinningII}. Within the second one, the jet is viewed as a series of discrete elements obeying the equations of Newtonian mechanics ~\cite{reneker2000bending, yarin2001bending}. More precisely, the jet is regarded as a series of charged beads, connected by viscoelastic springs. Both approaches typically assume Newtonian fluids, with a linear strain-stress constitutive relation. On the other hand, in recent times, the use of viscoelastic fluids has also been investigated in a number of papers, both theoretical and experimental, for the case of power-law~\cite{feng2002stretching,spivak2000model} and other viscoelastic fluids~\cite{carroll2006electrospinning,carroll2011discretized}, with special attention to the instability region. \par In this paper, we investigate the effects of Herschel-Bulkley non-Newtonian rheology on the early stage of the jet dynamics. The main finding is that the jet elongation during such initial stable phase can be considerably slowed down for the case of yield-stress fluids. As a result, the use of yield-stress fluids might prove beneficial for the design of future electrospinning experiments. \section{The model problem} \setcounter{equation}{0} Let us consider the electrically driven liquid jet in the electrospinning experiment. We confine our attention to the initial rectilinear stable jet region and, for simplicity, all variables are assumed to be uniform across the radial section of the jet, and to vary along $z$ only, thus configuring a one-dimensional model. The filament is modelled by two charged beads ({\em dimer}) of mass $m$ and charge $e$, separated by a distance $l$, and subjected to the external electrical field $V_0/h$, $h$ being the distance of the collector plate from the injection point (Fig.~\ref{fig:setup}) and $V_0$ the applied voltage.
The deformation of the fluid filament is governed by the combined action of electrostatic and viscoelastic forces (gravity and surface tension are neglected), so that the momentum equation reads \cite{reneker2000bending}: \begin{equation} m {dv \over dt}=- {e^2 \over l^2} + {e V_0 \over h} + \pi a^2 \sigma\,, \label{eqn1} \end{equation} where $a$ is the cross-section radius of the bead and $v$ the velocity, defined as: \begin{equation} {dl \over dt}=-v \label{eqn2} \end{equation} For a viscoelastic fluid, the stress $\sigma$ is governed by the following equation: \begin{equation} {d \sigma \over dt}=-\frac{1}{\tau}\left(\sigma-\sigma_{HB}\right)\,, \label{eqn3} \end{equation} where $\tau$ is the time relaxation constant and $\sigma_{HB}$ is the Herschel-Bulkley stress \cite{huang1998herschel,burgos1999determination}, which reads \begin{equation} \sigma_{HB}=\sigma_{Y}+ K \left(\frac{1}{l}\frac{dl}{dt}\right)^{n}\, \label{eqHB} \end{equation} In the previous expression, $\sigma_{Y}$ is the yield stress, $n$ is the power-law index and $\mu_0=K \left|\displaystyle{1 \over l} \displaystyle{dl \over dt} \right| ^{n-1}$ is the effective viscosity, with $K$ a prefactor having dimensions $\mathrm{g\,s^{n-2}\,cm^{-1}}$; the case $n=1$ and $\sigma_Y=0$ recovers the Maxwell fluid model, with $\mu_0 \equiv const.$ In the stress eqns. (\ref{eqn3})--(\ref{eqHB}), the Maxwell, the power-law and the Herschel-Bulkley models are combined. A large class of polymeric and industrial fluids are described by $\sigma_Y>0$ (Bingham fluid) and $n < 1$ (shear-thinning fluid) or $n > 1$ (shear-thickening fluid) \cite{bird1987dynamics, pontrelli97, succi09}. It is expedient to recast the above eqns. in nondimensional form by defining a length scale and a reference stress as in \cite{reneker2000bending}: \begin{equation} L= \left({e^2 \over \pi a_0^2G}\right)^{1 \over 2} \qquad \qquad G={\mu_0 \over \tau} \end{equation} with $a_0$ the initial radius. With no loss of generality, we assume the initial length of the dimer to be $L$. Space is scaled in units of the equilibrium length $L$ at which Coulomb repulsion matches the reference viscoelastic stress $G$, while time is scaled with the relaxation time $\tau$. The following nondimensional groups: \begin{equation} Q= {e^2 \mu_0^2 \over L^3 m G^2} \qquad\qquad V= {e V_0 \mu_0^2 \over h L m G^2} \qquad\qquad F= {\pi a_0^2 \mu_0^2 \over L m G} \end{equation} \noindent measure the relative strength of the Coulomb, electrical, and viscoelastic forces, respectively \cite{reneker2000bending}. Note that the above scaling implies $F=Q$. By setting $W=-v$ and applying mass conservation: \begin{displaymath} \pi a^2 l= \pi a_0^2 L \end{displaymath} the above equations (\ref{eqn1})--(\ref{eqHB}) take the following nondimensional form: \begin{align} \label{DYN} & {dl \over dt}= W \nonumber \\ & {dW \over dt}= V + {Q/l^2} - {F \sigma/l} \nonumber \\ & {d \sigma \over dt}= \sigma_Y+ ({W/l})^n - \sigma \end{align} with initial conditions: $l(0)=1, W(0)=0, \sigma(0)=0$. Eqs. (\ref{DYN}) describe a dynamical system with non-linear dissipation for $n \ne 1$. It can conveniently be pictured as a particle rolling down the potential energy landscape $E(l) = Q/l - Vl$. Since the conservative potential is purely repulsive, the time-asymptotic state of the system is escape to infinity, i.e. $l \to \infty$ as $t \to \infty$. However, because the system also experiences a non-linear dissipation, its transient dynamics is non-trivial.
This may become relevant to electrospinning experiments, as they take place in set-ups of about one meter in size or below, so that transient effects dominate the scene. Before discussing numerical results, we first present a qualitative analysis of the problem. \section{Qualitative analysis} \setcounter{equation}{0} In the following we discuss some metastable and asymptotic regimes associated with the set of eqs. (\ref{DYN}), for $\sigma_Y=0$ and $n=1$ for simplicity, even though the qualitative conclusions apply to the general case as well (sect. 4). \vspace{20mm} \underline{Accelerated expansion: free-fall} In the absence of any Coulomb interaction and viscous drag ($Q=F=0$), the particle would experience a free-fall regime \begin{equation} \label{FREE} l(t) = l_0 + W_0 t + Vt^2/2 = l_0 + Vt^2/2 \propto t^2 \end{equation} for $t \gg 1$. The same regime would be attained whenever Coulomb repulsion comes into exact balance with the stress pullback, i.e., \begin{equation} \sigma = \frac{Q}{Fl}= \frac{1}{l} \end{equation} Since $ \displaystyle{d^2 l \over dt^2}= V$, one has $\sigma \to 1/t^2$ as $l \rightarrow \infty$, configuring again accelerated free-fall as the time-asymptotic regime of the system. \bigskip \underline{Linear expansion} Another possible scenario is the linear escape, i.e. $\displaystyle{dW \over dt}=0$, yielding: \begin{equation} \label{LINESC} l(t) \propto t \end{equation} This is obtained whenever the viscous drag exceeds the Coulomb repulsion by just the amount supplied by the external field $V$, namely: \begin{equation} \label{SIGMA1} \sigma = \frac{Q/l+Vl}{F}= \frac{1}{l} + \frac{V}{Q} l \end{equation} leaving $\displaystyle{d^2 l \over dt^2}=0$. This shows that, in order to sustain a linear growth, the stress should diverge linearly with the dimer elongation. Again, this is incompatible with any asymptotic state of the stress evolution. However, if $V$ is sufficiently small, namely: \begin{equation} l < l_Q = (Q/V)^{1/2} \label{yhj} \end{equation} such a regime may be realized on a transient basis. Note that $l_Q$ designates the length below which Coulomb repulsion prevails over the external field. As we shall show, the solution $l \sim t$, $\sigma \sim 1/l \sim 1/t$ can indeed be attained as a transient quasi-steady-state regime. Typical experimental values are $Q/V \sim 10$, so that $l_Q \sim 3-10$, indicating that such a regime could indeed be attained in experiments with elongations $l< 1-3$ cm (see section 4). Note that the value of $l_Q$ is independent of the rheological model, although the latter affects the time it takes to reach the condition $l=l_Q$. To analyze this issue, let us consider the steady-state limit of the stress equation for a generic value of the exponent $n$, i.e. \begin{equation} \sigma = \left(\displaystyle{1 \over l}\displaystyle{dl \over dt} \right)^{n} \end{equation} The solution $l(t) \sim t$ delivers $\sigma \sim t^{-n} \sim l^{-n}$, which is indeed compatible with the condition (\ref{SIGMA1}) for the case $n=1$. Of course this is not an exact solution, since $\displaystyle{d \sigma \over dt} = 0$ implies $\sigma=const$ in time. However, it can be realized as a quasi-solution, in the sense that $\displaystyle{1 \over \sigma}\displaystyle{d \sigma \over dt} \ll 1$. \\ The above analysis is relevant because electrospinning experiments take place under finite-size and finite-time non-equilibrium conditions, and it is therefore of great interest to understand the transition time between the two regimes.
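As a worked instance of eqn. (\ref{yhj}), using the parameter values adopted in the numerical section below ($Q=12$, $V=2$, $L \simeq 0.3$ cm), one finds $l_Q=\sqrt{Q/V}=\sqrt{6}\simeq 2.45$ in units of $L$, corresponding to roughly $0.7$ cm in physical units, consistent with the centimeter-scale elongations quoted above.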
In particular, the bending instability leading to three-dimensional helicoidal structures sets in after an initial stage in which the polymer jet falls down in a linear configuration. Since the goal of the experiment is to maximize the length $l$ of the polymer fiber by the time it reaches the collector plate, it appears instrumental to trigger the bending instability as soon as possible, so as to minimize the elongation of the initial stable jet. The present study is essentially a parametric analysis of this initial stage. \section{Numerical results} \setcounter{equation}{0} We have integrated the system of eqs. (\ref{DYN}) with a velocity-Verlet-like time-marching scheme: \begin{eqnarray*} \hat l = l + W \Delta t + a \frac{\Delta t^2}{2} \nonumber\\ \sigma_{HB} =\left(\frac{2 W}{\hat l + l} \right)^{n} +\sigma_Y \\ \hat \sigma = e^{-\Delta t} \;\sigma + (1-e^{-\Delta t}) \sigma_{HB} ; \nonumber \\ \hat a = V-F {\hat \sigma \over \hat l} + {Q \over \hat l^2} \nonumber \\ \hat W = W + \left(\frac{\hat a +a}{2}\right) \Delta t \nonumber \end{eqnarray*} with $\Delta t$ the time step and initial conditions $l(0)=1, \, W(0)=\sigma(0)=0$. Energy conservation has been checked and found to hold up to the sixth digit for simulations lasting up to $10^6$ time steps. \bigskip\\ \underline{Reference results in Maxwell fluid} As a reference case, we first consider the Maxwellian case $n=1, \sigma_Y=0$; the typical values of experimental relevance are $L \sim 0.3$ cm, $\tau=10^{-2}$ s, yielding $Q=F=12$, $V=2$. In fig.~\ref{fig:newt} we report the time evolution of the elongation $l(t)$ and the velocity $W(t)$ (left), along with the stress $\sigma(t)$ and the strain rate $W/l$ (right). Three dynamic regimes are apparent. First, an early transient, characterized by the build-up of velocity under the Coulomb drive and, to a much lesser extent, the external field as well. As a result, the strain rate $W/l$ begins to grow, thus promoting a build-up of the stress, which peaks at about $t=1.5$. Subsequently, the stress starts to decay due to viscoelastic relaxation. During the burst of the stress, lasting up to about $t=2$, the velocity comes to a nearly constant value, realizing the linear regime discussed in the previous section. However, such a regime cannot last long, because the stress falls down very rapidly in time and is no longer able to sustain the expanding ``pressure'' of the electrostatic interactions. As clearly visible in figure \ref{fig:forces}, the Coulomb repulsion falls down faster than the viscoelastic drag, and consequently the subsequent dynamics is dominated by the external field, which promotes the quadratic scaling $l \sim t^2$, clearly observed in Fig. \ref{fig:newt} at $t \gg 2$. When both Coulomb repulsion and viscoelastic drag fall down to negligible values, which is seen to occur at about $t=5$ (50 ms in physical time), the free-fall regime sets in. At this time, the elongation has reached about $l=30$, corresponding to approximately $10$ cm in physical units. Taking $h=20$ cm as a reference value for the typical size of the experiment, it is observed that the condition $l=h$ would be reached roughly at $t=8$, namely $0.1$ s, corresponding to a mean velocity of about $2$ m/s, fairly comparable with the experimental values. It is now of interest to explore to what extent such a picture is affected by the fluid material properties.
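For reproducibility, the scheme above is straightforward to transcribe into code. The following Python sketch (an illustrative transcription under the stated nondimensionalization, not the authors' original code) advances $(l, W, \sigma)$ with the same update rules:
\begin{verbatim}
import math

def integrate_dimer(Q=12.0, F=12.0, V=2.0, n=1.0, sigma_Y=0.0,
                    dt=1.0e-3, t_max=10.0):
    """Velocity-Verlet-like integration of the nondimensional dimer
    equations: dl/dt = W, dW/dt = V + Q/l^2 - F*sigma/l,
    d(sigma)/dt = sigma_Y + (W/l)^n - sigma."""
    l, W, sigma = 1.0, 0.0, 0.0               # initial conditions
    a = V + Q / l**2 - F * sigma / l          # initial acceleration
    out = [(0.0, l, W, sigma)]
    steps = int(round(t_max / dt))
    for k in range(1, steps + 1):
        l_new = l + W * dt + 0.5 * a * dt**2
        # Herschel-Bulkley stress at the midpoint elongation
        # (W >= 0 along these trajectories, so the power is defined)
        sigma_hb = sigma_Y + (2.0 * W / (l_new + l))**n
        # integrating-factor update of the stress relaxation equation
        sigma_new = (math.exp(-dt) * sigma
                     + (1.0 - math.exp(-dt)) * sigma_hb)
        a_new = V + Q / l_new**2 - F * sigma_new / l_new
        W_new = W + 0.5 * (a + a_new) * dt
        l, W, sigma, a = l_new, W_new, sigma_new, a_new
        out.append((k * dt, l, W, sigma))
    return out

# Maxwellian reference case: per the text, l should reach ~30 by t ~ 5
print(integrate_dimer()[5000])   # (t, l, W, sigma) at t = 5
\end{verbatim}
Setting $n\neq1$ and/or $\sigma_Y>0$ reproduces the parameter scans discussed below.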
In particular, we wish to investigate whether a non-Newtonian rheology is able to slow down the elongation dynamics, thereby reducing the stable jet length. \bigskip \\ \underline{Effect of shear-thinning and shear-thickening} To this purpose, the above simulations have been repeated for different values of $n$ in the range $0.2 \le n \le 1.8$, still keeping $\sigma_Y=0$. In Fig. \ref{fig:beta}, we report $l,W,\sigma$ and the total force $F_{tot}$ as a function of time, for $n=0.2,1,1.8$. As one can see, the former case delivers the fastest growth. This can be understood by noting that in the early stage of the evolution $W/l>1$ (see Fig.~\ref{fig:beta}), hence $n<1$ lowers (resp. $n>1$ raises) the stress contribution as compared to the Maxwellian case, $n=1$. Note that in the transient $1<t<2$, at $n=1.8$, the viscoelastic drag is able to produce a mildly decreasing velocity $W(t)$. However, such a mild decrease is very ephemeral, and is quickly replaced by a linear growth as for the other values of $n$. It is worth noting that for the case $n=0.2$, the stress remains substantial even at large times, which is reasonable because at $t \gg 1$ we have $W/l \ll 1$. However, the impact on the dimer elongation, $l$, is very mild, because the factor $1/l$ makes the stress fairly negligible as compared to the external field. The final result is that the overall effect of $n$ on the dimer elongation is very mild, of the order of ten percent at most. Hence the power-law model at zero yield stress has a negligible effect on the jet length. \bigskip\\ \underline{Influence of the yield stress} In the following, we investigate the effect of a non-zero yield stress, with fixed $n=1.8$ for convenience. The condition $\sigma_Y>0$ is expected to slow down the growth of $l(t)$, because the stress decays to a non-zero value even in the infinite-time limit. Fig.~\ref{fig:yield1} shows the time evolution of $l,W,\sigma$ and $F_{tot}$, for $\sigma_Y=0.2,0.5,0.8$. From this figure, it is readily appreciated that, at variance with the previous case, increasing values of $\sigma_Y$ turn out to produce a significant slow-down of the fluid elongation. In all cases, the velocity $W(t)$ shows a decreasing trend in the transient $1<t<2$, coming very close to zero at $t \sim 2.5$ for the case $\sigma_Y=0.8$. The onset of the free-fall regime is significantly delayed, and consequently, so is the evolution of the dimer elongation, which at $t=10$ reaches the values $l=80,60,30$ for $\sigma_Y=0.2,0.5,0.8$, respectively. The latter case corresponds to a physical length of about $10$ cm, about three times shorter than in the Maxwell case, corresponding to about $l=90$ at $t=10$. Hence, we conclude that yield-stress fluids may experience a noticeable reduction of the stable jet length in actual electrospinning experiments. \bigskip \noindent \underline{Effective Forces} Finally, it is of interest to inspect the effective force exerted upon the dimer as a function of its elongation. By effective force, we mean the sum of the Coulomb repulsion and the viscoelastic drag, namely \begin{equation} F_{eff}(l) = Q \left(\frac{1}{l^2} - \frac{\sigma(l)}{l}\right) \end{equation} This expression may indeed provide useful input to coarse-grained models for three-dimensional simulations of the jet dynamics. The effective force for different values of the exponent $n$ and yield-stress values $\sigma_Y$ is shown in Fig. 6, left and right panels.
From these figures, it is appreciated that the behavior of $F_{eff}$ as a function of the elongation $l$ is similar to its dependence on time, although more peaked around the minimum. Such a minimum occurs slightly above the crossover length $l_X$, at which Coulomb repulsion and viscoelastic drag come to an exact balance, i.e. \begin{equation} l_X \sigma(l_X) = 1 \end{equation} For $l<l_X$ Coulomb repulsion is dominant, thus driving the stretching of the jet. Subsequently, at $l>l_X$ the attractive component (drag) takes over, so that $F_{eff}(l)<0$, until a minimum is reached, $F_{min} \equiv F(l_{min})<0$. Finally, for $l>l_{min}$, the force starts growing again to attain its asymptotic zero value at $l \to \infty$. For the present choice of parameters, the minimum length $l_{min}$ is not far from the characteristic length $l_Q=\left(\displaystyle\frac{Q}{V}\right)^{1/2}$ (see eqn. (\ref{yhj})). With the numerical values in point, $Q=12$ and $V=2$, we compute $l_Q \sim 2.44$. Furthermore, such minimum length $l_{min}$ appears to be a decreasing function of $n$ at a given $\sigma_Y$ and independent of $\sigma_Y$ at a given $n$. It is interesting to note that, upon rescaling the elongation with the computed values $l_{min}(n)$, the three curves with $n=0.2,1.0,1.8$ collapse to a universal function $F_{eff}(l) = A_n f(l/l_{min}(n))$, where $A_n$ is a scaling amplitude which depends on the exponent $n$. Such a universal function might prove useful in the parametrization of three-dimensional interactions. \section{Conclusions} Summarizing, we have developed a model for the flow of electrically charged viscoelastic fluids, with the main aim of investigating the role of non-Newtonian rheology on the stretching properties of electrically charged jets. The simulations show good agreement with the theoretical analysis and provide a qualitative understanding of the role of viscoelasticity in the early stage of the electrospinning experiment. The main finding is that yield-stress fluids may lead to a significant reduction of the linear extension of the jet in the initial stage of the electrospinning process. The present findings may also prove useful to set up the model parameters that control the efficiency of the process and the quality of the spun fibers. \bigskip\\ \underline{Acknowledgments} \\ The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement n. 306357 (ERC Starting Grant ``NANO-JETS''). One of the authors (S.S.) wishes to thank the Erwin Schroedinger Institute in Vienna, where this work was initiated, for kind hospitality and financial support through the ESI Senior Fellow program. \bibliographystyle{elsarticle-harv}
\section{Introduction} This is the second in a series of papers addressing Bressoud's conjecture. In 1980, Bressoud \cite{Bressoud-1980} established an analytic generalization of the Rogers-Ramanujan identities by employing Andrews' generalization of Watson's $q$-analogue of Whipple's theorem. \begin{thm}[Bressoud \cite{Bressoud-1980}]\label{BRESSOUD-EQN} Let $\lambda,\ k,\ r,\ \eta$ and $j=0$ or $1$ be integers such that $(2k+j)/2> r\geq\lambda\geq0$. Then \begin{equation}\label{Bressoud-conj-e} \begin{split} &\frac{(-q^{\alpha_1},\ldots,-q^{\alpha_\lambda};q^{\eta})_\infty(q^{\eta(r-\frac{\lambda}{2})},q^{\eta (2k-r-\frac{\lambda}{2}+j)},q^{\eta(2k-\lambda+j)} ;q^{\eta(2k-\lambda+j)})_\infty}{(q^\eta;q^\eta)_\infty}\\[10pt] &=\sum_{N_1\geq\cdots \geq N_{k-1}\geq0}\frac{q^{\eta(N_1^2+\cdots+N_{k-1}^2+N_r +\cdots+N_{k-1})}}{(q^\eta;q^\eta)_{N_1-N_2} \cdots(q^\eta;q^\eta)_{N_{k-2}-N_{k-1}}(q^{(2-j)\eta};q^{(2-j)\eta})_{N_{k-1}}}\\[5pt] &\hskip 1cm\times\prod_{s=1}^{\lambda}(-q^{\eta-\alpha_s-\eta N_s};q^\eta)_{N_s}\prod_{s=2}^{\lambda}(-q^{\eta -\alpha_s+\eta N_{s-1}};q^\eta)_\infty. \end{split} \end{equation} \end{thm} Throughout this paper, we assume that $\alpha_1,\alpha_2,\ldots, \alpha_\lambda$ and $\eta$ are integers such that \begin{equation*}\label{cond-alpha} 0<\alpha_1<\cdots<\alpha_\lambda<\eta, \quad \text{and} \quad \alpha_i=\eta-\alpha_{\lambda+1-i}\quad \text{for} \quad 1\leq i\leq \lambda. \end{equation*} Here and in the sequel, we adopt the standard notation \cite{Andrews-1976}: \[(a;q)_\infty=\prod_{i=0}^{\infty}(1-aq^i), \quad (a;q)_n=\frac{(a;q)_\infty}{(aq^n;q)_\infty},\] and \[(a_1,a_2,\ldots,a_m;q)_\infty=(a_1;q)_\infty(a_2;q)_\infty\cdots(a_m;q)_\infty. \] To give a combinatorial interpretation of \eqref{Bressoud-conj-e}, Bressoud introduced two partition functions. \begin{defi}[Bressoud \cite{Bressoud-1980}] Let $\lambda,\ k,\ r$ and $j=0$ or $1$ be integers such that $(2k+j)/2> r\geq\lambda\geq0$. Define the partition function $A_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$ to be the number of partitions of $n$ into parts congruent to $0,\alpha_1,\ldots,\alpha_\lambda\pmod\eta$ such that \begin{itemize} \item If $\lambda$ is even, then only multiples of $\eta$ may be repeated and no part is congruent to $0,\pm\eta(r-\lambda/2) \pmod{\eta(2k-\lambda+j)}${\rm{;}} \item If $\lambda$ is odd and $j=1$, then only multiples of ${\eta}/{2}$ may be repeated, no part is congruent to $\eta\pmod{2\eta}$, and no part is congruent to $0,\pm{\eta}(2r-\lambda)/{2} \pmod {\eta(2k-\lambda+1)}${\rm{;}} \item If $\lambda$ is odd and $j=0$, then only multiples of ${\eta}/{2}$ which are not congruent to ${\eta}(2k-\lambda)/{2}\pmod{\eta(2k-\lambda)}$ may be repeated, no part is congruent to $\eta\pmod{2\eta}$, no part is congruent to $0\pmod{2\eta(2k-\lambda)}$, and no part is congruent to $\pm{\eta}(2r-\lambda)/{2} \pmod {\eta(2k-\lambda)}$. \end{itemize} \end{defi} \begin{defi}[Bressoud \cite{Bressoud-1980}] \label{Bress-B-function} Let $\lambda,\ k,\ r$ and $j=0$ or $1$ be integers such that $(2k+j)/2> r\geq\lambda\geq0$.
Define $B_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$ to be the number of partitions of $n$ of the form $(\pi_1,\ldots,\pi_\ell)$ where $\pi_i\geq\pi_{i+1}$ satisfying the following conditions{\rm{:}} \begin{itemize} \item[{\rm (1)}] $\pi_i\equiv0,\alpha_1,\ldots,\alpha_\lambda\pmod{\eta}${\rm{;}} \item[{\rm (2)}] Only multiples of $\eta$ may be repeated{\rm{;}} \item[{\rm (3)}] $ \pi_i\geq\pi_{i+k-1}+\eta$ with strict inequality if $\eta\mid\pi_i${\rm{;}} \item[{\rm (4)}] At most $r-1$ of the $\pi_i$ are less than or equal to $\eta${\rm{;}} \item[{\rm (5)}] If $\pi_i\leq\pi_{i+k-2}+\eta$ with strict inequality if $\eta\nmid\pi_i$, then \[[\pi_i/\eta]+\cdots+[\pi_{i+k-2}/\eta]\equiv r-1+V_\pi(\pi_i)\pmod{2-j},\] where $V_\pi(N)$ {\rm{(}}or $V(N)$ for short{\rm{)}} denotes the number of parts not exceeding $N$ which are not divisible by $\eta$ in $\pi$ and $[\ ]$ denotes the greatest integer function. \end{itemize} \end{defi} Bressoud \cite{Bressoud-1980} made the following conjecture. \begin{conj}[Bressoud \cite{Bressoud-1980}] \label{Bressoud-conjecture-j} Let $\lambda,\ k,\ r$ and $j=0$ or $1$ be integers such that $(2k+j)/2> r\geq\lambda\geq0$. Then for $n\geq 0$, \begin{equation*}\label{Bressoud-conj-1} A_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)=B_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n). \end{equation*} \end{conj} This conjecture specializes to many infinite families of combinatorial theorems, including Euler's partition theorem, Schur's theorem, the Rogers-Ramanujan-Gordon theorem and the Andrews-G\"ollnitz-Gordon theorem. For more details, please refer to Bressoud \cite{Bressoud-1980}. By definition, it is not difficult to show that the left-hand side of \eqref{Bressoud-conj-e} is the generating function of $A_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$. For $k>\lambda\geq 0$ and $j=0$, we define \begin{equation*}\label{Bressoud-conj-defi-e-kk0} \begin{split} &\sum_{n\geq0}A_0(\alpha_1,\ldots,\alpha_\lambda;\eta,k,k;n)q^n\\[5pt] &=\frac{(-q^{\alpha_1},\ldots,-q^{\alpha_\lambda};q^{\eta})_\infty (q^{\eta(k-\frac{\lambda}{2})},q^{\eta(k-\frac{\lambda}{2})},q^{\eta(2k-\lambda)};q^{\eta(2k-\lambda)})_\infty}{(q^\eta;q^\eta)_\infty}. \end{split} \end{equation*} Then, for $k\geq r\geq \lambda\geq 0$, $k+j-1\geq\lambda$ and $j=0$ or $1$, we have \begin{equation}\label{Bressoud-conj-defi-e} \begin{split} &\sum_{n\geq0}A_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)q^n\\[5pt] &=\frac{(-q^{\alpha_1},\ldots,-q^{\alpha_\lambda};q^{\eta})_\infty (q^{\eta(r-\frac{\lambda}{2})},q^{\eta(2k-r-\frac{\lambda}{2}+j)}, q^{\eta(2k-\lambda+j)};q^{\eta(2k-\lambda+j)})_\infty}{(q^\eta;q^\eta)_\infty}. \end{split} \end{equation} However, it is difficult to show that the right-hand side of \eqref{Bressoud-conj-e} is the generating function of $B_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$. Hence Bressoud \cite{Bressoud-1980} conjectured that \begin{conj}[Bressoud \cite{Bressoud-1980}] \label{Bressoud-gen-b-e} Let $\lambda,\ k,\ r$ and $j=0$ or $1$ be integers such that $(2k+j)/2> r\geq\lambda\geq0$. Then \begin{equation*} \begin{split} &\sum_{n\geq0}B_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)q^n\\ &=\sum_{N_1\geq\cdots\geq N_{k-1}\geq0}\frac{q^{\eta(N_1^2+\cdots+N_{k-1}^2+N_r+\cdots+N_{k-1})}}{(q^\eta;q^\eta)_{N_1-N_2}\cdots(q^\eta;q^\eta)_{N_{k-2}-N_{k-1}}(q^{(2-j)\eta};q^{(2-j)\eta})_{N_{k-1}}}\\ &\hskip1cm\times\prod_{s=1}^{\lambda}(-q^{\eta-\alpha_s-\eta N_s};q^\eta)_{N_s}\prod_{s=2}^{\lambda}(-q^{\eta -\alpha_s+\eta N_{s-1}};q^\eta)_\infty.
\end{split} \end{equation*} \end{conj} Andrews \cite{Andrews-1974m} proved that Conjecture \ref{Bressoud-conjecture-j} holds for $\eta=\lambda+1$ and $j=1$ by using the $q$-difference method. Kim and Yee \cite{Kim-Yee-2014} gave a proof of Conjecture \ref{Bressoud-conjecture-j} for $j=1$ and $\lambda=2$. More precisely, they showed that Conjecture \ref{Bressoud-gen-b-e} holds for $j=1$ and $\lambda=2$ with the aid of the Gordon marking introduced by Kur\c{s}ung\"oz \cite{Kursungoz-2010a, Kursungoz-2010}. Recently, Kim \cite{Kim-2018} proved that Conjecture \ref{Bressoud-conjecture-j} holds for $j=1$. Instead of proving Conjecture \ref{Bressoud-gen-b-e}, Kim proved that for $k\geq r\geq\lambda\geq0$, \begin{equation}\label{Bressoud-conj-defi-e-11} \begin{split} &\sum_{n\geq0}B_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)q^n\\[5pt] &=\frac{(-q^{\alpha_1},\ldots,-q^{\alpha_\lambda};q^{\eta})_\infty (q^{\eta(r-\frac{\lambda}{2})},q^{\eta(2k-r-\frac{\lambda}{2}+1)}, q^{\eta(2k-\lambda+1)};q^{\eta(2k-\lambda+1)})_\infty}{(q^\eta;q^\eta)_\infty}. \end{split} \end{equation} Conjecture \ref{Bressoud-conjecture-j} for $j=1$ immediately follows from \eqref{Bressoud-conj-defi-e} and \eqref{Bressoud-conj-defi-e-11}. The main objective of this paper is to prove that Conjecture \ref{Bressoud-conjecture-j} holds for $j=0$. We aim to show that for $k\geq r\geq \lambda\geq0$ and $k>\lambda$, \begin{equation}\label{Bressoud-conj-defi-e-t} \begin{split} &\sum_{n\geq0}B_0(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)q^n\\[5pt] &= \frac{{(-q^{\alpha_1},\ldots,-q^{\alpha_\lambda};q^\eta)_\infty} (q^{\eta(r-\frac{\lambda}{2})}, q^{\eta(2k-r-\frac{\lambda}{2})}, q^{\eta(2k-\lambda)}; q^{\eta(2k-\lambda)})_\infty}{(q^\eta;q^\eta)_\infty}. \end{split} \end{equation} It is easy to see that Conjecture \ref{Bressoud-conjecture-j} for $j=0$ immediately follows from \eqref{Bressoud-conj-defi-e} and \eqref{Bressoud-conj-defi-e-t}. To show \eqref{Bressoud-conj-defi-e-t}, we need to recall the definitions of the partition functions $\overline{A}_j(\alpha_1,\ldots,\alpha_\lambda;\eta, k,r;n)$ and $\overline{B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta, k,r;n)$ from our previous paper \cite{he-ji-zhao}, which can be viewed as the overpartition analogues of the partition functions ${A}_j(\alpha_1,\ldots,\alpha_\lambda;\eta, k,r;n)$ and ${B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta, k,r;n)$ defined in Bressoud's conjecture. Recall that an overpartition of $n$ is a partition of $n$ in which the first occurrence of a number can be overlined. Notice that the parts in an overpartition are ordered as follows: \begin{equation*}\label{order} 1<\bar{1}<2<\bar{2}<\cdots. \end{equation*} In \cite{he-ji-zhao}, we introduced the following partition functions $\overline{A}_j(\alpha_1,\ldots,\alpha_\lambda;\eta, k,r;n)$ and $\overline{B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta, k,r;n)$. \begin{defi}{\rm \cite[Definition 1.16]{he-ji-zhao}} Let $\lambda,\ k,\ r$ and $j=0$ or $1$ be integers such that $(2k+1-j)/2> r\geq\lambda\geq0$ and $k+j-1>\lambda$.
Define $\overline{A}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$ to be the number of overpartitions of $n$ satisfying $\pi_i\equiv0,\alpha_1,\ldots,\alpha_\lambda\pmod{\eta}$ such that
\begin{itemize}
\item If $\lambda$ is even, then only multiples of $\eta$ may be non-overlined and there is no non-overlined part congruent to $0,\pm\eta(r-\lambda/2) \pmod {\eta(2k-\lambda+j-1)}${\rm{;}}
\item If $\lambda$ is odd and $j=1$, then only multiples of ${\eta}/{2}$ may be non-overlined, no non-overlined part is congruent to ${\eta}(2k-\lambda)/{2}\pmod{\eta(2k-\lambda)}$, no non-overlined part is congruent to $\eta \pmod{2\eta}$, no non-overlined part is congruent to $0 \pmod{2\eta(2k-\lambda)}$, no non-overlined part is congruent to $\pm{\eta}(2r-\lambda)/{2} \pmod {\eta(2k-\lambda)}$, and no overlined part is congruent to ${\eta}/{2}\pmod \eta$ and not congruent to ${\eta}(2k-\lambda)/{2}\pmod{\eta(2k-\lambda)}${\rm{;}}
\item If $\lambda$ is odd and $j=0$, then only multiples of ${\eta}/{2}$ may be non-overlined, no non-overlined part is congruent to $\eta \pmod{2\eta}$, no non-overlined part is congruent to $0,\pm{\eta}(2r-\lambda)/{2} \pmod {\eta(2k-\lambda-1)}$, and no overlined part is congruent to ${\eta}/{2}\pmod \eta$.
\end{itemize}
\end{defi}

For $k>\lambda\geq0$ and $j=1$, we define
\begin{equation*}\label{overpartition-Afunction-e-kko1}
\begin{split}
&\sum_{n\geq0}\overline{A}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,k;n)q^n\\[5pt]
&=\frac{(-q^{\alpha_1},\ldots,-q^{\alpha_\lambda},-q^{\eta};q^{\eta})_\infty (q^{\eta(k-\frac{\lambda}{2})}, q^{\eta(k-\frac{\lambda}{2})}, q^{\eta(2k-\lambda)}; q^{\eta(2k-\lambda)})_\infty}{(q^\eta;q^\eta)_\infty}.
\end{split}
\end{equation*}
By definition, it is easy to see that for $k\geq r\geq \lambda\geq0$, $k+j-1>\lambda$ and $j=0$ or $1$,
\begin{equation}\label{overpartition-Afunction-e}
\begin{split}
&\sum_{n\geq0}\overline{A}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)q^n\\[5pt]
&=\frac{(-q^{\alpha_1},\ldots,-q^{\alpha_\lambda},-q^{\eta};q^{\eta})_\infty(q^{\eta(r-\frac{\lambda}{2})}, q^{\eta(2k-r-\frac{\lambda}{2}+j-1)}, q^{\eta(2k-\lambda+j-1)}; q^{\eta(2k-\lambda+j-1)})_\infty}{(q^\eta;q^\eta)_\infty}.
\end{split}
\end{equation}

\begin{defi}{\rm \cite[Definition 1.12]{he-ji-zhao}} \label{defi-O-B}
Let $\lambda,\ k,\ r,\ \eta$ and $j=0$ or $1$ be integers such that $k\geq r\geq \lambda\geq0$ and $k-1+j>\lambda$. Define $\overline{B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$ to be the number of overpartitions of $n$ of the form $(\pi_1,\ldots,\pi_\ell)$ where $\pi_i\geq\pi_{i+1}$ satisfying the following conditions{\rm{:}}
\begin{itemize}
\item[{\rm (1)}] $\pi_i\equiv0,\alpha_1,\ldots,\alpha_\lambda\pmod{\eta}${\rm{;}}
\item[{\rm (2)}] Only multiples of $\eta$ may be non-overlined{\rm{;}}
\item[{\rm (3)}] $\pi_i\geq\pi_{i+k-1}+\eta$ with strict inequality if $\pi_i$ is non-overlined{\rm{;}}
\item[{\rm (4)}] At most $r-1$ of the $\pi_i$ are less than or equal to $\eta${\rm{;}}
\item[{\rm (5)}] If $\pi_i\leq\pi_{i+k-2}+\eta$ with strict inequality if $\pi_i$ is overlined, then
\[\left[\pi_i/\eta\right]+\cdots+\left[\pi_{i+k-2}/\eta\right]\equiv r-1+\overline{V}_\pi(\pi_i)\pmod{2-j},\]
\end{itemize}
where $\overline{V}_\pi(N)$ {\rm{(}}or $\overline{V}(N)$ for short{\rm{)}} denotes the number of overlined parts not exceeding $N$ in $\pi$.
\end{defi}

It should be noted that for an overpartition $\pi$ counted by $\overline{B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$ with no overlined parts divisible by $\eta$, if we change the overlined parts in $\pi$ to non-overlined parts, then we get a partition counted by ${B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$. Hence we say that $\overline{B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$ is an overpartition analogue of ${B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$.

We established the following relationship between $\overline{B}_1$ and $B_0$ in \cite{he-ji-zhao}.

\begin{thm}{\rm \cite[Theorem 1.13]{he-ji-zhao}} \label{rel-over1}
For $k\geq r\geq \lambda\geq0$ and $k>\lambda$,
\begin{equation}\label{new-b-0-over}
\begin{split}
&\sum_{n\geq0}\overline{B}_1(\alpha_1,\ldots,\alpha_\lambda; \eta,k,r;n)q^n=(-q^\eta;q^\eta)_\infty \sum_{n\geq0}B_0(\alpha_1,\ldots,\alpha_\lambda; \eta,k,r;n)q^n.
\end{split}
\end{equation}
\end{thm}

Setting $\lambda=0$ and $\eta=1$ in Theorem \ref{rel-over1}, and combining with the Bressoud-Rogers-Ramanujan theorem \cite{Bressoud-1979}, we can recover the overpartition analogue of the Rogers-Ramanujan-Gordon theorem due to Chen, Sang and Shi \cite{Chen-Sang-Shi-2013}. For more details, please refer to \cite{he-ji-zhao}.

\begin{thm}[Chen-Sang-Shi]\label{R-R-G-o}
For $k\geq r\geq1$, let $\overline{B}_1(-;1,k,r;n)$ denote the number of overpartitions $\pi$ of $n$ of the form $(\pi_1,\ldots,\pi_\ell)$, where $\pi_i\geq \pi_{i+1}$, $\pi_i-\pi_{i+k-1}\geq1$ with strict inequality if $\pi_i$ is non-overlined, and at most $r-1$ of the $\pi_i$ are equal to $1$. For $k>r\geq1$, let $\overline{A}_1(-;1,k,r;n)$ denote the number of overpartitions of $n$ such that non-overlined parts $\not\equiv0,\pm r\pmod{2k}$, and for $k=r$, let $\overline{A}_1(-;1,k,k;n)$ denote the number of overpartitions of $n$ into parts not divisible by $k$. Then for $k\geq r\geq1$ and $n\geq0$,
\begin{equation*}
\overline{A}_1(-;1,k,r;n)=\overline{B}_1(-;1,k,r;n).
\end{equation*}
\end{thm}

Setting $\lambda=1$ and $\eta=2$ in Theorem \ref{rel-over1}, and using the Bressoud-G\"ollnitz-Gordon theorem \cite{Bressoud-1980}, we obtained an overpartition analogue of the Andrews-G\"ollnitz-Gordon theorem in \cite{he-ji-zhao}, which is different from that found by He, Ji, Wang and Zhao \cite{He-Ji-Wang-Zhao}.

\begin{thm}{\rm \cite[Theorem 1.20]{he-ji-zhao}}\label{Over-bre-121}
For $k\geq r\geq 1$, let $\overline{B}_1(1;2,k,r;n)$ denote the number of overpartitions of $n$ of the form $(\pi_1,\ldots,\pi_\ell)$, where $\pi_i\geq\pi_{i+1}$, only even parts may be non-overlined, $\pi_i\geq\pi_{i+k-1}+2$ with strict inequality if $\pi_i$ is non-overlined, and at most $r-1$ of the $\pi_i$ are less than or equal to $2$. For $k\geq r\geq 1$, let $\overline{A}_1(1;2,k,r;n)$ denote the number of overpartitions of $n$ such that only even parts can be overlined, and non-overlined parts $\not\equiv 2\pmod4$ and $\not\equiv 0,\pm(2r-1)\pmod{4k-2}$. Then, for $k\geq 2$, $k\geq r\geq1$ and $n\geq0$,
\begin{equation*}\label{main000000}
\overline{A}_1(1;2,k,r;n)=\overline{B}_1(1;2,k,r;n).
\end{equation*}
\end{thm}
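Theorem \ref{R-R-G-o} is also easy to test numerically. The following Python sketch (ours, and purely illustrative; it plays no role in the proofs) counts both sides by brute force for small $n$. An overpartition is encoded as a weakly decreasing list of pairs $(\text{size},\text{overlined})$, with the overlined copy of a size listed first, and, in accordance with the order $1<\overline{1}$, a part ``equal to $1$'' is read as a non-overlined $1$.
\begin{verbatim}
# Brute-force check of the Chen-Sang-Shi theorem for small n (a sketch;
# the encoding and all names are ours).

def partitions(n, maxpart=None):
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        return [[]]
    return [[p] + rest
            for p in range(min(n, maxpart), 0, -1)
            for rest in partitions(n - p, p)]

def overpartitions(n):
    # the first occurrence of each part size may be overlined
    for lam in partitions(n):
        sizes = sorted(set(lam))
        for mask in range(1 << len(sizes)):
            marked = {s for i, s in enumerate(sizes) if mask >> i & 1}
            pi, seen = [], set()
            for p in lam:                      # lam is weakly decreasing
                pi.append((p, p in marked and p not in seen))
                seen.add(p)
            yield pi

def count_B(n, k, r):
    # pi_i - pi_{i+k-1} >= 1, strict (hence >= 2) if pi_i is non-overlined,
    # and at most r-1 non-overlined parts equal to 1
    total = 0
    for pi in overpartitions(n):
        ok = all(pi[i][0] - pi[i + k - 1][0] >= (1 if pi[i][1] else 2)
                 for i in range(len(pi) - k + 1))
        ok = ok and sum(1 for p, over in pi if p == 1 and not over) <= r - 1
        total += ok
    return total

def count_A(n, k, r):
    total = 0
    for pi in overpartitions(n):
        if k == r:                             # parts not divisible by k
            ok = all(p % k != 0 for p, _ in pi)
        else:                                  # non-overlined parts only
            ok = all(p % (2 * k) not in {0, r, 2 * k - r}
                     for p, over in pi if not over)
        total += ok
    return total

for k in (2, 3):
    for r in range(1, k + 1):
        assert all(count_A(n, k, r) == count_B(n, k, r) for n in range(9))
\end{verbatim}
The same brute-force scheme can be adapted to Theorem \ref{Over-bre-121} with $\eta=2$.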
In this paper, we first establish the following generating function of $\overline{B}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$ by generalizing Kim's method in \cite{Kim-2018}.

\begin{thm} \label{Over-bre-1a} For $k\geq r\geq \lambda\geq0$ and $k>\lambda$,
\begin{equation}\label{proof-0-A}
\begin{split}
&\sum_{n\geq0}\overline{B}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)q^n\\
&= \frac{{(-q^{\alpha_1},\ldots,-q^{\alpha_\lambda},-q^\eta;q^\eta)_\infty}(q^{\eta(r-\frac{\lambda}{2})}, q^{\eta(2k-r-\frac{\lambda}{2})}, q^{\eta(2k-\lambda)}; q^{\eta(2k-\lambda)})_\infty}{(q^\eta;q^\eta)_\infty}.
\end{split}
\end{equation}
\end{thm}

Substituting \eqref{proof-0-A} into \eqref{new-b-0-over} in Theorem \ref{rel-over1}, we obtain \eqref{Bressoud-conj-defi-e-t}, which implies that Bressoud's conjecture for $j=0$ holds.

By \eqref{overpartition-Afunction-e}, it is easy to see that Theorem \ref{Over-bre-1a} can be interpreted as the following partition identity, which can be viewed as an overpartition analogue of Bressoud's conjecture for $j=1$.

\begin{thm}\label{Over-bre-1}
Let $\lambda,\ k,\ r$ and $\eta$ be integers such that $k\geq r\geq \lambda\geq0$ and $k>\lambda$. Then for $n\geq 0$,
\begin{equation*}\label{main}
\overline{A}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)=\overline{B}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n).
\end{equation*}
\end{thm}

We also obtain the following generating function identity with the aid of Bailey pairs.

\begin{thm}\label{G-B-O-1}For $k\geq r>\lambda \geq 0$, we have
\begin{equation}\label{G-B-O-1-eq}
\begin{split}
&\sum_{N_1\geq\cdots\geq N_{k-1}\geq0} \frac{q^{\eta(N_{1}^{2}+\cdots+N_{k-1}^{2}+ N_r+\cdots+N_{k-1})}(1+q^{-\eta N_{r}})(-q^{\eta-\eta N_{\lambda+1}};q^{\eta})_{N_{\lambda+1}-1} (-q^{\eta+\eta N_{\lambda}};q^{\eta})_{\infty}} {(q^{\eta};q^{\eta})_{N_{1}-N_{2}}\cdots(q^{\eta} ;q^{\eta})_{N_{k-1}}}\\[5pt]
&\hskip1cm\times \prod_{s=1}^\lambda(-q^{\eta-\alpha_{s}-\eta N_{s}};q^{\eta})_{N_{s}} \prod_{s=2}^\lambda(-q^{\eta-\alpha_{s}+\eta N_{s-1}};q^{\eta})_{\infty}\\[10pt]
&=\frac{(-q^{\alpha_{1}},\ldots,-q^{\alpha_{\lambda}},-q^{\eta};q^{\eta})_{\infty}(q^{(r-\frac{\lambda}{2})\eta}, q^{(2k-r-\frac{\lambda}{2})\eta} ,q^{(2k-\lambda)\eta};q^{(2k-\lambda)\eta})_{\infty}} {(q^{\eta};q^{\eta})_{\infty}}.
\end{split}
\end{equation}
\end{thm}

By Theorem \ref{Over-bre-1} and Theorem \ref{G-B-O-1}, it is easy to obtain the following theorem.

\begin{thm}\label{sum-side-conj} For $k\geq r>\lambda \geq 0$,
\begin{equation*}
\begin{split}
&\sum_{n\geq0}\overline{B}_1(\alpha_1,\ldots,\alpha_\lambda; \eta,k,r;n)q^n\\[5pt]
&=\sum_{N_1\geq\cdots\geq N_{k-1}\geq0} \frac{q^{\eta(N_{1}^{2}+\cdots+N_{k-1}^{2}+ N_r+\cdots+N_{k-1})}(1+q^{-\eta N_{r}})(-q^{\eta-\eta N_{\lambda+1}};q^{\eta})_{N_{\lambda+1}-1} (-q^{\eta+\eta N_{\lambda}};q^{\eta})_{\infty}} {(q^{\eta};q^{\eta})_{N_{1}-N_{2}}\cdots(q^{\eta} ;q^{\eta})_{N_{k-1}}}\\[5pt]
&\hskip1cm\times \prod_{s=1}^\lambda(-q^{\eta-\alpha_{s}-\eta N_{s}};q^{\eta})_{N_{s}} \prod_{s=2}^\lambda(-q^{\eta-\alpha_{s}+\eta N_{s-1}};q^{\eta})_{\infty}.
\end{split}
\end{equation*}
\end{thm}

It would be interesting to give a direct combinatorial proof of Theorem \ref{sum-side-conj}.

This article is organized as follows. In Section 2, we give the outline of the proof of Theorem \ref{Over-bre-1a}. We will show that the proof of Theorem \ref{Over-bre-1a} is equivalent to the proof of Theorem \ref{lambda-r}. We also recall the preliminary definitions and operations defined in \cite{he-ji-zhao} and give the outline of the proof of Theorem \ref{lambda-r}. In Section 3, we introduce the $(k-1)$-addition and the $(k-1)$-subtraction, which are the main ingredients in the proof of Theorem \ref{lambda-r}.
In Section 4, we recall the $(k-1)$-insertion and the $(k-1)$-separation defined in \cite{he-ji-zhao}, which are also useful in the proof of Theorem \ref{lambda-r}. In Section 5, we give a bijective proof of Theorem \ref{lambda-r}. In Section 6, we provide an example illustrating the bijection in the proof of Theorem \ref{lambda-r}. In Section 7, we give a proof of Theorem \ref{G-B-O-1} with the aid of Bailey pairs.

\section{An outline of the proof of Theorem \ref{Over-bre-1a}}

In this section, we give a brief outline of the proof of Theorem \ref{Over-bre-1a}. We will show that the proof of Theorem \ref{Over-bre-1a} is equivalent to the proof of the following theorem.

\begin{thm}\label{lambda-r} For $k\geq r\geq \lambda\geq 2$ and $k>\lambda$,
\begin{equation}\label{lambda}
\begin{split}
&\sum_{n\geq0}\overline{B}_1(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r;n)q^n\\[5pt]
&\quad\quad\quad\quad= (-q^{\alpha_{1}},-q^{\alpha_{\lambda}};q^{\eta})_\infty \sum_{n\geq0}\overline{B}_1(\alpha_{2},\ldots,\alpha_{\lambda-1};\eta,k-1,r-1;n)q^n.
\end{split}
\end{equation}
\end{thm}

To give a proof of Theorem \ref{lambda-r}, we will recall the definitions of the Gordon marking and the reverse Gordon marking of an overpartition counted by $\overline{B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$, which were introduced in \cite{he-ji-zhao}. We also review the forward move and the backward move defined in \cite{he-ji-zhao}, which are the main ingredients in the proof of Theorem \ref{lambda-r}. We then give an outline of the proof of Theorem \ref{lambda-r}.

\subsection{Proof of Theorem \ref{Over-bre-1a} with the aid of Theorem \ref{lambda-r}}

\noindent{\it Proof of Theorem \ref{Over-bre-1a}.} The proof is by induction on $\lambda$. We first show that Theorem \ref{Over-bre-1a} holds when $\lambda=0$. Combining Theorem \ref{R-R-G-o} and \eqref{overpartition-Afunction-e}, we find that for $k\geq r\geq1$,
\begin{equation}\label{proof-0-A-0}
\sum_{n\geq0}\overline{B}_1(-;1,k,r;n)q^n=\frac{{(-q ;q )_\infty}(q^{r}, q^{2k-r}, q^{2k}; q^{2k})_\infty}{(q ;q )_\infty}.
\end{equation}
Setting $q\rightarrow q^\eta$ in \eqref{proof-0-A-0}, we obtain that for $k\geq r\geq1$,
\begin{equation*}
\begin{split}
\sum_{n\geq0}\overline{B}_1(-;\eta,k,r;n)q^n=\frac{(-q^\eta;q^\eta)_\infty(q^{\eta r},q^{\eta(2k-r)},q^{2k\eta};q^{2k\eta})_\infty}{(q^\eta;q^\eta)_\infty},
\end{split}
\end{equation*}
so Theorem \ref{Over-bre-1a} holds when $\lambda=0$.

We next show that Theorem \ref{Over-bre-1a} holds when $\lambda=1$. In this case, $\alpha_1=\eta/2$. By Theorem \ref{Over-bre-121} and \eqref{overpartition-Afunction-e}, we see that for $k\geq 2$ and $k\geq r\geq1$,
\begin{equation}\label{b-1-2}
\begin{split}
\sum_{n\geq0}\overline{B}_1(1;2,k,r;n)q^n= \frac{(-q;q^2)_\infty(-q^2;q^2)_\infty(q^{2r-1},q^{4k-2r-1},q^{4k-2};q^{4k-2})_\infty}{(q^2;q^2)_\infty}.
\end{split}
\end{equation}
Setting $q\rightarrow q^{\eta/2}$ in \eqref{b-1-2}, we obtain that for $k\geq 2$ and $k\geq r\geq1$,
\begin{equation*}\label{odd-2}
\begin{split}
&\ \sum_{n\geq0}\overline{B}_1({\eta}/{2};\eta,k,r;n)q^n\\
&= \frac{(-q^{{\eta}/{2}};q^\eta)_\infty(-q^\eta;q^\eta)_\infty(q^{\eta(r-{\frac{1}{2}})},q^{\eta(2k-r-{\frac{1}{2}})}, q^{\eta(2k-1)};q^{\eta(2k-1)})_\infty}{(q^\eta;q^\eta)_\infty},
\end{split}
\end{equation*}
so Theorem \ref{Over-bre-1a} holds when $\lambda=1$.
When $\lambda\geq 2$, assume that Theorem \ref{Over-bre-1a} holds for $\lambda-2$, that is, for $k-1\geq r-1\geq \lambda-2\geq0$ and $k-1>\lambda-2$, we have
\begin{equation}\label{proof-0-A-i}
\begin{split}
&\sum_{n\geq0}\overline{B}_1(\alpha_2,\ldots,\alpha_{\lambda-1};\eta,k-1,r-1;n)q^n\\
&= \frac{{(-q^{\alpha_2},\ldots,-q^{\alpha_{\lambda-1}},-q^\eta;q^\eta)_\infty} (q^{\eta(r-\frac{\lambda}{2})}, q^{\eta(2k-r-\frac{\lambda}{2})}, q^{\eta(2k-\lambda)}; q^{\eta(2k-\lambda)})_\infty}{(q^\eta;q^\eta)_\infty}.
\end{split}
\end{equation}
We aim to show that Theorem \ref{Over-bre-1a} holds for $\lambda$. Namely, for $k\geq r\geq \lambda \geq2$ and $k>\lambda$,
\begin{equation*}\label{proof-0-A-cc}
\begin{split}
&\sum_{n\geq0}\overline{B}_1(\alpha_1,\ldots,\alpha_{\lambda};\eta,k,r;n)q^n\\
&= \frac{{(-q^{\alpha_1},\ldots,-q^{\alpha_{\lambda}},-q^\eta;q^\eta)_\infty} (q^{\eta(r-\frac{\lambda}{2})}, q^{\eta(2k-r-\frac{\lambda}{2})}, q^{\eta(2k-\lambda)}; q^{\eta(2k-\lambda)})_\infty}{(q^\eta;q^\eta)_\infty},
\end{split}
\end{equation*}
which can be obtained by substituting \eqref{proof-0-A-i} into \eqref{lambda} in Theorem \ref{lambda-r}. Hence Theorem \ref{Over-bre-1a} holds for $\lambda$. This completes the proof of Theorem \ref{Over-bre-1a}. \qed

\subsection{The Gordon marking of a $\overline{B}_j$-overpartition}

Let $\lambda,\ k,\ r,\ \eta$ and $j=0$ or $1$ be integers such that $k\geq r\geq \lambda\geq0$ and $k-1+j>\lambda$. An overpartition $\pi$ is called a $\overline{B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$ overpartition (or $\overline{B}_j$-overpartition for short) if $\pi$ is counted by $\overline{B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r;n)$ for some $n\geq0$. Let $\overline{\mathcal{B}}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$ denote the set of $\overline{B}_j(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$ overpartitions. The Gordon marking of a $\overline{B}_j$-overpartition was defined in \cite{he-ji-zhao} as follows:

\begin{defi}[Gordon marking]\label{Gordon-marking}
For an overpartition $\pi=(\pi_1,\pi_2,\ldots,\pi_\ell)$ where $\pi_1\geq \pi_2\geq \cdots \geq \pi_\ell$, we assign a positive integer to each part of $\pi$ from the smallest as follows{\rm{:}} We first assign $1$ to the smallest part of $\pi$. Then, for each $\pi_i$, we assign $q$ to $\pi_i$, where $q$ is the least positive integer that is not used to mark the parts $\pi_p$ for $p>i$ such that $\pi_p\geq \pi_i-\eta$ with strict inequality if $\pi_i$ is overlined. We denote the Gordon marking of $\pi$ by $G(\pi)$.
\end{defi}

We will illustrate the Gordon marking of an overpartition by using the example in \cite{he-ji-zhao}. Let $k=5$, $r=4$, $\lambda=2$, $\eta=10$, $\alpha_1=1$, $\alpha_2=9$ and let $\pi$ be an overpartition in $\overline{\mathcal{B}}_1(1,9;10,5,4)$ defined as follows:
\begin{equation}\label{mark-exa-1}
\begin{split}
&\pi=(\overline{80},80,80,\overline{70},70,\overline{69},\overline{60}, 60,\overline{59},\overline{51},50,\overline{40},40,\overline{31},\overline{29},\overline{21},\\[3pt]
&\ \ \ \ \ \ \ \overline{20},20,\overline{19},\overline{10},10, \overline{9},\overline{1}).
\end{split}
\end{equation}
The Gordon marking of $\pi$ is given by
\begin{equation}\label{exa-g-1}
\begin{split}
&G(\pi)=(\overline{80}_2,{\color{red}{80}_4},{\color{blue}{80}_1,\overline{70}_2,{70}_3},\overline{69}_1,\overline{60}_2,{\color{red}{60}_4},{\color{blue}\overline{59}_3, \overline{51}_1,{50}_2},\overline{40}_3,{40}_1,\overline{31}_2,\overline{29}_1, {\color{red}\overline{21}_4},\\[3pt]
&\ \ \ \ \ \ \ \ \ \ \ {\color{blue}\overline{20}_3,{20}_2,\overline{19}_1},{\color{red}\overline{10}_4},{\color{blue}{10}_3,\overline{9}_2,\overline{1}_1}).
\end{split}
\end{equation}
For an overpartition $\pi=(\pi_1,\pi_2,\ldots,\pi_\ell)$ where $\pi_1\geq \pi_2\geq \cdots \geq \pi_\ell$, if there are $k-1$ parts $\pi_i\geq \pi_{i+1}\geq \cdots \geq \pi_{i+k-2}$ satisfying the following relation:
\begin{equation}\label{sequence}
\pi_i\leq\pi_{i+k-2}+\eta\text{ with strict inequality if } \pi_i \text{ is overlined},
\end{equation}
then these $k-1$ parts have different marks in the Gordon marking of $\pi$. Here we call the set of these $k-1$ parts the $(k-1)$-set of $\pi$, denoted by $\{\pi_i\}_{k-1}$. Hence, we have the following proposition:

\begin{prop}{\!\!\!\rm \cite[Proposition 2.2]{he-ji-zhao}} Let $\pi$ be an overpartition satisfying {\rm(1)} and {\rm(2)} in Definition \ref{defi-O-B}. Then $\pi$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$ if and only if the marks in the Gordon marking of $\pi$ do not exceed $k-1$ and the marks of parts less than or equal to $\eta$ in the Gordon marking of $\pi$ do not exceed $r-1$.
\end{prop}

Let $\pi$ be an overpartition in $\overline{\mathcal{B}}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$ and assume that there are $N$ $(k-1)$-marked parts in $G(\pi)$; denote these $(k-1)$-marked parts by $\tilde{g}_1(\pi)> {\tilde{g}}_2(\pi)>\cdots >\tilde{g}_N(\pi)$. For each $(k-1)$-marked part $\tilde{g}_p(\pi)$, assume that $\tilde{g}_p(\pi)$ is the $i$-th part $\pi_i$ of $\pi$. By Definition \ref{Gordon-marking}, we see that there must exist $k-2$ parts $\pi_s$ for $s>i$ in $\pi$ such that $\pi_s\geq \pi_i-\eta$ with strict inequality if $\pi_i$ is overlined. Denote these $k-2$ parts together with $\tilde{g}_p(\pi)$ by $\{\tilde{g}_p(\pi)\}_{k-1}$, and call it the $(k-1)$-set of $\tilde{g}_p(\pi)$:
\begin{equation*}\label{k-1-set}
\{\tilde{g}_p(\pi)\}_{k-1}=\{ \tilde{g}_{p,1}(\pi)\leq \tilde{g}_{p,2}(\pi)\leq \cdots \leq \tilde{g}_{p,k-2}(\pi)\leq \tilde{g}_{p,k-1}(\pi):=\tilde{g}_{p}(\pi)\}.
\end{equation*}
It is easy to check that these $k-1$ parts satisfy \eqref{sequence}.

Let $\pi$ be the overpartition defined in \eqref{mark-exa-1}. From the Gordon marking \eqref{exa-g-1} of $\pi$, we see that there are four $4$-marked parts in $G(\pi)$, which are $\tilde{g}_1(\pi)=80$, $\tilde{g}_2(\pi)=60$, $\tilde{g}_3(\pi)=\overline{21}$ and $\tilde{g}_4(\pi)=\overline{10}$. The corresponding $4$-set $\{\tilde{g}_p(\pi)\}_{4}$ for each $\tilde{g}_p(\pi)$ is indicated as follows.
\begin{equation*} \label{mark-exa-1-s}
\begin{split}
&G(\pi)=(\overline{80}_2,\overbrace{{\color{red}{80}_4}, {\color{blue}{80}_1, \overline{70}_2,{70}_3}}^{\color{red}\{80\}_4},\overline{69}_1, \overline{60}_2,\overbrace{{\color{red}{60}_4},{\color{blue}\overline{59}_3, \overline{51}_1,{50}_2}}^{\color{red}\{60\}_4},\overline{40}_3, {40}_1,\overline{31}_2,\overline{29}_1,\\[5pt]
&\ \ \ \ \ \ \ \ \ \ \ \underbrace{{\color{red}\overline{21}_4},{\color{blue}\overline{20}_3, {20}_2,\overline{19}_1}}_ {\color{red}\{\overline{21}\}_4},\underbrace{\color{red}{\overline{10}_4}, {\color{blue}{10}_3,\overline{9}_2,\overline{1}_1}}_{\color{red}\{\overline{10}\}_4}).
\end{split}
\end{equation*}
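Since the marking rule is purely mechanical, it is easy to implement. The following Python sketch (ours, for illustration only) encodes an overpartition as a weakly decreasing list of pairs $(\text{size},\text{overlined})$; the comparison ``$\pi_p\geq \pi_i-\eta$, with strict inequality if $\pi_i$ is overlined'' unwinds to the size test in the code, since $\pi_i-\eta$ keeps the overline of $\pi_i$. Applied to the overpartition \eqref{mark-exa-1}, it reproduces the subscripts in \eqref{exa-g-1}.
\begin{verbatim}
def gordon_marking(pi, eta):
    # assign marks from the smallest part up, as in the definition
    marks = [0] * len(pi)
    for i in range(len(pi) - 1, -1, -1):
        size, over = pi[i]
        used = {marks[p] for p in range(i + 1, len(pi))
                if pi[p][0] > size - eta
                or (not over and pi[p][0] == size - eta)}
        m = 1
        while m in used:
            m += 1
        marks[i] = m
    return marks

# the overpartition of the running example; True marks an overlined part
pi = [(80, True), (80, False), (80, False), (70, True), (70, False),
      (69, True), (60, True), (60, False), (59, True), (51, True),
      (50, False), (40, True), (40, False), (31, True), (29, True),
      (21, True), (20, True), (20, False), (19, True), (10, True),
      (10, False), (9, True), (1, True)]
print(gordon_marking(pi, 10))
# [2, 4, 1, 2, 3, 1, 2, 4, 3, 1, 2, 3, 1, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1]
\end{verbatim}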
If we assign a mark to each part from the largest in the same manner as in the definition of the Gordon marking of a $\overline{B}_j$-overpartition, we get the reverse Gordon marking of a $\overline{B}_j$-overpartition, which was defined in \cite{he-ji-zhao}.

\begin{defi}[Reverse Gordon marking]\label{R-Gordon-marking}
For an overpartition $\pi=(\pi_1,\pi_2,\ldots,\pi_\ell)$ where $\pi_1\geq \pi_2\geq \cdots \geq \pi_\ell$, we assign a positive integer to each part of $\pi$ from the largest as follows{\rm{:}} We first assign $1$ to the largest part of $\pi$. Then, for each $\pi_i$, we assign $q$ to $\pi_i$, where $q$ is the least positive integer that is not used to mark the parts $\pi_p$ for $p<i$ such that $\pi_p\leq \pi_i+\eta$ with strict inequality if $\pi_i$ is overlined. We denote the reverse Gordon marking of $\pi$ by $RG(\pi)$.
\end{defi}

For the overpartition $\pi$ defined in \eqref{mark-exa-1}, the reverse Gordon marking of $\pi$ is given by:
\begin{equation}\label{exa-r-1}
\begin{split}
&RG(\pi)=(\overline{80}_1,{\color{blue}{80}_2,{80}_3,\overline{70}_1},{\color{red}{70}_4},\overline{69}_2,{\color{blue}\overline{60}_1,{60}_3,\overline{59}_2}, {\color{red}\overline{51}_4},{50}_1,\overline{40}_2,{40}_3,\overline{31}_1,{\color{blue}\overline{29}_2,\overline{21}_1,}\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ {\color{blue}\overline{20}_3},{\color{red}{20}_4},\overline{19}_2,{\color{blue}\overline{10}_1,{10}_3,\overline{9}_2},{\color{red}\overline{1}_4}).
\end{split}
\end{equation}
Let $\pi=(\pi_1,\pi_2,\ldots,\pi_\ell)$ be an overpartition with $\pi_1\geq \pi_2\geq \cdots \geq \pi_\ell$, and let $\{\pi_i\}_{k-1}$ be the $(k-1)$-set of $\pi$. By the definition of the reverse Gordon marking, we see that these $k-1$ parts in the $(k-1)$-set $\{\pi_i\}_{k-1}$ of $\pi$ have different marks in the reverse Gordon marking of $\pi$. Hence, we have the following proposition:

\begin{prop}{\rm \cite[Proposition 2.4]{he-ji-zhao}} Let $\pi$ be an overpartition satisfying {\rm(1)} and {\rm(2)} in Definition \ref{defi-O-B}. Then $\pi$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$ if and only if the marks in the reverse Gordon marking of $\pi$ do not exceed $k-1$ and there are at most $r-1$ parts less than or equal to $\eta$ in $\pi$.
\end{prop}
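The reverse marking admits the same kind of sketch (again ours and purely illustrative). Applied to the list \texttt{pi} of the previous sketch, it reproduces the subscripts in \eqref{exa-r-1}; here the comparison ``$\pi_p\leq \pi_i+\eta$, with strict inequality if $\pi_i$ is overlined'' unwinds to a single size test, since $\pi_i+\eta$ keeps the overline of $\pi_i$.
\begin{verbatim}
def reverse_gordon_marking(pi, eta):
    # assign marks from the largest part down
    marks = [0] * len(pi)
    for i in range(len(pi)):
        size = pi[i][0]
        used = {marks[p] for p in range(i)
                if pi[p][0] < size + eta
                or (pi[p][0] == size + eta and not pi[p][1])}
        m = 1
        while m in used:
            m += 1
        marks[i] = m
    return marks

print(reverse_gordon_marking(pi, 10))
# [1, 2, 3, 1, 4, 2, 1, 3, 2, 4, 1, 2, 3, 1, 2, 1, 3, 4, 2, 1, 3, 2, 4]
\end{verbatim}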
Let $\pi$ be an overpartition in $\overline{\mathcal{B}}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$. Assume that there are $M$ $(k-1)$-marked parts in $RG(\pi)$, and denote these $(k-1)$-marked parts by $\tilde{r}_1(\pi)> \tilde{{r}}_2(\pi)>\cdots >\tilde{r}_M(\pi)$. For each $(k-1)$-marked part $\tilde{r}_p(\pi)$, assume that $\tilde{r}_p(\pi)$ is the $i$-th part of $\pi$. By Definition \ref{R-Gordon-marking}, we see that there must exist $k-2$ parts $\pi_s$ for $s<i$ in $\pi$ such that $\pi_s{ \leq} \pi_i+\eta$ with strict inequality if $\pi_i$ is overlined. Denote these $k-2$ parts together with $\tilde{r}_p(\pi)$ by $\{\tilde{r}_p(\pi)\}_{k-1}$, and call it the $(k-1)$-set of $\tilde{r}_p(\pi)$:
\[\{\tilde{r}_p(\pi)\}_{k-1}=\{{\tilde{r}_{p}(\pi):=} \tilde{r}_{p,1}(\pi)\leq \tilde{r}_{p,2}(\pi)\leq \cdots \leq \tilde{r}_{p,k-2}(\pi)\leq \tilde{r}_{p,k-1}(\pi)\}.\]
It is easy to check that these $k-1$ parts satisfy \eqref{sequence}.

Let $\pi$ be the overpartition defined in \eqref{mark-exa-1}. From the reverse Gordon marking \eqref{exa-r-1} of $\pi$, we see that there are four $4$-marked parts in $RG(\pi)$, which are $\tilde{r}_1(\pi)=70$, $\tilde{r}_2(\pi)=\overline{51}$, $\tilde{r}_3(\pi)=20$ and $\tilde{r}_4(\pi)=\overline{1}$. The corresponding $4$-set $\{\tilde{r}_p(\pi)\}_{4}$ for each $\tilde{r}_p(\pi)$ is indicated as follows.
\begin{equation*}
\begin{split}
&RG(\pi)=(\overline{80}_1,\overbrace{{\color{blue}{80}_2,{80}_3,\overline{70}_1}, {\color{red}{70}_4}}^{{\color{red}\{70\}_4}},\overline{69}_2, \overbrace{{\color{blue}\overline{60}_1,{60}_3,\overline{59}_2}, {\color{red}\overline{51}_4}}^{{\color{red}\{\overline{51}\}_4}},{50}_1, \overline{40}_2,{40}_3,\overline{31}_1,\\[5pt]
&\ \ \ \ \ \ \ \ \ \ \ \ \ \underbrace{{\color{blue}\overline{29}_2,\overline{21}_1,} {\color{blue}\overline{20}_3},{\color{red}{20}_4}} _{\color{red}\{20\}_4},\overline{19}_2, \underbrace{{\color{blue}\overline{10}_1,{10}_3,\overline{9}_2}, {\color{red}\overline{1}_4}}_{\color{red}\{\overline{1}\}_4}).
\end{split}
\end{equation*}
The following proposition tells us that the number of $(k-1)$-marked parts in the Gordon marking of $\pi$ equals the number of $(k-1)$-marked parts in the reverse Gordon marking of $\pi$. The proof of this proposition can be found in \cite{he-ji-zhao}.

\begin{prop}{\rm\!\! \cite[Proposition 2.5]{he-ji-zhao}} \label{sequence-length}
Let $\pi$ be an overpartition in $\overline{\mathcal{B}}_1 (\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$. Assume that there are $N$ $(k-1)$-marked parts in the Gordon marking of $\pi$, denoted by $\tilde{g}_1(\pi)> \tilde{g}_2(\pi)>\cdots >\tilde{g}_N(\pi)$, and there are $M$ $(k-1)$-marked parts in the reverse Gordon marking of $\pi$, denoted by $\tilde{r}_1(\pi)> \tilde{r}_2(\pi)>\cdots >\tilde{r}_M(\pi)$. We have $N=M$. Moreover, for each $1\leq i \leq N\ (=M)$, we have $\tilde{g}_i(\pi)\in \{\tilde{r}_i(\pi)\}_{k-1}$ and $\tilde{r}_i (\pi)\in \{\tilde{g}_i(\pi)\}_{k-1}$.
\end{prop}

\subsection{The forward move and the backward move}

In this subsection, we will review the forward move and the backward move defined in \cite{he-ji-zhao} based on the Gordon marking of a $\overline{B}_j$-overpartition and the reverse Gordon marking of a $\overline{B}_j$-overpartition, and then state that they are inverses of each other. To this end, we need to make the following assumption. Let $\pi_i$ be the $i$-th part of the overpartition $\pi=(\pi_1,\pi_2,\ldots, \pi_\ell)$. We define a new part $\pi_i\pm \eta$ as an overlined part (resp. a non-overlined part) of size $|\pi_i|\pm \eta$ if $\pi_i$ is an overlined part (resp. a non-overlined part).

\begin{defi}[The forward move]\label{forward-defi}
Let $\pi$ be an overpartition such that there are at most $k-1$ marks in its reverse Gordon marking.
Assume that there are $N$ $(k-1)$-marked parts in the reverse Gordon marking of $\pi$, which are denoted by $\tilde{r}_1(\pi)>\tilde{r}_2(\pi)>\cdots> \tilde{r}_N(\pi)$. For $1\leq p\leq N$, we define the forward move of the $p$-th kind as follows{\rm{:}} add $\eta$ to each of $\tilde{r}_1(\pi),\,\tilde{r}_2(\pi),\ldots,\tilde{r}_{p}(\pi)$ and denote the resulting overpartition by $\phi_p(\pi)$.
\end{defi}

For example, let $\pi$ be the overpartition defined in \eqref{mark-exa-1}. From the reverse Gordon marking \eqref{exa-r-1} of $\pi$, we see that there are four $4$-marked parts in the reverse Gordon marking of $\pi$. After applying the forward move of the second kind to $\pi$, we obtain
\begin{equation*}\label{exa-r-2}
\begin{split}
&\phi_2(\pi)=(\overline{80},{ {80},{80},80,} \overline{70},\overline{69},\overline{61}, {\overline{60},{60},\overline{59}}, {50},\overline{40},{40},\overline{31}, {\overline{29},\overline{21},}\\
&\ \ \ \ \ \ \ \ \ \ \ \ {\overline{20}},{{20}},\overline{19}, {\overline{10},{10},\overline{9}},{\overline{1}}).
\end{split}
\end{equation*}
The Gordon marking of $\phi_2(\pi)$ is given by
\begin{equation}\label{example-b}
\begin{split}
&G(\phi_2(\pi))=(\overline{80}_1,\overbrace{{\color{red}{80}_4},{\color{blue}{80}_3,{80}_1,\overline{70}_2} }^{{\color{red}\{80\}_4}},\overline{69}_1, \overbrace{{\color{red}\overline{61}_4},{\color{blue}\overline{60}_2,{60}_3,\overline{59}_1} }^{{\color{red}\{\overline{61}\}_4}},{50}_2, \overline{40}_3,{40}_1,\overline{31}_2,\\[5pt]
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \overline{29}_1,\underbrace{{\color{red}\overline{21}_4}, {\color{blue}\overline{20}_3,{20}_2,\overline{19}_1}} _{\color{red}\{\overline{21}\}_4}, \underbrace{{\color{red}\overline{10}_4},{\color{blue}{10}_3,\overline{9}_2,\overline{1}_1}}_{\color{red}\{\overline{10}\}_4}).
\end{split}
\end{equation}
The following two lemmas concern the resulting overpartition $\phi_p(\pi)$; their proofs can be found in \cite{he-ji-zhao}.

\begin{lem}{\rm \cite[Lemma 2.7]{he-ji-zhao}}\label{lem-for1}
For $1\leq i\leq p$, the part $\tilde{r}_i(\pi)+\eta$ is not repeated in $\phi_p(\pi)$ if $\tilde{r}_i(\pi)$ is overlined.
\end{lem}

\begin{lem}{\rm \cite[Lemma 2.8]{he-ji-zhao}}\label{lem-for2}
There are at most $k-1$ marks in the Gordon marking of $\phi_p(\pi)$, and there are $N$ $(k-1)$-marked parts in the Gordon marking of $\phi_p(\pi)$. Denoting them by $\tilde{g}_1(\phi_p(\pi))>\tilde{g}_2(\phi_p(\pi))>\cdots> \tilde{g}_N(\phi_p(\pi))$, we have
\begin{equation}\label{relation-ab-1}
\tilde{g}_i(\phi_p(\pi))= \tilde{r}_i(\pi)+\eta \text{ for }1\leq i\leq p, \ \text{ and }\ \tilde{r}_{i,1}(\pi)\leq \tilde{g}_i(\phi_p(\pi))\leq \tilde{r}_{i,k-1}(\pi)\text{ for }p< i\leq N,
\end{equation}
where $ \tilde{r}_{i}(\pi):=\tilde{r}_{i,1}(\pi)\leq \tilde{r}_{i,2}(\pi)\leq \cdots \leq \tilde{r}_{i,k-2}(\pi)\leq \tilde{r}_{i,k-1}(\pi)$ is the $(k-1)$-set of $ \tilde{r}_{i}(\pi)$.
\end{lem}
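In the same illustrative style, the forward move is a few lines on top of the reverse marking; the sketch below (ours) reuses \texttt{reverse\_gordon\_marking} and the list \texttt{pi} from the sketches above, and with $k=5$ and $p=2$ it reproduces $\phi_2(\pi)$ above.
\begin{verbatim}
def forward_move(pi, eta, k, p):
    # phi_p: add eta to the p largest (k-1)-marked parts of RG(pi),
    # then restore the weakly decreasing order
    marks = reverse_gordon_marking(pi, eta)
    out, moved = list(pi), 0
    for i in range(len(out)):                  # parts listed largest first
        if marks[i] == k - 1 and moved < p:
            out[i] = (out[i][0] + eta, out[i][1])
            moved += 1
    return sorted(out, key=lambda t: (t[0], t[1]), reverse=True)

print(forward_move(pi, 10, 5, 2))
# 70 -> 80 and 51-overlined -> 61-overlined, as in phi_2(pi) above
\end{verbatim}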
\begin{defi}[The backward move]\label{defi-backward}
Let $\omega$ be an overpartition such that there are at most $k-1$ marks in its Gordon marking. Assume that there are $N$ $(k-1)$-marked parts in the Gordon marking of $\omega$, denoted by $\tilde{g}_1(\omega)>\tilde{g}_2(\omega)>\cdots> \tilde{g}_N(\omega)$. For $1\leq p\leq N$, assume that $\omega$ satisfies the following two conditions{\rm{:}} {\rm (a)} $\tilde{g}_p(\omega)\geq\overline{\eta+\alpha_1}$ and {\rm (b)} there are no $(k-1)$-sets of $\omega$ in $[\tilde{g}_p(\omega)-2\eta, \tilde{g}_p(\omega))$ {\rm{(}}resp. $(\tilde{g}_p(\omega)-2\eta, \tilde{g}_p(\omega))${\rm{)}} if $\tilde{g}_p(\omega)$ is non-overlined (resp. overlined). Then we can define the backward move of the $p$-th kind on $\omega$ as follows: subtract $\eta$ from each of $\tilde{g}_1(\omega),\,\tilde{g}_2(\omega),\ldots, \tilde{g}_{p}(\omega)$ and denote the resulting overpartition by $\psi_p(\omega)$.
\end{defi}

For example, for the overpartition $\omega$ defined in \eqref{example-b}, we see that the largest mark in the Gordon marking of $\omega$ is equal to four. For $p=2$, we see that $\omega$ satisfies conditions (a) and (b) in Definition \ref{defi-backward}, so we can apply the backward move of the second kind to $\omega$ to recover the overpartition $\pi$ defined in \eqref{mark-exa-1}.

The following two lemmas concern the resulting overpartition $\psi_p(\omega)$; their proofs can be found in \cite{he-ji-zhao}.

\begin{lem}{\rm \cite[Lemma 2.10]{he-ji-zhao}}\label{lem-bac1}
For $1\leq i\leq p$, the part $\tilde{g}_i(\omega)-\eta$ is not repeated in $\psi_p(\omega)$ if $\tilde{g}_i(\omega)$ is overlined.
\end{lem}

\begin{lem}{\rm \cite[Lemma 2.11]{he-ji-zhao}}\label{lem-bac2}
There are at most $k-1$ marks in the reverse Gordon marking of $\psi_p(\omega)$, and there are $N$ $(k-1)$-marked parts in the reverse Gordon marking of $\psi_p(\omega)$. Denoting them by $\tilde{r}_1(\psi_p(\omega))>\cdots >\tilde{r}_N(\psi_p(\omega))$, we have
\begin{equation}\label{relation-ab-2}
\tilde{r}_i(\psi_p(\omega))=\tilde{g}_i(\omega)-\eta \text{ for }1\leq i\leq p, \ \text{ and }\ \tilde{g}_{i,1}(\omega)\leq \tilde{r}_i(\psi_p(\omega))\leq \tilde{g}_{i,k-1}(\omega)\text{ for }p< i\leq N,
\end{equation}
where $\tilde{g}_{i,1}(\omega)\leq \tilde{g}_{i,2}(\omega)\leq \cdots \leq \tilde{g}_{i,k-2}(\omega)\leq \tilde{g}_{i,k-1}(\omega):=\tilde{g}_{i}(\omega)$ is the $(k-1)$-set of $\tilde{g}_{i}(\omega)$.
\end{lem}

We conclude this subsection with the following theorem.

\begin{thm}{\rm \cite[Theorem 2.12]{he-ji-zhao}}\label{ForBac-Inv}
The forward move of the $p$-th kind $\phi_p$ and the backward move of the $p$-th kind $\psi_p$ are inverses of each other.
\end{thm}

\subsection{The outline of the proof of Theorem \ref{lambda-r}}

Let $\mathcal{D}_{\alpha_1}$ and $\mathcal{D}_{\alpha_\lambda}$ denote the sets of partitions into distinct parts congruent to $\alpha_1$ and $\alpha_\lambda$ modulo $\eta$, respectively. It is easy to see that Theorem \ref{lambda-r} is equivalent to the following combinatorial statement.

\begin{thm}\label{lambdathm}
Let $\lambda,\ k$ and $r$ be nonnegative integers such that $k\geq r\geq\lambda\geq0$ and $\lambda< k$. There is a bijection $\Theta$ between $\mathcal{D}_{\alpha_1}\times\mathcal{D}_{\alpha_\lambda}\times \mathcal{\overline{B}}_1(\alpha_{2},\ldots,\alpha_{\lambda-1};\eta, k-1,r-1)$ and $\mathcal{\overline{B}}_1(\alpha_1,\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r)$. Moreover, for a triplet $(\delta^{(1)},\delta^{(\lambda)},\pi) \in \mathcal{D}_{\alpha_1}\times\mathcal{D}_{\alpha_\lambda}\times \mathcal{\overline{B}}_1(\alpha_{2},\ldots,\alpha_{\lambda-1} ;\eta, k-1,r-1)$, we have $\tau=\Theta(\delta^{(1)},\delta^{(\lambda)},\pi)\in \mathcal{\overline{B}}_1(\alpha_1,\ldots,\alpha_{\lambda};\eta,k,r)$ such that $|\tau|=|\delta^{(1)}|+|\delta^{(\lambda)}|+|\pi|$.
\end{thm}

To build the bijection $\Theta$ in Theorem \ref{lambdathm}, we will first unite $\pi$ and $\delta^{(\lambda)}$, and denote the resulting overpartition by $\pi^{(0)}$.
By definition, we see that $\pi^{(0)}$ is an overpartition in $\mathcal{\overline{B}}_1(\alpha_{2},\ldots,\alpha_{\lambda-1},\,\alpha_{\lambda};\eta,k,r)$. Furthermore, there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in each $(k-1)$-set $\{\pi^{(0)}_i\}_{k-1}$ of $ \pi^{(0)}$. We next aim to insert the parts of $\delta^{(1)}$ into $\pi^{(0)}$. It turns out that the bijection $\Theta$ consists of two bijections. We first insert some parts of $\delta^{(1)}$ from smallest to largest into $\pi^{(0)}$ so that there are no parts congruent to $\alpha_\lambda$ modulo $\eta$ in some $(k-1)$-sets of the resulting overpartitions. To this end, we will introduce the $(k-1)$-addition and its inverse, the $(k-1)$-subtraction, which are overpartition generalizations of the $(k-1)$-addition and the $(k-1)$-subtraction introduced by Kim \cite{Kim-2018}. We then use the $(k-1)$-insertion with $a=\alpha_1$ defined in \cite{he-ji-zhao} to insert the remaining parts of $\delta^{(1)}$ from smallest to largest into the resulting overpartition after the application of the $(k-1)$-addition. Nevertheless, by introducing the non-degenerate $(r-1)$-set, we can give a unified proof of Theorem \ref{lambdathm}.

\section{The $(k-1)$-addition and the $(k-1)$-subtraction}

Throughout this section and later, we assume that $\lambda,\ k$ and $r$ are nonnegative integers such that $k\geq r\geq\lambda\geq0$ and $k>\lambda$. In this section, we give the definitions of the $(k-1)$-addition and the $(k-1)$-subtraction and show that they are inverses of each other. Before doing that, we will define the non-degenerate $(k-1)$-sets, the non-degenerate $(r-1)$-sets, and the degenerate overpartitions.

In the remainder of the paper, we make the following assumption. Let $\pi_i$ be the $i$-th part of the overpartition $\pi=(\pi_1,\pi_2,\ldots, \pi_\ell)$. If $\pi_i$ is an overlined part congruent to $\alpha_\lambda$ modulo $\eta$, then we define a new part $\pi_i+\alpha_1$ as a non-overlined part of size $|\pi_i|+\alpha_1$. If $\pi_i$ is a non-overlined part divisible by $\eta$, then we define a new part $\pi_i-\alpha_1$ as an overlined part of size $|\pi_i|-\alpha_1$.

For an overpartition $\pi$ and an interval $I$, we use $f_\pi I$ to denote the number of parts of $\pi$ in the interval $I$. For example, we use $f_\pi(0,\eta]$ to denote the number of parts of $\pi$ less than or equal to $\eta$.

Let $\pi=(\pi_1,\pi_2,\ldots, \pi_\ell)$ be an overpartition in $\mathcal{\overline{{B}}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$. A $(k-1)$-set of $\pi$ is called a non-degenerate $(k-1)$-set if there are no parts congruent to $\alpha_\lambda$ modulo $\eta$ in this $(k-1)$-set. Let $\{\pi_m\}_{k-1}$ be a non-degenerate $(k-1)$-set of $\pi$, namely
\[\pi_m\geq \pi_{m+1}\geq \cdots \geq \pi_{m+k-2},\]
where $\pi_m\leq \pi_{m+k-2}+\eta$ with strict inequality if $\pi_m$ is overlined. Note that $0\leq \lambda< k$, so there is at least one non-overlined part in $\{\pi_m\}_{k-1}$. Let $\pi_{m+t}$ be the largest non-overlined part in $\{\pi_m\}_{k-1}$. If $\pi_{m+t}>\pi_{m+t+1}$, then we call $\pi_{m+t}$ a non-degenerate part of $\pi$.

If $f_\pi(0,\eta]=r-1$ and $\overline{\alpha_\lambda}$ does not occur in $\pi$, then the set consisting of the following parts
\[\eta\geq \pi_{s-r+2}\geq \pi_{s-r+3}\geq \cdots \geq \pi_{s}>0\]
is called a non-degenerate $(r-1)$-set of $\pi$, denoted by $\{\pi_s\}_{r-1}$. Notice that $r-1>\lambda-2$, so there must be at least one non-overlined part equal to $\eta$ in $\pi$. Let $\pi_{s-t}=\eta>\pi_{s-t+1}$ where $0\leq t\leq r-2$.
If there are no $(k-1)$-sets of $\pi$ in $(0,\overline{\eta+\alpha_\lambda})$, then $\pi_{s-t}$ is called the non-degenerate $(r-1)$-part of $\pi$. It should be noted that for an overpartition $\pi$ in $\mathcal{\overline{{B}}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$, if there is a non-degenerate $(r-1)$-part in $\pi$, then $r<k$. Otherwise, if $r=k$, then $f_\pi(0,\eta]=r-1=k-1$. This implies that $\{\pi_s\}_{r-1}$ is a $(k-1)$-set of $\pi$, which contradicts the assumption that there are no $(k-1)$-sets of $\pi$ in $(0,\overline{\eta+\alpha_\lambda})$.

Let $\pi$ be an overpartition in $\mathcal{\overline{{B}}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$. Assume that there is a non-degenerate $(r-1)$-part or a non-degenerate part in $\pi$. Then the degenerate overpartition $\hat{\pi}$ of $\pi$ is defined as follows. There are two cases.
\begin{itemize}
\item[\rm{Case 1:}] If there is a non-degenerate $(r-1)$-part in $\pi$, then the degenerate overpartition $\hat{\pi}$ of $\pi$ is obtained by subtracting $\alpha_1$ from the non-degenerate $(r-1)$-part of $\pi$.
\item[\rm{Case 2:}] Otherwise, the degenerate overpartition $\hat{\pi}$ of $\pi$ is obtained by subtracting $\alpha_1$ from the smallest non-degenerate part of $\pi$.
\end{itemize}

For example, let $k=4$, $r=3$, $\lambda=3$, $\eta=10$, $\alpha_1=3$, $\alpha_2=5$ and $\alpha_3=7$. We consider the following overpartition $\pi$ in $\mathcal{\overline{{B}}}_1(5,7;10,4,3)$.
\[\begin{array}{lllllllllllllllll}
&\pi_1,& \pi_2,& \pi_3,& \pi_4,& \pi_5,& \pi_6,& \pi_7,& \pi_8,& \pi_9,& \pi_{10},& \pi_{11}\\[5pt]
&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow &\downarrow&\downarrow&\downarrow&\downarrow&\downarrow\\[5pt]
\pi=(&\overline{50},&\overline{47},&40,&40,&\overline{30},&\overline{25},&\overline{20}, &20,&\overline{15},&10,&\overline{5}&).
\end{array}\]
Note that $f_\pi(0,10]=2$, $\overline{7}$ does not occur in $\pi$ and there are no $3$-sets of $\pi$ in $(0,\overline{17})$, so $\{10,\overline{5}\}$ is the non-degenerate $2$-set of $\pi$. Then the degenerate overpartition $\hat{\pi}$ is obtained from $\pi$ by subtracting $3$ from $\pi_{10}=10$.
\[\begin{array}{lllllllllllllllll}
&\hat{\pi}_1,& \hat{\pi}_2,& \hat{\pi}_3,& \hat{\pi}_4,& \hat{\pi}_5,& \hat{\pi}_6,& \hat{\pi}_7,& \hat{\pi}_8,& \hat{\pi}_9,& \hat{\pi}_{10},& \hat{\pi}_{11}\\[5pt]
&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow &\downarrow&\downarrow&\downarrow&\downarrow&\downarrow\\[5pt]
\hat{\pi}=(&\overline{50},&\overline{47},&40,&40,&\overline{30},&\overline{25},&\overline{20}, &20,&\overline{15},&\overline{7},&\overline{5}&).
\end{array}\]
For another example, let $k=4$, $r=3$, $\lambda=3$, $\eta=10$, $\alpha_1=3$, $\alpha_2=5$ and $\alpha_3=7$. We consider the following overpartition $\tau$ in $\mathcal{\overline{{B}}}_1(5,7;10,4,3)$.
\begin{equation}\label{new-example-11}\begin{array}{lllllllllllllllll}
&\tau_1,& \tau_2,& \tau_3,& \tau_4,& \tau_5,& \tau_6,& \tau_7,& \tau_8,& \tau_9,& \tau_{10},& \tau_{11}\\[5pt]
&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow &\downarrow&\downarrow&\downarrow&\downarrow&\downarrow\\[5pt]
\tau=(&\overline{50},&\overline{47},&40,&40,&\overline{30},&\overline{25},&\overline{20}, &20,&\overline{10},&10,&\overline{7}&).
\end{array}\end{equation}
Note that $\overline{7}$ is a part of $\tau$, so there does not exist a non-degenerate $2$-set in $\tau$.
There are three non-degenerate $3$-sets of $\tau$, which are $\{40,40,\overline{30}\}$, $\{\overline{25},\overline{20},20\}$ and $\{20,\overline{10},{10}\}$. It is easy to see that $\tau_4=40$ and $\tau_8=20$ are two non-degenerate parts of $\tau$. Then, the degenerate overpartition $\hat{\tau}$ of $\tau$ can be obtained by subtracting $3$ from $\tau_8=20$ in $\tau$.
\begin{equation}\label{new-example-12}\begin{array}{lllllllllllllllll}
&\hat{\tau}_1,& \hat{\tau}_2,& \hat{\tau}_3,& \hat{\tau}_4,& \hat{\tau}_5,& \hat{\tau}_6,& \hat{\tau}_7,& \hat{\tau}_8,& \hat{\tau}_9,& \hat{\tau}_{10},& \hat{\tau}_{11}\\[5pt]
&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow&\downarrow &\downarrow&\downarrow&\downarrow&\downarrow&\downarrow\\[5pt]
\hat{\tau}=(&\overline{50},&\overline{47},&40,&40,&\overline{30},&\overline{25},&\overline{20}, &\overline{17},&\overline{10},&10,&\overline{7}&).
\end{array}\end{equation}
We are now ready to describe the first bijection, which involves the following two sets.

(a) For $0\leq p\leq N$, let $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda; \eta,k,r|N,p)$ denote the set of overpartitions $\pi$ in $\mathcal{\overline{{B}}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$ such that there are $N$ $(k-1)$-marked parts in the reverse Gordon marking of $\pi$, denoted by $\tilde{r}_{1}(\pi)>\cdots >\tilde{r}_{N}(\pi)$, satisfying one of the following conditions:

(1) If $0\leq p <N$, then there exists a part congruent to $\alpha_\lambda$ modulo $\eta$ in the $(k-1)$-set of $\tilde{r}_{p+1}(\pi)$, denoted by $\tilde{\tilde{r}}_{p+1}(\pi)$, and there must be a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\pi$ less than $\tilde{\tilde{r}}_{p+1}(\pi)$;

(2) If $p=N$, then $f_\pi(0,\eta]=r-1$, $\tilde{r}_N(\pi)>\eta$ and $\overline{\alpha_\lambda}$ is a part of $\pi$, denoted by $\tilde{\tilde{r}}_{N+1}(\pi)$.

(b) For $0\leq p\leq N$, let $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$ denote the set of overpartitions $\pi$ in $\mathcal{\overline{{B}}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$ such that there are $N$ $(k-1)$-marked parts in the Gordon marking of the degenerate overpartition $\hat{\pi}$ of $\pi$, denoted by $\tilde{g}_{1}(\hat{\pi})>\cdots >\tilde{g}_{N}(\hat{\pi})$, satisfying one of the following conditions:

(1) If $0\leq p<N$ and $\{\pi_m\}_{k-1}$ is the smallest non-degenerate $(k-1)$-set of $\pi$, then $\tilde{g}_{p+1}(\hat{\pi})< \pi_m+\eta<\tilde{g}_{p}(\hat{\pi})$;

(2) If $p=N$, then there is a non-degenerate $(r-1)$-part of $\pi$.

It should be noted that if $\pi$ is an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda; \eta,k,r|N,N)$, then $r<k$. This is because $f_\pi(0,\eta]=r-1$ and $f_\pi(0,\eta]<k-1$ since $\tilde{r}_N(\pi)>\eta$. The $(k-1)$-addition gives the following bijection between $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda; \eta,k,r|N,p)$ and $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda; \eta,k,r|N,p)$.

\begin{thm}\label{Dilation-reduction}
For $N\geq 1$, $0\leq p\leq N$ and $0<\alpha_1<\eta$, let $\pi$ be an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$ and let $\tilde{r}_1(\pi)>\cdots>\tilde{r}_N(\pi)$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\pi$. Assume that $\tilde{\tilde{r}}_{p+1}(\pi)$ is a part congruent to $\alpha_\lambda$ modulo $\eta$ in the $(k-1)$-set $\{\tilde{r}_{p+1}(\pi)\}_{k-1}$.
Here we assume that $\tilde{\tilde{r}}_{N+1}(\pi)=\overline{\alpha_\lambda}$. The $(k-1)$-addition $\tau=A_{p\eta+\alpha_1}(\pi)$ is defined as follows{\rm{:}} First apply the forward move of the $p$-th kind $\phi_p$ to $\pi$ to get $\pi^{(1)}$, and then add $\alpha_1$ to $\tilde{\tilde{r}}_{p+1}(\pi)$ to generate a non-overlined part divisible by $\eta$. We denote the resulting overpartition by $\tau$. Then $\tau$ is an overpartition in $\overline{\mathcal{B}}_{\eta}(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N, p)$ such that
\[|\tau|=|\pi|+p\eta+\alpha_1.\]
Furthermore, the inverse map of the $(k-1)$-addition, the $(k-1)$-subtraction $\pi={S}_{p\eta+\alpha_1}(\tau)$, is defined as follows: First subtract $\alpha_1$ from the smallest non-degenerate part {\rm (}or the non-degenerate $(r-1)$-part if it exists{\rm)} in $\tau$ to obtain the degenerate overpartition $\hat{\tau}$. Then apply the backward move of the $p$-th kind $\psi_p$ to $\hat{\tau}$ to obtain $\pi$. Hence $A_{p\eta+\alpha_1}$ is a bijection between the set $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda; \eta,k,r|N,p)$ and the set $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda; \eta,k,r|N,p)$.
\end{thm}
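Computationally, the $(k-1)$-addition is thus a forward move followed by the replacement of a single part. The sketch below (ours, purely illustrative) reuses \texttt{forward\_move} from Section 2 and traces the example that follows, with $\eta=10$, $\alpha_1=3$, $k=4$ and $p=1$.
\begin{verbatim}
pi = [(50, True), (47, True), (40, False), (30, True), (30, False),
      (25, True), (20, True), (17, True), (10, True), (10, False),
      (7, True)]
step = forward_move(pi, 10, 4, 1)      # the 3-marked part 30 gains eta
# the part 17-overlined gains alpha_1 = 3 and loses its overline:
tau = sorted([(20, False) if t == (17, True) else t for t in step],
             key=lambda t: (t[0], t[1]), reverse=True)
# tau is the overpartition tau of the example below; |tau| = |pi| + 13
\end{verbatim}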
For example, let $k=4$, $r=3$, $\lambda=3$, $\eta=10$, $\alpha_1=3$, $\alpha_2=5$, $\alpha_3=7$ and let $\pi$ be an overpartition in $\mathcal{\overline{{B}}}_1(5,7;10,4,3)$, whose reverse Gordon marking is given below.
\begin{equation}\label{new-example-21}RG(\pi)=(\overline{50}_1,\overline{47}_2,\overbrace{{\color{blue}{40}_1,\overline{30}_2},{\color{red}{30}_3}}^{{\color{red}\{{30}\}_3}},\overbrace{{\color{blue}\overline{25}_1,\overline{20}_2},{\color{red}\overline{17}_3}}^{{\color{red}\{\overline{17}\}_3}},\overbrace{{\color{blue}\overline{10}_1,{10}_2},{\color{red}\overline{7}_3}}^{{\color{red}\{\overline{7}\}_3}}).
\end{equation}
There are three $3$-marked parts in the reverse Gordon marking of $\pi$, which are $\tilde{r}_1(\pi)=30$, $\tilde{r}_2(\pi)=\overline{17}$ and $\tilde{r}_3(\pi)=\overline{7}$. Let $p=1$. We see that there is a part $\overline{17}$, congruent to $7$ modulo $10$, in the $3$-set $\{\tilde{r}_{p+1}(\pi)\}_3=\{\overline{25}_1,\overline{20}_2,\overline{17}_3\}$, so $\tilde{\tilde{r}}_{p+1}(\pi)=\overline{17}$. It is easy to see that $\{ \overline{10}_1,{10}_2,\overline{7}_3\}$ is the only $3$-set less than $\overline{17}$ in $\pi$. Furthermore, there is a part $\overline{7}$ in the $3$-set $\{ \overline{10}_1,{10}_2,\overline{7}_3\}$. Hence
\[\pi\in\mathcal{\overline{{B}}}_3(5,7;10,4,3|3,1).\]
We can apply the $3$-addition $A_{13}$ to $\pi$ to obtain an overpartition $\tau$. Since $p=1$, we first change the part $\tilde{r}_{1}(\pi)=30$ to $\tilde{r}_{1}(\pi)+\eta=40$ and then add $\alpha_1=3$ to the part $\tilde{\tilde{r}}_{2}(\pi)=\overline{17}$ to get $20$. So we obtain
\[\tau=(\overline{50},\overline{47},40,40,\overline{30},\overline{25},\overline{20}, 20,\overline{10},10,\overline{7}),\]
which is the overpartition defined in \eqref{new-example-11}. It is obvious that $|\tau|=|\pi|+p\eta+\alpha_1=|\pi|+13$. From the preceding example \eqref{new-example-11}, we see that the degenerate overpartition $\hat{\tau}$ of $\tau$ is the overpartition defined in \eqref{new-example-12}, whose Gordon marking is given below.
\[G(\hat{\tau})=(\overline{50}_2,\overline{47}_1,\overbrace{{\color{red}{40}_3},{\color{blue}{40}_2,\overline{30}_1}}^{{\color{red}\{{40}\}_3}},\overbrace{{\color{red}\overline{25}_3},{\color{blue}{20}_2,\overline{17}_1}}^{{\color{red}\{\overline{25}\}_3}},\overbrace{{\color{red}\overline{10}_3},{\color{blue}{10}_2,\overline{7}_1}}^{{\color{red}\{\overline{10}\}_3}}). \]
There are three $3$-marked parts in the Gordon marking of $\hat{\tau}$, which are $\tilde{g}_1(\hat{\tau})=40$, $\tilde{g}_2(\hat{\tau})=\overline{25}$ and $\tilde{g}_3(\hat{\tau})=\overline{10}$. It is easy to check that $\{20,\overline{10},10\}$ is the smallest non-degenerate $3$-set of $\tau$, $\tau_m=20$ and $\tilde{g}_1(\hat{\tau})=40>\tau_m+\eta=30>\tilde{g}_2(\hat{\tau})=\overline{25}$. Hence
\[\tau\in\mathcal{\overline{{B}}}_{10}(5,7;10,4,3|3,1).\]
We can apply the $3$-subtraction ${S}_{13}$ to $\tau$. We first subtract $\alpha_1=3$ from the smallest non-degenerate part $20$ in $\tau$ to get the degenerate overpartition $\hat{\tau}$ of $\tau$. Then, we apply the backward move of the first kind $\psi_1$ to $\hat{\tau}$, namely, changing the part $\tilde{g}_1(\hat{\tau})=40$ in $\hat{\tau}$ to $30$. Finally, we recover the overpartition $\pi$ defined in \eqref{new-example-21}.

In order to prove Theorem \ref{Dilation-reduction}, we will show Lemma \ref{dilation-1} and Lemma \ref{reduction}, where Lemma \ref{dilation-1} tells us that the $(k-1)$-addition $A_{p\eta+\alpha_1}$ is well-defined and Lemma \ref{reduction} tells us that the $(k-1)$-addition $A_{p\eta+\alpha_1}$ is reversible. Hence Theorem \ref{Dilation-reduction} immediately follows from these two lemmas and Theorem \ref{ForBac-Inv}.

\begin{lem}\label{dilation-1}
For $N\geq 1$, $0\leq p\leq N$ and $0<\alpha_1<\eta$, let $\pi$ be an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$. Applying the $(k-1)$-addition $A_{p\eta+\alpha_1}$ defined in Theorem \ref{Dilation-reduction} to $\pi$, we obtain $\tau=A_{p\eta+\alpha_1}(\pi)$. Then $\tau$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$ such that $f_{\tau}(0,\eta]=f_{\pi}(0,\eta]$ and $|\tau|=|\pi|+p\eta+\alpha_1.$ Furthermore, $\pi^{(1)}$ is the degenerate overpartition of $\tau$.
\end{lem}

\pf Let $\tilde{r}_1(\pi)>\cdots>\tilde{r}_N(\pi)$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\pi$. Let $\pi^{(1)}$ denote the resulting overpartition obtained by applying the forward move of the $p$-th kind $\phi_p$ to $\pi$. By Lemma \ref{lem-for1} and Lemma \ref{lem-for2}, we see that $\pi^{(1)}$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$. Furthermore, there are $N$ $(k-1)$-marked parts in the Gordon marking of $\pi^{(1)}$, denoted by $\tilde{g}_{1}(\pi^{(1)})>\tilde{g}_{2}(\pi^{(1)})>\cdots>\tilde{g}_{N}(\pi^{(1)}).$ By \eqref{relation-ab-1}, we see that
\begin{equation}\label{proof-add-1}
\tilde{g}_{i}(\pi^{(1)})=\tilde{r}_i(\pi)+\eta \quad \text{for}\quad 1\leq i\leq p,\quad \text{and}\quad \tilde{r}_{i}(\pi)\leq\tilde{g}_{i}(\pi^{(1)})\leq \tilde{r}_{i,k-1}(\pi)\quad \text{for}\quad p< i\leq N.
\end{equation}
Note that there are no overlined parts congruent to $\alpha_1$ modulo $\eta$ in $\pi^{(1)}$, so we conclude that $\pi^{(1)}$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$.
When $p=N$, by \eqref{proof-add-1} and the fact that $\tilde{r}_N(\pi)>\eta$, we see that $\tilde{g}_N(\pi^{(1)})= \tilde{r}_N(\pi)+\eta>2\eta$. By definition, we see that there are no $(k-1)$-sets of $\pi^{(1)}$ in $(0,2\eta]$. So, there are no $(k-1)$-sets of $\pi^{(1)}$ in $(0,\overline{\eta+\alpha_\lambda})$. Next, we change $\tilde{\tilde{r}}_{N+1}(\pi)=\overline{\alpha_\lambda}$ in $\pi^{(1)}$ to $\eta$ to obtain $\tau$. We see that the marks of parts in $(0,2\eta]$ in the Gordon marking of $\tau$ are at most $k-1$ and there are no $(k-1)$-sets of $\tau$ in $(0,\overline{\eta+\alpha_\lambda})$. Furthermore, it is easy to check that
\[f_\tau(0,\eta]=f_{\pi^{(1)}}(0,\eta]=f_{\pi}(0,\eta]=r-1.\]
Hence, $\tau$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$. From the construction, we see that $\overline{\alpha_\lambda}$ does not occur in $\tau$, and so there is a non-degenerate $(r-1)$-part in $\tau$. Furthermore, it is easy to find that $\hat{\tau}=\pi^{(1)}$. Therefore, there are $N$ $(k-1)$-marked parts in the Gordon marking of $\hat{\tau}$. Hence, $\tau$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,N)$ such that $f_{\tau}(0,\eta]=f_{\pi}(0,\eta]$ and $|\tau|=|\pi|+N\eta+\alpha_1$.

When $0\leq p<N$, we first show that $\tau$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$. By definition, it suffices to show that there are at most $k-1$ marks in the Gordon marking of $\tau$. Assume that $\tilde{\tilde{r}}_{p+1}(\pi)$ is the part congruent to $\alpha_\lambda$ modulo $\eta$ in the $(k-1)$-set $\{\tilde{r}_{p+1}(\pi)\}_{k-1}$ and is the $t$-th part in the $(k-1)$-set $\{\tilde{r}_{p+1}(\pi)\}_{k-1}$. Namely, we have
\[\tilde{r}_{p+1}(\pi)\leq \cdots \leq \tilde{\tilde{r}}_{p+1}(\pi) \leq \cdots\leq \tilde{r}_{p+1,k-1}(\pi),\]
where $\tilde{\tilde{r}}_{p+1}(\pi)=\tilde{r}_{p+1,t}(\pi)$. In the remaining proof of this lemma, we assume that
\[\tilde{\tilde{r}}_{p+1}(\pi)=\overline{(b-1)\eta+\alpha_\lambda}.\]
We remove $\tilde{\tilde{r}}_{p+1}(\pi)=\overline{(b-1)\eta+\alpha_\lambda}$ from $\pi^{(1)}$ and denote the resulting overpartition by $\pi^{(2)}$. It is obvious that $\pi^{(2)}$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$. Furthermore, we find that $\tau$ is obtained by inserting $\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1=b\eta$ as a non-overlined part into $\pi^{(2)}$. To show that $\tau$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$, it suffices to show that there are no $(k-1)$-sets of $\pi^{(2)}$ in $[(b-1)\eta,(b+1)\eta]$. Suppose not. Then there is a $(k-1)$-set of $\pi^{(2)}$ in $[(b-1)\eta,(b+1)\eta]$, denoted by $\{\pi^{(2)}_i\}_{k-1}$. Then
\[(b+1)\eta \geq \pi^{(2)}_i\geq \pi^{(2)}_{i+1}\geq \cdots \geq \pi^{(2)}_{i+k-2}\geq (b-1)\eta,\]
where $\pi^{(2)}_i\leq \pi^{(2)}_{i+k-2}+\eta$ with strict inequality if $\pi^{(2)}_i$ is an overlined part. We consider the following two cases:

Case 1: If $\pi^{(2)}_i<\overline{b\eta+\alpha_\lambda}$, then we find that there is a part marked with $k$ among the parts $\{\pi^{(2)}_{i}\}_{k-1}$ together with $\tilde{\tilde{r}}_{p+1}(\pi)=\overline{(b-1)\eta+\alpha_\lambda}$ in the Gordon marking of $\pi^{(1)}$. So there is a part marked with $k$ in the Gordon marking of $\pi^{(1)}$. This contradicts the fact that there are at most $k-1$ marks in the Gordon marking of $\pi^{(1)}$.
Case 2: If $\pi^{(2)}_i=\overline{b\eta+\alpha_\lambda}$ or $(b+1)\eta$, then $\pi^{(2)}_{i+k-2}\geq b\eta$. That is,
\[(b+1)\eta \geq \pi^{(2)}_i\geq \pi^{(2)}_{i+1}\geq \cdots \geq \pi^{(2)}_{i+k-2}\geq b\eta,\]
where $\pi^{(2)}_i\leq \pi^{(2)}_{i+k-2}+\eta$ with strict inequality if $\pi^{(2)}_i$ is an overlined part. We proceed to show that under this assumption,
\begin{equation}\label{pi2}
\tilde{r}_{p}(\pi)>b\eta \quad \text{and} \quad f_{\pi^{(2)}}[b\eta,(b+1)\eta]<k-1.
\end{equation}
By definition, we see that $\tilde{r}_{p}(\pi)\geq\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1=b\eta$. If $\tilde{r}_{p}(\pi)$ is overlined, then, since $\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1=b\eta$ is non-overlined, we have $\tilde{r}_{p}(\pi)>\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1=b\eta$. If $\tilde{r}_{p}(\pi)$ is non-overlined, then $\tilde{r}_{p}(\pi)>\tilde{r}_{p+1}(\pi)+\eta$. In this case, we have $\tilde{r}_{p}(\pi)>\tilde{r}_{p+1}(\pi)+\eta\geq \tilde{\tilde{r}}_{p+1}(\pi)-\alpha_\lambda+\eta =\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1=b\eta$. Hence, in either case, we have $\tilde{r}_{p}(\pi)>b\eta.$

In order to show \eqref{pi2}, we consider the following two subcases.

Case 2.1: $b\eta<\tilde{r}_{p}(\pi)\leq (b+1)\eta$. From the construction of the $(k-1)$-addition $A_{p\eta+\alpha_1}$, we see that $\tilde{g}_p(\pi^{(1)})=\tilde{r}_{p}(\pi)+\eta>(b+1)\eta$, and so $f_{\pi^{(1)}}[b\eta,(b+1)\eta]= f_{\pi}[b\eta, (b+1)\eta]-1<k-1$. By definition, we see that $f_{\pi^{(1)}}[b\eta,(b+1)\eta]=f_{\pi^{(2)}}[b\eta,(b+1)\eta] $. This yields that $f_{\pi^{(2)}}[b\eta,(b+1)\eta]<k-1$.

Case 2.2: $\tilde{r}_{p}(\pi)> (b+1)\eta$. Assume that $f_{\pi^{(2)}}[b\eta,(b+1)\eta]=k-1$. Then, by definition, we have $f_{\pi}[b\eta,(b+1)\eta]=f_{\pi^{(2)}}[b\eta,(b+1)\eta]=k-1$. This implies that there exists a part $ \pi_j\in[b\eta,(b+1)\eta]$ marked with $k-1$ in the reverse Gordon marking of $\pi$, which contradicts the fact that there are no $(k-1)$-marked parts in $({\tilde{r}}_{p+1}(\pi),{\tilde{r}}_{p}(\pi))$ in the reverse Gordon marking of $\pi$, since $\tilde{r}_{p}(\pi)> (b+1)\eta$ and $\tilde{r}_{p+1}(\pi)< \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1=b\eta$. So, the assumption is false. Hence $f_{\pi^{(2)}}[b\eta,(b+1)\eta]< k-1$.

In either subcase, we deduce that $f_{\pi^{(2)}}[b\eta,(b+1)\eta]< k-1$, but this contradicts the assumption that $\pi^{(2)}_i=\overline{b\eta+\alpha_\lambda}$ or $(b+1)\eta$. It follows that there are no $(k-1)$-sets of $\pi^{(2)}$ in $[(b-1)\eta,(b+1)\eta]$. Hence, we conclude that $\tau$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$.

Now, we proceed to show that $\tau$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$. We first show that $\pi^{(1)}$ is the degenerate overpartition of $\tau$. To this end, we aim to show that $\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1$ is the smallest non-degenerate part of $\tau$. It is easy to see that
\begin{equation}\label{new-e-1}\tilde{r}_{p+1}(\pi)\leq \cdots \leq \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1 \leq \cdots\leq \tilde{r}_{p+1,k-1}(\pi)\end{equation}
is a $(k-1)$-set of $\tau$ and there is no part congruent to $\alpha_\lambda$ modulo $\eta$ in this $(k-1)$-set. So the $(k-1)$-set \eqref{new-e-1} is a non-degenerate $(k-1)$-set of $\tau$. Notice that $\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1=b\eta$ and $\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1>\tilde{r}_{p+1,t+1}(\pi)$.
Furthermore, since $\tilde{r}_{p+1,k-1}(\pi)\leq \tilde{r}_{p+1}(\pi)+\eta< \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1+\eta=(b+1)\eta$, we see that $\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1$ is a non-degenerate part of $\tau$. Assume that $\mathfrak{s}(\tau)$ is the smallest non-degenerate part of $\tau$. From the above proof, we see that $\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1$ is a non-degenerate part of $\tau$, and so $\mathfrak{s}(\tau)\leq \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1$. On the other hand, since $\pi$ is an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$, there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\pi$ less than ${\tilde{\tilde{r}}}_{p+1}(\pi)$. By the construction of $\tau$, we see that there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\tau$ less than ${\tilde{\tilde{r}}}_{p+1}(\pi)+\alpha_1$. It follows that $\mathfrak{s}(\tau)\geq {\tilde{\tilde{r}}}_{p+1}(\pi)+\alpha_1$. Thus, we obtain that $\mathfrak{s}(\tau)= {\tilde{\tilde{r}}}_{p+1}(\pi)+\alpha_1$. Hence, we conclude that $\pi^{(1)}$ is the degenerate overpartition of $\tau$, namely $\hat{\tau}=\pi^{(1)}$.

It has been proved that there are $N$ $(k-1)$-marked parts in the Gordon marking of $\pi^{(1)}$, so there are $N$ $(k-1)$-marked parts in the Gordon marking of $\hat{\tau}$, which are denoted by $\tilde{g}_{1}(\hat{\tau})>\cdots>\tilde{g}_{N}(\hat{\tau})$. Assume that $\{\tau_m\}_{k-1}$ is the smallest non-degenerate $(k-1)$-set of $\tau$. We aim to show that \[\tilde{g}_{p+1}(\hat{\tau})< \tau_m+\eta<\tilde{g}_{p}(\hat{\tau}). \] Since $\hat{\tau}=\pi^{(1)}$, it is equivalent to show that \begin{equation}\label{definaaa} \tilde{g}_{p+1}(\pi^{(1)})< \tau_m+\eta<\tilde{g}_{p}(\pi^{(1)}). \end{equation} By definition, we see that $\mathfrak{s}(\tau)={\tilde{\tilde{r}}}_{p+1}(\pi)+\alpha_1$ is in $\{\tau_m\}_{k-1}$, and so \begin{equation}\label{relation-v-2} \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1\leq \tau_m\leq \max\{\tilde{r}_{p+1,k-1}(\pi),\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1\}. \end{equation} By \eqref{relation-v-2}, we see that \[\tau_m+\eta\geq \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1+\eta> {\tilde{r}}_{p+1,k-1}(\pi)+\alpha_1>{\tilde{r}}_{p+1,k-1}(\pi).\] By \eqref{proof-add-1}, we find that ${\tilde{r}}_{p+1,k-1}(\pi)\geq {\tilde{g}}_{p+1}(\pi^{(1)})$. Thus, we arrive at $\tau_m+\eta>{\tilde{g}}_{p+1}(\pi^{(1)})$.

To show that $\tilde{g}_{p}(\pi^{(1)})>\tau_m+\eta$, by \eqref{proof-add-1}, it suffices to show that $\tilde{r}_{p}(\pi)>\tau_m$. We consider the following two cases.

Case 1: $\tilde{\tilde{r}}_{p+1}(\pi)<\tilde{r}_{p,k-1}(\pi)$. By \eqref{relation-v-2}, we see that $\tau_m\leq \max\{\tilde{r}_{p+1,k-1}(\pi),\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1\}=\tilde{r}_{p+1,k-1}(\pi)<\tilde{r}_{p}(\pi)$.

Case 2: $\tilde{\tilde{r}}_{p+1}(\pi)=\tilde{r}_{p,k-1}(\pi)$. Note that $\max\{\tilde{r}_{p+1,k-1}(\pi),\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1\}=\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1$. By \eqref{relation-v-2}, we see that $\tau_m=\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1$.
Notice that $\tilde{r}_{p}(\pi)\geq \tilde{r}_{p+1}(\pi)+\eta$, with strict inequality if $\tilde{r}_{p+1}(\pi)$ is non-overlined, and that $\tilde{r}_{p+1}(\pi)\geq \tilde{\tilde{r}}_{p+1}(\pi)-\alpha_\lambda$, with strict inequality if $\tilde{r}_{p+1}(\pi)$ is overlined. Hence $\tilde{r}_{p}(\pi)\geq \tilde{r}_{p+1}(\pi)+\eta\geq \tilde{\tilde{r}}_{p+1}(\pi)-\alpha_\lambda+\eta=\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1$, and in either case one of the two inequalities is strict. Thus we derive that $\tilde{r}_{p}(\pi)>\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1=\tau_m$. Hence $\tilde{r}_{p}(\pi)>\tau_m$, which implies that $\tilde{g}_{p}(\pi^{(1)})>\tau_m+\eta$. Thus we have shown that \eqref{definaaa} holds. Therefore, we conclude that $\tau$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$. Furthermore, from the construction of the $(k-1)$-addition, it is easy to see that $f_{\tau}(0,\eta]=f_{\pi}(0,\eta]$ and $|\tau|=|\pi|+p\eta+\alpha_1.$ Thus, we complete the proof of this lemma. \qed

To show that the $(k-1)$-addition $A_{p\eta+\alpha_1}$ is reversible, we will first show that the backward move can be applied to the degenerate overpartition $\hat{\tau}$ of $\tau$.

\begin{lem}\label{2etatau} Assume that $\tau$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$ and $\hat{\tau}$ is the degenerate overpartition of $\tau$. Then $\hat{\tau}$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$ and there are $N$ $(k-1)$-marked parts in the Gordon marking of $\hat{\tau}$, which are denoted by $\tilde{g}_1(\hat{\tau})>\cdots>\tilde{g}_N(\hat{\tau})$. Furthermore, $\hat{\tau}$ satisfies the following two conditions{\rm{:}} $(a)$ $\tilde{g}_p(\hat{\tau})\geq \overline{\eta+\alpha_1}$ and $(b)$ there are no $(k-1)$-sets of $\hat{\tau}$ in $[\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))$ {\rm{(}}resp. $(\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))${\rm{)}} if $\tilde{g}_p(\hat{\tau})$ is non-overlined (resp. overlined). \end{lem}

\pf When $p=N$, by definition, there are no $(k-1)$-sets of $\tau$ in $(0,\overline{\eta+\alpha_\lambda})$. We change the non-degenerate $(r-1)$-part $\eta$ in $\tau$ to $\overline{\alpha_\lambda}$ to obtain $\hat{\tau}$. We see that there are no $(k-1)$-sets of $\hat{\tau}$ in $(0,\overline{\eta+\alpha_\lambda})$. So \begin{equation}\label{p=n-sub}\tilde{g}_N(\hat{\tau})\geq\overline{\eta+\alpha_\lambda}>\overline{\eta+\alpha_1}. \end{equation} By definition, there are no $(k-1)$-sets of $\hat{\tau}$ in $(0, \tilde{g}_N(\hat{\tau}))$; otherwise, a part less than $\tilde{g}_N(\hat{\tau})$ would be marked with $k-1$ in the Gordon marking of $\hat{\tau}$, which contradicts the fact that $\tilde{g}_N(\hat{\tau})$ is the smallest $(k-1)$-marked part in the Gordon marking of $\hat{\tau}$. Furthermore, there are no $(k-1)$-sets of $\hat{\tau}$ in $[\tilde{g}_N(\hat{\tau})-2\eta, \tilde{g}_N(\hat{\tau}))$.

When $0\leq p<N$, assume that $\mathfrak{s}(\tau)$ is the smallest non-degenerate part of $\tau$ and $\{\tau_m\}_{k-1}$ is the corresponding non-degenerate $(k-1)$-set in $\tau$. Namely, \[\tau_m\geq \cdots \geq \mathfrak{s}(\tau) \geq \cdots \geq \tau_{m+k-2}, \] where $\tau_{m}\leq \tau_{m+k-2}+\eta$ with strict inequality if $\tau_m$ is overlined, and we assume that $\tau_{m+t-1}=\mathfrak{s}(\tau)$. By the definition of the non-degenerate part, we see that $\tau_{m+t-1}>\tau_{m+t}$.
Assume that $\mathfrak{s}(\tau)=b\eta$. Then $\tau_m\leq (b+1)\eta$, $\tau_m\neq (b+1)\eta$ and $\tau_m\not\equiv\alpha_\lambda\pmod\eta$. Hence $\tau_m<\overline{b\eta+\alpha_\lambda}.$ By definition, we see that $\hat{\tau}$ is obtained from $\tau$ by subtracting $\alpha_1$ from $\mathfrak{s}(\tau)=b\eta$. In order to show that $\hat{\tau}$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$, it suffices to show that $\mathfrak{s}(\tau)-\alpha_1$ is not repeated in $\hat{\tau}$. We consider the following two cases.

Case 1: $t<k-1$. It is trivial that $\tau_{m+t}<\mathfrak{s}(\tau)-\alpha_1$ since there is no part congruent to $\alpha_\lambda$ modulo $\eta$ in $\{\tau_m\}_{k-1}$.

Case 2: $t=k-1$. We have $\tau_m<\tau_{m+k-2}+\alpha_\lambda$. Hence \[\tau_{m+k-1}\leq \tau_m-\eta<\tau_{m+k-2}+\alpha_\lambda-\eta=\mathfrak{s}(\tau)-\alpha_1.\]

In either case, we have $\tau_{m+t}<\mathfrak{s}(\tau)-\alpha_1$. This implies that $\mathfrak{s}(\tau)-\alpha_1$ is not repeated in $\hat{\tau}$. Since $\tau$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$, we see that $\tilde{g}_p(\hat{\tau})>\tau_m+\eta\geq \mathfrak{s}(\tau)+\eta\geq \eta+\eta=2\eta$, which implies that \[\tilde{g}_p(\hat{\tau})\geq \overline{\eta+\alpha_1}.\]

We proceed to show that there are no $(k-1)$-sets of $\hat{\tau}$ in $[\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))$ {\rm(}resp. $(\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))${\rm)} if $\tilde{g}_p(\hat{\tau})$ is non-overlined (resp. overlined). Suppose not. Then we find that there is a $(k-1)$-marked part in $[\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))$ {\rm(}resp. $(\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))${\rm)} in the Gordon marking of $\hat{\tau}$, which must be $\tilde{g}_{p+1}(\hat{\tau})$. Hence $\tilde{g}_{p+1}(\hat{\tau})\geq \tilde{g}_p(\hat{\tau})-2\eta$. Note that $\tilde{g}_{p+1}(\hat{\tau})<\tau_m+\eta<\tilde{g}_p(\hat{\tau})$, and so $\tau_m+\eta<\tilde{g}_p(\hat{\tau})\leq\tilde{g}_{p+1} (\hat{\tau})+2\eta<\tau_m+3\eta.$ We claim that for any $Q\in[\tilde{g}_p(\hat{\tau})-\eta,\tilde{g}_p(\hat{\tau}))$, we have \begin{equation}\label{xxx} f_{\hat{\tau}}[Q-\eta,Q]< k-1\text{ (resp. } f_{\hat{\tau}}(Q-\eta,Q]< k-1 ) \end{equation} if $Q$ is non-overlined (resp. overlined), which contradicts the assumption that there is a $(k-1)$-set in $[\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))$ {\rm(}resp. $(\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))${\rm)}. Hence if the claim is true, then we see that there are no $(k-1)$-sets of $\hat{\tau}$ in $[\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))$ {\rm(}resp. $(\tilde{g}_p(\hat{\tau})-2\eta, \tilde{g}_p(\hat{\tau}))${\rm)}. So, it remains to show that the claim is true.

Recall that $\hat{\tau}$ is obtained from $\tau$ by subtracting $\alpha_1$ from $\mathfrak{s}(\tau)=b\eta$, so \begin{equation}\label{tau0} f_{\hat{\tau}}[b\eta,(b+1)\eta]=f_{\tau}[b\eta,(b+1)\eta]-1<k-1. \end{equation} We see that \eqref{xxx} holds for $Q=\overline{b\eta+\alpha_\lambda}$ and $(b+1)\eta$. We next show that for $2\leq s< \lambda$, if $f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_s},\overline{b\eta+\alpha_s}]=k-1$, then \begin{equation}\label{xxx-basic} f_{\hat{\tau}}[Q-\eta,Q]< k-1\text{ (resp. } f_{\hat{\tau}}(Q-\eta,Q]< k-1 ) \quad \end{equation} for $\overline{(b+1)\eta}\leq Q\leq \overline{(b+1)\eta+\alpha_s}$.
Assume that \begin{equation}\label{xxx-con}f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_s},\overline{b\eta+\alpha_s}]=k-1. \end{equation} \begin{itemize} \item[\rm{(a)}] When $Q=\overline{(b+1)\eta}$, we aim to show that \begin{equation*} f_{\hat{\tau}}(\overline{b\eta},\overline{(b+1)\eta}]<k-1. \end{equation*} Suppose not. Then $f_{\hat{\tau}}(\overline{b\eta},\overline{(b+1)\eta}]=k-1$. By \eqref{tau0}, we deduce that $f_{\hat{\tau}}[{b\eta},\overline{b\eta}]=0$. Hence \begin{equation*} \begin{split} &\ \ f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_s},\overline{b\eta+\alpha_s}]\\[5pt] &=f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_{s}},{b\eta})+f_{\hat{\tau}}[{b\eta}, \overline{b\eta}]+f_{\hat{\tau}}(\overline{b\eta},\overline{b\eta+\alpha_s}]\\[5pt] &\leq (\lambda-s)+0+(s-1)=\lambda-1<k-1, \end{split} \end{equation*} which contradicts \eqref{xxx-con}. So $f_{\hat{\tau}}(\overline{b\eta},\overline{(b+1)\eta}]<k-1$. \item[\rm{(b)}] When $Q=\overline{(b+1)\eta+\alpha_j}$ with $2\leq j\leq s$, we aim to show that \begin{equation*} f_{\hat{\tau}}(\overline{b\eta+\alpha_j},\overline{(b+1)\eta+\alpha_j}]<k-1. \end{equation*} Suppose not. Then $f_{\hat{\tau}}(\overline{b\eta+\alpha_j},\overline{(b+1)\eta+\alpha_j}]=k-1$. By \eqref{tau0}, we deduce that \[f_{\hat{\tau}}[b\eta,\overline{b\eta+\alpha_j}]<f_{\hat{\tau}}[\overline{(b+1)\eta},\overline{(b+1)\eta+\alpha_j}]\leq j.\] Hence \begin{equation*} \begin{split} &\ \ f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_s},\overline{b\eta+\alpha_s}]\\[5pt] &= f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_{s}},{b\eta})+f_{\hat{\tau}}[{b\eta}, \overline{b\eta+\alpha_j}]+f_{\hat{\tau}}(\overline{b\eta+\alpha_j}, \overline{b\eta+\alpha_s}]\\[5pt] &< (\lambda-s)+j+(s-j)=\lambda\leq k-1, \end{split} \end{equation*} which contradicts \eqref{xxx-con}. So $f_{\hat{\tau}}(\overline{b\eta+\alpha_j},\overline{(b+1)\eta+\alpha_j}]<k-1$. \end{itemize} Hence, we obtain \eqref{xxx-basic}.

We proceed to show that \eqref{xxx} holds for $Q\in[\tilde{g}_p(\hat{\tau})-\eta,\tilde{g}_p(\hat{\tau}))$. Since $\tau_m+\eta<\tilde{g}_p(\hat{\tau})<\tau_m+3\eta$, we consider the following two cases.

\begin{itemize} \item[Case 1.] $\tau_m+\eta<\tilde{g}_p(\hat{\tau}) \leq \tau_m+2\eta$. Since $b\eta\leq\tau_m<\overline{b\eta+\alpha_\lambda}$, we have $(b+1)\eta<\tilde{g}_p(\hat{\tau})<\overline{(b+2)\eta+\alpha_\lambda}$. There are five subcases.

\begin{itemize} \item[Case 1.1.] If $\tilde{g}_p(\hat{\tau})=\overline{(b+1)\eta}$, then \[f_{\hat{\tau}}(\overline{b\eta},\overline{(b+1)\eta}] =f_{\hat{\tau}}(\overline{b\eta},{(b+1)\eta}] +1=k-1.\] By \eqref{tau0}, we see that \[f_{\hat{\tau}}[{b\eta},\overline{b\eta}]+f_{\hat{\tau}}(\overline{b\eta},{(b+1)\eta}]< k-1.\] It follows that \[f_{\hat{\tau}}[{b\eta},\overline{b\eta}]=0.\] We next show that \eqref{xxx} holds for $\overline{b\eta}\leq Q\leq (b+1)\eta$. By \eqref{tau0}, we see that \eqref{xxx} holds for $Q=\overline{b\eta+\alpha_\lambda}$ or ${(b+1)\eta}$. \begin{itemize} \item[\rm{(a)}] When $Q=\overline{b\eta}$, we find that \begin{equation*} \begin{split} f_{\hat{\tau}}(\overline{(b-1)\eta},\overline{b\eta}]&=f_{\hat{\tau}}[\overline{(b-1)\eta+\alpha_2}, \overline{(b-1)\eta+\alpha_\lambda}] +f_{\hat{\tau}}[{b\eta},\overline{b\eta}] \\[5pt] &\leq (\lambda-1)+0<\lambda\leq k-1. \end{split} \end{equation*} \item[\rm{(b)}] When $Q=\overline{b\eta+\alpha_j}$ where $2\leq j<\lambda$, we find that \begin{equation*} \begin{split} f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_j},\overline{b\eta+\alpha_j}] &=f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_{j}}, b\eta)+f_{\hat{\tau}}[b\eta,\overline{b\eta}] +f_{\hat{\tau}}(\overline{b\eta}, \overline{b\eta+\alpha_j}]\\[5pt] &\leq (\lambda-j)+0+(j-1)<\lambda\leq k-1. \end{split} \end{equation*} \end{itemize} We see that \eqref{xxx} holds for $\overline{b\eta}\leq Q\leq (b+1)\eta$.

\item[Case 1.2.] If $\tilde{g}_p(\hat{\tau})=\overline{(b+1)\eta+\alpha_i}$ for some $2\leq i\leq\lambda$, then \begin{equation*}\label{con-1}f_{\hat{\tau}}(\overline{b\eta+\alpha_i},\overline{(b+1)\eta+\alpha_i}]=k-1.\end{equation*} We will show that \eqref{xxx} holds for $\overline{b\eta+\alpha_i}\leq Q< \overline{(b+1)\eta+\alpha_i}$. By \eqref{tau0}, we see that \eqref{xxx} holds for $Q=\overline{b\eta+\alpha_\lambda}$ or ${(b+1)\eta}$. By \eqref{xxx-basic}, we see that \eqref{xxx} holds for $Q=\overline{b\eta+\alpha_j}$ with $i\leq j<\lambda$. \begin{itemize} \item[\rm{(a)}] When $Q=\overline{(b+1)\eta}$. Assume that $f_{\hat{\tau}}(\overline{b\eta},\overline{(b+1)\eta}]=k-1$. Since $\tilde{g}_p(\hat{\tau})=\overline{(b+1)\eta+\alpha_i}$, the part $\tilde{g}_{p+1}(\hat{\tau})$ is in $(\overline{b\eta},\overline{(b+1)\eta}]$. Set $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta+\alpha_s}$ with $2\leq s\leq i\leq \lambda$. Notice that \eqref{xxx} holds for $Q=\overline{b\eta+\alpha_\lambda}$, so $2\leq s<\lambda$. By definition, we see that $f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_s},\overline{b\eta+\alpha_s}]=k-1$ where $2\leq s<\lambda$. By \eqref{xxx-basic}, we see that \eqref{xxx} holds for $Q=\overline{(b+1)\eta}$, namely $f_{\hat{\tau}}(\overline{b\eta},\overline{(b+1)\eta}]<k-1$, which contradicts the assumption that $f_{\hat{\tau}}(\overline{b\eta},\overline{(b+1)\eta}]=k-1$. So \eqref{xxx} holds for $Q=\overline{(b+1)\eta}$. \item[\rm{(b)}] When $Q=\overline{(b+1)\eta+\alpha_j}$ with $2\leq j<i$. If $f_{\hat{\tau}}(\overline{b\eta+\alpha_j},\overline{(b+1)\eta+\alpha_j}]=k-1$, then since $\tilde{g}_p(\hat{\tau})=\overline{(b+1)\eta+\alpha_i}$, we see that $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta+\alpha_s}$ with $j<s\leq i\leq \lambda$. Notice that \eqref{xxx} holds for $Q=\overline{b\eta+\alpha_\lambda}$. So $2\leq s<\lambda$. By definition, we see that $f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_s},\overline{b\eta+\alpha_s}]=k-1$ where $2\leq s<\lambda$. By \eqref{xxx-basic}, we see that \eqref{xxx} holds for $Q=\overline{(b+1)\eta+\alpha_j}$ where $2\leq j<s$, namely $f_{\hat{\tau}}(\overline{b\eta+\alpha_j},\overline{(b+1)\eta+\alpha_j}]<k-1$, which contradicts the assumption that $f_{\hat{\tau}}(\overline{b\eta+\alpha_j},\overline{(b+1)\eta+\alpha_j}]=k-1$. So \eqref{xxx} holds for $Q=\overline{(b+1)\eta+\alpha_j}$ with $2\leq j<i$. \end{itemize} Hence, we conclude that \eqref{xxx} holds for $\overline{b\eta+\alpha_i}\leq Q< \overline{(b+1)\eta+\alpha_i}$.

\item[Case 1.3.] If $\tilde{g}_p(\hat{\tau})=(b+2)\eta$, then we aim to show that \eqref{xxx} holds for $(b+1)\eta\leq Q <(b+2)\eta $. By \eqref{tau0}, we see that \eqref{xxx} holds for $Q=(b+1)\eta$. Using arguments similar to those in Case 1.2, we can show that \eqref{xxx} holds for $Q=\overline{(b+1)\eta}$ and $\overline{(b+1)\eta+\alpha_j}$ with $2\leq j<\lambda$.
Since $\tilde{g}_p(\hat{\tau})=(b+2)\eta$, by definition we have $f_{\hat{\tau}}[(b+2)\eta,(b+2)\eta]\geq 1$, and so \begin{equation*} \begin{split} &\ \ f_{\hat{\tau}}[(b+1)\eta,\overline{(b+1)\eta+\alpha_\lambda}]\\[5pt] & =f_{\hat{\tau}}[(b+1)\eta,(b+2)\eta]-f_{\hat{\tau}}[(b+2)\eta,(b+2)\eta]<k-1. \end{split} \end{equation*} This yields that \eqref{xxx} holds for $Q=\overline{(b+1)\eta+\alpha_\lambda}$. Thus, we conclude that \eqref{xxx} holds for $(b+1)\eta\leq Q <(b+2)\eta$.

\item[Case 1.4.] If $\tilde{g}_p(\hat{\tau})=\overline{(b+2)\eta}$, then it suffices to show that \eqref{xxx} holds for $ \overline{(b+1)\eta} \leq Q\leq (b+2)\eta$. We first show that $\overline{(b+1)\eta}$ does not occur in $\hat{\tau}$. Suppose not. Then we have $f_{\hat{\tau}}[\overline{(b+1)\eta},\overline{(b+1)\eta}]=1$, and so \[f_{\hat{\tau}}[\overline{(b+1)\eta},\overline{(b+2)\eta})=1+f_{\hat{\tau}}(\overline{(b+1)\eta},\overline{(b+2)\eta})=f_{\hat{\tau}}(\overline{(b+1)\eta},\overline{(b+2)\eta}]=k-1.\] This implies that there is a part in $[\overline{(b+1)\eta},\overline{(b+2)\eta})$ marked with $k-1$ in the Gordon marking of $\hat{\tau}$. Since $\tilde{g}_p(\hat{\tau})=\overline{(b+2)\eta}$, by definition the marks of the parts in $(\overline{(b+1)\eta},\overline{(b+2)\eta})$ are less than $k-1$ in the Gordon marking of $\hat{\tau}$. So $\tilde{g}_{p+1}(\hat{\tau})=\overline{(b+1)\eta}$. Since $\tau_m+\eta>\tilde{g}_{p+1}(\hat{\tau})$, we find that $\tau_m>\overline{b\eta}$. Assume that $\tau_m=\overline{b\eta+\alpha_j}$ for some $2\leq j<\lambda$. Then by definition, we have $f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_j}, \overline{b\eta+\alpha_j}]=k-1.$ By \eqref{xxx-basic}, we see that \eqref{xxx} holds for $Q=\overline{(b+1)\eta}$, which contradicts the fact that $\tilde{g}_{p+1}(\hat{\tau})=\overline{(b+1)\eta}$. So the assumption is false; it follows that $\overline{(b+1)\eta}$ is not a part of $\hat{\tau}$. By \eqref{tau0}, we have \[f_{\hat{\tau}}(\overline{b\eta},\overline{(b+1)\eta}]=f_{\hat{\tau}}(\overline{b\eta},{(b+1)\eta}]\leq f_{\hat{\tau}}[b\eta,(b+1)\eta]< k-1.\] So \eqref{xxx} holds for $Q=\overline{(b+1)\eta}$. By an argument similar to that in Case 1.2, we can deduce that \eqref{xxx} holds for $Q=\overline{(b+1)\eta+\alpha_j}$ with $2\leq j< \lambda$. It remains to show that \eqref{xxx} holds for $Q=\overline{(b+1)\eta+\alpha_\lambda}$ or $(b+2)\eta$. Suppose not. Then $f_{\hat{\tau}}[(b+1)\eta,(b+2)\eta]=k-1$, and since $\tilde{g}_p(\hat{\tau})=\overline{(b+2)\eta}$, we have $\tilde{g}_{p+1}(\hat{\tau})=(b+1)\eta$ or $\overline{(b+1)\eta}$. From the proof above, we see that $\overline{(b+1)\eta}$ does not occur in $\hat{\tau}$, so $\tilde{g}_{p+1}(\hat{\tau})\neq \overline{(b+1)\eta}$. Thus $\tilde{g}_{p+1}(\hat{\tau})=(b+1)\eta$, and then \[f_{\hat{\tau}}[{b\eta}, {(b+1)\eta}]=k-1,\] which contradicts \eqref{tau0}. Hence $f_{\hat{\tau}}[(b+1)\eta,(b+2)\eta]<k-1$. Therefore \eqref{xxx} holds for $Q=\overline{(b+1)\eta+\alpha_\lambda}$ or $(b+2)\eta$. Hence we conclude that \eqref{xxx} holds for $ \overline{(b+1)\eta} \leq Q \leq (b+2)\eta$.

\item[Case 1.5.] If $\tilde{g}_p(\hat{\tau})=\overline{(b+2)\eta+\alpha_i}$ with $2\leq i<\lambda$, then we aim to show that \eqref{xxx} holds for $\overline{(b+1)\eta+\alpha_i}\leq Q<\overline{(b+2)\eta+\alpha_i}$. Since $\tilde{g}_p(\hat{\tau})\leq \tau_m+2\eta$, we may set $\tau_m=\overline{b\eta+\alpha_j}$ with $i\leq j<\lambda$.
We proceed to show that \begin{equation}\label{case1.5xxx} \tilde{g}_{p+1}(\hat{\tau})\geq\overline{b\eta+\alpha_{j}}. \end{equation} Suppose not. Then we have $\overline{b\eta+\alpha_{i}}<\tilde{g}_{p+1}(\hat{\tau})<\overline{b\eta+\alpha_{j}}$. Setting $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta+\alpha_{t}}$ with $i< t<j$, by definition, we have \[f_\tau(\overline{(b-1)\eta+\alpha_t},\overline{b\eta+\alpha_{t}}]=f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_t},\overline{b\eta+\alpha_{t}}]=k-1.\] Since $\overline{(b-1)\eta+\alpha_\lambda}$ is not a part of $\tau$, there is a non-degenerate $(k-1)$-set of $\tau$ in $(\overline{(b-1)\eta+\alpha_t},\overline{b\eta+\alpha_{t}}]$. By definition, we see that $\tau_m\leq \overline{b\eta+\alpha_{t}}$, which contradicts the fact that $\tau_m=\overline{b\eta+\alpha_{j}}>\overline{b\eta+\alpha_{t}}$. Hence \eqref{case1.5xxx} holds. Since $\tilde{g}_p(\hat{\tau})\geq \tilde{g}_{p+1}(\hat{\tau})+\eta$, we see that \[ \overline{b\eta+\alpha_{j}}\leq \tilde{g}_{p+1}(\hat{\tau})\leq \overline{(b+1)\eta+\alpha_i}.\] By \eqref{tau0}, we see that $\tilde{g}_{p+1}(\hat{\tau})$ is neither $\overline{b\eta+\alpha_\lambda}$ nor $(b+1)\eta$. Recall that $\tau_m=\overline{b\eta+\alpha_j}$ with $i\leq j<\lambda$. By \eqref{xxx-basic}, and noting that $i\leq j$, it follows that $\tilde{g}_{p+1}(\hat{\tau})$ is neither $\overline{(b+1)\eta}$ nor $\overline{(b+1)\eta+\alpha_s}$ with $2\leq s \leq i$. Hence $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta+\alpha_s}$ with $j\leq s< \lambda$. Since $\tilde{g}_p(\hat{\tau})=\overline{(b+2)\eta+\alpha_i}$, by definition we see that \eqref{xxx} holds for $\overline{(b+1)\eta+\alpha_s}\leq Q<\overline{(b+2)\eta+\alpha_i}$. It remains to show that \eqref{xxx} holds for $Q=\overline{(b+1)\eta+\alpha_j}$ with $i\leq j<s$. Since $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta+\alpha_s}$ with $j\leq s< \lambda$, by \eqref{xxx-basic}, we see that \eqref{xxx} holds for $Q=\overline{(b+1)\eta+\alpha_j}$ with $i\leq j<s$. Thus, we conclude that \eqref{xxx} holds for $ \overline{(b+1)\eta+\alpha_i} \leq Q <\overline{(b+2)\eta+\alpha_i}$. \end{itemize}

\item[Case 2.] $\tau_m+2\eta<\tilde{g}_p(\hat{\tau})<\tau_m+3\eta$. In this case, we see that \[\tau_m+\eta>\tilde{g}_{p+1}(\hat{\tau})\geq \tilde{g}_{p}(\hat{\tau})-2\eta>\tau_m.\] Note that $b\eta\leq \tau_m<\overline{b\eta+\alpha_\lambda}$, so \[b\eta<\tilde{g}_{p+1}(\hat{\tau})<\overline{(b+1)\eta+\alpha_\lambda}.\] By \eqref{tau0}, we find that $\tilde{g}_{p+1}(\hat{\tau})$ is neither $\overline{b\eta+\alpha_\lambda}$ nor $(b+1)\eta$. We next show that $ b\eta<\tilde{g}_{p+1}(\hat{\tau})<\overline{b\eta+\alpha_\lambda}$. Suppose not. Then $\overline{(b+1)\eta}\leq \tilde{g}_{p+1}(\hat{\tau})<\overline{(b+1)\eta+\alpha_\lambda}$. Since $\tau_m+\eta> \tilde{g}_{p+1}(\hat{\tau})$, we have $\tau_m>\overline{b\eta}$. Assume that $\tau_m=\overline{b\eta+\alpha_j}$ with $2\leq j<\lambda$. By definition, we have $f_{\hat{\tau}}(\overline{(b-1)\eta+\alpha_j},\overline{b\eta+\alpha_j}]=k-1$. By \eqref{xxx-basic}, we see that \eqref{xxx} holds for $\overline{(b+1)\eta}\leq Q\leq \overline{(b+1)\eta+\alpha_j}$. On the other hand, since $\tilde{g}_{p+1}(\hat{\tau})<\tau_m+\eta=\overline{(b+1)\eta+\alpha_j}$, we have $\tilde{g}_{p+1}(\hat{\tau})=\overline{(b+1)\eta}$ or $\overline{(b+1)\eta+\alpha_s}$ with $2\leq s<j$, which leads to a contradiction. Hence, $ b\eta<\tilde{g}_{p+1}(\hat{\tau})<\overline{b\eta+\alpha_\lambda}$.
If $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta}$, then $(b+2)\eta\leq \tau_m+2\eta<\tilde{g}_{p}(\hat{\tau})\leq\tilde{g}_{p+1}(\hat{\tau})+2\eta=\overline{(b+2)\eta}$. So $\tilde{g}_{p}(\hat{\tau})=\overline{(b+2)\eta}$. By definition, there are no $(k-1)$-sets of $\hat{\tau}$ in $(\overline{b\eta},\overline{(b+2)\eta})$. Hence, \eqref{xxx} holds for $Q\in[\tilde{g}_p(\hat{\tau})-\eta,\tilde{g}_p(\hat{\tau}))$ when $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta}$.

If $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta+\alpha_s}$ with $2\leq s< \lambda$, then $\tilde{g}_{p}(\hat{\tau})\leq\tilde{g}_{p+1}(\hat{\tau})+2\eta=\overline{(b+2)\eta+\alpha_s}$. Note that $\tilde{g}_{p}(\hat{\tau})>\tau_m+2\eta\geq (b+2)\eta$, so \[\overline{(b+2)\eta}\leq\tilde{g}_{p}(\hat{\tau})\leq \overline{(b+2)\eta+\alpha_s}.\] We next show that \eqref{xxx} holds for $Q\in[\tilde{g}_p(\hat{\tau})-\eta,\tilde{g}_p(\hat{\tau}))$. Suppose not. Then there exists $Q\in[\tilde{g}_p(\hat{\tau})-\eta,\tilde{g}_p(\hat{\tau}))$ such that \begin{equation}\label{xxx=} f_{\hat{\tau}}[Q-\eta,Q]= k-1 \text{ (resp. } f_{\hat{\tau}}(Q-\eta,Q]= k-1\text{) if } Q \text{ is non-overlined (resp. overlined)}. \end{equation} By definition, we have $Q\geq \tilde{g}_p(\hat{\tau})-\eta\geq\overline{(b+1)\eta}$. Also, we have $Q-\eta<\overline{b\eta+\alpha_s}$; otherwise there would be a $(k-1)$-marked part less than or equal to $Q$ which must be $\tilde{g}_p(\hat{\tau})$, contradicting the fact that $Q<\tilde{g}_p(\hat{\tau})$. So $\overline{(b+1)\eta}\leq Q<\overline{b\eta+\alpha_s}+\eta=\overline{(b+1)\eta+\alpha_s}$. Since $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta+\alpha_s}$, by \eqref{xxx-basic} we see that $Q\neq \overline{(b+1)\eta}$ and $Q\neq \overline{(b+1)\eta+\alpha_g}$ with $2\leq g<s$, which contradicts the fact that $\overline{(b+1)\eta}\leq Q<\overline{(b+1)\eta+\alpha_s}$. So, the assumption is false: there does not exist $Q\in[\tilde{g}_p(\hat{\tau})-\eta,\tilde{g}_p(\hat{\tau}))$ satisfying \eqref{xxx=}. Hence, \eqref{xxx} holds for $Q\in[\tilde{g}_p(\hat{\tau})-\eta,\tilde{g}_p(\hat{\tau}))$ when $\tilde{g}_{p+1}(\hat{\tau})=\overline{b\eta+\alpha_s}$ with $2\leq s< \lambda$.

Hence, we conclude that \eqref{xxx} holds for $Q\in[\tilde{g}_p(\hat{\tau})-\eta,\tilde{g}_p(\hat{\tau}))$. \end{itemize} Thus, we complete the proof. \qed

We are now in a position to show that the $(k-1)$-subtraction is the inverse of the $(k-1)$-addition.

\begin{lem}\label{reduction} For $N\geq 1$, $0\leq p<N$ and $0<\alpha_1< \eta$, let $\tau$ be an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$. The $(k-1)$-subtraction $\pi=S_{p\eta+\alpha_1}(\tau)$ is defined as follows{\rm{:}} First subtract $\alpha_1$ from the smallest non-degenerate part {\rm (}or the non-degenerate $(r-1)$-part if it exists{\rm)} in $\tau$ to obtain the degenerate overpartition $\hat{\tau}$. Then apply the backward move of the $p$-th kind $\psi_p$ to $\hat{\tau}$ to obtain $\pi$. Then $\pi$ is an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$ such that $f_{\pi}(0,\eta]=f_{\tau}(0,\eta]$ and $|\pi|=|\tau|-p\eta-\alpha_1$. \end{lem}

\pf Let $\hat{\tau}$ be the degenerate overpartition of $\tau$. By Lemma \ref{2etatau}, we see that the backward move of the $p$-th kind $\psi_p$ can be applied to $\hat{\tau}$, and so the $(k-1)$-subtraction $\pi=S_{p\eta+\alpha_1}(\tau)$ is well-defined.
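Before checking the membership conditions, we note that the weight count asserted in the lemma is immediate from this two-step description: passing from $\tau$ to $\hat{\tau}$ subtracts $\alpha_1$ from a single part, and the backward move of the $p$-th kind decreases each of the $p$ largest $(k-1)$-marked parts by $\eta$ (cf. \eqref{relation-nn-1} below), so that
\[|\pi|=\left(|\tau|-\alpha_1\right)-p\eta=|\tau|-p\eta-\alpha_1.\]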
By Lemma \ref{lem-bac2}, we see that $\pi$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r)$. Furthermore, there are $N$ $(k-1)$-marked parts in the reverse Gordon marking of $\pi$, denoted by $\tilde{r}_1(\pi)>\cdots>\tilde{r}_N(\pi)$. Assume that $\tilde{g}_{1}(\hat{\tau})>\cdots>\tilde{g}_{N}(\hat{\tau})$ are the $(k-1)$-marked parts in the Gordon marking of $\hat{\tau}$. By \eqref{relation-ab-2}, we see that \begin{equation}\label{relation-nn-1} \tilde{r}_i(\pi)= \tilde{g}_i(\hat{\tau})-\eta \text{ for }1\leq i\leq p, \ \text{ and }\ \tilde{g}_{i,1}(\hat{\tau})\leq \tilde{r}_i(\pi)\leq \tilde{g}_{i,k-1}(\hat{\tau})\text{ for }p< i\leq N. \end{equation}

When $p=N$, $\hat{\tau}$ is obtained by subtracting $\alpha_1$ from the non-degenerate $(r-1)$-part $\eta$ in $\tau$. By \eqref{p=n-sub} in the proof of Lemma \ref{2etatau}, we see that $\tilde{g}_N(\hat{\tau})\geq \overline{\eta+\alpha_\lambda}$. By definition, we have $f_{\hat{\tau}}[\eta,2\eta]=f_{{\tau}}[\eta,2\eta]-1<k-1$. It follows that $\tilde{g}_N(\hat{\tau})$ is neither $\overline{\eta+\alpha_\lambda}$ nor $2\eta$. So \begin{equation}\label{p=n-sub-1} \tilde{g}_N(\hat{\tau})>2\eta. \end{equation} It follows that $\tilde{r}_N(\pi)= \tilde{g}_N(\hat{\tau})-\eta>\eta$. Furthermore, it is easy to check that $f_{\pi}(0,\eta]=f_{{\tau}}(0,\eta]=r-1$ and that $\overline{\alpha_\lambda}$ occurs in $\pi$. Therefore, we see that $\pi$ is an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,N)$ such that $f_{\pi}(0,\eta]=f_{\tau}(0,\eta]$ and $|\pi|=|\tau|-p\eta-\alpha_1$.

When $0\leq p<N$, let $\mathfrak{s}(\tau)$ be the smallest non-degenerate part of $\tau$, and let $\{\tau_m\}_{k-1}$ be the smallest non-degenerate $(k-1)$-set of $\tau$. By definition, we see that \begin{equation*} \tau_m\geq \cdots \geq \mathfrak{s}(\tau)> \cdots \geq \tau_{m+k-2}, \end{equation*} where $\tau_m\leq \tau_{m+k-2}+\eta$ with strict inequality if $\tau_m$ is overlined. Furthermore, $\tilde{g}_{p+1}(\hat{\tau})<\tau_m+\eta<\tilde{g}_p(\hat{\tau})$. Assume that $\mathfrak{s}(\tau)=b\eta$; then $b\eta\leq \tau_m<\overline{b\eta+\alpha_\lambda}.$

To show that $\pi$ is an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$, it suffices to show that there exists a part congruent to $\alpha_\lambda$ modulo $\eta$ in the $(k-1)$-set $\{\tilde{r}_{p+1}(\pi)\}_{k-1}$ of $\pi$, denoted by $\tilde{\tilde{r}}_{p+1}(\pi)$, and that there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\pi$ less than $\tilde{\tilde{r}}_{p+1}(\pi)$. Since $\mathfrak{s}(\tau)$ is the smallest non-degenerate part of $\tau$, we see that there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in any $(k-1)$-set of $\tau$ less than $\mathfrak{s}(\tau)$. By the construction of $\pi$, we see that there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in any $(k-1)$-set of $\pi$ less than $\mathfrak{s}(\tau)-\alpha_1$. Furthermore, it is easy to see that \[ \tau_m\geq \cdots \geq \mathfrak{s}(\tau)-\alpha_1\geq \cdots \geq \tau_{m+k-2} \] is a $(k-1)$-set of $\pi$, and so there is a $(k-1)$-marked part in this set. Hence we deduce that \begin{equation}\label{proof-dilation-2} \tilde{r}_{p+1}(\pi)\geq \min\{\tau_{m+k-2},\mathfrak{s}(\tau)-\alpha_1\}. \end{equation} Since $\tilde{g}_{p}(\hat{\tau})>\tau_m+\eta>\tilde{g}_{p+1}(\hat{\tau})$, by \eqref{relation-nn-1} we have $\tilde{r}_p(\pi)=\tilde{g}_p(\hat{\tau})-\eta>\tau_m$.
Thus, in order to show $\tilde{\tilde{r}}_{p+1}(\pi)=\mathfrak{s}(\tau)-\alpha_1$, it suffices to prove that \begin{equation*} \tilde{r}_{p+1}(\pi)\leq \mathfrak{s}(\tau)-\alpha_1=\overline{(b-1)\eta+\alpha_\lambda}. \end{equation*} Suppose not. Then $\tilde{r}_{p+1}(\pi)\geq \mathfrak{s}(\tau)=b\eta$. If $\tilde{r}_{p+1}(\pi)=b\eta$, then $\tilde{r}_{p}(\pi)>\tilde{r}_{p+1}(\pi)+\eta= (b+1)\eta$. If $\tilde{r}_{p+1}(\pi)>b\eta$, then $ \tilde{r}_{p}(\pi)\geq\tilde{r}_{p+1}(\pi)+\eta> (b+1)\eta$. By \eqref{relation-nn-1}, we see that \begin{equation}\label{proof-dilation-3} \tilde{g}_p(\hat{\tau})-\eta=\tilde{r}_p(\pi)>(b+1)\eta. \end{equation} Recall that $\hat{\tau}$ is obtained by subtracting $\alpha_1$ from $\mathfrak{s}(\tau)=b\eta$ in $\tau$, so \begin{equation}\label{proof-dilation-0} f_{\pi}[b\eta,(b+1)\eta]= f_{\hat{\tau}}[b\eta,(b+1)\eta]=f_{{\tau}}[b\eta,(b+1)\eta]-1<k-1, \end{equation} which implies that $\tilde{r}_{p+1}(\pi)\neq b\eta$. Hence \begin{equation*}\label{proof-dilation-4} \tilde{r}_{p+1}(\pi)>\mathfrak{s}(\tau)=b\eta. \end{equation*} Since $\tilde{r}_{p+1}(\pi)<\tau_m+\eta$, we consider the following two cases.

Case 1: $\mathfrak{s}(\tau)<\tilde{r}_{p+1}(\pi)\leq \tau_m$. Then $\tau_m=\overline{b\eta}$ or $\overline{b\eta+\alpha_i}$ where $2\leq i<\lambda$. By \eqref{proof-dilation-0}, we see that $\tilde{r}_{p+1}(\pi)\neq\overline{b\eta}$. Set $\tilde{r}_{p+1}(\pi)=\overline{b\eta+\alpha_j}$ and $\tau_m=\overline{b\eta+\alpha_i}$ where $2\leq j\leq i<\lambda$. Recall that $\{\tau_m\}_{k-1}$ is a $(k-1)$-set of $\tau$, and by \eqref{proof-dilation-3}, we find that \begin{equation}\label{proof-dilation-5} f_{\pi}[\overline{(b-1)\eta+\alpha_{i+1}},\overline{b\eta+\alpha_i}]=k-1. \end{equation} Since $\tilde{r}_{p+1}(\pi)=\overline{b\eta+\alpha_j}$, by definition, we see that \[f_{\pi}[\overline{b\eta+\alpha_j},\overline{(b+1)\eta+\alpha_j})=f_{\pi}[\overline{b\eta+\alpha_j},(b+1)\eta] +f_{\pi}[\overline{(b+1)\eta},\overline{(b+1)\eta+\alpha_j})=k-1.\] By \eqref{proof-dilation-0}, we see that \[f_{\pi}[{b\eta},\overline{b\eta+\alpha_j})+f_{\pi}[\overline{b\eta+\alpha_j},(b+1)\eta]<k-1.\] Hence \[f_{\pi}[{b\eta},\overline{b\eta+\alpha_j})<f_{\pi}[\overline{(b+1)\eta}, \overline{(b+1)\eta+\alpha_j})\leq j-1.\] Thus \begin{equation*} \begin{split} &\ \ f_{\pi}[\overline{(b-1)\eta+\alpha_{i+1}},\overline{b\eta+\alpha_i}]\\[5pt] &=f_{\pi}[\overline{(b-1)\eta+\alpha_{i+1}},\overline{(b-1)\eta+\alpha_\lambda}] +f_{\pi}[{b\eta},\overline{b\eta+\alpha_j})+f_{\pi}[\overline{b\eta+\alpha_j}, \overline{b\eta+\alpha_i}]\\[5pt] &< (\lambda-i)+(j-1)+(i-j+1)=\lambda\leq k-1, \end{split} \end{equation*} which contradicts \eqref{proof-dilation-5}.

Case 2: $\tau_m<\tilde{r}_{p+1}(\pi)< \tau_m+\eta$. From the construction of the $(k-1)$-subtraction $S_{p\eta+\alpha_1}$, we see that \[\tilde{r}_{p+1}(\pi)\leq \cdots \leq \ \tilde{r}_{p+1,k-1}(\pi)\] are also parts of $\hat{\tau}$, so there exists $\tilde{g}_{s}(\hat{\tau})$ such that $\tilde{r}_{p+1}(\pi)\leq \tilde{g}_{s}(\hat{\tau})\leq \tilde{r}_{p+1,k-1}(\pi)$. Hence \[\tilde{g}_{s}(\hat{\tau})\leq \tilde{r}_{p+1,k-1}(\pi)<\tilde{r}_{p}(\pi)+\eta=\tilde{g}_{p}(\hat{\tau}).\] We see that $s\geq p+1$. Since $\tilde{g}_{p+1}(\hat{\tau})<\tau_m+\eta$, we have $\tilde{g}_{s}(\hat{\tau})\leq \tilde{g}_{p+1}(\hat{\tau})< \tau_m+\eta$. Hence $\tau_m<\tilde{r}_{p+1}(\pi)\leq\tilde{g}_{s}(\hat{\tau})< \tau_m+\eta$. Since $b\eta\leq\tau_m<\overline{b\eta+\alpha_\lambda}$, we consider the following three subcases.
Case 2.1: If $\tau_m=b\eta$, then $b\eta<\tilde{r}_{p+1}(\pi) \leq \tilde{g}_{s}(\hat{\tau})< (b+1)\eta$. By \eqref{proof-dilation-0}, we see that $\tilde{g}_{s}(\hat{\tau})\neq\overline{b\eta+\alpha_\lambda}$ and $\tilde{r}_{p+1}(\pi)\neq \overline{b\eta}$. Hence $\tilde{r}_{p+1}(\pi)=\overline{b\eta+\alpha_j}$ and $\tilde{g}_{s}(\hat{\tau})=\overline{b\eta+\alpha_i}$, where $2\leq j\leq i< \lambda$. By an argument similar to that in Case 1, we can show that these two relations cannot hold simultaneously.

Case 2.2: If $\tau_m=\overline{b\eta}$, then $\overline{b\eta}<\tilde{r}_{p+1}(\pi)\leq\tilde{g}_{s}(\hat{\tau})<{\overline{(b+1)\eta}}$. By \eqref{proof-dilation-0}, we see that $\tilde{g}_{s}(\hat{\tau})$ is neither $\overline{b\eta+\alpha_\lambda}$ nor $(b+1)\eta$. Hence $\tilde{r}_{p+1}(\pi)=\overline{b\eta+\alpha_j}$ and $\tilde{g}_{s}(\hat{\tau})=\overline{b\eta+\alpha_i}$, where $2\leq j\leq i< \lambda$. By an argument similar to that in Case 1, these two relations cannot hold simultaneously.

Case 2.3: If $\tau_m=\overline{b\eta+\alpha_i}$ where $2\leq i<\lambda$, then $\overline{b\eta+\alpha_i}<\tilde{r}_{p+1}(\pi)\leq\tilde{g}_{s}(\hat{\tau})<{\overline{(b+1)\eta+\alpha_i}}$. By \eqref{proof-dilation-0}, we see that $\tilde{g}_{s}(\hat{\tau})$ is neither $\overline{b\eta+\alpha_\lambda}$ nor $(b+1)\eta$. If $\tilde{g}_{s}(\hat{\tau})<\overline{b\eta+\alpha_\lambda}$, then $\overline{b\eta+\alpha_i}=\tau_m<\tilde{r}_{p+1}(\pi)\leq \tilde{g}_{s}(\hat{\tau})<\overline{b\eta+\alpha_\lambda}$. By an argument similar to that in Case 1, we find that this relation cannot hold. Hence \[(b+1)\eta <\tilde{g}_{s}(\hat{\tau}) <\overline{(b+1)\eta+\alpha_i}.\] From \eqref{xxx-basic} in the proof of Lemma \ref{2etatau}, we find that this inequality cannot hold when $\tau_m=\overline{b\eta+\alpha_i}$, where $2\leq i<\lambda$. Hence we reach a contradiction.

So, the assumption that $ \tilde{r}_{p+1}(\pi)\geq \mathfrak{s}(\tau)$ is false. Hence $ \tilde{r}_{p+1}(\pi)\leq \mathfrak{s}(\tau)-\alpha_1=\overline{(b-1)\eta+\alpha_\lambda}.$ Combining with \eqref{proof-dilation-2}, we arrive at $\tilde{\tilde{r}}_{p+1}(\pi)=\mathfrak{s}(\tau)-\alpha_1$. Furthermore, there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\pi$ less than $\tilde{\tilde{r}}_{p+1}(\pi)$. Hence $\pi\in \overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$. It is easy to check that $f_{\pi}(0,\eta]=f_{\tau}(0,\eta]$ and $|\pi|=|\tau|-p\eta-\alpha_1$. Thus, we complete the proof of this lemma. \qed

\section{The $(k-1)$-insertion and the $(k-1)$-separation}

In this section, we recall the $(k-1)$-insertion and the $(k-1)$-separation defined in \cite{he-ji-zhao}; for more details, see \cite{he-ji-zhao}. The definitions of the $(k-1)$-insertion and the $(k-1)$-separation involve the following two sets.
(a) For $q\geq N\geq 0$ and $a=\eta$ or $\alpha_i$ where $1\leq i\leq \lambda$, let $\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N, q\eta+a)$ denote the set of overpartitions $\pi$ in $\overline{\mathcal{B}}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$ such that there are $N$ $(k-1)$-marked parts in the reverse Gordon marking of $\pi$, denoted by $\tilde{r}_1(\pi)>\tilde{r}_2(\pi)>\cdots> \tilde{r}_N(\pi)$, the largest overlined part $\equiv a\pmod\eta$ is less than $\overline{(q-p)\eta+a}$, where $p$ is the least integer such that $0\leq p\leq N$ and $\overline{(q-p)\eta+a}\geq\tilde{r}_{p+1}(\pi)+\eta$, and if $a<\eta$, $q=N$ and $f_{\pi}(0,\eta]=r-1$, then $\tilde{r}_{N}(\pi)\leq\eta$. Here, we assume that $\tilde{r}_{N+1}(\pi)=-\infty$.

(b) For $s\geq 0$, $0\leq p\leq N$ and $a=\eta$ or $\alpha_i$ for $1\leq i\leq \lambda$, let $\overline{\mathcal{B}}_{=}(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N, (s+p)\eta+a)$ denote the set of overpartitions $\omega$ in $\overline{\mathcal{B}}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$ such that the largest overlined part $\equiv a\pmod\eta$ in $\omega$ is $\overline{s\eta+a}$, there are $N$ $(k-1)$-marked parts in the Gordon marking of $\underline{\omega}$, denoted by $\tilde{g}_1(\underline{\omega})>\cdots>\tilde{g}_N(\underline{\omega})$, and $p$ is the least integer such that $ \tilde{g}_{p+1}(\underline{\omega})< \overline{s\eta+a}$, where $\underline{\omega}$ is the underlying overpartition of $\omega$ obtained by removing $ {\overline{s\eta+a}}$ from $\omega$. Here, we assume that $\tilde{g}_{N+1}(\underline{\omega})=-\infty$.

The $(k-1)$-insertion gives the following bijection between $\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N, q\eta+a)$ and $\overline{\mathcal{B}}_{=}(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N, (s+p)\eta+a)$.

\begin{thm}{\rm \cite[Theorem 3.8]{he-ji-zhao}}\label{deltagammathmbb} For $N\geq 0$, $s\geq 0$, $0\leq p\leq N$, $q\geq N$ and $a=\eta$ or $\alpha_i$ for $1\leq i\leq \lambda$, let $\pi$ be an overpartition in $\overline{\mathcal{{B}}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k, r|N, q\eta+a)$ and let $\tilde{r}_1(\pi)>\cdots>\tilde{r}_N(\pi)$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\pi$. Assume that $p$ is the least integer such that $0\leq p\leq N$ and $\overline{(q-p)\eta+a}\geq\tilde{r}_{p+1}(\pi)+\eta$. Define the $(k-1)$-insertion $\omega=I_{q\eta+a}(\pi)$ as follows: First apply the forward move of the $p$-th kind to $\pi$ to get $\underline{\omega}$, and then insert $\overline{(q-p)\eta+a}$ into $\underline{\omega}$ as an overlined part to obtain $\omega$. Then $\omega$ is an overpartition in $\overline{\mathcal{B}}_{=}(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N, (s+p)\eta+a)$ such that \[q=s+p \quad \text{and} \quad |\omega|=|\pi|+q\eta+a.\] Furthermore, the inverse map of the $(k-1)$-insertion $\omega=I_{q\eta+a}(\pi)$, the $(k-1)$-separation $\pi={SP}_{(s+p)\eta+a}(\omega)$, is defined as follows: First remove $\overline{s\eta+a}$ from $\omega$ to obtain $\underline{\omega}$. Then apply the backward move of the $p$-th kind to $\underline{\omega}$ to obtain $\pi$. Hence $I_{q\eta+a}$ is a bijection between the set $\overline{\mathcal{{B}}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k, r|N, q\eta+a)$ and the set $\overline{\mathcal{{B}}}_=(\alpha_1,\ldots,\alpha_\lambda;\eta,k, r|N, (s+p)\eta+a)$. \end{thm}
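The parameter identity $q=s+p$ and the weight count in Theorem \ref{deltagammathmbb} can be read off directly from the construction: the forward move of the $p$-th kind increases each of the $p$ largest $(k-1)$-marked parts of $\pi$ by $\eta$ (as illustrated in the example below), and the overlined part inserted afterwards is $\overline{(q-p)\eta+a}$, which becomes the largest overlined part congruent to $a$ modulo $\eta$ in $\omega$, so that $s=q-p$. Consequently,
\[|\omega|=|\pi|+p\eta+\big((q-p)\eta+a\big)=|\pi|+q\eta+a.\]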
For example, assume that $k=4$, $r=3$, $\lambda=3$, $\eta=10$, $\alpha_1=3$, $\alpha_2=5$ and $\alpha_3=7$. Let $\pi$ be an overpartition in $\overline{\mathcal{B}}_1(3,5,7;10,4,3)$, whose reverse Gordon marking is given below. \begin{equation}\label{new-example} \begin{split} &RG(\pi)=(\overbrace{{\color{blue}\overline{97}_1,\overline{90}_2}, {\color{red}{90}_3}}^{{\color{red}\{90\}_3}},\overline{80}_1, \overbrace{{\color{blue}\overline{75}_2, {70}_1}, {\color{red}\overline{67}_3}}^{{\color{red}\{\overline{67}\}_3}},\overbrace{{\color{blue}\overline{57}_1,\overline{50}_2},{\color{red}{50}_3}} ^{{\color{red}\{50\}_3}},\overline{45}_1,\\[5pt] &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \underbrace{{\color{blue}\overline{37}_2,\overline{33}_1},{\color{red}{30}_3}} _{\color{red}\{{30}\}_3},\overline{25}_2,\overline{20}_1,\underbrace{{\color{blue}\overline{10}_1,\overline{7}_2},{\color{red}\overline{3}_3}} _{\color{red}\{\overline{3}\}_3}). \end{split} \end{equation} Note that there are five 3-marked parts in $RG(\pi)$, which are $\tilde{r}_1(\pi)=90$, $\tilde{r}_2(\pi)=\overline{67}$, $\tilde{r}_3(\pi)={50}$, $\tilde{r}_4(\pi)={30}$ and $\tilde{r}_5(\pi)=\overline{3}$. Let $q=7$ and $a=\alpha_1=3$. It is easy to check that $p=3$ is the least integer such that $\overline{(q-p)\cdot \eta+a}=\overline{43}\geq \tilde{r}_{p+1}(\pi)+\eta=40$ and that the largest overlined part congruent to $3$ modulo $10$ in $\pi$ is $\overline{33}$, which is less than $\overline{43}$. Hence \[\pi\in\overline{\mathcal{B}}_<(3,5,7;10,4,3|5,73).\] We can apply the $3$-insertion $I_{73}$ to $\pi$ to get $\omega$. Since $p=3$, we first apply the forward move of the third kind $\phi_3$ to $\pi$, namely, changing ${90}$, $\overline{67}$ and $50$ in $\pi$ to ${100}$, $\overline{77}$ and $60$, respectively. Then we insert $\overline{43}$ into the resulting overpartition. Hence, we obtain \begin{equation*}\label{new-example-1} \begin{split} &\omega=(100,\overline{97},\overline{90},\overline{80},\overline{77},\overline{75},70,60,\overline{57},\overline{50},\overline{45},\overline{43},\overline{37},\overline{33}, {30},\overline{25},\overline{20},\overline{10},\overline{7},\overline{3}). \end{split} \end{equation*} It is easy to check that $|\omega|=|\pi|+q\eta+a=|\pi|+73$, that $\omega$ is an overpartition in $\overline{\mathcal{B}}_1(3,5,7;10,4,3)$, and that the largest overlined part congruent to $3$ modulo $10$ in $\omega$ is $\overline{43}$.

For $a=\alpha_1=3$, let $\underline{\omega}$ be the underlying overpartition obtained by removing $\overline{43}$ from $\omega$. Then, the Gordon marking of $\underline{\omega}$ is given as follows. \begin{equation*} \begin{split} &G(\underline{\omega})=(\overbrace{{\color{red}{100}_3},{\color{blue}\overline{97}_2,\overline{90}_1}} ^{{\color{red}\{100\}_3}},\overline{80}_1,\overbrace{{\color{red}\overline{77}_3},{\color{blue}\overline{75}_2,{70}_1}} ^{{\color{red}\{\overline{77}\}_3}},\overbrace{{\color{red}{60}_3},{\color{blue}\overline{57}_1,\overline{50}_2}} ^{{\color{red}\{60\}_3}},\overline{45}_1,\\[5pt] &\ \ \ \ \ \ \ \ \ \ \ \ \overline{37}_2,\overline{33}_1,\underbrace{{\color{red}{30}_3},{\color{blue}\overline{25}_2,\overline{20}_1}} _{\color{red}\{{30}\}_3},\underbrace{{\color{red}\overline{10}_3},{\color{blue}\overline{7}_2,\overline{3}_1}} _{\color{red}\{\overline{10}\}_3}). \end{split} \end{equation*} Note that there are five $3$-marked parts in $G(\underline{\omega})$, which are $\tilde{g}_1(\underline{\omega})=100$, $\tilde{g}_2(\underline{\omega})=\overline{77}$, $\tilde{g}_3(\underline{\omega})=60$, $\tilde{g}_4(\underline{\omega})=30$ and $\tilde{g}_5(\underline{\omega})=\overline{10}$.
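As a concrete check of the weight count in Theorem \ref{deltagammathmbb}, summing the parts of $\pi$ in \eqref{new-example} and the parts of $\omega$ above gives
\[|\pi|=936 \quad\text{and}\quad |\omega|=1009=936+7\cdot 10+3=|\pi|+q\eta+a,\]
as asserted.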
From the Gordon marking $G(\underline{\omega})$ displayed above, it is easy to check that $s=4$ and that $p=3$ is the least integer such that $\overline{s\cdot \eta+ a}=\overline{43}>\tilde{g}_{p+1}(\underline{\omega}) =30$. Hence \[\omega\in\overline{\mathcal{B}}_=(3,5,7;10,4,3|5,73).\] We can apply the $3$-separation ${SP}_{73}$ to $\omega$. We first remove $\overline{43}$ from $\omega$ to get $\underline{\omega}$. Then, we apply the backward move of the third kind $\psi_3$ to $\underline{\omega}$, namely, changing $100$, $\overline{77}$ and $60$ in $\underline{\omega}$ to $90$, $\overline{67}$ and $50$, respectively. Finally, we recover the overpartition $\pi$ defined in \eqref{new-example}.

\section{Proof of Theorem \ref{lambda-r}}

As stated in Section 2, the proof of Theorem \ref{lambda-r} is equivalent to the proof of Theorem \ref{lambdathm}. In this section, we will show Theorem \ref{lambdathm} by using the $(k-1)$-addition defined in Theorem \ref{Dilation-reduction} and the $(k-1)$-insertion defined in Theorem \ref{deltagammathmbb} repeatedly. Before giving a proof of Theorem \ref{lambdathm}, we need to show that the $(k-1)$-addition can be applied to an overpartition in $ \overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$ successively.

\begin{lem}\label{tauhh} For $0\leq p<N$, let $\tau$ be an overpartition in $ \overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$. Assume that there are $N'$ $(k-1)$-marked parts in the reverse Gordon marking of $\tau$.

{\rm{(1)}} If $N'>p'>p$, then $\tau\in\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N',p')$.

{\rm{(2)}} Set $\pi=S_{p\eta+\alpha_1}(\tau)$. If $\pi$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots, \alpha_\lambda;\eta,k,r|N'',p'')$ and $\tau$ is not an overpartition in $ \overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,N)$, then $N''>p''$ and $p>p''$. \end{lem}

\pf (1) Let $\tilde{r}_1(\tau)>\cdots>\tilde{r}_{N'}(\tau)$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\tau$. If $N'>p'>p$, then we wish to show that there exists a part congruent to $\alpha_\lambda$ modulo $\eta$ in the $(k-1)$-set $\{\tilde{r}_{p'+1}(\tau)\}_{k-1}$ of $\tau$, denoted by $\tilde{\tilde{r}}_{p'+1}(\tau)$, and that there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\tau$ less than $\tilde{\tilde{r}}_{p'+1}(\tau)$. By Theorem \ref{Dilation-reduction}, we see that there is an overpartition $\pi$ in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots, \alpha_\lambda;\eta,k,r|N,p)$ such that $\tau=A_{p\eta+\alpha_1}(\pi)$, where $A_{p\eta+\alpha_1}$ is the $(k-1)$-addition defined in Theorem \ref{Dilation-reduction}. By definition, there are $N$ $(k-1)$-marked parts in the reverse Gordon marking of $\pi$, denoted by $\tilde{r}_1(\pi)>\cdots>\tilde{r}_N(\pi)$. Furthermore, there exists a part congruent to $\alpha_\lambda$ modulo $\eta$ in the $(k-1)$-set $\{\tilde{r}_{p+1}(\pi)\}_{k-1}$, denoted by $\tilde{\tilde{r}}_{p+1}(\pi)$, and there must be a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\pi$ less than $\tilde{\tilde{r}}_{p+1}(\pi)$. Recall the construction of $\tau=A_{p\eta+\alpha_1}(\pi)$: we first apply the forward move of the $p$-th kind $\phi_p$ to $\pi$ to get $\pi^{(1)}$, and then add $\alpha_1$ to $\tilde{\tilde{r}}_{p+1}(\pi)$ to obtain $\tau$. We first show that \begin{equation}\label{proof8.1-2} \tilde{r}_p(\pi)\leq \tilde{r}_p(\tau)\leq \tilde{r}_p(\pi)+\eta. \end{equation} Let $\tilde{g}_1(\pi^{(1)})>\cdots>\tilde{g}_N(\pi^{(1)})$ be the $(k-1)$-marked parts in the Gordon marking of $\pi^{(1)}$ and let $\tilde{r}_1(\pi^{(1)})>\cdots>\tilde{r}_N(\pi^{(1)})$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\pi^{(1)}$. From \eqref{proof-add-1} in the proof of Lemma \ref{dilation-1}, we see that $\tilde{g}_p(\pi^{(1)})= \tilde{r}_p(\pi)+\eta$. By Proposition \ref{sequence-length}, we see that \begin{equation}\label{proof8.1-3}\tilde{r}_p(\pi)=\tilde{g}_p(\pi^{(1)})-\eta\leq\tilde{r}_p(\pi^{(1)})\leq \tilde{g}_p(\pi^{(1)})=\tilde{r}_p(\pi)+\eta.\end{equation} From the construction of $\tau$, we see that $\tilde{r}_p(\tau)=\tilde{r}_p(\pi^{(1)})$. Hence, we obtain \eqref{proof8.1-2} from \eqref{proof8.1-3}.

We next show that there are no $(k-1)$-sets of $\tau$ in $(\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1,\ \tilde{r}_{p}(\pi)+\eta)$. Suppose not. Then let $\{\tau_i\}_{k-1}$ be a $(k-1)$-set of $\tau$ in $(\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1, \tilde{r}_{p}(\pi)+\eta)$, that is, \[\tilde{r}_{p}(\pi)+\eta>\tau_{i}\geq \cdots\geq \tau_{i+k-2}>\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1,\] where $\tau_i\leq \tau_{i+k-2}+\eta$ with strict inequality if $\tau_i$ is overlined. By definition, it is easy to see that there are no $(k-1)$-sets of $\pi$ in $(\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1,\tilde{r}_{p}(\pi))$. From the construction of $\tau$, we see that the parts of $\tau$ in $(\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1,\tilde{r}_{p}(\pi))$ stay the same as those of $\pi$. So there are no $(k-1)$-sets of $\tau$ in $(\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1,\tilde{r}_{p}(\pi))$. Hence $\tau_i\geq \tilde{r}_{p}(\pi)$. Since $\tilde{r}_{p}(\pi)$ in $\pi$ is changed to $\tilde{r}_{p}(\pi)+\eta$ in $\tau$, we have $\tau_i\neq \tilde{r}_{p}(\pi)$. Hence $\tau_i> \tilde{r}_{p}(\pi)$, and so $\tau_{i+k-2}\geq \tau_i-\eta> \tilde{r}_{p}(\pi)-\eta$. Therefore $\{\tau_i\}_{k-1}$ is a $(k-1)$-set of $\tau$ in $(\tilde{r}_{p}(\pi)-\eta, \tilde{r}_{p}(\pi)+\eta)$, which contradicts the fact that there are no $(k-1)$-sets of $\tau$ in $(\tilde{r}_{p}(\pi)-\eta,\tilde{r}_{p}(\pi)+\eta)$ since $\tilde{r}_{p}(\pi)$ in $\pi$ is changed to $\tilde{r}_{p}(\pi)+\eta$ in $\tau$. So the assumption is false. Hence, there are no $(k-1)$-sets of $\tau$ in $(\tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1, \tilde{r}_{p}(\pi)+\eta)$. Combining with \eqref{proof8.1-2}, we obtain \begin{equation}\label{proof-ss-1} \tilde{r}_{p+1}(\tau)\leq \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1. \end{equation} It follows that if $N'>p'>p$, then \[{\tilde{r}}_{p'+1,1}(\tau)\leq\cdots \leq {\tilde{r}}_{p'+1,k-1}(\tau){ <} \tilde{r}_{p+1}(\tau)\leq \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1. \] From the construction of $\tau$, we see that $\tilde{\tilde{r}}_{p+1}(\pi)$ is not a part of $\tau$. Hence \begin{equation*}\label{bbb}{\tilde{r}}_{p'+1,1}(\tau)\leq\cdots \leq {\tilde{r}}_{p'+1,k-1}(\tau){ <} \tilde{\tilde{r}}_{p+1}(\pi). \end{equation*} By definition, there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\pi$ less than $\tilde{\tilde{r}}_{p+1}(\pi)$, so there exists a part congruent to $\alpha_\lambda$ modulo $\eta$ in the $(k-1)$-set $\{\tilde{r}_{p'+1}(\tau)\}_{k-1}$ of $\tau$, denoted by $\tilde{\tilde{r}}_{p'+1}(\tau)$. Furthermore, there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\tau$ less than $\tilde{\tilde{r}}_{p'+1}(\tau)$. Hence, we conclude that $\tau\in\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N',p')$.
(2) By Theorem \ref{Dilation-reduction}, we see that \[\pi\in\overline{\mathcal{B}}_\lambda (\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p).\] Let $\tilde{r}_1(\pi)>\cdots>\tilde{r}_N(\pi)$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\pi$. By definition, we see that there exists a part congruent to $\alpha_\lambda$ modulo $\eta$ in the $(k-1)$-set $\{\tilde{r}_{p+1}(\pi)\}_{k-1}$, denoted by $\tilde{\tilde{r}}_{p+1}(\pi)$, and there must be a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\pi$ less than $\tilde{\tilde{r}}_{p+1}(\pi)$. Assume that $\pi$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots, \alpha_\lambda;\eta,k,r|N'',p'')$ and $\tau$ is not an overpartition in $ \overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,N)$. Let $\tilde{g}_{1}(\hat{\pi})>\cdots >\tilde{g}_{N''}(\hat{\pi})$ be the $(k-1)$-marked parts in the Gordon marking of the degenerate overpartition $\hat{\pi}$ of $\pi$.

\begin{itemize} \item If $0\leq p''<N''$ and assume that $\{\pi_m\}_{k-1}$ is the smallest non-degenerate $(k-1)$-set of $\pi$, then $\tilde{g}_{p''+1}(\hat{\pi})< \pi_m+\eta<\tilde{g}_{p''}(\hat{\pi})$;

\item If $p''=N''$, then there is a non-degenerate $(r-1)$-part of $\pi$. \end{itemize}

We first show that in this case, $N''\neq p''$, that is, $N''>p''$. Suppose not. Then $N''=p''$. By definition, we see that $f_\pi(0,\eta]=r-1$, there are no $(k-1)$-sets of $\pi$ in $(0,\overline{\eta+\alpha_\lambda})$ and $\overline{\alpha_\lambda}$ is not a part of $\pi$. Recall that $\pi=S_{p\eta+\alpha_1}(\tau)$ and $\tau$ is an overpartition in $ \overline{\mathcal{B}}_\eta(\alpha_2, \ldots,\alpha_\lambda;\eta,k,r|N,p)$ where $p<N$. Assume that $\{\tau_m\}_{k-1}$ is the smallest non-degenerate $(k-1)$-set of $\tau$ and $\tau_{m+t}$ is the smallest non-degenerate part of $\tau$. Since the $(k-1)$-addition $A_{p\eta+\alpha_1}$ is the inverse map of the $(k-1)$-subtraction $S_{p\eta+\alpha_1}$, we see that $\tau=A_{p\eta+\alpha_1}(\pi)$. By definition, we have $\tau_{m+t}\geq 2\eta$. Let $\tilde{g}_1(\hat{\tau})>\cdots>\tilde{g}_N(\hat{\tau})$ be the $(k-1)$-marked parts in the Gordon marking of the degenerate overpartition $\hat{\tau}$ of $\tau$. Then, by definition, we see that \[\tilde{g}_p(\hat{\tau})>\tau_m+\eta\geq \tau_{m+t}+\eta\geq 3\eta.\] From the construction of $\pi=S_{p\eta+\alpha_1}(\tau)$, we see that the parts of $\pi$ in $(0,\overline{\eta+\alpha_\lambda})$ stay the same as those of $\tau$. Hence $f_\tau(0,\eta]=r-1$, there are no $(k-1)$-sets of $\tau$ in $(0,\overline{\eta+\alpha_\lambda})$ and $\overline{\alpha_\lambda}$ is not a part of $\tau$. By definition, we see that \[\tau\in\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,N),\] which contradicts the fact that $\tau$ is not an overpartition in $ \overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,N)$. So the assumption is false. Hence $N''>p''$.

We proceed to show that $p>p''$. Since $\pi$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots, \alpha_\lambda;\eta,k,r|N'',p'')$, by Theorem \ref{Dilation-reduction}, we see that there exists an overpartition $\omega$ in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots, \alpha_\lambda;\eta,k,r|N'',p'')$ such that $\pi=A_{p''\eta+\alpha_1}(\omega)$, where $A_{p''\eta+\alpha_1}$ is the $(k-1)$-addition defined in Theorem \ref{Dilation-reduction}.
Let $\tilde{r}_1(\omega)>\cdots>\tilde{r}_{N''}(\omega)$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\omega$. By definition, there exists a part congruent to $\alpha_\lambda$ modulo $\eta$ in the $(k-1)$-set $\{\tilde{r}_{p''+1}(\omega)\}_{k-1}$ of $\omega$, denoted by $\tilde{\tilde{r}}_{p''+1}(\omega)$, and there must be a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\omega$ less than $\tilde{\tilde{r}}_{p''+1}(\omega)$. By the same argument as in the proof of \eqref{proof-ss-1}, we have \[\tilde{r}_{p''+1}(\pi)\leq \tilde{\tilde{r}}_{p''+1}(\omega)+\alpha_1.\] Notice that \[\tilde{r}_{p''+1}(\omega)\leq\cdots \leq \tilde{{\tilde{r}}}_{p''+1}(\omega)+\alpha_1 \leq \cdots \leq \tilde{r}_{p''+1,k-1}(\omega)\] is a non-degenerate $(k-1)$-set of $\pi$, so \begin{equation}\label{proof-ss-2} \tilde{r}_{p''+1}(\pi)\geq\tilde{r}_{p''+1}(\omega). \end{equation} Assume that $p\leq p''$. By \eqref{proof-ss-2}, we see that $\tilde{\tilde{r}}_{p+1}(\pi)\geq\tilde{r}_{p+1}(\pi)\geq \tilde{r}_{p''+1}(\pi)\geq \tilde{r}_{p''+1}(\omega)$. Since $\tilde{\tilde{r}}_{p+1}(\pi)$ and $\tilde{\tilde{r}}_{p''+1}(\omega)$ are congruent to $\alpha_\lambda$ modulo $\eta$, we have $\tilde{\tilde{r}}_{p+1}(\pi)\geq \tilde{\tilde{r}}_{p''+1}(\omega)$. Furthermore, from the construction of $\pi$, we see that $\tilde{\tilde{r}}_{p''+1}(\omega)$ is not a part of $\pi$. Hence \[\tilde{\tilde{r}}_{p+1}(\pi)\geq \tilde{\tilde{r}}_{p''+1}(\omega)+\eta.\] It follows that \[\max\{\tilde{\tilde{r}}_{p''+1}(\omega)+\alpha_1,\tilde{r}_{p''+1,k-1}(\omega)\}<\tilde{\tilde{r}}_{p''+1}(\omega)+\eta\leq\tilde{\tilde{r}}_{p+1}(\pi).\] Hence we conclude that $\tilde{r}_{p''+1,1}(\omega)\leq \cdots \leq \tilde{{\tilde{r}}}_{p''+1}(\omega)+\alpha_1 \leq \cdots \leq \tilde{r}_{p''+1,k-1}(\omega)$ is a non-degenerate $(k-1)$-set of $\pi$ less than $\tilde{\tilde{r}}_{p+1}(\pi)$, which contradicts the fact that any $(k-1)$-set of $\pi$ less than $\tilde{\tilde{r}}_{p+1}(\pi)$ has a part congruent to $\alpha_\lambda$ modulo $\eta$. So the assumption is false. Hence $p>p''$. Thus, we complete the proof. \qed

Similarly, the following lemma in \cite{he-ji-zhao} is also useful; it tells us that the $(k-1)$-insertion can be applied successively to an overpartition in $ \overline{\mathcal{B}}_{=}(\alpha_1,\ldots,\alpha_\lambda;\eta,k, r|N,(s+p)\eta+a)$.

\begin{lem}{\rm \cite[Theorem 3.11]{he-ji-zhao}}\label{ssins} Let $\omega$ be an overpartition in $\overline{\mathcal{{B}}}_{=}(\alpha_1,\ldots,\alpha_\lambda;\eta,k, r|N,(s+p)\eta+a)$. Assume that there are $N'$ $(k-1)$-marked parts in the reverse Gordon marking of $\omega$. \begin{itemize} \item[\rm{(1)}] If $q'>s+p$, then $\omega$ is an overpartition in $\overline{\mathcal{{B}}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k, r|N',q'\eta+a)${\rm{;}} \item[\rm{(2)}] Set $\pi={SP}_{(s+p)\eta+a}(\omega)$. If $\pi$ is an overpartition in $\overline{\mathcal{{B}}}_{=}(\alpha_1,\ldots,\alpha_\lambda;\eta,k, r|N'',(s_1+p_1)\eta+a)$, then $s+p>s_1+p_1$. \end{itemize} \end{lem}

The following lemma tells us that the $(k-1)$-insertion can be applied to the overpartition resulting from the $(k-1)$-addition.

\begin{lem}\label{sskr1} For $0\leq p\leq N$, let $\pi$ be an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,p)$. Define $\tau=A_{p\eta+\alpha_1}(\pi).$ Assume that there are $N'$ $(k-1)$-marked parts in the reverse Gordon marking of $\tau$.
{\rm{(1)}} For $p=N$, $\tau$ is an overpartition in $\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N',q\eta+\alpha_1)$ if and only if $q>N$. {\rm{(2)}} For $0\leq p<N$, assume that $\overline{\alpha_\lambda}$ is a part of $\pi$ when $f_\pi(0,\eta]=r-1$. If $\tau$ is not an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N',N')$, then for $q\geq N'$, we have $\tau\in\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N',q\eta+\alpha_1)$. {\rm{(3)}} For $0\leq p<N$ and $p< p'<N'$, if $\tau$ is an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N',p')$ and $\overline{\alpha_\lambda}$ is a part of $\pi$, then $\overline{\alpha_\lambda}$ is also a part of $\tau$. \end{lem} \pf Let $\tilde{r}_{1}(\pi)>\cdots>\tilde{r}_{N}(\pi)$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\pi$ and let $\tilde{r}_{1}(\tau)>\cdots>\tilde{r}_{N'}(\tau)$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\tau$. (1) We first show that $N'=N$ or $N+1$, and that $\tilde{r}_{N'}({\tau})=\eta$ if $N'=N+1$. Let $\tilde{g}_1(\hat{\tau})>\cdots>\tilde{g}_N(\hat{\tau})$ be the $(k-1)$-marked parts in the Gordon marking of the degenerate overpartition $\hat{\tau}$ of $\tau$. Note that $p=N$, so $\tau$ is an overpartition in $\overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N,N)$, and hence $f_\tau(0,\eta]=r-1$, $\overline{\alpha_\lambda}$ does not occur in $\tau$, and there are no $(k-1)$-sets of $\tau$ in $(0,\overline{\eta+\alpha_\lambda})$. From \eqref{p=n-sub-1} in the proof of Lemma \ref{reduction}, we see that $\tilde{g}_N(\hat{\tau})>2\eta$. Let $\tilde{r}_1(\hat{\tau})>\cdots>\tilde{r}_N(\hat{\tau})$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\hat{\tau}$. By Proposition \ref{sequence-length}, we see that \[\tilde{r}_{N}(\hat{\tau})\geq \tilde{g}_{N,1}(\hat{\tau})\geq\tilde{g}_{N}(\hat{\tau})-\eta>\eta.\] By the definition of the degenerate overpartition, we see that \begin{equation}\label{proof-8.7-1} \tilde{r}_N(\tau)=\tilde{r}_{N}(\hat{\tau})>\eta. \end{equation} We consider the following two cases: Case 1: If there are no $(k-1)$-sets of $\tau$ in $(0,2\eta]$, then $N'=N$. Case 2: There exists a $(k-1)$-set of $\tau$ in $(0,2\eta]$, denoted by $\tau_{i+k-2}\leq\cdots\leq \tau_{i}$. Since there are no $(k-1)$-sets of $\tau$ in $(0,\overline{\eta+\alpha_\lambda})$ and $\eta$ is a part of $\tau$, we have $\overline{\eta+\alpha_\lambda}\leq \tau_{i}\leq2\eta$ and $\tau_{i+k-2}=\eta$, and $\{\tau_{i}\}_{k-1}$ is the only $(k-1)$-set of $\tau$ in $(0,2\eta]$. There are two subcases. Case 2.1: If $\eta<\tilde{r}_N(\tau)\leq 2\eta$, then $\tau_{i+k-2}<\tilde{r}_N(\tau)\leq \tau_i$, which implies that there are no $(k-1)$-sets of $\tau$ in $(0,\tilde{r}_N(\tau))$. Hence $N'=N$. Case 2.2: If $\tilde{r}_{N}(\tau)> 2\eta$, then there is a part in the $(k-1)$-set $\{\tau_{i}\}_{k-1}$ marked with $k-1$ in the reverse Gordon marking of $\tau$. Hence $N'=N+1$. By definition, there are no $(k-1)$-sets of $\hat{\tau}$ in $(\eta,\tilde{r}_N(\tau))$. So there are no $(k-1)$-sets of $\tau$ in $(\eta,\tilde{r}_N(\tau))$. This implies that there is no $(k-1)$-marked part in $(\eta,\tilde{r}_N(\tau))$ in the reverse Gordon marking of $\tau$. Hence $\tilde{r}_{N+1}(\tau)=\eta$. From the argument above, we see that $N'=N$ or $N+1$. Furthermore, if $N'=N+1$, then $\tilde{r}_{N+1}(\tau)=\eta$.
We first show that if $\tau$ is an overpartition in $\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N',q\eta+\alpha_1)$, then $q > N$. By the definition of $\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N',q\eta+\alpha_1)$, we see that $q\geq N'\geq N$. We proceed to show that $q>N$. Suppose not. Then $q=N$. This implies that $N'= N$. So, $\tau$ is an overpartition in $\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N,N\eta+\alpha_1)$. Note that $f_\tau(0,\eta]=r-1$; by definition, we see that $\tilde{r}_{N}(\tau)\leq\eta$, which contradicts \eqref{proof-8.7-1}. So the assumption is false. Hence $q>N$. We next show that if $q>N$, then $\tau$ is an overpartition in $\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N',q\eta+\alpha_1)$. Assume that $q>N$; then $q\geq N+1\geq N'$. So there exists a least integer $p$ with $0\leq p\leq N'$ such that $\overline{(q-p)\eta+\alpha_1}\geq\tilde{r}_{p+1}(\tau)+\eta$. Note that there are no overlined parts $\equiv\alpha_1\pmod\eta$ in $\tau$, so, trivially, all overlined parts $\equiv\alpha_1\pmod\eta$ in $\tau$ are less than $\overline{(q-p)\eta+\alpha_1}$. By definition, we have $f_{\tau}(0,\eta]=r-1$. If $q=N'$, then $q=N'=N+1$. From the proof above, we see that $\tilde{r}_{N+1}(\tau)=\eta$. Hence we arrive at $\tau\in\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N',q\eta+\alpha_1)$. Thus, we complete the proof. (2) Assume that $p$ is the least integer such that $0\leq p \leq N'$ and $\overline{(q-p)\eta+\alpha_1}\geq\tilde{r}_{p+1}(\tau)+\eta$. It is easy to see that such $p$ exists since $q\geq N'$ and $\overline{(q-N')\eta+\alpha_1}>0\geq\tilde{r}_{N'+1}(\tau)+\eta$, where $\tilde{r}_{N'+1}(\tau)=-\infty$. Note that there are no overlined parts congruent to $\alpha_1$ modulo $\eta$ in $\tau$, so the largest overlined part congruent to $\alpha_1$ modulo $\eta$ in $\tau$ is less than $\overline{(q-p)\eta+\alpha_1}$. By the definition of $\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N',q\eta+\alpha_1)$, it suffices to show that if $q=N'$ and $f_{\tau}(0,\eta]=r-1$, then $\tilde{r}_{N'}(\tau)\leq\eta$. Assume that $q=N'$ and $f_{\tau}(0,\eta]=r-1$. By Lemma \ref{dilation-1}, we have $f_{\pi}(0,\eta]=f_{\tau}(0,\eta]=r-1$. By definition, we see that $\overline{\alpha_\lambda}$ is a part of $\pi$. If $\overline{\alpha_\lambda}$ is a part of $\tau$, then since $\tau$ is not an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|N', N')$, we have $\tilde{r}_{N'}(\tau)\leq\eta$. If $\overline{\alpha_\lambda}$ is not a part of $\tau$, then by the construction of $\tau=A_{p\eta+\alpha_1}(\pi)$, we have $\tilde{\tilde{r}}_{p+1}(\pi)=\overline{\alpha_\lambda}$. By \eqref{proof-ss-1} in the proof of Lemma \ref{tauhh}, we see that \begin{equation*}\label{proof6.1-1} \tilde{r}_{N'}(\tau)\leq \tilde{r}_{p+1}(\tau)\leq \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1=\eta. \end{equation*} In either case, we obtain $\tilde{r}_{N'}(\tau)\leq\eta$ if $q=N'$ and $f_{\tau}(0,\eta]=r-1$. Hence, $\tau\in\overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r|N',q\eta+\alpha_1).$ Thus, we complete the proof. (3) From \eqref{proof-ss-1} in the proof of Lemma \ref{tauhh}, and noting that $p<p'$, we see that \begin{equation*} \tilde{\tilde{r}}_{p+1}(\pi)+\alpha_1\geq \tilde{r}_{p+1}(\tau)\geq\tilde{r}_{p'+1}(\tau)+\eta>\eta. \end{equation*} So $\tilde{\tilde{r}}_{p+1}(\pi)>\overline{\alpha_\lambda}$.
Then, by the construction of $\tau=A_{p\eta+\alpha_1}(\pi)$, we see that the part $\overline{\alpha_\lambda}$ in $\pi$ is not changed after applying $A_{p\eta+\alpha_1}$ to $\pi$. This implies that $\overline{\alpha_\lambda}$ is also a part of $\tau$. Thus, we complete the proof. \qed We are now in a position to give a proof of Theorem \ref{lambda-r}. It suffices to give a proof of Theorem \ref{lambdathm}. \noindent{\bf Proof of Theorem \ref{lambdathm}:} Let $\pi$ be an overpartition in $\mathcal{\overline{B}}_1(\alpha_{2},\ldots,\alpha_{\lambda-1};\eta,k-1,r-1)$, let $\delta^{(1)}$ be a partition into distinct parts congruent to $\alpha_1$ modulo $\eta$, and let $\delta^{(\lambda)}$ be a partition into distinct parts congruent to $\alpha_\lambda$ modulo $\eta$. We will define $\tau=\Theta(\delta^{(1)},\delta^{(\lambda)},\pi)$ such that $\tau$ is an overpartition in $\mathcal{\overline{B}}_1(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r)$ and $|\tau|=|\delta^{(1)}|+|\delta^{(\lambda)}|+|\pi|$. We first insert the parts of $\delta^{(\lambda)}$ as overlined parts into $\pi$, and denote the resulting overpartition by $\pi^{(0)}$. By definition, we see that $\pi^{(0)}$ is an overpartition in $\mathcal{\overline{B}}_1(\alpha_{2},\ldots, \alpha_{\lambda};\eta,k,r)$ such that \begin{equation}\label{thm6.1aaa} |\pi^{(0)}|=|\pi|+|\delta^{(\lambda)}|. \end{equation} Furthermore, there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in all $(k-1)$-sets of $\pi^{(0)}$, and $\overline{\alpha_\lambda}$ is a part of $\pi^{(0)}$ when $f_{\pi^{(0)}}(0,\eta]=r-1$. There are two cases: Case 1: If $\delta^{(1)}=\emptyset$, then set $\tau=\pi^{(0)}$. It is easy to check that $\tau$ is an overpartition in $\mathcal{\overline{B}}_1(\alpha_{1},\ldots, \alpha_{\lambda};\eta,k,r)$ with $|\tau|=|\delta^{(1)}|+|\delta^{(\lambda)}|+|\pi|$. Case 2: If $\delta^{(1)}\neq\emptyset$, then set $\delta^{(1)}=(q_1\eta+\alpha_1,\ldots,q_m\eta+\alpha_1)$, where $q_1>\cdots>q_m\geq0$. We will insert $q_{m}\eta+\alpha_1,\ldots,q_{1}\eta+\alpha_1$ into $\pi^{(0)}$ successively. There are three steps, and we will denote the resulting pairs by $({\rm Step}_i(\delta^{(1)}),{\rm Step}_i(\pi^{(0)}))$ after Step $i$, where $i=1,2,3$. {\bf Step 1}: Assume that there are $N(\pi^{(0)})$ $(k-1)$-marked parts in the reverse Gordon marking of $\pi^{(0)}$. If $q_m\geq N(\pi^{(0)})$, then set ${\rm Step}_1(\delta^{(1)})=\delta^{(1)}$ and ${\rm Step}_1(\pi^{(0)})=\pi^{(0)}$, and go to Step 2 directly. Otherwise, we apply the $(k-1)$-addition to insert some parts of $\delta^{(1)}$ from smallest to largest into $\pi^{(0)}$ successively and denote the intermediate overpartitions by $\pi^{(1)},\pi^{(2)},\ldots$. Assume that there are $N(\pi^{(i)})$ $(k-1)$-marked parts in the reverse Gordon marking of $\pi^{(i)}$. If $0\leq q_m<N(\pi^{(0)})$, we set $b=0$ and repeat the following process until $q_{m-b}\geq N(\pi^{(b)})$. \begin{itemize} \item Step 1-1: When $b=0$, by the definition of $\pi^{(0)}$, we see that there is a part congruent to $\alpha_\lambda$ modulo $\eta$ in any $(k-1)$-set of $\pi^{(0)}$, so \[\pi^{(0)}\in\overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|N(\pi^{(0)}), q_m).\] When $b\geq1$, note that \[ \pi^{(b)}\in\overline{\mathcal{B}}_\eta(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|N(\pi^{(b-1)}), q_{m-b+1}).
\] Since $N(\pi^{(b)})>q_{m-b}>q_{m-b+1}$, by Lemma \ref{tauhh} (1), we see that \[\pi^{(b)}\in\overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda}; \eta,k,r|N(\pi^{(b)}),q_{m-b}).\] \item Step 1-2: Hence we can apply the $(k-1)$-addition to insert $q_{m-b}\eta+\alpha_1$ into $\pi^{(b)}$ to generate non-degenerate $(k-1)$-sets. More precisely, apply $A_{q_{m-b}\eta+\alpha_1}$ on $\pi^{(b)}$ to obtain $\pi^{(b+1)}$, that is, \[\pi^{(b+1)}=A_{q_{m-b}\eta+\alpha_1}(\pi^{(b)}).\] By Lemma \ref{dilation-1}, we see that \begin{equation*} \pi^{(b+1)}\in\overline{\mathcal{B}}_\eta(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|N(\pi^{(b)}),q_{m-b}) \end{equation*} and \[|\pi^{(b+1)}|=|\pi^{(b)}|+q_{m-b}\eta+\alpha_1.\] \item Step 1-3: Replace $b$ by $b+1$. \end{itemize} Suppose that the process stops with $q_j\geq N(\pi^{(m-j)})$; then set ${\rm Step}_1(\pi^{(0)})=\pi^{(m-j)}$ and $ {\rm Step}_1(\delta^{(1)})=(q_1\eta+\alpha_1,\ldots,q_j\eta +\alpha_1) $ and go to Step 2. Obviously, \[{\rm Step}_1(\pi^{(0)})\in\overline{\mathcal{B}}_\eta(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|N(\pi^{(m-j-1)}),q_{j+1})\] and \begin{equation}\label{sum-m-t} |{\rm Step}_1(\pi^{(0)})|=|\pi^{(0)}|+(q_{m}\eta+\alpha_1)+\cdots+(q_{j+1}\eta+\alpha_1). \end{equation} {\bf Step 2}: Denote ${\rm Step}_1(\pi^{(0)})$ by $\sigma$, and assume that there are $N(\sigma)$ $(k-1)$-marked parts in the reverse Gordon marking of $\sigma$. Recall that \[{\rm Step}_1(\delta^{(1)})=(q_1\eta+\alpha_1,\ldots,q_j\eta +\alpha_1) \] and $q_j\geq N(\sigma)$. We consider the following two cases: Case 2-1: If $\sigma$ is not an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|q_j, q_j),$ then set ${\rm Step}_2(\pi^{(0)})=\sigma$ and $ {\rm Step}_2(\delta^{(1)})=(q_1\eta+\alpha_1,\ldots,q_j\eta +\alpha_1) $ and go to Step 3 directly. Case 2-2: If $\sigma$ is an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|q_j, q_j),$ then we can apply the $(k-1)$-addition to insert $q_{j}\eta+\alpha_1$ into $\sigma$. More precisely, apply the map $A_{q_{j}\eta+\alpha_1}$ to $\sigma$ to obtain ${\rm Step}_2(\pi^{(0)})$ and set ${\rm Step}_2(\delta^{(1)})=(q_1\eta+\alpha_1,\ldots,q_{j-1}\eta +\alpha_{1})$. Namely, \[{\rm Step}_2(\pi^{(0)})=A_{q_{j}\eta+\alpha_1}(\sigma).\] By Lemma \ref{dilation-1}, we see that \begin{equation*}\label{proof6.1-2} {\rm Step}_2(\pi^{(0)})\in\overline{\mathcal{B}}_\eta(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|q_j,q_{j}) \end{equation*} and \begin{equation}\label{proof6.1-5}|{\rm Step}_2(\pi^{(0)})|=|\sigma|+q_{j}\eta+\alpha_1=|{\rm Step}_1(\pi^{(0)})|+q_{j}\eta+\alpha_1.\end{equation} {\bf Step 3}: Denote ${\rm Step}_2(\pi^{(0)})$ by $\varsigma$, and assume that there are $N(\varsigma)$ $(k-1)$-marked parts in the reverse Gordon marking of $\varsigma$. Recall that \[{\rm Step}_2(\delta^{(1)})=(q_1\eta+\alpha_1,\ldots,q_t\eta +\alpha_1) ,\] where $t=j$ if $\varsigma=\sigma$, or $t=j-1$ if $\varsigma\neq \sigma$. We first show that the $(k-1)$-insertion $I_{q_{t}\eta+\alpha_1}$ can be applied to $\varsigma$, that is, $\varsigma\in\overline{\mathcal{B}}_<(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r|N(\varsigma),q_{t}\eta+\alpha_1)$. We consider three cases. (1) If $t=j-1$, then $\varsigma=A_{q_j\eta+\alpha_1}(\sigma)$, where $\sigma \in \overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|q_j, q_j)$.
Since $q_{j-1}>q_{j}$, by Lemma \ref{sskr1} (1), we see that \[ \varsigma \in \overline{\mathcal{B}}_<(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r|N(\varsigma),q_{j-1}\eta+\alpha_1).\] (2) If $t=j=m$, then $\varsigma=\sigma=\pi^{(0)}$ is an overpartition in ${\overline{\mathcal{B}}}_1(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r)$. Let $\tilde{r}_{1}(\varsigma)>\cdots>\tilde{r}_{N(\varsigma)}(\varsigma)$ be the $(k-1)$-marked parts in the reverse Gordon marking of $\varsigma$. Assume that $p$ is the least integer such that $0\leq p \leq N(\varsigma)$ and $\overline{(q_m-p)\eta+\alpha_1}\geq\tilde{r}_{p+1}(\varsigma)+\eta$. It is easy to see that such $p$ exists since $q_m\geq N(\pi^{(0)})=N(\varsigma)$ and $\overline{(q_m-N(\varsigma))\eta+\alpha_1}>0\geq\tilde{r}_{N(\varsigma)+1}(\varsigma)+\eta$, where $\tilde{r}_{N(\varsigma)+1}(\varsigma)=-\infty$. Note that there are no overlined parts congruent to $\alpha_1$ modulo $\eta$ in $\varsigma$, so the largest overlined part congruent to $\alpha_1$ modulo $\eta$ in $\varsigma$ is less than $\overline{(q_m-p)\eta+\alpha_1}$. By definition, $\varsigma$ is not an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|q_m, q_m)$. Hence if $q_m=N(\varsigma)$ and $f_{\varsigma}(0,\eta]=f_{\pi^{(0)}}(0,\eta]=r-1$, then $\overline{\alpha_\lambda}$ is a part of $\pi^{(0)}=\varsigma$, and so $\tilde{r}_{N(\varsigma)}(\varsigma)\leq\eta$. Thus, by definition, we see that \[ \varsigma \in \overline{\mathcal{B}}_<(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r|N(\varsigma),q_{m}\eta+\alpha_1).\] (3) If $t=j<m$, then we have $\varsigma=\sigma=\pi^{(m-j)}=A_{q_{j+1}\eta+\alpha_1}(\pi^{(m-j-1)})$, where $\pi^{(m-j-1)} \in \overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|N(\pi^{(m-j-1)}), q_{j+1}).$ By definition, we see that $\varsigma$ is not an overpartition in $\overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|q_j, q_j)$. If $f_{\pi^{(m-j-1)}}(0,\eta]=r-1$, then by Lemma \ref{dilation-1}, we see that $f_{\pi^{(0)}}(0,\eta]=f_{\pi^{(1)}}(0,\eta]=\cdots=f_{\pi^{(m-j-1)}}(0,\eta]=r-1$. So, in this case, $\overline{\alpha_\lambda}$ is a part of $\pi^{(0)}$. By Lemma \ref{sskr1} (3), we see that $\overline{\alpha_\lambda}$ is also a part of $\pi^{(b)}$ for $0\leq b\leq m-j-1$. Thus, $\overline{\alpha_\lambda}$ is also a part of $\pi^{(m-j-1)}$ when $f_{\pi^{(m-j-1)}}(0,\eta]=r-1$. Since $q_{j}\geq N(\pi^{(m-j)})=N(\varsigma)$, by Lemma \ref{sskr1} (2), we see that \[ \varsigma \in \overline{\mathcal{B}}_<(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r|N(\varsigma),q_{j}\eta+\alpha_1).\] Therefore, we arrive at \begin{equation*}\label{proof6.1-4} \varsigma \in \overline{\mathcal{B}}_<(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r|N(\varsigma),q_{t}\eta+\alpha_1). \end{equation*} Hence we can apply the $(k-1)$-insertion with $a=\alpha_1$ in Theorem \ref{deltagammathmbb} to insert $q_{t}\eta+\alpha_1,\ldots,q_1\eta+\alpha_1$ into $\varsigma$ in succession to get $\tau$, where the intermediate overpartitions are denoted by $\varsigma^{(0)},\ldots,\varsigma^{(t)}$ with $\varsigma^{(0)}=\varsigma$ and $\varsigma^{(t)}={\rm Step}_3(\pi^{(0)})$. Assume that there are $N(\varsigma^{(i)})$ $(k-1)$-marked parts in the reverse Gordon marking of $\varsigma^{(i)}$.
Let $b=0$; we repeat the following process until $b=t$: \begin{itemize} \item Step 3-1: We wish to insert $q_{t-b}\eta+\alpha_1$ into $\varsigma^{(b)}$ to generate an overlined part congruent to $\alpha_1$ modulo $\eta$. More precisely, apply the map $I_{q_{t-b}\eta+\alpha_1}$ to $\varsigma^{(b)}$ to obtain $\varsigma^{(b+1)}$, that is, \[\varsigma^{(b+1)}=I_{q_{t-b}\eta+\alpha_1}(\varsigma^{(b)}).\] By Theorem \ref{deltagammathmbb}, we see that \begin{equation}\label{insertbbb-n} \varsigma^{(b+1)} \in \overline{\mathcal{B}}_=(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r|N(\varsigma^{(b)}),q_{t-b}\eta+\alpha_1) \end{equation} and \[ |\varsigma^{(b+1)}|=|\varsigma^{(b)}|+q_{t-b}\eta+\alpha_1.\] \item Step 3-2: When $b=t-1$, do nothing and go to Step 3-3 directly. When $0\leq b<t-1$, since $q_{t-b-1}>q_{t-b}$, by \eqref{insertbbb-n} and Lemma \ref{ssins} (1), we see that \[ \varsigma^{(b+1)} \in \overline{\mathcal{B}}_<(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r|N(\varsigma^{(b+1)}),q_{t-b-1}\eta+\alpha_1).\] \item Step 3-3: Replace $b$ by $b+1$. \end{itemize} Hence when $b=t$, we see that \[ {\rm Step}_3(\pi^{(0)})=\varsigma^{(t)} \in \overline{\mathcal{B}}_=(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r|N(\varsigma^{(t-1)}),q_{1}\eta+\alpha_1)\] and \begin{equation} \label{wei-par2-n} |{\rm Step}_3(\pi^{(0)})|= |\varsigma^{(t)}|=|{\rm Step}_2(\pi^{(0)})|+(q_{t}\eta+\alpha_1) +\cdots+(q_1\eta+\alpha_1). \end{equation} Set ${\rm Step}_3(\delta^{(1)})=\emptyset$ and $\tau={\rm Step}_3(\pi^{(0)})$. From the construction of the $(k-1)$-insertion with $a=\alpha_1$ in Theorem \ref{deltagammathmbb}, we see that $\tau$ is an overpartition in $\overline{\mathcal{B}}_1(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r)$ and there are $t$ overlined parts $\equiv\alpha_1\pmod\eta$ in $\tau$. Furthermore, combining \eqref{sum-m-t}, \eqref{proof6.1-5} and \eqref{wei-par2-n} with \eqref{thm6.1aaa}, we see that $|\tau|=|\delta^{(1)}|+|\delta^{(\lambda)}|+|\pi|$. Therefore $\Theta$ is well-defined. To prove that $\Theta$ is a bijection, we shall describe the inverse map $\Upsilon$ of $\Theta$. Let $\tau$ be an overpartition in $\mathcal{\overline{B}}_1(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r)$. We shall define $\Upsilon(\tau)=(\delta^{(1)},\delta^{(\lambda)}, \pi)$ such that $\delta^{(1)}$ is a partition into distinct parts congruent to $\alpha_1$ modulo $\eta$, $\delta^{(\lambda)}$ is a partition into distinct parts congruent to $\alpha_\lambda$ modulo $\eta$ and $\pi$ is an overpartition in $\mathcal{\overline{B}}_1(\alpha_{2},\ldots,\alpha_{\lambda-1};\eta,k-1,r-1)$. Furthermore, $|\tau|=|\delta^{(1)}|+|\delta^{(\lambda)}|+|\pi|$. There are the following two cases: Case 1: If there are no overlined parts $\equiv\alpha_1\pmod\eta$ in $\tau$, no non-degenerate parts in $\tau$, and, when $r<k$, no non-degenerate $(r-1)$-part in $\tau$, then set $\delta^{(1)}=\emptyset$ and split the overpartition $\tau$ into $\overline{\delta}^{(\lambda)}$ and $\pi$, where $\overline{\delta}^{(\lambda)}$ consists of the parts congruent to $\alpha_{\lambda}$ modulo $\eta$ in $\tau$ and $\pi$ consists of the parts $\not\equiv\alpha_{\lambda}\pmod\eta$ in $\tau$. Finally, change the parts in $\overline{\delta}^{(\lambda)}$ to non-overlined parts to get ${\delta}^{(\lambda)}$. It is not difficult to check that $\pi$ is an overpartition in $\mathcal{\overline{B}}_1(\alpha_{2},\ldots,\alpha_{\lambda-1};\eta,k-1,r-1)$ and $|\tau|=|\delta^{(1)}|+|\delta^{(\lambda)}|+|\pi|$.
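As an illustrative aside (not part of the proof), the splitting in Case 1 is a purely mechanical operation: the parts of $\tau$ are separated according to their residue modulo $\eta$, and the overlines on the parts congruent to $\alpha_\lambda$ are removed. The following minimal Python sketch encodes parts as (size, overlined) pairs; this encoding, the function name and the toy data are ours, purely for illustration.
\begin{verbatim}
def split_case1(tau, eta, alpha_lam):
    # separate the parts congruent to alpha_lam modulo eta ...
    delta_bar = [p for p in tau if p[0] % eta == alpha_lam % eta]
    pi = [p for p in tau if p[0] % eta != alpha_lam % eta]
    # ... and drop the overlines to obtain delta^(lambda)
    delta_lam = sorted((size for size, _ in delta_bar), reverse=True)
    return delta_lam, pi

tau = [(49, True), (39, True), (30, False), (29, True), (19, True)]
delta_lam, pi = split_case1(tau, eta=10, alpha_lam=9)
# the splitting preserves the total weight
assert sum(delta_lam) + sum(s for s, _ in pi) == sum(s for s, _ in tau)
\end{verbatim}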
Case 2: If there are overlined parts $\equiv\alpha_1\pmod\eta$ in $\tau$ or there are non-degenerate parts in $\tau$ or there is a non-degenerate $(r-1)$-part in $\tau$, then we iteratively apply the $(k-1)$-separation with $a=\alpha_1$ in Theorem \ref{deltagammathmbb} and the $(k-1)$-subtraction in Theorem \ref{Dilation-reduction} to $\tau$. There are three steps. We denote the resulting pairs by $( {\rm Step}_i(\delta^{(1)}), {\rm Step}_i(\tau))$ after Step $i$, where $i=1,2,3$. {\bf Step 1}: There are the following two cases: Case 1-1: If there are no overlined parts $\equiv\alpha_1\pmod\eta$ in $\tau$, then set $ {\rm Step}_1(\tau)=\tau$ and ${\rm Step}_1(\delta^{(1)})=\emptyset$, and go to Step 2 directly. Case 1-2: Otherwise, assume that there are $t\geq 1$ overlined parts $\equiv\alpha_1\pmod\eta$ in $\tau$, denoted by $\overline{\eta s_1+\alpha_1}>\overline{\eta s_2+\alpha_1}>\cdots>\overline{\eta s_{t}+\alpha_1}$. We shall apply the $(k-1)$-separation with $a=\alpha_1$ defined in Theorem \ref{deltagammathmbb} to remove the $t$ overlined parts $\equiv\alpha_1\pmod\eta$ from $\tau$ and denote the intermediate pairs by $(\gamma^{(0)}, \tau^{(0)}),\ldots,(\gamma^{(t)}, \tau^{(t)})$ with $\gamma^{(0)}=\emptyset$, $\tau^{(0)}=\tau$, ${\rm Step}_1(\delta^{(1)})=\gamma^{(t)}$ and ${\rm Step}_1(\tau)=\tau^{(t)}$. Assume that there are $N(\tau^{(i)})$ $(k-1)$-marked parts in the reverse Gordon marking of $\tau^{(i)}$. Let $b=0$; we repeat the following process until $b=t$. \begin{itemize} \item Step 1-1: We wish to remove $\overline{\eta s_{b+1}+\alpha_1}$ from $\tau^{(b)}$. More precisely, let $\underline{\tau}^{(b)}$ be the underlying overpartition obtained from $\tau^{(b)}$ by removing $\overline{\eta s_{b+1}+\alpha_1}$. Assume that there are $N(\underline{\tau}^{(b)})$ $(k-1)$-marked parts in the Gordon marking of $\underline{\tau}^{(b)}$, denoted by $\tilde{g}_1(\underline{\tau}^{(b)})>\cdots>\tilde{g}_{N(\underline{\tau}^{(b)})} (\underline{\tau}^{(b)})$, and let $p_{b+1}$ be the least integer such that $\overline{\eta s_{b+1}+\alpha_1}\geq \tilde{g}_{p_{b+1}+1}(\underline{\tau}^{(b)})$. By definition, we see that \[\tau^{(b)} \in \overline{\mathcal{B}}_=(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r| N(\underline{\tau}^{(b)}),(s_{b+1}+p_{b+1})\eta+\alpha_1).\] Hence we can apply the $(k-1)$-separation ${SP}_{(s_{b+1}+p_{b+1})\eta+\alpha_1}$ to $\tau^{(b)}$ to get $\tau^{(b+1)}$, that is, \[\tau^{(b+1)}={SP}_{(s_{b+1}+p_{b+1})\eta +\alpha_1}(\tau^{(b)}).\] By Theorem \ref{deltagammathmbb}, we see that $N(\underline{\tau}^{(b)})=N(\tau^{(b+1)})$, \[\tau^{(b+1)}\in \overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r| N(\tau^{(b+1)}),(s_{b+1}+p_{b+1})\eta+\alpha_1),\] and \[ \left|\tau^{(b+1)}\right|=|\tau^{(b)}|-((s_{b+1}+p_{b+1})\eta+\alpha_1).\] \item Step 1-2: Insert $(s_{b+1}+p_{b+1})\eta+\alpha_1$ into $\gamma^{(b)}$ to generate a new partition $\gamma^{(b+1)}$. \item Step 1-3: Replace $b$ by $b+1$. \end{itemize} From the construction of ${SP}_{(s_{b}+p_{b})\eta +\alpha_1}$, it is easy to see that there are $t-b$ overlined parts $\equiv\alpha_1\pmod\eta$ in $\tau^{(b)}$, the largest of which is $\overline{\eta s_{b+1}+\alpha_1}$ when $b<t$. When $b=t$, \[ {\rm Step}_1(\delta^{(1)})=\gamma^{(t)}=((s_1+p_1)\eta+\alpha_1, \ldots,(s_t+p_t)\eta+\alpha_1),\] and \begin{equation*} {\rm Step}_1(\tau)=\tau^{(t)}\in \overline{\mathcal{B}}_<(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r| N(\tau^{(t)}),(s_{t}+p_{t})\eta+\alpha_1).
\end{equation*} It is easy to check that there are no parts $\equiv\alpha_1\pmod\eta$ in ${\rm Step}_1(\tau)$ and \begin{equation}\label{weight-rel} |{\rm Step}_1(\tau)|+ |{\rm Step}_1(\delta^{(1)})|= |\tau|. \end{equation} Furthermore, for $0\leq b< t-1$, recall that $p_{b+2}$ is the least integer such that $\overline{\eta s_{b+2}+\alpha_1}\geq \tilde{g}_{p_{b+2}+1}(\underline{\tau}^{(b+1)})$; by definition, we see that \[\tau^{(b+1)} \in \overline{\mathcal{B}}_=(\alpha_1,\ldots,\alpha_\lambda;\eta,k,r |N(\tau^{(b+2)}),(s_{b+2}+p_{b+2})\eta+\alpha_1).\] Hence, by Lemma \ref{ssins} (2), we see that for $0\leq b< t-1$, \begin{equation}\label{ttt1} s_{b+1}+p_{b+1}>s_{b+2}+p_{b+2}\geq N(\tau^{(b+2)}). \end{equation} This implies that ${\rm Step}_1(\delta^{(1)})$ is a partition into distinct parts congruent to $\alpha_1$ modulo $\eta$. {\bf Step 2}: There are the following two cases: Case 2-1: If $r=k$, or if $r<k$ and there is no non-degenerate $(r-1)$-part in ${\rm Step}_1(\tau)$, then set ${\rm Step}_2(\tau)={\rm Step}_1(\tau)$ and $ {\rm Step}_2(\delta^{(1)})={\rm Step}_1(\delta^{(1)}) $ and go to Step 3 directly. Case 2-2: Otherwise, denote ${\rm Step}_1(\tau)$ by $\varsigma$ and assume that there are $N({\varsigma})$ $(k-1)$-marked parts in the Gordon marking of $\varsigma$. From Step 1, we see that \begin{equation}\label{thm6-1aa} {\rm Step}_1(\tau)=\varsigma\in\overline{\mathcal{B}}_<(\alpha_{1},\ldots,\alpha_{\lambda};\eta,k,r|N({\varsigma}), (s_t+p_t)\eta+\alpha_1). \end{equation} Assume that there are $N(\hat{\varsigma})$ $(k-1)$-marked parts in the Gordon marking of the degenerate overpartition $\hat{\varsigma}$ of $\varsigma$. Set $q_0=N(\hat{\varsigma})$; by definition, we see that \begin{equation*} {\rm Step}_1(\tau)=\varsigma\in\overline{\mathcal{B}}_\eta(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|q_0, q_0). \end{equation*} Then we can apply the $(k-1)$-subtraction $S_{q_{0}\eta+\alpha_1}$ to $\varsigma$ to obtain ${\rm Step}_2(\tau)$. More precisely, \[{\rm Step}_2(\tau)=S_{q_{0}\eta+\alpha_1}(\varsigma),\] and set \[{\rm Step}_2(\delta^{(1)})=((s_1+p_1)\eta+\alpha_1, \ldots,(s_t+p_t)\eta+\alpha_1,q_{0}\eta+\alpha_1).\] By Theorem \ref{Dilation-reduction}, we see that \begin{equation}\label{proofi6.1-2} {\rm Step}_2(\tau)\in\overline{\mathcal{B}}_\lambda(\alpha_{2},\ldots,\alpha_{\lambda};\eta,k,r|q_0,q_0) \end{equation} and \begin{equation}\label{proofi6.1-5}|{\rm Step}_2(\tau)|=|\varsigma|-(q_{0}\eta+\alpha_1)=|{\rm Step}_1(\tau)|-(q_{0}\eta+\alpha_1).\end{equation} Combining \eqref{thm6-1aa} and \eqref{proofi6.1-2}, and by Lemma \ref{sskr1} (1), we see that $s_t+p_t>q_0.$ Hence ${\rm Step}_2(\delta^{(1)})$ is a partition into distinct parts congruent to $\alpha_1$ modulo $\eta$. {\bf Step 3}: There are the following two cases: Case 3-1: If there are no non-degenerate parts in ${\rm Step}_2(\tau)$, we shall do nothing. In this case, we set ${\rm Step}_3(\delta^{(1)})={\rm Step}_2(\delta^{(1)})$ and ${\rm Step}_3(\tau)={\rm Step}_2(\tau)$. Setting $\delta^{(1)}={\rm Step}_3(\delta^{(1)})$, it is easy to see that $\delta^{(1)}\in\mathcal{D}_{\alpha_1}$. Now, we split the overpartition ${\rm Step}_3(\tau)$ into $\overline{\delta}^{(\lambda)}$ and $\pi$, where $\overline{\delta}^{(\lambda)}$ consists of the parts congruent to $\alpha_{\lambda}$ modulo $\eta$ in ${\rm Step}_3(\tau)$ and $\pi$ consists of the parts $\not\equiv\alpha_{\lambda}\pmod\eta$ in ${\rm Step}_3(\tau)$.
Finally, change the parts in $\overline{\delta}^{(\lambda)}$ to non-overlined parts to get ${\delta}^{(\lambda)}$. It is easy to see that $\delta^{(\lambda)}\in\mathcal{D}_{\alpha_\lambda}$ and $\pi$ is an overpartition in $\mathcal{\overline{B}}_1(\alpha_{2},\ldots,\alpha_{\lambda-1};\eta,k-1,r-1)$. Furthermore, by \eqref{weight-rel} and \eqref{proofi6.1-5}, it is easy to see that $|\tau|=|\delta^{(1)}|+|\delta^{(\lambda)}|+|\pi|$. Case 3-2: If there are non-degenerate parts in ${\rm Step}_2(\tau)$, then we iteratively apply the $(k-1)$-subtraction defined in Theorem \ref{Dilation-reduction} to ${\rm Step}_2(\tau)$ until there are no non-degenerate parts in the resulting overpartition. Assume that the $(k-1)$-subtraction defined in Theorem \ref{Dilation-reduction} can be applied $j$ times to ${\rm Step}_2(\tau)$ so that no non-degenerate parts remain in the resulting overpartition, and denote the intermediate pairs by $(\zeta^{(0)},\sigma^{(0)}),\ldots, (\zeta^{(j)},\sigma^{(j)})$ with $\zeta^{(0)}={\rm Step}_2(\delta^{(1)})$, $\sigma^{(0)}= {\rm Step}_2(\tau) $, $\zeta^{(j)}={\rm Step}_3(\delta^{(1)})$ and $\sigma^{(j)}={\rm Step}_3(\tau)$. Assume that there are $N(\sigma^{(i)})$ $(k-1)$-marked parts in the Gordon marking of $\sigma^{(i)}$. Let $b=0$; we repeat the following process until $b=j$. \begin{itemize} \item Step 3-1: Assume that there are $N(\hat{\sigma}^{(b)})$ $(k-1)$-marked parts in the Gordon marking of the degenerate overpartition $\hat{\sigma}^{(b)}$ of $\sigma^{(b)}$, denoted by $\tilde{g}_1(\hat{\sigma}^{(b)})> \cdots>\tilde{g}_{N(\hat{\sigma}^{(b)})} (\hat{\sigma}^{(b)})$, that $\{\sigma_m^{(b)}\}_{k-1}$ is the smallest non-degenerate $(k-1)$-set of ${\sigma}^{(b)}$, and that $q_{b+1}$ is the largest integer such that $\tilde{g}_{q_{b+1}}(\hat{\sigma}^{(b)})> \sigma_m^{(b)}+\eta$. By definition, we see that \[\sigma^{(b)}\in \overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N(\hat{\sigma}^{(b)}),q_{b+1}).\] Hence we can apply the $(k-1)$-subtraction $S_{q_{b+1}\eta+\alpha_1}$ to $\sigma^{(b)}$ to get a new overpartition $\sigma^{(b+1)}$, that is, \[\sigma^{(b+1)}=S_{q_{b+1}\eta+\alpha_1}(\sigma^{(b)}).\] By Theorem \ref{Dilation-reduction}, we see that $N(\sigma^{(b+1)})=N(\hat{\sigma}^{(b)})$, \[ \sigma^{(b+1)}\in \overline{\mathcal{B}}_{\lambda}(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N(\sigma^{(b+1)}),q_{b+1}), \] and \begin{equation}\label{aaaa} |\sigma^{(b+1)}|=|\sigma^{(b)}|-(q_{b+1}\eta+\alpha_1). \end{equation} \item Step 3-2: Insert $q_{b+1}\eta+\alpha_1$ into $\zeta^{(b)}$ to generate a new partition $\zeta^{(b+1)}$. \item Step 3-3: Replace $b$ by $b+1$. \end{itemize} From the above construction, we see that when $b=j$, \begin{equation*} {\rm Step}_3(\tau)=\sigma^{(j)}\in \overline{\mathcal{B}}_\lambda(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r| N(\sigma^{(j)}),q_j). \end{equation*} Furthermore, there are no non-degenerate parts in $\sigma^{(j)}$. By \eqref{aaaa}, we see that \begin{equation}\label{sum} |{\rm Step}_3(\tau)|=|\sigma^{(j)}|=|{\rm Step}_2(\tau)|-(q_1\eta+\alpha_1)-\cdots-(q_j\eta+\alpha_1). \end{equation} Note that for $0\leq b<j-1$, \[\sigma^{(b)}\in \overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda; \eta,k,r|N({\sigma}^{(b+1)}),q_{b+1}),\] and \[ \sigma^{(b+1)}=S_{q_{b+1}\eta+\alpha_1}(\sigma^{(b)})\in \overline{\mathcal{B}}_\eta(\alpha_2,\ldots,\alpha_\lambda;\eta,k,r|N(\sigma^{(b+2)}),q_{b+2}). \] By Lemma \ref{tauhh} (2), we have \begin{equation}\label{ttt2} q_{b+2}<q_{b+1}<N(\sigma^{(b+1)}).
\end{equation} If ${\rm Step}_2(\delta^{(1)})={\rm Step}_1(\delta^{(1)})=((s_1+p_1)\eta+\alpha_1, \ldots,(s_t+p_t)\eta+\alpha_1)$, then \[{\rm Step}_3(\delta^{(1)})=((s_1+p_1)\eta+\alpha_1, \ldots,(s_t+p_t)\eta+\alpha_1,q_1\eta+\alpha_1, \ldots,q_j\eta+\alpha_1),\] and $ {\rm Step}_1(\tau)=\tau^{(t)}=\sigma^{(0)}.$ By \eqref{ttt1}, we see that $s_t+p_t\geq N(\tau^{(t)})$. By \eqref{ttt2}, we see that $q_{1}<N(\sigma^{(1)})=N(\hat{\sigma}^{(0)})\leq N( {\sigma}^{(0)})$. It follows that $s_t+p_t>q_1 $. Hence, by \eqref{ttt2}, we arrive at ${\rm Step}_3(\delta^{(1)}) \in\mathcal{D}_{\alpha_1}.$ If ${\rm Step}_2(\delta^{(1)})\neq{\rm Step}_1(\delta^{(1)})$, then ${\rm Step}_2(\delta^{(1)})=((s_1+p_1)\eta+\alpha_1, \ldots,(s_t+p_t)\eta+\alpha_1,q_0\eta+\alpha_1).$ Hence \[{\rm Step}_3(\delta^{(1)})=((s_1+p_1)\eta+\alpha_1, \ldots,(s_t+p_t)\eta+\alpha_1,q_0\eta+\alpha_1, q_1\eta+\alpha_1, \ldots,q_j\eta+\alpha_1).\] Since $q_0=N({\sigma^{(0)}})>q_1$, by \eqref{ttt2} we see that ${\rm Step}_3(\delta^{(1)})\in\mathcal{D}_{\alpha_1}$. Setting $\delta^{(1)}={\rm Step}_3(\delta^{(1)})$, we have shown that $\delta^{(1)}\in\mathcal{D}_{\alpha_1}$. Now, we split the overpartition ${\rm Step}_3(\tau)$ into $\overline{\delta}^{(\lambda)}$ and $\pi$, where $\overline{\delta}^{(\lambda)}$ consists of the parts congruent to $\alpha_{\lambda}$ modulo $\eta$ in ${\rm Step}_3(\tau)$ and $\pi$ consists of the parts $\not\equiv\alpha_{\lambda}\pmod\eta$ in ${\rm Step}_3(\tau)$. Finally, change the parts in $\overline{\delta}^{(\lambda)}$ to non-overlined parts to get ${\delta}^{(\lambda)}$. We can see that $\delta^{(\lambda)}\in\mathcal{D}_{\alpha_\lambda}$ and $\pi$ is an overpartition in $\mathcal{\overline{B}}_1(\alpha_{2},\ldots,\alpha_{\lambda-1};\eta,k-1,r-1)$. By \eqref{weight-rel}, \eqref{proofi6.1-5} and \eqref{sum}, it is easy to check that $|\tau|=|\delta^{(1)}|+|\delta^{(\lambda)}|+|\pi|$. Thus, we complete the proof of Theorem \ref{lambdathm}. \qed \section{Example} We provide an example to illustrate the bijection $\Theta$ in Theorem \ref{lambdathm} and its inverse $\Upsilon$. {\noindent \bf The example for the map $\Theta$}: Assume that $k=6$, $r=4$, $\lambda=4$, $\eta=10$, $\alpha_1=1$, $\alpha_2=4$, $\alpha_3=6$ and $\alpha_4=9$. Let $\delta^{(1)}=(61,51,31,21,1)$ and $\delta^{(4)}=(49,39,29,19,9)$ and let $\pi=(50,50,50,\overline{36},\overline{34}, \overline{30},30,\overline{26},\overline{24}, {20},\overline{10},10,\overline{6})$ be an overpartition in $\overline{\mathcal{B}}_1(4,6;10,5,3)$. We unite $\pi$ and $\delta^{(4)}$, inserting the parts of $\delta^{(4)}$ into $\pi$ as overlined parts, to obtain $\pi^{(0)}$, whose reverse Gordon marking is given below. \[\begin{split}RG(\pi^{(0)})=&({50}_1,{50}_2,{50}_3, \overline{49}_4,\overbrace{{\color{blue}\overline{39}_1, \overline{36}_2,\overline{34}_3,\overline{30}_4},{\color{red}{30}_5}}^{{\color{red}\{30\}_5}},\overline{29}_1,\overline{26}_2,\overline{24}_3,{20}_4,\\ &\ \overline{19}_1,{\overline{10}_2,{10}_3,\overline{9}_1,\overline{6}_4}).\end{split}\] We see that $\pi^{(0)}$ is an overpartition in $\overline{\mathcal{B}}_1(4,6,9;10,6,4)$ and there are no non-degenerate $5$-sets in $\pi^{(0)}$. We wish to insert all parts of \[\delta^{(1)}=(6\times 10+1,5\times 10+1,3\times 10+1,2\times 10+1,0\times 10+1),\] into $\pi^{(0)}$, where $q_1=6>q_2=5>q_3=3>q_4=2>q_5=0$. {\bf Step 1:} Note that $N(\pi^{(0)})=1$ and $q_5=0$, so we can apply the $5$-addition to insert $1$ into $\pi^{(0)}$.
It is easy to check that \[\pi^{(0)}\in\overline{\mathcal{B}}_4(4,6,9;10,6,4|1,0).\] Apply the $5$-addition $A_{1}$ to $\pi^{(0)}$ to get $\pi^{(1)}$, namely, change $\overline{39}$ to $40$. \[\begin{split}RG(\pi^{(1)})=&(\overbrace{{\color{blue}{50}_1,{50}_2,{50}_3,\overline{49}_4},{\color{red}{40}_5}}^{{\color{red}\{40\}_5}},\overbrace{{\color{blue}\overline{36}_1, \overline{34}_2,\overline{30}_3,{30}_4},{\color{red}\overline{29}_5}}^{{\color{red}\{\overline{29}\}_5}},\overline{26}_1,\overline{24}_2,{20}_3,\\ &\ \overline{19}_4,{\overline{10}_1,{10}_2, \overline{9}_3,\overline{6}_4}).\end{split} \] By Lemma \ref{dilation-1}, we see that \begin{equation*} \pi^{(1)}\in\overline{\mathcal{B}}_{10} (4,6,9;10,6,4| 1,0). \end{equation*} Note that $N(\pi^{(1)})=2$ and $q_4=2$, so ${\rm Step}_1(\pi^{(0)})=\pi^{(1)}$ and \[{\rm Step}_1(\delta^{(1)})=(61,51,31,21).\] {\bf Step 2:} Denote ${\rm Step}_1(\pi^{(0)})$ by $\sigma$; there are two $5$-marked parts in the reverse Gordon marking of $\sigma$, which are $\tilde{r}_1(\sigma)=40$ and $\tilde{r}_2(\sigma)=\overline{29}$. Since $f_{\sigma}(0,10]=3$, $\tilde{r}_2(\sigma)=\overline{29}>10$ and $\overline{9}$ is a part of $\sigma$, by definition we see that \[\sigma\in\overline{\mathcal{B}}_4(4,6,9;10,6,4|2,2).\] Apply the $5$-addition $A_{21}$ to $\sigma$ to get ${\rm Step}_2(\pi^{(0)})$, namely, first change $\overline{29}$ and ${40}$ in $\sigma$ to $\overline{39}$ and ${50}$ respectively, and then change $\overline{9}$ in $\sigma$ to ${10}$. \[\begin{split}RG({\rm Step}_2(\pi^{(0)}))=&(\overbrace{{\color{blue}{50}_1,{50}_2,{50}_3,{50}_4},{\color{red}\overline{49}_5}}^{{\color{red}\{\overline{49}\}_5}},\overbrace{{\color{blue}\overline{39}_1, \overline{36}_2,\overline{34}_3,\overline{30}_4},{\color{red}{30}_5}}^{{\color{red}\{{30}\}_5}},\overline{26}_1,\overline{24}_2,\\ &\ \underbrace{{\color{blue}{20}_3,\overline{19}_4,\overline{10}_1,{10}_2},{\color{red}{10}_5}}_{\color{red}\{{10}\}_5},\overline{6}_3).\end{split}\] Furthermore, set ${\rm Step}_2(\delta^{(1)})=(61,51,31)$, where $q_1=6>q_2=5>q_3=3$. {\bf Step 3:} Denote ${\rm Step}_2(\pi^{(0)})$ by $\varsigma$; we will apply the $5$-insertion to insert $31$, $51$ and $61$ of $\delta^{(1)}$, from smallest to largest, into $\varsigma$ successively to generate some overlined parts congruent to $1$ modulo $10$. Let $\varsigma^{(0)}=\varsigma$. \begin{itemize} \item $b=0$. Insert $31$ into $\varsigma^{(0)}$, and note that $q_3=3$. There are three $5$-marked parts in $RG(\varsigma^{(0)})$, which are $\tilde{r}_1(\varsigma^{(0)})=\overline{49}$, $\tilde{r}_2(\varsigma^{(0)})=30$ and $\tilde{r}_3(\varsigma^{(0)})={10}$. It is easy to check that $p_3=3$ is the least integer such that $\overline{10\cdot(q_3-p_3)+1}=\overline{1}\geq \tilde{r}_{p_3+1}(\varsigma^{(0)})+10=-\infty$ and there are no overlined parts congruent to $1$ modulo $10$ in $\varsigma^{(0)}$. Hence \[\varsigma^{(0)}\in\overline{\mathcal{B}}_<(1,4,6,9;10,6,4|3,31).\] Apply the $5$-insertion $I_{31}$ to $\varsigma^{(0)}$ to get $\varsigma^{(1)}$. More precisely, note that $p_3=3$, so we first change $\overline{49}$, $30$ and $10$ to $\overline{59}$, $40$ and $20$ respectively, and then insert $\overline{1}$ into the resulting overpartition.
\[\begin{split}RG(\varsigma^{(1)})=&(\overbrace{{\color{blue}\overline{59}_1,{50}_2,{50}_3,{50}_4},{\color{red}{50}_5}}^{{\color{red}\{{50}\}_5}},\overbrace{{\color{blue}{40}_1,\overline{39}_2, \overline{36}_3,\overline{34}_4},{\color{red}\overline{30}_5}}^{{\color{red}\{\overline{30}\}_5}},\overbrace{{\color{blue}\overline{26}_1,\overline{24}_2,{20}_3,{20}_4},{\color{red}\overline{19}_5}}^{{\color{red}\{\overline{19}\}_5}},\\ &\ {{\overline{10}_1,{10}_2}},\overline{6}_3,\overline{1}_4).\end{split}\] \item $b=1$. Insert $51$ into $\varsigma^{(1)}$, and note that $q_2=5$. There are three $5$-marked parts in $RG(\varsigma^{(1)})$, which are $\tilde{r}_1(\varsigma^{(1)})={50}$, $\tilde{r}_2(\varsigma^{(1)})=\overline{30}$ and $\tilde{r}_3(\varsigma^{(1)})=\overline{19}$. It is easy to find that $p_2=1$ is the least integer such that $\overline{10\cdot(q_2-p_2)+1}=\overline{41}\geq \tilde{r}_{p_2+1}(\varsigma^{(1)})+10=\overline{40}$ and the largest overlined part congruent to $1$ modulo $10$ in $\varsigma^{(1)}$ is $\overline{1}$, which is less than $\overline{41}$. Hence \[\varsigma^{(1)}\in\overline{\mathcal{B}}_<(1,4,6,9;10,6,4|3,51).\] Apply the $5$-insertion $I_{51}$ to $\varsigma^{(1)}$ to get $\varsigma^{(2)}$. More precisely, note that $p_2=1$, so we first change $50$ to $60$ and then insert $\overline{41}$ into the resulting overpartition. \[\begin{split}RG(\varsigma^{(2)})=&(\overbrace{{\color{blue}{60}_1,\overline{59}_2,{50}_3,{50}_4},{\color{red}{50}_5}}^{{\color{red}\{{50}\}_5}},\overbrace{{\color{blue}\overline{41}_1,{40}_2,\overline{39}_3, \overline{36}_4},{\color{red}\overline{34}_5}}^{{\color{red}\{\overline{34}\}_5}},\overline{30}_1,\overbrace{{\color{blue}\overline{26}_2,\overline{24}_3,{20}_1,{20}_4},{\color{red}\overline{19}_5}}^{{\color{red}\{\overline{19}\}_5}}\\ &\ {{\overline{10}_2,{10}_3}},\overline{6}_1,\overline{1}_4).\end{split}\] \item $b=2$. Insert $61$ into $\varsigma^{(2)}$, and note that $q_1=6$. There are three $5$-marked parts in $RG(\varsigma^{(2)})$, which are $\tilde{r}_1(\varsigma^{(2)})={50}$, $\tilde{r}_2(\varsigma^{(2)})=\overline{34}$ and $\tilde{r}_3(\varsigma^{(2)})=\overline{19}$. It is easy to check that $p_1=0$ is the least integer such that $\overline{10\cdot(q_1-p_1)+1}=\overline{61}\geq \tilde{r}_{p_1+1}(\varsigma^{(2)})+10=60$ and the largest overlined part congruent to $1$ modulo $10$ in $\varsigma^{(2)}$ is $\overline{41}$, which is less than $\overline{61}$. Hence \[\varsigma^{(2)}\in\overline{\mathcal{B}}_<(1,4,6,9;10,6,4|3,61).\] Apply the $5$-insertion $I_{61}$ to $\varsigma^{(2)}$ to get $\varsigma^{(3)}$, namely, insert $\overline{61}$ as a part into $\varsigma^{(2)}$.
\begin{equation}\label{reverse-theta}\begin{split} RG(\varsigma^{(3)})=&(\overline{61}_1,\overbrace{{\color{blue}{60}_2,\overline{59}_3,{50}_1,{50}_4},{\color{red}{50}_5}}^{{\color{red}\{{50}\}_5}},\overbrace{{\color{blue}\overline{41}_2,{40}_3,\overline{39}_1, \overline{36}_4},{\color{red}\overline{34}_5}}^{{\color{red}\{\overline{34}\}_5}},\overline{30}_2,\overbrace{{\color{blue}\overline{26}_1,\overline{24}_3,{20}_2,{20}_4},{\color{red}\overline{19}_5}}^{{\color{red}\{\overline{19}\}_5}}\\ &\ {{\overline{10}_1,{10}_3}},\overline{6}_2,\overline{1}_4).\end{split} \end{equation} \item Furthermore, set ${\rm Step}_3(\pi^{(0)})=\varsigma^{(3)}$ and ${\rm Step}_3(\delta^{(1)})=\emptyset.$ \end{itemize} Set $\tau={\rm Step}_3(\pi^{(0)})$; it is easy to check that $\tau$ is an overpartition in $\overline{\mathcal{B}}_1(1,4,6,9;10,6,4)$ with three overlined parts congruent to $1$ modulo $10$ and $|\tau|=|\pi|+|\delta^{(1)}|+|\delta^{(4)}|$; indeed, $|\tau|=686=376+165+145$. {\bf The example for $\Upsilon$:} Let $k=6$, $r=4$, $\lambda=4$, $\eta=10$, $\alpha_1=1$, $\alpha_2=4$, $\alpha_3=6$, $\alpha_4=9$ and let $\tau$ be an overpartition in $\overline{\mathcal{B}}_1(1,4,6,9;10,6,4)$ given by \begin{equation*} \begin{split} \tau=&(\overline{61},{60},\overline{59},{50},{50},{50},\overline{41},{40},\overline{39}, \overline{36},\overline{34},\overline{30},\overline{26},\overline{24},{20},{20},\overline{19},\\ &\ \overline{10},{10},\overline{6},\overline{1}),\end{split} \end{equation*} whose reverse Gordon marking is given in \eqref{reverse-theta}. Then the triplet $(\delta^{(1)},\delta^{(4)},\pi)$ can be obtained by iteratively using the $5$-separation and the $5$-subtraction. {\bf Step 1:} Note that there are three overlined parts congruent to 1 modulo 10 in $\tau$. Let $\gamma^{(0)}=\emptyset$ and $\tau^{(0)}=\tau$. We will iteratively use the $5$-separation to remove $\overline{61}$, $\overline{41}$ and $\overline{1}$ from $\tau$. \begin{itemize} \item $b=0$. Remove $\overline{61}$ from $\tau^{(0)}$, and let $s_1=6$. Let $\underline{\tau}^{(0)}$ be the underlying overpartition obtained by removing $\overline{61}$ from $\tau^{(0)}$. Then the Gordon marking of $\underline{\tau}^{(0)}$ is given as follows. \[\begin{split}G(\underline{\tau}^{(0)})=&(\overbrace{{\color{red}{60}_5},{\color{blue}\overline{59}_1,{50}_4,{50}_3,{50}_2}}^{\color{red}\{60\}_5},\overline{41}_1,\overbrace{{\color{red}{40}_5},{\color{blue}\overline{39}_4, \overline{36}_3,\overline{34}_2,\overline{30}_1}}^{\color{red}\{40\}_5},\overline{26}_4,\overline{24}_3,\\ &\ \underbrace{{\color{red}{20}_5},{\color{blue}{20}_2,\overline{19}_1,\overline{10}_4,{10}_3}}_{\color{red}\{20\}_5},\overline{6}_2,\overline{1}_1).\end{split}\] Note that there are three $5$-marked parts in $G(\underline{\tau}^{(0)})$, which are $\tilde{g}_1(\underline{\tau}^{(0)})=60$, $\tilde{g}_2(\underline{\tau}^{(0)})={40}$ and $\tilde{g}_3(\underline{\tau}^{(0)})={20}$. It is easy to check that $p_1=0$ is the least integer such that $\overline{10\cdot s_1+1}=\overline{61}>\tilde{g}_{p_1+1}(\underline{\tau}^{(0)})=60$. Hence \[\tau^{(0)}\in \overline{\mathcal{B}}_=(1,4,6,9;10,6,4|3,61).\] We then apply the $5$-separation $SP_{61}$ to $\tau^{(0)}$: since $p_1=0$, we simply remove $\overline{61}$ from $\tau^{(0)}$ to get $\tau^{(1)}$ and insert $61$ into $\gamma^{(0)}$ to obtain $\gamma^{(1)}$. Hence $\tau^{(1)}=\underline{\tau}^{(0)}$ and $\gamma^{(1)}=(61)$. \item $b=1$. Remove $\overline{41}$ from $\tau^{(1)}$, and let $s_2=4$.
Let $\underline{\tau}^{(1)}$ be the underlying overpartition obtained by removing $\overline{41}$ from $\tau^{(1)}$. Then the Gordon marking of $\underline{\tau}^{(1)}$ is given as follows. \[\begin{split}G(\underline{\tau}^{(1)})=&(\overbrace{{\color{red}{60}_5},{\color{blue}\overline{59}_4,{50}_3,{50}_2,{50}_1}}^{\color{red}\{60\}_5},\overbrace{{\color{red}{40}_5},{\color{blue}\overline{39}_4, \overline{36}_3,\overline{34}_2,\overline{30}_1}}^{\color{red}\{40\}_5},\overline{26}_4,\overline{24}_3,\\ &\ \underbrace{{\color{red}{20}_5},{\color{blue}{20}_2,\overline{19}_1,\overline{10}_4,{10}_3}}_{\color{red}\{20\}_5},\overline{6}_2,\overline{1}_1).\end{split}\] There are three $5$-marked parts in $G(\underline{\tau}^{(1)})$, which are $\tilde{g}_1(\underline{\tau}^{(1)})=60$, $\tilde{g}_2(\underline{\tau}^{(1)})={40}$ and $\tilde{g}_3(\underline{\tau}^{(1)})=20$. It is easy to check that $p_2=1$ is the least integer such that $\overline{10\cdot s_2+1}=\overline{41}>\tilde{g}_{p_2+1}(\underline{\tau}^{(1)})=40$. Hence \[\tau^{(1)}\in \overline{\mathcal{B}}_=(1,4,6,9;10,6,4|3,51).\] We then apply the $5$-separation $SP_{51}$ to $\tau^{(1)}$ to get $\tau^{(2)}$. We first remove $\overline{41}$ from $\tau^{(1)}$ to get $\underline{\tau}^{(1)}$, and then change $60$ in $\underline{\tau}^{(1)}$ to $50$ to obtain ${\tau}^{(2)}$. Finally, we insert $51$ into $\gamma^{(1)}$ to obtain $\gamma^{(2)}$. Hence $\gamma^{(2)}=(61,51)$, and \[\begin{split}G({\tau}^{(2)})=&({{\overline{59}_5,{{50}_4},{50}_3,{50}_2,{50}_1}},{{{40}_5},{\overline{39}_4, \overline{36}_3,\overline{34}_2,\overline{30}_1}},\overline{26}_4,\overline{24}_3,\\ &\ {{{20}_5},{{20}_2,\overline{19}_1,\overline{10}_4,{10}_3}},\overline{6}_2,\overline{1}_1).\end{split}\] \item $b=2$. Remove $\overline{1}$ from $\tau^{(2)}$, and let $s_3=0$. Let $\underline{\tau}^{(2)}$ be the underlying overpartition obtained by removing $\overline{1}$ from $\tau^{(2)}$. Then the Gordon marking of $\underline{\tau}^{(2)}$ is given as follows. \[\begin{split}G(\underline{\tau}^{(2)})=&(\overbrace{{\color{red}\overline{59}_5},{\color{blue}{50}_4,{50}_3,{50}_2,{50}_1}}^{\color{red}\{\overline{59}\}_5},\overbrace{{\color{red}{40}_5},{\color{blue}\overline{39}_4, \overline{36}_3,\overline{34}_2,\overline{30}_1}}^{\color{red}\{40\}_5},\overline{26}_3,\overline{24}_2,\\ &\ \underbrace{{\color{red}{20}_5},{\color{blue}{20}_4,\overline{19}_1,\overline{10}_3,{10}_2}}_{\color{red}\{20\}_5},\overline{6}_1).\end{split}\] There are three $5$-marked parts in $G(\underline{\tau}^{(2)})$, which are $\tilde{g}_1(\underline{\tau}^{(2)})=\overline{59}$, $\tilde{g}_2(\underline{\tau}^{(2)})={40}$ and $\tilde{g}_3(\underline{\tau}^{(2)})={20}$. It is easy to check that $p_3=3$ is the least integer such that $\overline{10\cdot s_3+1}=\overline{1}>\tilde{g}_{p_3+1}(\underline{\tau}^{(2)})=-\infty$. Hence \[\tau^{(2)}\in \overline{\mathcal{B}}_=(1,4,6,9;10,6,4|3,31).\] We then apply the $5$-separation $SP_{31}$ to $\tau^{(2)}$. We first remove $\overline{1}$ from $\tau^{(2)}$ to get $\underline{\tau}^{(2)}$. Next, we change $\overline{59}$, ${40}$ and ${20}$ in $\underline{\tau}^{(2)}$ to $\overline{49}$, ${30}$ and ${10}$ respectively to obtain $\tau^{(3)}$. Finally, we insert $31$ into $\gamma^{(2)}$ to obtain $\gamma^{(3)}$.
Hence $\gamma^{(3)}=(61,51,31)$, and \[\begin{split}G({\tau}^{(3)})=&({{{50}_5,{{50}_4},{50}_3,{50}_2,\overline{49}_1}},{{\overline{39}_5, \overline{36}_3,\overline{34}_2,\overline{30}_4,{30}_1}},\overline{26}_3,\overline{24}_2,\\ &\ {{{20}_5},{\overline{19}_1,\overline{10}_4,{10}_3,{10}_2}},\overline{6}_1).\end{split}\] \end{itemize} Note that there are no overlined parts congruent to 1 modulo 10 in ${\tau}^{(3)}$, so ${\rm Step}_1(\tau)={\tau}^{(3)}$ and ${\rm Step}_1(\delta^{(1)})=\gamma^{(3)}=(61,51,31).$ {\bf Step 2:} Denote ${\rm Step}_1(\tau)$ by $\varsigma$. Notice that $f_{\varsigma}(0,10]=3$, there are no $5$-sets of $\varsigma$ in $(0,\overline{19})$ and $\overline{9}$ is not a part of $\varsigma$. By definition, we see that the part ${10}$ with mark $2$ in the Gordon marking of $\varsigma={\tau}^{(3)}$, denoted by ${10}_2$, is the non-degenerate $3$-part of $\varsigma$. We will apply the $5$-subtraction to $\varsigma$ to obtain an overpartition in $\overline{\mathcal{B}}_1(4,6,9;10,6,4)$ in which there is no non-degenerate $3$-part. To this end, we subtract $1$ from ${10}_2$ to obtain the degenerate overpartition $\hat{\varsigma}$ of $\varsigma$. \[\begin{split}G(\hat{\varsigma})=&(\overbrace{{\color{red}{50}_5},{\color{blue}{50}_4,{50}_3,{50}_2,\overline{49}_1}}^{\color{red}\{{50}\}_5},\overbrace{{\color{red}\overline{39}_5},{\color{blue} \overline{36}_4,\overline{34}_3,\overline{30}_2,{30}_1}}^{\color{red}\{\overline{39}\}_5},\overline{26}_4,\overline{24}_3,\\ &\ {{{20}_2},{\overline{19}_1,\overline{10}_4,{10}_3}},\overline{9}_2,\overline{6}_1).\end{split}\] There are two $5$-marked parts in $G(\hat{\varsigma})$, which are $\tilde{g}_1(\hat{\varsigma})={50}$ and $\tilde{g}_2(\hat{\varsigma})=\overline{39}$. Hence \[ {\varsigma}\in \overline{\mathcal{B}}_{10}(4,6,9;10,6,4|2,2).\] We then apply the $5$-subtraction $S_{21}$ to ${\varsigma}$. We first subtract $1$ from ${10}_2$ in ${\varsigma}$ to get $\hat{{\varsigma}}$. Next, we change ${50}$ and $\overline{39}$ in $\hat{{\varsigma}}$ to ${40}$ and $\overline{29}$, respectively, to obtain ${\rm Step}_2(\tau)$. Set ${\rm Step}_2(\delta^{(1)})=(61,51,31,21)$. \[\begin{split}G({\rm Step}_2(\tau))=&({50}_5,{50}_4,{50}_3,\overline{49}_2,{\color{red}{40}_1}, {\color{blue}\overline{36}_4,\overline{34}_3,\overline{30}_2,{30}_5},\overline{29}_1,\overline{26}_4,\overline{24}_3,\\ &\ {20}_2,\overline{19}_1,\overline{10}_4,{10}_3,\overline{9}_2,\overline{6}_1).\end{split}\] {\bf Step 3:} Denote ${\rm Step}_2(\tau)$ by $\sigma$. It is easy to see that there is at least one non-degenerate part in $\sigma$, the smallest non-degenerate $5$-set of $\sigma$ is $\{30_5,\overline{30}_2,\overline{34}_3,\overline{36}_4,40_1\}$, and the smallest non-degenerate part of $\sigma$ is the part $40$ with mark $1$ in the Gordon marking of $\sigma$, denoted by ${40}_1$. Hence we will iteratively apply the $5$-subtraction to $\sigma$ to obtain an overpartition in $\overline{\mathcal{B}}_1(4,6,9;10,6,4)$ such that every $5$-set of the resulting overpartition contains a part congruent to $9$ modulo $10$. Let $\sigma^{(0)}=\sigma$ and $\zeta^{(0)}={\rm Step}_2(\delta^{(1)})$. We first subtract $1$ from $40_1$ to get the degenerate overpartition $\hat{\sigma}^{(0)}$ of $\sigma^{(0)}$.
\[\begin{split}G(\hat{\sigma}^{(0)})=&({{{50}_4,{50}_3,{50}_2,\overline{49}_1}},\overline{39}_1,\overline{36}_4,\overline{34}_3,\overline{30}_2,\overbrace{{\color{red}{30}_5},{\color{blue} \overline{29}_1,\overline{26}_4,\overline{24}_3,{{20}_2}}}^{\color{red}\{{30}\}_5},\\ &\ {{\overline{19}_1,\overline{10}_4,{10}_3}},\overline{9}_2,\overline{6}_1).\end{split}\] There is one $5$-marked part in $G(\hat{\sigma}^{(0)})$, which is $\tilde{g}_1(\hat{\sigma}^{(0)})={30}$. It is easy to check that $q_1=0$ is the largest integer such that $\tilde{g}_{q_1}(\hat{\sigma}^{(0)})=\infty> 40+10$. Hence \[\sigma^{(0)}\in \overline{\mathcal{B}}_{10}(4,6,9;10,6,4|1,0).\] We then apply the $5$-subtraction $S_{1}$ to $\sigma^{(0)}$ to get $\sigma^{(1)}$; namely, we just subtract $1$ from ${40}_1$ in $\sigma^{(0)}$. We see that $\sigma^{(1)}=\hat{\sigma}^{(0)}$ and set $\zeta^{(1)}=(61,51,31,21,1)$. Note that any $5$-set of $\sigma^{(1)}$ contains a part congruent to $9$ modulo $10$, so set ${\rm Step}_3(\tau)=\sigma^{(1)}$ and ${\rm Step}_3(\delta^{(1)})=\zeta^{(1)}$. We set $\delta^{(1)}={\rm Step}_3(\delta^{(1)})=\zeta^{(1)}=(61,51,31,21,1)$ and split ${\rm Step}_3(\tau)$ into $\overline{\delta}^{(4)}=(\overline{49},\overline{39},\overline{29},\overline{19},\overline{9})$ and \[\pi=(50,50,50,\overline{36},\overline{34},\overline{30},30,\overline{26},\overline{24},{20},\overline{10},10,\overline{6}).\] Then, we change the parts in $\overline{\delta}^{(4)}$ to non-overlined parts to obtain $\delta^{(4)}=(49,39,29,19,9)$. It is easy to check that $\delta^{(1)}=(61,51,31,21,1)\in\mathcal{D}_1$, $\delta^{(4)}=(49,39,29,19,9)\in\mathcal{D}_9$ and $\pi$ is an overpartition in $\overline{\mathcal{B}}_1(4,6;10,5,3)$ with $|\tau|=|\delta^{(1)}|+|\delta^{(4)}|+|\pi|$. \section{Proof of Theorem \ref{G-B-O-1}} In this section, we will give a proof of Theorem \ref{G-B-O-1} by using some related results on Bailey pairs. For more information on Bailey pairs, see, for example, \cite{Agarwal-Andrews-Bressoud, Andrews-1986, Andrews-2000, Bressoud-Ismail-Stanton-2000, Lovejoy-2004b, Paule-1987,Warnaar-2001}. Recall that a pair of sequences $(\alpha_n(a,q),\beta_n(a,q))$ is called a Bailey pair relative to $(a,q)$ (or a Bailey pair for short) if for $n\geq 0,$ \begin{equation*}\label{bailey pair} \beta_n(a,q)=\sum_{r=0}^n\frac{\alpha_r(a,q)}{(q;q)_{n-r}(aq;q)_{n+r}}. \end{equation*} The proof of Theorem \ref{G-B-O-1} is very similar to the proof of Theorem 1.18 in \cite{He-Ji-Wang-Zhao}. We shall invoke the following Bailey pair, which has appeared in \cite[(2.9)]{He-Ji-Wang-Zhao}. Here we generalize the range of $k$ and $r$ in the Bailey pair from $k > r+ 1 \geq 2$ to $k\geq r \geq 1$; for the proof of the remaining cases, see the proof of \cite[Lemma 2.6]{He-Ji-Wang-Zhao}. \begin{lem}\label{BPG} For $k\geq r \geq 1$, \begin{equation}{\label{BPG-eq}} \begin{split} \alpha_n(1,q)&=\left\{ \begin{array}{ll} 1, & \hbox{if $n=0$}, \\[5pt] (-1)^nq^{\frac{2k-2r+1}{2}n^2}(q^{\frac{2k-2r-1}{2} n}+q^{-\frac{2k-2r+1}{2} n}) (1+q^{n})/2, & \hbox{if $n\geq1$,} \end{array} \right.\\[5pt] \beta_{n}(1,q)&=\sum_{n\geq N_{r+1}\geq \cdots \geq N_{k-1}\geq0}\frac{(1+q^{n})q^{N_{r+1}^2+\cdots+N_{k-1}^2 +N_{r+1}+\cdots+ N_{k-1}}}{2(q;q)_{n-N_{r+1}}(q;q)_{N_{r+1}-N_{r+2}}\cdots (q;q)_{N_{k-1}}} \end{split} \end{equation} is a Bailey pair relative to $(1,q)$. \end{lem} The proof of Theorem \ref{G-B-O-1} also requires Corollary \ref{lc} below.
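As an illustrative aside, the defining relation of the Bailey pair in Lemma \ref{BPG} can be checked symbolically for small parameters. The following Python/SymPy sketch is ours and is not part of the proof; it takes the small case $k=3$, $r=1$, so that the sum defining $\beta_n$ runs over the single index $N_2$.
\begin{verbatim}
from sympy import symbols, cancel, Rational

q = symbols('q')

def qpoch(n):
    # (q; q)_n as an exact polynomial in q
    out = 1
    for i in range(1, n + 1):
        out *= (1 - q**i)
    return out

k, r = 3, 1
a, b = 2*k - 2*r + 1, 2*k - 2*r - 1      # a = 5, b = 3 here

def alpha(n):
    if n == 0:
        return 1
    return (-1)**n * (q**Rational(a*n*n + b*n, 2)
                      + q**Rational(a*n*n - a*n, 2)) * (1 + q**n) / 2

def beta(n):
    # single summation index N = N_2, since k - 1 = r + 1
    return sum((1 + q**n) * q**(N*N + N) / (2 * qpoch(n - N) * qpoch(N))
               for N in range(n + 1))

for n in range(5):
    rhs = sum(alpha(j) / (qpoch(n - j) * qpoch(n + j)) for j in range(n + 1))
    assert cancel(beta(n) - rhs) == 0
print("Bailey pair relation verified for n = 0,...,4")
\end{verbatim}
For instance, for $n=1$ both sides reduce to $(1+q)(1+q^2)/(2(1-q))$.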
\begin{core}\label{lc}{\rm \cite[Corollary 5.7]{he-ji-zhao}} If $(\alpha_n(1,q^\eta),\beta_n(1,q^\eta))$ is a Bailey pair relative to $(1,q^\eta)$, then for $r>\lambda\geq0$, \begin{align}\nonumber & \sum_{n=0}^\infty\frac{2q^{(r-\frac{\lambda +1}{2})\eta n^2+\frac{\lambda +1}{2}\eta n -(\alpha_1+\cdots+\alpha_{\lambda})n}(-q^{\alpha_1};q^{\eta})_n\cdots(-q^{ \alpha_{\lambda}};q^{\eta})_n}{(1+q^{\eta n})(-q^{\eta-\alpha_1};q^{\eta}) _n\cdots(-q^{\eta-\alpha_{\lambda}};q^{\eta})_n}\alpha_n(1,q^\eta) \\[5pt]\nonumber &=\frac{(q^{\eta};q^{\eta})_{\infty}}{(-q^{\eta-\alpha_1};q^{\eta})_ {\infty}}\sum_{N_1\geq N_2\geq\cdots\geq N_r\geq 0} \frac{q^{ \eta(N_{\lambda+2}^2+\cdots+N_{r}^2)+\eta\left({N_1+1 \choose2}+\cdots+{N_{\lambda+1}+1 \choose2}\right) -(\alpha_1N_1+\cdots+\alpha_{\lambda}N_{\lambda}) }}{(q^{\eta};q^{\eta})_{N_1-N_{2}}\cdots(q^{\eta};q^{\eta})_{ N_{r-1}-N_{r}}}\\[5pt] \label{lem-cor} &\hskip2cm\times\frac{(-1;q^{\eta})_{N_{\lambda+1} }(-q^{\alpha_1};q^{\eta})_{N_1}\cdots(-q^{\alpha_{\lambda }};q^{\eta})_{N_{\lambda}}} {(-q^{\eta};q^{\eta})_{N_{\lambda}} (-q^{\eta-\alpha_2};q^ {\eta})_{N_1}\cdots(-q^{\eta-\alpha_{\lambda}};q^{\eta})_{ N_{\lambda-1}}}\beta_{N_r}(1,q^\eta). \end{align} \end{core} We are ready to give the proof of Theorem \ref{G-B-O-1}. \noindent{\bf Proof of Theorem \ref{G-B-O-1}: } Plugging the Bailey pair \eqref{BPG-eq} in Lemma \ref{BPG} with $q$ replaced by $q^\eta$ into \eqref{lem-cor}, and by using the fact that for $1\leq i\leq \lambda$, $\alpha_i+\alpha_{\lambda+1-i}=\eta$, the left-hand side of \eqref{lem-cor} can be simplified as follows. \begin{align}\nonumber & 1+\sum_{n=1}^\infty\frac{(-q^{\alpha_1};q^{\eta})_n\cdots(-q^{ \alpha_{\lambda}};q^{\eta})_n}{(-q^{\eta-\alpha_1};q^{\eta}) _n\cdots(-q^{\eta-\alpha_{\lambda}};q^{\eta})_n}\\[5pt] \nonumber & \times (-1)^nq^{(k-\frac{\lambda}{2})\eta n^2+\frac{\lambda+1}{2}\eta n -(\alpha_1+\cdots+\alpha_{\lambda})n}(q^{\frac{2k-2r-1}{2}\eta n}+q^{-\frac{2k-2r+1}{2}\eta n})\\ \nonumber &=1+\sum_{n=1}^{\infty}(-1)^nq^{(k-\frac{\lambda}{2})\eta n^2}(q^{(k-r)\eta n}+q^{-(k-r)\eta n})\\ \label{eq-L} &=(q^{(r-\frac{\lambda}{2})\eta},q^{(2k-r- \frac{\lambda}{2})\eta},q^{(2k-\lambda)\eta};q^{(2k-\lambda)\eta})_{\infty}. \end{align} The right-hand side of \eqref{lem-cor} is \begin{align} \nonumber &\frac{(q^{\eta};q^{\eta})_{\infty}}{(-q^{\eta-\alpha_1};q^{\eta})_ {\infty}}\sum_{N_1\geq \cdots\geq N_{k-1}\geq0}\frac{(1+q^{-\eta N_r})(-q^{\eta};q^{\eta})_{N_{\lambda+1} -1}q^{\eta(N_{\lambda+2}^2+\cdots+N_{k-1}^2+N_r+\cdots+N_{k-1})}} {(q^{\eta};q^{\eta})_{N_1-N_2}\cdots(q^{\eta};q^{\eta})_{N_{ k-2}-N_{k-1}}(q^{\eta};q^{\eta})_{N_{k-1}}}\\[5pt] \label{eq-R1} &\hskip1cm\times\frac{q^{\eta\left({N_1+1 \choose2}+\cdots+{N_{\lambda+1}+1 \choose2}\right)-(\alpha_1N_1+\cdots+\alpha_{\lambda}N_{\lambda})} (-q^{\alpha_1};q^{\eta})_{N_1}\cdots(-q^{\alpha_{\lambda }};q^{\eta})_{N_{\lambda}}}{(-q^{\eta};q^{\eta})_{ N_{\lambda}}(-q^{\eta-\alpha_2};q^{\eta})_{N_1} \cdots(-q^{\eta-\alpha _{\lambda}};q^{\eta})_{N_{\lambda-1}}}. 
\end{align} Noting that \begin{align*} (-q^{r};q^\eta)_{n}&=q^{r n+\eta {n\choose 2}}(-q^{\eta-r-n\eta};q^\eta)_{n}, \\[5pt] \frac{1}{(-q^{\eta-r};q^\eta)_n} &=\frac{(-q^{\eta-r+n\eta};q^\eta)_\infty} {(-q^{\eta-r};q^\eta)_\infty}, \end{align*} the summation in \eqref{eq-R1} can be simplified to \begin{align}\nonumber &\sum_{N_1\geq\cdots\geq N_{k-1}\geq0}\frac{q^{\eta(N_1^2+\cdots+N_{k-1}^2+N_r+\cdots+N_{k-1})} (1+q^{-\eta N_r})(-q^{\eta-\eta N_{\lambda+1}};q^{\eta})_{N_{\lambda+1}-1}} {(q^{\eta};q^{\eta})_{N_1-N_2}\cdots(q^{\eta};q^{\eta})_ {N_{k-2}-N_{k-1}}(q^{\eta};q^{\eta})_{N_{k-1}}}\\[5pt] \label{eq-R2} &\hskip1cm\times\frac{(-q^{\eta+\eta N_{\lambda}};q^{\eta})_{\infty}\prod_{s=1}^{\lambda}(-q^{\eta-\alpha_s -\eta N_{s}};q^{\eta})_{N_s}\prod_{s=2}^{\lambda}(-q^{\eta-\alpha_s+ \eta N_{s-1}};q^{\eta})_{\infty}}{(-q^\eta;q^\eta)_\infty\prod_{s=2}^{\lambda}(-q^{ \eta-\alpha_s};q^{\eta})_{\infty}}. \end{align} Combining \eqref{eq-L}, \eqref{eq-R1} and \eqref{eq-R2}, we deduce that \begin{align}\nonumber &\frac{(q^{\eta};q^{\eta})_{\infty}}{(-q^{\eta-\alpha_1};q^{\eta})_ {\infty}}\sum_{N_1\geq\cdots\geq N_{k-1}\geq0}\frac{q^{\eta(N_1^2+\cdots+N_{k-1}^2+N_r+\cdots+N_{k-1})} (1+q^{-\eta N_r})(-q^{\eta-\eta N_{\lambda+1}};q^{\eta})_{N_{\lambda+1}-1}} {(q^{\eta};q^{\eta})_{N_1-N_2}\cdots(q^{\eta};q^{\eta})_ {N_{k-2}-N_{k-1}}(q^{\eta};q^{\eta})_{N_{k-1}}}\\[5pt] \nonumber &\hskip1cm\times\frac{(-q^{\eta+\eta N_{\lambda}};q^{\eta})_{\infty}\prod_{s=1}^{\lambda}(-q^{\eta-\alpha_s -\eta N_{s}};q^{\eta})_{N_s}\prod_{s=2}^{\lambda}(-q^{\eta-\alpha_s+ \eta N_{s-1}};q^{\eta})_{\infty}}{(-q^\eta;q^\eta)_\infty\prod_{s=2}^{\lambda}(-q^{ \eta-\alpha_s};q^{\eta})_{\infty}}\\ \nonumber &=(q^{(r-\frac{\lambda}{2})\eta},q^{(2k-r- \frac{\lambda}{2})\eta},q^{(2k-\lambda)\eta};q^{(2k-\lambda )\eta})_{\infty}. \end{align} Multiplying both sides of the above identity by $$\frac{(-q^{\eta-\alpha_1},\ldots,-q^{\eta-\alpha_{\lambda}}, -q^\eta;q^\eta)_\infty}{(q^{\eta};q^{\eta})_{\infty}},$$ we obtain \eqref{G-B-O-1-eq} by noting that for $1\leq i\leq\lambda$, $\alpha_i+\alpha_{\lambda+1-i}=\eta$. Thus we complete the proof of Theorem \ref{G-B-O-1}. \qed \vskip 0.2cm \noindent{\bf Acknowledgment.} This work was supported by the National Science Foundation of China.
\section{Introduction} It is generally impossible to completely isolate a small system of interest from the surrounding environment. Thus, dissipative effects caused by the environment are important in almost every quantum experiment, ranging from highly controlled settings, where much effort is invested in minimising them, to areas where the dissipation is the key object of interest. In many cases, exact modelling of the environment is not practical and its effect is instead accounted for by employing effective models describing the induced noise. Different approaches exist, e.g., quantum Langevin and stochastic Schr\"odinger equations \cite{Gardiner,Breuer}, quantum jump and state-diffusion models \cite{Percival,Plenio1998}, or Hilbert-space averaging methods \cite{Gemmer2006}. Arguably, the most widely applied approach is to use the \emph{quantum master equation} (QME) description \cite{Gardiner,Breuer}. In this approach, the system evolution is given by a time-local differential equation, where the effect of the environment is captured by the \emph{dynamical generator}. A master equation can be derived from a microscopic model of the system and environment, and their interaction, by tracing over the environment and applying appropriate approximations \cite{Gardiner,Breuer}. However, QMEs are also often applied directly, without explicit reference to an underlying model. In that case, care needs to be taken when several noise processes act in parallel, as simultaneous coupling to multiple baths in a microscopic model does not generally correspond to simple addition of noise generators. Moreover, when the Hamiltonian evolution of the system is modified, e.g., when controlling system dynamics by coherent driving \cite{Schmidt2011}, the form of noise generators in a QME may significantly change. Additivity of noise at the QME level has been discussed recently for qubits when analysing dynamical effects of interference between different baths \cite{Chan2014,Mitchison2018}, non-additivity of relaxation rates in multipartite systems \cite{Yu2006,Lankinen2016}, as well as in the context of charge (excitation) transport \cite{Giusteri2016}. In this work, we address the questions of when: \begin{itemize}[leftmargin=16pt,noitemsep,topsep=2pt,partopsep=2pt,parsep=0pt \item[(i)] \emph{The naive addition of generators yields physically valid dynamics.} \item[(ii)] \emph{The corresponding evolution coincides with the true system dynamics derived from the underlying microscopic model.} \end{itemize} First, we show that (i) is satisfied for generators which are commutative, semigroup-simulable (can be interpreted as a fictitious semigroup at each time instance), and preserve commutativity of the dynamics under addition. These reach beyond the case of Markovian generators for which (i) naturally holds. Outside of this class, we find examples of simple qubit QMEs which lead to unphysical dynamics. We observe that (ii) holds if and only if the cross-correlations between distinct environments can be ignored within a QME. We show this to be the case in the weak-coupling regime, extending previous results in this direction \cite{CohenTannoudji1998,Chan2014,Schaller2015}. We also provide a sufficient condition for (ii) dictated by the commutativity of Hamiltonians at the microscopic level. 
We combine these generic considerations with a detailed study of a specific open system, namely a qubit interacting simultaneously with multiple spin baths, for which we provide examples where (ii) is not satisfied even when the microscopic Hamiltonians are chosen to fulfil particular commutation relations. Our results are of relevance to areas of quantum physics where careful description of dissipative dynamics plays a key role, e.g., in dissipative quantum state engineering \cite{Diehl2008,Verstraete2009,Metelmann2015,Reiter2016}, dissipative coupling in optomechanics \cite{Aspelmeyer2014}, or in dissipation-enhanced quantum transport scenarios \cite{Gurvitz1996,Giusteri2016}, including biological processes \cite{Lambert2013}. In particular, they are of importance to situations in which QMEs are routinely employed to account for multiple sources of dissipation, e.g., in quantum thermodynamics \cite{Alicki1979,Levy2014arpc,Vinjanampathy2016,Goold2015} when dealing with multiple heat baths \cite{Skrzypczyk2011,Correa2013,Levy2014epl,Mitchison2018} or in quantum metrology \cite{Maccone2011,Escher2011,Demkowicz2012} where the relation between dissipation and Hamiltonian dynamics, encoding the estimated parameter, is crucial \cite{Chaves2013,Brask2015,Smirne2016,Haase2017}. The manuscript is structured as follows. In \secref{sec:qmes}, we discuss QMEs at an abstract level---as defined by families of dynamical generators whose important properties we summarise in \secref{sec:dyn_gens_phys}. We specify in \secref{sub:gen_add} conditions under which the addition of physically valid generators is guaranteed to yield legitimate dynamics. We demonstrate by explicit examples that even mild violation of these conditions may lead to unphysical evolutions. In \secref{sec:micro}, we view the validity of QMEs from the microscopic perspective. In particular, we briefly review in \secref{sec:QME_micro_der} the canonical derivation of a QME based on an underlying microscopic model, in order to discuss the effect of changing the system Hamiltonian on the QME, as well as the generalisation to interactions with multiple environments. We then formulate a general criterion for the validity of generator addition in \secref{sec:gen_add_micro}, which we explicitly show to be ensured in the weak coupling regime, or when particular commutation relations of the microscopic Hamiltonians are fulfilled. In \secref{sec:magnets}, we develop an exactly solvable model of a qubit interacting with multiple spin baths, which allows us to explicitly construct counterexamples that disprove the microscopic validity of generator addition in all the regimes in which the aforementioned commutation relations do not hold. Finally, we conclude in \secref{sec:conclusion}. \section{Time-local quantum master equations} \label{sec:qmes} QMEs constitute a standard tool to describe reduced dynamics of open quantum systems. They provide a compact way of defining the effective system evolution at the level of its density matrix, $\rho_S(t)$, without the need to explicitly specify either the environmental interactions or the nature of the noise. Although a QME may be expressed in a generalised form as an integro-differential equation involving time-convolution \cite{Vacchini2016}, its equivalent (cf.~\cite{Chruscinski2010}) and more transparent \emph{time-local} formulation is typically favoured, providing a more direct connection to the underlying physical mechanisms responsible for the dissipation \cite{Gardiner,Breuer}.
Given a time-local QME: \begin{equation} \frac{d}{dt}\rho_S(t) = \mathcal L_t[\rho_S(t)] = \mathcal H_t[\rho_S(t)]+\mathcal D_t[\rho_S(t)], \label{eq:QME_dyn_gen} \end{equation} all the information about the system evolution is contained within the \emph{dynamical generator}, $\mathcal L_t$, that is uniquely defined at each moment of time $t$. Moreover, $\mathcal L_t$ can always be decomposed into its \emph{Hamiltonian} and \emph{purely dissipative} parts, i.e., $\mathcal L_t=\mathcal H_t+\mathcal D_t$ in \eqnref{eq:QME_dyn_gen} with $\mathcal H_t[\rho] = -\mathrm{i} [H(t) , \rho]$ and some Hermitian $H(t)$ \cite{Gorini1976}. Although the QME \eref{eq:QME_dyn_gen} constitutes an ordinary differential equation, the system evolution may exhibit highly non-trivial memory features thanks to the arbitrary dependence of $\mathcal L_t$ on the local time-instance $t$, but also on the (fixed) initial time $t_0$ at which the evolution commences~\cite{Chruscinski2010}---which, without loss of generality, we choose to be zero ($t_0=0$) and drop throughout this work. \subsection{Physicality of dynamical generators} \label{sec:dyn_gens_phys} For the QME \eref{eq:QME_dyn_gen} to be physically valid, it must yield dynamics that is consistent with quantum theory. In particular, upon integration the QME must lead to a \emph{family of (dynamical) maps} $\Lambda_t$ (parametrised by $t$) that satisfy ${\rho_S}(t) = \Lambda_t[{\rho_S}(0)]$ for any $t\ge 0$ and initial ${\rho_S}(0)$, with each $\Lambda_t$ being completely positive and trace preserving (CPTP) \cite{Jamiolkowski1972,Choi1975}. On the other hand, any QME \eref{eq:QME_dyn_gen} is unambiguously specified by the \emph{family of (dynamical) generators} $\mathcal L_t$ appearing in \eqnref{eq:QME_dyn_gen}. However, as discussed in \appref{app:QMEs_dyn_gen_fams}, although the CPTP condition can be straightforwardly checked for maps $\Lambda_t$, it does not directly translate onto the generators $\mathcal L_t$. As a result, for a generic QME its physicality cannot be easily inferred at the level of \eqnref{eq:QME_dyn_gen}, unless its explicit integration is possible. Nevertheless, we formally call a family of dynamical generators $\mathcal L_t$ \emph{physical} if the family of maps it generates consists only of CPTP transformations. In what follows (see also \appref{app:dyn_gen_descr}), we describe properties of dynamical generators that ensure their physicality. Any family of dynamical generators, whether physical or not, can be uniquely decomposed as \cite{Gorini1976} \begin{equation} \mathcal L_t[\rho]= -\mathrm{i} [H(t) , \rho]+\sum_{i,j=1}^{d^2-1} \msf{D}_{ij}(t) \left( F_i \rho F_j^\dagger - \frac{1}{2}\{ F_j^\dagger F_i , \rho \} \right)\!, \label{eq:gen_mat_basis} \end{equation} where $d$ is the Hilbert space dimension and $\{F_i\}_{i=1}^{d^2}$ is any orthonormal operator basis with $\tr\{F_i^\dagger F_j\}=\delta_{ij}$ and all $F_i$ traceless except $F_{d^2}=\openone/\sqrt{d}$. The Hamiltonian part, $\mathcal H_t$, of the generator in \eqnref{eq:QME_dyn_gen} is then determined by $H(t)$ of \eqnref{eq:gen_mat_basis}, while the dissipative part $\mathcal D_t$ is defined by the Hermitian matrix $\msf{D}(t)$. Although general criteria for physicality of dynamical-generator families are not known, two natural classes of physical dynamics can be identified based on the above decomposition.
In particular, when $\msf{D}(t)$ is positive semidefinite, $\mathcal L_t$ is said to be of Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) form \cite{Gorini1976,Lindblad1976}. If this is the case for all $t\ge0$, then the corresponding evolution is not only physical but also \emph{CP-divisible}, i.e., the corresponding family of maps can be decomposed as $\Lambda_{t}=\tilde{\Lambda}_{t,s}\Lambda_{s}$, where $\tilde{\Lambda}_{t,s}$ is CPTP for all $0\le s\le t$. This property is typically associated with Markovianity of the evolution \cite{Rivas2014,Breuer2016,deVega2017}. Furthermore, when, in addition, $H$ and $\msf{D}$ in \eqnref{eq:gen_mat_basis} are time-independent, the dynamics forms a \emph{semigroup}, such that the generator and map families are directly related via $\Lambda_t=\exp[t\mathcal L]$ with all $\mathcal L_t=\mathcal L$ \cite{Alicki2002}. See \appref{app:dyn_gen_descr} for a more detailed discussion of different types of evolutions. For the purpose of this work, we also identify another important class of physical dynamics: \begin{definition} A given dynamical family $\Lambda_t$ is semigroup-simulable (SS) if for any $t\ge0$ the map $\mathcal Z_t =\log \Lambda_t$ is of the GKSL form \eref{eq:gen_mat_basis} with some $\msf{D}(t)\ge0$. \end{definition} \noindent Formally, $\mathcal Z_t$ constitutes the \emph{instantaneous exponent} of the dynamics, satisfying $\Lambda_t=\mathrm{e}^{\mathcal Z_t}$ (see \refcite{Chruscinski2014} and \appref{app:dyn_gen_descr}). Physicality of the evolution is then guaranteed by the GKSL form of $\mathcal Z_t$, because at any $t$ the dynamical map $\Lambda_t$ can be interpreted as a fictitious semigroup $\Lambda_t=\left.\mathrm{e}^{\mathcal Z_t\tau}\right|_{\tau=1}$ generated by $\mathcal Z_t$ (at this particular time instance). $\Lambda_t$ must therefore be CPTP at $t$. In general, it is not straightforward to verify whether a given QME \eref{eq:QME_dyn_gen} yields SS dynamics \cite{Chruscinski2014}, even after decomposing its dynamical generators according to \eqnref{eq:gen_mat_basis}. However, in the special case of \emph{commutative} dynamics, for which $[\mathcal L_s,\mathcal L_t]=0$ (or equivalently $[\Lambda_s,\Lambda_t]=0$) for all $s,t\ge0$, one may directly identify the SS subclass, because (see \appref{app:dyn_gen_descr}): \begin{lem} \label{lem:ss_cond_comm} Any commutative dynamics is SS iff (if and only if) for any $t\ge0$ the decomposition of its dynamical generators \eref{eq:gen_mat_basis} fulfils \begin{equation} \int_{0}^{t}\!\mathrm{d}\tau\,\msf{D}(\tau)\ge0. \label{eq:ss_cond_comm} \end{equation} In short, we term any such semigroup-simulable and commutative dynamics SSC. \end{lem} \noindent Note that the condition \eref{eq:ss_cond_comm} is clearly weaker than positive semi-definiteness, $\msf{D}(t)\ge0$, at all times. Hence, there exist commutative dynamics which are SS but not CP-divisible. However, let us emphasise that there also exist commutative dynamics which are physical but \emph{not} even SS. An explicit example is provided by the eternally non-Markovian model of \refcite{Hall2014}, as well as by other instances of random unitary \cite{Andersson2007,Chruscinski2013} and phase covariant \cite{Smirne2016} qubit dynamics, which we discuss in detail in \appref{app:dyn_gens_qubit_dyns}.
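To make the criterion \eref{eq:ss_cond_comm} concrete, consider a minimal numerical sketch (an assumed illustrative example, not taken from the appendices) for the commutative qubit dephasing family $\mathcal L_t[\rho]=\gamma(t)(\sigma_z\rho\sigma_z-\rho)$ with $\gamma(t)=\sin t$, for which $\msf{D}(t)$ possesses a single non-trivial eigenvalue proportional to $\gamma(t)$:
\begin{verbatim}
# Illustrative check (assumed example) of the SSC condition for commutative
# qubit dephasing with rate gamma(t) = sin(t): D(t) has a single relevant
# eigenvalue ~ gamma(t), so the condition reduces to
#   int_0^t gamma(s) ds = 1 - cos(t) >= 0   for all t,
# even though gamma(t) itself is negative on part of every period.
import numpy as np

t = np.linspace(0.0, 20.0, 4001)
gamma = np.sin(t)
dt = t[1] - t[0]
cumint = np.concatenate(([0.0],
                         np.cumsum((gamma[1:] + gamma[:-1]) / 2) * dt))
print("min_t int_0^t gamma =", cumint.min())  # ~0, never negative
print("min_t gamma(t)      =", gamma.min())   # -1 => not CP-divisible
\end{verbatim}
Such a family is therefore SS by the above lemma, despite failing CP-divisibility whenever $\gamma(t)<0$.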
\begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{generatorgeometry.png} \end{center} \caption{\textbf{Cross-section of the vector space defined by the families of dynamical generators.} The non-convex set (\emph{orange}) describes a cut through the set of all physical families, while the inner convex sets (\emph{blue}) correspond to cuts through convex cones of various dynamical subclasses possessing additive generators. Sets containing CP-divisible and semigroup evolutions are indicated, as well as two exemplary SSC classes of dynamics. Physicality can be broken by adding to a family $\mathcal L^{(1)}_t$, which is SSC but \emph{non}-CP-divisible, another family $\mathcal L^{(2)}_t$ that lies outside of the particular SSC class, even a semigroup.} \label{fig:generatorgeometry} \end{figure} \subsection{Additivity of dynamical generators} \label{sub:gen_add} We define the notion of \emph{additivity} for families of dynamical generators as follows: \begin{definition} Two physical families of generators $\mathcal L^{(1)}_t$ and $\mathcal L^{(2)}_t$ are additive if all their non-negative linear combinations, $\alpha \mathcal L^{(1)}_t+\beta \mathcal L^{(2)}_t$ with $\alpha,\beta\ge0$, are also physical. \label{def:gen_additivity} \end{definition} \noindent Note that according to \defref{def:gen_additivity} a pair of generator families can be additive only if each of them is individually \emph{rescalable}---remains physical when multiplied by a non-negative scalar, i.e., $\mathcal L_t\to\alpha\mathcal L_t$ remains physical for any $\alpha\ge0$. However, as such a multiplication does not invalidate the GKSL form of the decomposition \eref{eq:gen_mat_basis} or the condition \eref{eq:ss_cond_comm}, it follows that any generator family which is CP-divisible or SSC must be rescalable. From the linear algebra perspective \cite{Rockafellar1970}, one may formally define the \emph{vector space} containing families of dynamical generators. Physical generators then form its particular subset. Rescalability of a given $\mathcal L_t$ then means that the whole \emph{ray} $\{\alpha\mathcal L_t\}_{\alpha\ge0}$ lies within the physical set. Additivity of $\mathcal L^{(1)}_t$ and $\mathcal L^{(2)}_t$, on the other hand, means that all the elements of the \emph{convex cone}, $\{\alpha \mathcal L^{(1)}_t+\beta \mathcal L^{(2)}_t\}_{\alpha,\beta\ge0}$, are physical. For CP-divisible dynamics, we observe that when both $\mathcal L^{(1)}_t$ and $\mathcal L^{(2)}_t$ are of the GKSL form or even form a semigroup, so is any non-negative linear combination of them. Hence, it naturally follows that generator families describing CP-divisible evolutions constitute a convex cone contained in the physical set, with semigroups forming a subcone. Furthermore, as we demonstrate in \appref{app:dyn_gens_add}: \begin{lem} Any SSC generator families $\mathcal L^{(1)}_t$ and $\mathcal L^{(2)}_t$ are additive if upon addition, $\alpha \mathcal L^{(1)}_t+\beta \mathcal L^{(2)}_t$ with any $\alpha,\beta\ge0$, they yield commutative dynamics. \end{lem} \noindent Hence, the non-negative linear span of any such SSC pair forms a convex cone contained in the physical set. Moreover, one may then naturally expand such a cone by considering more than two, in particular, a complete set of SSC generator families whose non-negative linear combinations are all commutative. We term the convex cone so-constructed a particular \emph{SSC class}.
In \figref{fig:generatorgeometry}, we schematically depict the cross-section of the set of physical generator families, which then also cuts through the convex cones containing generator families of the aforementioned dynamical subclasses. Importantly, as the physical dynamics do \emph{not} form a convex cone in the vector space, those lying within such a hyperplane form a \emph{non-convex} set that, in turn, contains the \emph{convex} sets of:~CP-divisible dynamics, its semigroup subset, as well as the sets representing particular SSC classes. Now, as indicated in \figref{fig:generatorgeometry} by the dashed line, by adding a generator family that is SSC but \emph{not} CP-divisible (i.e., non-Markovian \cite{Rivas2014,Breuer2016,deVega2017}) and another physical family, even a semigroup, which does not commute with the first---i.e., is not contained within the corresponding SSC class---one may obtain unphysical dynamics. Consider an example of two purely dissipative ($\mathcal L_t=\mathcal D_t$ in \eqnref{eq:QME_dyn_gen}) qubit generators: \begin{subequations} \label{eq:ex_gens} \begin{align} \mathcal L^{(1)}_t[\rho] & = \gamma_1(t)\, (\sigma_x \rho \sigma_x - \rho), \label{eq:gen_deph}\\ \mathcal L^{(2)}_t[\rho] & = \gamma_2(t)\, (\sigma_- \rho \sigma_+ - \frac{1}{2}\{\sigma_+\sigma_-,\rho\}), \label{eq:gen_emiss} \end{align} \end{subequations} where $\sigma_{\pm} = (\sigma_x \pm \mathrm{i}\sigma_y)/2$ and $\sigma_x$, $\sigma_y$, $\sigma_z$ are the Pauli operators, and $\gamma_1(t)$, $\gamma_2(t)$ are chosen such that the generators are physical. Importantly, the families $\mathcal L^{(1)}_t$ and $\mathcal L^{(2)}_t$, despite each being commutative, do not commute with one another. They belong to different SSC classes of qubit dynamics (see \appref{app:dyn_gens_qubit_dyns}), namely, random-unitary \cite{Andersson2007,Chruscinski2013} and phase-covariant \cite{Smirne2016} evolutions, respectively. In order to demonstrate the situation indicated in \figref{fig:generatorgeometry}, we construct examples in which both $\mathcal L^{(1)}_t$ and $\mathcal L^{(2)}_t$ are physical, but their sum is not. We take instances of $\gamma_1(t)$ and $\gamma_2(t)$ with one rate being constant (semigroup), and the other taking negative values for some times (non-Markovian) while fulfilling $\int_0^t\mathrm{d}\tau\,\gamma(\tau)\ge0$ of \eqnref{eq:ss_cond_comm} (SSC). Two simple examples are provided by choosing $\gamma_1(t) = \sin(\omega t)$ and $\gamma_2(t) = \gamma$ and \emph{vice versa}, with $\gamma$ and $\omega$ being positive constants. We then consider the generator family $\mathcal L^{(1)}_t + \mathcal L^{(2)}_t$ and solve analytically in \appref{app:counter_gens_add} for the families of maps $\Lambda_t$ that arise in both cases. For each $\Lambda_t$, we compute the eigenvalues of its Choi-Jamio\l{}kowski (CJ) matrix---all of which must be non-negative at all times for the map to be CPTP (see \appref{app:dyn_gen_descr}). We depict them as a function of time in \figref{fig:CJevals} for a choice of parameters which clearly demonstrates that the physicality is, indeed, invalidated at finite times. Their negativity, as demonstrated in \appref{app:counter_gens_add}, can also be verified analytically. In \appref{app:counter_gens_add}, we also consider additional choices of $\gamma_1(t)$ and $\gamma_2(t)$ for the generators \eref{eq:ex_gens}, in order to show that the same conclusion holds when both semigroup and non-Markovian contributions come from explicit microscopic derivations.
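The negativity for the first of these choices can be reproduced with a short numerical sketch (a hypothetical script; the Euler step and time window are our own assumptions, while the generators are those of \eqnref{eq:ex_gens} with $\gamma_1(t)=\sin(2t)$ and $\gamma_2(t)=1$):
\begin{verbatim}
# Hypothetical sketch: propagate the map generated by L1 + L2 with
# gamma1(t) = sin(2t), gamma2(t) = 1, and monitor the eigenvalues of its
# Choi-Jamiolkowski matrix (column-stacking convention for vectorisation).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)          # sigma_-
I2, I4 = np.eye(2, dtype=complex), np.eye(4, dtype=complex)

def dissipator(L):
    """Matrix of rho -> L rho L^dag - {L^dag L, rho}/2 on vec(rho)."""
    K = L.conj().T @ L
    return np.kron(L.conj(), L) - 0.5 * (np.kron(I2, K) + np.kron(K.T, I2))

S1, S2 = dissipator(sx), dissipator(sm)  # dephasing / spontaneous emission

def choi_min_eig(Lam):
    C = np.zeros((4, 4), dtype=complex)
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2), dtype=complex); E[i, j] = 1.0
            blk = (Lam @ E.reshape(4, order='F')).reshape(2, 2, order='F')
            C[2*i:2*i+2, 2*j:2*j+2] = blk
    return np.linalg.eigvalsh(C).min()

dt, T, Lam, worst = 1e-4, 6.0, I4.copy(), 0.0
for n in range(int(T / dt)):
    Lam = Lam + dt * (np.sin(2 * n * dt) * S1 + S2) @ Lam  # Euler step
    if n % 1000 == 0:
        worst = min(worst, choi_min_eig(Lam))
print("most negative Choi eigenvalue:", worst)  # < 0 => map not CPTP
\end{verbatim}
The printed eigenvalue is clearly negative, consistent with \figref{fig:CJevals}(a).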
In those examples, as the generators describe dephasing \eref{eq:gen_deph} and spontaneous-emission \eref{eq:gen_emiss} processes, we consider their non-Markovian forms derived (see \appref{app:qubit_classes}) from spin-boson and Jaynes-Cummings models, respectively \cite{Breuer}. Note that it follows from the above observations that physicality of a non-Markovian QME can be easily broken by addition of even a time-invariant (semigroup) dissipative term. Moreover, one should be extremely careful when dealing with dynamics described by generator families that are not even rescalable (see \appref{app:dyn_gens_rescal}), e.g., ones that exhibit singularities at finite times \cite{Andersson2007}, are derived assuming weak-coupling interactions \cite{Gaspard1999}, or lead to physical (even commutative) but non-SS dynamics \cite{Hall2014}. \begin{figure}[t!] \includegraphics[width=0.8\linewidth]{CJevalplot_simple.png} \caption{% \textbf{Eigenvalues of the CJ matrix as a function of time}, whose negativity demonstrates non-physicality of the dynamical maps generated by $\mathcal L^{(1)}_t + \mathcal L^{(2)}_t$ with $\mathcal L^{(\bullet)}_t$ as defined in \eqnref{eq:ex_gens}. We choose in (a):~$\gamma_1(t) = \sin(2 t)$ and $\gamma_2(t) = 1$;~while in (b):~$\gamma_1(t) = 1/2$ and $\gamma_2(t) = \sin(t)$.} \label{fig:CJevals} \end{figure} \section{Microscopic approach to QME\MakeLowercase{s}} \label{sec:micro} Let us recall that the QME \eref{eq:QME_dyn_gen} constitutes an effective description of the reduced dynamics, whose form must always originate from an underlying physical mechanism responsible for both free (noiseless) and dissipative parts of the system evolution. In particular, given a \emph{microscopic model} one should arrive at \eqnref{eq:QME_dyn_gen} starting from the closed dynamics of the system, its environment, and their interaction, after tracing out the environmental degrees of freedom \cite{Gardiner,Breuer}. \subsection{Microscopic derivation of a QME} \label{sec:QME_micro_der} In a microscopic model of an evolving open quantum system, as illustrated in \figref{fig.settings}(a), one considers a system of interest $S$, coupled to an environment $E$ that is taken sufficiently large for the total system to be closed. The global evolution is then unitary, $U_{SE}(t)=\exp[-\mathrm{i}(H_S + H_E + H_I)t]$, being determined by the free Hamiltonians $H_S$ and $H_E$, and the system-environment interaction $H_I$. In the \emph{Schr\"odinger picture}, the reduced state of the system, ${\rho_S}(t)=\tr_E {\rho_{SE}}(t)$, evolves as \begin{equation} \frac{d}{dt}{\rho_S}(t) = - \mathrm{i} \tr_E \left[H_S + H_E + H_I , {\rho_{SE}}(t)\right], \label{eq:ODE_S+E} \end{equation} where ${\rho_{SE}}$ is the total system-environment state.
If the environment and the system are initially uncorrelated, so that ${\rho_{SE}}(0)={\rho_S}(0)\otimes{\rho_{E}}$, and ${\rho_{E}}$ is stationary, i.e., $\left[H_{E},{\rho_{E}}\right]=0$, \eqnref{eq:ODE_S+E} can be conveniently rewritten as (see also \appref{app:QME_integrodiff}) \cite{Breuer,Rivas2012}: \begin{equation} \frac{d}{dt}\bar{\rho}_S(t) = - \int_0^t ds \tr_E [ \bar{H}_{I}(t) , [ \bar{H}_{I}(s) , \bar{\rho}_{SE}(s) ]], \label{eq:exact_integrodiff_singlebath} \end{equation} where by the bar, $\bar{\bullet} :=\mathrm{e}^{\mathrm{i}\left(H_{S}+H_{E}\right)t}\bullet\,\mathrm{e}^{-\mathrm{i}\left(H_{S}+H_{E}\right)t}$, we denote the \emph{interaction picture} with respect to the free system-environment Hamiltonian $H_S + H_E$. \eqnref{eq:exact_integrodiff_singlebath} constitutes the integro-differential QME discussed at the beginning of \secref{sec:qmes} that, in practice, is typically recast into the time-local form \eref{eq:QME_dyn_gen}, which after returning to the Schr\"{o}dinger picture (see \appref{app:QME_TL}~and~\ref{app:QME_SP}) reads: \begin{equation} \frac{d}{dt}\rho_S(t) = -\mathrm{i}[H_S,\rho_S(t)]+\Lenv_t\!\left[\rho_S(t)\right]. \label{eq:QME} \end{equation} Importantly, $\Lenv_t$ above can be unambiguously identified as the dynamical generator---containing both Hamiltonian and dissipative parts as in \eqnref{eq:QME_dyn_gen}---that arises purely due to the interaction with the environment;~with the system \emph{free} evolution (dictated by the system Hamiltonian $H_S$) being explicitly separated. \begin{figure}[!t] \includegraphics[width=\columnwidth]{settings_hamiltonians.png} \caption{% \textbf{Microscopic description of an open quantum system $S$}, (a):~interacting with a single environment $E$; (b):~simultaneously interacting with multiple, independent environments $E_1$, $E_2$, $E_3$, $\dots$.} \label{fig.settings} \end{figure} \subsubsection{Dependence on the system Hamiltonian} However, as detailed in \appref{app:HS_cov}, despite the separation of terms in \eqnref{eq:QME} the form of the dynamical generator, $\Lenv_t$, may in general strongly depend on the system Hamiltonian $H_S$. Crucially, this means that the evolution of systems with different $H_S$, which interact with the same type of environment, cannot generally be modelled with the same QME after simply changing the $H_S$ in \eqnref{eq:QME}. However, under certain circumstances this can be justified. In \appref{app:QME_HS}, we discuss in detail the natural cases when variations of $H_S$ do not affect the form of $\Lenv_t$ in \eqnref{eq:QME}, yet we summarise them here by the following lemma: ~\\ \begin{lem} Consider a change $H_S \rightarrow H^{\prime}_S(t) = H_S + V(t)$ in \eqnref{eq:ODE_S+E}. The corresponding time-local QME can be obtained by just replacing $H_S$ with $H^{\prime}_S(t)$ in \eqnref{eq:QME} while keeping $\Lenv_t$ unchanged, if at all times $[V(t),H_I] = 0$ and either $[H_S, H_I] = 0$ or $[V(t), H_S] = 0$ (or both). \end{lem} Unfortunately, if the above sufficient condition cannot be met, one must, in principle, rederive the QME \eref{eq:QME} and the corresponding generator $\Lenv_t$ for $H^{\prime}_S(t)$. Moreover, such treatment is required independently of the interaction strength, i.e., also in the weak-coupling regime discussed below. A prominent physical example is provided by coherently driven systems, for which $V(t)$ represents the externally applied force.
In their case, it is common that the time-dependence of $V(t)$ is naturally carried over onto, and significantly amends, the dynamical generator irrespective of the coupling strength \cite{Rivas2010,Rivas2012}. \subsubsection{Generalisation to multiple environments} Another important question one should pose is under what conditions the full derivation of the QME \eref{eq:QME} can also be bypassed when dealing with a system that simultaneously interacts with multiple environments---as depicted in \figref{fig.settings}(b). Motivated by the analysis of \secref{sub:gen_add}, one may then naively expect that, given multiple \emph{additive} generator families describing each separate interaction, $\Lenv[(i)]_t$, they should be simply added to construct the overall QME of the form \eref{eq:QME} with $\Lenv_t=\sum_i\Lenv[(i)]_t$ \footnote{Note that we are not concerned with the internal structure of the system that is crucial when discussing, e.g., additivity of decay rates for a bipartite system with each of its parts coupled to a different reservoir \cite{Yu2006}.}. Such a procedure may, however, lead to incorrect dynamics, as may be demonstrated by considering explicitly the microscopic model that incorporates interactions with multiple environments---with now $H_E = \sum_i H_{E_i}$ and $H_I = \sum_i H_{I_i}$ in \eqnref{eq:ODE_S+E}. Following the derivation steps of the time-local QME \eref{eq:QME}, while assuming its existence both in the presence of each single environment and of all of them together, one arrives at a generalised QME (see also \appref{app:valid_add_gens}): \begin{widetext} \begin{equation} \frac{d}{dt}{\rho_S}(t) =-\mathrm{i}\left[H_{S},{\rho_S}(t)\right]+\sum_{i}\Lenv[(i)]_{t}\!\left[{\rho_S}(t)\right] -\sum_{i\neq j}\int_{0}^{t}\!\!ds\,\mathrm{e}^{-\mathrm{i} H_{S}(t-s)}\tr_{E_{ij}}\left[\bar{H}_{I_{i}}(t-s),\left[H_{I_{j}},\rho_{SE_{ij}}(s)\right]\right]\mathrm{e}^{\mathrm{i} H_{S}(t-s)}, \label{eq:QME_multiple_envs} \end{equation} \end{widetext} where $\bar{H}_{I_{i}}(\tau)=\mathrm{e}^{\mathrm{i}\left(H_{S}+H_{E_{i}}\right)\tau}H_{I_{i}}\mathrm{e}^{-\mathrm{i}\left(H_{S}+H_{E_{i}}\right)\tau}$, $\Lenv[(i)]_t$ is the generator arising when only the $i$th environment is present, $\rho_{SE_{ij}}$ denotes the joint-reduced state of the system and environments $i$ and $j$, while $\tr_{E_{ij}}$ stands for the trace over these environments. Crucially, the naive addition of generators would lead to a QME that contains only the first two terms in \eqnref{eq:QME_multiple_envs}. In particular, it would completely ignore the last term, which we here name the \emph{cross-term}, as it accounts for the cross-correlations that may emerge between any two environments due to their indirect interaction being mediated by the system. \subsection{Microscopic validity of generator addition} \label{sec:gen_add_micro} The generalisation of the QME to multiple environments \eref{eq:QME_multiple_envs} allows one to unambiguously identify when the true dynamics derived microscopically coincides with the evolution obtained by naively adding the generators. \begin{obs} A dynamical generator corresponding to a system simultaneously interacting with multiple environments can be constructed by simple addition of the generators associated with each individual environment iff the cross-term in \eqnref{eq:QME_multiple_envs} identically vanishes.
\label{obs:crossterm_vanish} \end{obs} \noindent % In what follows, we show that \obsref{obs:crossterm_vanish} allows one to prove the validity of generator addition in the weak-coupling regime. However, as the cross-term in \eqnref{eq:QME_multiple_envs} involves a time-convolution integral, in order to prove that it identically vanishes given that the microscopic Hamiltonians satisfy particular commutation relations, we must consider the dynamics in its integrated form---at the level of the corresponding dynamical (CPTP) map. \subsubsection{Weak-coupling regime} \label{app:valid_add_gens_weakcoupling} \begin{lem} The dynamical generator of the evolution of a system simultaneously interacting with multiple environments in the weak coupling regime can be constructed by simple addition of the generators associated with each individual environment. \label{lem:weak_coupling} \end{lem} \noindent % Here, we summarise the proof of \lemref{lem:weak_coupling}, which can be found in \appref{app:valid_add_gens_weakcoupling} and which generalises the argumentation of \refcite{CohenTannoudji1998,Schaller2015} applicable to the more restricted regime in which the Born-Markov approximation is valid. In particular, it applies to any QME valid in the \emph{weak-coupling} regime (cf.~\cite{deVega2017,Breuer,Rivas2012}) which is derived using the ansatz: \begin{equation} {\rho_{SE}}(t) \approx {\rho_S}(t) \otimes \bigotimes_i \varrho_{E_i}(t), \label{eq:weakcopling_appr} \end{equation} where ${\rho_S}(t)$ is the reduced system state at $t$, while $\varrho_{E_i}(t)$ can be arbitrarily chosen for $t>0$---it does not need to represent the reduced state of the $i$th environment, $\rho_{E_i}(t)=\tr_{\neg E_i}{\rho_{SE}}(t)$, as long as it initially coincides with its stationary state, i.e., $\varrho_{E_i}(0)=\rho_{E_i}$. Let us emphasise that the assumption \eref{eq:weakcopling_appr} is employed only at the derivation stage, so that the QME so-obtained, despite correctly reproducing the reduced dynamics of the system under weak coupling, may yield upon integration closed dynamics with overall system-environment states that significantly deviate from the ansatz \eref{eq:weakcopling_appr} and, in particular, its tensor-product structure \cite{Rivas2010}. In \appref{app:valid_add_gens_weakcoupling}, we first employ the operator Schmidt decomposition \cite{Bengtsson2006} to reexpress each of the interaction Hamiltonians in \eqnref{eq:QME_multiple_envs} as $H_{I_i} = \sum_{k} A_{i;k} \otimes B_{k}^{E_i}$, i.e., as a sum of operators that act separately on the system and the corresponding environment. This decomposition, together with the ansatz \eref{eq:weakcopling_appr}, allows us to rewrite the overall QME \eref{eq:QME_multiple_envs} in terms of correlation functions involving only pairs of baths. Furthermore, the tensor-product structure of \eqnref{eq:weakcopling_appr} ensures that each of these reduces to a product of single-bath correlation functions. Hence, as any single-bath (one-time) correlation function can always be assumed to be zero, every summand in the cross-term of the QME \eref{eq:QME_multiple_envs} must independently vanish. Note that, in particular, this holds for all QMEs derived using the \emph{time-convolutionless} approach \cite{Breuer2001} up to second order in all the interaction parameters representing coupling strengths for each environment.
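The key factorisation step can be illustrated with a minimal numerical sketch (toy dimensions and random operators are our own assumptions): under a tensor-product ansatz, any two-bath correlator splits into a product of single-bath averages, and the latter can always be set to zero by shifting each bath coupling operator by its mean---the shift merely renormalises the system Hamiltonian.
\begin{verbatim}
# Assumed toy illustration: for rho_E1 (x) rho_E2, a two-bath correlator
# factorises into one-bath averages; centring B1 -> B1 - <B1> then makes
# the cross-correlator vanish identically.
import numpy as np

rng = np.random.default_rng(3)

def rand_herm(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

def rand_state(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = X @ X.conj().T
    return R / np.trace(R)

d1, d2 = 3, 4
B1, B2 = rand_herm(d1), rand_herm(d2)
r1, r2 = rand_state(d1), rand_state(d2)

joint = np.trace(np.kron(B1, B2) @ np.kron(r1, r2))
split = np.trace(B1 @ r1) * np.trace(B2 @ r2)
print(abs(joint - split))                     # ~1e-16: factorisation holds

B1c = B1 - np.trace(B1 @ r1) * np.eye(d1)     # centred bath operator
print(abs(np.trace(np.kron(B1c, B2) @ np.kron(r1, r2))))   # ~0
\end{verbatim}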
\subsubsection{Commutativity of microscopic Hamiltonians} \begin{figure}[!t] {\centering \includegraphics[width=0.85\linewidth]{venn.png}} \caption{ \textbf{Validity of generator addition as assured by the commutativity of microscopic Hamiltonians}. The QME \eref{eq:QME_multiple_envs} describing the multiple-environment scenario of \figref{fig.settings}(b) is considered. Each set (circle) above indicates that commutativity of the interaction Hamiltonians with:~II -- each other, IS -- the system Hamiltonian, IE -- all the free Hamiltonians of environments; can be assumed.} \label{fig:venn} \end{figure} Next, we investigate the implications that commutativity of the system, environment, and interaction Hamiltonians has on the validity of generator addition. We consider the cases when all $H_{I_i}$ commute with each other (II), with $H_S$ (IS), or with all the $H_{E_i}$ (IE), and summarise the results in \figref{fig:venn}. We find that: \begin{lem} Only when the interaction Hamiltonians commute among themselves and with the system free Hamiltonian, i.e., $[H_{I_i},H_{I_j}]=0$ and $[H_{I_i},H_S]=0$ for all $i,j$;~can the overall QME be constructed by adding dynamical generators associated individually with each environment---ignoring the cross-term in \eqnref{eq:QME_multiple_envs}. \label{lem:commutativity} \end{lem} \noindent % Again, we summarise here the proof of \lemref{lem:commutativity}, which can be found in \appref{app:valid_add_gens_comm}. However, in contrast to the discussion of the weak-coupling regime, we are required to return to the microscopic derivation of the QME \eref{eq:QME_multiple_envs}. Crucially, the commutativity of interaction Hamiltonians with one another as well as with $H_S$---the region $\text{II}\cap\text{IS}$ marked `Yes' in \figref{fig:venn}---assures that the unitary of the global von Neumann equation \eref{eq:ODE_S+E} factorises, i.e.: \begin{equation} U_{SE}(t)\!=\! \mathrm{e}^{-\mathrm{i} (H_S + \sum_i \!H_{E_i} \!+\! H_{I_i}) t} \!=\! \mathrm{e}^{-\mathrm{i} H_S t}\prod_i \mathrm{e}^{-\mathrm{i} (H_{I_i} + H_{E_i})t}. \end{equation} As a result, the system dynamics is described by a product of commuting CPTP maps, $\bar{\rho}_S(t) = \prod_i \tilde{\Lambda}_t^{(i)} [\bar{\rho}_S(0)]$, associated with each individual environment and given by $\tilde{\Lambda}_t^{(i)}[\bullet] = \tr_{E_i}\!\{\mathrm{e}^{-\mathrm{i}(H_{I_i} + H_{E_i})t}\, (\bullet \otimes\rho_{E_i})\, \mathrm{e}^{\mathrm{i}(H_{I_i} + H_{E_i})t}\}$. By differentiating the dynamics with respect to $t$, it is then evident that the QME takes the form \eref{eq:QME_multiple_envs} with each $\Lenv[(i)]_t=\dot{\tilde{\Lambda}}_{t}^{(i)}\circ(\tilde{\Lambda}_{t}^{(i)})^{-1}$ and the cross-term being, indeed, absent. Note that, as all $\Lenv[(i)]_t$ must then represent generator families belonging to a common commutative class, if each of them yields dynamics that is also SS, they all must belong to the same SSC class in \figref{fig:generatorgeometry}. In all other cases marked `No' in \figref{fig:venn}, the commutativity does not ensure that the generators simply add. We demonstrate this by providing explicit counterexamples based on a concrete microscopic model, for which the evolution of the system interacting with each environment separately, as well as all simultaneously, can be explicitly solved.
It is sufficient to do so for the settings in which either all $H_{I_i}$ commute with all $H_{E_i}$ and $H_S$ (intersection $\text{IS}\cap\text{IE}$ in \figref{fig:venn}), or all $H_{I_i}$ commute with each other and all $H_{E_i}$ (intersection $\text{II}\cap\text{IE}$), since it then follows that neither II, IS, nor IE alone can ensure the validity of generator addition. Note that it is known that $[H_I,H_S]=0$ implies that the evolution is CP-divisible \cite{Khalil2014}. Thus, as families of CP-divisible generators are additive (cf.~\figref{fig:generatorgeometry}), our counterexample for $\text{IS}\cap\text{IE}$ below corresponds to a case where generator addition results in dynamics which is physical but does \emph{not} agree with the microscopic derivation. For a setting in which the $H_{I_i}$ commute neither among themselves nor with $H_S$, the validity of adding generators has been discussed in \refcite{Chan2014}. \section{Spin-magnet model} \label{sec:magnets} In this section, we construct counterexamples to the validity of adding generators at the QME level for the relevant cases summarised in \figref{fig:venn}. Inspired by \refcite{Allahverdyan2013,Perarnau2016}, we consider a single \emph{qubit} (spin-$1/2$ particle) in contact with multiple \emph{magnets}---environments consisting of many spin-$1/2$ systems. Within this model, the closed dynamics of the global qubit-magnets system can be solved and, after tracing out the magnets' degrees of freedom, the exact open dynamics of the qubit can be obtained. As a result, we can determine the QME generators describing the dynamics of the qubit when coupled to one or more magnets, so that a comparison with the evolutions obtained by adding the corresponding generators can be explicitly made. \subsection{Magnet as an environment} Within our model, we allow the system free Hamiltonian $H_S$ to be chosen arbitrarily, yet, for simplicity, we take the free Hamiltonian of the environment to vanish, $H_E = 0$. As a result, any initial environment state is stationary, with $[\rho_E,H_E]=0$ trivially for any $\rho_{E}$. The environment is represented by a magnet that consists of $N$ spin-1/2 particles, for which we introduce the magnetisation operator: \begin{equation} \op m = \sum_{n=1}^{N} \sigma_z^{(n)} = \sum_{k=0}^N m_k \, \Pi_k , \end{equation} where $\Pi_k$ is the projector onto the subspaces with magnetisation $m_k$ (i.e., with $k$ spins pointing up). The magnetisation $m_k$ takes $N+1$ equally spaced values between $-N$ and $N$: \begin{equation} m_k = -N +2k \qquad\trm{for}\qquad k=0,...,N. \end{equation} Consistently with \secref{sec:micro}, the initial state of the spin-magnet system reads ${\rho_{SE}}(0) = {\rho_S}(0) \otimes {\rho_{E}}$, where we take the initial magnet state to be a classical mixture of different magnetisations, i.e., \begin{equation} {\rho_{E}} = \sum_{k=0}^{N} q_k \Pi_k. \label{eq:initial_magnet_state} \end{equation} The initial probability for an observation of the magnetisation to yield $m_k$ is then \begin{equation} p(m_k) = \tr[{\rho_{E}} \Pi_k] = q_k \tr \Pi_k. \label{eq:p_(m_k)} \end{equation} In the limit of large $N$, $p(m_k)$ approaches a continuous distribution $p(m)$, whose moments can then be computed as follows: \begin{equation} \sum_{k=0}^N (m_k)^s p(m_k) \quad\underset{N\to\infty}{\longrightarrow}\quad \int_{-\infty}^{\infty} dm \; m^s \; p(m) .
\end{equation} We consider only interaction Hamiltonians which couple the system qubit to the magnet's magnetisation, i.e., \begin{equation} H_I = A \otimes\op m \label{eq:HI_magnet} \end{equation} with $A$ being an arbitrary qubit observable. However, in order to be consistent with \secref{sec:micro}, we must impose that \begin{equation} \tr_E\{H_I\rho_E\}=A \tr\{\op{m}\rho_E\}=A\,\sum_{k=0}^N m_k\, p (m_k)=0, \end{equation} which implies that we must restrict to distributions $p(m_k)$ (and $p(m)$ in the $N\to\infty$ limit) with zero mean. In the examples discussed below, we consider two initial magnetisation distributions for the magnet in the asymptotic $N$ limit. In particular, we consider a \emph{Gaussian} distribution: \begin{align} p(m)= \frac{1}{\sqrt{2 \pi}\sigma} \mathrm{e}^{-\frac{m^2}{2\sigma^2}}, \label{eq.gaussianmagnet} \end{align} which formally corresponds to the asymptotic limit of a magnet being described by a microcanonical ensemble \cite{Allahverdyan2013}---its every spin configuration being equally probable, with $q_k = 1/2^{N}$ in \eqnref{eq:p_(m_k)}, yielding a binomial distribution of magnetisation with variance equal to the number of spins ($\sigma^2=N$). We also consider the case when the magnetisation follows a \emph{Lorentzian} distribution in the $N\to\infty$ limit, i.e.: \begin{align} \label{eq.lorentzianmagnet} p(m)= \frac{\lambda}{\pi (\lambda^2+m^2)} , \end{align} parametrised by the scale parameter $\lambda$ (specifying the half width at half maximum). Given the above initial magnet state \eref{eq:initial_magnet_state} and the interaction Hamiltonian \eref{eq:HI_magnet}, the global system-magnet state constitutes at all times a mixture of states with different magnet magnetisations. In particular, it can be decomposed at any $t\ge0$ as \begin{equation} \label{eq.globalstate} {\rho_{SE}}(t) = \sum_k q_k \rho_S^{(k)}(t) \otimes \Pi_k, \end{equation} where every $\rho_S^{(k)}$ can be understood as the (normalised) state of the system conditioned on the magnet possessing the magnetisation $m_k$. Consequently, the full reduced system state at time $t$ reads \begin{equation} \rho_S(t) = \tr_E {\rho_{SE}}(t) = \sum_k p(m_k) \rho_S^{(k)}(t). \label{eq.systemstate} \end{equation} Crucially, within the model each of the conditional states $\rho_S^{(k)}$ in \eqnref{eq.systemstate} evolves independently. In order to show this, we substitute the system-environment state \eref{eq.globalstate} and the microscopic Hamiltonians into the global von Neumann equation \eref{eq:ODE_S+E} to obtain \begin{align} \dot{\rho}_{SE}(t) &= -\mathrm{i} [H_S + H_I , {\rho_{SE}}(t)] \nonumber\\ & = -\mathrm{i} [H_S\otimes\openone + A\otimes\op m , {\rho_{SE}}(t)] \nonumber\\ & = -\mathrm{i} [H_S\otimes\openone + \sum_k m_k A\otimes\Pi_k, \sum_l q_l \rho_S^{(l)}(t) \otimes \Pi_l ] \nonumber\\ & = -\mathrm{i} \sum_k q_k [H_S + m_k A , \rho_S^{(k)}(t) ] \otimes\Pi_k. \end{align} As no coupling between different magnetisation subspaces (labelled by $k$) is present, after rewriting the l.h.s.~above using \eqnref{eq.globalstate}, one obtains a set of uncoupled differential equations for each conditional state: \begin{equation} \dot{\rho}_S^{(k)}(t) = -\mathrm{i} [H_S + m_k A , \rho_S^{(k)}(t) ], \label{eq:vN_conditional} \end{equation} with the initial condition $\rho_S^{(k)}(0)=\rho_S(0)$ for each $k$.
Hence, every $\rho_S^{(k)}$ evolves unitarily within our model with $U_S^{(k)}(t):=\exp[-\mathrm{i} (H_S+m_kA)t]$, while the overall evolution of the qubit \eref{eq.systemstate} is given by the dynamical map, $\Lambda_t$, corresponding to a mixture of such (conditional) unitary transformations distributed according to the initial magnetisation distribution of the magnet, $p(m_k)$, i.e.: \begin{equation} \rho_S(t) = \Lambda_t[\rho_S(0)]=\sum_k p(m_k)\;U_S^{(k)}(t)\,\rho_S(0)\, U_S^{(k)\dagger}(t). \label{eq:spin_reduced_state} \end{equation} Furthermore, as $\Lambda_t$ constitutes a mixture of unitaries in the model, it must be unital, i.e., for all $t\ge0$:~$\Lambda_t[\openone]=\openone$. \subsubsection{Bloch ball representation} We rewrite the above qubit dynamics employing the Bloch ball representation \cite{Bengtsson2006}, i.e., $\rho \equiv \frac{1}{2}(\openone + \mathbf{r}\cdot{\boldsymbol \sigma})$ with the Bloch vector $\mathbf{r}$ unambiguously specifying a qubit state $\rho$. Then, \eqnsref{eq:vN_conditional}{eq:spin_reduced_state} read, respectively: \begin{equation} \label{eq.commeqn} \dot{\mathbf{r}}^{(k)}(t)\cdot{\boldsymbol \sigma} = -\mathrm{i} [H_S + m_k A \, , \, \mathbf{r}^{(k)}(t)\cdot{\boldsymbol \sigma} ] \end{equation} and \begin{equation} \mathbf{r}(t) = \mat{D}_t\,\mathbf{r}(0) = \left[\sum_k p(m_k)\, \mat{R}^{(k)}(t)\right] \mathbf{r}(0). \label{eq:bloch_vector} \end{equation} The rotation matrices above, $\mat{R}^{(k)}$, constitute the SO(3) representations of the unitaries $U_S^{(k)}\in \text{SU(2)}$ in \eqnref{eq:spin_reduced_state}, and are thus similarly mixed according to $p(m_k)$. The qubit dynamical map, $\Lambda_t$ of \eqnref{eq:spin_reduced_state}, is represented by an affine transformation of the Bloch vector: \begin{equation} \mat{D}_t := \sum_k p(m_k)\, \mat{R}^{(k)}(t) \;\underset{N\to\infty}{\longrightarrow}\; \int_{-\infty}^{\infty}\!\!\! dm \; p(m)\, \mat{R}(m,t), \label{eq:affine_map} \end{equation} which is linear due to $\Lambda_t$ being unital within the magnet model, i.e., does not contain a translation. Now, as the spaces of physical $\Lambda_t$ (dynamical maps) and $\mat{D}_t$ (affine transformations) are isomorphic \cite{Bengtsson2006}, the dynamical generators of the former $\LSP_t:=\dot{\Lambda}_t \circ\Lambda_t^{-1}$ (see \appref{app:dyn_gen_descr}) directly translate onto $\LSPb_t:=\dot{\mat{D}}_t \,\mat{D}_t^{-1}$ of the latter, with the map composition and inversion replaced by matrix multiplication and inversion, respectively. Moreover, as the vector spaces containing families of generators defined in this manner must also be isomorphic, all the notions described in \secref{sub:gen_add}---in particular, rescalability and additivity---naturally carry over. However, in order to define the Bloch ball representation of the environment-induced generator $\Lenv_t$ in the QME \eref{eq:QME}, we must correctly relate it to the interaction and Schr\"{o}dinger pictures of the dynamics, summarised in \appref{app:SP_and_IP}.
In general (see \appref{app:QME_SP} for the derivation from the dynamical maps perspective), the Bloch ball representation of $\Lenv_t$ reads \begin{equation} \Lenvb_t := \LSPb_t - \dot{\mat{R}}_S(t)\,\mat{R}_S(t)^{-1} = \mat{R}_S(t)\, \LIPb_t \,\mat{R}_S(t)^{-1}, \label{eq:Lb_env_ind} \end{equation} where $\mat{R}_S(t)\in\text{SO(3)}$ is the rotation matrix of the Bloch vector that represents the qubit unitary map, $U_S(t):=\exp[-\mathrm{i} H_S t]\in\text{SU(2)}$, induced by the system free Hamiltonian $H_S$. As stated in \eqnref{eq:Lb_env_ind}, $\Lenvb_t$ may be equivalently specified with help of $\LIPb_t:=\dot{\bar{\mat{D}}}_t \,\bar{\mat{D}}^{-1}_t$, i.e., the Bloch ball representation of the dynamical generator defined in the interaction picture, $\LIP_t:=\dot{\bar{\Lambda}}_{t}\circ\bar{\Lambda}_{t}^{-1}$---see also \appref{app:QME_TL} for its formal microscopic definition. Importantly, $\LIPb_t$ may be directly computed for a given $\mat{D}_t$ of \eqnref{eq:affine_map} by first transforming it to the interaction picture, i.e., determining $\bar{\mat{D}}_t:=\mat{R}_S^{-1}(t)\,\mat{D}_t$ that is the Bloch-equivalent of $\bar{\Lambda}_t[\bullet]:=U_S^\dagger(t)\,\Lambda_{t}\!\left[\bullet\right] U_S(t)$ discussed in \appref{app:SP_and_IP}. On the other hand, as $\Lenvb_t$ and $\LIPb_t$ are linearly related via \eqnref{eq:Lb_env_ind}, their vector spaces must be isomorphic. Hence, in what follows, we may equivalently stick to the interaction picture and consider $\LIPb_t$ instead, in particular, $\LIPb[(1)]_t+\LIPb[(2)]_t\;\Leftrightarrow\;\Lenvb[(1)]_t+\Lenvb[(2)]_t$, when verifying the validity of generator addition. \subsubsection{Example: magnet-induced dephasing} \label{sec:magnet_dephasing} To illustrate the model, we first consider the case when it is employed to provide a simple microscopic derivation of the qubit \emph{dephasing dynamics}. We take: \begin{equation} \label{eq.magmodelonebath} H_{E}=0 ,\quad H_S=0 , \quad H_{I} = \frac{1}{2} g \,\sigma_z \otimes \op m, \end{equation} with the system Hamiltonian being absent, so that all the generators, $\LSPb_t=\LIPb_t=\Lenvb_t$ in \eqnref{eq:Lb_env_ind}, become equivalent. From \eqnref{eq.commeqn} we get \begin{equation} \dot{\mathbf{r}}^{(k)}(t)\cdot{\boldsymbol \sigma} = -\frac{\mathrm{i}}{2} g m_{k} [ \sigma_z \, , \, \mathbf{r}^{(k)}(t)\cdot{\boldsymbol \sigma} ] , \end{equation} which just yields rotations of the Bloch ball around the $z$ axis with angular speed depending on the magnetisation $m_k$, i.e., \eqnref{eq:affine_map} with (in Cartesian coordinates) \begin{equation} \label{eq.singlebathsol} \mat{R}(m,t) = \begin{pmatrix} \cos \left(g m t\right) & -\sin \left(g m t\right) & 0 \\ \sin \left(g m t\right) & \cos \left(g m t\right) & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} \end{equation} for the $N\to\infty$ limit. Integrating \eqnref{eq.singlebathsol} over the initial magnetisation distribution $p(m)$, we obtain the affine transformation \eref{eq:affine_map} to asymptotically read \begin{equation} \mat{D}_t = \begin{pmatrix} \mathrm{e}^{- f(t)} & 0 & 0 \\ 0 & \mathrm{e}^{- f(t)} & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}, \end{equation} with $f(t)=\frac{1}{2} \sigma^2 g^2 t^2$ and $f(t)=\lambda g t$ in the cases of the Gaussian \eref{eq.gaussianmagnet} and Lorentzian \eref{eq.lorentzianmagnet} distributions $p(m)$, respectively.
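The Gaussian decay factor above can also be confirmed with a minimal Monte-Carlo sketch (parameter values are illustrative assumptions): the $xx$-element of $\mat{D}_t$ is the average of $\cos(gmt)$ over $p(m)$, which for a zero-mean Gaussian equals $\mathrm{e}^{-\frac{1}{2}\sigma^2g^2t^2}$.
\begin{verbatim}
# Assumed illustrative check of the Gaussian dephasing factor: the xx-entry
# of D_t equals E[cos(g m t)] = exp(-sigma^2 g^2 t^2 / 2) for a zero-mean
# Gaussian magnetisation distribution.
import numpy as np

rng = np.random.default_rng(1)
sigma, g = 2.0, 0.5
m = rng.normal(0.0, sigma, 1_000_000)
for t in (0.5, 1.0, 2.0):
    mc = np.cos(g * m * t).mean()
    exact = np.exp(-0.5 * sigma**2 * g**2 * t**2)
    print(t, mc, exact)       # Monte-Carlo average matches the closed form
\end{verbatim}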
The corresponding generators \eref{eq:Lb_env_ind} in the Bloch ball representation hence take the simple form: \begin{equation} \LSPb_t = \begin{pmatrix} - \gamma(t) & 0 & 0 \\ 0 & - \gamma(t) & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}, \end{equation} which corresponds to the standard \emph{dephasing generator} with a time-dependent rate (as defined in \eqnref{eq:dyn_gens_dephasing} of \appref{app:dyn_gens_qubit_dyns}), i.e.: \begin{equation} \LSP_t[\bullet] = \gamma(t)\,(\sigma_z \bullet \sigma_z - \bullet) \end{equation} with $\gamma(t)=\sigma^2 g^2 t$ in the Gaussian case, and constant (semigroup) $\gamma(t)=\lambda g$ in the Lorentzian case. \subsection{Counterexamples to sufficiency of the commutativity assumptions} \label{subsec:Multiple} We now justify the `No' labels in \figref{fig:venn}. In particular, we provide explicit counterexamples which show that the commutativity assumption---associated with the particular region of the Venn diagram---is generally \emph{not} sufficient for the system dynamics to be recoverable by simple addition of the generators attributed to each of the environments. In order to do so, it is enough to consider the scenario in which the qubit is independently coupled to just two magnets via the mechanism described above. \subsubsection{$\text{IS}\cap\text{IE}$ commutativity assumption} \label{app:counter_ISandIE} We start with an example of dynamics in which the interaction Hamiltonians (trivially) commute with the free system and all the environmental Hamiltonians, but not with each other. In particular, we simply set: \begin{eqnarray} H_S & =& H_{E_1}=H_{E_2}=0, \label{eq.magmodelonlyint} \\ H_{I_1} &=& \frac{1}{2} g_1 \,\sigma_z \otimes \op m_1 , \quad H_{I_2} = \frac{1}{2} g_2\, \sigma_x \otimes \op m_2 \nonumber \end{eqnarray} with subscripts $\{1,2\}$ labelling the first and the second magnet. As in the case of the dephasing-noise derivation above, the system Hamiltonian is absent, so the generators in \eqnref{eq:Lb_env_ind} coincide: $\LSPb_t=\LIPb_t=\Lenvb_t$. In the case of simultaneous coupling to two magnets, \eqnref{eq.globalstate} naturally generalises to \begin{equation} \rho_{SE_1E_2}(t) = \sum_{k,k'} q_{k,k'} \,\rho_S^{(k,k')}(t) \otimes \Pi_k \otimes \Pi_{k'} , \end{equation} where $q_{k,k'}$ now represents the joint probability of finding the first and the second magnet in magnetisations $m_k$ and $m_{k'}$, respectively, while $\rho_S^{(k,k')}(t)$ stands for the corresponding conditional reduced state of the system. Consequently, the (conditional) von Neumann equation \eref{eq.commeqn}, which now must be derived for $H_I=H_{I_1}+H_{I_2}$, describes the dynamics of the Bloch vectors that represent each conditional state, $\rho_S^{(k,k')}(t)$, and are thus also parametrised by the two indices $k$ and $k'$, i.e.: \begin{equation} \dot{\mathbf{r}}^{(k,k')}(t)\cdot{\boldsymbol \sigma} = -\frac{\mathrm{i}}{2}[g_1 m_{1,k} \sigma_z + g_2 m_{2,k'} \sigma_x \, , \, \mathbf{r}^{(k,k')}(t)\cdot{\boldsymbol \sigma} ]. \label{eq:Bloch_dyn_case1} \end{equation} \eqnref{eq:Bloch_dyn_case1} leads to coupled equations in the Cartesian basis, i.e.
(dropping the indices $k,k'$ and the explicit time-dependence for simplicity): \begin{subequations} \label{firsteqsmotion} \begin{align} \dot{r}_x & = - g_1 m_1 r_y , \\ \dot{r}_y & = g_1 m_1 r_x - g_2 m_2 r_z , \\ \dot{r}_z & = g_2 m_2 r_y, \end{align} \end{subequations} which can be analytically solved to obtain the $\mat{R}$-matrix in \eqnref{eq:affine_map}---labelled $\mat{R}_{12}$ to indicate that both magnets are involved. $\mat{R}_{12}(m_1,m_2,t)$ now possesses two magnetisation parameters associated with each of the magnets, and we state its explicit form in \appref{app:IS_IE}. Furthermore, we can straightforwardly obtain the solution of the equations of motion when only one of the magnets is present by simply setting either $g_1=0$ or $g_2=0$ in $\mat{R}_{12}$. In the presence of only the first magnet ($g_2=0$), we recover the magnet-induced dephasing noise described above---with $\mat{R}_{12}(m_1,m_2,t)$ simplifying to $\mat{R}_1(m_1,t)$ that takes exactly the form \eref{eq.singlebathsol}. On the other hand, when only the second magnet is present ($g_1=0$), which couples to the system via $\sigma_x$ rather than $\sigma_z$, see \eqnref{eq.magmodelonlyint}, we obtain $\mat{R}_2(m_2,t)$ as in \eqnref{eq.singlebathsol} but with coordinates cyclically exchanged---see \appref{app:IS_IE} for explicit expressions. We then average each $\mat{R}_\msf{x}$, where $\msf{x}=\{12,1,2\}$ denotes the magnet(s) being present, over the initial magnetisations in the $N\!\to\!\infty$ limit. This way, we obtain the affine maps \eref{eq:affine_map} representing the corresponding qubit dynamics for all the three cases in the asymptotic $N$ limit as: \begin{equation} \mat{D}^{(\msf{x})}_t = \int_{-\infty}^{\infty} \!\!\!\mathrm{d} m_\msf{x} \; p(m_\msf{x}) \; \mat{R}_\msf{x}(m_\msf{x},t), \label{eq.numint} \end{equation} where in the presence of both magnets $m_{12}\equiv(m_1,m_2)$ and $p(m_{12})\equiv p(m_1)\,p(m_2)$;~and we take each $p(m_i)$ to follow a Gaussian distribution \eref{eq.gaussianmagnet} with standard deviation $\sigma_i$. Similarly, we obtain the integral expressions for the time-derivatives of the affine maps, $\dot{\mat{D}}_t^{(\msf{x})}=\int \!\mathrm{d} m_\msf{x}\, p(m_\msf{x})\, \dot{\mat{R}}_\msf{x}(m_\msf{x},t)$, after also computing analytically all the corresponding $\dot{\mat{R}}_{\msf{x}}$. Finally, we choose particular values of $g_1$, $g_2$, $\sigma_1$, $\sigma_2$ and time $t$, in order to numerically compute the integrals over the magnetisation parameters and obtain all $\mat{D}_t^{(\msf{x})}$ and $\dot{\mat{D}}_t^{(\msf{x})}$. This choice then allows us to explicitly construct dynamical generators $\LSPb[(\msf{x})]_t = \dot{\mat{D}}_t^{(\msf{x})}(\mat{D}_t^{(\msf{x})})^{-1}$, which importantly exhibit $\LSPb[(12)]_t \neq \LSPb[(1)]_t + \LSPb[(2)]_t$, see \appref{app:IS_IE}. Hence, we conclude that the commutativity of the interaction Hamiltonians with both system and environment Hamiltonians, but not with each other, \emph{cannot} ensure that the generators simply add at the level of the QME---as denoted by the `No' label in the region of \figref{fig:venn} representing the $\text{IS}\cap\text{IE}$ commutativity assumption.
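The non-additivity found here can also be reproduced qualitatively with a short Monte-Carlo sketch (a hypothetical script; all parameter values are illustrative assumptions rather than those of \appref{app:IS_IE}):
\begin{verbatim}
# Hypothetical Monte-Carlo version of the IS&IE counterexample: average the
# exact conditional Bloch rotations over Gaussian magnetisations and compare
# the joint generator with the sum of the single-magnet ones.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
g1 = g2 = s1 = s2 = 1.0
t, M = 1.0, 10_000
m1, m2 = rng.normal(0, s1, M), rng.normal(0, s2, M)

def A(a, b):   # matrix of the conditional Bloch equation, dr/dt = A r
    return np.array([[0., -g1*a, 0.], [g1*a, 0., -g2*b], [0., g2*b, 0.]])

def gen(c1, c2):
    D, Dd = np.zeros((3, 3)), np.zeros((3, 3))
    for a, b in zip(c1, c2):
        R = expm(A(a, b) * t)    # exact conditional solution R(m1, m2, t)
        D += R
        Dd += A(a, b) @ R        # its time-derivative
    D, Dd = D / len(c1), Dd / len(c1)
    return Dd @ np.linalg.inv(D)

L12 = gen(m1, m2)                # both magnets present
L1 = gen(m1, np.zeros(M))        # only the first magnet  (g2 m2 -> 0)
L2 = gen(np.zeros(M), m2)        # only the second magnet (g1 m1 -> 0)
print(np.round(L12 - (L1 + L2), 2))
\end{verbatim}
For such parameters the printed difference is found to contain entries appreciably different from zero (well beyond the Monte-Carlo noise), signalling the breakdown of additivity.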
\subsubsection{$\text{II}\cap\text{IE}$ commutativity assumption} \label{app:counter_IIandIE} In order to construct an example of reduced dynamics in which, at the microscopic level, the interaction Hamiltonians commute with each other and with all free environmental Hamiltonians but not with the system Hamiltonian, we consider again a two-magnet model but this time set: \begin{eqnarray} H_S &=& \frac{1}{2}\omega \sigma_x, \quad H_{E_1}=H_{E_2}=0, \label{eq.magmodel2} \\ H_{I_1} &=& \frac{1}{2} g_1 \,\sigma_z \otimes \op m_1 , \quad H_{I_2} = \frac{1}{2} g_2\, \sigma_z \otimes \op m_2 . \nonumber \end{eqnarray} In contrast to the previous example, since $H_S\neq0$, in order to investigate the validity of generator addition we must either consider the environment-induced generators $\Lenvb_t$ defined in \eqnref{eq:Lb_env_ind} or the generators $\LIPb_t$ directly computed in the interaction picture. We do the latter and compute both $H_{I_i}$ in the interaction picture, i.e., $\bar{H}_{I_i}(t)$, which are obtained by replacing the $\sigma_z$ Pauli operators in \eqnref{eq.magmodel2} with \begin{equation} {\bar\sigma}_z(t):=\mathrm{e}^{\mathrm{i} H_S t} \sigma_z \mathrm{e}^{-\mathrm{i} H_S t} = \cos(\omega t) \sigma_z + \sin(\omega t) \sigma_y. \end{equation} Importantly, ${\bar\sigma}_z(t)$ should be interpreted as a (time-dependent) operator $A$ in the general expression \eref{eq:HI_magnet} for $H_I$ that, in contrast to the previous case, is now identical for both magnets. Thus, inspecting the general expression for the dynamics \eref{eq.commeqn}, we obtain the equation of motion for the Bloch vector in the interaction picture, $\bar{\mathbf{r}}^{(k,k')}(t):=\mat{R}_S^{-1}(t)\,\mathbf{r}^{(k,k')}(t)$, that represents the qubit state conditioned on the first and second magnet possessing magnetisations $m_k$ and $m_{k'}$, respectively, as \begin{align} \dot{\bar{\mathbf{r}}}^{(k,k')}(t)\cdot{\boldsymbol \sigma} =& -\frac{\mathrm{i}}{2} (g_1 m_{1,k} + g_2 m_{2,k'})\times \label{eq:Bloch_dyn_case2}\\ &\quad\times [\cos(\omega t) \sigma_z + \sin(\omega t) \sigma_y \, , \, \bar{\mathbf{r}}^{(k,k')}(t)\cdot{\boldsymbol \sigma} ] , \nonumber \end{align} which leads to coupled equations (again, dropping the indices $k,k'$ and the explicit time-dependence): \begin{subequations} \label{secondeqsmotion} \begin{align} \dot{\bar{r}}_x & = (g_1 m_1 + g_2 m_2) (\sin(\omega t) \bar{r}_z - \cos(\omega t) \bar{r}_y) , \\ \dot{\bar{r}}_y & = (g_1 m_1 + g_2 m_2) \cos(\omega t) \bar{r}_x , \\ \dot{\bar{r}}_z & = - (g_1 m_1 + g_2 m_2) \sin(\omega t) \bar{r}_x. \end{align} \end{subequations} As before, see \appref{app:II_IE}, we solve the above equations of motion in order to obtain the $\bar{\mat{R}}$-matrix of \eqnref{eq:affine_map} in the interaction picture, i.e., $\bar{\mat{R}}_{12}(m_1,m_2,t)$. Again, by setting either $g_2=0$ or $g_1=0$, we obtain expressions for $\bar{\mat{R}}_1$ and $\bar{\mat{R}}_2$, respectively, corresponding to the cases when only the first or the second magnet is present. We then also compute all $\dot{\bar{\mat{R}}}_\msf{x}$ with $\msf{x}\in\{12,1,2\}$, in order to arrive at integral expressions for both the affine maps and their time-derivatives, i.e., $\bar{\mat{D}}_t^{(\msf{x})}$ and $\dot{\bar{\mat{D}}}_t^{(\msf{x})}$, respectively, computed now in the interaction picture.
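Unlike in the previous case, the Bloch generator here is explicitly time dependent, so each conditional propagator is no longer a single matrix exponential. As a minimal sketch (assuming an illustrative value of $\omega$; the Gaussian averaging and the finite-difference construction of the generators then proceed exactly as in the previous sketch), $\bar{\mat{R}}_{12}(m_1,m_2,t)$ may instead be obtained by direct numerical integration:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

omega = 1.0  # assumed system frequency

def Rbar12(m1, m2, g1, g2, t):
    """Propagator of eqs. (secondeqsmotion): integrates
    dR/ds = G(s) R from R(0) = identity."""
    c = g1 * m1 + g2 * m2
    def rhs(s, Rflat):
        G = c * np.array(
            [[0.0, -np.cos(omega * s), np.sin(omega * s)],
             [np.cos(omega * s), 0.0, 0.0],
             [-np.sin(omega * s), 0.0, 0.0]])
        return (G @ Rflat.reshape(3, 3)).ravel()
    sol = solve_ivp(rhs, (0.0, t), np.eye(3).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(3, 3)
\end{verbatim}
Setting $g_2=0$ or $g_1=0$ in the call reproduces $\bar{\mat{R}}_1$ and $\bar{\mat{R}}_2$, respectively.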
As in the previous example, we take the initial magnetisation distributions of both magnets to be Gaussian and fix all the model parameters (i.e., $\omega$ and $g_1$, $g_2$ for the system and interaction, $\sigma_1$, $\sigma_2$ for the magnets, as well as the time $t$) in order to numerically perform the integration over the magnetisations $m_\msf{x}$. We then find a choice of parameters, see \appref{app:II_IE}, for which the dynamical generators, $\LIPb[(\msf{x})]_t = \dot{\bar{\mat{D}}}_t^{(\msf{x})}(\bar{\mat{D}}_t^{(\msf{x})})^{-1}$, clearly fulfil $\LIPb[(12)]_t \neq \LIPb[(1)]_t + \LIPb[(2)]_t$. Hence, we similarly conclude that the commutativity of all the interaction Hamiltonians with each other and with all the free Hamiltonians of the environments also \emph{cannot} assure the generators to simply add at the level of the QME---proving the `No' label in the region of \figref{fig:venn} representing the $\text{II}\cap\text{IE}$ commutativity assumption. \section{Conclusions} \label{sec:conclusion} We have investigated under what circumstances modifications to open system dynamics can be effectively dealt with at the master equation level by adding dynamical generators. We have identified a condition---semigroup simulability and commutativity preservation---applicable beyond Markovian (CP-divisible) dynamics, which guarantees that generator addition yields physical evolutions. We have also demonstrated, by considering simple qubit generators, that even a mild violation of this condition may yield unphysical dynamics under generator addition. Moreover, even when physically valid, generator addition does not generally correspond to the real evolution derived from a microscopic model describing interactions with multiple environments. We have formulated a general criterion under which the addition of generators associated with each individual environment yields the correct dynamics. We have then shown that this condition is generally satisfied in the weak-coupling regime, whenever it is correct to use a master equation derived assuming a tensor-product ansatz for the global state describing the system and environments. Finally, we have demonstrated that, at the microscopic level, the commutativity of the interaction Hamiltonians among each other and with the system Hamiltonian also ensures that adding the dynamical generators gives the correct dynamics. We believe that our results may prove useful in areas where the master equation description of open quantum systems is a common workhorse, including quantum metrology, thermodynamics, transport, and engineered dissipation. \begin{acknowledgments} We would like to thank L.~Aolita and N.~Bernades for interesting discussions on non-Markovianity that sparked this work, as well as A.~Smirne, M.~Lostaglio, S.~Huelga, L.~Correa and A.~S.~S{\o}rensen for helpful exchanges. J.K.~and B.B.~acknowledge support from the Spanish MINECO (Grant QIBEQI FIS2016-80773-P and Severo Ochoa SEV-2015-0522), Fundaci\'{o} Privada Cellex, and Generalitat de Catalunya (SGR875 and CERCA Program). J.K.~is also supported by the EU Horizon 2020 programme under the MSCA Fellowship Q-METAPP (no. 655161), while B.B.~by the ICFO-MPQ Fellowship. M.P.-L.~also acknowledges support from the Alexander von Humboldt Foundation. \end{acknowledgments}
\section{Introduction} \label{sec:introduction} \par Multi-robot systems consist of networks of robots which cooperate to perform tasks such as consensus and formation control \cite{jadbabaie2003coordination,murray2007recent, olfati2007consensus,tanner2007flocking}. Mobile sensor networks consist of networks of robots equipped with sensors, deployed to perform some distributed sensing task such as monitoring or coverage \cite{cortes2004coverage}. In this paper, we consider the problem of estimating an unknown scalar field using mobile sensor networks. There have been many works related to scalar field estimation in the literature. Several works have studied field estimation using wireless sensor networks; see for example \cite{novak2004a,bajwa2005a}. In \cite{waterschoot2011a} the scalar field is assumed to be modelled by a partial differential equation and finite element methods are used for estimating the field. In \cite{vuran2006a,zhang2009a,dardari2007a,dogandzic2006a, graham2012a,nevat2013a} the field is modelled as a spatial random process and estimated using samples from the sensor nodes. In \cite{ramachandran2017a} field reconstruction is posed as an optimization problem constrained by linear dynamics and a gradient-based method is used to solve the problem. In \cite{bergamo2012a}, the scalar field is assumed to be linearly parameterized in terms of Gaussian basis functions and the measurements from the sensors are fused together to form an estimate of the scalar field. In most of these cases, the sensors are assumed to be fixed and distributed over the region of interest, and usually a large number of sensors must be installed to achieve sufficient spatial resolution. Mobile sensor networks can be highly advantageous here: since the sensors can move around the region of interest and collect measurements adaptively, the number of sensors required is greatly reduced. In \cite{la2013a,la2015a}, scalar field estimation is performed with a mobile sensor network by fusing sensor measurements using consensus filters. In \cite{wu2011a}, information about a scalar field is obtained by exploring the level surfaces of the field using a mobile sensor network. In \cite{zhang2007a}, a static sensor network is used along with a mobile robot to estimate a scalar field by combining the robot measurements with the sensor network measurements and planning the robot trajectory to minimize a reconstruction error. The method we propose in the current work, however, is motivated by the coverage control problem \cite{cortes2004coverage, Schwager2009, rihab2016a, rihab2018a, rihab2018b}. \par In the coverage problem, we are interested in controlling the robots so that they attain an optimal, or near optimal, configuration with respect to a scalar field. In \cite{cortes2004coverage}, this is achieved by minimizing a cost function which gives a measure of how good the coverage is. In \cite{Schwager2009}, the authors extended the coverage algorithm to the case where the scalar field is unknown. The scalar field is assumed to be linearly parameterized with unknown constant parameters. In order to achieve the coverage goal, the robot needs to adapt the unknown parameters so that the estimated scalar field is close to the actual field. The exact (asymptotic) estimation of the density function parameters requires a time-integral quantity to be positive definite, which is a sufficient richness condition on the robot trajectories. See \cite{Schwager2009} for more details.
In general, the robot trajectories need not meet this condition, since they are determined by the gradient of the coverage cost function and not by the goal of estimating the density function parameters. However, it is crucial to estimate the true values of those parameters, since estimation of the unknown scalar field is often the primary objective of a robotic sensor network, and an accurate estimate may lead to more efficient deployment of the robots. For example, in the case of a radiation spill, if we have a good estimate of the radiation concentration, we may directly deploy agents to regions of high concentration. Thus in this work, we look at a slightly different problem, closely related to and motivated by the coverage problem discussed above. Our primary aim in this paper is to accurately estimate the scalar field, not to achieve coverage. The unknown scalar field is approximated using positive definite radial basis functions, and we use an adaptive approach similar to that in \cite{Schwager2009} for parameter estimation. \par In Section \ref{sec:problem}, we discuss the problem statement in detail. In Section \ref{sec:singleagent}, we consider the single mobile sensor case, followed by the mobile sensor network case in Section \ref{sec:multiagent}. In Section \ref{sec:centresnotknown} we discuss the case where the centres of the radial basis functions are not known exactly, but only to within an $\epsilon$-accuracy. We present some simulations to verify the results in Section \ref{sec:simulations}. We conclude the paper with Section \ref{sec:conclusion}. \section{Preliminaries and Problem Statement} \label{sec:problem} We denote the set of positive real numbers by $\mathbb{R}_+$. The components of a vector $v$ are denoted using superscripts, $v^i$. Subscripts on vector quantities refer to the agent or mobile sensor the quantity is associated with; for example, $v_i$ refers to a quantity associated with agent $i$. \par We consider a compact region $\mathcal{Q} \subset \mathbb{R}^n$ with $N$ mobile sensors. The positions of the sensors are denoted by $x_i, \; i=1,2,\dots,N$. There also exists a continuous scalar field $\phi : \mathcal{Q} \to \mathbb{R}_+$ over $\mathcal{Q}$ which is unknown. The objective is to estimate the unknown scalar field using the $N$ mobile sensors, assuming the sensors can measure the value of the scalar field at their respective positions. We assume that the unknown scalar field can be represented by positive definite radial basis functions (RBFs). In other words, we assume the density function can be parameterized as \begin{subequations} \begin{align} \label{eqn:phi1} \phi(q) &= \mathcal{K}(q)^{\scriptscriptstyle\top} a \\ &= \sum_{i=1}^p \mathcal{K}^{i}(q) a^{i} \end{align} \end{subequations} where $a \in \mathbb{R}^p$ is a constant vector, and $\mathcal{K}(q) = \left[ \mathcal{K}^{1}(q) \,\,\, \mathcal{K}^{2}(q) \,\,\, \dots \,\,\, \mathcal{K}^{p}(q) \right]^{\scriptscriptstyle\top}$ with $\mathcal{K}^{i}(q) = \varphi(\|c_i - q\|)$, where $\varphi: \mathbb{R}_+ \to \mathbb{R}_+$ is a radial basis function and the $c_i$ are a set of centre points, so that each $\mathcal{K}^i : \mathcal{Q} \to \mathbb{R}_+$.
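As a concrete illustration, the following minimal Python sketch evaluates such a parameterization on a hypothetical two-dimensional example, using Gaussian kernels of the form introduced below in (\ref{eqn:gaussian}); all numerical values here are assumptions chosen for illustration only.
\begin{verbatim}
import numpy as np

# Assumed centres c_i, widths sigma_i and weights a of a hypothetical field.
centres = np.array([[0.2, 0.25], [0.6, 0.18], [0.7, 0.75]])
sigmas = np.array([0.1, 0.1, 0.1])
a = np.array([2.0, 1.5, 1.2])

def K(q):
    """Kernel vector K(q), with K^i(q) = exp(-||c_i - q||^2 / sigma_i^2)."""
    return np.exp(-np.sum((centres - q) ** 2, axis=1) / sigmas ** 2)

def phi(q):
    """Scalar field phi(q) = K(q)^T a, as in (eqn:phi1)."""
    return K(q) @ a

print(phi(np.array([0.5, 0.5])))
\end{verbatim}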
This parameterization assumption is common in neural networks and is justified as follows: \begin{theorem}[\cite{park1991,lavretsky2013book}] \label{thm:approx1} For any continuous function $f:\mathbb{R}^n \to \mathbb{R}$ and any $\epsilon > 0$, there is an RBF network with $p$ elements and a set of centres $\{c_i\}_{i=1}^p$, such that we can define \begin{align*} \hat{f}(q) &= \sum_{i=1}^p a^{i} \mathcal{K}^{i}(q) = a^{\scriptscriptstyle\top} \mathcal{K}(q) \end{align*} with $\|f-\hat{f}\|_{L_2}^2 \leq \epsilon = \mathcal{O}\left(p^{-\frac{1}{n}}\right)$. \end{theorem} The theorem tells us that we can approximate a continuous function to arbitrary accuracy by using a network of RBF elements. An example of a positive definite radial kernel is the Gaussian kernel, \begin{equation} \mathcal{K}^{i}(q) = \varphi(\|c_i-q\|) = \exp\left\{-\frac{\|c_i-q\|^2}{\sigma_i^2}\right\} \label{eqn:gaussian} \end{equation} where $c_i$ are the centres of the Gaussian kernels. The main problem studied in this work is to determine the parameters $a^i$ so that the scalar field $\phi(\cdot)$ may be accurately reconstructed. We make the following assumption: \begin{assumption} The centres $c_i$ of the radial functions are known to all the mobile agents. \end{assumption} The strengths $a^i$ of the individual radial functions are unknown and need to be estimated. To proceed, we require the following theorem: \begin{theorem}[Micchelli's Theorem \cite{micchelli1986}] \label{thm:approx2} Given $p$ distinct points $c_1,c_2,\dots,c_p$ in $\mathbb{R}^n$, the $p \times p$ matrix $\mathrm{K}$, whose elements are $\mathrm{K}_{ij} = \mathcal{K}^{i}(c_j) = \varphi(\|c_i-c_j\|)$, is non-singular. \end{theorem} The theorem says that for positive definite radial kernels, the $p \times p$ matrix formed by evaluating the radial functions at each of the centres is non-singular. In what follows, we assume that $\phi(\cdot)$ can be \emph{exactly parameterized} by the RBF kernels. A consequence of theorem \ref{thm:approx2} is given below: \begin{lemma} The matrix $S$ given by \begin{equation} S:= \int_{\mathcal{Q}} \mathcal{K}(q) \mathcal{K}(q)^{\scriptscriptstyle\top} dq \end{equation} where $\mathcal{K}(q) = \left[ \mathcal{K}^{1}(q) \,\,\, \mathcal{K}^{2}(q) \,\,\, \dots \,\,\, \mathcal{K}^{p}(q) \right]^{\scriptscriptstyle\top}$ and $\phi$ is parameterized as in (\ref{eqn:phi1}), is positive definite. \label{lemma:Spd} \end{lemma} \begin{proof} From the definition of $S$, it is at least positive semi-definite: for any $v \neq 0$, \[ v^{\scriptscriptstyle\top} S v = \int_{\mathcal{Q}} | \mathcal{K}(q)^{\scriptscriptstyle\top}v |^2 dq \geq 0. \] Now, since $\mathcal{K}(q)$ consists of positive definite radial kernels, we have from theorem \ref{thm:approx2} that \begin{equation*} \left( \begin{array}{cccc} \mathcal{K}^1(c_1) & \mathcal{K}^1(c_2) & \dots & \mathcal{K}^1(c_p) \\ \mathcal{K}^2(c_1) & \mathcal{K}^2(c_2) & \dots & \mathcal{K}^2(c_p) \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{K}^p(c_1) & \mathcal{K}^p(c_2) & \dots & \mathcal{K}^p(c_p) \\ \end{array} \right) \end{equation*} is non-singular. This implies that the vectors $\mathcal{K}(c_j), \,\, j=1,2,\dots,p$, are linearly independent. Thus, given any $v \neq 0$, $v \in \mathbb{R}^p$, there exists some $j \in \{1,2,\dots,p\}$ such that $\mathcal{K}(c_j)^{\scriptscriptstyle\top} v$ is non-zero.
This, along with the fact that $\mathcal{K}(\cdot)$ is continuous, allows us to conclude that \begin{equation*} \int_{\mathcal{Q}} | \mathcal{K}(q)^{\scriptscriptstyle\top} v |^2 dq > 0 \quad \mbox{for any } v \neq 0. \end{equation*} Hence, $S$ is positive definite. \end{proof} \section{Single Mobile Robot Sensor} \label{sec:singleagent} In this section, we consider the case of a single mobile sensor ($N=1$) with position $x(t)$ at time $t$, deployed in the region $\mathcal{Q}$ to estimate the scalar field parameter $a$ (as given by equation \eqref{eqn:phi1}). The estimate of $a$ is denoted by $\hat{a}$. We can then state the following corollary to lemma \ref{lemma:Spd}. \begin{corr} Suppose the mobile sensor moves continuously within the domain $\mathcal{Q}$ such that in time $T$ it passes through each of the RBF centres $c_i, \,\, i=1,2,\dots,p$. Then \begin{equation} \mathcal{S}_T := \int_0^T \mathcal{K}(x(t)) \mathcal{K}(x(t))^{\scriptscriptstyle\top} dt \label{eqn:Tmatrix} \end{equation} is positive definite. \label{corr:pd} \end{corr} \begin{proof} The proof is essentially the same as, and follows from, that of lemma \ref{lemma:Spd}. \end{proof} \noindent Now consider the following integrators running on the mobile sensor: \begin{equation} \begin{aligned} \dot{\Lambda} &= \mathcal{K}(t)\mathcal{K}(t)^{\scriptscriptstyle\top} \\ \dot{\lambda} &= \mathcal{K}(t)\phi(t) \\ \end{aligned} \end{equation} where $\mathcal{K}(t) := \mathcal{K}(x(t))$ denotes the value of the function $\mathcal{K}(\cdot)$ at the point where the robot is at time $t$, and $\phi(t)$ is the value of the density function $\phi(\cdot)$ measured by the robot at time $t$. \begin{prop} \label{prop:1} Suppose the mobile sensor moves such that it passes through each of the centres $c_i, \,\, i=1,2,\dots,p$, in some finite time $T>0$, and during this motion updates its estimate $\hat{a}$ of $a$ by \begin{equation} \dot{\hat{a}} = -\Gamma \left( \Lambda \hat{a} - \lambda \right), \label{eqn:single_updatelaw} \end{equation} where $\Gamma$ is a positive definite gain matrix. Then the estimate $\hat{a}$ is bounded and converges asymptotically to the true value $a$. \end{prop} \begin{proof} Under the assumptions of proposition \ref{prop:1}, corollary \ref{corr:pd} implies that \begin{equation*} S(T):= \int_0^T \mathcal{K}(\tau) \mathcal{K}(\tau)^{\scriptscriptstyle\top} d \tau \end{equation*} is positive definite. This in turn implies that \begin{equation*} S(t) = \int_0^t \mathcal{K}(\tau) \mathcal{K}(\tau)^{\scriptscriptstyle\top} d \tau \end{equation*} is positive definite for all $t \geq T$. \\ Now consider the positive definite candidate Lyapunov function \begin{equation} V = \frac{1}{2} \tilde{a}^{\scriptscriptstyle\top} \Gamma^{-1} \tilde{a} \end{equation} where $\tilde{a} = \hat{a} - a$ is the estimation error. Taking the derivative of $V$, we obtain \begin{equation*} \dot{V} = \tilde{a}^{\scriptscriptstyle\top} \Gamma^{-1} \dot{\hat{a}} \end{equation*} Substituting the update law from (\ref{eqn:single_updatelaw}) and simplifying, we get \begin{equation*} \begin{aligned} \dot{V} &= -\tilde{a}^{\scriptscriptstyle\top} S(t) \tilde{a} \\ \dot{V} &\leq \left\{ \begin{array}{ll} 0 & \mbox{ for } t \in [0,T] \\ -\alpha V & \mbox{ for } t > T, \end{array} \right. \end{aligned} \end{equation*} where $\alpha = \frac{\lambda_{\min}(S(T))}{\lambda_{\max} (\Gamma^{-1})} > 0$, with $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denoting the minimum and maximum eigenvalues of their argument.
Since $V$ is non-increasing and bounded from below, $\tilde{a}(t)$ is bounded for all $t > 0$. Since $\dot{V} \leq -\alpha V$ for all $t \geq T$, we have $V(t) \to 0$ as $t \to \infty$, which implies that $\tilde{a} \to 0$ as $t \to \infty$. \end{proof} \begin{remark} The matrix $S(t)$ being positive definite for all $t \geq T$ is a \emph{sufficient excitation} condition on the robot trajectories, similar to (but weaker than) the persistency of excitation condition, which ensures parameter convergence. See \cite{Schwager2009} for more information. \end{remark} \subsection{Relaxing the condition in corollary \ref{corr:pd}} \label{sec:relaxpd} In corollary \ref{corr:pd}, it was required that the mobile sensor pass through the centres $c_i$ of the radial kernels. This can be relaxed so that the mobile sensor need only move through a sufficiently small neighbourhood of each of the centres $c_i$, as described in \cite{gorinevsky1995persistency}. Consider the vector $\mathcal{X}(q) := \mathrm{K}^{-1} \mathcal{K}(q)$, where $\mathrm{K}$ is the matrix specified in theorem \ref{thm:approx2}. Then $\mathcal{X}(q)$ has the property that $\mathcal{X}^j(c_k) = \delta_{jk}$, where $\delta_{jk}$ is the Kronecker delta and $\mathcal{X}^j(c_k)$ is the $j$-th component of $\mathcal{X}(c_k)$. Now consider the diagonal dominance sets defined by ($0 < \varepsilon < 1$) \[ \mathcal{A}_j^{\varepsilon} := \left\{ q \in \mathcal{Q} \, : \, |\mathcal{X}^j(q)| - \sum_{i=1,i \neq j}^{p} |\mathcal{X}^i(q)| > \varepsilon \right\}. \] Since $\mathcal{X}(\cdot)$ is continuous, each $\mathcal{A}_j^{\varepsilon}$ is an open set, and it is easily seen to contain the centre $c_j$. The following lemma is an adaptation of theorem $1$ in \cite{gorinevsky1995persistency}: \begin{lemma} Suppose that the mobile sensor moves continuously throughout the domain $\mathcal{Q}$ such that in time $T$ the trajectory traverses each of the neighbourhoods $\mathcal{A}_j^{\varepsilon} , \,\, j=1,2,\dots,p$. Then the matrix $\mathcal{S}_T$ given by equation (\ref{eqn:Tmatrix}) is positive definite. \label{lemma:relaxedpd} \end{lemma} \begin{proof} $\mathcal{S}_T$ can be written as $\mathcal{S}_T = \mathrm{K} \bar{\mathcal{S}}_T \mathrm{K}^{\scriptscriptstyle\top}$ where \[ \bar{\mathcal{S}}_T = \int_0^T \mathcal{X}(x(t)) \mathcal{X}(x(t))^{\scriptscriptstyle\top} dt. \] Since $\mathrm{K}$ is invertible, $\mathcal{S}_T$ is positive definite iff $\bar{\mathcal{S}}_T$ is positive definite, and $\bar{\mathcal{S}}_T$ is positive definite iff there exists some $\delta > 0$ such that $\underaccent{\bar}{\sigma}(\bar{\mathcal{S}}_T) \geq \delta$, where $\underaccent{\bar}{\sigma}(A)$ denotes the minimum singular value of $A$. Suppose $\bar{\mathcal{S}}_T$ is not positive definite under the conditions of the lemma. Then there exists no $\delta > 0$ such that $\underaccent{\bar}{\sigma}(\bar{\mathcal{S}}_T) \geq \delta$. This implies that for any $\delta > 0$ there exists $u \neq 0$, $\|u\|=1$, such that $u^{\scriptscriptstyle\top} \bar{\mathcal{S}}_T u < \delta$, i.e., \[ \int_0^T u^{\scriptscriptstyle\top} \mathcal{X}(x(t)) \mathcal{X}(x(t))^{\scriptscriptstyle\top} u \,dt < \delta. \] Let $i$ be the index of the component of $u$ with the largest absolute value, i.e., $|u^i| \geq |u^j|$ for all $j$. Also let $[t_{i1},t_{i2}] \subset [0,T]$ be a subinterval during which the mobile sensor trajectory is contained in the set $\mathcal{A}_i^{\varepsilon}$.
Since the set $\mathcal{A}_i^{\varepsilon}$ is open and the trajectory is continuous, $[t_{i1},t_{i2}]$ has positive length. Then, \begin{align} \int_0^T u^{\scriptscriptstyle\top} & \mathcal{X}(x(t)) \mathcal{X}(x(t))^{\scriptscriptstyle\top} u \,dt = \int_0^T |\mathcal{X}^{\scriptscriptstyle\top}u|^2 \, dt \\ & \geq \int_{t_{i1}}^{t_{i2}} |\mathcal{X}^{\scriptscriptstyle\top}u|^2 \, dt = \int_{t_{i1}}^{t_{i2}} |\sum_{j=1}^p \mathcal{X}^j u^j | ^2 \, dt \\ & \geq \int_{t_{i1}}^{t_{i2}} ( |\mathcal{X}^i u^i| - |\sum_{j=1,j \neq i}^p \mathcal{X}^j u^j| )^2 \, dt \\ & \geq \int_{t_{i1}}^{t_{i2}} ( |\mathcal{X}^i u^i| - \sum_{j=1,j \neq i}^p |\mathcal{X}^j u^j| )^2 \, dt \\ & \geq \int_{t_{i1}}^{t_{i2}} (( |\mathcal{X}^i| - \sum_{j=1,j \neq i}^p |\mathcal{X}^j| ) |u^i| )^2 \, dt \\ & \geq \int_{t_{i1}}^{t_{i2}} \varepsilon^2 |u^i|^2 \, dt = (t_{i2} - t_{i1}) \varepsilon^2 |u^i|^2. \end{align} Since $\|u\|=1$ implies $|u^i|^2 \geq 1/p$, choosing $\delta < \min_i\, (t_{i2} - t_{i1})\, \varepsilon^2 / p$ leads to a contradiction. Therefore, $\bar{\mathcal{S}}_T$ is positive definite, and hence $\mathcal{S}_T$ is positive definite. \end{proof} \subsection*{A sufficient condition for satisfaction of lemma \ref{lemma:relaxedpd}'s assumptions} Checking that the mobile sensor traverses the sets $\mathcal{A}_j^{\varepsilon}$ in lemma \ref{lemma:relaxedpd} involves transforming the vector $\mathcal{K}(q)$ at each instant, which can be cumbersome if the number of parameters is large. We therefore present a simpler sufficient condition which ensures that a given point is inside the set $\mathcal{A}_j^{\varepsilon}$. Note that the condition derived is not equivalent to the condition of the lemma, but only sufficient, and can thus be conservative; however, it is useful in implementations. \begin{lemma} Given the mobile sensor position $x$, if \begin{equation} \|\mathcal{K}(x) - \mathcal{K}(c_j)\|_{\infty} < \frac{(1 - \varepsilon)}{2 (p-1) \|\mathrm{K}^{-1}\|_{\infty}}, \end{equation} then $x \in \mathcal{A}_j^{\varepsilon}$. \end{lemma} \begin{proof} The $i$-th component of $\mathcal{X}(x)$ is $\mathcal{X}^i(x) = \left[ \mathrm{K}^{-1} \mathcal{K}(x) \right]^i$. Then \begin{equation} \mathcal{X}^i(x) - \mathcal{X}^i(c_j) = \left[ \mathrm{K}^{-1} (\mathcal{K}(x) - \mathcal{K}(c_j)) \right]^i \end{equation} Now consider the mapping \begin{equation} \left[ \begin{array}{c} y_1 \\ y_2 \end{array} \right] = B_j \left( \mathcal{X}(x) - \mathcal{X}(c_j) \right) \end{equation} where \begin{equation} B_j = \left[ \begin{array}{ccccccc} 0 & \dots & 0 & 1 & 0 & \dots & 0 \\ 1 & \dots & 1 & 0 & 1 & \dots & 1 \end{array} \right] \end{equation} with the $1$ in the first row and the $0$ in the second row occurring at the $j$-th column. If the infinity norm of $y = [y_1, y_2]^{\scriptscriptstyle\top}$ satisfies $\|y\|_{\infty}<(1-\varepsilon)/2$, then it is guaranteed that $x \in \mathcal{A}_j^{\varepsilon}$.
We also have \begin{align} \|y\|_{\infty} & \leq \|B_j\|_{\infty} \|\mathcal{X}(x) - \mathcal{X}(c_j)\|_{\infty} \\ & \leq \|B_j\|_{\infty} \|\mathrm{K}^{-1}\|_{\infty} \|\mathcal{K}(x) - \mathcal{K}(c_j)\|_{\infty} \end{align} Requiring the above bound to be less than $\frac{(1-\varepsilon)}{2}$ and noting that $\|B_j\|_{\infty} = (p-1)$, we have \begin{equation} \|\mathcal{K}(x) - \mathcal{K}(c_j)\|_{\infty} < \frac{(1 - \varepsilon)}{2 (p-1) \|\mathrm{K}^{-1}\|_{\infty}} \end{equation} \end{proof} Any point $x$ which satisfies the above condition will lie in the set $\mathcal{A}_j^{\varepsilon}$, although not all points in $\mathcal{A}_j^{\varepsilon}$ are characterized by the above condition. \section{Mobile Sensor Network} \label{sec:multiagent} Suppose that we have $N$ mobile sensors deployed in the region $\mathcal{Q}$, with the position of the $i$-th mobile sensor denoted by $x_i$, and we want to estimate the function $\phi: \mathcal{Q} \to \mathbb{R}_+$ collectively. We assume that equation (\ref{eqn:phi1}) holds, so that we can linearly parameterize $\phi(\cdot)$ in terms of radial basis functions. We partition the region into $N$ components $\mathcal{Q}_i \,\, (i=1,2,\dots,N)$. Correspondingly, we partition the basis function vector $\mathcal{K}(q)$ and the parameter vector $a$ as \begin{equation} \mathcal{K}(q) = \left[ \begin{array}{c} \mathcal{K}^{(1)}(q) \\ \mathcal{K}^{(2)}(q) \\ \vdots \\ \mathcal{K}^{(N)}(q) \end{array} \right], \qquad a = \left[ \begin{array}{c} a^{(1)} \\ a^{(2)} \\ \vdots \\ a^{(N)} \end{array} \right] \label{eqn:partition} \end{equation} Each region $\mathcal{Q}_i$ contains the centres of the basis functions in the sub-vector $\mathcal{K}^{(i)}$. We assign each region $\mathcal{Q}_i$ to one of the mobile sensors, within which that sensor operates. This assignment is permanent: each mobile sensor starts within its region $\mathcal{Q}_i$ and moves in $\mathcal{Q}_i$. The algorithms presented below do not depend on any particular partition or assignment of mobile sensors, and this can be done arbitrarily. One particular method to divide the region and assign the sensors will be discussed in Section \ref{sec:simulations}. Assuming the region $\mathcal{Q}$ is partitioned and the mobile sensors are assigned to each partition, we consider the graph $\mathcal{G}$ with vertices representing the mobile sensors and an edge existing between two sensors if they belong to adjacent partitions. By adjacent partitions, we mean two partitions which share a portion of their boundaries of non-zero length. See figure \ref{fig:estimation_distributed} for an illustration. \begin{figure} \centering \includegraphics[scale=1.8]{estimation_distributed2.pdf} \caption{Illustration of four mobile sensors with a partition of domain $\mathcal{Q}$: a graph with the mobile sensors as nodes and edges between neighbouring sensors is also depicted.} \label{fig:estimation_distributed} \end{figure} Now we consider two cases: (1) each mobile sensor estimates the entire parameter vector, and (2) each mobile sensor estimates only part of the parameter vector. \subsection{Each mobile sensor estimates the full parameter vector} \label{sec:ntwk_full} \par \noindent In this subsection, we consider the case where each mobile sensor estimates the entire parameter vector, the estimate of sensor $i$ being denoted by $\hat{a}_i$.
To proceed, we consider the following integrators running on mobile sensor $i$: \begin{align} \dot{\Lambda}_i &= \mathcal{K}_i(t) \mathcal{K}_i(t)^{\scriptscriptstyle\top} \\ \dot{\lambda}_i &= \mathcal{K}_i(t) \phi_i(t) \end{align} where $\mathcal{K}_i(t) = \mathcal{K}(x_i(t))$ and $\phi_i(t) = \phi(x_i(t))$ is the measurement of $\phi(\cdot)$ obtained by sensor $i$ at its location at time $t$. \par \noindent We consider the following update law for the parameter estimate of mobile sensor $i$: \begin{equation} \dot{\hat{a}}_i = -\Gamma \left( \Lambda_i \hat{a}_i - \lambda_i \right) - \Gamma \zeta \sum_{j=1}^N l_{ij} \left( \hat{a}_i - \hat{a}_j \right) \label{eqn:adaptationlaw_estimationmultiagent1} \end{equation} with $\hat{a}_i(0)$ arbitrary, where $\zeta$ is a positive constant and $l_{ij}$ is the weight of the edge between sensors $i$ and $j$; the weight $l_{ij}$ is zero if there is no edge between sensors $i$ and $j$, and positive otherwise. The first term corresponds to the measurement update of mobile sensor $i$, and the second term is a consensus term which ensures that the estimates of all the mobile sensors asymptotically agree or come close to each other. This is critical in establishing the convergence of the estimation error, as will be shown below. \begin{lemma} \label{lemma:estimation_multiagent1} Suppose the mobile sensors move continuously such that in some time $T>0$ each sensor $i$ passes through each of the centres in the region $\mathcal{Q}_i$, so that \[ \int\limits_{0}^T \mathcal{K}_i^{(i)}(t) \mathcal{K}_i^{(i)}(t)^{\scriptscriptstyle\top} dt > 0, \quad \mbox{for} \,\, i=1,2,\dots,N, \] where $\mathcal{K}_i^{(i)}(t)$ denotes the part of the vector $\mathcal{K}_i(t)$ corresponding to the partition \eqref{eqn:partition}. Then we have \[ \sum_{i=1}^N \int\limits_{0}^T \mathcal{K}_i(t) \mathcal{K}_i(t)^{\scriptscriptstyle\top} dt > 0. \] \end{lemma} \begin{proof} Since each mobile sensor $i$ passes through the centres in the region $\mathcal{Q}_i$, the union of the trajectories of all mobile sensors covers all the centres, which implies that the matrix \begin{equation} \sum_{i=1}^N \int\limits_{0}^T \mathcal{K}_i(t) \mathcal{K}_i(t)^{\scriptscriptstyle\top} dt \label{eqn:matrix-lambdai} \end{equation} is positive definite, using the same arguments as in the proofs of corollary \ref{corr:pd} and lemma \ref{lemma:Spd}. \end{proof} \begin{remark} Lemma \ref{lemma:estimation_multiagent1} states that each agent passing through the centres in its own partition $\mathcal{Q}_i$ is sufficient to ensure that the total sum matrix (\ref{eqn:matrix-lambdai}) is positive definite. \end{remark} \par \noindent Now we have the following result: \begin{theorem} \label{thm:estimation_multiagent1} Suppose the $N$ mobile sensors adopt the parameter adaptation law (\ref{eqn:adaptationlaw_estimationmultiagent1}). Further assume that each mobile sensor $i$ traverses a trajectory going through all the basis function centres in $\mathcal{Q}_i$ in some finite time $T>0$. Then \begin{equation} \lim_{t \to \infty} \left( \hat{a}_i - a \right) = 0 \end{equation} for each $i \in \{1,2,\dots,N\}$, i.e., the mobile sensors arrive at a common value for the parameters, the common value being the true parameter value. \end{theorem} \begin{proof} Consider the function \begin{equation} V = \frac{1}{2} \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \Gamma^{-1} \tilde{a}_i.
\end{equation} Taking the derivative of $V$, \begin{align*} \dot{V} &= \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \Gamma^{-1} \dot{\hat{a}}_i \\ &= -\sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \left( \Lambda_i \hat{a}_i - \lambda_i \right) - \zeta \sum_{i=1}^N \sum_{j=1}^N \tilde{a}_i^{\scriptscriptstyle\top} l_{ij} \left( \hat{a}_i - \hat{a}_j \right) \end{align*} Substituting for the variables $\Lambda_i$, $\lambda_i$ and rearranging the second term, \begin{align} \dot{V} &= -\sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t \mathcal{K}_i(\tau) \mathcal{K}_i^{\scriptscriptstyle\top}(\tau) d\tau \, \tilde{a}_i - \zeta \sum_{\alpha=1}^p \hat{a}^{{\alpha}^{\scriptscriptstyle\top}} L \hat{a}^{\alpha}\\ & \leq 0, \end{align} where $\hat{a}^{\alpha} = [ \hat{a}_1^{\alpha} \, \hat{a}_2^{\alpha} \, \dots \, \hat{a}_N^{\alpha}]^{\scriptscriptstyle\top}$ is the vector of all the sensors' estimates of parameter $\alpha$ stacked together. The function $V$ is lower bounded and non-increasing, and therefore tends to a limit. This implies that $\dot{V}$ is integrable and also that the estimates $\hat{a}_i$ are bounded. $\dot{V}$ is also uniformly continuous, since the derivative of each term in $\dot{V}$ is bounded. Using Barbalat's lemma, we conclude that $\dot{V}$ tends to zero as $t \to \infty$. From the second term in $\dot{V}$, noting that $L$ is the Laplacian matrix of the connected graph $\mathcal{G}$, whose nullspace is $\{k\mathbf{1} : k \in \mathbb{R}\}$ with $\mathbf{1}$ the vector of ones, we see that as $t \to \infty$, $\hat{a}^{\alpha} \to k_{\alpha} \mathbf{1}$ for some $k_{\alpha}$. Then, \[ \lim_{t \to \infty} \left( \hat{a}_i - \hat{a}_j \right) = 0, \] since $\hat{a}_i = [ \hat{a}_i^{1} \, \hat{a}_i^{2} \, \dots \, \hat{a}_i^{p}]^{\scriptscriptstyle\top}$. Now from the first term of $\dot{V}$ we have, as $t \to \infty$, \[ - \tilde{a}^{\scriptscriptstyle\top} \sum_{i=1}^N \int\limits_0^t \mathcal{K}_i(\tau) \mathcal{K}_i^{\scriptscriptstyle\top}(\tau) d\tau \, \tilde{a} = 0, \] where $\tilde{a}$ is the common value to which the mobile sensors' parameter estimation errors $\tilde{a}_i$ converge. Then, using lemma \ref{lemma:estimation_multiagent1}, it follows that $\lim_{t \to \infty} \tilde{a} = 0$, and the parameter estimates converge to the true parameter values. \end{proof} \begin{remark} Although lemma \ref{lemma:estimation_multiagent1} and theorem \ref{thm:estimation_multiagent1} require that the mobile sensors move through the centres, the relaxation given in Section \ref{sec:relaxpd} (requiring that the mobile sensors move only through the neighbourhoods $\mathcal{A}_j^{\varepsilon}$ of the centres) also applies here, as well as in all the following results which require the sensors to move through the centres. \end{remark} \subsection{Each mobile sensor estimates only part of the parameter vector} \label{sec:ntwk_part} If the number of parameters $p$ is large, as could be the case when the density function is completely unknown, each mobile sensor estimating the entire parameter vector could be computationally intensive, as it would require computing $\left( \frac{p(p+1)}{2} + p \right)$ filter variables in addition to the $p$ parameter estimates. In such cases it would be beneficial to have each mobile sensor estimate only part of the parameter vector. Suppose each mobile sensor $i$ is to estimate only the part $a^{(i)}$ of the $a$-vector given by (\ref{eqn:partition}). In this subsection, we use $\hat{a}_i$ to denote the estimate of $a^{(i)}$ by sensor $i$.
We write \begin{align} \phi(q) &= \mathcal{K}(q)^{\scriptscriptstyle\top} a \\ &= \mathcal{K}^{(i)}(q)^{\scriptscriptstyle\top} a^{(i)} + \bar{\mathcal{K}}^{(i)}(q)^{\scriptscriptstyle\top} \bar{a}^{(i)}, \end{align} where $\mathcal{K}(q)$ and the parameter vector $a$ are partitioned appropriately. Since mobile sensor $i$'s measurement is denoted by $\phi_i(t) := \phi(x_i(t))$, we have \begin{align} \phi_i(t) &= \mathcal{K}_i^{(i)}(t)^{\scriptscriptstyle\top} a^{(i)} + \bar{\mathcal{K}}_i^{(i)}(t)^{\scriptscriptstyle\top} \bar{a}^{(i)} \\ &= \mathcal{K}_i^{(i)}(t)^{\scriptscriptstyle\top} a^{(i)} + \Delta \phi_i(t) \end{align} where $\mathcal{K}_i(t) := \mathcal{K}(x_i(t))$ and $\Delta \phi_i(t) := \bar{\mathcal{K}}_i^{(i)}(t)^{\scriptscriptstyle\top} \bar{a}^{(i)}$. The basis functions in $\bar{\mathcal{K}}_i^{(i)}(t)$ are centred outside the region $\mathcal{Q}_i$, and thus their values at the points $x_i(t)$ are assumed to be small. Under this condition, we treat the contribution to $\phi(\cdot)$ from these terms as a disturbance $\Delta \phi_i(t)$. \par \noindent Let $C = \{c_1,c_2,\dots,c_p\}$ be the set of centres of the basis functions and $C_i \subset C$ be its subset belonging to $\mathcal{Q}_i$. We can then bound $\Delta \phi_i(t)$ as follows: \begin{lemma} For each mobile sensor $i$, $i \in \{1,2,\dots,N\}$, \begin{equation} |\Delta \phi_i(t)| \leq p \delta_i a_{\max}, \end{equation} where, for the Gaussian kernels (\ref{eqn:gaussian}), $\delta_i := \max\limits_{j \in \{1,\dots,p\} } \exp\left\{-\frac{d_i^2}{\sigma_j^2}\right\}$, $d_i := \mbox{dist}(C_i,C \setminus C_i)$, $\mbox{dist}(A,B) = \min\limits_{a \in A, b \in B} \|a-b\|$, and $a_{\max}$ is an upper bound for the parameters, i.e., $|a^i| \leq a_{\max} \,\, \forall i \in \{1,2,\dots,p\}$. \\ Further, the bound can be made independent of $i$ as follows: \begin{equation} |\Delta \phi_i(t)| \leq p \delta a_{\max}, \end{equation} where $\delta = \max\limits_{i \in \{1,\dots,N\} } \delta_i$. \end{lemma} \begin{proof} The lemma follows from the definition of $\Delta \phi_i(t)$ using the Cauchy--Schwarz inequality. \end{proof} \par \noindent We again define the following integrators: \begin{align} \dot{\Lambda}_i &= s \mathcal{K}_i^{(i)} \mathcal{K}_i^{{(i)}^{\scriptscriptstyle\top}} \\ \dot{\lambda}_i &= s \mathcal{K}_i^{(i)} \phi_i \end{align} where $s$ is a switching signal which takes values in the set $\{0,1\}$. Consider the following adaptation law: \begin{equation} \dot{\hat{a}}_i = -\Gamma \left( \Lambda_i \hat{a}_i - \lambda_i \right) \label{eqn:adaptationlaw_estimationmultiagent2} \end{equation} Then we have the following result: \begin{theorem} \label{thm:estimation_multiagent2} Suppose the $N$ mobile sensors implement the parameter adaptation law (\ref{eqn:adaptationlaw_estimationmultiagent2}), with each sensor $i$ estimating only the part $a^{(i)}$ of the full parameter vector. Further assume that each mobile sensor $i$ produces a trajectory going through all the basis function centres in $\mathcal{Q}_i$ in time $T>0$. Then \[ \lim_{t \to \infty} \|\hat{a}_i(t) - a^{(i)}\| \leq r_i, \] where $r_i = \frac{Tp \delta_i a_{\max}}{\alpha \eta_i}$, $a_{\max}$ is the upper bound on the parameter values in $a^{(i)}$, $\alpha \in (0,1)$, and $\eta_i$ is the smallest eigenvalue of the matrix $\int_0^T \mathcal{K}_i^{(i)} \mathcal{K}_i^{{(i)}^{\scriptscriptstyle\top}} d\tau $.
\end{theorem} \begin{proof} Consider \begin{equation} V = \frac{1}{2} \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \Gamma^{-1} \tilde{a}_i \end{equation} Taking the derivative, \begin{align} \dot{V} &= - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \left( \Lambda_i \hat{a}_i - \lambda_i \right) \\ &= - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \mathcal{K}_i^{(i)} \left( {\mathcal{K}_i^{(i)}}^{\scriptscriptstyle\top} \hat{a}_i - {\mathcal{K}_i^{(i)}}^{\scriptscriptstyle\top} a^{(i)} - \Delta \phi_i \right) d \tau \\ &= - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \mathcal{K}_i^{(i)} \left( {\mathcal{K}_i^{(i)}}^{\scriptscriptstyle\top}\tilde{a}_i - \Delta \phi_i \right) d \tau \\ &= - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \mathcal{K}_i^{(i)} {\mathcal{K}_i^{(i)}}^{\scriptscriptstyle\top} d \tau \, \tilde{a}_i + \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \mathcal{K}_i^{(i)} \Delta \phi_i d \tau \end{align} Setting $s=1$ for $t \leq T$ and $s=0$ for $t > T$, the first term becomes negative definite for $t \geq T$, and we have \begin{equation} \dot{V} = - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^T \mathcal{K}_i^{(i)} {\mathcal{K}_i^{(i)}}^{\scriptscriptstyle\top} d \tau \, \tilde{a}_i + \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^T \mathcal{K}_i^{(i)} \Delta \phi_i d \tau \end{equation} for $t>T$. Then \begin{align} \dot{V} &\leq -\sum_{i=1}^N \eta_i \|\tilde{a}_i\|^2 + \sum_{i=1}^N \|\tilde{a}_i\| Tp \delta_i a_{\max} \\ &\leq -\kappa V - \sum_{i=1}^N \|\tilde{a}_i\| \left(\alpha \eta_i \|\tilde{a}_i\| - Tp \delta_i a_{\max}\right) \end{align} where $\kappa = \frac{\eta_{\min}}{\lambda_{\max}(\Gamma^{-1})}$, with $\eta_{\min} = \min_i \eta_i$, and $\alpha \in (0,1)$. Thus for $\|\tilde{a}_i\| > r_i$ we have $\dot{V} \leq -\kappa V$, and $V$ decays exponentially; therefore the statement of the theorem holds. \end{proof} \subsection{Improving the steady state error} \label{sec:improve_steadystateerror} In this section, we propose a modification that improves the steady state error of the strategy in theorem \ref{thm:estimation_multiagent2}. Note that the strategy in theorem \ref{thm:estimation_multiagent2} is completely decentralized, in that no real-time communication between the mobile sensors is required to implement it. On the other hand, we can obtain better parameter estimates at the cost of exchanging information about parameter estimates with other mobile sensors. \par The term $\Delta \phi_i(t)$ depends on the true values of the parameters corresponding to the other mobile sensors (denoted $\bar{a}^{(i)}$). Since we do not know the true values, we cannot cancel this term and instead treat it as a disturbance. However, the other mobile sensors maintain estimates of the components of $\bar{a}^{(i)}$, and we can use these estimates to reduce the effect of the $\Delta \phi_i(t)$ term on the estimation algorithm. Note that the vector $\bar{a}^{(i)}$ consists of the sub-vectors $a^{(j)}$ for all $j \neq i$. Now, corresponding to each $a^{(i)}$, we construct a directed graph with a rooted outbranching (see \cite{mesbahi2010graph}), denoted $\mathcal{G}_i$, which is a sub-graph of the undirected graph $\mathcal{G}$ with mobile sensor $i$ as the root node. An illustration is shown in figure \ref{fig:partition2}.
\begin{figure} \centering \includegraphics[scale=1.8]{estimation_distributed.pdf} \caption{Illustration of four mobile sensors with the directed graphs corresponding to $a^{(1)}$ and $a^{(4)}$.} \label{fig:partition2} \end{figure} \par For each mobile sensor $i$, we introduce additional states $b_i^j$ for each $j \in \{1,2,\dots,N\}$, $j \neq i$, which evolve according to \begin{equation} \dot{b}_i^j = - \sum_{k=1}^N l^d_{ik} \left( b_i^j - b_k^j \right) \end{equation} where we define $b_i^i := \hat{a}_i$ for ease of notation, and $l^d_{ik}$ is zero if there is no directed edge from node $i$ to node $k$ in the graph $\mathcal{G}_j$, and a positive constant otherwise. This implements a directed consensus protocol on the variables $b_i^j$, $i=1,2,\dots,N$ (see \cite{mesbahi2010graph}), converging to the root value $b_j^j = \hat{a}_j$ for each $j$. Thus $b_i^j$ is an estimate of $\hat{a}_j$ held by mobile sensor $i$. We now use the modified integrators: \begin{align} \dot{\Lambda}_i &= s \mathcal{K}_i^{(i)} \mathcal{K}_i^{{(i)}^{\scriptscriptstyle\top}} \\ \dot{\lambda}_i &= s \mathcal{K}_i^{(i)} \left( \phi_i - \bar{\mathcal{K}}_i^{{(i)}^{\scriptscriptstyle\top}} b_i \right) \end{align} where $b_i$ is the concatenated vector $b_i = \left[ b_i^{1^{\scriptscriptstyle\top}} \dots b_i^{j^{\scriptscriptstyle\top}} \dots b_i^{N^{\scriptscriptstyle\top}} \right]^{\scriptscriptstyle\top}$ (the block $j=i$ not included). Using the adaptation law (\ref{eqn:adaptationlaw_estimationmultiagent2}), we can see that the disturbance term now becomes \begin{equation} \Delta \phi_i'(t) := \bar{\mathcal{K}}_i^{(i)}(t)^{\scriptscriptstyle\top} (\bar{a}^{(i)}-b_i), \end{equation} which is expected to be smaller than $\Delta \phi_i(t)$, although we cannot establish a theoretical bound better than the $r_i$ of theorem \ref{thm:estimation_multiagent2}. The stability and convergence of the above modification are not proved here, as the analysis is essentially similar to that in the previous section. We will investigate the effect of the above modification in Section \ref{sec:simulations}. \section{Unknown Centres} \label{sec:centresnotknown} In this section, we assume as before that the scalar field is a finite linear combination of radial basis functions. We further assume that the centres are not exactly known, but only known to within an accuracy of $\epsilon_c$, i.e., $\|\hat{c}_i - c_i\| \leq \epsilon_c$, where $\hat{c}_i$ denotes the known (approximate) centre. We will evaluate the quality of the parameter estimates in this case. Define \[ \tilde{\mathcal{K}}(q) = \hat{\mathcal{K}}(q) - \mathcal{K}(q) \] where $\hat{\mathcal{K}}(q)$ is the RBF vector evaluated using the known values of the centres and $\mathcal{K}(q)$ corresponds to the true values of the centres. \subsection{Each mobile sensor estimates only a part of the parameter vector} As in Section \ref{sec:ntwk_part}, we assume that each mobile sensor estimates the part of the parameter vector $a^{(i)}$ corresponding to the partition $\mathcal{Q}_i$. In this case we propose the following modified filters: \begin{align} \label{eqn:unknowncentres1_filters1} \dot{\Lambda}_i &= s\hat{\mathcal{K}}_i^{(i)} \hat{\mathcal{K}}_i^{{(i)}^{\scriptscriptstyle\top}} \\ \dot{\lambda}_i &= s\hat{\mathcal{K}}_i^{(i)} \phi_i \label{eqn:unknowncentres1_filters2} \end{align} with equation (\ref{eqn:adaptationlaw_estimationmultiagent2}) as the adaptation law. Then we have the following result.
\begin{prop} Assume the centres are only known to within an accuracy of $\epsilon_c$ ($\|\hat{c}_i - c_i\| < \epsilon_c$), and let each mobile sensor pass through the set of known (inaccurate) centres $\hat{c}_i$ in $\mathcal{Q}_i$. If each mobile sensor implements the adaptation law \eqref{eqn:adaptationlaw_estimationmultiagent2} along with \eqref{eqn:unknowncentres1_filters1}--\eqref{eqn:unknowncentres1_filters2}, the estimation error $\tilde{a}_i$ converges to within a bound $r_i$ of the origin, where $r_i = \frac{Tp a_{\max} (\sqrt{p} k \epsilon_c + \delta_i)}{\alpha \eta_i}$. \end{prop} \begin{proof} Consider the same Lyapunov function as before, \[ V = \frac{1}{2} \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \Gamma^{-1} \tilde{a}_i. \] Taking the time derivative, \begin{align*} \dot{V} &= - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \left( \Lambda_i \hat{a}_i - \lambda_i \right) \\ &= - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \hat{\mathcal{K}}_i^{(i)} \left( {\hat{\mathcal{K}}_i^{(i)}}^{\scriptscriptstyle\top} \hat{a}_i - {\mathcal{K}_i^{(i)}}^{\scriptscriptstyle\top} a^{(i)} - \Delta \phi_i \right) d \tau \\ &= - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \hat{\mathcal{K}}_i^{(i)} {\hat{\mathcal{K}}_i^{(i)}}^{\scriptscriptstyle\top} d \tau \, \tilde{a}_i - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \hat{\mathcal{K}}_i^{(i)} {\tilde{\mathcal{K}}_i^{(i)}}^{\scriptscriptstyle\top} d \tau \, a^{(i)} \\ & \qquad + \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \hat{\mathcal{K}}_i^{(i)} \Delta \phi_i d \tau \end{align*} Note that $|\hat{\mathcal{K}}^i(q)| \leq 1$ implies $\|\hat{\mathcal{K}}(q)\| \leq \sqrt{p}$, and $|\tilde{\mathcal{K}}^i(q)| \leq k \epsilon_c$ implies $\|\tilde{\mathcal{K}}(q)\| \leq \sqrt{p}\,k \epsilon_c$ for some Lipschitz constant $k$. Setting $s=1$ for $t \leq T$ and $s=0$ for $t > T$ as before, and noting that the first term becomes negative definite at time $T$, we now have \begin{align*} \dot{V} & \leq - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^T \hat{\mathcal{K}}_i^{(i)} {\hat{\mathcal{K}}_i^{(i)}}^{\scriptscriptstyle\top} d \tau \, \tilde{a}_i \\ & \qquad + \sum_{i=1}^N \|\tilde{a}_i\| Tp a_{\max} (\sqrt{p}k \epsilon_c + \delta_i) \\ & \leq - \kappa V - \sum_{i=1}^N \|\tilde{a}_i\| \left(\alpha \eta_i \|\tilde{a}_i\| - Tp a_{\max} (\sqrt{p}k \epsilon_c + \delta_i) \right) \end{align*} for $t \geq T$. Therefore, the statement of the proposition follows. \end{proof} \subsection{Each mobile sensor estimates the entire parameter vector} We define the following filter equations: \begin{align} \dot{\Lambda}_i &= s\hat{\mathcal{K}}_i \hat{\mathcal{K}}_i^{\scriptscriptstyle\top} \label{eqn:unknowncentres2_filters1} \\ \dot{\lambda}_i &= s\hat{\mathcal{K}}_i \phi_i \label{eqn:unknowncentres2_filters2} \end{align} The adaptation law is given by equation (\ref{eqn:adaptationlaw_estimationmultiagent1}). In this case, we have the following proposition. \begin{prop} Suppose the $N$ mobile sensors adopt the parameter adaptation law (\ref{eqn:adaptationlaw_estimationmultiagent1}) with the integrators \eqref{eqn:unknowncentres2_filters1}--\eqref{eqn:unknowncentres2_filters2}. Also assume that each mobile sensor $i$ produces a trajectory going through all the approximate basis function centres $\hat{c}_i$ in $\mathcal{Q}_i$.
Then the parameter estimation errors of the mobile sensors converge to within a bound $r_i$ of the origin, where $r_i = \frac{Tp \sqrt{p} k \epsilon_c a_{\max}} {\alpha \eta_{\min}}$. \end{prop} \begin{proof} Consider the Lyapunov function \[ V = \frac{1}{2} \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \Gamma^{-1} \tilde{a}_i. \] Taking the derivative of $V$, \begin{align*} \dot{V} &= \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \Gamma^{-1} \dot{\hat{a}}_i \\ &= -\sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \left( \Lambda_i \hat{a}_i - \lambda_i \right) - \zeta \sum_{i=1}^N \sum_{j=1}^N \tilde{a}_i^{\scriptscriptstyle\top} l_{ij} \left( \hat{a}_i - \hat{a}_j \right) \end{align*} Substituting for the variables $\Lambda_i$, $\lambda_i$ and rearranging the consensus term, \begin{align*} \dot{V} &= -\sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \hat{\mathcal{K}}_i \hat{\mathcal{K}}_i^{\scriptscriptstyle\top} d\tau \, \tilde{a}_i - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^t s \hat{\mathcal{K}}_i {\tilde{\mathcal{K}}_i}^{\scriptscriptstyle\top} d \tau \, a \\ & \qquad - \zeta \sum_{\alpha=1}^p \hat{a}^{{\alpha}^{\scriptscriptstyle\top}} L \hat{a}^{\alpha} \end{align*} Simplifying, for $t \geq T$, \begin{align*} \dot{V} &= -\sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^T \hat{\mathcal{K}}_i \hat{\mathcal{K}}_i^{\scriptscriptstyle\top} d\tau \, \tilde{a}_i - \sum_{i=1}^N \tilde{a}_i^{\scriptscriptstyle\top} \int\limits_0^T \hat{\mathcal{K}}_i {\tilde{\mathcal{K}}_i}^{\scriptscriptstyle\top} d \tau \, a \\ & \qquad - \zeta \sum_{\alpha=1}^p \tilde{a}^{{\alpha}^{\scriptscriptstyle\top}} L \tilde{a}^{\alpha} \end{align*} We can write the above in terms of stacked vectors as \begin{align*} \dot{V} &= - \tilde{\underline{a}}^{\scriptscriptstyle\top} \underline{Q} \tilde{\underline{a}} - \zeta \, \tilde{\underline{a}}^{\scriptscriptstyle\top} P^{\scriptscriptstyle\top} \underline{L} P \tilde{\underline{a}} - \tilde{\underline{a}}^{\scriptscriptstyle\top} E \underline{a} \\ &= - \tilde{\underline{a}}^{\scriptscriptstyle\top} \left( \underline{Q} + \zeta \, P^{\scriptscriptstyle\top} \underline{L} P \right) \tilde{\underline{a}} - \tilde{\underline{a}}^{\scriptscriptstyle\top} E \underline{a} \end{align*} where $\tilde{\underline{a}} = \left[ \tilde{a}_1^{\scriptscriptstyle\top} \, \tilde{a}_2^{\scriptscriptstyle\top} \, \dots \, \tilde{a}_N^{\scriptscriptstyle\top} \right]^{\scriptscriptstyle\top}$, $\underline{a}$ is the corresponding stacking of $N$ copies of $a$, \begin{align*} \underline{Q} &= \left[ \begin{array}{ccc} \int_0^T \hat{\mathcal{K}}_1 \hat{\mathcal{K}}_1^{\scriptscriptstyle\top} d\tau & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \int_0^T \hat{\mathcal{K}}_N \hat{\mathcal{K}}_N^{\scriptscriptstyle\top} d\tau \end{array} \right], \\ \underline{L} &= \left[ \begin{array}{cccc} L & 0 & \dots & 0 \\ 0 & L & \dots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & L \end{array} \right], \\ E &= \left[ \begin{array}{ccc} \int_0^T \hat{\mathcal{K}}_1 \tilde{\mathcal{K}}_1^{\scriptscriptstyle\top} d\tau & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \int_0^T \hat{\mathcal{K}}_N \tilde{\mathcal{K}}_N^{\scriptscriptstyle\top} d\tau \end{array} \right], \end{align*} and $P$ is the $Np \times Np$ permutation matrix that regroups the sensor-wise stacking $\tilde{\underline{a}}$ into the parameter-wise stacking $\left[ \tilde{a}^{1^{\scriptscriptstyle\top}} \, \dots \, \tilde{a}^{p^{\scriptscriptstyle\top}} \right]^{\scriptscriptstyle\top}$, \[ P = \left[ \begin{array}{ccccccc} 1 & 0 & \dots & 0 & 0 & \dots & 0 \\ 0 & 0 & \dots & 1 & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 1 & \dots & 0 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \end{array} \right]. \] We show that the matrix $\left( \underline{Q} + \zeta P^{\scriptscriptstyle\top} \underline{L} P \right)$ is positive definite. Each of the terms is positive semi-definite. The nullspace of the matrix $\underline{L}$ consists of elements of the form \[ c_1 \left[ \begin{array}{c} \mathbf{1}_N \\ 0 \\ \vdots \\ 0 \end{array} \right] + c_2 \left[ \begin{array}{c} 0 \\ \mathbf{1}_N \\ \vdots \\ 0 \end{array} \right] + \dots + c_p \left[ \begin{array}{c} 0 \\ 0 \\ \vdots \\ \mathbf{1}_N \end{array} \right]. \] Therefore $P^{\scriptscriptstyle\top} \underline{L} P$ has nullspace elements of the form \[ c_1 \left[ \begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{array} \right] + c_2 \left[ \begin{array}{c} 0 \\ 1 \\ \vdots \\ 0 \\ 0 \\ 1 \\ \vdots \\ 0 \end{array} \right] + \dots + c_p \left[ \begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \\ 0 \\ 0 \\ \vdots \\ 1 \end{array} \right], \] i.e., elements of the form $\left[ \, c_1 \, c_2 \, \dots \, c_p \, c_1 \, c_2 \, \dots \, c_p \, \dots \right]^{\scriptscriptstyle\top}$. On such elements, the $\underline{Q}$ term can be written as \[ c^{\scriptscriptstyle\top} \sum_{i=1}^N \int_0^T \hat{\mathcal{K}}_i \hat{\mathcal{K}}_i^{\scriptscriptstyle\top} dt \, c \] where $c = \left[ \, c_1 \, c_2 \, \dots \, c_p \right]^{\scriptscriptstyle\top}$. Under the assumptions of the proposition, by lemma \ref{lemma:estimation_multiagent1}, the above term is strictly positive. Hence $\left( \underline{Q} + \zeta P^{\scriptscriptstyle\top} \underline{L} P \right)$ is positive definite. Let $\eta_{\min}$ be its smallest eigenvalue. Then we have \begin{align*} \dot{V} & \leq -\kappa V - \alpha \eta_{\min} \|\underline{\tilde{a}}\|^2 + \sum_{i=1}^N \|\tilde{a}_i\| Tp \sqrt{p} k \epsilon_c a_{\max} \\ & = -\kappa V - \alpha \eta_{\min} \sum_{i=1}^N \|\tilde{a}_i\| \left( \|\tilde{a}_i\| - \frac{Tp \sqrt{p} k \epsilon_c a_{\max}} {\alpha \eta_{\min}} \right) \end{align*} for some $\kappa > 0$. Thus for $\|\tilde{a}_i\| > \frac{Tp \sqrt{p} k \epsilon_c a_{\max}} {\alpha \eta_{\min}}$, $V$ decreases exponentially and the result holds. \end{proof} \section{Simulations} \label{sec:simulations} In this section, we verify the algorithms presented using simulations. First we consider the exact parameterization case, where the true scalar field is a linear combination of RBFs with known centres. This case allows us to verify the correctness of the algorithms presented in the paper. Next we consider a scalar field which is completely unknown, and use the algorithms presented to reconstruct it. The mobile sensors in the simulations are assumed to be single integrators with dynamics given by $\dot{x}_i = u_i$, where $x_i$ is the position of sensor $i$ and $u_i$ is its control input. For ease of comparing the various algorithms, we refer to the algorithm in Section \ref{sec:ntwk_full} as \emph{Algorithm S$1$}, the algorithm presented in Section \ref{sec:ntwk_part} as \emph{Algorithm S$2$}, and the modified version of Algorithm S$2$ in Section \ref{sec:improve_steadystateerror} as \emph{Algorithm S$3$}. \subsection{Exact parameterization} We consider the unit square region $\mathcal{Q}$ with four mobile sensors.
The scalar field to be estimated is exactly parameterized in terms of Gaussian RBFs (given by equation \eqref{eqn:gaussian}), with the $x$ and $y$ coordinates of the RBF centres $c_i$ given in table \ref{tab:sim1}. The standard deviation of each Gaussian, $\sigma_i$, is chosen to be $0.1$. The true parameter values $a^i$ are also given in table \ref{tab:sim1}. \begin{table} \begin{tabular}{l|cccccccc} $c_{i,x}$ & $0.20$ & $0.35$ & $0.60$ & $0.85$ & $0.70$ & $0.75$ & $0.15$ & $0.35$ \\ $c_{i,y}$ & $0.25$ & $0.26$ & $0.18$ & $0.30$ & $0.75$ & $0.90$ & $0.75$ & $0.60$ \\ $a^{i}$ & $2.0$ & $1.0$ & $1.5$ & $1.8$ & $1.2$ & $1.6$ & $2.5$ & $1.1$ \\ \end{tabular} \caption{Parameters of the simulated scalar field} \label{tab:sim1} \end{table} The scalar field is shown in figure \ref{fig:scalarfield1}. \begin{figure} \begin{subfigure}{0.23\textwidth} \centering \includegraphics[scale=0.38] {figures/truefield1.pdf} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.45] {figures/truefield2.pdf} \end{subfigure} \caption{The scalar field used for verifying the algorithms} \label{fig:scalarfield1} \end{figure} The initial positions of the mobile sensors were chosen randomly and are shown in figure \ref{fig:reconstructedfield1}. The region was partitioned by constructing the Voronoi cell of each mobile sensor. The Voronoi cell of mobile sensor $i$ (denoted $\mathcal{Q}_i$) consists of those points which are closer to sensor $i$ than to any other sensor: \begin{equation} \mathcal{Q}_i = \{ q \in \mathcal{Q} : \|q-x_i\| \leq \|q-x_j\| , j=1,2,\dots,N; j \neq i \} \label{eqn:voronoi} \end{equation} For motion control of the sensors, we use a proportional control law $u_i = -k (x_i - x_{gi})$, where the goal position $x_{gi}$ is made to switch among all the centres in the region $\mathcal{Q}_i$, making sure the condition in lemma \ref{lemma:relaxedpd} is satisfied. The control gain $k$ was chosen to be $5$. The simulation ran for $16.5$ seconds. The excitation condition was achieved in $T = 1.5$ seconds. The reconstructed scalar field with algorithm S$1$ is shown in figure \ref{fig:reconstructedfield1} on the right, and the average (across all the mobile sensors) parameter estimation error is shown in figure \ref{fig:estimationerror1}. It can be seen that the parameters converge exactly to the true values and exact reconstruction is achieved. \begin{figure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/initialconfig.pdf} \caption{\small Partitions} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.45] {figures/truefield2.pdf} \caption{\small Algorithm S$1$.} \end{subfigure} \caption{Left: Initial positions (blue squares), corresponding partitions and centres of RBFs (red circles); Right: Reconstructed field using algorithm S$1$.} \label{fig:reconstructedfield1} \end{figure} \begin{figure} \centering \includegraphics [width=0.5\textwidth, height=0.2\textheight] {figures/avgperror.pdf} \caption{Algorithm S$1$: Average parameter estimation error with time} \label{fig:estimationerror1} \end{figure} The reconstructed fields with algorithms S$2$ and S$3$ are shown in figure \ref{fig:reconstructedfield2}. The corresponding estimation errors are shown in figures \ref{fig:estimationerror2} and \ref{fig:estimationerror3} respectively.
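To make the setup concrete, the following is a minimal sketch of this simulation (not the code used to generate the reported results). It assumes the unnormalized Gaussian form $\mathcal{K}_j(q) = \exp(-\|q - c_j\|^2 / 2\sigma_j^2)$ for the RBFs, which may differ from equation \eqref{eqn:gaussian} by a constant factor, and the fixed dwell time in the switching schedule is only illustrative.

\begin{verbatim}
# Minimal sketch of the exact-parameterization setup (not the authors'
# code). Assumes K_j(q) = exp(-|q - c_j|^2 / (2 sigma^2)).
import numpy as np

# Centres and true weights from table tab:sim1; sigma_i = 0.1 for all j.
C = np.array([[0.20, 0.25], [0.35, 0.26], [0.60, 0.18], [0.85, 0.30],
              [0.70, 0.75], [0.75, 0.90], [0.15, 0.75], [0.35, 0.60]])
a_true = np.array([2.0, 1.0, 1.5, 1.8, 1.2, 1.6, 2.5, 1.1])
sigma = 0.1

def rbf(q):
    """Vector of the p = 8 Gaussian RBFs evaluated at a point q."""
    d2 = np.sum((C - q) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def field(q):
    """True scalar field phi(q) = K(q)^T a."""
    return rbf(q) @ a_true

def voronoi_owner(q, x_sensors):
    """Voronoi assignment (eqn:voronoi): index of the nearest sensor."""
    return np.argmin(np.linalg.norm(x_sensors - q, axis=1))

# Single-integrator sensors; proportional control toward a goal that
# switches among the RBF centres inside each sensor's cell (gain k = 5).
k, dt = 5.0, 1e-3
x = np.random.rand(4, 2)                  # random initial positions
for step in range(1000):
    for i in range(4):
        owned = [j for j in range(len(C))
                 if voronoi_owner(C[j], x) == i]        # centres in Q_i
        if owned:
            goal = C[owned[(step // 100) % len(owned)]] # switching goal
            x[i] += dt * (-k * (x[i] - goal))           # u_i = -k(x_i - x_gi)
\end{verbatim}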
\begin{figure} \begin{subfigure}{0.23\textwidth} \centering \includegraphics [scale=0.38] {figures/algo2_reconstructedfield2.pdf} \caption{\small Algorithm S$2$.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.38] {figures/algo2mod_reconstructedfield2.pdf} \caption{\small Algorithm S$3$.} \end{subfigure} \caption{The reconstructed field using algorithms S$2$ and S$3$.} \label{fig:reconstructedfield2} \end{figure} \begin{figure} \centering \includegraphics [width=0.5\textwidth, height=0.2\textheight] {figures/algo2_perror.pdf} \caption{Algorithm S$2$: Average parameter estimation error with time} \label{fig:estimationerror2} \end{figure} \begin{figure} \centering \includegraphics [width=0.5\textwidth, height=0.2\textheight] {figures/algo2mod_perror.pdf} \caption{Algorithm S$3$: Average parameter estimation error with time} \label{fig:estimationerror3} \end{figure} The maximum parameter estimation error using algorithm S$2$ was found to be $0.030$, and using algorithm S$3$ it was found to be $0.017$. Thus algorithm S$3$ gives better parameter estimates in this case. \par We also present simulation results for the case where we do not know the exact values of the centres of the RBFs (as in section \ref{sec:centresnotknown}). We assume we know the centres within an accuracy of $\epsilon_c = 0.05$. For this, we add a random perturbation (bounded by $\epsilon_c$) to the true centre coordinates and use the perturbed centres in the estimation algorithm. The reconstructed field with algorithm S$1$ is shown in figure \ref{fig:reconstructedfield3}, and those with algorithms S$2$ and S$3$ in figure \ref{fig:reconstructedfield4}. Table \ref{tab:unknowncentres_maxerror} also compares the maximum steady state parameter errors in the three cases. As expected, algorithm S$1$ has a much lower steady state error compared to algorithm S$2$, and algorithm S$3$ performs better than algorithm S$2$. It should be noted that all the algorithms identify the main features of the true field, as seen from the reconstructed field plots. \begin{figure} \begin{subfigure}{0.23\textwidth} \centering \begin{tabular}{lc} \toprule {\small \textit{Algorithm}} & {\small \textit{Max. est. error}} \\ \midrule \emph{\small S$1$} & $0.16$ \\ \emph{\small S$2$} & $0.62$ \\ \emph{\small S$3$} & $0.44$ \\ \bottomrule \end{tabular} \caption{\small Max. parameter estimation errors.} \label{tab:unknowncentres_maxerror} \end{subfigure} \begin{subfigure}{0.23\textwidth} \centering \includegraphics [scale=0.38] {figures/algo1_unknowncentres_reconstructedfield2.pdf} \caption{\small Algorithm S$1$.} \end{subfigure} \caption{Unknown Centres: Max. parameter estimation errors (left) and the reconstructed field using algorithm S$1$ (right).} \label{fig:reconstructedfield3} \end{figure} \begin{figure} \begin{subfigure}{0.23\textwidth} \centering \includegraphics [scale=0.38] {figures/algo2_unknowncentres_reconstructedfield2.pdf} \caption{\small Algorithm S$2$.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.38] {figures/algo2mod_unknowncentres_reconstructedfield2.pdf} \caption{\small Algorithm S$3$.} \end{subfigure} \caption{Unknown Centres: Reconstructed field.} \label{fig:reconstructedfield4} \end{figure} \subsection{Fully unknown scalar field} \par Now we test the estimation algorithms on a more general scalar field which is not a linear combination of RBFs.
For this we consider the continuous scalar field given by \begin{align*} \phi(x,y) &= 3x^2 e^{\frac{-(x-0.7)^2 - (y-0.7)^2}{0.05}} + e^{\frac{-(x-0.4)^2-(y-0.4)^2}{0.06}} \\ & \qquad + \frac{1}{3} e^{\frac{-(x-0.2)^2-(y-0.2)^2}{0.08}} \end{align*} over the unit square region $\mathcal{Q}$. A plot of $\phi(\cdot)$ is shown in figure \ref{fig:scalarfield2}. We use $N=5$ mobile sensors with the partitions $\mathcal{Q}_i$ determined as follows: We first run a uniform coverage algorithm (the coverage algorithm presented in \cite{cortes2004coverage} with a uniform density function $\phi(q) \equiv 1$). This makes the mobile sensors uniformly spread out in the region $\mathcal{Q}$. We then compute the Voronoi partition \eqref{eqn:voronoi} of the sensors and use it as the required partition $\mathcal{Q}_i$. \begin{figure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/truefield3.pdf} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/true_field_top.pdf} \end{subfigure} \caption{The scalar field $\phi(x,y)$ used in the simulation} \label{fig:scalarfield2} \end{figure} We first show the results for approximating the field $\phi(\cdot)$ with $p=100$ Gaussian RBFs. The centres of the Gaussians are arranged on a uniform grid over the region $\mathcal{Q}$. The reconstructed field plots for two values of $\sigma_i$ (the standard deviation of the Gaussian RBFs) are shown in figures \ref{fig:reconstructedfield5}, \ref{fig:reconstructedfield6} and \ref{fig:reconstructedfield7} with the three algorithms. \begin{figure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo1_reconstructed_np100_N5_sigma0-04.pdf} \caption{\small $\sigma_i = 0.04$.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo1_reconstructed_np100_N5_sigma0-05.pdf} \caption{\small $\sigma_i = 0.05$.} \end{subfigure} \caption{Reconstructed field ($p=100$) with algorithm S$1$.} \label{fig:reconstructedfield5} \end{figure} \begin{figure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo2_reconstructed_np100_N5_sigma0-04.pdf} \caption{\small $\sigma_i = 0.04$.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo2_reconstructed_np100_N5_sigma0-05.pdf} \caption{\small $\sigma_i = 0.05$.} \end{subfigure} \caption{Reconstructed field ($p=100$) with algorithm S$2$.} \label{fig:reconstructedfield6} \end{figure} \begin{figure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo3_reconstructed_np100_N5_sigma0-04.pdf} \caption{\small $\sigma_i = 0.04$.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo3_reconstructed_np100_N5_sigma0-05.pdf} \caption{\small $\sigma_i = 0.05$.} \end{subfigure} \caption{Reconstructed field ($p=100$) with algorithm S$3$.} \label{fig:reconstructedfield7} \end{figure} To compare the various algorithms, we use the integral error (see theorem \ref{thm:approx1}) \[ \|e\|_2 = \int_{\mathcal{Q}} |\phi(q)-\mathcal{K}(q)^{\scriptscriptstyle\top}\hat{a}| dq \] where $\hat{a}$ is the final parameter estimate obtained from the given algorithm. The integral error for the approximation of $\phi(\cdot)$ using $p=100$ parameters is shown in table \ref{tab:compare_np100}. The table also shows the time $T$ in seconds at which the excitation (positive definiteness) condition is achieved.
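For reference, this error metric can be approximated by simple midpoint quadrature on a uniform grid. In the following sketch, \texttt{rbf\_vec} and \texttt{a\_hat} are placeholders for the RBF vector $\mathcal{K}(\cdot)$ and the final parameter estimate returned by a given algorithm; they are not names from the implementation used here.

\begin{verbatim}
# Sketch: approximating the integral error ||e||_2 by midpoint quadrature
# over the unit square. `rbf_vec` and `a_hat` are placeholders for the
# RBF vector K(.) and the trained estimate from a given algorithm.
import numpy as np

def phi(x, y):
    """The true scalar field used in this subsection."""
    return (3 * x**2 * np.exp((-(x - 0.7)**2 - (y - 0.7)**2) / 0.05)
            + np.exp((-(x - 0.4)**2 - (y - 0.4)**2) / 0.06)
            + np.exp((-(x - 0.2)**2 - (y - 0.2)**2) / 0.08) / 3.0)

def integral_error(rbf_vec, a_hat, n=200):
    xs = (np.arange(n) + 0.5) / n          # midpoints of n x n cells
    err, cell_area = 0.0, 1.0 / n**2
    for xv in xs:
        for yv in xs:
            approx = rbf_vec(np.array([xv, yv])) @ a_hat
            err += abs(phi(xv, yv) - approx) * cell_area
    return err
\end{verbatim}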
The total runtime of the estimation algorithms was $T+20$ seconds. \begin{table} \centering \begin{tabular}{lcclcc} \toprule ${\sigma_i=0.04}$ & $T$ (sec) & $\|e\|_2$ & ${\sigma_i=0.05}$ & $T$ (sec) & $\|e\|_2$ \\ \cmidrule(r){1-3} \cmidrule(l){4-6} \emph{Algorithm S$1$} & $3.1$ & $0.045$ & \emph{Algorithm S$1$} & $3.9$ &$0.012$ \\ \emph{Algorithm S$2$} & $3.1$ & $0.054$ & \emph{Algorithm S$2$} & $3.7$ &$0.053$ \\ \emph{Algorithm S$3$} & $3.1$ & $0.048$ & \emph{Algorithm S$3$} & $3.7$ & $0.028$ \\ \bottomrule \end{tabular} \caption{Comparison of algorithms for $p=100$ parameters.} \label{tab:compare_np100} \end{table} \par The reconstructed field plots for $p=196$ parameters are shown in figures \ref{fig:reconstructedfield8}, \ref{fig:reconstructedfield9} and \ref{fig:reconstructedfield10} with the three algorithms. The comparison of the various algorithms is given in table \ref{tab:compare_np196}. \begin{figure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo1_reconstructed_np196_N5_sigma0-03.pdf} \caption{\small $\sigma_i = 0.03$.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo1_reconstructed_np196_N5_sigma0-04.pdf} \caption{\small $\sigma_i = 0.04$.} \end{subfigure} \caption{Reconstructed field ($p=196$) with algorithm S$1$.} \label{fig:reconstructedfield8} \end{figure} \begin{figure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo2_reconstructed_np196_N5_sigma0-03.pdf} \caption{\small $\sigma_i = 0.03$.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo2_reconstructed_np196_N5_sigma0-04.pdf} \caption{\small $\sigma_i = 0.04$.} \end{subfigure} \caption{Reconstructed field ($p=196$) with algorithm S$2$.} \label{fig:reconstructedfield9} \end{figure} \begin{figure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo3_reconstructed_np196_N5_sigma0-03.pdf} \caption{\small $\sigma_i = 0.03$.} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics [scale=0.4] {figures/algo3_reconstructed_np196_N5_sigma0-04.pdf} \caption{\small $\sigma_i = 0.04$.} \end{subfigure} \caption{Reconstructed field ($p=196$) with algorithm S$3$.} \label{fig:reconstructedfield10} \end{figure} \begin{table} \centering \begin{tabular}{lcclcc} \toprule ${\sigma_i=0.03}$ & $T$ (sec) & $\|e\|_2$ & ${\sigma_i=0.04}$ & $T$ (sec) & $\|e\|_2$ \\ \cmidrule(r){1-3} \cmidrule(l){4-6} \emph{Algorithm S$1$} & $6.6$ & $0.031$ & \emph{Algorithm S$1$} & $8.9$ &$0.008$ \\ \emph{Algorithm S$2$} & $6.6$ & $0.059$ & \emph{Algorithm S$2$} & $8.8$ & $0.073$ \\ \emph{Algorithm S$3$} & $6.6$ & $0.053$ & \emph{Algorithm S$3$} & $8.8$ & $0.039$ \\ \bottomrule \end{tabular} \caption{Comparison of algorithms for $p=196$ parameters.} \label{tab:compare_np196} \end{table} We see that, as expected, algorithm S$1$ gives a better approximation than the others. Algorithm S$3$ also performs significantly better than algorithm S$2$. Increasing the number of parameters gives a better approximation, as expected, for algorithm S$1$, though for the other algorithms this is not guaranteed due to the extra error incurred (see theorem \ref{thm:estimation_multiagent2}), which may increase with larger $p$ depending on other variables such as the location of the centres. The width $\sigma_i$ also plays an important role in the reconstruction of the original field.
For $p=100$, $\sigma_i = 0.05$ seems to provide a better approximation than $\sigma_i = 0.04$, and for $p=196$, $\sigma_i = 0.04$ seems to provide a better approximation than $\sigma_i = 0.03$. To summarize, algorithm S$1$ gives a better approximation than the others, though it is more computationally and memory intensive. Algorithm S$3$ also gives a good approximation while requiring much less memory. It may also be noted that in many applications we may only be interested in identifying the main features of the original field, which was successfully done in most of the cases discussed. \section{Conclusion} \label{sec:conclusion} In this paper we considered the estimation of a scalar field using tools from adaptive control theory and Lyapunov analysis. We derived two estimation algorithms, one in which each mobile sensor estimates the entire parameter vector, and another in which each mobile sensor estimates only part of the parameter vector. We verified and tested the algorithms using simulations. Further work involves improving upon the proposed algorithms, and the possibility of estimating time-varying fields through persistent motion of the mobile sensors. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\usepackage{graphicx} \usepackage{amsmath} \usepackage{natbib} \usepackage{amssymb} \usepackage{amsthm} \usepackage{lineno} \usepackage{subfig} \usepackage{enumerate} \usepackage{fullpage} \usepackage{algorithm} \usepackage{algpseudocode} \usepackage{xcolor} \usepackage[colorinlistoftodos]{todonotes} \biboptions{sort,compress} \newcommand*\patchAmsMathEnvironmentForLineno[1]{% \expandafter\let\csname old#1\expandafter\endcsname\csname #1\endcsname \expandafter\let\csname oldend#1\expandafter\endcsname\csname end#1\endcsname \renewenvironment{#1}% {\linenomath\csname old#1\endcsname}% {\csname oldend#1\endcsname\endlinenomath}} \newcommand*\patchBothAmsMathEnvironmentsForLineno[1]{% \patchAmsMathEnvironmentForLineno{#1}% \patchAmsMathEnvironmentForLineno{#1*}}% \AtBeginDocument{% \patchBothAmsMathEnvironmentsForLineno{equation}% \patchBothAmsMathEnvironmentsForLineno{align}% \patchBothAmsMathEnvironmentsForLineno{flalign}% \patchBothAmsMathEnvironmentsForLineno{alignat}% \patchBothAmsMathEnvironmentsForLineno{gather}% \patchBothAmsMathEnvironmentsForLineno{multline}% } \usepackage{bbm} \usepackage{url} \usepackage{listings} \newtheorem{remark}{Remark} \usepackage{color,soul} \usepackage{enumitem} \usepackage{mathtools} \definecolor{lightblue}{rgb}{.90,.95,1} \definecolor{darkgreen}{rgb}{0,.5,0.5} \newcommand\assignment[1]{\todo[inline,color=red!10,size=\normalsize]{#1}} \newcommand\guide[2]{\sethlcolor{lightblue}\hl{#2}\todo[color=lightblue,size=\tiny]{#1}} \newcommand\guidenoa[1]{\sethlcolor{lightblue}\hl{#1}} \definecolor{lightgreen}{rgb}{.90,1,0.90} \newcommand\bcon[1]{\todo[color=lightgreen,size=\tiny]{$\downarrow\downarrow\downarrow$ #1}} \newcommand\econ[1]{\todo[color=lightgreen,size=\tiny]{$\uparrow\uparrow\uparrow$ #1}} \newcommand\commofA[2]{\todo[color=red!50,size=\small,inline]{{\bf \color{blue} {#1}'s comments}: #2}} \newcommand\commofB[2]{\todo[color=blue!50,size=\small,inline]{{\bf \color{blue} {#1}'s comments}: #2}} \newcommand\commofC[2]{\todo[color=purple!50,size=\small,inline]{{\bf \color{blue} {#1}'s comments}: #2}} \newcommand{\bs}[1]{\boldsymbol{#1}} \usepackage{array} \usepackage{multirow} \graphicspath{ {./figs/} } \newcolumntype{P}[1]{>{\centering\arraybackslash}m{#1}} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \linespread{1.5} \journal{Elsevier} \begin{document} \begin{frontmatter} \title{A Comprehensive Physics-Informed Machine Learning Framework for Predictive Turbulence Modeling} \author[vt]{Jian-Xun Wang} \author[vt]{Jinlong Wu} \author[snl]{Julia Ling} \author[jops]{Gianluca Iaccarino} \author[vt]{Heng Xiao\corref{corxh}} \cortext[corxh]{Corresponding author. 
Tel: +1 540 231 0926} \ead{hengxiao@vt.edu} \address[vt]{Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA} \address[snl]{Sandia National Laboratories, Livermore, CA} \address[jops]{Department of Mechanical Engineering, Stanford University, Stanford, CA} \begin{abstract} Although an increased availability of computational resources has enabled high-fidelity simulations (e.g., large eddy simulations and direct numerical simulation) of turbulent flows, the Reynolds-Averaged Navier--Stokes (RANS) models are still the dominant tools for industrial applications. However, the predictive capabilities of RANS models are limited by potential inaccuracy driven by hypotheses in the Reynolds stress closure. Recently, a Physics-Informed Machine Learning (PIML) approach has been proposed to learn the functional form of the Reynolds stress discrepancy in RANS simulations based on available data. It has been demonstrated that the learned discrepancy function can be used to improve Reynolds stresses in different flows where data are not available. However, owing to a number of challenges, the improvements have been demonstrated only in the Reynolds stress prediction but not in the corresponding propagated quantities of interest (e.g., the velocity field); that is, the demonstration has remained an \emph{a priori} study. In this work, we introduce and demonstrate the procedures toward a complete PIML framework for predictive turbulence modeling, including learning the Reynolds stress discrepancy function, predicting Reynolds stresses in different flows, and propagating the predicted Reynolds stresses to mean flow fields. The process of Reynolds stress propagation and the predictive accuracy of the propagated velocity field are investigated. To improve the learning-prediction performance, the input features are enriched based on an integrity basis of invariants. The fully developed turbulent flow in a square duct is used as the test case. The discrepancy model is trained on flow fields obtained from several Reynolds numbers and evaluated on a duct flow at a Reynolds number higher than any of the training cases. The predicted Reynolds stresses are propagated to the velocity field through the RANS equations. Numerical results show excellent predictive performance in both the Reynolds stresses and their propagated velocities, demonstrating the merits of the PIML approach in predictive turbulence modeling. \end{abstract} \begin{keyword} machine learning \sep turbulence modeling \sep Reynolds-Averaged Navier-Stokes equations \sep data-driven approach \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:intro} Computational fluid dynamics (CFD) has been widely used to simulate turbulent flows. Although the rapidly increasing availability of computational resources enables high-fidelity simulations, e.g., Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS), it is not yet computationally feasible to routinely apply them to complex, industrial flows. Therefore, Reynolds-Averaged Navier-Stokes (RANS) models, where empirical closures are used to model the Reynolds stresses, are still the dominant tools for practical engineering problems. However, RANS predictions are known to be unreliable for flows with strong pressure gradients, curvature, or separation~\cite{craft1996development}. 
This is because some assumptions (e.g., the Boussinesq assumption) made in the closure models are not universally valid, and thus model-form errors are potentially introduced in predicting the regime-dependent, physics-rich phenomena of turbulent flows. These assumptions are typically in the form of functional dependencies between mean flow properties and turbulent quantities. They have been traditionally formulated by using intuition, physical observations, and theoretical constraints. Although advanced RANS models~\cite[e.g.,][]{launder1975progress,wallin2000explicit} have been developed in the past decades, a universally applicable one is still lacking. The traditional process of RANS model development, based solely on physical understanding and reasoning, seems to be insufficient to significantly improve the predictive capability. Recently, the increasing availability of high-fidelity data sets from both DNS and experiments has made it possible to use the data-driven approach as a complement to the physics-based approach to improve the predictive capability of RANS models. The past several years have witnessed a few efforts to develop data-driven turbulence modeling approaches by using machine learning algorithms~\cite[e.g.,][]{milano2002neural,dow11quanti, tracey2013application,parish2016paradigm,ling2015evaluation,mfu1,Wang2016}. Generally, machine learning refers to a process of using data to build explicit or implicit functions of responses with respect to input variables (features). The trained functions can be evaluated to predict cases where data are not available. In the context of turbulence modeling, different machine learning approaches aim to achieve a similar overarching goal, i.e., improving the predictive capability of turbulence models. However, so far there is no consensus on the choices of learning responses and input features to better achieve this goal. Dow and Wang~\cite{dow11quanti} chose the discrepancy field $\Delta\nu_t$ in turbulent viscosity as the response, while Duraisamy and co-workers~\cite{parish2016paradigm, singh16using,singh2016machine} introduced a full-field multiplicative discrepancy factor~$\beta$ into the production term of the transport equation as the learning target. Although both the inferred $\Delta\nu_t$ and $\beta$ have been demonstrated to extrapolate to a certain extent, they are still modeled quantities and have limited physical interpretation. Xiao et al.~\cite{mfu1} directly inferred the discrepancies in RANS-simulated Reynolds stresses by using sparse velocity measurements. Wu et al.~\cite{mfu3} further demonstrated that these inferred Reynolds stress discrepancies can be extrapolated to closely related flows. Although the response chosen in~\cite{mfu1,mfu3} is a physical quantity, an intrinsic limitation lies in their choice of input features (i.e., the physical coordinates $\mathbf{x}$). As a result, the prediction can only be applied to flows in the same geometry at the same locations. Duraisamy and co-workers~\cite{tracey2013application,duraisamy2015new} used non-dimensional flow and model variables to construct the input feature space for calibrations of low-fidelity models. However, their input feature space was constructed with a very small number (three) of features, and the invariance property was not fully considered. Ling et al.~\cite{ling2016machine} pointed out the merits of embedding the invariance properties into the machine learning process. 
They explored several machine learning models for classifying the regions where RANS assumptions would fail~\cite{ling2015evaluation}. Most recently, they also attempted to directly predict the anisotropy tensors of the Reynolds stresses by using random forests~\cite{ling2016uncertainty} and deep neural networks~\cite{ling2016reynolds}. By comprehensively considering the physical interpretability of learning targets and the invariance property of input features, Wang et al.~\cite{Wang2016} proposed a physics-informed machine learning (PIML) approach to learn the functional forms of the Reynolds stress discrepancy in its six physically meaningful components (i.e., magnitude, shape, and orientation of the Reynolds stress tensor) with respect to a group of ten invariant mean flow features. They successfully demonstrated that the trained discrepancy model can be used to improve the RANS-modeled Reynolds stresses in flows with different configurations. However, this work is still an \emph{a priori} investigation, since the improvements are demonstrated only in the Reynolds stresses but not in their propagated velocities. There are a number of challenges associated with propagating forward the corrected Reynolds stresses through the RANS equations to obtain the mean velocity and pressure fields. For example, the high-fidelity data themselves used for training must be highly accurate to obtain a precise mean velocity field after propagation. Moreover, the machine learning model should improve predictions of the mean flow variables, which requires not only the pointwise Reynolds stress predictions but also their derivatives to be improved, since it is the divergence of the Reynolds stress that appears in the RANS momentum equations. The objective of this work is to introduce the procedures toward a complete PIML framework and demonstrate its capability of improving both the Reynolds stresses and their propagated mean velocities in a relatively less challenging scenario with reliable training data. To improve the predictive accuracy of the machine learning model, the input feature space adopted in~\cite{Wang2016} is expanded by constructing an integrity basis of invariants of mean flow variables. The systematic approach of invariant feature construction proposed in~\cite{ling2016machine} is employed to expand the input space for given raw tensorial mean flow variables. The fully developed turbulent flow in a square duct is used to demonstrate the merits of the proposed method. The discrepancy model is trained on duct flows at several Reynolds numbers and evaluated on a duct flow at a Reynolds number higher than any of the training cases. The predicted Reynolds stresses are propagated to the mean velocity field through the RANS equations; the accuracy of the propagated mean velocities is investigated and the difficulties associated with the propagation are discussed. Although the current work is developed in the context of turbulence modeling, it has potential implications in many other fields in which the governing equations are well understood but empirical closure models are used for the unresolved physical processes. The rest of the paper is organized as follows. 
Section~\ref{sec:meth} introduces the building blocks of the Physics-Informed Machine Learning (PIML) framework, including the construction of the input feature space, the representation of the Reynolds stress discrepancy as the response, the construction of the regression function of the discrepancy with respect to the input features, and the propagation of the corrected Reynolds stresses to the mean velocity field. Section~\ref{sec:result} presents the numerical results to demonstrate the prediction performance of the proposed framework and the merits of systematically expanding the input space. The concept of ``physics-informed machine learning'' and future perspectives of the current framework are discussed in Section~\ref{sec:discussion}. Finally, Section~\ref{sec:conclusion} concludes the paper. \section{Methodology} \label{sec:meth} In this section, the Physics-Informed Machine Learning (PIML) framework for predictive turbulence modeling is summarized. Its key procedures and components, including the construction of the input feature set, the choice of output responses, and the building of regression functions, are discussed. \subsection{Overview of PIML Framework} \label{sec:meth:overview} The aim of the present work is to introduce and demonstrate the PIML framework for predictive turbulence modeling. Specifically, given high-fidelity data (e.g., Reynolds stresses from DNS simulations) from a set of training flows, the framework aims to improve the standard RANS prediction for different flows for which DNS data are not available. Here, \emph{training flow} refers to a flow with high-fidelity data, which are used to train the machine learning model. Accordingly, \emph{test flow} (prediction flow) is the flow to be predicted. Generally, the training flows should share similar flow physics with the test flow, so that the model does not have to extrapolate. This scenario is typical in the engineering design process, where data are available for some flows and predictions are required for different flows with slightly changed configurations (e.g., different Reynolds numbers or slightly changed geometries) but without data. \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{PIML-algorithm.pdf} \caption{Schematic of the Physics-Informed Machine Learning (PIML) framework for predictive turbulence modeling. Both DNS and RANS simulations are conducted for the training flows to obtain the data (i.e., mean flow features $\mathbf{q}$ as the input and Reynolds stress discrepancies $\Delta\bs{\tau}$ as the output). The training data are then used to train the random forest models of the discrepancy functions. To perform the prediction, RANS simulations are conducted for the test flow to obtain the mean flow features, which are used to query the trained discrepancy functions and then correct the corresponding RANS-modeled Reynolds stresses. The corrected Reynolds stresses are propagated through the RANS equations to the QoIs (e.g., the mean velocity field).} \label{fig:piml} \end{figure} In RANS simulations, model-form uncertainties stem from the RANS-modeled Reynolds stresses. Therefore, the aim of the machine learning is to extract the functional form of the discrepancies in the RANS-modeled Reynolds stresses from data. Figure~\ref{fig:piml} shows a schematic of the PIML framework. The overall procedure can be summarized as follows: \begin{enumerate} \item Perform baseline RANS simulations on both the training flows and the test flow. \item Compute the input feature field $\mathbf{q}(\mathbf{x})$ based on the local RANS flow variables. 
\item Compute the discrepancy field $\Delta \bs\tau(\mathbf{x})$ in the RANS-modeled Reynolds stresses for the training flows based on the high-fidelity data. \item Construct regression functions $ f: \mathbf{q} \mapsto \Delta \bs\tau$ for the discrepancies based on the training data prepared in Step 3, using machine learning algorithms. \item Compute the Reynolds stress discrepancies for the test flow by querying the regression functions. The Reynolds stresses can subsequently be obtained by correcting the baseline RANS predictions with the evaluated discrepancies. \item Propagate the corrected Reynolds stresses to the mean velocity field by solving the RANS equations with the corrected Reynolds stress field. \end{enumerate} There are four essential components in the PIML framework: (1) construction of the input feature set, (2) representation of the Reynolds stress discrepancy as the response, (3) construction of the regression function of the discrepancy with respect to the input features, and (4) propagation of the corrected Reynolds stresses to the mean velocities. The details of the framework are introduced below, and the procedures to systematically construct and expand the input features are highlighted. \subsection{Construction of Invariant Input Feature Set} \label{sec:meth:input} Constructing a reasonable input feature space is of pivotal importance to the performance of machine learning models. First, the input features should be rich enough to differentiate data points in the feature space and to describe the functional relation between inputs and responses. Moreover, it is desirable to embed known invariance properties into the construction process to achieve improved generalization. Wang et al.~\cite{Wang2016} employed a set of ten invariant features to build random forest regressors. The formulation of these invariant features heavily relied on physical understanding and reasoning. However, these ten features are not necessarily rich enough to represent all possible polynomial invariants of the local mean flow variables. Therefore, a systematic methodology of constructing a complete invariant input set from a group of given tensorial variables, as suggested by Ling et al.~\cite{ling2016machine}, is employed in the current work. Specifically, given a finite collection of raw mean flow variables (i.e., tensors or vectors), a finite integrity basis of invariants can be constructed. Any scalar invariant function of the raw inputs can be formulated as a function of the corresponding invariant basis. The first step is to identify the raw tensors and vectors. These raw inputs should be chosen to represent the physical characteristics of the mean flow. Generally, they can be chosen in the same way as the traditional turbulence modeler does when developing advanced turbulence models. On the basis of the components used in conventional turbulence modeling, a set of four raw inputs is identified as \begin{equation} \label{eq:raw} \mathcal{Q} = \{\mathbf{S}, \bs{\Omega}, \nabla p, \nabla k \}, \end{equation} where $\mathbf{S}$ and $\bs{\Omega}$ are the strain rate and rotation rate tensors, respectively; $\nabla p$ and $\nabla k$ are the gradients of pressure and turbulence kinetic energy (TKE), respectively. The four raw tensors and vectors above are assumed to represent the important physical characteristics of the mean flow, and they are also widely used as crucial ingredients in traditional turbulence modeling. 
For example, the combinations of $\mathbf{S}$ and $\bs{\Omega}$ were used to construct nonlinear eddy viscosity models~\cite{pope2001turbulent}. \begin{table}[htbp] \centering \caption{ Non-dimensional raw mean flow variables used to construct the invariant basis. The normalized feature $\hat{\alpha}$ is obtained by normalizing the corresponding raw input $\alpha$ with a normalization factor $\beta$ according to $\hat{\alpha} = \alpha / (|\alpha| + |\beta|)$. Notations are as follows: $\mathbf{U}$ is the mean velocity vector, $k$ is the turbulence kinetic energy (TKE), $\rho$ is the fluid density, $\varepsilon$ is the turbulence dissipation rate, $\mathbf{S}$ is the strain rate tensor, $\bs{\Omega}$ is the rotation rate tensor, and $\| \cdot \|$ indicates the matrix norm. } \label{tab:featureRaw} \begin{tabular}{P{2.5cm} | P{3cm} P{3.0cm} P{5.0cm} } \hline Normalized raw input $\hat{\alpha}$ & description & raw input $\alpha$ & normalization factor $\beta$ \\ \hline $\hat{\mathbf{S}}$ & strain rate tensor & $\mathbf{S}$ & $\dfrac{\varepsilon}{k}$\\ \hline $\hat{\bs{\Omega}}$ & rotation rate tensor & $\bs{\Omega}$ & $\|\bs{\Omega}\|$\\ \hline $\hat{\nabla p}$ & pressure gradient & $\nabla p$ & $\rho\|\mathbf{U} \cdot \nabla\mathbf{U}\|$\\ \hline $\hat{\nabla k}$ & gradient of TKE & $\nabla k$ & $\dfrac{\varepsilon}{\sqrt{k}}$ \\ \hline \end{tabular} \end{table} To ensure non-dimensionality of the raw inputs, the normalization scheme used in~\cite{ling2015evaluation} is adopted. Each element $\alpha$ in the raw input set~$\mathcal{Q}$ is normalized by a corresponding normalization factor $\beta$ as $\hat{\alpha} = \alpha / (|\alpha| + |\beta|)$. Table~\ref{tab:featureRaw} shows all normalization factors of the raw input variables. Based on the Hilbert basis theorem~\cite{johnson2016handbook}, a finite integrity basis of invariants for this set $\hat{\mathcal{Q}}$ of normalized raw inputs can be constructed. \begin{table}[htbp] \centering \caption{Minimal integrity bases for the symmetric tensor $\hat{\mathbf{S}}$ and the antisymmetric tensors $\hat{\bs{\Omega}}$, $\hat{\mathbf{A}}_{p}$, and $\hat{\mathbf{A}}_{k}$. In the implementation, $\hat{\mathbf{S}}$ is the rate of strain tensor and $\hat{\bs{\Omega}}$ is the rate of rotation tensor; $\hat{\mathbf{A}}_{p}$ and $\hat{\mathbf{A}}_{k}$ are the antisymmetric tensors associated with the pressure gradient $\nabla \hat{p}$ and the gradient of turbulent kinetic energy $\nabla \hat{k}$; $n_S$ and $n_A$ denote the numbers of symmetric and antisymmetric raw tensors for the bases; an asterisk ($*$) on a term means to include all terms formed by cyclic permutation of the labels of the anti-symmetric tensors. Note that the invariant bases are the traces of the tensors in the third column. 
} \label{tab:basis} \begin{tabular}{c|C{2.5cm}|C{9.5cm}} \hline $(n_S, n_A)$ & feature index & invariant bases$^{(\mathrm{a})}$\\ \hline (1, 0) & 1--2 & $\hat{\mathbf{S}}^2$, $\hat{\mathbf{S}}^3$ \\ \hline (0, 1)& 3--5 & $\hat{\bs{\Omega}}^2$, $\hat{\mathbf{A}}_{p}^2$, $\hat{\mathbf{A}}_{k}^2$ \\ \hline \multirow{3}{*}{(1, 1)} & \multirow{3}{*}{6--14} & $\hat{\bs{\Omega}}^2 \hat{\mathbf{S}}$, $\hat{\bs{\Omega}}^2 \hat{\mathbf{S}}^2$, $\hat{\bs{\Omega}}^2 \hat{\mathbf{S}} \hat{\bs{\Omega}} \hat{\mathbf{S}}^2$;\\ && $\hat{\mathbf{A}}_{p}^2 \hat{\mathbf{S}}$, $\hat{\mathbf{A}}_{p}^2 \hat{\mathbf{S}}^2$, $\hat{\mathbf{A}}_{p}^2 \hat{\mathbf{S}} \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}^2$;\\ && $\hat{\mathbf{A}}_{k}^2 \hat{\mathbf{S}}$, $\hat{\mathbf{A}}_{k}^2 \hat{\mathbf{S}}^2$, $\hat{\mathbf{A}}_{k}^2 \hat{\mathbf{S}} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}^2$; \\ \hline (0, 2)& 15--17 & $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{p}$, $\hat{\mathbf{A}}_{p} \hat{\mathbf{A}}_{k}$, $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{k}$ \\ \hline \multirow{3}{*}{(1, 2)} & \multirow{3}{*}{18--41} & $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}$, $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}^2$, $\hat{\bs{\Omega}}^2 \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}$*, $\hat{\bs{\Omega}}^2 \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}^2$*, $\hat{\bs{\Omega}}^2 \hat{\mathbf{S}} \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}^2$*;\\ &&$\hat{\bs{\Omega}} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}$, $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}^2$, $\hat{\bs{\Omega}}^2 \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}$*, $\hat{\bs{\Omega}}^2 \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}^2$*, $\hat{\bs{\Omega}}^2 \hat{\mathbf{S}} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}^2$*;\\ &&$\hat{\mathbf{A}}_{p} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}$, $\hat{\mathbf{A}}_{p} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}^2$, $\hat{\mathbf{A}}_{p}^2 \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}$*, $\hat{\mathbf{A}}_{p}^2 \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}^2$*, $\hat{\mathbf{A}}_{p}^2 \hat{\mathbf{S}} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}^2$*;\\ \hline (0, 3) & 42 & $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{p} \hat{\mathbf{A}}_{k}$ \\ \hline (1, 3) & 43--47 & $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{p} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}$, $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{k} \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}$, $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{p} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}^2$, $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{k} \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}^2$, $\hat{\bs{\Omega}} \hat{\mathbf{A}}_{p} \hat{\mathbf{S}} \hat{\mathbf{A}}_{k} \hat{\mathbf{S}}^2$ \\ \hline \end{tabular} \flushleft {\small Note: (a) The invariant basis consists of the traces of the tensors listed above.} \end{table} Table~\ref{tab:basis} shows the minimal integrity bases for rotational invariance with the given raw input set $\hat{\mathcal{Q}}$~\cite{spencer1962isotropic}. Note that the vectors $\nabla p$ and $\nabla k$ must first be mapped to antisymmetric tensors as follows: \begin{subequations} \label{eq:vector2anti} \begin{align} \hat{\mathbf{A}}_p & = -\mathbf{I} \times \nabla \hat{p}\\ \hat{\mathbf{A}}_k & = -\mathbf{I} \times \nabla \hat{k} \end{align} \end{subequations} where $\mathbf{I}$ is the second-order identity tensor, and $\times$ denotes the tensor cross product. 
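As a concrete illustration of this step, the sketch below maps the (normalized) gradient vectors to antisymmetric tensors via the standard cross-product matrix and evaluates a few of the invariants in Table~\ref{tab:basis} as traces. This is one common realization of $-\mathbf{I} \times \nabla \hat{p}$, not necessarily the exact sign convention used here; a consistent overall sign at most flips the sign of some features, which does not affect the learning.

\begin{verbatim}
# Sketch of the invariant-feature construction for a single cell.
import numpy as np

def antisym(v):
    """Cross-product matrix of a 3-vector: antisym(v) @ w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def invariants(S, W, gp, gk):
    """A few entries of the integrity basis: traces of tensor products
    of normalized inputs S (symmetric), W = Omega, A_p, and A_k."""
    Ap, Ak = antisym(gp), antisym(gk)
    tr = np.trace
    return np.array([
        tr(S @ S), tr(S @ S @ S),             # features 1-2
        tr(W @ W), tr(Ap @ Ap), tr(Ak @ Ak),  # features 3-5
        tr(W @ W @ S), tr(W @ W @ S @ S),     # start of features 6-14
        tr(W @ Ap), tr(Ap @ Ak), tr(W @ Ak),  # features 15-17
        tr(W @ Ap @ Ak),                      # feature 42
    ])
\end{verbatim}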
Note that the asterisk ($*$) on a term means to include all terms formed by cyclic permutation of the anti-symmetric tensor labels (e.g., $\hat{\bs{\Omega}}^2 \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}$* is short for $\hat{\bs{\Omega}}^2 \hat{\mathbf{A}}_{p} \hat{\mathbf{S}}$ and $ \hat{\mathbf{A}}_{p}^2 \hat{\bs{\Omega}} \hat{\mathbf{S}}$). As a result, a set of 47 invariant features is constructed to represent the information of the mean flow feature set $\mathcal{Q}$. This construction procedure ensures the completeness of the rotational invariants with respect to the given set of raw tensor and vector inputs. To further enrich the input features, this basis of 47 features from vector and tensor local flow variables is supplemented by an additional ten features from Wang et al.~\cite{Wang2016}, which also utilize scalar RANS flow variables. For example, the wall-distance-based Reynolds number $Re_d$ is an important indicator to distinguish the boundary layer from shear flows. Although some features in~\cite{Wang2016} may be redundant, since they are invariant functions of the constructed invariant basis, the performance of the random forest regressor is robust in the presence of redundant inputs. Finally, an input feature space of 57 invariants (collectively denoted as $\mathbf{q}$) is constructed for machine learning. \subsection{Representation of Reynolds Stress Discrepancy} As discussed in Section~\ref{sec:intro}, it is preferable to choose physical variables as the responses (target results) of the machine learning model. Since a major source of model-form errors in RANS simulations comes from the modeled Reynolds stresses, a natural choice would be to directly learn the Reynolds stresses from the data. However, this choice would totally abandon the RANS model and rely solely on data instead. Although potential model-form errors may exist, the RANS model predictions are still valuable in most circumstances, and machine learning should play the role of a complement to, rather than a replacement of, RANS modeling. Therefore, the discrepancies of the RANS-modeled Reynolds stresses are suitable candidates for the responses. Nevertheless, the discrepancies cannot simply be represented by the difference of each tensor component, since such a representation is frame dependent and makes it difficult to impose physical constraints. Following the work of Iaccarino and co-workers~\cite{emory2013modeling}, the discrepancies are formulated in the six physically interpretable dimensions (i.e., magnitude, shape, and orientation) of the Reynolds stress tensor based on the eigen-decomposition, \begin{equation} \label{eq:tau-decomp} \boldsymbol{\tau} = 2 k \left( \frac{1}{3} \mathbf{I} + \mathbf{a} \right) = 2 k \left( \frac{1}{3} \mathbf{I} + \mathbf{V} \Lambda \mathbf{V}^T \right) \end{equation} where $k$ is the turbulent kinetic energy, which indicates the magnitude of $\boldsymbol{\tau}$; $\mathbf{I}$ is the second-order identity tensor; $\mathbf{a}$ is the deviatoric part of $\boldsymbol{\tau}$; $\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3]$ and $\Lambda = \textrm{diag}[\lambda_1, \lambda_2, \lambda_3]$ with $\lambda_1+\lambda_2+\lambda_3=0$ are the orthonormal eigenvectors and eigenvalues of $\mathbf{a}$, respectively, indicating its shape and orientation. 
To impose the realizability constraint of the Reynolds stress, the eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$ are transformed to barycentric coordinates $C_1$, $C_2$, and $C_3$ as follows~\cite{banerjee2007presentation}: \begin{subequations} \label{eq:lambda2c} \begin{align} C_1 & = \lambda_1 - \lambda_2 \\ C_2 & = 2(\lambda_2 - \lambda_3) \\ C_3 & = 3\lambda_3 + 1 \end{align} \end{subequations} with $C_1 + C_2 + C_3 = 1$. The barycentric coordinates can be plotted as a triangle, and $C_1$, $C_2$, and $C_3$ indicate the area portions of the three sub-triangles in Cartesian coordinates $\bs\xi \equiv (\xi, \eta)$. Any point within the triangle is a convex combination of the three vertices, i.e., \begin{equation} \boldsymbol{\xi} = \boldsymbol{\xi}_{1c}C_1 + \boldsymbol{\xi}_{2c}C_2 + \boldsymbol{\xi}_{3c}C_3 \end{equation} where $\boldsymbol{\xi}_{1c}$, $\boldsymbol{\xi}_{2c}$, and $\boldsymbol{\xi}_{3c}$ denote the coordinates of the three vertices of the triangle. After the above mapping, the coordinate $\boldsymbol{\xi} \equiv (\xi, \eta)$ uniquely identifies the shape of the anisotropy tensor. Similar to the Lumley triangle~\cite{pope2001turbulent}, the Reynolds stresses falling in the interior of the barycentric triangle are realizable. Representation of the discrepancy in the orientation (eigenvectors) of the Reynolds stress tensor is more challenging than that for the eigenvalues. Moreover, there are no explicit physical constraints on eigenvector systems, and less attention has been given to the quantification and reduction of discrepancies in them. Wang et al.~\cite{mfu5} proposed a method to perturb eigenvectors by using Euler angles. Wang et al.~\cite{Wang2016} predicted the discrepancies in eigenvectors parameterized by Euler angles based on DNS data. In this work, Euler angles are also employed to represent the discrepancies in the eigenvectors of the RANS-modeled Reynolds stresses. The Euler angle system used follows the $z$--$x'$--$z''$ convention in rigid body dynamics~\cite{goldstein80euler}. That is, if a local coordinate system $x$--$y$--$z$ spanned by the three eigenvectors was initially aligned with the global coordinate system ($X$--$Y$--$Z$), the current configuration could be obtained by the following three consecutive intrinsic rotations about the axes of the local coordinate system: (1) a rotation about the $z$ axis by angle $\varphi_1$, (2) a rotation about the $x$ axis by $\varphi_2$, and (3) another rotation about its $z$ axis by $\varphi_3$. The local coordinate axes usually change orientations after each rotation. Finally, the Reynolds stress tensor is projected to six physically meaningful parameters representing its magnitude ($k$), shape ($\xi$, $\eta$), and orientation ($\varphi_1$, $\varphi_2$, $\varphi_3$). The discrepancies ($\Delta \log k$, $\Delta \xi$, $\Delta \eta$, $\Delta \varphi_1$, $\Delta \varphi_2$, $\Delta \varphi_3$, collectively denoted as $\Delta{\bs\tau}$) of the Reynolds stress can be represented in these six projections. Note that the TKE discrepancy $\Delta\log k$ is the logarithm of the ratio of the DNS-predicted TKE ($k^{dns}$) to the RANS-predicted TKE ($k^{rans}$), i.e., \begin{equation} \Delta\log k = \log{\frac{k^{dns}}{k^{rans}}}. \end{equation} Therefore, these discrepancies are dimensionless quantities. Moreover, they have also been demonstrated to have similar characteristics among closely related flows~\cite{mfu3,Wang2016}, and thus are chosen as the learning targets. 
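A minimal sketch of the magnitude and shape projections is given below. The barycentric vertex coordinates used here are one common convention and are an assumption, not necessarily the choice used for the figures in this paper; the orientation is treated analogously through the eigenvector matrix $\mathbf{V}$.

\begin{verbatim}
# Sketch: mapping a symmetric 3x3 Reynolds stress tensor tau to the
# magnitude/shape coordinates (k, xi, eta). The vertex locations XV are
# an assumed convention for the barycentric triangle.
import numpy as np

XV = np.array([[1.0, 0.0],               # xi_1c: one-component vertex
               [0.0, 0.0],               # xi_2c: two-component vertex
               [0.5, np.sqrt(3) / 2]])   # xi_3c: isotropic vertex

def shape_coords(tau):
    k = 0.5 * np.trace(tau)                      # turbulent kinetic energy
    a = tau / (2.0 * k) - np.eye(3) / 3.0        # anisotropy tensor
    lam = np.sort(np.linalg.eigvalsh(a))[::-1]   # lam1 >= lam2 >= lam3
    C = np.array([lam[0] - lam[1],               # C1
                  2.0 * (lam[1] - lam[2]),       # C2
                  3.0 * lam[2] + 1.0])           # C3; C1 + C2 + C3 = 1
    xi, eta = C @ XV                             # convex combination
    return k, xi, eta
\end{verbatim}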
\subsection{Construction of Machine Learning Model Based on Random Forest} After identifying the input feature set $\mathbf{q}$ and the output responses $\Delta\bs\tau$, a machine learning algorithm needs to be chosen to build the functional relation between the input features and the responses. There are various choices of supervised learning algorithms, e.g., K-nearest neighbors~\cite{altman1992introduction}, linear regression~\cite{james2013introduction}, Gaussian processes~\cite{rasmussen2006gaussian}, tree-based methods (e.g., decision trees, random forests, and bagging)~\cite{breiman2001random}, and neural networks~\cite{anderson1995introduction}. As discussed in~\cite{Wang2016}, a major consideration is the capability to deal with the high dimensionality of the feature space. This consideration becomes more important in the current work since the input space is expanded to one with 57 features, the dimension of which is much higher than that in~\cite{Wang2016}. Therefore, the random forest regressor~\cite{breiman2001random}, known to be suitable for high-dimensional regression problems, is employed to build the regression function of the Reynolds stress discrepancies with respect to the mean flow features. Random forest regression is an ensemble learning technique that aggregates predictions from a number of decision trees. The decision tree model stratifies the input space by a series of decision rules, and those rules can be learned from the training data. By stratifying the input space in a tree-like manner, the decision tree is able to handle high-dimensional problems and is computationally efficient as well. However, one major disadvantage of a single decision tree is that it tends to overfit the data, which often leads to poor predictions. This disadvantage can be overcome, and the prediction performance significantly improved, by aggregating a large number of trees, which is the essence of the random forest algorithm. In the random forest model, the ensemble of decision trees is built with bootstrap aggregation samples (i.e., sampling with replacement) drawn from the training data~\cite{friedman2001elements}. Moreover, only a subset of randomly selected features is used when determining the splits of each tree. This reduces the correlation among trees in the ensemble and thus decreases the variance of the ensemble prediction. \section{Numerical Results} \label{sec:result} \subsection{Case Setup} The fully developed turbulent flow in a square duct is considered to demonstrate the proposed framework. Although this flow has a simple geometry, it features the in-plane secondary flow induced by the Reynolds stresses. All RANS turbulence models under the linear eddy viscosity hypothesis fail to predict the secondary mean motion, and even the Reynolds-stress transport model (RSTM) cannot predict it well~\cite{billard2011}. The errors stem from the modeled Reynolds stresses. Therefore, we aim to improve the RANS-predicted Reynolds stresses by learning from the data of similar flows. The geometry of the duct flow is presented in Fig.~\ref{fig:domain_duct}. The Reynolds number $Re$ is based on the edge length $D$ of the square and the bulk velocity. All lengths presented below are normalized by the height $h$ of the computational domain, which is half of the edge length. \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{duct-domain.pdf} \caption{Domain shape for the flow in a square duct. The $x$ coordinate represents the streamwise direction. 
Secondary flows induced by the Reynolds stress imbalance exist in the $y$--$z$ plane. Panel (b) shows that the computational domain covers a quarter of the cross-section of the physical domain. This is due to the symmetry of the mean flow in both $y$ and $z$ directions as shown in panel (c).} \label{fig:domain_duct} \end{figure} The Reynolds stress discrepancy function is trained on the data from flows at Reynolds numbers $Re = 2200, 2600, 2900$ to predict the flow at a higher Reynolds number $Re = 3500$. All flows have the same geometry. The data of the training flows are obtained from direct numerical simulations (DNS)~\cite{pinelli2010reynolds}. Note that the DNS data of the flow to be predicted ($Re = 3500$) are reserved for comparison and are not used for training. The mean flow patterns among these flows are similar. In the cross-plane of the duct, there is a counter-rotating vortex pair located at each of the four corners (Fig.~\ref{fig:domain_duct}(c)). However, the recirculation bubble moves closer to the wall and its size decreases as the Reynolds number increases. A baseline RANS simulation is conducted for each flow to obtain the mean flow features and the training data of the Reynolds stress discrepancies. Since linear eddy viscosity models are not able to predict the mean flow features of the secondary motions, the Launder-Gibson RSTM is adopted to perform the baseline simulations. As indicated in Fig.~\ref{fig:domain_duct}, only one quadrant of the physical domain is simulated due to the symmetry of the mean flow with respect to the centerlines along the $y$- and $z$-axes. No-slip boundary conditions are applied at the walls, and symmetry boundary conditions are applied on the symmetry planes. The DNS Reynolds stresses are interpolated onto the mesh of the RANS simulation to calculate the discrepancy. The RANS simulations are performed in an open-source CFD platform, OpenFOAM, using the built-in incompressible flow solver~\texttt{simpleFoam}~\cite{weller1998tensorial}. Mesh convergence studies have been performed. The random forest regressor is constructed with decision trees, each of which is built to its maximum depth by successive splitting of nodes until each leaf is left with one training data point. To control the prediction performance of a random forest model, there are two important free parameters, i.e., the number of trees and the number of randomly selected features from which each split in each tree is determined. Generally, a higher number of trees leads to a better performance. Based on our testing, an ensemble of $200$ trees is large enough to give a robust prediction. The number of features randomly selected is commonly smaller than the total number of input features. The reason for embedding the randomness is to enhance the diversity of the trees. Therefore, the random forest prediction is more robust and less likely to over-fit the data. The size of the randomly selected subset of features is commonly chosen as the square root of the total number of input features~\cite{svetnik2003random}. In the current test case, the prediction results were shown to be insensitive to the number of features over which each split was determined. \subsection{Results and Interpretation} \subsubsection{Verification of DNS Data} The aim of the PIML framework is to reduce the discrepancies in the RANS-modeled Reynolds stresses. With the improved Reynolds stresses, one should be able to obtain an accurate prediction of the velocity field. 
However, the outcome of the velocity propagation depends on the quality of the training data, i.e., the DNS Reynolds stresses. Although the Reynolds stresses from DNS simulations are assumed to be more accurate than the RANS predictions, it is not guaranteed that they can be propagated to a better mean velocity field, due to potential statistical convergence errors. Thompson et al.~\cite{thompson2016methodology} recently demonstrated that, even for channel flows, the Reynolds stresses of different DNS databases in the literature lead to significant discrepancies in the propagated velocity fields. To better evaluate the performance of machine learning predictions of propagated velocities, it is useful to check the velocity field obtained by directly propagating the DNS Reynolds stresses via the RANS equations. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{u-DNS-legend}\\ \subfloat[$U_y$]{\includegraphics[width=0.45\textwidth]{uy-DNS}} \subfloat[$U_z$]{\includegraphics[width=0.45\textwidth]{uz-DNS}} \caption{In-plane velocity profiles (a) $U_y$ and (b) $U_z$ obtained by propagating the DNS Reynolds stresses $\boldsymbol{\tau}_{dns}$ via the RANS equations at Reynolds number $Re = 3500$. The DNS results are also plotted for comparison.} \label{fig:U_DNS} \end{figure} The DNS data of the Reynolds stresses at $Re = 3500$ are used to solve the RANS equations, and the corresponding mean velocity field is obtained. The propagated mean field of the in-plane velocity is compared in Fig.~\ref{fig:U_DNS} to that provided by DNS space-time averaging. The in-plane velocity components $U_y$ and $U_z$ on the four cross-sections ($y/h = 0.25, 0.5, 0.75$ and $1.0$, as indicated in Fig.~\ref{fig:domain_duct}(b)) are presented, but only the profiles in the region below the diagonal are shown due to the diagonal symmetry. It can be seen that the propagated results agree well with the DNS profiles along all four cross-sections for both $U_y$ and $U_z$. However, in the regions away from the corner (e.g., $y/h > 0.75$ or $z/h > 0.4$), the propagated velocity profiles slightly deviate from the DNS results. Especially for $U_z$, notable discrepancies can be observed in the profile at $y/h = 1$. These discrepancies might come from small errors in the Reynolds stresses (e.g., errors introduced when interpolating onto the RANS mesh), since the secondary flow is sensitive to the Reynolds stress components. Since the magnitude of the secondary velocity decreases away from the corner, the relative discrepancies there are even more pronounced. Nevertheless, the overall quality of the Reynolds stress data is considered satisfactory for obtaining an improved velocity field. \subsubsection{Learning and Prediction of Reynolds Stress} The discrepancy functions in the six physical projections are learned from the training flows, and predictions are made for the test flow. We first investigate the prediction performance of the shape of the Reynolds stress anisotropy tensor, which can be visualized in a barycentric triangle. 
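For concreteness, the learning step itself can be assembled with standard tools. The following is a minimal scikit-learn sketch, not the implementation used here; the training arrays are shown as random placeholders for the assembled invariant features (57 per cell) and discrepancy components (six per cell) of the training flows.

\begin{verbatim}
# Sketch of the discrepancy-function learning step (not the authors'
# code). X holds the 57 invariant features per mesh cell of the training
# flows; Y holds the six discrepancy components per cell. Random arrays
# stand in for the assembled data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X_train = np.random.rand(1000, 57)   # placeholder for assembled features
Y_train = np.random.rand(1000, 6)    # placeholder for assembled responses

rf = RandomForestRegressor(
    n_estimators=200,      # ensemble size found sufficient in the text
    max_features="sqrt",   # ~sqrt(57) features considered per split
    max_depth=None,        # grow trees to maximum depth ...
    min_samples_leaf=1,    # ... until each leaf holds one training point
)
rf.fit(X_train, Y_train)             # train on the training flows
dtau_pred = rf.predict(X_train[:5])  # query at test-flow feature vectors
\end{verbatim}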
\begin{figure}[htbp] \centering \subfloat[$x/H = 0.25$]{\includegraphics[width=0.45\textwidth]{bary_yLoc_1_duct3500}} \subfloat[$x/H = 0.75$]{\includegraphics[width=0.45\textwidth]{bary_yLoc_3_duct3500}} \caption{Barycentric map of the predicted Reynolds stress anisotropy for the test flow ($Re = 3500$), learned from the training flows ($Re = 2200$, $2600$, and $2900$). The prediction results at two streamwise locations, $x/H = 0.25$ and $0.75$, are compared with the corresponding baseline (RSTM) and DNS results in panels (a) and (b), respectively.} \label{fig:bayRe} \end{figure}

The prediction results on two typical cross-sections ($y/H = 0.25$ and $y/H = 0.75$) are plotted in the barycentric triangle in Figs.~\ref{fig:bayRe}a and~\ref{fig:bayRe}b, respectively. The Reynolds stresses on the cross-section at $y/H = 0.25$, going from the wall to the outer layer, start from two-component limit states (bottom edge of the triangle) and move towards three-component anisotropic states (middle area of the triangle). For the cross-section at $y/H = 0.75$, the spatial variation of the turbulence states is similar. The baseline RSTM results capture this trend to some extent, especially in the regions away from the wall. This is much better than linear eddy viscosity models (results not shown), whose predictions show a trend opposite to the truth. Although the RSTM predictions away from the wall are satisfactory, discrepancies are still pronounced, especially in the near-wall region. For example, it can be seen in Fig.~\ref{fig:bayRe}a that the DNS Reynolds stress anisotropy on the wall is at the two-component state, since the velocity fluctuations in the wall-normal direction are suppressed by the blocking of the bottom wall. Moving away from the wall, the anisotropy first shifts towards the one-component state and then towards three-component anisotropic states. In contrast, the RSTM-predicted anisotropy on the wall is closer to the two-component axisymmetric state, and it moves directly towards generic turbulence states away from the wall. By correcting the baseline RANS-modeled Reynolds stresses with the trained discrepancy function, the PIML-predicted anisotropy is significantly improved. In both Figs.~\ref{fig:bayRe}a and~\ref{fig:bayRe}b, the PIML-predicted anisotropies (circles) show much better agreement with the DNS results (squares) than the RSTM prediction does, especially in the near-wall regions.

\begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{duct_Re3500_deltaVB_f57.pdf} \caption{Rotation angle $\Delta \varphi_2$ of the predicted Reynolds stress from the baseline for the test flow ($Re = 3500$). The profiles are shown at four streamwise locations $x/H = 0.25, 0.5, 0.75,$ and $1$. Corresponding DNS and baseline (reference lines) results are also plotted for comparison.} \label{fig:angle} \end{figure}

The barycentric coordinates of the anisotropy tensor are shown to be considerably improved by the machine learning prediction process. However, the improvement of the eigenvalues alone may not necessarily lead to a better prediction of the anisotropy tensor. When the eigenvectors of the RANS-predicted anisotropy markedly deviate from the DNS results, the Reynolds stress reconstructed even with the DNS eigenvalues but the RANS eigenvectors may show even larger discrepancies in its tensor components. Therefore, corrections are required for both the shape and the orientation of the anisotropy tensor.
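One way to quantify the orientation discrepancy discussed next is to form the rotation that maps the baseline eigenvector frame onto the DNS eigenvector frame and decompose it into Euler angles. The sketch below uses our own conventions (the z-x-z sequence and the eigenvalue ordering are assumptions, since the exact convention is not restated here):
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def orientation_discrepancy(a_rans, a_dns):
    """Euler angles of the rotation taking the eigenvector frame of
    the RANS anisotropy tensor to that of the DNS one."""
    _, v_rans = np.linalg.eigh(a_rans)
    _, v_dns = np.linalg.eigh(a_dns)
    # Flip one axis if needed so both frames are right-handed,
    # which makes the frame-to-frame map a proper rotation.
    v_rans[:, 2] *= np.sign(np.linalg.det(v_rans))
    v_dns[:, 2] *= np.sign(np.linalg.det(v_dns))
    rot = Rotation.from_matrix(v_dns @ v_rans.T)
    return rot.as_euler('zxz')  # (dphi_1, dphi_2, dphi_3) in radians
\end{verbatim}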
Figure~\ref{fig:angle} presents the orientation discrepancy profiles of the RANS-predicted Reynolds stresses, i.e., the rotation angles of the DNS anisotropy tensors from the baseline RSTM results. Note that only the angle discrepancies $\Delta\varphi_2$ are presented; the results for $\Delta\varphi_1$ and $\Delta\varphi_3$ are omitted due to their qualitative similarity. The angles are presented in radians. Notable discrepancies between the eigenvector systems of the DNS and baseline anisotropy tensors can be observed. In particular, in the near-wall regions ($y/H < 0.2$), the rotation angles $\Delta\varphi_2$ exceed $0.2$ rad (more than $10$ degrees). It can be seen that these angle discrepancies are well predicted by the trained regression function over the entire domain, and their spatial variations are also well captured. However, slight wiggles can be found in the predicted $\Delta\varphi_2$ on the lower part of the profile at $y/H = 1$. This non-smoothness originates from the pointwise estimation of the random forest model in the feature space, which cannot guarantee spatial smoothness in the physical domain.

\begin{figure}[htbp] \centering \includegraphics[width=0.5\textwidth]{duct_Re3500_K_f57.pdf} \caption{Turbulence kinetic energy of the test flow ($Re = 3500$), learned from the training flows ($Re = 2200, 2600$, and $2900$). The profiles are shown at four streamwise locations $x/H = 0.25, 0.5, 0.75,$ and $1$. Corresponding DNS and baseline (RSTM) results are also plotted for comparison.} \label{fig:TKE} \end{figure}

The turbulence kinetic energy (TKE) is also not well predicted by the baseline RANS model, as can be seen in Fig.~\ref{fig:TKE}. The RSTM tends to overestimate the magnitudes of the Reynolds stresses, which are almost twice those of the DNS results in most regions. The discrepancies of the RSTM-modeled TKE exist over the entire flow domain and are especially large close to the corner. Similar to the anisotropy prediction, the TKE field corrected by the trained random forest model is significantly improved over the baseline results. Fig.~\ref{fig:TKE} shows that the TKE profiles of the corrected Reynolds stresses are nearly identical to the DNS profiles.

\begin{figure}[htpb] \centering \includegraphics[width=0.5\textwidth]{tauLegend}\\ \subfloat[Baseline $\tau_{yy}$]{\includegraphics[width=0.3\textwidth]{./tauyy_rans}} \hfill \subfloat[DNS $\tau_{yy}$]{\includegraphics[width=0.3\textwidth]{./tauyy_dns}}\hfill \subfloat[Predicted $\tau_{yy}$]{\includegraphics[width=0.3\textwidth]{./tauyy_predict}}\\ \subfloat[Baseline $\tau_{zz}$]{\includegraphics[width=0.3\textwidth]{./tauzz_rans}} \hfill \subfloat[DNS $\tau_{zz}$]{\includegraphics[width=0.3\textwidth]{./tauzz_dns}}\hfill \subfloat[Predicted $\tau_{zz}$]{\includegraphics[width=0.3\textwidth]{./tauzz_predict}} \caption{Contour plots of the normal components $\tau_{yy}$ and $\tau_{zz}$ for the baseline (a, d), DNS (b, e) and machine-learning-predicted (c, f) results.} \label{fig:tau_cont} \end{figure}

\begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{u_f57_legend}\\ \subfloat[normal stress imbalance]{\includegraphics[width=0.45\textwidth]{./duct_Re3500_Tau_yy-zz_f57.pdf}} \subfloat[shear component]{\includegraphics[width=0.45\textwidth]{./duct_Re3500_Tau_2_f57.pdf}} \caption{Profiles of (a) the normal stress imbalance $\tau_{yy} - \tau_{zz}$ and (b) the shear component $\tau_{xy}$ of the corrected Reynolds stresses with the discrepancy model trained on 57 features. The profiles are shown at four streamwise locations $x/H = 0.25, 0.5, 0.75,$ and $1$.
Corresponding DNS and baseline (RSTM) results are also plotted for comparison.} \label{fig:tau} \end{figure}

The results shown above demonstrate that all the physical projections of the RANS-predicted Reynolds stresses are significantly improved by the random forest discrepancy model. Therefore, it is expected that the tensor components should also be improved over the corresponding baselines. Figure~\ref{fig:tau_cont} shows contour comparisons of the baseline, DNS, and PIML-predicted results for the turbulent normal stress components $\tau_{yy}$ and $\tau_{zz}$. These two normal components of the Reynolds stress tensor are known to be important for the velocity propagation in the duct flow, since their imbalance ($\tau_{yy} - \tau_{zz}$) is the main driving force of the secondary flow. Both $\tau_{yy}$ and $\tau_{zz}$ are markedly overestimated by the RSTM over the entire domain, which is due to its overestimation of the TKE (see Fig.~\ref{fig:TKE}). Moreover, the spatial variation patterns of the RSTM predictions are significantly different from those of the DNS results, especially in the near-corner region. As expected, the machine learning predictions are considerably improved over the RSTM baseline. Most of the magnitudes, features, and patterns of the DNS results are captured well in the PIML predictions for both $\tau_{yy}$ and $\tau_{zz}$.

In Figs.~\ref{fig:tau}a and~\ref{fig:tau}b, we also compare the profiles of the normal stress imbalance $\tau_{yy} - \tau_{zz}$ and the turbulent shear stress $\tau_{xy}$ on four cross-sections to demonstrate the improvement of the machine-learning predictions more clearly. It can be seen that the RSTM captures the spatial pattern of the normal stress imbalance, which is positive near the wall and becomes negative away from the wall. However, the magnitude $|\tau_{yy} - \tau_{zz}|$ of the imbalance term is significantly overestimated. Moreover, the RSTM underestimates the turbulent shear stress $\tau_{xy}$. The discrepancies of the RSTM-modeled shear component $\tau_{xy}$ are more notable on the cross-section at $y/H = 0.25$, which is close to the lower left corner. As expected, the PIML-corrected results show pronounced improvements over the RSTM baselines. The PIML predictions nearly coincide with the DNS results for both the $\tau_{yy} - \tau_{zz}$ and $\tau_{xy}$ profiles. This demonstrates that the discrepancies in all Reynolds stress tensor components that are relevant to the mean motion predictions are well predicted by the trained discrepancy functions.

\subsubsection{Propagation of Improved Reynolds Stress Prediction}
The improvement of the Reynolds stresses enabled by the PIML framework, the success of which has been demonstrated above, is an important step toward data-driven, predictive turbulence modeling. However, the ultimate goal is to obtain more accurate quantities of interest (QoI) after propagating the corrected Reynolds stresses through the RANS equations. To investigate the improvement of the propagated mean velocity field, we substitute the Reynolds stress field in the RANS momentum equations with the corrected one and solve the equations.

\begin{figure}[htpb] \centering \includegraphics[width=0.4\textwidth]{u_f57_legend}\\ \subfloat[$U_y$]{\includegraphics[width=0.45\textwidth]{./uy_f57}} \subfloat[$U_z$]{\includegraphics[width=0.45\textwidth]{./uz_f57}} \caption{In-plane velocity profiles (a) $U_y$ and (b) $U_z$ obtained by propagating the PIML-predicted Reynolds stresses via the RANS equations at Reynolds number $Re = 3500$.
The baseline (RSTM) and DNS results are also plotted for comparison. The discrepancy functions of the Reynolds stresses are trained on 57 features.} \label{fig:U_f57} \end{figure}

The mean secondary velocity profiles of $U_y$ and $U_z$ obtained by propagating the PIML-predicted Reynolds stress field are shown in Figs.~\ref{fig:U_f57}a and~\ref{fig:U_f57}b, respectively. To facilitate comparisons, the RSTM baseline and DNS results are also plotted in the same figures. The spatial variation of the RSTM-simulated velocity basically captures the trend of the truth, but its magnitude is not predicted well. Notable deviations from the DNS results can be observed. Especially in the near-corner region, where the secondary motion is strong, the discrepancies are large in both the RSTM-simulated $U_y$ and $U_z$. In contrast, the PIML-predicted results show much better agreement with the DNS results. The improvements are even more pronounced in the regions where the secondary flow is strong (e.g., $y/h = 0.25$ and $0.5$). This can be seen clearly in the contour plot of the secondary flow by zooming in on the near-corner region (Figure~\ref{fig:U_cont}). In this region, the mean flow pattern simulated by the RSTM is notably different from the DNS results. For the RSTM-modeled mean secondary motion (Fig.~\ref{fig:U_cont}(a)), the flow approaches the corner along the diagonal, and its velocity decreases only very close to the corner ($y/h < 0.05$ and $z/h < 0.05$).

\begin{figure}[htpb] \centering \includegraphics[width=0.3\textwidth]{legend-contour}\\ \subfloat[Baseline RSTM]{\includegraphics[width=0.33\textwidth]{contour_uRANS_zoomin.pdf}} \subfloat[DNS]{\includegraphics[width=0.33\textwidth]{contour_uDNS_zoomin.pdf}} \subfloat[Predicted]{\includegraphics[width=0.33\textwidth]{contour_uPred_zoomin.pdf}} \caption{Contour and vector plots of the secondary mean motions of the (a) baseline, (b) DNS and (c) machine-learning-predicted results in the corner region. The color of the contour represents the magnitude of the secondary velocity ($\sqrt{U_y^2 + U_z^2}$).} \label{fig:U_cont} \end{figure}

However, the DNS data show a different flow pattern, in which the magnitude of the flow velocity towards the corner decreases earlier. The secondary velocity is significantly reduced for $y/h < 0.15$ and $z/h < 0.15$, and its magnitude decreases to almost zero for $y/h < 0.1$ and $z/h < 0.1$. Comparing Figs.~\ref{fig:U_cont}b and~\ref{fig:U_cont}c, the flow field propagated from the PIML prediction captures the general pattern of the DNS results in the near-corner region excellently. It shows a significant improvement over the baseline results, suggesting that the PIML-predicted Reynolds stresses are superior and can provide a better in-plane velocity field, especially in the region with a strong secondary flow. Slight discrepancies still exist in the regions with a mild secondary flow. For example, the PIML-predicted $U_z$ profile at $y/h = 1$ deviates from the DNS results (Fig.~\ref{fig:U_f57}b). A possible reason is that the training data may contain small errors introduced in the interpolation process, which can cause notable velocity discrepancies in regions where the secondary flow is weak. Similar discrepancies can be found in the velocity profiles propagated with the DNS Reynolds stresses, which are shown in Fig.~\ref{fig:U_DNS}b.

\subsubsection{Merits of Expansion of Invariant Feature Space}
One of the novelties of this work lies in applying an integrity invariant basis to the construction of the input feature space in the PIML learning-prediction process.
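For illustration, a handful of such invariant features can be computed from the normalized mean strain-rate and rotation-rate tensors as sketched below. This is a minimal example under our own assumptions (normalization by the turbulence time scale $k/\varepsilon$ and our own variable names); the full 57-feature set used in this work is built from more raw tensors and contains many more invariants:
\begin{verbatim}
import numpy as np

def minimal_invariant_features(grad_u, k, eps):
    """A few integrity-basis invariants of the normalized strain-rate
    (S) and rotation-rate (W) tensors; grad_u is the 3x3 mean velocity
    gradient, k the TKE, and eps the dissipation rate."""
    t = k / eps                          # turbulence time scale
    s = 0.5 * (grad_u + grad_u.T) * t    # normalized strain-rate tensor
    w = 0.5 * (grad_u - grad_u.T) * t    # normalized rotation-rate tensor
    return np.array([
        np.trace(s @ s),                 # tr(S^2)
        np.trace(s @ s @ s),             # tr(S^3)
        np.trace(w @ w),                 # tr(W^2)
        np.trace(w @ w @ s),             # tr(W^2 S)
        np.trace(w @ w @ s @ s),         # tr(W^2 S^2)
    ])
\end{verbatim}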
Compared to the set of ten input features used in~\cite{Wang2016}, the current feature space is markedly expanded. A pertinent question is what benefits the expanded feature space offers. In other words, would it be possible to achieve similar success by using an incomplete invariant basis as the input space, e.g., the ten features used in~\cite{Wang2016}?

\begin{figure}[htpb] \centering \includegraphics[width=0.4\textwidth]{u_f57_legend}\\ \subfloat[$U_y$]{\includegraphics[width=0.45\textwidth]{uy_f10}} \subfloat[$U_z$]{\includegraphics[width=0.45\textwidth]{uz_f10}} \caption{In-plane velocity profiles (a) $U_y$ and (b) $U_z$ obtained by propagating the PIML-predicted Reynolds stresses via the RANS equations at Reynolds number $Re = 3500$. The baseline (RSTM) and DNS results are also plotted for comparison. The discrepancy functions of the Reynolds stresses are trained on the ten features used in~\cite{Wang2016}.} \label{fig:U_f10} \end{figure}

To investigate this issue, we perform the same training, prediction, and propagation steps for the same test case shown above, but with the input set of ten features used in \cite{Wang2016} instead of the current 57 features. The propagated mean secondary velocity profiles $U_y$ and $U_z$ from the corrected Reynolds stress field are shown in Figs.~\ref{fig:U_f10}a and~\ref{fig:U_f10}b, respectively. The profiles of the PIML-predicted velocity are improved over the baseline RSTM results in regions close to the corner ($y/h = 0.25$) for both components $U_y$ and $U_z$. However, in the regions away from the corner ($y/h = 0.5$ to $0.75$), the profiles of the machine-learning predictions largely deviate from the DNS results and become even worse than the baseline predictions. Unphysical wiggling of the velocity profiles can be observed in both figures. Compared to the mean velocity with the expanded feature space shown in Fig.~\ref{fig:U_f57}, the accuracy of the propagated mean motion with the ten-feature input set deteriorates significantly.

\begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{u_f57_legend}\\ \subfloat[normal stress imbalance]{\includegraphics[width=0.45\textwidth]{./duct_Re3500_Tau_yy-zz_f10.pdf}} \subfloat[shear component]{\includegraphics[width=0.45\textwidth]{./duct_Re3500_Tau_2_f10.pdf}} \caption{Profiles of (a) the normal stress imbalance $\tau_{yy} - \tau_{zz}$ and (b) the shear component $\tau_{xy}$ of the corrected Reynolds stresses with the discrepancy model trained on the ten features used in~\cite{Wang2016}. The profiles are shown at four streamwise locations $x/H = 0.25, 0.5, 0.75,$ and $1$. Corresponding DNS and baseline (RSTM) results are also plotted for comparison.} \label{fig:tau_10} \end{figure}

The deterioration of the mean flow prediction indicates that the PIML-corrected Reynolds stresses are not accurate enough to be propagated to an improved mean velocity field. Figures~\ref{fig:tau_10}a and~\ref{fig:tau_10}b show the profiles of the normal stress imbalance term ($\tau_{yy} - \tau_{zz}$) and the shear stress component $\tau_{xy}$ of the corrected Reynolds stresses with the input space of ten features. It can be seen that the machine learning predictions are significantly improved over the baseline RSTM results, since the profiles of both terms show better agreement with the DNS results. The norms of the discrepancies between the prediction and the truth are significantly reduced compared to those of the RSTM results.
However, a notable difference from the machine learning predictions with 57 features (Fig.~\ref{fig:tau}) is that the profiles of the predicted Reynolds stresses with ten input features wiggle in the lower part of the duct. Especially for the shear component~$\tau_{xy}$, several bumps can be clearly seen on the profiles at $y/h = 0.5$ and $0.75$. Although the overall Reynolds stress predictions are improved (i.e., the discrepancies in the tensor components are reduced), the derivative of the turbulent shear stress field becomes even worse than the baseline in these regions with wiggles and bumps. These unphysical wiggles can pollute the propagated velocity field, since it is the divergence of the Reynolds stress that appears in the momentum equation and determines the velocity propagation. This explains the significant deterioration of the velocity prediction in Fig.~\ref{fig:U_f10}a.

The expanded input space based on an integrity invariant basis used in this work significantly improves the learning and prediction performance for the Reynolds stress discrepancies, and thus improved mean velocity predictions can be obtained through the RANS propagation. The merits of applying the expanded input set to construct the random forest model are twofold. First, the field predicted by the random forest model tends to be non-smooth due to its pointwise estimation. The level of non-smoothness increases when the dimension of the input space is lower than that of the underlying truth. In other words, if the features are not rich enough to differentiate different response points in feature space, the prediction tends to be more non-smooth due to the projection onto an incomplete basis. Using a complete invariant basis significantly increases the ``resolution'' of the input space. Thus, the prediction performance with the expanded features (Fig.~\ref{fig:tau}) is markedly superior to that with the previous input set of ten features (Fig.~\ref{fig:tau_10}). Note that the increase in the resolution of the input space is more important for learning discrepancies in the orientation of the Reynolds stress tensor, since the eigenvectors contain more information than the eigenvalues do, and thus more features are needed to explain them. Second, the current 57 input features also include rotational invariants in addition to full invariants, which can also improve the learning performance for the angle discrepancies, since the Euler angles are not reflection invariant.

To improve the capability of generalization, one possible approach would be to expand the training data to include reflected states of the system and then to teach the model to be fully invariant on this expanded training set~\cite{ling2016machine}. On the other hand, we can try to explore better parameterizations of the eigenvector system instead of using Euler angles. Improving the representation of discrepancies in the eigenvectors is ongoing research, which is beyond the scope of the current work.

It is worth noting that using the expanded feature basis increases the risk of overfitting because it introduces more free parameters into the model. In this study, the model performance was evaluated on a single flow configuration at multiple Reynolds numbers. It is very likely that the performance on other flow configurations would be unreliable due to overfitting. Further validation is necessary to assess the generalization of these models.
Nevertheless, it is already useful to have a model that is specific to a given flow configuration, because it is very common in industry to run many simulations of closely related flows.

\section{Discussion} \label{sec:discussion}
\subsection{Concept of ``physics-informed machine learning''}
The term ``physics-informed machine learning'' used in this work emphasizes the attempt to consider and embed physical domain knowledge into every stage of the machine learning process, including the construction of the input feature space, the choice of output responses, and the learning-prediction process. For most physical problems, data are not rich enough to conduct traditional machine learning, since most existing algorithms were developed for business applications and have rarely been used for physical systems. Ling et al.~\cite{ling2016machine} demonstrated the merits of embedding physical truths (e.g., invariance properties) into the machine learning process. Arguably, we believe that state-of-the-art machine learning techniques have difficulty in learning the hard constraints of a physical system (e.g., conservation laws, realizability) from any reasonable amount of data. Therefore, in the proposed method, machine learning is employed to correct the RANS model instead of replacing it. Moreover, physical hard constraints (e.g., the realizability of the Reynolds stress) and domain knowledge (e.g., the reasoning for choosing the raw features) are incorporated. We emphasize the concept of physics-informed machine learning to draw attention from audiences in both the physical modeling and the machine learning communities. We try to demonstrate that data-driven modeling is a promising complement to traditional physical modeling. At the same time, we have incorporated as much turbulence domain knowledge as possible instead of depending entirely on data.

\subsection{Challenges and Perspectives of the Current Framework}
Although it has been demonstrated that the current PIML framework is a promising way to improve predictive turbulence modeling, there are a few challenges associated with the propagation of the corrected Reynolds stresses to the mean velocity field, which need to be addressed in future work. Here we briefly discuss these challenges. In both~\cite{Wang2016} and the current work, the RANS-modeled Reynolds stresses are shown to be significantly improved. However, propagating the success in Reynolds stress predictions to QoIs (e.g., the mean velocity field) is still challenging. First, we should acknowledge that the improvements in the Reynolds stresses are from the point of view of pointwise estimation. It is possible that the predictions are close to the truth but are not smooth (i.e., slightly wiggling around the truth), which might pollute the propagated velocity field. This is because the currently used machine learning algorithm, i.e., the random forest, may not necessarily improve the spatial derivative of the Reynolds stress field, due to its pointwise statistics. Second, numerical stability could be another issue that affects robust propagation. The second issue is relatively trivial and can be solved by using numerical tricks, e.g., adding artificial diffusion terms. The first issue, the non-smoothness of the machine learning predictions, is the main roadblock for the velocity propagation. Effectively using the spatial-correlation information of the Reynolds stress field contained in the data is crucial to further improving the current framework.
One possible method would be to assume a non-stationary spatial correlation structure, whose hyper-parameters can be determined based on the data and physical prior knowledge. The pointwise machine learning predictions could then be regularized by this correlation structure to ensure physically plausible smoothness. Finally, it is worth noting that in this paper we have yet to demonstrate the ability to generalize the predictive performance more broadly to a wide range of flows, since only one flow configuration at multiple Reynolds numbers is considered. Generalizing the predictive capability by using more comprehensive databases with various flow physics will be the subject of future study.

\section{Conclusion} \label{sec:conclusion}
Recently, the growing availability of high-fidelity data sets has led to increased interest in using data-driven approaches to improve the predictive capability of RANS models. Wang et al.~\cite{Wang2016} demonstrated that the RANS-modeled Reynolds stresses can be improved by learning the functional form of the Reynolds stress discrepancy from available data. However, it is still an \emph{a priori} study, since whether these improved Reynolds stresses can be propagated to obtain a better velocity field remains unclear. In this work, we introduce and demonstrate the procedures toward a complete Physics-Informed Machine Learning (PIML) framework for predictive turbulence modeling, including learning the Reynolds stress discrepancy function, predicting the Reynolds stresses in different flows, and propagating the predicted Reynolds stresses to the mean flow field. To improve the learning-prediction performance, the input features are expanded by constructing an integrity invariant basis from the given raw mean flow variables. The predictive accuracy of the velocity field obtained by propagating the PIML-corrected Reynolds stresses is investigated. The fully developed turbulent flow in a square duct is used as the test case. The discrepancy functions are trained on flows at lower Reynolds numbers and used to predict a flow at a higher Reynolds number. The numerical results show excellent predictive performance in both the Reynolds stresses and the propagated velocity field, demonstrating the merits of the proposed PIML approach in predictive turbulence modeling.

\section*{Acknowledgment}
Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2017-0664 J

\section*{Compliance with Ethical Standards}
Conflict of Interest: The authors declare that they have no conflict of interest.
\section{Introduction}
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{fig1_5.pdf} \caption{ Prior methods (\eg, SPIN~\cite{kolotouros2019learning} and CMR~\cite{kolotouros2019convolutional}) usually reconstruct 3D meshes of the human body from the global image feature vector extracted by neural networks, where the dense correspondences between the mesh surface and the image pixels are missing, leading to suboptimal results (top). Our DecoMR framework explicitly establishes such correspondence in the feature space with the aid of a novel continuous UV map, which yields better results in mesh details (bottom). } \label{fig:introduction} \vspace{-12pt} \end{figure}

Estimation of the full human body pose and shape from a monocular image is a fundamental task for various applications such as human action recognition~\cite{hussein2013human, xia2012view}, VR/AR~\cite{huang2017towards} and video editing~\cite{huang2015hybrid}. It is challenging mostly due to the inherent depth ambiguity and the difficulty of obtaining ground-truth 3D human body data.

There are several popular representations for 3D objects in the literature, \eg, point clouds, 3D voxels and 3D meshes. Because of its compatibility with existing computer graphics engines and its efficiency in representing object surfaces in detail with reasonable storage, the 3D mesh representation has been widely adopted for 3D human body reconstruction~\cite{kanazawa2018end,bogo2016keep,kolotouros2019learning,guan2009estimating,pishchulin2016deepcut,zanfir2018monocular,huang2017towards,pavlakos2018learning,omran2018neural,yao2019densebody,kolotouros2019convolutional,zhu2019detailed}. However, unlike the 3D voxel representation, the 3D mesh representation lacks the dense correspondence between the template human mesh surface and the image pixels, although such dense correspondence between the input and the output has been proven crucial for various tasks~\cite{newell2016stacked,zhu2019detailed}. Due to this limitation, most existing 3D mesh based methods, either model-based~\cite{kanazawa2018end, pavlakos2018learning, omran2018neural, kolotouros2019learning} or model-free~\cite{kolotouros2019convolutional}, have to ignore the correspondence between the mesh representation and the pixel representation, and estimate the human meshes based on either global image features~\cite{kanazawa2018end,kolotouros2019convolutional,kolotouros2019learning} or hierarchical projection and refinement~\cite{zhu2019detailed}, which is time-consuming and sensitive to the initial estimation.

To utilize the 3D mesh representation without losing the correspondence between the mesh space and the image space, we propose a 3D human mesh estimation framework that explicitly establishes the dense correspondence between the output 3D mesh and the input image in the UV space.

\emph{Representing the output mesh by a new UV map:} Every point on the mesh surface is represented by its coordinates on the continuous UV map. Therefore, the 3D mesh can be represented as a location map in the UV space, whose pixel values are the 3D coordinates of the corresponding points on the mesh surface, as shown in Figure~\ref{fig:introduction}. Instead of using the SMPL default UV map, we construct a new continuous UV map that maintains more of the neighboring relations of the original mesh surface, by parameterizing the whole mesh surface into a single part on the UV plane, as shown in Figure~\ref{fig:introduction}.
\emph{Mapping image features to the UV space:} To map the image features to the continuous UV map space, we first use a network that takes a monocular image as input and predicts an IUV image~\cite{alp2018densepose}, which assigns each pixel to a specific location on the body surface. Then the local image features from the decoder are transferred to the UV space with the guidance of the predicted IUV image, constructing transferred feature maps that are well aligned with the corresponding mesh areas. Given the transferred local features, we use both the local features and the global feature to estimate the location map in the UV space, which is further used to reconstruct the 3D human body mesh with the predefined UV mapping function. Since our UV map is continuous and maintains the neighboring relationships among body parts, details between body parts can be well preserved when the local features are transferred.

In summary, our contributions are twofold: \begin{enumerate} \item[$\bullet$] We propose a novel UV map that maintains most of the neighboring relations on the original mesh surface. \item[$\bullet$] We explicitly establish the dense correspondence between the output 3D mesh and the input image by the transferred local image features. \end{enumerate}

We extensively evaluate our method on multiple widely used benchmarks for 3D human body reconstruction. Our method achieves state-of-the-art performance on both 3D human body mesh reconstruction and 3D human body pose estimation.

\section{Related Work}
\subsection{Optimization-based methods}
Pioneering works solve 3D human body reconstruction by optimizing the parameters of a predefined 3D human mesh model, \eg, SCAPE~\cite{anguelov2005scape} and SMPL~\cite{loper2015smpl}, with respect to ground-truth body landmark locations~\cite{guan2009estimating}, or by employing a 2D keypoint estimation network~\cite{bogo2016keep}. To improve the precision, extra landmarks are used in~\cite{lassner2017unite}. Recent work~\cite{zanfir2018monocular} enables multi-person body reconstruction by incorporating human semantic part segmentation cues as well as scene and temporal constraints.

\subsection{Learning-based methods}
\textbf{Model-based methods:} Direct reconstruction of the 3D human body from a single image is a relatively hard problem. Therefore, many methods incorporate a parameterized 3D human model and recast the problem as model parameter regression. For example, HMR~\cite{kanazawa2018end} regresses the SMPL parameters directly from the RGB image. In order to mitigate the lack of robustness caused by the inadequacy of in-the-wild training data, some approaches employ intermediate representations, such as 2D joint heatmaps and silhouettes~\cite{pavlakos2018learning}, semantic segmentation maps~\cite{omran2018neural} or IUV images~\cite{xu2019denserac}. Recently, SPIN~\cite{kolotouros2019learning} incorporates 3D human model parameter optimization into the network training process by supervising the network with the optimization results, and achieves state-of-the-art results among model-based 3D human body estimation approaches. Compared with optimization-based methods, model parameter regression methods are more computationally efficient. While these methods can make use of the prior knowledge embedded in the 3D human model, and tend to reconstruct more biologically plausible human bodies than model-free methods, their representation capability is limited by the parameter space of the predefined human models.
In addition, as stated in \cite{kolotouros2019convolutional}, the 3D human model parameter space might not be friendly to network learning. In contrast, our framework does not regress model parameters; instead, it directly outputs the 3D coordinates of each mesh vertex.

\begin{figure*}[htp] \centering \includegraphics[width=1\linewidth]{framework7.pdf} \caption{ Overview of our framework. Given an input image, an IUV image is first predicted by the correspondence net. Then the local image features are transferred to the UV space. The location net takes the transferred local features, the expanded global feature and a reference location map as input, and regresses a location map. Finally, the 3D mesh is reconstructed from the location map. } \label{framework} \vspace{-10pt} \end{figure*}

\textbf{Model-free methods: } Some methods do not rely on human models and regress the 3D human body representation directly from the image. BodyNet~\cite{varol2018bodynet} estimates a volumetric representation of the 3D human body with a Voxel-CNN. A recent work~\cite{gabeur2019moulding} estimates visible and hidden depth maps, and combines them to form a point cloud of the human body. Voxel and point cloud based representations are flexible and can represent objects with different topologies. However, their capability of reconstructing surface details is limited by the storage cost. CMR~\cite{kolotouros2019convolutional} uses a Graph-CNN to directly regress the 3D coordinates of vertices from image features. Densebody~\cite{yao2019densebody} estimates vertex locations in the form of a UV position map. A recent work~\cite{pumarola20193dpeople} represents 3D shapes using 2D geometry images, which can be regarded as a special kind of UV position map. These methods do not use any human model. However, they still lack the correspondence between the human mesh and the image, and estimate the whole surface relying only on the global image feature. In contrast, our method can employ local features for the reconstruction of the corresponding surface areas.

The efficacy of the UV space representation has been demonstrated in the recent work Tex2Shape~\cite{alldieck2019tex2shape}, where the 3D human shape is estimated from a texture map obtained by transferring image pixels according to the IUV image estimated by DensePose~\cite{alp2018densepose}. We also use the IUV image to guide the human mesh estimation. However, in~\cite{alldieck2019tex2shape}, the UV transfer is used to preprocess the raw image and is independent of the model learning, while we incorporate the UV transfer into our network to enable end-to-end learning. We observe the efficacy of learning the transferred features end-to-end, which has also been demonstrated by prior works, \eg, Spatial Transformer Networks~\cite{jaderberg2015spatial} and Deformable ConvNets~\cite{dai2017deformable}.

Very recently, HMD~\cite{zhu2019detailed} refines an initially estimated human mesh by hierarchical projection and mesh deformation. PIFu~\cite{saito2019pifu} reconstructs the 3D human body as an implicit function. HMD and PIFu are able to utilize local image features to achieve impressive details in the reconstruction results. However, HMD is computationally intensive and sensitive to the initial estimation, while the implicit function lacks the semantic information of the human body. In contrast, we estimate the pixel-to-surface dense correspondence from images directly, which is computationally efficient and more robust, and the location map maintains the semantic information of the human body.
\section{Our Method}
\textbf{Overview.} As shown in Figure \ref{framework}, our framework DecoMR consists of two components: a dense correspondence estimation network (CNet), which operates in the image space, and a location network (LNet), which operates in a new continuous UV space. The CNet has an encoder-decoder architecture to estimate an IUV image. It also extracts local image features $\mathcal{F}_{im}$, and then uses the estimated IUV image to transfer the image features $\mathcal{F}_{im}$ into the local features $\mathcal{F}_{UV}$ in the UV space. The LNet takes the above transferred local features $\mathcal{F}_{UV}$ as input, and regresses a location map $X$, whose pixel values are the 3D coordinates of the corresponding points on the mesh surface. Finally, the 3D human mesh $V$ is reconstructed from the above location map by using a predefined UV mapping function. As a result, the location map and the transferred feature map are well aligned in the UV space, thus leading to dense correspondence between the output 3D mesh and the input image.

Although the SMPL UV map~\cite{loper2015smpl} is widely used in the literature~\cite{yao2019densebody,alldieck2019tex2shape,grigorev2019coordinate}, it loses the neighboring relationships between different body parts, as shown in Figure~\ref{fig:uv_map}(a), while such neighboring information is crucial for network learning, as stated in \cite{kolotouros2019convolutional}. Therefore, we design a new UV map that is able to maintain more of the neighboring relationships on the original mesh surface, as shown in Figure~\ref{fig:uv_map}(b).

The overall objective function of DecoMR is \begin{equation} \mathcal{L} = \mathcal{L}_{IUV} + \mathcal{L}_{loc} + \lambda_{con}\mathcal{L}_{con}. \label{eq:L} \end{equation} It comprises three loss functions with different purposes. The first loss, denoted as $\mathcal{L}_{IUV}$, minimizes the distance between the predicted IUV image and the ground-truth IUV image. The second loss function, denoted as $\mathcal{L}_{loc}$, minimizes the dissimilarity between the regressed human mesh (\ie, the location map) and the ground-truth human mesh. In order to encourage the output mesh to be aligned with the input image, we add an extra loss function, denoted as $\mathcal{L}_{con}$, which is a consistent loss that increases the consistency between the regressed location map and the ground-truth IUV image. The $\lambda_{con}$ in Equation \ref{eq:L} is a constant coefficient to balance the consistent loss $\mathcal{L}_{con}$. We first define the new UV map below and then introduce the different loss functions in detail.

\subsection{The Continuous UV map} \label{section_uv}
First, we define a new continuous UV map that preserves more of the neighboring relationships of the original mesh than the ordinary UV map of SMPL. As shown in Figure~\ref{fig:uv_map}(a), multiple mesh surface parts are placed separately on the SMPL default UV map, which loses the neighboring relationships of the original mesh surface. Instead of utilizing the SMPL UV map as in \cite{alldieck2019tex2shape,grigorev2019coordinate,yao2019densebody}, we design a new continuous UV map. We first carefully split the template mesh into an open mesh, while keeping the entire mesh surface as a whole. Then we utilize an area-preserving 3D mesh planar parameterization algorithm~\cite{jacobson2017libigl,jiang2017simplicial} to minimize the area distortion between the UV map and the original mesh surface, obtaining an initial UV map.
To maintain symmetry for every pair of symmetric vertices on the UV map, we further refine the initial UV map by first aligning the fitted symmetry axis with the $v$ axis and then averaging the UV coordinates of each vertex with those of its symmetric counterpart flipped about the $v$ axis.

\textbf{Comparisons.} Here we quantitatively show that our continuous UV map outperforms the SMPL UV map in terms of preserving the connection relationships between the vertices of the mesh. To do so, we compute a distance matrix on the mesh surface, where each element is the distance between a pair of vertices, and a corresponding distance matrix on the UV map. Figure~\ref{distance} shows such distance matrices. For the mesh surface, the distance between two vertices is defined as the length of the minimal path between them on the graph built from the mesh. For the UV map, the distance between two vertices is directly calculated as the distance between their UV coordinates.

Now we quantitatively evaluate the similarity between the distance matrices of the UV map and the original mesh in two aspects, as shown in Table~\ref{tab:similarity}. In the first aspect, we calculate the 2D correlation coefficient denoted as $S_1$: \begin{equation} S_1 = \frac{\sum\limits_{m} \sum\limits_{n}\left(A_{m n}-\bar{A}\right)\left(B_{m n}-\bar{B}\right)}{\sqrt{\left(\sum\limits_{m} \sum\limits_{n}\left(A_{m n}-\bar{A}\right)^{2}\right)\left(\sum\limits_{m} \sum\limits_{n}\left(B_{m n}-\bar{B}\right)^{2}\right)}}, \end{equation} where $A$ and $B$ are the distance matrices of the original mesh and the UV map, respectively, $\bar{A}$ and $\bar{B}$ are the mean values of $A$ and $B$, and $m$ and $n$ are the indices of mesh vertices. In the second aspect, we calculate the normalized cosine similarity between the distance matrices of the UV map and the original mesh, denoted as $S_2$. From Table~\ref{tab:similarity}, we see that our continuous UV map outperforms the SMPL UV map by large margins on both metrics, showing that our UV map preserves more neighboring relationships than the SMPL UV map.

\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{uv_space.pdf} \caption{Comparisons of UV maps. Row (a) shows the SMPL default UV map and row (b) shows our continuous UV map.} \label{fig:uv_map} \vspace{-5pt} \end{figure}

\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{dist_matrix.pdf} \caption{Comparisons of the distance matrices between vertices calculated on the SMPL UV map, the proposed UV map, and the original mesh surface. Compared to the SMPL UV map, the distance matrix of the proposed UV map is more similar to that of the original mesh.
} \label{distance} \end{figure}

\begin{table}[t] \centering \small \begin{tabular}{c|c|c} \hline UV map & 2D correlation ($S_1$) & cosine similarity ($S_2$)\\ \hline SMPL~\cite{loper2015smpl} & 0.2132 & 0.8306 \\ Ours & 0.7758 & 0.9458 \\ \hline \end{tabular} \caption{ Comparisons of the similarity between the vertices' distance matrices of the original mesh surface and different types of UV maps. $S_1$ is the 2D correlation coefficient and $S_2$ is the normalized cosine similarity. The proposed UV map outperforms the SMPL default UV map on both metrics.} \label{tab:similarity} \vspace{-10pt} \end{table}

\textbf{Pixel-to-Mesh Correspondence.} With the proposed UV map, every point on the mesh surface can be expressed by its coordinates on the UV map (\ie, its UV coordinates). Therefore, we can predict the pixel-to-surface correspondence by estimating the UV coordinates for each pixel belonging to the human body, leading to an IUV image as shown in Figure~\ref{fig:uv_map}. More importantly, we can also represent a 3D mesh with a location map in the UV space, where the pixel values are the 3D coordinates of the corresponding points on the mesh surface. Thus it is easy to reconstruct the 3D mesh from a location map with the following formula: \begin{equation} V_{i} = X(u_{i}, v_{i}), \end{equation} where $V_{i}$ denotes the 3D coordinates of a vertex, $X$ is the location map, and $u_{i}$ and $v_{i}$ are the UV coordinates of the vertex.

\subsection{Dense Correspondence Network (CNet)}\label{section_corr}
The CNet establishes the dense correspondence between the pixels of the input image and the areas of the 3D mesh surface. As illustrated in Figure~\ref{framework}, the CNet has an encoder-decoder architecture, where the encoder employs ResNet50~\cite{he2016deep} as the backbone, and the decoder consists of several upsampling and convolutional layers with skip connections to the encoder. In particular, the encoder encodes the image into a local feature map and a global feature vector, and also regresses the camera parameters, which are used to project the 3D mesh onto the image plane. The decoder first generates a mask of the human body, which distinguishes foreground pixels (\ie, human body) from background pixels. Then, the decoder outputs the exact UV coordinates for the foreground pixels, constituting an IUV image as shown in Figure~\ref{fig:uv_map}. With the predicted IUV image, the corresponding point on the mesh surface for every image pixel can be determined. The loss function for the CNet contains two terms: \begin{equation} \mathcal{L}_{IUV}=\lambda_{c}\mathcal{L}_{c} + \lambda_{r}\mathcal{L}_{r}, \end{equation} where $\mathcal{L}_{c}$ is a dense binary cross-entropy loss for classifying each pixel as foreground or background, $\mathcal{L}_{r}$ is a dense $l_1$ regression loss for predicting the exact UV coordinates, and $\lambda_{c}$ and $\lambda_{r}$ are two constant coefficients.

\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{uv_transfer.pdf} \caption{Illustration of the UV transferring of raw image pixels. Elements in the image space can be transferred to the UV space with the guidance of the IUV image.} \label{uv_transfer} \vspace{-10pt} \end{figure}

\subsection{Vertex coordinates regression} \label{section_loc}
The location net (LNet) aims to regress the 3D coordinates of the mesh vertices by outputting a location map, from which the 3D mesh can be reconstructed easily.
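In practice, this reconstruction simply samples the location map at the predefined per-vertex UV coordinates. Below is a minimal illustrative sketch (our own code, not the released implementation; nearest-neighbour sampling is assumed, whereas bilinear sampling would give smoother results):
\begin{verbatim}
import numpy as np

def mesh_from_location_map(loc_map, uv_coords):
    """Reconstruct mesh vertices V_i = X(u_i, v_i) from a location map.
    loc_map:   (H, W, 3) array of 3D coordinates in the UV space.
    uv_coords: (N, 2) per-vertex UV coordinates in [0, 1]."""
    h, w, _ = loc_map.shape
    rows = np.clip(np.rint(uv_coords[:, 1] * (h - 1)), 0, h - 1).astype(int)
    cols = np.clip(np.rint(uv_coords[:, 0] * (w - 1)), 0, w - 1).astype(int)
    return loc_map[rows, cols]  # (N, 3) vertex coordinates
\end{verbatim}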
As shown in Figure~\ref{framework}, the LNet first transfers image features from the image space to the UV space with the guidance of the predicted IUV image: \begin{equation} \mathcal{F}_{UV}(u, v) = \mathcal{F}_{im}(x, y), \label{equation:UV_transfer} \end{equation} where $(x, y)$ are the image-space coordinates of the pixels classified as foreground, and $(u, v)$ are the predicted UV-space coordinates of these pixels. $\mathcal{F}_{im}$ is the feature map in the image space and $\mathcal{F}_{UV}$ is the transferred feature map in the UV space. The feature map $\mathcal{F}_{UV}$ is well aligned with the output location map, so the LNet can predict the location map utilizing the corresponding local image features. In this way, the dense correspondence between image pixels and mesh surface areas is established explicitly. An example of raw image pixels transferred to the UV space is shown in Figure~\ref{uv_transfer}. Note that our framework transfers features instead of pixel values.

The LNet is a lightweight CNN with skip connections that takes the transferred local image features, the expanded global image feature and a reference location map as input. Intuitively, we apply a weighted $l_1$ loss between the predicted location map $X$ and the ground-truth location map $\hat{X}$, \ie, \begin{equation} \mathcal{L}_{map}=\sum_{u}\sum_{v}W(u,v)\cdot\left\|X(u,v)-\hat{X}(u,v)\right\|_{1}, \vspace{-5pt} \end{equation} where $W$ is a weight map used to balance the contributions of different mesh areas, and areas away from the torso are assigned higher weights.

We also reconstruct a 3D human mesh from the predicted location map and obtain the 3D joints from the mesh employing a joint regressor, as in previous works~\cite{kanazawa2018end,kolotouros2019convolutional,kolotouros2019learning}. Then we add supervision on the 3D coordinates of the joints and their projected 2D coordinates in the image space, \ie, \vspace{-5pt} \begin{equation} \mathcal{L}^{3D}_{J}=\sum_{i=1}^{k}\left\|Z_{i}-\hat{Z}_{i}\right\|_{1}, \vspace{-5pt} \end{equation} \begin{equation} \mathcal{L}_{J}^{2D}=\sum_{i=1}^{k}\left\|v_{i}(z_{i}-\hat{z}_{i})\right\|_{2}^{2}, \vspace{-5pt} \end{equation} where $Z_{i}$ and $z_{i}$ are the regressed 3D and 2D coordinates of the joints, $\hat{Z}_{i}$ and $\hat{z}_{i}$ refer to the coordinates of the ground-truth joints, and $v_i$ denotes the visibility of the $i$-th joint. Finally, the full loss for the LNet is \begin{equation} \mathcal{L}_{loc}= \mathcal{L}_{map} + \mathcal{L}^{3D}_{J} + \mathcal{L}_{J}^{2D}. \vspace{-5pt} \end{equation}

\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{consistent_loss.pdf} \caption{Illustration of our consistent loss between the location map and the IUV image. The 3D coordinates in the location map are transferred back to the image space using the IUV image, and then projected onto the image plane. The projected 2D coordinates are supervised by the coordinates of the corresponding pixels in the image space.} \label{consistent} \vspace{-10pt} \end{figure}

\textbf{Consistent Loss}: Besides the above widely used supervision, we add an extra supervision between the regressed location map and the ground-truth IUV image to improve the alignment between the 3D mesh and the image. As shown in Figure~\ref{consistent}, with an IUV image we can also transfer the location map from the UV space back to the image space and obtain the 3D coordinates for every foreground pixel. The 3D coordinates are then projected onto the image plane to obtain 2D coordinates, which should be consistent with the coordinates of the pixels in the image space.
The consistent loss is then constructed as follows: \begin{equation} \mathcal{L}_{con}=\sum_{(x, y)} \left\|(x, y)-\pi(X(u, v), c)\right\|^{2}_{2}, \vspace{-5pt} \end{equation} where the summation runs over the foreground pixels, $X$ is the predicted location map, $\pi(X, c)$ denotes the projection function with the predicted camera parameters $c$, and $x, y, u, v$ are the same as in Equation \ref{equation:UV_transfer}. This consistent loss is similar to the loss term $\mathcal{L}_{dense}$ in the recent work of Rong \etal~\cite{Rong_2019_ICCV}. However, in our framework there is no need to calculate the corresponding point on the mesh surface as in \cite{Rong_2019_ICCV}, because the correspondence between the mesh surface and the image pixels is already established.

\subsection{Implementation details} \label{section_detail}
We set $\lambda_{c}$, $\lambda_{r}$ and $\lambda_{con}$ to $0.2$, $1$ and $1$, respectively, and optimize the framework with an Adam optimizer~\cite{kingma2014adam}, with a batch size of 128 and a learning rate of $2.5\times10^{-4}$. The training data is augmented with random scaling, rotation, flipping and RGB channel noise. We first train the CNet for 5 epochs and then train the full framework end-to-end for 30 epochs.

\section{Experiments}
\subsection{Datasets}\label{dataset}
In the experiments, we train our model on the Human3.6M~\cite{ionescu2013human3}, UP-3D~\cite{lassner2017unite} and SURREAL~\cite{varol17_surreal} datasets, and provide evaluations on the test sets of Human3.6M, SURREAL and the LSP dataset~\cite{Johnson10}.

\textbf{Human3.6M}: Human3.6M~\cite{ionescu2013human3} is a large-scale indoor dataset for 3D human pose estimation, including multiple subjects performing typical actions like walking, sitting and eating. Following the common setting~\cite{kanazawa2018end}, we use subjects S1, S5, S6, S7 and S8 as training data and subjects S9 and S11 for evaluation. Results are reported using two widely used metrics (MPJPE and MPJPE-PA) under two popular protocols, P1 and P2, as defined in \cite{kanazawa2018end}.

\textbf{UP-3D}: UP-3D~\cite{lassner2017unite} is an outdoor 3D human pose estimation dataset. It provides 3D human body ground truth by fitting the SMPL model to images from 2D human pose benchmarks. We utilize the images of the training and validation sets for training.

\textbf{SURREAL}: The SURREAL dataset~\cite{varol17_surreal} is a large dataset providing synthetic images with ground-truth SMPL model parameters. We use the standard split setting~\cite{varol17_surreal} but remove all images with incomplete human bodies, and evaluate on the same sampled test set as BodyNet~\cite{varol2018bodynet}.

\textbf{LSP}: The LSP dataset~\cite{Johnson10} is a 2D human pose estimation benchmark. In our work, we evaluate the segmentation accuracy of each model using the segmentation annotations of~\cite{lassner2017unite}.
\begin{table} \begin{center} \begin{tabular}{c|c} \hline Methods & MPJPE-PA \\ \hline Lassner \etal~\cite{lassner2017unite} & 93.9 \\ SMPLify~\cite{bogo2016keep} & 82.3 \\ \hline Pavlakos \etal~\cite{pavlakos2018learning} & 75.9 \\ HMR\cite{kanazawa2018end} & 56.8 \\ NBF\cite{omran2018neural} & 59.9 \\ CMR\cite{kolotouros2019convolutional} & 50.1 \\ DenseRaC\cite{xu2019denserac} & 48.0 \\ SPIN\cite{kolotouros2019learning} & 41.1 \\ \hline Ours & \textbf{39.3} \\ \hline \end{tabular} \end{center} \caption{Comparison with the state-of-the-art mesh-based 3D human estimation methods on the Human3.6M test set. The numbers are joint errors in mm with Procrustes alignment under P2; lower is better. Our approach achieves the state-of-the-art performance. } \label{h36m} \vspace{-5pt} \end{table}

\begin{table} \begin{center} \begin{tabular}{c|c} \hline Methods & Surface Error\\ \hline SMPLify++~\cite{lassner2017unite} & 75.3 \\ Tung \etal~\cite{NIPS2017_7108} & 74.5 \\ BodyNet\cite{varol2018bodynet} & 73.6 \\ \hline Ours & \textbf{56.5} \\ \hline \end{tabular} \end{center} \caption{Comparison with the state-of-the-art methods on the SURREAL dataset. The numbers are the mean vertex errors in mm; lower is better. Our method outperforms the baselines by a large margin.} \label{surreal} \vspace{-5pt} \end{table}

\begin{table} \begin{center} \begin{tabular}{c|c|c|c|c} \hline & \multicolumn{2}{|c|}{FB Seg.} & \multicolumn{2}{|c}{Part Seg.}\\ & acc. & f1 & acc. & f1\\ \hline SMPLify \emph{oracle}~\cite{bogo2016keep} & \textbf{92.17} & \textbf{0.88} & 88.82 & 0.67 \\ SMPLify~\cite{bogo2016keep} & 91.89 & \textbf{0.88} & 87.71 & 0.67 \\ SMPLify on \cite{pavlakos2018learning} & 92.17 & \textbf{0.88} & 88.24 & 0.64 \\ \hline HMR~\cite{kanazawa2018end} & 91.67 & 0.87 & 87.12 & 0.60 \\ CMR~\cite{kolotouros2019convolutional} & 91.46 & 0.87 & 88.69 & 0.66 \\ SPIN~\cite{kolotouros2019learning} & 91.83 & 0.87 & 89.41 & 0.68 \\ \hline Ours & 92.10 & \textbf{0.88} & \textbf{89.45} & \textbf{0.69} \\ \hline \end{tabular} \end{center} \caption{Comparison with the state-of-the-art methods on the LSP test set. The numbers are accuracy and f1 scores; higher is better. SMPLify~\cite{bogo2016keep} is optimization-based, while HMR~\cite{kanazawa2018end}, CMR~\cite{kolotouros2019convolutional}, SPIN~\cite{kolotouros2019learning} and our method are regression-based. Our framework achieves the state-of-the-art result among regression-based methods and is competitive with optimization-based methods.} \label{lsp} \vspace{-10pt} \end{table}

\subsection{Comparison with the state-of-the-art}
In this section, we present comparisons of our method with other state-of-the-art mesh-based methods. Table \ref{h36m} shows the results on the Human3.6M test set. We train our model following the setting of CMR~\cite{kolotouros2019convolutional} and utilize Human3.6M and UP-3D as the training set. Our method achieves the state-of-the-art performance among the mesh-based methods. It is worth noting that SPIN~\cite{kolotouros2019learning} and our method focus on different aspects and are compatible: SPIN focuses on training with data for which 3D ground truth is scarce, and its network is trained with extra data from 2D human pose benchmarks, while we focus on the dense correspondence between the mesh and the image and do not include data from 2D human pose benchmarks. Similarly, we show the results on the SURREAL dataset in Table \ref{surreal}.
Our model is trained only with the training data of the SURREAL dataset and outperforms the previous methods by a large margin. The human shapes in the SURREAL dataset are of great variety, which verifies the human shape reconstruction capability of our method. We also investigate human shape estimation accuracy by evaluating the foreground-background and part-segmentation performance on the LSP test set. During the evaluation, we use the projection of the 3D mesh as the segmentation result. The predicted IUV image is not used in evaluation for fair comparison. The results are shown in Table~\ref{lsp}. Our regression based method outperforms the state-of-the-art regression based methods and is competitive with the optimization based methods, which tend to outperform regression based methods on this metric but have much lower inference speed.
\subsection{Ablative studies}
\begin{table} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} \hline UV & \multirow{2}{*}{$\mathcal{F}_{G}$} & \multirow{2}{*}{$\mathcal{F}_{L}$} & raw & \multicolumn{2}{|c|}{MPJPE} & \multicolumn{2}{|c}{MPJPE-PA} \\ map & ~ & ~ & pixel & P1 & P2 & P1 & P2 \\ \hline \multirow{4}{*}{SMPL} & \checkmark & & & 72.1 & 68.9 & 51.9 & 49.1 \\ ~ & & \checkmark & & 71.9 & 69.6 & 47.4 & 44.8 \\ ~ & \checkmark & \checkmark & & 65.0 & 61.7 & 45.1 & 42.6 \\ ~ & \checkmark & & \checkmark & 65.0 & 63.2 & 46.5 & 44.7 \\ \hline \multirow{4}{*}{Ours} & \checkmark & & & 69.5 & 67.7 & 49.4 & 47.1 \\ ~ & & \checkmark & & 69.8 & 68.4 & 44.6 & 42.3 \\ ~ & \checkmark & \checkmark & & \textbf{62.7} & \textbf{60.6} & \textbf{42.2} & \textbf{39.3} \\ ~ & \checkmark & & \checkmark & 63.2 & 61.0 & 45.5 & 42.6 \\ \hline \end{tabular} \end{center} \caption{Comparison on the Human3.6M test set with different UV maps and inputs of the location net. The numbers are 3D joint errors in mm. $\mathcal{F}_{G}$ and $\mathcal{F}_{L}$ refer to the global feature vector and the local feature map, respectively. With both UV maps, the frameworks using local features outperform the baseline using the global feature by a large margin. Combining the global feature and local features further improves the performance. However, transferring raw image pixels brings a much smaller gain. With the same input, the frameworks using our UV map outperform those using the SMPL default UV map. \label{tab:ablation} } \vspace{-20pt} \end{table}
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{loc_2.pdf} \caption{An example of a mesh reconstructed using our new UV map (top) and the SMPL default UV map (bottom). The SMPL default UV map may cause discontinuity between different parts as well as erroneous estimation of some vertices near part edges,
while our new UV map mitigates these problems.} \label{uv_space} \vspace{-15pt} \end{figure}
\begin{figure*}[t] \centering \includegraphics[width=0.15\linewidth]{im1046.jpg} \includegraphics[width=0.15\linewidth]{im1046.png} \includegraphics[width=0.15\linewidth]{im1078.jpg} \includegraphics[width=0.15\linewidth]{im1078.png} \includegraphics[width=0.15\linewidth]{im1174.jpg} \includegraphics[width=0.15\linewidth]{im1174.png} \includegraphics[width=0.15\linewidth]{im1208.jpg} \includegraphics[width=0.15\linewidth]{im1208.png} \includegraphics[width=0.15\linewidth]{im1246.jpg} \includegraphics[width=0.15\linewidth]{im1246.png} \includegraphics[width=0.15\linewidth]{im1321.jpg} \includegraphics[width=0.15\linewidth]{im1321.png} \includegraphics[width=0.15\linewidth]{im1446.jpg} \includegraphics[width=0.15\linewidth]{im1446.png} \includegraphics[width=0.15\linewidth]{im1910.jpg} \includegraphics[width=0.15\linewidth]{im1910.png} \includegraphics[width=0.15\linewidth]{im1968.jpg} \includegraphics[width=0.15\linewidth]{im1968.png} \includegraphics[width=0.15\linewidth]{105540rgb.png} \includegraphics[width=0.15\linewidth]{ours105540pred_mesh.png} \includegraphics[width=0.15\linewidth]{90500rgb.png} \includegraphics[width=0.15\linewidth]{ours90500pred_mesh.png} \includegraphics[width=0.15\linewidth]{91000rgb.png} \includegraphics[width=0.15\linewidth]{ours91000pred_mesh.png} \includegraphics[width=0.15\linewidth]{91200rgb.png} \includegraphics[width=0.15\linewidth]{ours91200pred_mesh.png} \includegraphics[width=0.15\linewidth]{94060rgb.png} \includegraphics[width=0.15\linewidth]{ours94060pred_mesh.png} \includegraphics[width=0.15\linewidth]{97020rgb.png} \includegraphics[width=0.15\linewidth]{ours97020pred_mesh.png} \caption{Qualitative results of our approach. Rows 1-3: LSP~\cite{Johnson10}. Rows 4-5: Human3.6M~\cite{ionescu2013human3}.} \label{qualitative} \vspace{-12pt} \end{figure*}
\begin{figure}[t] \centering \subfigure[Image]{ \begin{minipage}[b]{0.20\linewidth} \includegraphics[width=1\linewidth]{im1632.jpg} \includegraphics[width=1\linewidth]{im1703.jpg} \end{minipage}} \subfigure[Result]{ \begin{minipage}[b]{0.20\linewidth} \includegraphics[width=1\linewidth]{im1632.png} \includegraphics[width=1\linewidth]{im1703.png} \end{minipage}} \subfigure[Image]{ \begin{minipage}[b]{0.20\linewidth} \includegraphics[width=1\linewidth]{im1799.jpg} \includegraphics[width=1\linewidth]{im1960.jpg} \end{minipage}} \subfigure[Result]{ \begin{minipage}[b]{0.20\linewidth} \includegraphics[width=1\linewidth]{im1799.png} \includegraphics[width=1\linewidth]{im1960.png} \end{minipage}} \caption{Examples of erroneous reconstructions of our method. Typical failures can be attributed to challenging poses, viewpoints rarely seen in the training set, severe self-occlusion, as well as confusion caused by interactions among multiple people.} \label{erroneous} \vspace{-15pt} \end{figure}
In this section, we provide the ablation studies of the proposed method. We train all networks with training data from the Human3.6M and UP-3D datasets, and evaluate the models on the Human3.6M test set.
\textbf{Dense correspondence}: We first investigate the effectiveness of the dense correspondence between the 3D mesh and image features. We train networks that use only the global feature or the transferred local features as the input of LNet. The comparison is shown in Table \ref{tab:ablation}.
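For reference, the feature transfer step examined in this ablation can be sketched as follows: local features are scattered from foreground image pixels into a single-chart UV space and averaged per UV bin. The resolution, shapes and averaging rule below are our assumptions for illustration and may differ from the actual implementation.
\begin{verbatim}
# Sketch of transferring local image features to UV space using the
# predicted per-pixel UV coordinates; sizes are illustrative.
import torch

def transfer_to_uv(feat, uv, mask, uv_size=56):
    # feat: (C, H, W) local feature map, uv: (2, H, W) in [0, 1],
    # mask: (H, W) boolean foreground mask
    C = feat.shape[0]
    u = (uv[0][mask] * (uv_size - 1)).long()
    v = (uv[1][mask] * (uv_size - 1)).long()
    idx = v * uv_size + u                  # flattened UV bin per pixel
    f = feat[:, mask]                      # (C, N) foreground features
    out = torch.zeros(C, uv_size * uv_size).index_add_(1, idx, f)
    cnt = torch.zeros(uv_size * uv_size).index_add_(
        0, idx, torch.ones_like(idx, dtype=torch.float))
    out = out / cnt.clamp(min=1)           # average per UV bin
    return out.view(C, uv_size, uv_size)

uv_feat = transfer_to_uv(torch.randn(64, 56, 56), torch.rand(2, 56, 56),
                         torch.rand(56, 56) > 0.5)
\end{verbatim}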
With both UV maps, the framework utilizing transferred local features outperforms the baseline using the global feature by a large margin, which proves the effectiveness of the established dense correspondence. Combining the global feature with local features further improves the performance. We also train frameworks that transfer raw image pixels rather than image features and observe much less improvement than when transferring local features. We attribute this to the lack of human pose information in the transferred raw pixels. For images of the same person in different poses, the pixels of a certain body part will be transferred to the same position in the UV space, which generates similar inputs for the LNet. So the LNet can only use the transferred pixels to refine the estimation of human shape, and has to predict human pose based on the global feature alone. On the contrary, the CNet is able to embed human pose information into the image features, so the LNet can resort to the transferred features to refine both the human shape and pose estimation.
\textbf{UV map}: For the second ablative study, we investigate the influence of different UV maps. We compare the performance of frameworks using the SMPL default UV map~\cite{loper2015smpl} and our continuous UV map. As shown in Table~\ref{tab:ablation}, with the same input of LNet, the frameworks using our continuous UV map outperform those using the SMPL default UV map by a large margin. We attribute the gain to the continuity of the new UV map. As shown in Figure~\ref{uv_space}, some neighboring parts on the mesh surface are distant on the SMPL default UV map, such as arms and hands. This may lead to discontinuity of these parts on the final 3D mesh. Additionally, some faraway surface parts are very close on the UV plane, such as hands and feet, which might cause erroneous estimation of vertices on the edges of these parts. Both phenomena are shown in Figure~\ref{uv_space}. On the contrary, our UV map preserves more neighboring relations of the original mesh surface, so these problems are mitigated.
\subsection{Qualitative result} Some qualitative results are presented in Figure~\ref{qualitative}, and Figure~\ref{erroneous} includes some failure cases. Typical failure cases can be attributed to challenging poses, viewpoints rarely seen in the training set, severe self-occlusion, as well as confusion caused by interactions among multiple people.
\section{Conclusion} This work aims to solve the problem of the missing dense correspondence between the image features and the output 3D mesh in mesh-based monocular 3D human body estimation. The correspondence is explicitly established by IUV image estimation and image feature transferring. Instead of reconstructing the human mesh from a global feature, our framework is able to make use of extra dense local features transferred to the UV space. To facilitate the learning of the framework, we propose a new UV map that maintains more neighboring relations of the original mesh surface. Our framework achieves state-of-the-art performance among 3D mesh-based methods on several public benchmarks. Future work can focus on extending the framework to the reconstruction of surface details beyond existing human models, such as cloth wrinkles and hair styles.
\section*{Acknowledgement} We thank the reviewers for helpful discussions and comments. Wanli Ouyang is supported by the Australian Research Council Grant DP200103223. {\small \bibliographystyle{ieee}
\section{Introduction} Sampling with selection bias is often the only means to acquire data. \emph{Bias} in this context refers to the fact that certain observations occur more frequently than normal \cite{heckman1990varieties}. For instance, in social science experiments, data collected from university students will have different properties than data collected from the larger national or global population. This results in a statistical classification problem where the training and test data come from different distributions. Such problems are very challenging, because the information that is relevant to accurately classify training samples might not be relevant to classify test samples. This problem is more commonly known as \emph{sample selection bias} or \emph{covariate shift} \cite{cortes2008sample,quionero2009dataset,moreno2012unifying}. The setting from which the training data originates is often referred to as the \emph{source domain}, while the setting of interest is called the \emph{target domain} \cite{ben2010theory}. Instead of attempting to collect data in an unbiased manner, which might be difficult due to operational, financial or ethical reasons, we are interested in correcting for the domain difference and generalizing from the source to the target domain. In the case of covariate shift, the dominant method of accounting for the differences between domains is \emph{importance-weighting}: samples in the source domain are weighted based on their importance to the target domain. The classifier will subsequently change its predictions in order to avoid misclassifying highly important samples. It has been shown that, under certain conditions, an importance-weighted classifier will converge to the optimal target classifier \cite{cortes2010learning}. How fast it learns depends heavily on how different the domains are, expressed by for instance the R\'enyi divergence \cite{cortes2010learning}. The larger the divergence between the domains, the slower the rate of convergence of the classifier parameter estimator. Although importance-weighted classifiers are consistent under the right circumstances, their performance still depends strongly on how the importance weights themselves are determined. There has been quite a large variety of work on the behavior of different types of weight estimators: ratios of parametric probability distributions \cite{shimodaira2000improving}, kernel density estimators \cite{silverman1986density}, kernel mean matching \cite{huang2006correcting}, logistic discrimination \cite{bickel2009discriminative}, the Kullback-Leibler importance estimation procedure \cite{sugiyama2007covariate}, unconstrained least-squares importance fitting \cite{kanamori2009least}, nearest-neighbour based estimators \cite{loog2012nearest} or conservative minimax estimators \cite{wen2014robust}. Interestingly, these weight estimators can trade off consistency for faster convergence, by enforcing smoothness, inhibiting weight distribution bimodality or ensuring a minimum weight value. Importance-weighting is crucial to evaluating classifiers as well. Model selection is often done through cross-validation, where the training set is split into parts and each part is held back once to be evaluated on later \cite{efron1994introduction,kohavi1995study}. However, the standard cross-validation procedure will not account for domain differences.
As a result, its hyperparameter estimates are not optimal with respect to the target domain \cite{sugiyama2005model,sugiyama2007covariate,kouw2016regularization}. Effectively, the standard cross-validation procedure can produce a model selection bias \cite{cawley2010over}. The importance-weighted risk, on the other hand, \emph{can} account for domain differences. By weighting the source validation data, it approximates the target risk more closely. Better approximations of the target risk will allow for hyperparameter estimates that will make the model generalize better. Importance-weighting is a widely-trusted and influential method, but it can act in quite surprising ways. In this paper we show that, for small sample sizes, the sampling distribution of an importance-weighted estimator can be \emph{skewed}. Skewness refers to the fact that a distribution is not symmetric. That means that, although the estimator is unbiased, it will underestimate the parameter of interest for the majority of data sets, in the case of positive skew. Conversely, it will overestimate the true parameter for the majority of data sets in the case of negative skew. We explore the subsequent effects of this property on model selection under covariate shift. \section{Preliminaries} In this section, we introduce our notation, our example setting and explain importance-weighting. \subsection{Notation} Consider an input space ${\cal X}$, part of a $D$-dimensional vector space such as $\mathbb{R}^{D}$, and a set of classes ${\cal Y} = \{-1,+1\}$. A source domain is a joint distribution defined over these spaces, $({\cal X}, {\cal Y}, p_{\cal S})$, marked with the subscript ${\cal S}$ and a target domain is another $({\cal X}, {\cal Y}, p_{\cal T})$, marked with ${\cal T}$. Assuming covariate shift implies that the domains' conditional distributions are equal, i.e. $p_{\cal S}(y \mid x) = p_{\cal T}(y \mid x)$, while the marginal data distributions are different, i.e. $p_{\cal S}(x) \neq p_{\cal T}(x)$. Samples from the source domain are denoted as the pair $(x,y)$, with $n$ samples forming the source dataset ${\cal D}_{\cal S}^{n} = \{(x_i,y_i)\}_{i=1}^{n}$. Similarly, target samples are denoted as $(z,u)$ with $m$ samples forming the target dataset ${\cal D}_{\cal T}^{m} = \{(z_j, u_j)\}_{j=1}^{m}$. A classifier is a function that maps the input space to the set of classes, $h : {\cal X} \rightarrow {\cal Y}$. \subsection{Example setting} \label{sec:ex} For the purposes of illustrating a few concepts in the upcoming sections, we generate an example of a covariate shift classification problem. For the target data distribution, a normal distribution with a mean of $0$ and a standard deviation of $1$ is taken; $p_{\cal T}(x) = \mathcal{N}(x \mid 0, 1)$. For the source data distribution, we take a normal distribution with a mean of $0$ as well, but with a standard deviation of $0.75$. The class priors in both domains are set to be equal: $p_{\cal S}(y) = p_{\cal T}(y) = 1/2$. Similarly, the class-posterior distributions are set to be equal as well, both in the form of a cumulative normal distribution: $p_{\cal S}(y \mid x) = p_{\cal T}(y \mid x) = \Phi(yx)$. Figure \ref{fig:setting} plots the class-conditional distributions for the source domain (top) and the target domain (bottom). Essentially, the source domain is a biased sample of the target domain, because it favors samples close to $0$ and the decision boundary. Data is drawn through rejection sampling. 
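For reproducibility, data from this setting can be drawn with a few lines of code. The sketch below samples the marginal and the class-posterior directly, which produces the same joint distribution as the rejection sampler:
\begin{verbatim}
# Sketch of drawing data from the example setting.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def draw(n, sigma):
    # x ~ N(0, sigma), p(y = +1 | x) = Phi(x)
    x = rng.normal(0.0, sigma, n)
    y = np.where(rng.uniform(size=n) < norm.cdf(x), 1, -1)
    return x, y

x_s, y_s = draw(100, 0.75)   # source domain
x_t, y_t = draw(100, 1.00)   # target domain
\end{verbatim}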
\begin{figure}[h] \includegraphics[width=0.48\textwidth]{source_dist_siS075.png} \\ \includegraphics[width=0.48\textwidth]{target_dist_siT1.png} \caption{Example case of a covariate shift classification problem. (Top) source domain, with $p_{\cal S}(x \mid y) = \Phi(yx)\mathcal{N}(x \mid 0, 0.75) / p_{\cal S}(y)$. (Bottom) target domain, with $p_{\cal T}(x \mid y) = \Phi(yx)\mathcal{N}(x \mid 0, 1) / p_{\cal T}(y)$.} \label{fig:setting} \end{figure}
\subsection{Empirical Risk Minimization} A classifier is a function that assigns a class to each input. Here we will focus on linear classifiers, which project the data onto a vector and make decisions based on which side of the decision boundary the data point falls; $h(x) = x\theta$ \cite{friedman2001elements}. In the empirical risk minimization framework, the classifier's decisions are evaluated using a loss function. \emph{Risk} corresponds to the expected loss that the classifier incurs: $R(h) = \ \mathbb{E}[ \ell(h(x), y) ]$ \cite{mohri2012foundations}. For the examples in this paper, we choose a quadratic loss function, $\ell(h(x), y) =(h(x) - y)^2$ (as used in the Fisher classifier and the least-squares SVM). Because the risk function is an expectation, it can be approximated using the sample average $\hat{R}(h) = \ 1/n \sum_{i=1}^{n} \ell(h(x_i), y_i)$.
\subsection{Importance weighting} Considering that each domain has its own joint distribution, it has its own risk function as well. The source risk is $R_{\cal S}(h) = \mathbb{E}_{\cal S} [ \ell(h(x),y)]$, while the target risk is $R_{\cal T}(h) = \mathbb{E}_{\cal T} [ \ell(h(x),y)]$. Their estimators are, respectively: \begin{align} \hat{R}_{\cal S}(h) =& \ \frac{1}{n} \sum_{i=1}^{n} \ell(h(x_i), y_i ) \nonumber \\ \hat{R}_{\cal T}(h) =& \ \frac{1}{m} \sum_{j=1}^{m} \ell(h(z_j), u_j) \nonumber \, . \end{align} It is possible to relate the source and target risk functions with each other as follows: \begin{align} R_{\cal T}(h) =& \int_{\cal X} \sum_{y \in {\cal Y}} \ \ell(h(x), y) \ p_{\cal T}(x,y) \ \mathrm{d}x \nonumber \\ =& \int_{\cal X} \sum_{y \in {\cal Y}} \ \ell(h(x), y) \ \frac{p_{\cal T}(x,y)}{p_{\cal S}(x,y)} p_{\cal S}(x,y) \ \mathrm{d}x \nonumber \end{align} In the case of covariate shift, where $p_{\cal T}(y \mid x) = p_{\cal S}(y \mid x)$, the ratio of the joint distributions $p_{\cal T}(x,y) / p_{\cal S}(x,y)$ can be reduced to the ratio of data marginal distributions $p_{\cal T}(x) / p_{\cal S}(x)$. The new estimator is: \begin{align} \hat{R}_{\cal W}(h) = \frac{1}{n} \sum_{i=1}^{n} \ell(h(x_i), y_i) w(x_i) \nonumber \, , \end{align} where $w(x_i) = p_{\cal T}(x_i) / p_{\cal S}(x_i)$. So, the target risk can be estimated through a weighted average with respect to the source samples. Hence, the ratio of distributions can be recognized as importance weights: the ratio is larger than $1$ for samples that have a high probability under the target distribution relative to the source distribution and smaller than $1$ for samples that have a relatively low probability.
\begin{figure}[th] \includegraphics[width=.45\textwidth]{hist_W_siS075_N128_nR10000.png} \caption{Histogram of the importance weights in the example scenario.} \label{fig:histogram_weights} \end{figure}
The importance weights themselves are often distributed according to an exponential or geometric distribution: many weights are small and a few weights are large. Figure \ref{fig:histogram_weights} presents a histogram for the example setting.
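In code, the weights and the weighted risk estimator for the example setting of Section \ref{sec:ex} amount to the following sketch (the fixed classifier parameter $\theta = 1/(2\sqrt{\pi})$ anticipates the experiments below):
\begin{verbatim}
# Sketch of the importance weights and the weighted risk estimator.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 128
x = rng.normal(0.0, 0.75, n)                        # source sample
y = np.where(rng.uniform(size=n) < norm.cdf(x), 1, -1)

theta = 1.0 / (2.0 * np.sqrt(np.pi))                # fixed classifier
w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 0.0, 0.75)  # p_T(x) / p_S(x)
risk_w = np.mean((x * theta - y) ** 2 * w)          # weighted risk
\end{verbatim}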
As the domains become more dissimilar, eventually nearly all weights will be close to zero, while a few weights become extremely large. Weighting can have interesting effects on the behavior of an estimator. The next section discusses the variation in estimates as a function of different data sets.
\section{Sampling distribution} The probability distribution of an estimator's results as a function of the data is called the \emph{sampling distribution}. Properties of this distribution are interesting for a number of reasons. Firstly, the difference between the expected value of the sampling distribution and the underlying true risk is called the estimator's bias. It can be desirable to have an unbiased risk estimator: $\mathbb{E}[ \hat{R}(h)] - R(h) = 0$ for all $h$. In other words, there should be no systematic deviation in its estimates. For the case of importance-weighting, it is possible to show that the risk estimator is unbiased: \begin{align} \mathbb{E}_{\cal S}[ \hat{R}_{\cal W}(h)] =& \ \mathbb{E}_{\cal S}[ \frac{1}{n} \sum_{i=1}^{n} \ell(h(x_i), y_i) w(x_i) ]\nonumber \\ =& \ \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}_{\cal S}[ \ell(h(x), y) w(x) ]\nonumber \\ =& \ \frac{1}{n} \ n \ \mathbb{E}_{\cal T}[ \ell(h(x), y) ]\nonumber \\ =& \ R_{\cal T}(h) \nonumber \, . \end{align}
\begin{figure*}[th] \centering \includegraphics[width=0.32\textwidth]{hist_hRW_siS075_N2_nR10000.png} \includegraphics[width=0.32\textwidth]{hist_hRW_siS075_N4_nR10000.png} \includegraphics[width=0.32\textwidth]{hist_hRW_siS075_N8_nR10000.png} \\ \includegraphics[width=0.32\textwidth]{hist_hRW_siS075_N16_nR10000.png} \includegraphics[width=0.32\textwidth]{hist_hRW_siS075_N32_nR10000.png} \includegraphics[width=0.32\textwidth]{hist_hRW_siS075_N64_nR10000.png} \caption{Histograms of the risk estimates $\hat{R}_{\cal W}$ over $10,000$ data sets drawn by rejection sampling from the setting described in Section \ref{sec:ex}, for different sample sizes. Note that the skewness diminishes with more samples.} \label{fig:hist_RhW} \end{figure*}
\subsection{Sampling variance} Secondly, the variance of the sampling distribution is informative on how uncertain, or conversely how accurate, an estimator is. If the sampling variance reduces as a function of the sample size, then the estimator becomes more accurate with more data \cite{massart2007concentration}. However, in the case of a weighted estimator, depending on the size of the weights, the sampling variance might diverge to infinity \cite{cortes2010learning,kouw2017reducing}. For instance, it can be shown that the variance of the sampling distribution diverges for cases where the domains are too far apart \cite{cortes2010learning}. In fact, for our example case, it can be shown how the weights directly scale the sampling variance: \begin{align} \mathbb{V}_{\cal S}[ \hat{R}_{\cal W}&(h)] = \ \mathbb{E}_{\cal S}[ \Big( \frac{1}{n} \sum_{i=1}^{n} \ell(h(x_i), y_i) w(x_i) - R_{\cal T}(h) \Big)^2]\nonumber \\ =& \ \frac{1}{n^2} \sum_{i=1}^{n} \mathbb{E}_{\cal S}[ \Big( \ell(h(x), y) w(x) - R_{\cal T}(h) \Big)^2]\nonumber \\ =& \ \frac{1}{n^2} n \ \mathbb{E}_{\cal S}[ \ell(h(x), y)^2 w(x)^2 \nonumber \\ & \qquad \quad - 2 \ell(h(x), y) w(x) R_{\cal T}(h) \nonumber \\ & \qquad \quad + R_{\cal T}(h)^2 ] \nonumber \\ =& \ \frac{1}{n} \Big( \mathbb{E}_{\cal S}[ \ell(h(x), y)^2 w(x)^2] \nonumber \\ & \qquad - 2 \mathbb{E}_{\cal S}[ \ell(h(x), y) w(x)] R_{\cal T}(h) \nonumber \\ & \qquad + R_{\cal T}(h)^2 \Big) \nonumber \\ =& \ \frac{1}{n} \Big( \mathbb{E}_{\cal T}[ \ell(h(x), y)^2 w(x)] - R_{\cal T}(h)^2 \Big) \nonumber \, .
\end{align} Doing the same derivation for the target risk estimator yields: \begin{align} \mathbb{V}_{\cal T}[ \hat{R}_{\cal T}(h)] = \frac{1}{m} \Big( \mathbb{E}_{\cal T}[ \ell(h(x), y)^2 ] - R_{\cal T}(h)^2 \Big) \, . \nonumber \end{align} They differ in the expectation term: the weights scale the expected squared loss. For settings where the weights are small, i.e. settings where the domains are close, the importance-weighted estimator converges faster and is more accurate. This fact is exploited in importance sampling \cite{kahn1953methods,neal2001annealed,mcbook}. However, for settings where the weights are large, i.e. settings where the domains are far apart, the weighted estimator has a larger sampling variance, is therefore more uncertain and will need more samples to achieve the same level of accuracy as the target risk estimator.
\subsection{Sampling skewness} The \emph{skewness} of a distribution is an indicator of how symmetric it is around its expected value. For small sample sizes, the distribution of the weights can skew the sampling distribution of an importance-weighted estimator. The skewness of a distribution can be expressed using the moment coefficient of skewness: $\Gamma[x] = \mathbb{E}[ \big( (x - \mu)/ \sigma \big)^3 ]$ \cite{cramer2016mathematical,joanes1998comparing}. A negative skew (also known as \emph{left-skewed}) means that the probability mass of the distribution is concentrated to the right of the mean, while a positive skew (a.k.a. \emph{right-skewed}) implies that the probability mass concentrates to the left of the mean. Our importance-weighted estimator is skewed as: \begin{align} \Gamma_{\cal S} [ \hat{R}_{\cal W}&(h)] = \mathbb{E}_{\cal S} \big[ \Big( \frac{1/n \sum_{i=1}^{n} \ell(h(x_i), y_i) w(x_i) - R_{\cal T}(h)}{\sqrt{\mathbb{V}_{\cal S} [ \hat{R}_{\cal W}(h)]}} \Big)^3 \big] \nonumber \\ =& \ \frac{1}{n^3} \sum_{i=1}^{n} \mathbb{V}_{\cal S} [ \hat{R}_{\cal W}(h)]^{-3/2} \nonumber \\ & \qquad \mathbb{E}_{\cal S} \big[ \big( \ell(h(x), y) w(x) - R_{\cal T}(h) \big)^3 \big] \nonumber \\ =& \ \frac{n}{n^3} \mathbb{V}_{\cal S} [ \hat{R}_{\cal W}(h)]^{-3/2} \ \mathbb{E}_{\cal S} \big[\ell(h(x), y)^3 w(x)^3 \nonumber \\ & \qquad -3 \ \ell(h(x), y)^2 w(x)^2 R_{\cal T}(h) \nonumber \\ & \qquad +3 \ \ell(h(x), y) \ w(x) R_{\cal T}(h)^2 \nonumber \\ & \qquad - R_{\cal T}(h)^3 \big] \nonumber \\ =& \ \frac{1}{n^2} \mathbb{V}_{\cal S} [ \hat{R}_{\cal W}(h)]^{-3/2} \ \Big( \mathbb{E}_{\cal T}[\ell(h(x), y)^3 w(x)^2] \nonumber \\ & \qquad -3 \ n \, R_{\cal T}(h) \mathbb{V}_{\cal S}[\hat{R}_{\cal W}(h)] \nonumber \\ & \qquad - R_{\cal T}(h)^3 \Big) \label{eq:skew} \nonumber \, . \end{align} Again, doing the same derivation for the target risk estimator leads to: \begin{align} \Gamma_{\cal T}[ \hat{R}_{\cal T}(h)] =& \ \frac{1}{m^{2}} \ \mathbb{V}_{\cal T} [ \hat{R}_{\cal T}(h)]^{-3/2} \ \big( \mathbb{E}_{\cal T}[\ell(h(x), y)^3] \nonumber \\ & \qquad -3 \ m \, R_{\cal T}(h) \mathbb{V}_{\cal T}[\hat{R}_{\cal T}(h)] - R_{\cal T}(h)^3 \big) \, , \nonumber \end{align} showing that the skew of the importance-weighted estimator depends on multiplying the cubic loss with the squared weights. If the weights are large, the existing skew is scaled up. Note that the skew also reduces as the sampling variance increases. The moments of the sampling distribution of the risk estimator depend heavily on the problem setting. It is therefore difficult to make general statements regarding all possible covariate shift problem settings.
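The skew can, however, be estimated numerically for any concrete setting. The following sketch simulates repeated validation sets from the example of Section \ref{sec:ex} and computes the moment coefficient of skewness of the resulting risk estimates:
\begin{verbatim}
# Sketch of estimating the sampling skewness of the weighted risk
# estimator by simulation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
theta = 1.0 / (2.0 * np.sqrt(np.pi))
n, reps = 8, 10000

risks = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, 0.75, n)
    y = np.where(rng.uniform(size=n) < norm.cdf(x), 1, -1)
    w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 0.0, 0.75)
    risks[r] = np.mean((x * theta - y) ** 2 * w)

skew = np.mean(((risks - risks.mean()) / risks.std()) ** 3)
\end{verbatim}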
We now illustrate the skew for the example case. In order to evaluate the risk estimator's ability to validate a classifier, the classifier needs to remain fixed while the risk function is computed for different data sets. We took a linear classifier, $h(x \mid \theta) = x\theta$ with $\theta = 1/(2\sqrt{\pi})$. Figure \ref{fig:hist_RhW} plots the histograms of $10,000$ repetitions of rejection sampling. Note that each repetition corresponds to a single validation data set. After computing the risks, it becomes apparent that the sampling distribution of $\hat{R}_{\cal W}$ is positively skewed and that its skew diminishes as the sample size increases.
\begin{figure*}[th] \centering \includegraphics[width=0.32\textwidth]{box_bodyvstail_hLW_siS075_N2_nR10000.png} \includegraphics[width=0.32\textwidth]{box_bodyvstail_hLW_siS075_N4_nR10000.png} \includegraphics[width=0.32\textwidth]{box_bodyvstail_hLW_siS075_N8_nR10000.png} \includegraphics[width=0.32\textwidth]{box_bodyvstail_hLW_siS075_N16_nR10000.png} \includegraphics[width=0.32\textwidth]{box_bodyvstail_hLW_siS075_N32_nR10000.png} \includegraphics[width=0.32\textwidth]{box_bodyvstail_hLW_siS075_N64_nR10000.png} \caption{Boxplots of the regularization parameter estimates $\hat{\lambda}$ based on the importance-weighted risk estimator $\hat{R}_{\cal W}$, for different sample sizes.} \label{fig:box_bodyvstail} \end{figure*}
\section{Model selection} The importance-weighted risk estimator is crucial to model selection under covariate shift. Standard cross-validation does not account for domain differences \cite{sugiyama2005model}. Validating the model on source samples leads to hyperparameters that are not optimal with respect to the target domain \cite{kouw2016regularization}. Importance-weighting the source data results in a validation data set that more closely matches the target domain \cite{sugiyama2007covariate}. However, importance-weighted cross-validation suffers from a number of issues: for large domain differences, the sampling variance can diverge, resulting in highly inaccurate estimates \cite{cortes2010learning,kouw2017reducing}, and for small sample sizes, the sampling distribution can be skewed. How this skew affects validation will be shown in the following experiment.
\subsection{Body versus tail} The defining property of a skewed distribution is that the majority of the probability mass lies to the side of its expected value. The narrow region with a large amount of probability mass is called the \emph{body}, while the long low-probability-mass region to the side of the body is called the \emph{tail}. In the case of the example setting, the weighted risk estimator's sampling distribution has a body on the left and a tail that drops off slowly to the right, as can be seen in Figure \ref{fig:hist_RhW} for $N=2$. Note that high probability mass regions of a sampling distribution correspond to many data sets. The risk estimates in the body are smaller than the expected value of the sampling distribution, i.e., the true target risk $R_{\cal T}$. Hence, the body contains \emph{under}estimates of the target risk. The right-hand tail, on the other hand, contains \emph{over}estimates. Note that the body contains many, relatively small, underestimates while the tail contains a few, relatively large, overestimates. We know that they cancel out, because we know the importance-weighted risk estimator is unbiased. However, \emph{for the large majority of data sets}, the risk is underestimated.
This directly affects the hyperparameter estimates obtained in cross-validation.
\subsection{Regularization parameter selection} In order to evaluate the importance-weighted risk estimator's usefulness for model selection, we evaluate it for a regularized classifier. The problem setting is still the example setting from Section \ref{sec:ex}. We take the same linear classifier as before, but this time we add $L^2$-regularization: $h(x) = x \theta_{\lambda}$ where $\theta_{\lambda} = 1/(2\sqrt{\pi})+\lambda$. We draw $N$ samples from the source domain and evaluate $\theta_{\lambda}$ using the importance-weighted risk estimator. Following that, we select the $\lambda$ for which the risk is minimal: $\hat{\lambda} = \ \arg \min_{\lambda} \ \hat{R}_{\cal W}(\theta_{\lambda})$. For the example case, $p_{\cal T}(x,y) = \Phi(yx) \mathcal{N}(x \mid 0,1)$, the expected risk can be found analytically. The true risk is minimal for $\theta_{\lambda} = 1 / \sqrt{\pi}$, which means the optimal value of $\lambda$ is $1/(2\sqrt{\pi})$. The better the risk estimator approximates the expected risk, the better the resulting $\hat{\lambda}$ will approximate the optimal value for $\lambda$. The above procedure of drawing samples, computing the risk and selecting $\lambda$ is repeated $10,000$ times. All data sets for which the risk is smaller than the average risk over all repetitions are deemed part of the body, while all data sets with risks larger than the average are deemed part of the tail. Figure \ref{fig:box_bodyvstail} shows the boxplots of $\hat{\lambda}$ for the body and tail separately. Each subplot covers one sample size: $N = \{2,4,8,16,32,64\}$. The dotted line corresponds to the optimal $\lambda$, and the black bars in the boxplots mark the average estimates. For $N=2$, the body produces overestimates of the regularization parameter on the order of $+1$, while the tail produces underestimates on the order of $-2$. For $N = 4$ to $8$, the effect is smaller, with the tail producing more accurate estimates. For $N \ge 32$, the differences between the body and the tail are nearly gone. Figure \ref{fig:prps} plots the proportions of data sets belonging to the body versus the tail, out of the $10,000$ repetitions. Looking at the number of data sets that make up each part, we can conclude that the majority is part of the body. For the smallest data sets, there are, in fact, twice as many data sets in the body. As the sample size increases, the sampling distribution becomes less skewed and the proportions become equal.
\begin{figure}[h!] \centering \includegraphics[width=0.48\textwidth]{prps_N.png} \caption{Proportions of data sets in the body (blue) versus tail (red).} \label{fig:prps} \end{figure}
\section{Discussion} Although the current problem setting is $1$-dimensional, we do believe that higher-dimensional problem settings behave along the same lines. However, the current indications of ``enough'' validation data may not carry over directly to higher-dimensional settings; we expect that more validation data is required in that case. Also, in the reverse setting, where the source domain is wider than the target domain, the skew of the risk estimator's sampling distribution is negative instead of positive, which means that all statements regarding over- and underestimates are reversed. We have chosen a quadratic loss function to evaluate risk, but we believe that the results presented here will hold for other choices of loss functions as well.
The skewness stems from the skewness of the importance-weights, which does not depend on the loss function. A limitation of our study is the fact that it only covers the case of Gaussian data distributions. It would be helpful to attain a more general understanding of the effects of the skewness of the risk estimator's sampling distribution. Unfortunately, generalizing the behavior of a sampling distribution for all possible covariate shift problem settings is not trivial. Nonetheless, if the skew in the sampling distribution is indeed caused by the geometric distribution of the weights, then the current results might extend to importance-weighted classifiers as well. If the sampling distributions of classifier parameter estimators are skewed, then we might also see many overestimates of classifier parameters for the majority of data sets and a few large underestimates for rare cases, when sample sizes are small. Regularization has the potential to correct this, which again stresses the importance of having a good model selection procedure for covariate shift problems. \section{Conclusion} We presented an empirical study of the effects of a skewed sampling distribution for the importance-weighted risk estimator on model selection. Depending on the problem setting, for small sample sizes, the estimator will frequently produce underestimates and infrequently produce large overestimates of the target risk. When used for validation, the risk estimator ensures that the regularization parameter is overestimated for the majority of data sets, for cases of sample selection bias. However, with enough data, the skew diminishes. \bibliographystyle{IEEEtran}
\section{Introduction}\label{Introduction} The use of machine--learning for the exploration of big data sets in astronomy was predicted over three decades ago \citep{1988ESOC...28..245R}. Yet, the high computational costs of this method have long delayed its advance. Some of the first applications of neural networks, a sub-field of machine--learning, include the automatic detection of sources in astronomical images (SExtractor, \citealt{1996A&AS..117..393B}), the morphological classification of galaxies \citep{1996MNRAS.283..207L} and the classification of stellar spectra \citep{1997Obs...117..250B}. In recent years, the increasing power of modern computer systems and the possibilities of cloud computing have led to a growing popularity of machine--learning methods. Powerful open-source libraries such as TensorFlow \citep{tensorflow2015-whitepaper} and PyTorch \citep{NEURIPS2019_9015} for Python programming offer easy-to-use frameworks for building and training various types of neural networks. \par Spectroscopic surveys provide insights into the evolution of individual stars, of large-scale structures such as globular clusters and of the Milky Way galaxy as a whole. Upcoming projects, for example the William Herschel Telescope Enhanced Area Velocity Explorer (WEAVE, \citealt{10.1117/12.2312031}) and the 4-metre Multi-Object Spectroscopic Telescope (4MOST, \citealt{2019Msngr.175....3D}), will observe millions of stars. Efficient automatic tools will be needed to analyze the large number of spectra that such surveys will deliver. \par To determine the atmospheric parameters and chemical composition of stars, classical spectroscopic methods either measure equivalent widths of absorption lines or compare observed spectra to synthetic spectra. These synthetic spectra can be generated on-the-fly or are part of a pre-computed spectral grid. \cite{doi:10.1146/annurev-astro-091918-104509} provide an overview of classical spectral analysis methods in the context of large spectroscopic surveys. \par Convolutional Neural Networks (CNNs) have recently been used to simultaneously infer multiple stellar labels (i.e. atmospheric parameters and chemical abundances) from stellar spectra. Every CNN contains convolutional layers which enable the network to identify extended features in the input data. In stellar spectra these features are absorption lines and continuum points; in 2-D images such features could be eyes in a face or star clusters in a spiral galaxy \citep{2020AJ....160..264B}. Neural network methods are purely data-driven and therefore require no input of any physical laws or models. Instead, during a training phase the network learns to associate the strength of spectral features with the values of the stellar labels. This requires a training set of spectra with pre-determined labels, from which the network can learn. Training sets for spectral analysis typically contain several thousand stellar spectra with high quality labels. Current spectral surveys, which provide $\sim$10\textsuperscript{5} spectra with labels, are an ideal testing ground for the CNN approach to spectral parameterization. \par Examples of stellar parameterization using CNNs can be found in several recent studies. \cite{2018MNRAS.475.2978F} have developed StarNet, a CNN that is able to infer the stellar atmospheric parameters directly from observed spectra in the APO Galactic Evolution Experiment (APOGEE, \citealt{2017AJ....154...94M}). A grid of synthetic spectra was used to train and test StarNet.
Purely observational data from APOGEE DR14 were used by \cite{2019MNRAS.483.3255L} to train their astroNN convolutional network. To mimic the methods of standard spectroscopic analysis, astroNN is designed to use the whole spectrum when predicting atmospheric parameters but is limited to individual spectral features for the prediction of chemical abundances. \cite{2020A&A...644A.168G} trained their CNN on medium-resolution stellar spectra from the RAdial Velocity Experiment (RAVE, \citealt{Steinmetz_2020}) together with stellar labels that were derived from high-resolution APOGEE DR16 spectra. They also added absolute magnitudes and extinction corrections for their sample stars as inputs for the network. This information allowed their CNN to put additional constraints on its predictions of the effective temperature and surface gravity. \par In this work, we propose to test a CNN approach in the context of the Gaia-ESO survey (GES, \citealt{2012Msngr.147...25G, 2013Msngr.154...47R}). We use GIRAFFE spectra with labels from the sixth internal data release. The GES survey is designed to complement the astrometric data from the Gaia space observatory \citep{2016A&A...595A...1G}. The goal of the present project is to prepare machine--learning ground for the next generation of spectroscopic surveys, such as 4MOST and WEAVE. This paper goes together with Nepal et al. (sub) that focuses on the chemical evolution of lithium with CNNs from GES GIRAFFE HR15N spectra. This paper is organized as follows: In Sect.~\ref{Data} we present the data that we used to train and test our CNN. Section \ref{Network architecture and training} describes the architecture of our network and explains the details of the training process. The results of the training and the network predictions for the observed set are presented in Sect.~\ref{Training results}. In Sect.~\ref{Validation of results} we validate our results by investigating the CNN predictions for a number of benchmark stars. For the further validation we use our results to recover several properties of the Milky Way galaxy. \section{Data} \label{Data} \subsection{Data preparation} \label{Data preparation} Our data set consists of spectra, associated stellar parameters, and abundances from the GES iDR6 data set. In the Gaia-ESO survey, atmospheric parameters and chemical abundances are determined by multiple nodes that apply different codes and methodologies to the same spectra. A summary of the determination of atmospheric parameters from the GIRAFFE spectra is given in \cite{2014A&A...567A...5R}. Further information about the determination of chemical abundances can be found in \cite{2014A&A...572A..33M}. The spectra were taken with the GIRAFFE spectrograph that covers the visible wavelength range of 370 - 900~nm. Several setups divide the whole GIRAFFE spectral range into smaller parts. For this study we chose the HR10 (533.9 - 561.9~nm, R = 19800) and HR21 (848.4 - 900.1~nm, R = 16200) setups because they cover important Mg and Al absorption features. \par For our analysis we used normalized 1-D spectra from the GES archive. We removed bad pixels and cosmic ray spikes where necessary. To do so, we first calculated the median of all spectrum flux values. We then identified cosmic ray spikes by finding all pixels with flux values that exceeded this median flux by five sigma. The spikes were removed by setting their flux value to be equal to the spectrum median flux. Pixels with zero flux values were also set to the median flux. 
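A minimal sketch of this cleaning step is given below. We assume the flux of one spectrum is held in a NumPy array and read the five-sigma criterion as a clip on the standard deviation of the flux; the exact criterion used in our pipeline may differ in detail.
\begin{verbatim}
# Sketch of the bad-pixel and cosmic-ray cleaning described above.
import numpy as np

def clean_spectrum(flux, nsigma=5.0):
    flux = flux.copy()
    med = np.median(flux)
    spikes = flux > med + nsigma * np.std(flux)  # cosmic-ray spikes
    flux[spikes] = med                           # set spikes to median flux
    flux[flux == 0.0] = med                      # zero-flux (bad) pixels
    return flux
\end{verbatim}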
Afterwards, we corrected the spectra for redshift based on the radial velocity provided by GES. To reduce the number of pixels per spectrum and therefore the computational cost of the further analysis, we re-binned the spectra to larger wavelength intervals per pixel. The HR10 spectra were resampled to 0.06~$\AA$ per pixel and the HR21 spectra to 0.1~$\AA$ per pixel. After re-binning, the spectra were truncated at the ends to ensure that all spectra from one setup share exactly the same wavelength range. Finally, we combined the HR10 and HR21 spectra to create one input spectrum per star for our network. The combined spectra are composed of 8669 pixels each and cover the wavelength ranges from 5350-5600~$\AA$ and 8480-8930~$\AA$. \par To build our training set, we performed several quality checks to ensure that our network will be trained on high-quality data. Spectra with signal-to-noise ratio (S/N) < 30 and large errors in atmospheric parameters and elemental abundances (\textit{eT}\textsubscript{eff} > 200~K, \textit{e}log(\textit{g}) > 0.3~dex, \textit{e}A(element) > 0.2~dex) were discarded, as well as spectra that were marked with the TECH or PECULI flags or have rotation velocities > 20~km\, s$^{-1}$. We also removed spectra that showed a difference larger than 0.2~dex between the provided metallicity [Fe/H] (as a stellar atmospheric parameter) and the \ion{Fe}{i} elemental abundance. \par We further examined the remaining spectra to find possible outliers and incorrect measurements. To investigate the similarity between all the spectra, a t-distributed stochastic neighbor embedding (t-SNE) analysis was employed. The t-SNE analysis is a popular technique to visualize the internal relationships and similarities in high dimensional data sets by giving each data point a location in a two- or three-dimensional similarity map \citep{JMLR:v9:vandermaaten08a}. In our case, the data points are the individual spectra and the data set is n-dimensional, where n is the number of pixels in each spectrum. Figure \ref{fig:t-SNE} shows a two-dimensional similarity map for our combined spectra, obtained with the \textit{sklearn.manifold} library for python programming \citep{JMLR:v12:pedregosa11a}. Every point in the map corresponds to one spectrum and the distance between the individual points is determined by the similarity of the shapes of the individual spectra. There are two main branches in the map with several sub-structures. The two branches represent spectra from stars in two distinct populations: main-sequence stars with surface gravity log(\textit{g}) $\gtrsim$ 3.5 and stars in the giant branch with lower log(\textit{g}) values. The different physical properties in stellar atmospheres are reflected in the shapes of their spectra, which in turn determine their locations on the t-SNE map. This connection between physical parameters and spectral features is what our CNN learns during the training phase. We see several outlier spectra in the map. Upon inspection, these spectra show signs of emission lines, have distorted absorption features or have suffered from failed cosmic-ray removal or wrong normalisation. We excluded these outliers from the further analysis. For the analysis of future surveys such as the WEAVE and 4MIDABLE-HR surveys, including emission line stars will be a necessity, as we expect many young stars to be observed. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{tSNE.png} \caption{A t-SNE similarity map of our sample GIRAFFE spectra.
The three panels show the same map, each color-coded with a different physical parameter. While the relative distances of points in the map indicate the degree of similarity of the corresponding spectra, their X and Y coordinates themselves have no physical meaning.} \label{fig:t-SNE} \end{figure*}
Every training spectrum has a set of associated stellar labels. In our case these are the two atmospheric parameters \textit{T}\textsubscript{eff} and log(\textit{g}) and the chemical abundances [Mg/Fe], [Al/Fe], and [Fe/H]. In the GES iDR6 data set the elemental abundances are given as absolute abundance values A(Element). We calculated [Fe/H] and [Element/Fe] as follows: $\rm [Fe/H]=A(Fe)_{star}-A(Fe)_{\odot}$ and $\rm [Element/Fe]=A(Element)_{star}-A(Element)_{\odot}-[Fe/H]$. The absolute solar abundances were taken from \cite{2007SSRv..130..105G}, consistent with the GES spectral analysis strategy. The decision to use these relative abundances instead of absolute abundances for the training of our network is justified in Sect.~\ref{Learning from spectral features}. \par Magnesium and aluminum abundances are known to be sensitive to non-local thermodynamic equilibrium (NLTE) effects (\citealt{2017ApJ...847...15B}, \citealt{2022arXiv220611070L}). These were not considered by GES during the abundance analysis from GIRAFFE spectra. However, the NLTE effects are small for the Mg and Al absorption lines that are relevant for this study: $-$0.10~dex to 0.08~dex for Mg and 0.02~dex to 0.08~dex for Al, depending on the stellar atmospheric parameters. We therefore do not expect that NLTE will have a significant effect on the training of our network and the following scientific validation, and did not attempt to correct the given GES abundances for NLTE effects. After applying all of the constraints mentioned above, we were left with 14\,696 combined spectra with associated high-quality atmospheric parameters and elemental abundances. As explained in Sect.~\ref{Training set and test set}, these 14\,696 spectra will be randomly split into a training set and a test set for the training of our CNN.
\subsection{Parameter space of input labels} \label{Parameter space of input labels} To assess the parameter space of our training set input labels, we show the Kiel-diagram and abundance plots in Figs.~\ref{fig:kiel GES} and~\ref{fig:MgAl_Fe_hist2d}. Effective temperatures range from \textit{T}\textsubscript{eff} = 4000 - 6987~K, the surface gravity log(\textit{g}) is between 1.08 and 4.87~dex and [Fe/H] spans a range of $\sim$2~dex, from $-1.53$ to 0.72~dex. The color-coding in Fig.~\ref{fig:kiel GES} reveals the metallicity sequence in the giant-branch of the Kiel-diagram. \par Figure \ref{fig:MgAl_Fe_hist2d} shows density maps of the [Mg/Fe] and [Al/Fe] distribution of our training set. The [Mg/Fe] values range from $-0.25$ to 0.80~dex, while the [Al/Fe] values have a large spread of almost 2~dex, from $-0.95$ to 1.00~dex. The Mg distribution reveals two distinct regions of enhanced density, separated by a narrow region of lower density. These two regions reflect the separation of Milky Way stars into thin-disk (low [Mg/Fe]) and thick-disk (enhanced [Mg/Fe]) populations. Magnesium abundances are the best probe for this chemical separation between the thin- and thick-disk of our Galaxy (e.g. \citealt{1998A&A...338..161F}; \citealt{2000A&A...358..671G}). As expected, we do not observe this separation in the [Al/Fe] plot.
Our training set is dominated by nearby stars, due to the S/N cut and other quality criteria that we applied to the entire GES iDR6 data set. Therefore our data does not cover some of the Milky Way properties that become apparent when one investigates a larger volume of our galaxy. \cite{2020A&A...638A..76Q}, for example, find two detached [Al/Fe] sequences for stars close to the Galactic center (R\textsubscript{Gal} < 2~kpc) in their sample of APOGEE stars. At low [Fe/H] several groups of stars can be observed in both the Mg and Al plots. The stars in these patches belong to different globular clusters. In the [Al/Fe] plot, the scatter of Al abundances in the globular clusters is considerably higher than the scatter of Mg at equal metallicities. This large spread of Al abundances, especially in globular clusters at low metallicities, has already been observed in earlier GES releases (Fig.~4 in \citealt{2017A&A...601A.112P}) and indicates the existence of multiple stellar populations within the clusters.
\subsection{Observed set} In addition to these spectra with high quality GES parameters, we composed a set of spectra with S/N between 20 and 30. We call this the "observed set". The observed set will be used to test the performance of our CNN on spectra that were not involved in the training process. We did not put any quality constraints on the input labels for the spectra in the observed set, and a number of them do not have any reported Mg and Al abundances. As for the training and test sets, we removed spectra that were labeled with the TECH and PECULI flags and outliers in the t-SNE map. The ability of neural networks to extrapolate to labels outside of the parameter space of the training data is limited. Therefore we excluded spectra with GES labels outside of the training data distribution from our observed set. After applying these criteria to the GES iDR6 data set, our observed set contains 15\,419 spectra, most of them with associated GES labels.
\begin{figure} \includegraphics[width=0.9\columnwidth]{kiel_GES.png} \caption{Kiel diagram containing the stars that will be used to train and test our neural network. The color-coding indicates the metallicity gradient in the giant branch stars.} \label{fig:kiel GES} \end{figure}
\begin{figure} \includegraphics[width=0.9\columnwidth]{density_MgAl_Fe.png} \caption{Density plots of [Mg/Fe] vs. [Fe/H] (top panel) and [Al/Fe] vs. [Fe/H] (bottom panel) for the 14\,696 stars in the training and test sets. Brighter colors indicate a higher density of data points.} \label{fig:MgAl_Fe_hist2d} \end{figure}
\section{Network architecture and training} \label{Network architecture and training} A CNN acts as a function with many free parameters. In our case, this function takes stellar spectra as an input and outputs the associated atmospheric parameters and abundances. The network architecture then describes the shape of this neural network function. The goal of the training process is to find the optimal values of the free CNN parameters to accurately parameterize the input stellar spectra. In the following subsections we describe how a neural network can "learn" to accurately parameterize stellar spectra. Our CNN was built and trained in a Python programming environment with the open source deep-learning library Keras \citep{chollet2015keras} using the TensorFlow back-end \citep{tensorflow2015-whitepaper}.
\subsection{Network architecture} \label{Network architecture} The different parts of a neural network architecture, the "layers", fulfil different purposes in the process of parameterizing stellar spectra. Our neural network consists of two main types of layers: convolution layers that identify features and patterns in the input spectra, and dense layers which associate those spectral features to the output stellar parameters. A visualisation of our network architecture can be seen in Fig.~\ref{fig:model architecture}.
\begin{figure} \centering \includegraphics[width=0.5\columnwidth]{architecture.png} \caption{Architecture of our CNN. The input layer reads in the flux information of the stellar spectra. It is followed by three pairs of convolution and max-pooling layers. The filter outputs from the third convolution and max-pooling pair are then flattened to serve as inputs for the dense layers. Three dense layers (with a dropout layer after each) interpret the spectral features, found by the convolution layers, into output labels. The outputs from a last dense layer are the values of our five stellar labels (atmospheric parameters and elemental abundances).} \label{fig:model architecture} \end{figure}
\subsubsection{Convolution layers} \label{Convolution layers} To identify the spectral features that correlate with the stellar labels, our CNN is composed of convolution layers. These layers convolve the input spectra with a number of 1-dimensional filters. The filters move across the input spectra and produce feature maps, which are the results of the spectrum-filter convolutions. While the length and number of filters are fixed, the purpose of each filter is learned during the training phase. The neural network learns how to adjust the filter values to achieve the best label predictions. Multiple convolution layers with multiple filters each can be put in sequence in a neural network architecture. Filters in one convolution layer then extract features in the feature maps that were produced by the previous convolution layer. Our CNN has three convolution layers with an increasing number of filters in each layer.
\subsubsection{Dense layers} \label{Dense layers} In order to build a high-dimensional complex function between the feature maps from the last convolution layer and the labels, so-called dense layers are necessary. Each dense layer consists of a fixed number of artificial neurons. An artificial neuron receives inputs from a previous layer, multiplies every input with its associated weight, sums up the weighted inputs, and then passes the result on to the neurons of the next dense layer. In this way, every neuron in one dense layer is connected to all neurons of the previous layer and to all neurons of the following layer (this is the reason why dense layers are also called "fully connected" layers). The last layer in a CNN is a dense layer where the number of neurons is equal to the number of labels that the network is designed to predict (in our case 5).
\subsubsection{Activation function} \label{Activation function} The relations between spectral features and physical stellar labels are non-linear. To reflect this non-linearity in our network training process, activation functions are used. Activation functions transform the output of the convolution filters and the artificial neurons before they are passed on to the next layer. In recent machine--learning applications the "Leaky ReLU" activation function is among the most frequently used.
It leaves positive and zero output values unchanged and multiplies negative outputs with a small positive value. Or, notated mathematically \citep{Maas2013RectifierNI}: \ \\ $ f(x)= \begin{cases} a \cdot x &\text{if $x<0$} \\ x &\text{otherwise,} \end{cases} $ \\ \ where $x$ is a filter or neuron output value before it is passed to the next layer. For our network, we adopt a Leaky ReLU activation function with $a = 0.3$ for all layers. \subsubsection{Max-pooling and dropout} \label{Max-pooling and dropout} Over-fitting occurs when the network is very accurate in predicting the labels of the training set but shows a poor performance when predicting labels for the test set or an external observed set. In this case the network is not generalizing well for inputs outside of the training data. This is often the case when the network architecture is complex and the number of weights and biases is too large. In this context, max-pooling and dropout are popular regularisation devices used to prevent over-fitting during the training of a CNN.\par Max-pooling helps to prevent over-fitting by reducing the complexity of the feature maps that are produced by the convolution layers. This is achieved by keeping only the highest value within a defined interval in every feature map. In this way the less important pixels of a feature map are discarded and the network is able to focus on pixels that show a strong response to the convolution filters. \par Applying dropout after a dense layer randomly deactivates the output of a fraction of the layer neurons (these neurons are "dropped"). The weights associated with dropped neurons are therefore not updated for one training epoch (one passage of the entire training set through the network, see Sect. \ref{Epochs and batches}). After every epoch all neurons are reactivated and a new collection of neurons is dropped for the next epoch. As a consequence, the network architecture changes slightly after every epoch during the training. This prevents the network from relying too much on individual parts of the architecture and therefore individual features in the input spectrum. In this way the network is forced to learn from the whole spectrum which leads to a good generalization for different input spectra. \subsection{Network training} \label{Network training} When the network architecture is designed, the values of the convolution filter cells and the weights and biases in the dense layers are unknown. During the training phase, these values are "learned" by the neural network. Training means to repeatedly pass a large number of spectra with known labels (training set) through the network and to compare the output of the network with the known input labels of the training set. At the start of the training phase the filter values, weights and biases are initialized randomly. Therefore the predictions of the untrained network will differ strongly from the labels of the input spectra. The difference between the network predictions for the labels and their known values from the input is called "loss" and it is calculated with a loss-function. The loss-function calculates the overall difference between input and output values across all labels. Therefore the loss is a measure of the overall accuracy of the network predictions. An optimisation algorithm is used to slightly change the weights and biases in the network in such a way that, when the training sample is passed through the network again, the loss will be slightly smaller than in the first iteration. 
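At this point the ingredients described above can be put together in code. The following Keras sketch assembles and trains a network of the kind used in this work; the filter counts, kernel sizes, dense-layer widths, dropout rate, batch size and early-stopping patience are illustrative assumptions (the architecture is described only qualitatively here), while the input length of 8669 pixels, the Leaky ReLU slope of 0.3 and the five output labels follow Sects.~\ref{Data} and \ref{Network architecture}.
\begin{verbatim}
# Sketch of a CNN of the type described in this section; layer sizes
# are illustrative assumptions, not our exact configuration.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(8669, 1)),   # one combined HR10+HR21 spectrum
    layers.Conv1D(16, 7), layers.LeakyReLU(0.3), layers.MaxPooling1D(2),
    layers.Conv1D(32, 7), layers.LeakyReLU(0.3), layers.MaxPooling1D(2),
    layers.Conv1D(64, 7), layers.LeakyReLU(0.3), layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(256), layers.LeakyReLU(0.3), layers.Dropout(0.2),
    layers.Dense(128), layers.LeakyReLU(0.3), layers.Dropout(0.2),
    layers.Dense(64), layers.LeakyReLU(0.3), layers.Dropout(0.2),
    layers.Dense(5),   # Teff, log(g), [Mg/Fe], [Al/Fe], [Fe/H]
])
model.compile(optimizer="adam", loss="mse")
# Training monitors the test ("validation") loss and stops when it no
# longer improves:
# model.fit(x_train, y_train, validation_data=(x_test, y_test),
#           batch_size=64, epochs=300,
#           callbacks=[keras.callbacks.EarlyStopping(
#               monitor="val_loss", patience=20,
#               restore_best_weights=True)])
\end{verbatim}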
Over the course of many iterations of passing the training spectra through the network, calculating the loss, and updating the weights and biases, the loss steadily decreases and the network predictions become more precise (Figs.~\ref{fig:kiel training} and \ref{fig:training losses}). In the following subsections, important concepts that are involved in the training of our neural network are explained in detail. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{kiel_epochs.png} \caption{Evolution of the prediction-based Kiel-diagram during the network training. The far-left panel shows the Kiel-diagram based on the GES input values of \textit{T}\textsubscript{eff} and log(\textit{g}). Subsequent panels show the Kiel-diagram based on network predictions after 1, 10, and 150 training epochs. The color-coding, indicating the [Fe/H] values of each data point, is on the same scale as in Fig.~\ref{fig:kiel GES}.} \label{fig:kiel training} \end{figure*} \begin{figure} \includegraphics[width=0.9\columnwidth]{losses.png} \caption{Evolution of the training and test losses during the network training phase. The loss of the test set closely follows the training loss. The small difference between training and test sets at the end of the training phase shows that the network is not over-fitting.} \label{fig:training losses} \end{figure} \subsubsection{Training set and test set} \label{Training set and test set} Training relies on a large number of stellar spectra (several thousand) with associated stellar labels. In our case, the labels are previously determined stellar atmospheric parameters and chemical abundances (see Sect.~\ref{Data}). The available data are split randomly into a training set, which is used to train the network, and a test set. During training, the test set is passed through the network as often as the training set, but it is not used in the optimisation calculations that update the weights and biases. Instead, the test set is used to monitor the performance of the network on data that it was not trained on. The loss calculated from the label predictions for the test set is used to determine when to stop the training: if the test loss does not decrease any further over a specified number of training iterations, the weights and biases are assumed to have reached their optimal values for the given network architecture and the training ends. Comparing the performance of the network on the training and test sets also helps to determine whether the network is over-fitting. We found that assigning 40\% of our available data to the test set yields the best training results for our application. That means that of our 14\,696 spectra, 8817 spectra are assigned to the training set and 5879 to the test set. Training and test spectra are chosen randomly before the training, and it is ensured that their labels cover the same parameter space. \subsubsection{Epochs and batches} \label{Epochs and batches} One iteration of passing the entire training set through the network is called an epoch. The number of epochs that are necessary to train a network to achieve good results depends on the model architecture. \par Within one epoch, the training data that is passed through the network is divided into equally sized batches. For example, for a training set of 6400 spectra and a batch size of 64, 100 batches pass through the network in one epoch. After every batch that passes through the network, the weights and biases are updated based on the current loss in an attempt to decrease the loss for the next batch. This means that in the above example the weights and biases are updated 100 times before the training set has fully passed through the network. This speeds up the overall training because less computer memory is required to process the smaller number of spectra for one update. Using batches can also help to prevent over-fitting. The training spectra are shuffled and assigned to new batches after every epoch.
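Expressed in code, and using the \texttt{build\_cnn} sketch above, the training stage described in these subsections could look as follows. The spectra and labels are assumed to be held in arrays \texttt{X} (shape \texttt{(n\_spectra, n\_pixels, 1)}) and \texttt{y} (shape \texttt{(n\_spectra, 5)}); the batch size, epoch limit, and early-stopping patience are illustrative assumptions.
\begin{verbatim}
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping

# Random 60%/40% split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=0)

# Stop once the test ("validation") loss has not decreased
# for a given number of epochs.
stop = EarlyStopping(monitor="val_loss", patience=20,
                     restore_best_weights=True)

model = build_cnn(n_pixels=X.shape[1])
history = model.fit(
    X_train, y_train,
    validation_data=(X_test, y_test),  # monitored, not trained on
    batch_size=64,
    epochs=500,        # upper limit; early stopping ends sooner
    shuffle=True,      # re-shuffle the batches every epoch
    callbacks=[stop])
\end{verbatim}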
\section{Training results} \label{Training results} We performed ten training runs, which resulted in ten slightly different CNN models. The results of the training runs vary slightly because the weights and biases of the network are initialized randomly before every run (the network architecture remains the same). On average, one training run lasted for 159 epochs and took $\sim$40 minutes to complete. We removed the two CNN models with the largest remaining test losses at the end of their training phase. The remaining eight CNN models were then used to predict the labels for the spectra in the training, test, and observed sets. The label prediction was very fast: the parameterization of the $\sim$30\,000 spectra in our data set took less than 20 seconds per CNN model. \par The averages of the eight sets of labels are what we report here as our results. In Fig.~\ref{fig:one-to-one}, we show a direct comparison between the input GES measurements and the CNN predictions for the training, test, and observed sets. There is good agreement between the GES measurements and CNN predictions across all labels and for all three sets. The CNN predictions for the training set and the test set show the same offset (if any) and a small dispersion around the 1:1 relation. This indicates that the network performs well on spectra which it was not directly trained on and does not over-fit. The dispersion around the 1:1 relation is uniform across most of the value ranges of all five labels. However, our CNN does not accurately reproduce the highest and lowest GES measurements. This is especially apparent in the case of [Al/Fe], where the CNN predictions overestimate the lowest [Al/Fe] measurements by $\sim$0.5~dex, while the highest values are underestimated by approximately the same amount. We explain this behaviour by noting that only a small number of spectra with these extreme measurements were available for the network training. The CNN therefore predicts more moderate labels for these spectra. The predictions for the observed set spectra are also in good agreement with the GES input labels, albeit with a larger scatter. The over- and underestimation of the highest and lowest label values is more pronounced in the observed set, especially for the abundance predictions. We expect this poorer performance on the observed set when compared to the training and test sets, because our observed set spectra have a lower S/N, between 20 and 30. \par We further investigated the CNN performance on low quality spectra by constructing a second observed set with S/N between 10 and 20. For this set the predictions for \textit{T}\textsubscript{eff} and log(\textit{g}) are still in good agreement with the GES measurements for most stars. However, the quality of the abundance predictions degrades further, especially for [Mg/Fe] and [Al/Fe]. For these two labels only a weak correlation between GES and CNN values remains. We will show in Sect.~\ref{Learning from spectral features} that our network relies on individual spectral features when predicting abundances. As the quality of the spectra decreases, these features become increasingly hidden by the noise.
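In practice, the reported labels and the model-to-model dispersion (used as an internal uncertainty estimate in Sect.~\ref{Estimation of internal uncertainties}) can be computed in a few lines; the list \texttt{models} holding the eight retained CNN models and the spectra array \texttt{X} are assumed inputs.
\begin{verbatim}
import numpy as np

# Predictions of the eight retained models for one set of spectra:
preds = np.stack([m.predict(X) for m in models])  # (8, n_spectra, 5)

labels = preds.mean(axis=0)  # reported labels: average over models
sigma = preds.std(axis=0)    # model-to-model dispersion per label
\end{verbatim}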
\begin{figure*} \centering \includegraphics[width=1.6\columnwidth]{1v1.png} \caption{One-to-one comparison of labels from GES iDR6 and the values predicted by our CNN. The three columns show results for the three different data sets (training, test, and observed). Each row contains the results for a different label. In every panel the horizontal axis stands for the GES input labels and the vertical axis represents the labels predicted by our CNN. The average bias and the standard deviation (scatter) of the results around the 1:1 relation are given in every panel. Solid diagonal lines indicate the 1:1 relation.} \label{fig:one-to-one} \end{figure*} \subsection{Estimation of internal uncertainties} \label{Estimation of internal uncertainties} As described, the label predictions from our eight trained CNN models vary slightly. This variation can be used to estimate the internal precision of our methodology. We define the uncertainties of our results as the dispersion between the label predictions from the eight CNN models. In Fig.~\ref{fig:uncertainties} we display the distribution of the label uncertainties $\sigma (Label)$ relative to the predicted values of our five labels. Overall the uncertainties are small, with no strong trends with respect to the absolute label values. The exception is [Fe/H], where the uncertainties increase towards lower [Fe/H] abundances, presumably because the metal-poor regime contains fewer stars than the main bulk of the sample and because the spectral features in this regime are fewer and weaker, making precise predictions harder for our CNN. The prediction uncertainties for \textit{T}\textsubscript{eff} tend to increase towards the edges of the temperature distribution. The mean uncertainties of the label predictions are small: 24~K for \textit{T}\textsubscript{eff}, 0.03~dex for log(\textit{g}), 0.02~dex for [Mg/Fe], 0.03~dex for [Al/Fe], and 0.02~dex for [Fe/H]. We find that the uncertainties of the predictions increase as the S/N of the spectra decreases. This is true for spectra from all three sets. CNN predictions with large uncertainties for one label also show large uncertainties for all other labels, while precise predictions are precise across all five labels. The few outliers with higher uncertainties result from spectra in the observed set with lower S/N than the majority of investigated spectra. We further investigate the precision of our CNN in Sect.~\ref{Benchmark stars} by using repeat observations of benchmark stars. \begin{figure} \includegraphics[width=0.9\columnwidth]{uncertainties.png} \caption{Distributions of label uncertainty as a function of absolute label value for our five labels. Brighter colors indicate higher density of data points. The displayed data includes all predictions for the training, test, and observed sets.} \label{fig:uncertainties} \end{figure} \subsection{Learning from spectral features} \label{Learning from spectral features} The purpose of the convolution layers in our CNN is to find spectral features. These spectral features are then translated into the labels by the dense layers.
This approach is also used by classical spectral classification methods, where individual spectral features are investigated to derive the stellar parameters. However, since machine-learning is purely data-driven, the predictions of our CNN could merely be the result of our network learning correlations between labels in our data set. Individual elemental abundances, for example, are correlated with the iron abundance: stars with low iron generally show low abundances of other elements as well. Inferring stellar parameters from correlations like these can lead to satisfying results for some spectra. However, stars with exotic chemical compositions (for example stars with a non-solar mixture of elements, such as old thick disk stars) do not follow such trends and will not be parameterized well. We therefore want to show that our CNN is indeed able to identify spectral lines and to associate them with the right labels. \par During the training phase the optimisation algorithm calculates the sensitivity of the output labels to small changes in the input flux values. This is done for every wavelength bin in the input spectrum. It is therefore inherent to neural network training to calculate which output label is sensitive to which portion of the input spectrum. The sensitivity of the output labels to the flux at a certain wavelength bin $\lambda$ can be expressed as the gradient $\partial\,\text{Label}/\partial \lambda$. A large absolute gradient value at a wavelength bin then means that the network is very sensitive to flux changes in that bin. In Fig.~\ref{fig:network gradients} we show the network gradients for our five labels across the whole wavelength range of the input spectra. The gradients are scattered randomly around zero for most of the wavelength range. Only at certain wavelength bins is the network sensitive to flux changes. Here, the gradients show individual, narrow spikes. This is especially apparent in the gradients for [Mg/Fe] and [Al/Fe] in the HR21 part of our input spectra. The [Mg/Fe] gradients show two clear spikes at 8736.0 and 8806.8~$\AA$. These are the locations of two \ion{Mg}{i} absorption lines. The largest spike in the [Al/Fe] gradients marks the location of the \ion{Al}{i} double feature at $\sim$8773~$\AA$. We therefore see that our network is able to identify absorption lines in the input spectra. The negative gradient values at these wavelengths mean that if the flux at the absorption lines is low, the predicted abundance is high, and vice-versa. This reflects the fact that stronger absorption features in spectra indicate higher elemental abundances in stellar atmospheres. The CNN label predictions are therefore directly based on the strength of the relevant absorption lines in the input spectra. \par Our network learns not only from the correlation between spectral features and stellar labels in individual stars, but also from correlations between labels across the whole training set. These data-wide correlations are of astrophysical origin, showing for example that stars with high iron abundance generally also have high abundances of other metals. To investigate how astrophysical correlations in the input data influence the network gradients, we trained our CNN with different combinations of input labels. We found that the gradients of a combination of \textit{T}\textsubscript{eff}, log(\textit{g}), and one or all of the abundances show no gradient correlations, meaning the CNN learns mainly from the spectral features. If the network is trained only with the highly correlated labels A(Mg), A(Al), and A(Fe) (absolute abundances), the gradients for the three labels are almost identical. In this case the CNN is still able to identify the locations of the Mg, Al, and Fe absorption lines, but the network predictions for one element are also very sensitive to absorption lines of the other two elements. In addition, the quality of the CNN predictions starts to degrade, leading to larger differences between GES input labels and CNN predictions. This is because the network relies too much on the label correlations within the training set instead of the connection between spectral features and labels of individual spectra. For future surveys, we therefore recommend carefully inspecting the training data for strong correlations, because they can influence the CNN performance.
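Gradients of this kind can be extracted from a trained model with TensorFlow's automatic differentiation; the following sketch shows one way to do it, with \texttt{model}, the spectra array \texttt{X}, and the output index \texttt{label\_index} as assumed inputs.
\begin{verbatim}
import tensorflow as tf

# d(label)/d(flux) in every wavelength bin, averaged over a
# batch of input spectra X of shape (n_spectra, n_pixels, 1).
X_in = tf.convert_to_tensor(X, dtype=tf.float32)
with tf.GradientTape() as tape:
    tape.watch(X_in)
    out = model(X_in)[:, label_index]  # one of the five labels
grad = tape.gradient(out, X_in)        # same shape as X_in
gradient_spectrum = tf.reduce_mean(grad[..., 0], axis=0)
\end{verbatim}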
\par Further investigation of the gradient peaks gives interesting insights into the behaviour of our CNN. Some spectral lines influence the network predictions for only one of the labels. An example in the HR10 setup is a \ion{Cr}{i} line at $\sim$5410~$\AA$, which corresponds to a peak in the gradient for \textit{T}\textsubscript{eff}. Other lines have an effect on multiple, uncorrelated labels. For deriving \textit{T}\textsubscript{eff} and log(\textit{g}), our CNN is sensitive to the \ion{Ni}{i} line at the red end of the HR10 setup. While this line coincides with the strongest peak in the log(\textit{g}) gradient, only a minor peak is present in the \textit{T}\textsubscript{eff} gradient. A \ion{Fe}{i} line at $\sim$8805~$\AA$ is also important for both the \textit{T}\textsubscript{eff} and log(\textit{g}) predictions, but not for [Fe/H], likely due to its blend with a Mg line. The infrared calcium triplet (the three most prominent absorption lines in the HR21 setup) does not have a significant influence on the network predictions for any of the labels, but the \ion{Ca}{ii} line beyond 8900~$\AA$ causes a very strong response of the \textit{T}\textsubscript{eff} and [Fe/H] gradients. A deeper investigation of the CNN gradients could be done to search for complementary spectral features that could be used by standard spectroscopic pipelines, but this is out of the scope of the present paper. \begin{figure*} \centering \includegraphics[width=1.6\columnwidth]{network_gradients.png} \caption{Network gradients for our five labels as a function of wavelength (black). The top panel shows the gradients across the GIRAFFE setup HR10, the bottom panel shows the same for the HR21 setup. An average input spectrum is shown in gray as the top line in both panels. The locations of selected absorption lines of different elements are marked with vertical colored lines. The highlighted Mg and Al lines were used by GES for the determination of our input Mg and Al abundances. Their wavelengths are 5528.41, 8717.81, 8736.02, and 8806.756~$\AA$ for Mg and 5557.06, 8772.87, and 8773.90~$\AA$ for Al \citep{2021A&A...645A.106H}.} \label{fig:network gradients} \end{figure*} \section{Validation of results} \label{Validation of results} In this section we validate our results in two ways. First, we compare CNN results to the GES input labels for a set of benchmark stars. In this way we can validate that our CNN can accurately parameterize individual spectra. Then we investigate the label predictions for spectra from stars in different stellar populations to confirm that our results recover important Milky Way properties.
Our validation covers the results from our whole sample of spectra, combining the training, test, and observed sets. \subsection{Benchmark stars} \label{Benchmark stars} The GES iDR6 data set contains a number of benchmark stars with high quality spectra and precise stellar labels \citep{2015A&A...582A..49H}. This benchmark set covers stars in different evolutionary stages with a wide range of stellar parameters and abundances, suited for the verification and calibration of large data sets \citep{2017A&A...598A...5P}. Our data set contains 25 benchmark stars, including the Sun. For our analysis we excluded four benchmark stars that have labels outside our training label space. Three of the excluded stars have [Fe/H] $< -2.0$~dex and one star has a [Mg/Fe] abundance of $-0.79$~dex. We compare the GES labels and CNN output labels for the remaining 21 benchmark stars in Fig.~\ref{fig:benchmarks}. The CNN predictions agree well with the GES values across all five labels for most of the benchmark stars. The largest differences occur for stars on the edges of the parameter space, where the network predicts more moderate values compared to the extreme GES values. An example is HD 49933, the benchmark star with the highest \textit{T}\textsubscript{eff}, for which our network predicts $\sim$350~K less than what is reported by GES. This star remains one of the hottest in our benchmark set, even with this reduction in \textit{T}\textsubscript{eff}. Despite the large difference in one label, the CNN predictions for the other labels of HD 49933 agree well with the GES measurements. The only star that shows a large difference across several labels is HD 102200. It has the highest GES [Al/Fe] measurement of the benchmark stars, which our network underestimates by $\sim$0.6~dex. \par The label-specific bias and scatter between GES and CNN labels for the benchmark stars are comparable to the bias and scatter that we found for the training and test sets in Fig.~\ref{fig:one-to-one}. \par The CNN predicts similar label values for repeat spectra, oftentimes predicting identical labels for multiple repeats. The dispersion between repeated label predictions can be interpreted as the uncertainty of the CNN results. These CNN uncertainties are within the GES label uncertainties for the benchmark stars. \par We conclude that our CNN is able to accurately predict multiple labels of individual stars. However, the most extreme CNN results should be used cautiously, because they are likely underestimating high values and overestimating low values. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{benchmarks.png} \caption{Comparison of GES input labels with CNN predictions for the benchmark stars. Different data point sizes have no physical meaning and are for visualisation purposes only.} \label{fig:benchmarks} \end{figure} \subsection{Comparison to asteroseismic surface gravities} \label{Comparison of CNN log(g)s with asteroseismic surface gravities} Asteroseismology is an extremely powerful tool to provide accurate surface gravities, based on stellar oscillations. This method is widely used by spectroscopic surveys for validation or calibration purposes (RAVE, \citealt{Valentini2017}; APOGEE, \citealt{Pinsonneault2018}). The Convection, Rotation and planetary Transits (CoRoT) mission was a space observatory dedicated to stellar seismology.
Our aim here is to compare the log(\textit{g}) values of our GES input data and our CNN results to the GES-CoRoT log(\textit{g}) values from \cite{valentini2016}. In Fig.~\ref{fig:1v1_CoRoT}, the comparison between GES log(\textit{g}) values and asteroseismic CoRoT results shows no residual trend, with a low dispersion of 0.08~dex. The CNN log(\textit{g}) values also show no residual trend compared to GES-CoRoT log(\textit{g}) and a similarly small dispersion. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{1v1_CoRoT_logg_GOOD.png} \caption{One-to-one comparison of the GES input labels (red) and the CNN output labels (black) with seismic log(\textit{g}) values derived using asteroseismology \citep{valentini2016}.} \label{fig:1v1_CoRoT} \end{figure} \subsection{Globular clusters} \label{Globular clusters} Our data set covers stars that belong to a number of different globular clusters. We identified member stars of five separate clusters based on their position in the sky and on the clustering of the [Fe/H] values and radial velocities reported in GES iDR6. The position of the cluster members in the [Mg/Fe] and [Al/Fe] plots is displayed in Fig.~\ref{fig:clusters}. The CNN predictions reproduce the grouping of cluster members in the plots, with a small spread of [Fe/H] within each cluster. However, the CNN predictions show a smaller scatter in [Element/Fe] compared to the GES values, especially for Al. This reduced scatter is a reflection of the results that we saw in Fig.~\ref{fig:one-to-one}, where the CNN predicts more moderate labels for spectra with extreme GES labels. \begin{figure} \includegraphics[width=0.9\columnwidth]{cluster_plot.png} \caption{[Mg/Fe] and [Al/Fe] vs. [Fe/H] plots for stars in the training, test, and observed sets. The panels on the left show the distributions of the GES iDR6 values, panels on the right are the predictions of the trained neural network. Cluster membership is indicated by differently colored data points.} \label{fig:clusters} \end{figure} Our CNN results recover the Mg-Al anti-correlation, which is used to investigate the chemical evolution of globular clusters \citep{2017A&A...601A.112P}. Figure \ref{fig:MgAl_3clusters} shows the Mg-Al anti-correlation in the clusters NGC 1904, NGC 6218, and NGC 2808. The average [Fe/H] values of these three clusters span a range of $\sim$0.5~dex. We see that the CNN results trace the anti-correlations well in all three clusters. Except for the most extreme stars, all CNN predictions agree with the GES results within their reported uncertainties. We observe the largest difference between CNN and GES labels in NGC 1904, which contains stars with the lowest [Fe/H] in our entire data set. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{plot_anticorr.png} \caption{Mg-Al anti-correlation plots for three of our sample clusters with decreasing cluster metallicity. Colored data points show the labels predicted by the CNN, black points are the GES results. The colors for the different clusters are the same as in Fig.~\ref{fig:clusters}. Average uncertainties of the GES results are shown in the lower left corner.} \label{fig:MgAl_3clusters} \end{figure} \subsection{Thin- and thick disk populations} \label{Thin- and thick disk populations} As discussed in Sect.~\ref{Parameter space of input labels}, [Mg/Fe] values can be used to separate the Milky Way stars into a thin disk and a thick disk population.
We performed this separation based on our CNN results for the combination of training plus test set and, independently, for the observed set. The top panel of Fig.~\ref{fig:disks} shows the distribution of [Mg/Fe] vs. [Fe/H] for the CNN predictions (analogous to the top panel of Fig.~\ref{fig:MgAl_Fe_hist2d}) for the training/test samples. We can see the separation between the two main populations: thin disk stars with [Mg/Fe] lower than $\sim$0.2~dex and thick disk stars with enhanced [Mg/Fe]. To identify thick and thin disk stars in our data set, we used the clustering algorithm HDBSCAN \citep{10.1007/978-3-642-37456-2_14}, which is implemented in the \textit{hdbscan} library for Python. This algorithm assigns data points to different clusters, depending on the density of data points in a distribution. The result of this clustering for our CNN data is displayed in the right panel of Fig.~\ref{fig:disks}. Two clusters are identified that correspond to the two stellar populations. About 35\% of the stars do not fall into either of the two clusters. Stars outside of the two dense regions in the distribution are considered to be ``noise'' by the HDBSCAN algorithm and are not assigned to any cluster. In the literature the chemical separation between thin and thick disk is often performed by splitting the distribution into several [Fe/H] bins and finding the [Mg/Fe] value in each bin where the density of stars is at a minimum (e.g. \citealt{2011A&A...535L..11A}, \citealt{2014A&A...572A..33M}). \cite{2018A&A...619A.125A} use a sophisticated t-SNE approach to identify the different stellar populations. They include abundance measurements from 13 chemical elements to further dissect the thin and thick disk into additional sub-populations. \par We performed the same investigation for the observed sample (covering 20 $\le$ S/N $\le$ 30) to test the precision of CNN abundances in a regime of low S/N. The results are shown in the bottom panels of Fig.~\ref{fig:disks}. The HDBSCAN algorithm is able to identify the same two clumps corresponding to the thin and the thick discs but, instead of two separated blobs, they seem to form a single sequence from low to high [Mg/Fe]. Machine-learning on low-S/N spectra may thus not be able to derive abundances precisely enough for Galactic Archaeology. We therefore warn the community that spectra with high-enough S/N should be gathered by surveys in order to perform Galactic Archaeology. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{disks.png} \includegraphics[width=0.9\columnwidth]{disks_obs.png} \caption{Top panel: density map of the [Mg/Fe] vs. [Fe/H] distribution of our CNN results for the training+test sample. Brighter colors indicate a higher density of data points. Thin and thick disk populations found by the HDBSCAN algorithm are shown on the right. The two populations correspond to the two separate dense regions in the left panel. Bottom: same plot but for the observed sample composed of 15\,419 stars with 20 $\le$ S/N $\le$ 30.} \label{fig:disks} \end{figure}
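The clustering step could look as follows; the \texttt{min\_cluster\_size} value is an illustrative assumption, and \texttt{feh} and \texttt{mgfe} are assumed arrays holding the CNN [Fe/H] and [Mg/Fe] predictions.
\begin{verbatim}
import numpy as np
import hdbscan

# Cluster the stars in the [Fe/H]-[Mg/Fe] plane.
clusterer = hdbscan.HDBSCAN(min_cluster_size=100)
membership = clusterer.fit_predict(np.column_stack([feh, mgfe]))
# membership: 0 and 1 for the two disk populations,
# -1 for "noise" points assigned to neither cluster.
\end{verbatim}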
To investigate the age distributions of the two populations, we used the isochrone fitting code \textit{A Unified tool to estimate Distances, Ages and Masses} (UniDAM). The UniDAM tool \citep{2017A&A...604A.108M} follows a Bayesian approach of isochrone fitting. It compares stellar atmospheric parameters and absolute magnitudes from simulated PARSEC isochrones \citep{2012MNRAS.427..127B} to the corresponding values in observed stars. All PARSEC isochrones also have stellar masses and ages associated with them. For the isochrone fitting we used the CNN predictions for the atmospheric parameters \textit{T}\textsubscript{eff} and log(\textit{g}) in combination with [Fe/H]. Magnitudes of our sample stars in the \textit{J}, \textit{H}, and \textit{K} bands were taken from the 2MASS catalogue \citep{2006AJ....131.1163S}. In order for UniDAM to calculate the absolute magnitudes, it is also necessary to provide the parallax value for each sample star. We used the parallaxes from Gaia EDR3 \citep{2020yCat.1350....0G}. We removed stars with negative parallaxes as well as stars with relative parallax errors $> 0.2$. To get the most precise age estimates, we only considered turn-off stars in this analysis. Turn-off stars in our thin and thick disk samples were selected by their position in the Kiel-diagram. The resulting average age of the thin disk stars is 8.8~Gyr, while the average thick disk age is 9.8~Gyr. This age difference between the two populations has been found in numerous studies and by using several different age determination methods. \cite{Kilic_2017}, for example, find ages of 7.4--8.2~Gyr for the thin disk and 9.5--9.9~Gyr for the thick disk by analyzing luminosity functions of white dwarfs in the two disks. Using APOGEE spectra and precise age estimates based on asteroseismic constraints, \cite{2021A&A...645A..85M} also show that the chemically selected thick disk stars are old, with a mean age of $\sim$11~Gyr. We note that the detailed age distribution of thin and thick disk members is sensitive to several selection criteria such as metallicity, kinematic properties, and the distance from the Milky Way center. A detailed investigation of the two stellar populations is out of the scope of this work. \par We also investigated the kinematical properties of our thin and thick disk samples. Based on the current positions and velocities of the stars, we integrated their orbits for 5~Gyr in a theoretical Milky Way potential, using the python-based galactic dynamics package \textit{galpy} \citep{2015ApJS..216...29B}. For the integration we used the gravitational potential \textit{MWPotential2014}, which combines bulge, disk, and halo potentials. Proper motions, sky coordinates, and parallaxes of our sample were taken from Gaia EDR3. In Fig.~\ref{fig:eccentricities} we show the trends of the orbital eccentricities relative to [Fe/H] for our thick and thin disk stars. A linear regression model shows that the eccentricity $e$ of thick disk orbits decreases with increasing [Fe/H]: $\Delta e / \Delta\rm[Fe/H]$ = $-0.25$. The eccentricities of thin disk stars are on average lower than the thick disk eccentricities and show a slight positive trend ($\Delta e / \Delta\rm[Fe/H]$ = 0.01). These results are consistent with the findings of \cite{Yan_2019}, who investigated the chemical and kinematical properties of thin and thick disk stars from the LAMOST data set \citep{2012RAA....12..723Z}. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{eccentricities.png} \caption{Eccentricities $e$ of stellar orbits as a function of [Fe/H] for our thin disk and thick disk samples. Dashed lines show linear fits to the thick disk (black) and thin disk data points (gray).} \label{fig:eccentricities} \end{figure}
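A minimal sketch of this orbit integration with \textit{galpy}, for a single star with assumed Gaia EDR3 inputs (\texttt{ra}, \texttt{dec} in degrees, distance in kpc, proper motions in mas\,yr$^{-1}$, and line-of-sight velocity in km\,s$^{-1}$):
\begin{verbatim}
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# Orbit from observed coordinates (radec=True interprets the
# input as [ra, dec, distance, pm_ra, pm_dec, v_los]).
o = Orbit(vxvv=[ra, dec, dist, pm_ra, pm_dec, v_los], radec=True)
ts = np.linspace(0.0, 5.0, 1001) * u.Gyr
o.integrate(ts, MWPotential2014)  # bulge + disk + halo potential
ecc = o.e()                       # orbital eccentricity
\end{verbatim}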
\section{Caveats} \label{Caveats} During the network training the GES input labels are considered to provide the true parameterization of the training spectra. The quality of the network predictions therefore depends entirely on the quality of the training data. We limited the uncertainties and errors in our training data by applying several quality constraints (Sect.~\ref{Data preparation}), but there is still a possibility that the input labels suffer from systematics. Inaccurate labels of a small number of input spectra will not have a noticeable effect on the training process. The cases with a large difference between GES input value and CNN prediction could therefore be the result of the network predicting accurate labels for spectra with inaccurate GES labels. Future work could investigate if and how CNNs can be used for the quality control of classically derived stellar parameters. \par We are able to estimate the internal uncertainties of our network predictions by training multiple CNN models on the same data. These uncertainties, however, do not take into account the uncertainties of the training labels themselves. Bayesian deep learning frameworks account for both the training data uncertainties and model uncertainties \citep{10.5555/3295222.3295309}. Future work could benefit from implementing this Bayesian approach into our CNN method. \par The predictive power of our CNN is limited by the sparse training data that is available at the edges of the parameter space (Sect.~\ref{Training results}). A more homogeneous coverage of the parameter space, achieved by increasing the number of training spectra with extreme parameter values, would increase the precision of the CNN predictions. \par During the training phase our CNN not only learns the correlations between spectral features and stellar labels, but is also sensitive to correlations within the training labels themselves. The effect of this is discussed in Sect.~\ref{Learning from spectral features}, where we see for example how the strength of Mg absorption lines also has an effect on the network predictions for [Al/Fe]. These correlations can never be avoided when training the network to predict multiple abundances at once. The alternative is to train a separate network model for each abundance label. This strategy decreases the efficiency of the CNN approach, especially when the goal is to predict a large number of chemical elements. Therefore, care has to be taken to reduce the correlations in the training data without sacrificing the ability of the network to predict multiple labels at once. \par \section{Conclusions} \label{Conclusions} Here we summarize the main results of our study and the steps we carried out to find these results. \begin{itemize} \item We built a training and a test set based on GES iDR6 spectra with S/N $>$ 30. Together, these sets consist of 14\,696 stellar spectra with associated atmospheric parameters and chemical abundances. We applied several quality checks on these sets to ensure that our network is trained on high quality spectra and stellar labels. We use the parameters \textit{T}\textsubscript{eff} and log(\textit{g}) and the abundances [Mg/Fe], [Al/Fe], and [Fe/H] as the input labels for our neural network. We also built an observed set of 15\,419 spectra to test the performance of our CNN on spectra that were not involved in the training process. The observed set spectra have a lower S/N, between 20 and 30. \item We then built a convolutional neural network with the python-based library \textit{Keras}. Our network architecture contains three convolutional layers, designed to detect features and absorption lines in input spectra.
Three succeeding dense layers then convert the detected spectral features into the values of the five output labels. We performed ten training runs, resulting in ten slightly different CNN models. We used the eight best CNN models to predict the labels of the training, test, and observed set spectra. \item The CNN label predictions are in good agreement with the GES input labels. The bias (average offset) and scatter between CNN and GES labels are identical for the training and test sets, showing that our CNN is not over-fitting during the training. The results for the observed set are also in good agreement with the GES input values, albeit with a larger scatter between CNN and GES values. We find that the quality of the CNN results degrades for low S/N spectra (20 $\le$ S/N $\le$ 30), especially for abundance predictions. We warn the community that machine-learning on low-S/N spectra may not be sufficient for deriving precise enough abundances. Surveys should therefore gather spectra with high-enough S/N (depending on their science goals). \item All three sets have in common that the differences between CNN predictions and GES values increase at the edges of the parameter space, where the number of available training spectra is small. Increasing the number of training spectra in these parameter regimes can increase the precision of the CNN predictions. \item The scatter between the predictions from the eight different CNN models can be used to assess the internal precision of our network. This scatter is small: 24~K for \textit{T}\textsubscript{eff}, 0.03~dex for log(\textit{g}), 0.02~dex for [Mg/Fe], 0.03~dex for [Al/Fe], and 0.02~dex for [Fe/H]. \item We use network gradients to demonstrate the sensitivity of our network to different parts of the input spectra. The gradients show that the network is able to identify absorption lines in the input spectra and associates those lines with the relevant stellar labels. Caution has to be applied when choosing input labels, because strongly correlated input labels lead to strongly correlated network gradients. The network then predicts labels based on unrelated spectral features (for example the absolute Al abundance from Mg absorption lines). Inferring stellar parameters from correlations like these can lead to satisfying results for some spectra. However, stars with exotic chemical compositions will not be parameterized well. \item The validation of our results with 21 GES benchmark stars shows that our CNN is able to precisely predict labels for individual stars over a large range of label values. Network predictions for repeat spectra of the benchmark stars show a small scatter per star. This scatter is within the GES uncertainties for the benchmark star labels. \item We investigated the Mg-Al anti-correlation in globular clusters ranging from $-0.92$ to $-1.5$ in metallicity. As our training set does not contain a large number of metal-poor stars with large Al and Mg abundances, the CNN mainly recovered the spread in Al in these globular clusters. \item We investigated the ages and chemical properties of the galactic thin and thick disk populations. We identified thin and thick disk stars based on their position in the [Mg/Fe] vs. [Fe/H] plane with the HDBSCAN algorithm. We find the average age of the thin disk stars to be 8.8~Gyr and that of the thick disk stars to be 9.8~Gyr. The orbit eccentricities of the thick disk stars show a negative trend with [Fe/H] ($\Delta e$/$\Delta$[Fe/H] = $-0.25$).
The eccentricities of thin disk orbits are lower than those of the thick disk and show no significant trend with [Fe/H]. These results, based on our CNN predictions, are consistent with similar results in the literature. \end{itemize} Our study is of significant importance for the exploitation of future large spectroscopic surveys, such as WEAVE and 4MOST. We showed that CNNs are a robust methodology for stellar parametrization, and we raised some caveats that should be taken into account by the community for future use of machine-learning algorithms in general. \begin{acknowledgements} These data products have been processed by the Cambridge Astronomy Survey Unit (CASU) at the Institute of Astronomy, University of Cambridge, and by the FLAMES/UVES reduction team at INAF/Osservatorio Astrofisico di Arcetri. These data have been obtained from the Gaia-ESO Survey Data Archive, prepared and hosted by the Wide Field Astronomy Unit, Institute for Astronomy, University of Edinburgh, which is funded by the UK Science and Technology Facilities Council. This work was partly supported by the European Union FP7 programme through ERC grant number 320360 and by the Leverhulme Trust through grant RPG-2012-541. We acknowledge the support from INAF and Ministero dell'Istruzione, dell'Universit\`a e della Ricerca (MIUR) in the form of the grant ``Premiale VLT 2012''. The results presented here benefit from discussions held during the Gaia-ESO workshops and conferences supported by the ESF (European Science Foundation) through the GREAT Research Network Programme. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This article is based upon work from COST Action CA16117, supported by COST (European Cooperation in Science and Technology). T.B. was supported by grant No. 2018-04857 from the Swedish Research Council. M.B. is supported through the Lise Meitner grant from the Max Planck Society. We acknowledge support by the Collaborative Research centre SFB 881 (projects A5, A10), Heidelberg University, of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~949173). \end{acknowledgements}
\section{Introduction} \noindent This article is concerned with the problem of classifying tuples of natural numbers $a_1$, $\ldots$, $a_K$ and $b_1$, $\ldots$, $b_L$ with $\sum_i a_i =\sum_j b_j$ such that for all natural numbers $n$ one has $$ \frac{(a_1 n)! (a_2 n)! \cdots (a_K n)!}{(b_1 n)! (b_2 n)! \cdots (b_L n)!} \in {\Bbb N}. $$ Clearly we may assume that no $a_i$ equals $b_j$. Further, it turns out that there are no solutions unless $L >K$, and that one can restrict attention to {\sl primitive} tuples, for which $\gcd(a_1, \ldots, a_K, b_1, \ldots, b_L)=1$. The condition $\sum_i a_i = \sum_j b_j$ guarantees that the factorial ratios grow only exponentially with $n$, so that the power series formed with these coefficients is a hypergeometric series. We have in mind the situation when $D=L-K$ is a fixed positive integer, which is called the {\sl height} of the factorial ratio. The general problem of describing such factorial ratios is largely open, with a complete solution being available only in the case of height $1$. In this case, Rodriguez-Villegas \cite{RV} made the fundamental observation that the integrality of the factorial ratio is equivalent to the algebraicity of the associated hypergeometric function. The work of Beukers and Heckman \cite{BH} gave a complete classification of such algebraic hypergeometric functions (which correspond to the instances where the associated monodromy group is finite). This connection was made precise by Bober \cite{Bober, Bober2}, who showed that for $D=1$ there are three infinite families and fifty-two sporadic examples. One of these sporadic examples goes back to Chebyshev in connection with his work on prime numbers: for all $n\in {\Bbb N}$ $$ \frac{(30n)! n!}{(15n)! (10n)! (6n)!} \in {\Bbb N}. $$ Bober's work confirmed a conjecture of Vasyunin \cite{V}, who had identified the three infinite families and fifty-two sporadic examples in connection with a problem motivated by the Nyman--Beurling equivalent formulation of the Riemann Hypothesis. In the recent paper \cite{S}, I gave a new elementary proof of the classification in the case $D=1$, which is independent of the results of Beukers and Heckman, and made partial progress on understanding larger values of $D$. In this article we shall give a number of new examples of factorial ratios with $D\ge 2$. Trivially, one can take two factorial ratios with $D=1$ and multiply these together to obtain an example with $D=2$. The examples we give will be shown to be {\sl irreducible}; that is, not to arise in this fashion. In particular, for $D=2$ we shall give more than fifty examples of irreducible two-parameter families of integral factorial ratios. Here is one such two-parameter family: if $a$ and $b$ are coprime natural numbers with $a \ge 5b$ then for all $n\in {\Bbb N}$ we have $$ \frac{(6an)! (bn)!}{(2an)! (3an)! (6bn)! ((a-5b)n)! } \in {\Bbb N}. $$ Taking $b=1$, $a=5$ in the above example leads to the Chebyshev example with $D=1$.
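Any individual integrality claim of this kind can be verified mechanically: by Landau's criterion (recalled below), the ratio is integral for all $n$ exactly when $\sum_{i} \lfloor a_i x\rfloor - \sum_{j} \lfloor b_j x\rfloor \ge 0$ for all real $x$, and since this function is $1$-periodic, right continuous, and changes value only at rationals $k/m$ with $m$ among the $a_i$, $b_j$, it suffices to check those finitely many points. A short illustrative sketch of such a check in Python:
\begin{verbatim}
from fractions import Fraction
from math import floor

def is_integral_ratio(tops, bottoms):
    # Check Landau's criterion at all points where the step
    # function can change value (exact rational arithmetic).
    pts = {Fraction(k, m) for m in tops + bottoms
                          for k in range(m)}
    return all(sum(floor(a * x) for a in tops)
               - sum(floor(b * x) for b in bottoms) >= 0
               for x in pts)

# Chebyshev's example and a member (a, b) = (7, 1) of the
# two-parameter family displayed above:
print(is_integral_ratio([30, 1], [15, 10, 6]))     # True
print(is_integral_ratio([42, 1], [14, 21, 6, 2]))  # True
\end{verbatim}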
Before we can describe our work, we must recapitulate the notation and some of the results from our earlier paper \cite{S}. Let $\frak a = [a_1,\ldots, a_n]$ denote a list of $n$ non-zero integers. We shall always assume that our lists are non-degenerate in the sense that $\frak a$ does not contain a pair of elements $a$, $-a$. Given a non-degenerate list $\frak a$, we denote by $\ell(\frak a)$ the length of the list, by $s(\frak a)$ the sum of the elements $a_1+\ldots +a_n$, and by $h(\frak a)$ its height, which is defined as the number of negative elements in $\frak a$ minus the number of positive elements. We call a list primitive if the gcd of all its elements equals $1$. The order of elements in lists is irrelevant, and we will treat all permutations of a list as being the same. Also, given a non-zero integer $k$, we denote by $k\frak{a}$ the list obtained by multiplying every element of $\frak a$ by $k$. Let $\{ x\} = x-\lfloor x\rfloor$ denote the fractional part of $x$, and let $\psi(x) = 1/2- \{x\}$ denote the ``saw-tooth function''. To a list $\frak a$ we associate a $1$-periodic function ${\frak a}(x)$, defined as follows. If $a_j x \not \in {\Bbb Z}$ for all $j$, put \begin{equation} \label{1.1} \frak a(x) = \sum_{j=1}^{n} \psi(a_j x), \end{equation} and extend $\frak a(x)$ to the remaining points by right continuity: $\frak a(x) = \frak a(x^+)$. We also define the ``norm'' of $\frak a$ (which played a central role in the investigations of \cite{S}) by \begin{equation} \label{1.2} N([a_1,\ldots, a_n] ) = N(\frak a) = \int_0^1 {\frak a}(x)^2 dx = \frac 1{12} \sum_{i, j=1}^n \frac{(a_i, a_j)^2}{a_i a_j}. \end{equation} The last identity above follows from an easy calculation using Parseval's formula and the Fourier expansion of the saw-tooth function; see (2.1) of \cite{S}. If $(a_1,\ldots, a_K, b_1,\ldots, b_L)$ is a $(K+L)$-tuple of natural numbers corresponding to an integral factorial ratio of height $D=L-K$, then we associate to this tuple the list $\frak a = [a_1, \ldots, a_K, -b_1, -b_2, \ldots, -b_L]$, which is a non-degenerate list with $\ell(\frak a) = K+L =2K +D$, $h(\frak a)=D$, and $s(\frak a) =0$. The integrality of the factorial ratio is equivalent to the condition that $\sum_{i=1}^{K} \lfloor a_i x\rfloor - \sum_{j=1}^{L } \lfloor b_j x\rfloor \ge 0$ for all real numbers $x$. This observation goes back to Landau, and is based on comparing the power of a prime $p$ dividing the numerator and denominator of the factorial ratio. More precisely, the integrality of the factorial ratio is equivalent to $\sum_{i=1}^{K} \lfloor a_i x\rfloor - \sum_{j=1}^{L } \lfloor b_j x\rfloor $ taking values in the set $\{ 0, 1, \ldots, D\}$ for all real $x$, which is the same as requiring $\frak a(x)$ to take values in the set $\{ -D/2 +k: \ \ 0\le k\le D \}$. Here it may be useful to note that $\sum_{i=1}^{K} \lfloor a_i x \rfloor - \sum_{j=1}^{L} \lfloor b_j x\rfloor$ is right continuous, which motivated our prescription of right continuity for $\frak a(x)$. In the case $D=1$, the function $\frak a(x)$ is constrained to take just the two values $-1/2$ and $1/2$. This permits an elegant characterization of integral factorial ratios of height $1$: these correspond to lists $\frak a$ with odd length $\ell(\frak a)$, height $h(\frak a)=1$, sum $s(\frak a)=0$, and norm $N(\frak a) =1/4$. Therefore the norm is a particularly valuable tool in understanding factorial ratios of height $1$, and it forms the basis for the classification of such ratios in \cite{S}. When $D\ge 2$, the norm alone does not characterize integral factorial ratios, but it nevertheless forms a useful starting point for the investigation of that problem. If $\frak a$ corresponds to an integral factorial ratio with height $D$, then its norm $N(\frak a)$ must be $\le D^2/4$.
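For illustration, Chebyshev's example above corresponds to the list $\frak a = [1, 30, -6, -10, -15]$, which indeed has odd length $\ell(\frak a)=5$, height $h(\frak a) = 3-2 = 1$, and sum $s(\frak a)=0$. Its norm may be computed from \eqref{1.2}: the five diagonal terms each contribute $1$, while the ten distinct pairs contribute (after several cancellations, such as the pair $(1,-6)$ against the pair $(-10,-15)$)
\[
\frac{1}{30} - \frac{1}{5} - \frac{1}{3} - \frac{1}{2} = -1,
\]
so that
\[
N(\frak a) = \frac{1}{12}\big( 5 + 2 \cdot (-1) \big) = \frac 14,
\]
in accordance with the characterization of height $1$ ratios just described.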
In \cite{S}, we showed (using the bound $N(\frak a) \le D^2/4$) that if $D=2$ then $K+L \le 80$, and that the points $(a_1,\ldots, a_K, b_1, \ldots, b_L) \in {\Bbb R}^{K+L}$ lie on finitely many vector subspaces of ${\Bbb R}^{K+L}$ of dimension at most $11$. Given two lists $\frak a_1$ and $\frak a_2$, we denote by $\frak a_1 + \frak a_2$ the list obtained by concatenating the two lists and removing any degeneracies. If $\frak a_1$ and $\frak a_2$ correspond to integral factorial ratios with heights $D_1$ and $D_2$, then $\frak a_1 + \frak a_2$ corresponds to an integral factorial ratio of height $D_1+ D_2$. This gives a trivial way of constructing factorial ratios of height larger than $1$, and the following definition is an attempt to distinguish such examples from genuinely new examples of height larger than $1$. \begin{definition} A list $\frak a$ corresponding to a factorial ratio with height $D$ is called reducible if $\frak a = \frak b+ \frak c$ where $\frak b$ and $\frak c$ correspond to factorial ratios with smaller heights. If $\frak a$ cannot be reduced in that way, then $\frak a$ is called irreducible. \end{definition} For example, the lists $[(a+b+c), -a, -b, -c]$ with $a$, $b$, $c$ positive integers correspond to multinomial coefficients, and thus give examples of integral factorial ratios with height $2$. These lists are all reducible, since they may be decomposed as $[(a+b+c), -a, -(b+c)] + [(b+c), -b, -c]$; that is, the multinomial coefficient can be expressed as a product of binomial coefficients. We are now ready to present our results. The first result provides a classification of all lists with height $2$ (that is, with two more negative entries than positive) and norm at most $1/3 +\delta$ for some small $\delta >0$. \begin{theorem} \label{thm1} Let $\frak a$ be a primitive list of height $2$ with $N(\frak a) \le 1/3+\delta$ for some small $\delta >0$. Then, apart from finitely many lists, $\frak a$ belongs to one of $28$ families described explicitly in Section 3. There are two three-parameter families and $26$ two-parameter families. Sixteen of the families are reducible in the sense that every list of height $2$ in such a family is a reducible list, and the remaining $12$ are irreducible in the sense that they contain infinitely many primitive lists of height $2$ that are irreducible. All lists with height $2$ in the $28$ families give examples of integral factorial ratios with height $2$. \end{theorem} In theory it would be possible to determine the finitely many lists left unspecified in our theorem, but this might be computationally demanding (or even infeasible). Just as the lists in the infinite families all gave examples of factorial ratios, we hazard the guess that the same property holds for the finitely many exceptional lists also; in other words, every primitive list of height $2$ and norm at most $1/3+ \delta$ for some small $\delta >0$ gives rise to an integral factorial ratio. In contrast, a typical list of height $2$ from the family $[3a,18a,-a,-9a,-b,-11a+b]$ has norm very nearly $37/108=0.34259\ldots$, but one can find many examples in this family that do not correspond to integral factorial ratios (indeed, we believe that there are only finitely many primitive lists in this family that are integral factorial ratios). The other main result of this paper gives a way of constructing integral factorial ratios of height larger than $1$, and we shall use this method to exhibit many irreducible two-parameter families with height $2$.
\begin{definition} A list $\frak b = [b_1, \ldots, b_k]$ is called monotone if the associated function $\sum_{i=1}^{k} \lfloor b_i x\rfloor$ (defined thus if $b_i x \not \in {\Bbb Z}$ for all $i$, and extended by right continuity to all $x$) is a monotone function of $x$. If $s(\frak b)$ is positive, then this associated function is monotone increasing, and if $s(\frak b)$ is negative then it is monotone decreasing. \end{definition} \begin{theorem} \label{thm2} Suppose $\frak a$ and $\frak b$ are primitive lists, with $\frak b$ monotone, and such that $s(\frak a)$ and $s(\frak b)$ are both non-zero with $(s(\frak a), s(\frak b)) =1$. Suppose $s(\frak b) \frak a + (-s(\frak a)) \frak b$ is a list of height $D$ corresponding to an integral factorial ratio. Then the lists with height $D+1$ that belong to the family $$ a \frak a + b \frak b + (-a s(\frak a) - b s(\frak b)) [1] $$ are integral factorial ratios. \end{theorem} It is easy to check that the lists $[1,-k]$ (for any integer $k\ge 2$), $[1,-2,-k,2k]$ (for odd $k\ge 3$), $[1,-2,k]$ (for even $k\ge 4$), $[-1,2,3]$, and $[2,-3,-4]$ are all monotone, and using these together with a knowledge of factorial ratios with height $1$, we give in Section 5 many examples of two-parameter families of height $2$ arising from Theorem \ref{thm2}. Starting with these examples, and using Theorem \ref{thm2} repeatedly, one can produce three-parameter families with height $3$, and so on.
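To illustrate the mechanics of Theorem \ref{thm2} in the simplest case, take $\frak a = [1]$ and $\frak b = [1,-2]$, so that $s(\frak a) = 1$ and $s(\frak b) = -1$ are coprime and $\frak b$ is monotone. Here $s(\frak b) \frak a + (-s(\frak a)) \frak b = [2,-1,-1]$, which corresponds to the central binomial coefficient $\binom{2n}{n}$ and has height $D=1$. Theorem \ref{thm2} then asserts that the lists of height $2$ in the family $a[1] + b[1,-2] + (b-a)[1]$ are integral factorial ratios; taking $a = -\alpha$ and $b = -\beta$ with $\beta > \alpha > 0$ produces the lists $[2\beta, -\alpha, -\beta, -(\beta-\alpha)]$, which are multinomial lists and hence indeed integral factorial ratios of height $2$.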
In Section 6 we discuss the structure of reducible lists with height $2$, and use this to show that the examples produced in Section 5 with height $2$, along with the $12$ families mentioned in Theorem \ref{thm1}, are all irreducible. Finally, let us mention some other examples of integral factorial ratios with height larger than $1$. In a {\sl Monthly} problem, Askey \cite{A} gives the two-parameter, height $2$ family $[3 (m+n), 3n, 2m, 2n, -(2m+3n),-(m+2n),-(m+n),-m, -n ,-n]$, which we checked is irreducible using our work in Section 6. Askey's example arose in the context of the Macdonald--Morris conjectures, which in this context were resolved in the work of Zeilberger \cite{Z}. The Macdonald--Morris conjectures are intimately connected with the theory of the Selberg integral and provide further examples of integral factorial ratios; see \cite{FW} for further information in this direction. For example, the root system $BC_n$ gives rise to a three-parameter factorial ratio of height $n$ (see page 501 of \cite{FW}), giving in particular a three-parameter family of height $2$. Gessel \cite{G} discusses finding integral factorial ratios via combinatorial arguments. In particular, Gessel gives a $4$-parameter family of height $3$ -- namely, $[(k+2\ell),(k+2m),(k+2n),(k+\ell+m+n), -k, -\ell, -m , -n, -(k+\ell+m),-(k+\ell+n), -(k+m+n)]$ -- along with several examples of $3$-parameter families of height $2$. Wider \cite{W} gives examples of integral factorial ratios of height larger than $1$, and discusses the problem of showing whether such examples are reducible or not. He gives the height $2$ family $[3a, -a, 3b, -b, -(a+b), -(a+b)]$, which he shows is irreducible. \section{Toward the proof of Theorem \ref{thm1}} \noindent In this section we recall some results from \cite{S} on identifying lists with small norm, and set the stage for the proof of Theorem \ref{thm1}. A key notion developed in \cite{S} is that of $k$-separated lists; see Definition 2.1 of \cite{S}. Briefly, a primitive list $\frak a$ is called $k$-separated if there are two primitive lists $\frak b$ and $\frak c$ with $1\le \ell(\frak b), \ell(\frak c) < \ell(\frak a)$ such that the following properties hold. There are non-zero coprime integers $B$ and $C$ such that $\frak a = B\frak b + C\frak c$. Exactly one of $B$ or $C$ is divisible by $k$, and the other is coprime to $k$. If $k|B$, then for all $kb \in B\frak b$ and $c\in C\frak c$ we have $(kb,c) = (b,c)$, and an analogous criterion holds if $k|C$. For a more fleshed out discussion of this definition, and examples, we refer to \cite{S}. There are two key properties of this definition. First, one can compute the norm of $\frak a$ in terms of the norms of $\frak b$ and $\frak c$: in particular, one has from Proposition 2.2 of \cite{S} that $N(\frak a) \ge (1-1/k)(N(\frak b) + N(\frak c))$. Second, given $n$ and $k$, there are only finitely many primitive lists of length $n$ that are at most $k$-separated (which means that the list is not $\ell$-separated for any $\ell >k$); see Proposition 2.4 of \cite{S}. These two properties enable an inductive approach to classifying lists of small norm, and we now extract from \cite{S} some conclusions in this regard. \begin{lemma} \label{lem2.1} Let $\frak a$ be a primitive list. Then $N(\frak a) \ge 31/180$ except in the following cases: $$ \frak a = [1], \qquad [a,b], \qquad [a,-2a, b], \qquad [a,-2a, b,-2b], $$ $$ \text{Norm } \tfrac{17}{108}: \qquad [1,-3,9], \qquad [1,-2, -3, 6, 9, -18]; $$ $$ \text{Norm } \tfrac{1}{6}: [1, -3, -4], \ \ [3, 4, -12], \ \ [1, -3, 6, -12], \ \ [1, -3, -4, 6], \ \ [1, -3, -4, 12], $$ $$ [1,-2,4,-12], \ \ [1,-2,-3,4], \ \ [1,-2,-3,12], \ \ [1,-4,-6,12], \ \ [2,-3,-4,12], \ \ [3,-4,-6,12], $$ $$ [1,-2,-3,4,6], \ \ [1,-2,-3,6,-12], \ \ [2,3,-4,-5,12], \ \ [1, -2,4,6,-12], \ \ [1,-2,-3,4,6,-12]. $$ \end{lemma} \begin{proof} Clearly $N([1])=1/12$, and the three families $[a,b]$, $[a,-2a, b]$, and $[a,-2a, b,-2b]$ give lists with norms close to $1/6$ once $|a|$ or $|b|$ is sufficiently large (and with $(a,b)=1$). The remaining catalog of lists follows from the work in \cite{S}: see there Section 4.3, Lemmas 4.2, 7.1, 7.3, and 7.4, together with the bounds for $G(n)$ discussed in Section 3. \end{proof} For future use, let us also record the first few smallest norms that are possible: $$ \text{Norm } \tfrac{1}{12}: [1], \ [1,-2]; \ \ \text{Norm } \tfrac{1}{9}: [1,-3], \ [1,-2, -3, 6]; \ \ \text{Norm } \tfrac 18: [1, -4], \ [1,-2,4]. $$ \begin{lemma} \label{lem2.2} Let $\frak a$ be a primitive list with $s(\frak a)=0$. If $\ell(\frak a)$ is odd, then $N(\frak a) \ge 1/4$. If $\ell(\frak a)$ is even, then $N(\frak a) \ge 31/180$ except for the two lists $[1,-2,-3,4]$ and $[1,-3,-4,6]$, which have norm $1/6$. \end{lemma} \begin{proof} Note that if $\frak a$ is primitive with $s(\frak a) =0$ then $\ell(\frak a) \ge 3$. If $\ell(\frak a)$ is odd, then $\frak a(x)$ takes values of the form $k+1/2$ for an integer $k$, which implies that $N(\frak a) \ge 1/4$. If $\ell(\frak a)$ is even, then the lemma follows upon examining the lists in Lemma \ref{lem2.1}. \end{proof} \begin{proposition} \label{prop2.1} Let $\frak a$ be a primitive list with norm $\le 1/3+ \delta$, $s(\frak a) =0$, and $h(\frak a)=2$.
Then, apart from finitely many exceptional lists, $\frak a$ lies in a space of the form $$ x_1 \frak a_1 + x_2 \frak a_2 + x_3 \frak a_3, \qquad\text{ with } x_1 s(\frak a_1) + x_2 s(\frak a_2) + x_3 s( \frak a_3) =0, $$ where $\frak a_1$, $\frak a_2$, $\frak a_3$ are primitive lists with $s(\frak a_i) \neq 0$ and $N(\frak a_1) + N(\frak a_2) + N(\frak a_3) \le 1/3+2\delta$. \end{proposition} \begin{proof} From \cite{S} we know that if $\ell(\frak a)$ is sufficiently large, then $N(\frak a)$ is also large; for example, if $\ell(\frak a) \ge 82$ then $N(\frak a) >1$. Thus we can restrict attention to lists $\frak a$ with bounded length, and so after excluding finitely many primitive lists, we may assume that $\frak a$ is $k$-separated for some $k \ge 2/\delta$. Thus, by the definition of $k$-separated, we can find two primitive lists $\frak b$ and $\frak c$ such that $\frak a = B\frak b + C\frak c$, and $N(\frak b) + N(\frak c) \le N(\frak a) (k/(k-1)) \le N(\frak a)(1+\delta)$. If either $s(\frak b)$ or $s(\frak c)$ is zero, then (because $s(\frak a)=0$) the other must also be zero. If $s(\frak b) = s(\frak c) =0$, then Lemma \ref{lem2.2} would imply that $\frak b$ and $\frak c$ would have to be either $[1,-2,-3,4]$ or $[1, -3, -4, 6]$, but in all these cases it is not possible for $\frak a = B\frak b + C\frak c$ to have height $2$. Therefore, we may suppose that $s(\frak b)$ and $s(\frak c)$ are both non-zero. Since $\frak a = B\frak b + C\frak c$ is primitive, we must have $C= \pm s(\frak b)/(s(\frak b), s(\frak c))$ and $B = \mp s(\frak c)/(s(\frak b),s(\frak c))$, so that $\frak a$ is determined uniquely by $\frak b$ and $\frak c$. If both $\frak b$ and $\frak c$ are at most $\lceil 2/\delta \rceil$-separated, then there would only be finitely many possibilities for $\frak b$ and $\frak c$, and therefore only finitely many choices for $\frak a$. Suppose then that $\frak c$ is at least $2/\delta$--separated. Now $\frak c$ must decompose as $\frak c = D\frak d + E \frak e$, where $\frak d$ and $\frak e$ are primitive lists with $N(\frak d) + N(\frak e) \le N(\frak c)(1+\delta)$. Renaming $\frak b$ as $\frak a_1$, $\frak d$ as $\frak a_2$, and $\frak e$ as $\frak a_3$ we conclude that $\frak a$ is of the form $x_1 \frak a_1 + x_2 \frak a_2 + x_3 \frak a_3$ as desired, and that $N(\frak a_1) + N(\frak a_2) + N(\frak a_3) \le N(\frak b) + (1+\delta) N(\frak c) \le (1+\delta)^2 N(\frak a) \le \frac 13+2\delta$. There is one final remaining point. We know that $s(\frak a_1) (=s(\frak b)) \neq 0$, but it is conceivable that one of $s(\frak a_2)$ or $s(\frak a_3) =0$; say, $s(\frak a_3)=0$. To rule this scenario out, note that by Lemma \ref{lem2.2} we must then have $\frak a_3 = [1,-2,-3,4]$ or $[1,-3,-4,6]$ and thus $N(\frak a_3)=1/6$. This forces $N(\frak a_1) + N(\frak a_2) \le 1/6+2\delta$, which implies that $\frak a_1$ and $\frak a_2$ must be $[1]$ or $[1,-2]$. Since $\frak a$ has height $2$, we are further forced to have $\frak a_1 = \frak a_2 =[1]$, but now we must have $x_1 = -x_2$ in order to have $s(\frak a) =0$, and the resulting $\frak a$ has height $0$. Therefore $s(\frak a_i) \neq 0$ for $i=1$, $2$, $3$, and the proof of the proposition is complete. 
\end{proof} \section{Proof of Theorem \ref{thm1}: Restricting to $28$ families} \noindent If $\frak a$ is a primitive list of height $2$ with $s(\frak a)=0$ and $N(\frak a) \le 1/3+\delta$, then apart from finitely many exceptions, we know (by Proposition \ref{prop2.1}) that $\frak a$ is of the form \begin{equation} \label{3.1} x_1 \frak a_1 + x_2 \frak a_2 + x_3 \frak a_3, \qquad \text{with } x_1 s(\frak a_1) + x_2 s(\frak a_2) + x_3 s(\frak a_3) = 0, \end{equation} where $N(\frak a_1) + N(\frak a_2) + N(\frak a_3) \le 1/3+ 2\delta$. In this section, we classify all the possibilities for $\frak a_1$, $\frak a_2$, $\frak a_3$ satisfying this bound. Naturally we may suppose that $N(\frak a_1) \le N(\frak a_2) \le N(\frak a_3)$. It follows that $N(\frak a_1) \le 1/9+ \delta$, so that $N(\frak a_1)$ is either $1/12$ or $1/9$. If $N(\frak a_1) =1/9$, then both $N(\frak a_2)$ and $N(\frak a_3)$ are also forced to be $1/9$, so that $\frak a_1$, $\frak a_2$, $\frak a_3$ are all either $[1,-3]$, or $[1,-2,-3, 6]$. But in this case, it is not possible to have $h(\frak a) =2$. We conclude that $N(\frak a_1) = 1/12$, so that $\frak a_1$ is either $[1]$ or $[1,-2]$. Next we must have $N(\frak a_2) \le (1/3+2\delta - 1/12)/2 = 1/8+\delta$, so that we must be in one of the following three cases: $$ \text{Case I}: \qquad \frak a_2= [1], [1,-2] \qquad N(\frak a_2)= 1/12, $$ $$ \text{Case II}: \qquad \frak a_2 = [1,-3], [1,-2,-3,6] \qquad N(\frak a_2) =1/9, $$ $$ \text{Case III}: \qquad \frak a_2 = [1,-4], [1,-2, 4] \qquad N(\frak a_2) =1/8. $$ \subsection{Case I analysis} Note that $N(\frak a_3) \le 1/3 + 2\delta -1/12 -1/12 = 1/6+2\delta$, and Lemma \ref{lem2.1} now gives the various possibilities for $\frak a_3$. If $\ell(\frak a_3) = 1$ (so that $\frak a_3=[1]$) or $\ell(\frak a_3)= 2$ (so that $\frak a_3=[a,b]$ with for coprime integers $a$ and $b$) then the resulting lists $\frak a$ are all included in the family of multinomial coefficients \begin{equation} \label{3.1} [a+b+c, -a, -b, -c] = [a+b+c, -a, -b-c] + [b+c,-b,-c]. \end{equation} This is a three parameter family, which is clearly reducible to two binomial coefficients. \smallskip Now suppose that $\ell(\frak a_3)=3$. Here $\frak a_3$ must be of the form $[a,-2a, b]$, or one of $[1, -3,9]$, $[1,-3,-4]$, $[3, 4, -12]$. Since $\frak a$ must have height $2$, we are forced to have one of $\frak a_1$ or $\frak a_2$ be $[1]$ and the other $[1,-2]$; say, $\frak a_1 = [1]$ and $\frak a_2=[1,-2]$. If $\frak a_3$ is of the form $[a,-2a, b]$, then a little calculation shows that $\frak a$ must belong to the three parameter, reducible family \begin{equation} \label{3.2} [2a, -a, 2b, -b, -c, -(a+b-c)] = [2a, -a, 2b, -b, -a-b] + [a+b, -c, - (a+b-c)]. \end{equation} In order for the left side of \eqref{3.2} to be a list of height $2$, we must have $c$ being a positive integer with $c < a+b$. In the sequel, such conditions will be left implicit. The three other cases of length $3$, namely $\frak a_3= [1, -3, 9]$, $[1,-3,- 4]$, or $[3,4,-12]$ lead to the following three reducible, two parameter, families: \begin{equation} \label{3.3} [3a, -a, -9a, 2b, -b, 7a-b] = a [3, 14, -1, -7, -9] + [7a, 2b, -14a, -b, 7a-b], \end{equation} \begin{equation} \label{3.4} [a,-3a,-4a, 2b, -b, 6a-b] = a [1, 12, -3, -4, -6] + [6a, 2b, -b, -12a, 6a-b], \end{equation} \begin{equation} \label{3.5} [12a, -3a, -4a, 2b,-b, -(b+5a)] = a[12, 5, -3,-4,-10] + [10a, -5a, 2b, -b, -b-5a]. \end{equation} \medskip Now suppose $\ell(\frak a_3)=4$. 
Then $\frak a_3 =[2a, 2b, -a,-b]$, or is given by one of the seven lists with length $4$, sum not equal to $0$, and norm $1/6$ given in Lemma \ref{lem2.1}. In all these cases $\frak a_3$ has height $0$, and therefore we must have $\frak a_1 = \frak a_2 = [1]$. The case $\frak a_3=[2a, 2b, -a, -b]$ leads to the family $[2a, 2b, -a, -b, -c, -d]$ with $c+d=a+b$, which is already included above, see \eqref{3.2}. Thus we are left with the following seven possibilities for $\frak a_3$: $$ [1,-3,6,-12]; \ \ [1, -3, -4, 12]; \ \ [1,-2, 4, -12]; $$ $$ [1, -2, -3, 12]; \ \ [1,-4, -6, 12]; \ \ [2, -3, -4, 12]; \ \ [3, -4, -6, 12]. $$ Each of these seven lists may be completed to a five term list with sum $0$ which corresponds to a factorial ratio. Therefore the families for $\frak a$ that we obtain from these lists are then all reducible, arising from one of these sporadic lists (with $D=1$) combined with a binomial coefficient. Thus we obtained five new reducible, two parameter, families: \begin{equation} \label{3.6} [3a, 12a, -a, -6a, -b, -(8a-b)] =a[3,12,-1,-6,-8] + [8a, -b, b-8a], \end{equation} \begin{equation} \label{3.7} [2a,12a,-a,-4a, -b, -(9a-b)] = a[2,12, -1,-4,-9] + [9a, -b, b-9a], \end{equation} \begin{equation} \label{3.8} [a,12a, -2a, -3a, -(8a-b)] =a [1,12, -2, -3, -8] + [8a,-b,b-8a], \end{equation} \begin{equation} \label{3.9} [2a,12a,-3a,-4a,-b,-(7a-b)] = a[2,12, -3,-4,-7] + [7a,-b,b-7a], \end{equation} \begin{equation} \label{3.10} [3a,12 a, -4a, -6a, -b, -(5a-b)] = a[3, 12, -4, -5, -6] + [5a,-b,b-5a]. \end{equation} \medskip Now suppose $\ell(\frak a_3)= 5$, so that by Lemma \ref{lem2.1}, $\frak a_3$ must be one of the following four lists: $$ [1,-2,-3,4,6]; \ \ [1,-2,-3,6,-12]; \ \ [2,3,-4,-6,12]; \ \ [1,-2, 4,6,-12]. $$ In all these cases we may suppose that $\frak a_1 = [1]$ and $\frak a_2 = [1,-2]$ because $\frak a$ must have height $2$. Then these four cases lead to the following reducible, two parameter, families: \begin{equation} \label{3.11} [2a, 3a, -a, - 4a, -6a, 2b, -b, 6a-b] = a [2, 3, 12, -1, -4, -6, -6 ] + [2b, -b, 6a, -12 a, 6a-b], \end{equation} \begin{equation} \label{3.12} [a,6a,-2a,-3a,-12a, 2b,-b,10a-b] =a[1, 6, 20, -2, -3, -10, -12] + [2b, -b, 10a, -20a, 10a-b], \end{equation} \begin{equation} \label{3.13} [4a, 6a, -2a, -3a, -12 a, 2b, -b, 7a-b] = a[4, 6, 14, -2, -3, -7,-12] + [7a , -14a, 2b, -b, 7a-b], \end{equation} \begin{equation} \label{3.14} [2a, 12a, -a, -4a, -6a, 2b, -b, -3a-b] = a[2,3,12,-1,-4,-6,-6] + [6a,-3a,2b,-b,-3a-b]. \end{equation} \medskip By Lemma \ref{lem2.1}, the last remaining cases are when $\ell(\frak a_3)=6$, and $\frak a_3$ is either $[1,-2,-3,6,9,-18]$, or $[1,-2,-3,4,6,-12]$. Since these lists have height $0$, we must have $\frak a_1= \frak a_2 =[1]$. Each of these possibilities for $\frak a_3$ can be completed to a $7$ term list with sum $0$, which forms a factorial ratio with $D=1$. Thus, we get two more reducible, two parameter, families: \begin{equation} \label{3.15} [2a, 3a, 18a, -a, -6a, -9a, -b, b-7a] =a[2, 3, 18, -1, -6, -7, -9] + [7a, -b, b-7a], \end{equation} \begin{equation} \label{3.16} [2a,3a, 12a, -a, -4a,-6a, -b, b-6a] =a[2, 3, 12, -1, -4, -6, -6] + [6a, -b, b-6a]. \end{equation} Thus Case I led to sixteen families of solutions, all of which are reducible. \subsection{Case II analysis} Here $\frak a_2 = [1,-3]$ or $[1,-2,-3, 6]$ has norm $1/9$, so that $\frak a_3$ has norm in the range $1/9$ to $5/36$. 
The possibilities for $\frak a_3$ are thus limited to the examples in Lemma \ref{lem2.1}, and indeed to just the cases $[a,b]$, $[a,-2a, b]$ and $[a,-2a, b,-2b]$. The length $4$ case is ruled out as its height is $0$, and it would be impossible to have $\frak a$ of height $2$. The case $\frak a_3 =[a,b]$ can only arise with $a$ and $b$ of opposite sign (else the norm will exceed $1/6$), and again it is impossible to have $\frak a$ of height $2$. We are left with $\frak a_3$ being of the form $[a,-2a,b]$, which gives the following five possibilities: $$ [1,-2,4], \ \ [1,-2, -3], \ \ [2, 3, -6], \ \ [1, -3, 6], \ \ [1, -2, 6]. $$ In order for $\frak a$ to have height $2$, we must have $\frak a_1 =[1]$, and thus each of these five possibilities gives rise to two families for $\frak a$, corresponding to the two choices, $[1,-3]$ and $[1,-2,-3,6]$, for $\frak a_2$. Going over the five possibilities for $\frak a_3$ in order, we find the following $10$ two parameter families: \begin{equation} \label{3.17} [2a, b, 6b, -a, -4a, -2b, -3b, -(2b-3a)], \end{equation} \begin{equation} \label{3.18} [2a, 3b, -a, -4a, -b, -(2b-3a)], \end{equation} \begin{equation} \label{3.19} [a, b, 6b, -2a, -3a, -2b, -3b, -(2b-4a)], \end{equation} \begin{equation} \label{3.20} [a, 3b, -2a, -3a, -b, -(2b-4a)], \end{equation} \begin{equation} \label{3.21} [6a, 2b, 3b, -2a, -3a, -b, -6b, 2b-a], \end{equation} \begin{equation} \label{3.22} [6a, b, -2a, -3a, -3b, 2b-a], \end{equation} \begin{equation} \label{3.23} [3a, b, 6b, -a, -6a, -2b, -3b, 4a-2b], \end{equation} \begin{equation} \label{3.24} [3a, 3b, -a, -6a, -b, 4a-2b], \end{equation} \begin{equation} \label{3.25} [2a, b, 6b, -a, -6a, -2b, -3b, 5a-2b], \end{equation} \begin{equation} \label{3.26} [2a, 3b, -a, -6a, -b, 5a-2b]. \end{equation} \subsection{Case III analysis} Here both $\frak a_2$ and $\frak a_3$ are either $[1,-4]$ or $[1,-2,4]$. Both cannot be $[1,-4]$, since then $\frak a$ cannot have height $2$. So there are two cases: both are $[1, -2, 4]$ and $\frak a_1 = [1,-2]$; or $\frak a_1= [1]$, $\frak a_2= [1,-4]$, and $\frak a_3= [1,-2,4]$. These lead to two further two parameter families: \begin{equation} \label{3.27} [2a, 4b, -a, -4a, -b, 3a-3b], \end{equation} \begin{equation} \label{3.28} [2a, 2b, 6(a+b), -a, -4a, -b, -4b, -3(a+b)]. \end{equation} \medskip To sum up, we have shown that (apart from finitely many exceptions) lists of height $2$ and norm at most $1/3+\delta$ must belong to one of the $28$ families catalogued above. The $16$ families of Section 3.1 are reducible, and every element in them with height $2$ corresponds to an integral factorial ratio. To complete the proof of Theorem \ref{thm1}, it remains to show that the $12$ families described in Sections 3.1 and 3.2 are irreducible (see Corollary \ref{cor1}), and that lists of height $2$ in these families correspond to integral factorial ratios (see Section 7). \section{Proof of Theorem \ref{thm2}} \noindent We begin by recalling that to any list $\frak a = [a_1,\ldots, a_n]$, we associate the function $\frak a(x) = \sum_{i=1}^{n} \psi(a_i x)$ (away from points $a_i x\in {\Bbb Z}$), which is odd and periodic with period $1$. If the list $\frak a$ has sum $0$ and height $D$, then it is an integral factorial ratio precisely when $\frak a(x)$ takes values in the set $\{ -D/2, -D/2 +1, \ldots, D/2\}$. In the sequel, we shall check this criterion for $\frak a(x)$ implicitly keeping $x$ away from points of discontinuity; right continuity will then ensure the result for all $x$. 
For brevity, put $u= s(\frak a)$ and $v= s(\frak b)$. The assumption that $v\frak a - u \frak b$ is an integral factorial ratio of height $D$ implies that for all real $x$ \begin{equation} \label{4.1} \frak{a}(vx) + \frak b(-ux) \in \{ -D/2, -D/2 +1, \ldots, D/2\}. \end{equation} We shall show that for all $x$ and $y$ one has \begin{equation} \label{4.2} \frak a(x) + \frak b(y) + \psi(-ux-vy) \in \{ -(D+1)/2, -(D-1)/2, \ldots, (D+1)/2\}. \end{equation} The theorem then follows upon applying the Landau criterion to compute the power of a prime dividing the numerator and denominator of the claimed integral factorial ratio. Replacing $x$ by $vx$, and $y$ by $-u(x+t)$, we see that \eqref{4.2} is equivalent to the assertion that \begin{equation} \label{4.3} \frak a (vx) + \frak b(-u(x+t)) + \psi( uv t) \in \{ -(D+1)/2, -(D-1)/2, \ldots, (D+1)/2\}. \end{equation} Using the monotonicity of $\frak b$, we shall reduce the above assertion to proving (for all real $x$ and integers $k$) \begin{equation} \label{4.4} \frak a(vx) + \frak b (- u(x+k/uv)) \in \{ - D/2, \ldots, D/2\}. \end{equation} Postponing the proof of this reduction, we now establish \eqref{4.4}. Since $u$ and $v$ are coprime, we may write $k/uv = m/u + n/v$ for suitable integers $m$ and $n$. Then, since $\frak a(vx)$ is periodic in $x$ with period $1/v$ and analogously for $\frak b(-ux)$, $$ \frak a(vx) + \frak b(-u(x+m/u+n/v)) = \frak a(vx) + \frak b(-u (x+n/v)) = \frak a(v(x+n/v)) + \frak b(-u(x+n/v)), $$ and so \eqref{4.4} follows from the assumption \eqref{4.1}. We now prove that \eqref{4.4} implies \eqref{4.3}. The left side of \eqref{4.3} takes values in the set $(D+1)/2 + {\Bbb Z}$, and changes sign when $(x,t)$ is replaced by $(-x,-t)$. Therefore it suffices to establish that the left side of \eqref{4.3} always takes values $\ge -(D+1)/2$, or that it always takes values $\le (D+1)/2$. Suppose that $uv >0$. Here we shall show from \eqref{4.4} that the left side of \eqref{4.3} always takes values $\le (D+1)/2$. In the case $uv <0$, the analogous argument shows that the left side of \eqref{4.3} always takes values $\ge -(D+1)/2$. Suppose $k/uv \le t < (k+1)/uv$. From the monotonicity of $\frak b$ (and note that the associated function in Definition 1.3 is increasing or decreasing depending on the sign of $v$) we see that $$ \frak b(-u(x+t)) + \psi(uvt) \le \frak b(-u(x+k/(uv))) + uv (t-k/uv) + \psi(uv t) = 1/2 + \frak b(-u(x+k/uv)). $$ Therefore, given \eqref{4.4} it follows that $$ \frak a(vx) + \frak b(-u(x+t)) + \psi(uv t) \le \frak a(vx) + \frak b(-u(x+k/uv)) + 1/2 \le (D+1)/2, $$ as needed. This completes our proof of Theorem \ref{thm2}. \section{Examples arising from Theorem \ref{thm2}} \noindent In this section we give examples of factorial ratios of height $2$ obtained using Theorem \ref{thm2}. The table gives a monotone list $\frak b$, a primitive list $\frak a$, and these lists satisfy the condition $(s(\frak a), s(\frak b)) =1$, and the table also displays the list $s(\frak b) \frak a - s(\frak a) \frak b$ which corresponds to an integral factorial ratio with height $1$. Thus each line of the table produces a two parameter family of integral factorial ratios with height $2$; for example, line 10 shows that $[6a,-2a,-3a,b,-6b,-(a-5b)]$ is a factorial ratio of height $2$ provided $a > 5b$. We have not included in this table six further examples of Theorem \ref{thm2}; namely, the examples corresponding to the families \eqref{3.17}, \eqref{3.18}, \eqref{3.21}, \eqref{3.22}, \eqref{3.25}, and \eqref{3.26}. 
\vfill \begin{center} \begin{tabular} {|c|| c| c| c|} \hline Number & List $\frak a$ & Monotone $\frak b$ & Height $1$ factorial ratio \\ \hline $1$ & $[2,-3,-4]$ & $[3, -1]$ & $[4,15,-5,-6,-8]$ \\ $2$ & $[10,-5,-6]$ & $[3,-1]$ & $[3,20,-1,-10,-12]$ \\ $3$ &$[6,-3,-4]$ &$[3, -1]$ & $[3,12,-1,-6,-8]$ \\ $4$ &$[10,-2,-5]$& $[3, -1]$ & $[3,20,-4,-9,-10]$ \\ $5$ & $[10,-4,-5]$ & $[3, -1]$ & $[1,20,-3,-8,-10]$ \\ $6$ & $[6,-1,-4]$ & $[3, -1]$ & $[1,12, -2, -3,-8]$ \\ $7$ & $[4,-1,-2]$ & $[3, -1]$ & $[1,12,-3,-4,-6]$ \\ $8$ & $[6,-3,-4]$ & $[4, -1]$ & $[4,18,-1,-9,-12]$ \\ $9$ & $[6,-2,-3]$ & $[5, -1]$ & $[1,24,-5,-8,-12]$ \\ $10$ & $[6,-2,-3]$ & $[6, -1]$ & $[1,30,-6,-10,-15]$ \\ $11$ & $[3,10,-1,-5,-6]$ & $[3, -1]$ & $[1,6,20, -2,-3,-10,-12]$ \\ $12$ & $[2, 15,-1,-5,-6]$ & $[3, -1]$ & $[4,5,30,-2,-10,-12,-15]$ \\ $13$ & $[2,9,-1,-3,-4]$ & $[3, -1]$ & $[3, 4, 18, -2,-6,-8,-9]$ \\ $14$ & $[1,6,-2,-3,-3]$ & $[3, -1]$ & $[2,3,12,-1,-4,-6,-6]$ \\ $15$ & $[1,10,-3,-4,-5]$ & $[3, -1]$ & $[2,3,20,-1,-6,-8,-10]$ \\ $16$ & $[2, 15, -3, -4, -5]$ & $[3,-1]$ & $[4, 5, 30, -6,-8,-10,-15]$ \\ $17$ & $[3,10,-2,-5,-9]$ & $[3,-1]$ & $[6, 9, 20, -3, -4, -10,-18]$\\ $18$ & $[2, 12, -1,-4,-6]$ & $[3,-1]$ & $[3, 4, 24, -2, -8, -9, -12]$ \\ $19$ & $[2,3, 12,-1,-4,-6,-9]$ & $[3,-1]$ & $[4,6,9,24,-2,-3,-8,-12,-18]$\\ $20$ & $[2,-3,-4]$ & $[1, 6, -2, -3]$ & $[4, 5, 30, -6, -8, -10, -15]$\\ $21$ & $[10,-5,-6]$ & $[1,6,-2,-3]$ & $[1,6,20,-2,-3,-10,-12]$ \\ $22$ & $[6,-3,-4]$ & $[1,6,-2,-3]$ & $[1,12, -2,-3,-8]$ \\ $23$ & $[10,-2,-5]$ & $[1,6,-2,-3]$ & $[6,9,20, -3,-4,-10,-18]$ \\ $24$ & $[10,-4,-5]$ & $[1,6,-2,-3]$ & $[2, 3, 20, -1, -6,-8,-10]$ \\ $25$ & $[6,-1,-4]$ & $[1,6, -2,-3]$ & $[3,12,-1,-6,-8]$ \\ $26$ & $[6,-2,-3]$ & $[1,10,-2, -5]$ & $[2, 5, 24,-1,-8,-10,-12]$ \\ $27$ &$[3, 10, -1, -5,-6]$ & $[1,6,-2,-3]$ & $[3, 20, -1, -10, -12]$ \\ $28$ & $[2, 15, -1, -5, -6]$ & $[1,6, -2,-3]$ & $[4, 15, -2,-5,-12]$ \\ $29$ & $[2,9,-1,-3,-4]$ & $[1,6,-2,-3$ & $[4,9,-2,-3,-8]$ \\ $30$ & $[1,6,-2,-3,-3]$ & $[1,6,-2,-3]$ & $[1,12,-3,-4,-6]$ \\ $31$ & $[1,10,-3,-4,-5]$ & $[1,6,-2,-3]$ & $[1,20,-3,-8,-10]$ \\ $32$ & $[2,15,-3,-4,-5]$ & $[1,6,-2,-3]$ & $[4,15,-5,-6,-8]$ \\ $33$ & $[3,10,-2,-5,-9]$ & $[1,6,-2,-3]$ & $[3,20,-4,-9,-10]$ \\ $34$ & $[2,12,-1,-4,-6]$ & $[1,6,-2,-3]$ & $[4,6,9,24,-2,-3,-8,-12,-18]$\\ $35$ & $[2,3,12,-1,-4,-6,-9]$ & $[1,6,-2,-3]$ & $[3,4,14,-2,-8,-9,-12]$\\ $36$ & $[4,-3]$ & $[1,4,-2]$ & $[2,12,-1,-4,-9]$ \\ $37$ & $[3,-2]$ & $[1,4,-2]$ & $[2,9,-1,-4,-6]$ \\ $38$ & $[1,4,-2,-2]$ & $[1,4,-2]$ & $[2,3,12,-1,-4,-6,-6]$ \\ $39$ & $[1,8,-3,-4]$ & $[1,4,-2]$ & $[3,4,24,-2,-8,-9,-12]$ \\ $40$ & $[2,3,8,-1,-4,-6]$ & $[1,4,-2]$ & $[4,6,9,24,-2,-3,-8,-12,-18]$ \\ $41$ & $[3,-2]$ & $[2,3,-1]$ & $[1,12,-2,-3,-8]$ \\ $42$ & $[3,-2]$ & $[1,6,-2]$ & $[2,15,-1,-6,-10]$ \\ $43$ & $[3,-2]$ & $[3,4,-2]$ & $[2,15,-3,-4,-10]$ \\ \hline \end{tabular} \end{center} \section{The structure of reducible lists with $D=2$} Suppose $\frak a$ is a primitive list corresponding to a factorial ratio with $D=2$. We wish to develop criteria to check whether the list $\frak a$ is irreducible. \begin{lemma} \label{lemr.1} Suppose that $p \ge 11$ is a prime which divides some, but not all, elements of $\frak a$, and suppose that the multiples of $p$ in $\frak a$ do not sum to zero. Then $\frak a$ cannot be decomposed as $\frak b + \frak c$ where both $\frak b$ and $\frak c$ are dilates of sporadic integral factorial ratios of height $1$. \end{lemma} \begin{proof} Suppose $\frak a$ can be decomposed as $\frak b +\frak c$. 
The primitive sporadic factorial ratios with $D=1$ have all elements divisible only by the primes $2$, $3$, $5$, $7$. Therefore if either $\frak b$ or $\frak c$ contains a multiple of $p$ then all elements of that list must be multiples of $p$. Since $\frak a$ is primitive, the elements of the other list must be coprime to $p$. Therefore the multiples of $p$ in $\frak a$ must sum to zero, which we assumed not to be the case. \end{proof} \begin{lemma} \label{lemr.2} Suppose that $p \ge 11$ is a prime which divides some, but not all, elements of $\frak a$, and suppose that the multiples of $p$ in $\frak a$ do not sum to zero. Suppose $\frak a$ decomposes as $\frak b + \frak c$ where $\frak c$ is a dilate of a sporadic factorial ratio with $D=1$, while $\frak b$ lies in one of the infinite families with $D=1$ (so either $\frak b$ is of the form $[a+b,-a,-b]$ or of the form $[2a,-a,2b,-b, -(a+b)]$). Then one of the following three cases holds: (i). The number of multiples of $p$ in $\frak a$ is either exactly $1$, or is even and at least $4$. (ii). There are exactly two multiples of $p$ in $\frak a$ and these are of the form $-ap$, $2ap$. (iii). There are exactly three elements of $\frak a$ that are not multiples of $p$, and these include a pair $-b$, $2b$ with the third non-multiple being $\equiv -b \pmod p$. \end{lemma} \begin{proof} Since $\frak c$ is a dilate of a sporadic factorial ratio with $D=1$, either $\frak c$ consists entirely of multiples of $p$, or entirely of elements coprime to $p$. In either case, since the sum of multiples of $p$ in $\frak a$ is non-zero, the list $\frak b$ must contain some multiples of $p$ and some elements coprime to $p$. If $\frak b$ is a binomial coefficient, then from the above remark, $\frak b$ must contain exactly one multiple of $p$. If $\frak c$ has no multiples of $p$, then $\frak a$ will be left with exactly $1$ multiple of $p$. If $\frak c$ consists entirely of multiples of $p$, then $\frak a$ either has $\ell(\frak c)-1$ or $\ell(\frak c) +1$ multiples of $p$, and this is an even number at least $4$. Thus we are in case (i). If $\frak b$ is of the form $[2a,-a,2b,-b, -(a+b)]$ then (again by our previous remark) either $\frak b$ contains exactly $1$ multiple of $p$, or has $2$ multiples of $p$. If $\frak b$ contains exactly $1$ multiple of $p$, then the argument of the preceding paragraph shows that we are in case (i). Suppose now that $\frak b$ contains two multiples of $p$, which must be a pair of the form $-ap$, $2ap$ for some integer $a$. If $\frak c$ has no multiples of $p$, then these are the only multiples of $p$ in $\frak a$, and we are in case (ii) of the lemma. Finally, if $\frak c$ consists entirely of multiples of $p$, then the three elements of $\frak b$ that are not multiples of $p$ must be left uncanceled in $\frak a$, and these include a pair of elements $-b$, $2b$. Thus we must be in case (iii) here. \end{proof} \begin{lemma} \label{lemr.3} Suppose that $p\ge 11$ is a prime, and that the number of multiples of $p$ in $\frak a$ is odd and at least $3$. Suppose that the sum of the multiples of $p$ in $\frak a$ is not zero. Suppose $\frak a$ decomposes as $\frak b + \frak c$ where both $\frak b$ and $\frak c$ belong to one of the infinite families with height $1$. Then one of the following cases occurs: (i). There are exactly three non-multiples of $p$ in $\frak a$, and when reduced $\pmod p$ these three elements are congruent to $x$, $x$, $-2x \pmod p$ for some non-zero $x\pmod p$. (ii). 
There are exactly five non-multiples of $p$ in $\frak a$, and these element are of the form $4x$, $-x$, $2y$, $-y$, $-z$ for integers $x$, $y$, $z$. There are three multiples of $p$ in $\frak a$, and these are either $2z-4x$, $-(z-2x)$, $-(x+y)$, or $2(z-x)$, $-(z-x)$, $-(2x+y)$. (iii). There are exactly five non-multiples of $p$ in $\frak a$, and these are elements of the form $x$, $2y$, $-y$, $2z$, $-z$. There are three multiples of $p$ in $\frak a$, and these are either $-(x/2+y)$, $-(x+2z)$, $(x/2+z)$ (and this only occurs for $x$ even), or $(x-y)$, $-2(2x+z)$, $(2x+z)$. (iv). There are no degeneracies in concatenating $\frak b$ and $\frak c$, and either $\frak a = [2a, -a, 2b, -b, 2c, -c, 2d, -d, -(a+b), -(c+d)]$, or $\frak a = [2a, -a, 2b, -b, -(a+b), (c+d), -c, -d]$. \end{lemma} \begin{proof} If either $\frak b$ or $\frak c$ has no multiples of $p$ and the other list is entirely composed of multiples of $p$, then the sum of multiples of $p$ in $\frak a$ would be zero. Thus, this case is forbidden. Further, if $\frak b$ has $u$ multiples of $p$ and $\frak c$ has $v$ multiples of $p$, then the number of multiples of $p$ in $\frak a$ is at most $u+v$ and has the same parity as $u+v$. Thus we may restrict attention to the cases when $u+v$ is odd and at least three. We will make repeated use of these observations below. Indeed, these observations immediately rule out the possibility that both $\frak b$ and $\frak c$ are binomial coefficients (since any binomial coefficient would have $0$, $1$ or $3$ multiples of $p$). We are left with two cases: by symmetry we assume that $\frak c$ is of the form $[2a,-a,2b,-b,-(a+b)]$, and $\frak b$ is either also of this form, or $\frak b$ is a binomial coefficient. \smallskip \noindent {\bf The case $\frak b$ is a binomial coefficient and $\frak c$ is of the form $[2a,-a, 2b, -b, -(a+b)]$.} Then $\frak b$ has $0$, $1$ or $3$ multiples of $p$, and $\frak c$ has $0$, $1$, $2$, or $5$ multiples of $p$. Using our observations above, we are reduced to two possibilities: $\frak b$ has $1$ or $3$ multiples of $p$, and $\frak c$ has $2$ multiples of $p$. Suppose $\frak b$ has $3$ multiples of $p$ and $\frak c$ has $2$ multiples of $p$. Then there are three non-multiples of $p$ in $\frak c$, which are left uncanceled in $\frak a$. These elements in $\frak c$ must sum to zero $\pmod p$, and include a pair $-a$, $2a$, so that we are in case (i). Now suppose $\frak b$ has exactly $1$ multiple of $p$ and $\frak c$ has $2$ multiples of $p$. Since $\frak a$ has at least $3$ multiples of $p$, there is no cancelation among the multiples of $p$ in $\frak b$ and $\frak c$. If there is no cancelation among the non-multiples of $p$ in $\frak b$ and $\frak c$ as well, then we must be in case (iv). Suppose then that there is some cancelation among the non-multiples of $p$ in $\frak b$ and $\frak c$. Now the non-multiples of $p$ in $\frak b$ look like $-y$, $y \pmod p$ for some $y\not\equiv 0\pmod p$, and the non-multiples of $p$ in $\frak c$ look like $2x$, $-x$, $-x \pmod p$ for some $x\neq 0 \pmod p$. It follows that there must be exactly one non-multiple in $\frak b$ that cancels with a non-multiple in $\frak c$. After canceling them, we must be left with three non-multiples that look like $2x$, $-x$, $-x \pmod p$. That is, we are in case (i). \smallskip \noindent{\bf Both $\frak b$ and $\frak c$ are of the form $[2a,-a,2b,-b,-(a+b)]$.} Suppose, by symmetry, that $\frak c$ has at least as many multiplies of $p$ as $\frak b$. 
Culling the possibilities for the number of multiples of $p$ using our earlier observations, we are left with two choices: $\frak b$ has $1$ multiple of $p$ and $\frak c$ has $2$ multiples of $p$, or $\frak b$ has $2$ multiples of $p$ and $\frak c$ has $5$ multiples of $p$. In the second case, there are $3$ non-multiples of $p$ in $\frak a$, and we are in case (i). We are left with the case that $\frak b$ has $1$ multiple of $p$ and $\frak c$ has $2$ multiples of $p$, and we may assume that there is no cancelation among these multiples of $p$. If there is no cancelation among the non-multiples of $p$ in $\frak b$ and $\frak c$ then we are in case (iv). So there must be some cancelation among the non-multiples of $p$ in $\frak b$ (which we write as $2a$, $-a$, $2b$, $-b$ with $a+b\equiv 0 \pmod p$) and $\frak c$ (which we write as $2c$, $-c$, $-d$ with $c\equiv d \pmod p$). If $c=-a$ or $-b$ then we are in case (i). If $2c=a$ or $b$, or $c=2a$ or $2b$ then a quick check shows that we are in case (ii). If $d=-a$ or $d=-b$, or $d=-2a$ or $d=-2b$ then we are in case (iii). Having exhausted all possibilities, the proof of the lemma is complete. \end{proof} \begin{corollary} \label{cor1} The twelve families given in \eqref{3.17} to \eqref{3.28}, together with the $43$ families listed in the table in Section 5 are all irreducible. Askey's family, $$ [3(m+n), 3n, 2m, 2n, -(2m+3n), -(m+2n), -(m+n), -m, -n, -n], $$ is also irreducible. \end{corollary} \begin{proof} Apart from Askey's family and the example \eqref{3.28}, the remaining $54$ families look like $a \frak a + b\frak b + (-a s(\frak a) - bs(\frak b)) [1]$ for suitable primitive lists $\frak a$, $\frak b$, where $a$ and $b$ are coprime, and chosen so that the resulting list has height $2$. In all these examples, $s(\frak a)$ and $s(\frak b)$ are both non-zero, and either $\ell(\frak a)$ or $\ell(\frak b)$ is an odd number at least $3$. If $\frak a$ has an odd number of elements, then choose $a$ with large size and divisible by $p$ for some prime $p\ge 11$, and similarly if $\frak b$ has an odd number of elements, choose $b$ with large size and divisible by $p$. Lemma \ref{lemr.1} now guarantees that such a list cannot be decomposed into two dilates of sporadic factorial ratios of height $1$. By straightforward (if lengthy) inspection, we can eliminate the various cases that reducible lists must belong to (given in Lemmas \ref{lemr.2} and \ref{lemr.3}), and conclude that all these lists are irreducible. The argument for list \eqref{3.28} is similar, choosing $a$ to be a large multiple of a prime $p\ge 11$, and checking the conclusions of Lemmas \ref{lemr.1}, \ref{lemr.2}, and \ref{lemr.3}. In Askey's family, we choose $m$ and $n$ to be large coprime positive numbers with $m+n$ being an odd multiple of a prime $p\ge 11$. Lemmas \ref{lemr.1} and \ref{lemr.2} still apply and shows that such a list is not reducible into two sporadic factorial ratios, or into a sporadic factorial ratio and one from an infinite family. Since the lists in Askey's family have length $10$, the only remaining possibility is that the list looks like $[2a,-a, 2b, -b, 2c, -c, 2d, -d, -(a+b), -(c+d)]$. But such lists (with height $2$) have the property that the largest element in them is even, whereas our example from Askey's family has largest element $3(m+n)$ which is odd. 
\end{proof} \section{Completing the proof of Theorem \ref{thm1}} \noindent We have already shown that, apart from finitely many exceptions, all primitive lists with height $2$ and norm at most $1/3+ \delta$ must lie in one of the $28$ families catalogued in Section 3. The $16$ families given in Section 3.1 are reducible, and thus the lists of height $2$ in these families correspond automatically to integral factorial ratios. The $12$ families given in Sections 3.2 and 3.3 are all known by Corollary \ref{cor1} to be irreducible. In Section 5 we noted that the height $2$ lists from the families \eqref{3.17}, \eqref{3.18}, \eqref{3.21}, \eqref{3.22}, \eqref{3.25}, and \eqref{3.26} are all integral factorial ratios thanks to Theorem \ref{thm2}. Thus all that remains is to show that height $2$ lists in the six families \eqref{3.19}, \eqref{3.20}, \eqref{3.23}, \eqref{3.24}, \eqref{3.27}, and \eqref{3.28} are also integral factorial ratios. Given a particular family it is straightforward to check whether the elements in it correspond to integral factorial ratios. We illustrate with one of these six remaining families, the others following similarly. To show that lists of height $2$ from \eqref{3.28} give rise to integral factorial ratios, it is enough to show that $$ \lfloor 2x \rfloor - \lfloor x \rfloor -\lfloor 4x \rfloor + \lfloor 2y \rfloor - \lfloor y \rfloor -\lfloor 4y \rfloor + \lfloor 6(x+y) \rfloor - \lfloor 3(x+y) \rfloor \ge 0 $$ for all $x$ and $y$. Since the left side is periodic in $x$ and $y$ with period $1$, it is enough to verify the inequality for $x$ and $y$ in $[0,1)$. For fixed $x$, the quantity $\lfloor 6(x+y)\rfloor -\lfloor 3(x+y)\rfloor$ is increasing in $y$, while the quantity $\lfloor 2y \rfloor -\lfloor y\rfloor -\lfloor 4y\rfloor$ is constant on the intervals $[0,1/4)$, $[1/4,3/4)$, and $[3/4,1)$. So it is enough to verify the inequality at $y=0$, $1/4$ and $3/4$. Arguing similarly, it is enough to check the inequality when $x=0$, $1/4$, $3/4$. After a small calculation to check these nine cases, the inequality follows.
1,477,468,750,272
arxiv
\section{Introduction}\label{sec1} In this paper, we shall focus on local times for L\'evy processes. It is known that there are several definitions of local times for different stochastic processes, see Geman and Horowitz \cite{Gem}. Thus, we define a local time $L=\{L^x_t:x\in\bR,t\geq0\}$ for a L\'evy process $X$ by the occupation density which means random variables $L=\{L^x_t:x\in\bR,t\geq0\}$ satisfying for each non-negative Borel measurable function $f$ and $t\geq0$, \begin{equation*} \int^t_0 f(X_s) ds = \int_\bR f(a) L^a_t da \quad \text{a.s.}, \end{equation*} and is chosen as \begin{equation*} L^x_t := \limsup_{\ep \downarrow 0}\frac{1}{2\ep}\int^t_0 1_{\{|X_s-x|<\ep\}}ds. \end{equation*} A local time is an important amount to study the reflection problem (see, e.g. Chung and Williams \cite{Chu}) and the Ray--Knight theorem (see, e.g. Eisenbaum et al. \cite{Eis}). In the case of Brownian motions, the Tanaka formula is an important expression to understand these problems. Thus, in the case of L\'evy processes we expect that the Tanaka formula is a useful tool to consider those problems. For a real-valued Brownian motion $B=(B)_{t\geq0}$, it is well known that the Tanaka formula holds: \begin{equation*} |B_t-x| - |B_0-x| = \int^t_0 \sgn(B_s-x) dB_s + L^x_t, \end{equation*} where $L^x_t$ denotes the local time of the Brownian motion at level $x$. It represents that the local time $L^x$ can be understood as a bounded variation process in the Doob--Meyer decomposition on the positive submartingale $|B-x|$. Our goal in this paper is to construct the Tanaka formula from the viewpoint of the Doob--Meyer decomposition. The Tanaka formula has already studied for symmetric stable processes with index $\al\in(1,2)$ by Yamada \cite{Yam}, for symmetric L\'evy processes by Salminen and Yor \cite{Sal}. In this paper, we are interested in asymmetric L\'evy processes, while the formula has been obtained for asymmetric stable processes in \cite{Tsu}. We shall make the Tanaka formula for asymmetric L\'evy processes based upon the potential approach as stated in \cite{Sal}. Moreover, it will clearly extend the original Tanaka formula for Brownian motions to our formula for asymmetric L\'evy processes. In \cite{Tsu}, we have already obtained the Tanaka formula for asymmetric stable processes with index $\al \in (1,2)$ via It\^o's stochastic calculus. By using the Fourier transform, we can obtain the fundamental solution $F$ of the infinitesimal generator $\cL$ for asymmetric stable processes: \[ \cL F(x) = \de_0(x) \] where $\de_0$ is the Dirac delta function, in the sense of Schwartz distribution. We can construct the Tanaka formula for an asymmetric stable process $S =(S_t)_{t\geq0}$ with index $\al \in (1,2)$ by using It\^o's stochastic calculus and the scaling property of stable processes: \[ F(S_t-x)-F(S_0-x)= \tilde{N}^x_t +L^x_t \] where the process $(\tilde{N}^x_t)_{t\geq0}$ given by \[ \tilde{N}^x_t :=\int^t_0\int_{\bR\setminus\{0\}}\{F(S_{s-}-x+h)-F(S_{s-}-x)\} \tilde{N}(ds,dh) \] is a square integrable martingale and $L^x_t$ is the local time at level $x$. Here, $\tilde{N}(ds,dh)$ is the compensated Poisson random measure. But it is not clear whether a similar representation can be obtained for general L\'evy processes, or not, because it is very difficult to find the fundamental solution of the infinitesimal generator for L\'evy processes. 
Salminen and Yor \cite{Sal} used the potential theoretic approach and constructed the Tanaka formula for a symmetric L\'evy process $X=(X_t)_{t\geq0}$, if the local time exists, by using the continuous resolvent density $r_q$: \[ h(X_t-x)-h(x)= \tilde{N}^x_t +L^x_t \] where $h(x):=\lim_{q\downarrow0}(r_q(0)-r_q(x))$ which is called a renormalized zero resolvent, $\tilde{N}^x_t :=-\lim_{q\downarrow0}M^{q,x}_t$ is a martingale and $L^x_t$ is the local time at level $x$. But the expression of the martingale part $\tilde{N}^x_t$ was not given. In \cite{Yan1} and \cite{Yan2}, Yano obtained an invariant excessive function $h$ with respect to the killed process: \[ \bE^0_x[h(X_t)]=h(x) \] where $\bE^0_x$ is the expectation with respect to the law of a L\'evy process $X$ starting at $x$ killed upon hitting zero, which associates with the Tanaka formula at level zero because the local time for such a process at level zero becomes zero. In the symmetric case Yano \cite{Yan1} assumed a necessary and sufficient condition for the existence of local times, and Salminen and Yor \cite{Sal} also assumed the same condition, but in the asymmetric case Yano \cite{Yan2} needed sufficient conditions for the existence of the function and its expression. Our result also gives the existence and its expression in the asymmetric case under weaker conditions than the ones in \cite{Yan2}. Our approach is different from Pant\'i \cite{Pan}. In Section \ref{sec2}, we shall give the preliminaries about resolvent operators of L\'evy processes and a connection between the local time and the resolvent density. The convergence and its expression of the renormalized zero resolvent are mentioned in Section \ref{sec3}. In Section \ref{sec4}, the Doob--Meyer decomposition can be constructed in the case of asymmetric L\'evy processes. And then, we obtain the Tanaka formula for asymmetric L\'evy processes and the invariant excessive function with respect to the killed process. In Section \ref{sec5}, we give several examples that satisfy the conditions introduced in Section \ref{sec4}. \section{Preliminaries}\label{sec2} Let $\cS(\bR)$ be the Schwartz space of rapidly decreasing functions on $\bR$. We denote the law of processes starting at $x$ and the corresponding expectation by $\bP_x$ and $\bE_x$ respectively. Consider a L\'evy process $X=(X_t)_{t\geq0}$ on $\bR$ with the L\'evy--Khintchine representation given by \begin{equation*} \bE_0[e^{iuX_t}]=e^{t\eta(u)}, \end{equation*} where the L\'evy symbol $\eta$ of $X$ can be represented as \begin{equation*} \eta(u)=ibu-\frac{1}{2}au^2+\int_{\bR \setminus \{0\}}\left( e^{iuy}-1-iuy1_{|y|\leq1} \right) \nu(dy) \end{equation*} for constants $b \in \bR$ and $a \geq 0$ and a L\'evy measure $\nu$ on $\bR \setminus \{0\}$ satisfying $\int_{\bR\setminus\{0\}}(|y|^2 \wedge 1)\nu(dy) < \infty$. We note that the L\'evy symbol $\eta$ is continuous. Let $\Re\eta$ and $\Im\eta$ be the real and imaginary parts of $\eta$ respectively. Remark that $\Re\eta\leq0$, $\Re\eta$ is even and $\Im\eta$ is odd. Let $T_0$ be the first hitting time to $0$ of $X$: \[ T_0 := \inf\{t>0 : X_t=0\}. \] We say that $0$ is regular for itself if $\bP_0(T_0=0)=1$, and irregular for itself otherwise. From the Blumenthal zero-one law, $0$ is irregular if $\bP_0(T_0=0)=0$. 
We introduce the following conditions: \begin{description} \item[\textbf{(A1)}] The L\'evy symbol $\eta$ satisfies that \[ \int_\bR \Re \left(\frac{1}{q-\eta(u)}\right)du<\infty, \quad \text{for all $q>0$},\] \item[\textbf{(A2)}] $0$ is regular for itself. \end{description} Denote the resolvent operator of the process $X$ by \begin{equation*} R_qf(x):=\bE_x\left[\int^\infty_0 e^{-qt}f(X_t)dt\right], \quad q>0, x \in \bR \end{equation*} for all bounded Borel measurable function $f$. Denote the Fourier transform of $f \in \cS(\bR)$ by \begin{equation*} \cF[f](u) := \int_\bR e^{-iux} f(x)dx, \quad u \in \bR, \end{equation*} and the inverse Fourier transform by \begin{equation*} \cF^{-1}[f](x) := \frac{1}{2\pi} \int_\bR e^{iux} f(u)du, \quad x \in \bR. \end{equation*} Then, the resolvent operator is also represented as follows. \begin{proposition}[{\cite[Proposition I.9]{Bert}}]\label{prop1} For any $f \in \cS(\bR)$ and $x \in \bR$, \begin{equation*} R_qf(x)=\cF^{-1}\left[\frac{1}{q-\eta(u)}\cF[f](u)\right](x), \quad q>0. \end{equation*} \end{proposition} Denote the resolvent kernel by $R_q(x,dy)$ for all $x \in\bR$ such that \begin{equation*} R_qf(x)=\int_\bR f(y)R_q(x,dy) \end{equation*} for all bounded Borel measurable function $f$. It is known that the condition \textbf{(A1)} is equivalent to the existence of its density. See \cite{Bert, Bre, Kes}. \begin{remark} \rm{ In \cite[Theorem II.16]{Bert}, the condition \textbf{(A1)} holds if and only if the resolvent kernel $R_q(0,dy)$ is absolutely continuous with respect to the Lebesgue measure and has a bounded density $r_q$. } \end{remark} It is known that the condition \textbf{(A2)} is equivalent to the continuity of its density. See \cite{Bert, Blu, Bre, Get, Kes}. \begin{lemma}[{\cite[Theorem II.19]{Bert}}]\label{lem1} Suppose that the condition \textbf{(A1)} holds. Then, the followings hold for all $q>0$: \begin{enumerate}[$(i)$] \item The condition \textbf{(A2)} holds if and only if there exist a bounded continuous resolvent density $r_q$ such that \[ R_qf(x)=\int_\bR f(y)r_q(y-x)dy,\] for all bounded Borel measurable function $f$ and that \[ \bE_x[ e^{-qT_0}]= \frac{r_q(-x)}{r_q(0)}, \quad x\in\bR. \] \item If $r_q$ is continuous, then \[ r_q(0)=\frac{1}{\pi}\int^\infty_0\Re\left(\frac{1}{q-\eta(u)}\right)du, \] and for all $x \in \bR$ \[ 2r_q(0)-\{r_q(x)+r_q(-x)\}=\frac{2}{\pi}\int^\infty_0 \Re\left(\frac{1-\cos (ux)}{q-\eta(u)}\right)du. \] \end{enumerate} \end{lemma} We introduce the following conditions: \begin{description} \item[\textbf{(A3)}] The process $X$ is the type C, i.e., \[ \text{either} \quad a > 0 \quad \text{or} \quad \int_{|y|\leq1}|y|\nu(dy)=\infty, \] \item[\textbf{(A4)}] The process $X$ is not a compound Poisson process. \end{description} The following was proved by Kesten \cite{Kes}, and another proof was given by Bretagnolle \cite{Bre}. \begin{lemma}[\cite{Kes} and \cite{Bre}]\label{lem2} The conditions \textbf{(A1)} and \textbf{(A3)} hold if and only if the conditions \textbf{(A2)} and \textbf{(A4)}. Furthermore, under the condition \textbf{(A1)}, the condition \textbf{(A2)} holds if and only if the condition \textbf{(A3)} holds. \end{lemma} In order to construct the Tanaka formula via the techniques in the potential theory, we use a connection between the local time and the resolvent density. \begin{lemma}[{\cite[Lemma V.3]{Bert}}]\label{lem3} Suppose that the conditions \textbf{(A1)} and \textbf{(A2)} hold. For any $x \in \bR$, denote by $dL^x_t$ the Stieltjes measure of the increasing function $L^x_\cdot$. 
Then, it holds that \begin{equation*} \bE_y\left[\int^\infty_0 e^{-qt} dL^x_t\right]=r_q(x-y), \quad q>0, y \in \bR. \end{equation*} \end{lemma} \begin{remark} \rm{ In \cite[Theorem V.1]{Bert}, the condition \textbf{(A1)} holds if and only if the occupation measure $\mu_t$ satisfying for each non-negative Borel measurable function $f$ and $t \geq 0$, \begin{equation*} \int^t_0 f(X_s) ds = \int_\bR f(x) \mu_t(dx), \end{equation*} has the density in $L^2(dx\otimes d\bP_0)$ as the Radon--Nikodym derivative. Therefore, if the condition \textbf{(A1)} holds, the local time for the process $X$ exists. If the condition \textbf{(A1)} holds, then under the condition \textbf{(A2)} the local time $L^x_t$ is continuous almost surely with respect to $t$. In the symmetric case, if the condition \textbf{(A1)} holds, then the condition \textbf{(A2)} holds. } \end{remark} \begin{remark} \rm{ By Blumenthal and Getoor \cite{Blu}, it can be considered as the potential theoretic definition of the local time, i.e. the local time can be defined as a positive additive functional $L_t^x$ such that \begin{equation*} \bE_0\left[\int^\infty_0 e^{-qt} dL^x_t\right]=r_q(x). \end{equation*} } \end{remark} \section{Renormalized zero resolvent}\label{sec3} Using the Fourier transform for $L^2(\bR)$-functions, the resolvent density can be represented as follows: \begin{proposition}\label{prop2} Suppose that the conditions \textbf{(A1)} and \textbf{(A2)} hold. The bounded continuous resolvent density can be expressed as: \begin{equation*} r_q(x)=\cF^{-1}\left[ \frac{1}{q-\eta(u)} \right](-x) \end{equation*} for all $q>0$ and $x \in \bR$. \end{proposition} \begin{proof} Since $\Re(q-\eta(u)) \geq q$, we have \begin{align*} \left|\frac{1}{q-\eta(u)}\right|^2&\leq \frac{\Re(q-\eta(u))}{q|q-\eta(u)|^2} =\frac{1}{q}\Re\left(\frac{1}{q-\eta(u)}\right). \end{align*} Thus, by the condition \textbf{(A1)} we have $1\slash (q-\eta(u)) \in L^2(\bR)$. By Proposition \ref{prop1} and Parseval's theorem, we have for all $\phi \in \cS(\bR)$, \begin{align*} R_q\phi(x) &= \cF^{-1}\left[ \frac{1}{q-\eta(u)}\cF[\phi](u)\right](x) \\&=\frac{1}{2\pi}\int_\bR \frac{e^{iux}}{q-\eta(u)}\cF[\phi](u)du \\&=\frac{1}{2\pi}\int_\bR\cF\left[ \frac{e^{iux}}{q-\eta(u)} \right](y)\phi(y)dy \\&=\int_\bR \cF^{-1}\left[ \frac{1}{q-\eta(u)}\right](x-y) \phi(y)dy. \end{align*} From the definition of the resolvent operator $R_q$, we then have for all $\phi \in \cS(\bR)$, \begin{align*} \int_\bR \left(r_q(y)-\cF^{-1}\left[ \frac{1}{q-\eta(u)} \right](-y)\right)\phi(y)dy=0. \end{align*} Since $r_q$ is continuous and integrable, by Lemma \ref{lem1}(i), we have \begin{equation*} r_q(x)=\cF^{-1}\left[ \frac{1}{q-\eta(u)} \right](-x) \end{equation*} for all $q>0$ and $x \in \bR$. \end{proof} We introduce the following condition: \begin{description} \item[\textbf{(A)}] The L\'evy symbol $\eta$ satisfies that \[ \frac{1}{q-\eta(u)} \in L^1(\bR), \quad \text{for all $q >0$}. \] \end{description} \begin{corollary}\label{cor1} Suppose that the condition \textbf{(A)} holds. The bounded continuous resolvent density $r_q$ can be expressed as: \begin{equation*} r_q(x)=\frac{1}{\pi}\int^\infty_0 \Re\left( \frac{e^{-iux}}{q-\eta(u)} \right) du \end{equation*} for all $q>0$ and $x \in \bR$. \end{corollary} From Lemma \ref{lem1}(i), We have the following: \begin{corollary}\label{cor2} If the condition \textbf{(A)} holds, then the conditions \textbf{(A1)} and \textbf{(A2)} hold. 
\end{corollary} \begin{remark} \rm{ From Lemma \ref{lem2}, we know that if the condition \textbf{(A)} holds, then the conditions \textbf{(A1)}, \textbf{(A2)}, \textbf{(A3)} and \textbf{(A4)} hold. } \end{remark} \begin{remark} \rm{ An asymmetric Cauchy process ($\al =1, \be\neq0$) does not satisfy the condition \textbf{(A)} but satisfy the conditions \textbf{(A1)} and \textbf{(A2)}. } \end{remark} Now, we set \begin{equation*} h_q(x):=r_q(0)-r_q(-x), \quad q>0, x \in \bR. \end{equation*} From Lemma \ref{lem1}(i), since $0\leq r_q(y)\leq r_q(0)$ for all $y \in \bR$, then we have $h_q \geq 0$. In \cite{Yan2}, the limit $h := \lim_{q \downarrow 0} h_q$ is called the renormalized zero resolvent if the limit exists, which is known as a harmonic function for the killed process under some conditions. But its convergence of $h_q$ is not clear for the asymmetric case, and Yano \cite{Yan2} needed the following conditions: \begin{description} \item[\textbf{(L1)}] The L\'evy symbol $\eta$ satisfies that \[ \int^\infty_0 \frac{1}{q-\Re\eta(u)}du < \infty, \quad \text{for all $q >0$}, \] \item[\textbf{(L2)}] The process $X$ is the type C, that is the same condition as \textbf{(A3)}, \item[\textbf{(L3)}] The real and imaginary parts of the L\'evy symbol $\eta$ have measurable derivatives on $(0,\infty)$ which satisfy \[ \int^\infty_0 (u^2\wedge 1)\frac{|\Re\eta(u)'|+|\Im\eta(u)'|}{\Re\eta(u)^2+\Im\eta(u)^2}du < \infty. \] \end{description} However, we suppose the condition \textbf{(A)}, which is weaker than the condition \textbf{(L1)}. The condition \textbf{(L2)} holds under the condition \textbf{(A)}. Moreover, we shall introduce the condition \textbf{(B)} which is weaker than the condition \textbf{(L3)}: \begin{description} \item[\textbf{(B)}] The L\'evy symbol $\eta$ satisfies that \[ \int^1_0\left|\Im\left(\frac{u}{\eta(u)}\right)\right| du <\infty. \] \end{description} \begin{theorem}\label{thm1} Suppose that the condition \textbf{(A)} and \textbf{(B)} hold. For all $x\in\bR$, \begin{align*} \lim_{q\downarrow 0} h_q(x) =\frac{1}{\pi}\int^\infty_0 \Re\left(\frac{e^{iux}-1}{\eta(u)}\right)du =:h(x). \end{align*} \end{theorem} To show Theorem \ref{thm1} and establish the Tanaka formula, we need the following lemma. \begin{lemma}\label{lem4} Suppose that the condition \textbf{(A)} holds. Then, the followings hold: \begin{flalign*} (i)\quad&|\eta(u)| \to \infty \quad \text{as $|u| \to \infty$}.& \\(ii)\quad&\int^\infty_c \left| \frac{1}{\eta(u)}\right| du < \infty \quad \text{for all $c>0$}.& \\(iii)\quad&\int^c_0 \left| \frac{u^2}{\eta(u)}\right|du < \infty \quad \text{for all $c>0$}.& \\(iv)\quad&\lim_{q\downarrow 0}\int_\bR\left|\frac{q}{q-\eta(u)}\right|du = 0.& \end{flalign*} \end{lemma} \begin{proof} (i) Since $r_1 \in L^1(\bR)$, $\cF[r_1](u)=1\slash(1-\eta(-u))$ and \begin{equation*} \left| \frac{1}{1-\eta(-u)} \right| \geq \frac{1}{1+|\eta(-u)|}, \end{equation*} then, by the Riemann--Lebesgue theorem, we have $|\eta(u)| \to \infty$ as $|u| \to \infty$. (ii) By Corollary \ref{cor2} and Lemma \ref{lem2}, the condition \textbf{(A3)} holds. We then know $\Re\eta(u) \neq 0$ for $u \neq 0$. By the condition \textbf{(A)}, we have \begin{equation*} \int_\bR \left|\frac{1}{1-\eta(u)}\right|du < \infty. \end{equation*} By the assertion (i), we know $|\eta(u)\slash(1-\eta(u))| \to 1$ as $|u| \to \infty$. Hence, the required result follows. 
(iii) Since we know $1-\cos(x) \geq x^2 \slash 4$ for $|x| \leq1$, by the condition \textbf{(A3)}, we have for all $0 < u \leq 1$ \begin{align*} \left| \frac{\eta(u)}{u^2}\right| &\geq -\frac{\Re\eta(u)}{u^2}& \\ &\geq \frac{a}{2}+\int_{|y|\leq |u|^{-1}}\frac{1-\cos(uy)}{(uy)^2}y^2\nu(dy)& \\ &\geq \frac{a}{2}+\frac{1}{4}\int_{|y|\leq |u|^{-1}}y^2\nu(dy)& \\ &\geq \frac{a}{2}+\frac{1}{4}\int_{|y|\leq1}y^2\nu(dy)>0.& \end{align*} Hence, the required result follows from the dominated convergence theorem. (iv) For each $q<1$, we have $|q\slash(q-\eta(u))| \leq 1 \wedge |1\slash\eta(u)|$. Thus, by the assertion (ii) and the dominated convergence theorem, we have \begin{align*} \lim_{q\downarrow 0}\int_\bR\left|\frac{q}{q-\eta(u)}\right|du &=\int_\bR \lim_{q\downarrow 0} \left|\frac{q}{q-\eta(u)}\right| du \\&= 0. \qedhere \end{align*} \end{proof} Now, we shall prove Theorem \ref{thm1}. \begin{proof}[Proof of Theorem \ref{thm1}] By Corollary \ref{cor1}, we have for each $x \in \bR$, \begin{align*} h_q(x)&=\frac{1}{\pi}\int^\infty_0 \Re\left(\frac{1-e^{iux}}{q-\eta(u)}\right)du \\&=\frac{1}{\pi}\int^\infty_0 \Re\left(\frac{1-\cos(ux)}{q-\eta(u)}\right)du +\frac{1}{\pi}\int^\infty_0 \Im\left(\frac{\sin(ux)}{q-\eta(u)}\right)du. \end{align*} Since we have for all $u\in\bR$, \[ \left|\Re\left(\frac{1-\cos(u)}{q-\eta(u)}\right)\right| \leq \frac{u^2 \wedge 1}{|\eta(u)|}, \] by Lemma \ref{lem4}(ii), (iii) and using the dominated convergence theorem, we have \begin{equation*} \int^\infty_0 \Re\left(\frac{1-\cos(u)}{q-\eta(u)}\right)du \to \int^\infty_0 \Re\left(\frac{\cos(u)-1}{\eta(u)}\right)du, \end{equation*} as $q \downarrow 0$. Since we have \begin{align*} \left|\Im\left(\frac{\sin(u)}{q-\eta(u)}\right)\right| \leq \left|\Im\left(\frac{u\wedge1}{\eta(u)}\right)\right| \leq \left|\Im\left(\frac{u}{\eta(u)}\right)\right| \wedge \left|\frac{1}{\eta(u)}\right|, \end{align*} by the condition \textbf{(B)}, Lemma \ref{lem4}(ii) and using the dominated convergence theorem, we have \begin{equation*} \int^\infty_0 \Im\left(\frac{\sin(ux)}{q-\eta(u)}\right)du \to -\int^\infty_0 \Im\left(\frac{\sin(ux)}{\eta(u)}\right)du, \end{equation*} as $q \downarrow 0$. \end{proof} \section{Tanaka formula}\label{sec4} Using Lemma \ref{lem3}, we can construct the Doob--Meyer decomposition as stated in \cite[Proposition 1]{Sal}. \begin{proposition}\label{prop3} Suppose that the conditions \textbf{(A1)} and \textbf{(A2)} hold. For each $q>0$, $t\geqslant0$ and $x \in \bR$, it holds that \begin{equation*} r_q(-X_t+x)=r_q(-X_0+x)+M^{q,x}_t+q\int^t_0r_q(-X_s+x)ds-L^x_t, \end{equation*} where $M^{q,x}_t$ is a martingale with respect to the natural filtration $\{\cG_t\}_{t\geq0}$ of $X$. \end{proposition} \begin{proof} By Lemma \ref{lem3} and the Markov property, we have \begin{align}\label{prop3-1} \bE_{X_0}\left[ \int^\infty_0 e^{-qu}dL^x_u | \cG_s\right] &=\int^s_0e^{-qu}dL^x_u + \bE_{X_s}\left[ \int^\infty_0 e^{-q(s+u)}dL^x_u\right] \notag \\&=\int^s_0e^{-qu}dL^x_u + e^{-qs}r_q(-X_s+x). 
\end{align} Using the integration by parts, and by \eqref{prop3-1}, we obtain \begin{align}\label{prop3-2} &q\int^t_0e^{qs}\int^s_0 e^{-qu}dL^x_u ds \notag \\&=e^{qt}\int^t_0 e^{-qu}dL^x_u - L^x_t \notag \\&=e^{qt}\left(\bE_{X_0}\left[ \int^\infty_0 e^{-qu}dL^x_u | \cG_t\right] -e^{-qt}r_q(-X_t+x) \right) - L^x_t \notag \\&=e^{qt}\bE_{X_0}\left[ \int^\infty_0 e^{-qu}dL^x_u | \cG_t\right] -r_q(-X_t+x) - L^x_t \end{align} Hence, by \eqref{prop3-1} and \eqref{prop3-2} we have \begin{align}\label{prop3-3} &r_q(-X_t+x)-q\int^t_0r_q(-X_s+x)ds+L^x_t \notag \\&=-q\int^t_0e^{qs}\bE_{X_0}\left[\int^\infty_0 e^{-qu}dL^x_u | \cG_s \right]ds +e^{qt}\bE_{X_0}\left[ \int^\infty_0 e^{-qu}dL^x_u | \cG_t\right] \end{align} For the sake of simplicity of notations, we shall write \begin{align*} Y_t&:=\bE_{X_0}\left[ \int^\infty_0 e^{-qu}dL^x_u | \cG_t\right], \\ Z_t &:=-q\int^t_0e^{qs}Y_s ds+e^{qt}Y_t. \end{align*} Since we know $Z_0=r_q(-X_0+x)$, by \eqref{prop3-3}, we will show that $Z_t$ is a martingale with respect to the natural filtration $\{\cG_t\}_{t\geq0}$. By Fubini's theorem, we have for all $0 \leq v <t$, \begin{align*} \bE_{X_0}[Z_t | \cG_v] &=-q\int^t_0 e^{qs}\bE_{X_0}[Y_s|\cG_v]ds+e^{qt}\bE_{X_0}[Y_t|\cG_v] \\&=-q\int^v_0 e^{qs}Y_s ds-q\int^t_v e^{qs}Y_v ds+e^{qt}Y_v \\&=-q\int^v_0 e^{qs}Y_s ds+e^{qv}Y_v \\&=Z_v, \end{align*} and the required result follows. \end{proof} Now we will establish the Tanaka formula for asymmetric L\'evy processes. \begin{theorem}\label{thm2} Suppose that the conditions \textbf{(A)} and \textbf{(B)} hold. Let $h$ and $M^{q,x}$ be the same as in Theorem \ref{thm1} and Proposition \ref{prop3} respectively. Then, for each $t\geq0$ and $x \in \bR$, it holds that \begin{equation*} h(X_t-x)=h(X_0-x)+\tilde{N}^x_t + L^x_t, \end{equation*} where $\tilde{N}^x_t := - \lim_{q\downarrow0}M^{q,x}_t$ is a martingale. \end{theorem} \begin{proof} From the Doob--Meyer decomposition (Proposition \ref{prop3}), let $q \downarrow 0$ and by Theorem \ref{thm1}, then we have \begin{equation*} h(X_t-x)=h(X_0-x) -\lim_{q \downarrow 0}\left(M^{q,x}_t+q\int^t_0 r_q(-X_s+x)ds\right) +L^x_t. \end{equation*} Recall that $0\leq r_q(y)\leq r_q(0)$ for all $y \in \bR$, and then, \begin{equation*} 0 \leq q\int^t_0 r_q(-X_s+x)ds \leq qr_q(0)t. \end{equation*} Hence, by Lemma \ref{lem4}(iv), \begin{equation}\label{thm2-1} q\int^t_0 r_q(-X_s+x)ds \to 0 \quad \text{as $q \downarrow 0$}. \end{equation} It remains to show that $\tilde{N} := -\lim_{q\downarrow0}M^{q,x}$ is a martingale. Thus, we will prove that \begin{equation*} \bE_0|\tilde{N}^x_t-M^{q,x}_t| \to 0 \quad \text{as $q \downarrow 0$}. \end{equation*} We know that \begin{align*} |\tilde{N}^x_t-M^{q,x}_t| &\leq |h(X_t-x)-h_q(X_t-x)| +|h(X_0-x)-h_q(X_0-x)| \\&\quad +q\int^t_0r_q(-X_s+x)ds. \end{align*} By Theorem \ref{thm1}, the second term of the above right-hand side goes to $0$ as $q \downarrow 0$. By \eqref{thm2-1}, the last term converges to $0$ as $q \downarrow 0$. It remains to prove the convergence of the first term as $q \downarrow 0$. Thus, it is enough to show that $h_q(X_t-x)$ converges in $L^1(d\bP_0)$ to $h(X_t-x)$ as $q \downarrow 0$. Since $h_q(y)\geq0$ for any $y \in \bR$, we have \begin{align*} h_q(x) &\leq h_q(x) + h_q(-x) \\&=\frac{2}{\pi}\int^\infty_0 \Re\left( \frac{1-\cos(ux)}{q-\eta(u)} \right)du \\&\leq \frac{2}{\pi}\int^\infty_0 \frac{1-\cos(ux)}{|\eta(u)|}du. 
\end{align*} Using Fubini's theorem, Lemma \ref{lem4}(ii) and (iii), we have \begin{align*} &\bE_0\left[ \int^\infty_0 \frac{1-\cos(u(X_t-x))}{|\eta(u)|}du\right] \\&=\int^\infty_0 \frac{1-\Re\exp \left(t\eta(u)-iux\right)}{|\eta(u)|} du \\&=\int^\infty_0 \frac{1-\cos(t\Im\eta(u)-ux)\exp\left(t\Re\eta(u)\right)}{|\eta(u)|} du \\&\leq \int^1_0 \frac{1-\cos(t\Im\eta(u)-ux)-t\Re\eta(u)}{|\eta(u)|} du +\int^\infty_1 \left|\frac{2}{\eta(u)}\right|du \\&\leq \int^1_0 \frac{\left(t\Im\eta(u)-ux\right)^2}{|\eta(u)|} du +\int^\infty_1 \left|\frac{2}{\eta(u)}\right|du + t \\&\leq 2\int^1_0 \frac{\left(t\Im\eta(u)\right)^2+ (ux)^2}{|\eta(u)|} du +\int^\infty_1 \left|\frac{2}{\eta(u)}\right|du + t <\infty. \end{align*} Hence, it follows from the dominated convergence theorem. Therefore, \begin{equation*} \bE_0 |\tilde{N}^x_t-M^{q,x}_t| \to 0 \quad \text{as $q \downarrow 0$}. \end{equation*} The proof is now complete. \end{proof} \begin{remark} \rm{ From Theorem \ref{thm2}, we obtain the invariant excessive function with respect to the killed process. Indeed, when we denote the law of the process starting at $x$ killed upon hitting zero and the corresponding expectation by $\bP^0_x$ and $\bE^0_x$ respectively, under the condition \textbf{(A)} and \textbf{(B)}, we have, \begin{equation*} \bE^0_x[h(X_t)]=h(x), \end{equation*} for all $t\geq0$ and $x \in \bR$. } \end{remark} \section{Examples}\label{sec5} We shall introduce examples satisfying the conditions \textbf{(A)} and \textbf{(B)}. Because the condition \textbf{(A)} is a sufficient condition to have local times and explicit resolvent densities, we give examples with a focus on satisfying the condition \textbf{(B)}. \begin{example}[Stable process]\label{ex1} \rm{ Let $X$ be an asymmetric stable process with index $\a l\in (1,2)$. The L\'evy measure $\nu$ on $\bR \setminus \{0\}$ is given by \begin{equation*} \nu(dy) = \begin{cases} c_+|y|^{-\al-1}dy &\quad \text{on $(0,\infty)$}, \\ c_-|y|^{-\al-1}dy &\quad \text{on $(-\infty,0)$}, \end{cases} \end{equation*} where $\al \in (1,2)$, and $c_+$ and $c_-$ are non-negative constants such that $c_+ + c_- > 0$. The L\'evy symbol $\eta$ of $X$ is represented as \begin{equation*} \eta(u) = -d|u|^\al \left(1-i\be\sgn(u)\tan\frac{\pi \al}{2}\right), \end{equation*} where $d > 0$ and $\be \in [-1,1]$ are given by \begin{equation*} d=\frac{c_+ +c_-}{2c(\al)}, \quad \be=\frac{c_+-c_-}{c_+ +c_-} \end{equation*} with \begin{equation*} c(\al)=\frac{1}{\pi}\Ga(\al+1)\sin\frac{\pi \al}{2}. \end{equation*} See \cite{Sat} on details. Since we have for all $q>0$, \begin{align*} \left|\frac{1}{q-\eta(u)}\right| \leq \frac{1}{q-\Re\eta(u)} =\frac{1}{q+d|u|^{\al}}, \end{align*} and $\al \in(1,2)$, the process $X$ satisfies the condition \textbf{(A)}. Since we have for all $0<u\leq1$, \begin{align*} \left|\Im\left(\frac{u}{\eta(u)}\right)\right| \leq \left|\frac{u}{\eta(u)}\right| \leq \frac{u}{|\Re\eta(u)|} =\frac{1}{d}|u|^{1-\al}, \end{align*} by $1-\al \in(-1,0)$, the process $X$ satisfies the condition \textbf{(B)}. In this case, it can be represented by \begin{equation*} h(x) = c(-\al) \frac{1 - \be \sgn(x)} {d\left(1 + \be^2 \tan^2 (\pi \al \slash 2)\right)} |x|^{\al - 1}. \end{equation*} The result is consistent with \cite{Tsu}. } \end{example} \begin{remark} \rm{ In \cite{Tsu}, by using the Fourier transform, we could find the fundamental solution $F$ of the infinitesimal generator for a stable process $S =(S_t)_{t\geq0}$ with index $\al \in(1,2)$. Moreover, we have $F(x)=h(x)$ for all $x\in\bR$. 
In \cite{Tsu}, since It\^o's stochastic calculus was used, the martingale part $\tilde{N}^x_t$ of the Tanaka formula can be represented in the explicit form: \[ \tilde{N}^x_t :=\int^t_0\int_{\bR\setminus\{0\}}\{F(S_{s-}-x+h)-F(S_{s-}-x)\} \tilde{N}(ds,dh). \] Thus, the properties of local times could be studied from the Tanaka formula. On the other hand, for general L\'evy processes, even if the renormalized zero resolvent and the local time exist, we cannot use It\^o's stochastic calculus, because the explicit form of the renormalized zero resolvent is not known. } \end{remark} \begin{example}[Truncated stable process]\label{ex2} \rm{ A truncated stable process is a L\'evy process with the L\'evy measure $\nu$ on $\bR \setminus \{0\}$ given by \begin{equation*} \nu(dy) = \begin{cases} c_+|y|^{-\al -1}1_{\{y\leq1\}}dy &\quad \text{on $(0,\infty)$}, \\ c_-|y|^{-\al -1}1_{\{y\geq-1\}}dy &\quad \text{on $(-\infty,0)$}, \end{cases} \end{equation*} where $\al \in (1,2)$, and $c_+$ and $c_-$ are non-negative constants such that $c_+ + c_- > 0$. Since we know $1-\cos(x) \geq x^2 \slash 4$ for $|x| \leq1$, we have for all $u\geq1$, \begin{align*} -\Re\eta(u)&=\int_{\bR\setminus\{0\}} \left(1-\cos(uy)\right)\nu(dy) \\&\geq \frac{1}{4}\int_{|y| \leq u^{-1}} (uy)^2\nu(dy) \\&=\frac{c_++c_-}{4}\int^{u^{-1}}_0 u^2y^{-\al+1}dy \\&=\frac{c_++c_-}{4(2-\al)}u^{\al}. \end{align*} We then have for all $q>0$, \begin{align*} \int^\infty_0 \left|\frac{1}{q-\eta(u)}\right|du &\leq \int^\infty_0 \frac{1}{q-\Re\eta(u)}du \\&\leq \frac{1}{q} + \frac{4(2-\al)}{c_++c_-}\int^\infty_1u^{-\al}du < \infty, \end{align*} since $-\al \in (-2,-1)$. Thus, the process $X$ satisfies the condition \textbf{(A)}. Since $|\sin(x)-x| \leq |x|^3$ for all $x\in\bR$, we have for all $0 < u \leq1$, \begin{align*} \left| \frac{\Im\eta(u)}{u^3} \right| &=\left| \int_{|y|\leq1}\frac{\sin(uy)-uy}{u^3}\nu(dy)\right| \\&\leq \int_{|y|\leq1} \left| \frac{\sin(uy)-uy}{u^3}\right| \nu(dy) \\&\leq \int_{|y|\leq1}|y|^3\nu(dy) < \infty. \end{align*} Thus, the process $X$ satisfies the condition \textbf{(B)}. } \end{example} \begin{remark} \rm{ If a L\'evy measure has bounded support, the condition \textbf{(B)} holds by the same argument as in Example \ref{ex2}. } \end{remark} \begin{example}[Tempered stable process]\label{ex3} \rm{ A tempered stable process is a L\'evy process with the L\'evy measure $\nu$ on $\bR \setminus \{0\}$ given by \begin{equation*} \nu(dy) = \begin{cases} c_+|y|^{-\al_+ -1}e^{-\la_+ |y|}dy &\quad \text{on $(0,\infty)$}, \\ c_-|y|^{-\al_- -1}e^{-\la_- |y|}dy &\quad \text{on $(-\infty,0)$}, \end{cases} \end{equation*} where $\al_+, \al_- \in (1,2)$, and $c_+$, $c_-$, $\la_+$ and $\la_-$ are non-negative constants such that $c_+ + c_- > 0$. These processes have been studied as models for stock price behavior in finance; see Carr et al. \cite{Car} for details. Since we have for all $u\geq1$, \begin{align*} -\Re\eta(u) &\geq \frac{1}{4}\int_{|y|\leq u^{-1}}(uy)^2\nu(dy) \\&\geq \frac{u^2}{4}\left(c_+ e^{-\la_+}\int^{u^{-1}}_0 y^{-\al_+ +1}dy + c_- e^{-\la_-}\int^{u^{-1}}_0 y^{-\al_- +1}dy\right) \\&=\frac{c_+ e^{-\la_+}}{4(2-\al_+)}u^{\al_+} + \frac{c_- e^{-\la_-}}{4(2-\al_-)}u^{\al_-}, \end{align*} and $\al_+, \al_- \in (1,2)$, the process $X$ satisfies the condition \textbf{(A)}.
In the case of $\la_+, \la_- >0$, or $c_+=0, \la_->0$, or $ c_-=0, \la_+>0$, since $|\sin(x)-x| \leq |x|^3$ for all $x\in\bR$, we have for all $0< u \leq1$, \begin{align*} \left| \frac{\Im\eta(u)}{u^3}\right| \leq \int_{\bR \setminus \{0\}} \left| \frac{\sin(uy)-uy}{u^3} \right|\nu(dy) \leq \int_{\bR \setminus \{0\}} |y|^3 \nu(dy) <\infty. \end{align*} Thus, this case satisfies the condition \textbf{(B)}. In the case of $c_+>0, \la_+ =0$, by the same argument as in Example \ref{ex1}, we have for all $u \in \bR$, \begin{align*} -\Re\eta(u) \geq \int^\infty_0 \left(1-\cos(uy)\right)\nu(dy) =\frac{c_+}{2c(\al_+)}|u|^{\al_+}, \end{align*} where $c(\al_+)=(1\slash\pi)\Ga(\al_++1)\sin(\pi \al_+ \slash 2)$. Since we have for $0<u\leq1$, \begin{align*} \left|\Im\left(\frac{u}{\eta(u)}\right)\right| =\left|\frac{u\Im\eta(u)}{(\Re\eta(u))^2+(\Im\eta(u))^2}\right| \leq \frac{u}{2|\Re\eta(u)|} \leq \frac{c(\al_+)}{c_+}u^{1-\al_+}, \end{align*} and $1-\al_+\in (-1,0)$, this case satisfies the condition \textbf{(B)}. In the case of $c_->0, \la_-=0$, the condition \textbf{(B)} holds by the same argument as in the case of $\la_+=0$. Thus, the process $X$ satisfies the condition \textbf{(B)}. } \end{example} \begin{example}\label{ex4} \rm{ Suppose that the condition \textbf{(A)} holds, and that the L\'evy measure $\nu$ satisfies $\int_{|y|>1}|y|\nu(dy)<\infty$ and $b \neq -\int_{|y|>1}y\nu(dy)$. Since we have \[ \left|\Im\left(\frac{u}{\eta(u)}\right)\right| \leq \left|\frac{u}{\Im\eta(u)}\right|, \] and $|\sin(x)-x1_{|x|\leq1}|\leq|x|^3 \wedge |x|$ for all $x \in \bR$, we have \begin{align*} \left|\frac{\Im\eta(u)}{u}\right| &=\left|b + \int_{|y|\leq1}\frac{\sin(uy)-uy}{u}\nu(dy) +\int_{|y|>1}\frac{\sin(uy)}{u}\nu(dy)\right| \\&\to \left| b+\int_{|y|>1}y\nu(dy)\right| >0, \end{align*} as $u \downarrow 0$, by the dominated convergence theorem. Hence, the process $X$ satisfies the condition \textbf{(B)}. } \end{example} \begin{example}[Spectrally positive or negative process]\label{ex5} \rm{ A L\'evy process with no positive (negative) jumps is called a spectrally negative (positive) process. These processes have been studied as models for insurance risk and dam theory. Suppose that the condition \textbf{(A)} holds, and that the L\'evy measure $\nu$ has support in $(-\infty,0)$ and satisfies $\int_{|y|>1}|y|\nu(dy)<\infty$. Such processes are integrable spectrally negative processes satisfying the condition \textbf{(A)}. In the case of $b \neq -\int_{|y|>1}y\nu(dy)$, the process is covered by Example \ref{ex4}. We consider the case of $b = -\int_{|y|>1}y\nu(dy)$. Since we have for all $x\in\bR$, \begin{align*} 0 &\leq h_q(x) \leq h_q(x)+h_q(-x) \\&=\frac{2}{\pi}\int^\infty_0 \Re\left(\frac{1-\cos(ux)}{q-\eta(u)}\right)du, \end{align*} by Lemma \ref{lem4}(ii) and (iii) we have \begin{align*} &\left|\int^1_0\Im\left(\frac{\sin(u)}{q-\eta(u)}\right)du\right| \\&\leq \left|\int^\infty_0\Re\left(\frac{1-\cos(u)}{q-\eta(u)}\right)du\right| + \left|\int^\infty_1\Im\left(\frac{\sin(u)}{q-\eta(u)}\right)du\right| \\&\leq \int^\infty_0\frac{|u|^2\wedge1}{|\eta(u)|}du + \int^\infty_1\left|\frac{1}{\eta(u)}\right|du <\infty. \end{align*} Since $\Im\eta(u) \geq 0$ for all $u \geq0$, for all $0<u\leq1$ the quantity \begin{align*} \Im\left(\frac{\sin(u)}{q-\eta(u)}\right) = \frac{\Im\eta(u)\sin(u)}{(q-\Re\eta(u))^2+(\Im\eta(u))^2} \end{align*} is increasing as $q \downarrow 0$; hence, by the monotone convergence theorem, the condition \textbf{(B)} follows.
Integrable spectrally positive processes satisfying the condition \textbf{(A)} also satisfy the condition \textbf{(B)} by the same argument as in the spectrally negative case. } \end{example} \begin{remark} \rm{ Example \ref{ex1} also satisfies the condition \textbf{(L3)}. However, Examples \ref{ex2} and \ref{ex3} do not satisfy the condition \textbf{(L3)}, and in Examples \ref{ex4} and \ref{ex5} there exist processes that do not satisfy the condition \textbf{(L3)}. } \end{remark} \section*{Acknowledgements} I would like to thank Professor Atsushi Takeuchi of Osaka City University and Professor Kouji Yano of Kyoto University for their valuable advice.
\section*{Abstract} Sea ice cover in the Arctic and Antarctic is an important indicator of changes in the climate, with significant environmental, economic and security consequences. The complexity of the spatio-temporal dynamics of sea ice makes it difficult to assess the temporal nature of the changes (e.g. linear or exponential) and their precise geographical loci. In this study, Koopman Mode Decomposition (KMD) was applied to satellite data of sea ice concentration for the northern and southern hemispheres to gain insight into the temporal and spatial dynamics of the sea ice behavior and to predict future sea ice behavior. We discover exponentially decaying spatial modes in both hemispheres and discuss their precise spatial extent, and also perform precise geographic predictions of sea ice concentration up to four years in the future. This data-driven decomposition technique gives insight into spatial and temporal dynamics not apparent in traditional linear approaches. \section*{Introduction} Sea ice is floating ice that forms when ocean water freezes. The formation and distribution of sea ice play an important role in the planet's climate, and thus large amounts of data related to quantitative measures of sea ice have been collected, including continuous satellite remote sensing measurements since 1978. The decreasing extent of Arctic sea ice over the last several decades has had negative effects on Arctic wildlife and local communities, while also potentially opening new regions to maritime commerce and natural resources exploration. The future of sea ice behavior is thus of great significance for environmental, economic, and national security reasons. Several studies suggest a nonlinear trend in the decline of the sea ice cover \cite{comisoetal:2008,stroeveetal:2012}. A variety of approaches have been applied to predict future sea ice behavior over short time scales (1-3 months in the future), including both dynamical approaches (model-based, using either coupled ice-ocean-atmosphere or ice-ocean models) and statistical approaches (data-based); see the reports of the Sea Ice Prediction Network (SIPN) \cite{MeierEtAl:2018} and, e.g., kernel analog forecasting \cite{ComeauEtAl:2018}. Statistical approaches were reported by the SIPN to be similarly or more successful than dynamical approaches for prediction of sea ice distributions 1-3 months in the future \cite{MeierEtAl:2018}. Among statistical approaches, spatial-mode-based methods \cite{KondrashovEtAl:2018} have particular advantages. Compared to regression or trend based approaches, spatial-mode-based approaches are powerful tools for studying and predicting the geographic and temporal behavior of sea ice because they decompose the time dependent sea ice data into time varying spatial structures of physical significance. Here we apply Koopman Mode Decomposition (KMD) \cite{Mezic:2005} to sea ice concentration dynamics and prediction. KMD is a mathematical tool well suited to analyzing sea ice dynamical behavior because it identifies important spatial structures and their complex time dependence from large data sets such as those available for sea ice. The Koopman operator theory \cite{MezicandBanaszuk:2004,Mezic:2005,Rowleyetal:2009,Budisicetal:2012,Williamsetal:2015,Bruntonetal:2016,Giannakisetal:2015} is already widely used to analyze data and provide models for complex dynamic processes.
Mathematically, the Koopman operator \cite{Nageletal:2014} is a linear representation, on a space of observables, of an appropriate group action on the state space. As it is a linear representation, it is natural that key objects of analysis will be its eigenvalues and eigenfunctions. It turns out that in distributed systems (for example, those with a spatial component, e.g. a fluid velocity field, or a dynamical system on a graph), in addition to the eigenvalues and eigenfunctions, there is a third class of objects of importance: the Koopman modes \cite{Rowleyetal:2009}. The eigenvalues of the Koopman operator provide the time scales on which a (potentially exponential) change in sea ice cover is happening, and the modes indicate the spatial extent of the changes. Crucially, the Koopman operator methods do not require a model: observables like sea ice thickness and concentration are sufficient to compute the eigenvalues and the associated modes. The most popular computational method for Koopman eigenvalues and modes is the Dynamic Mode Decomposition (DMD), which has become a major tool in the data-driven analysis of complex dynamical systems. DMD was first introduced in 2008 by P. Schmid \cite{Schmid:2008wv} for the study of fluid flows, where it was conceptualized as an algorithm to decompose the flow field into component fluid structures, called ``dynamic modes'' or ``DMD modes'', which describe the evolution of the flow. The DMD modes and their temporal behavior are given by the spectral analysis of a linear operator that is constructed from data, since it is assumed that direct access to the operator is not available. The book \cite{DMDbook} provides references and an introduction to a variety of DMD-related algorithms. Rowley et al. \cite{Rowleyetal:2009} gave the method theoretical underpinnings by connecting it to the spectral analysis of the Koopman operator. The paper \cite{Drmacetal:2018} provides both enhancements and analysis of the DMD method, as well as additional theoretical underpinning for its relationship to the Koopman operator, and contrasts it with Galerkin projection (or finite section) methods such as EDMD \cite{Williamsetal:2015}. A data set well suited for study and prediction with KMD is the set of satellite-based sea ice concentration measurements from the NSIDC Sea Ice Index \cite{nsidc_seaIceIndex}, due primarily to the long and continuous time period (from November 1978 to the present) and the large geographic regions over which this data is available. KMD analysis was applied both to the entire Arctic and Antarctic sea regions as a whole and to specific sea regions in each hemisphere. The geographic regions used for the Arctic were those given by Boisvert and Stroeve \cite{boisvert2015arctic} and the Antarctic regions were those given by the NSIDC \cite{nsidc_seaIceIndex} (see Fig. \ref{fig1_regions} for the definitions of each region). Note that, using these geographic definitions, not all of the ocean region in the northern hemisphere data is considered part of the Arctic; therefore, the sea ice concentration data from these non-Arctic regions were excluded from the KMD analysis. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.33\textwidth} \centering \fbox{\includegraphics[width=0.95\textwidth]{fig1_north_regions}} \caption{} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \centering \setlength{\fboxsep}{0.9pt} \fbox{\includegraphics[width=0.95\textwidth]{fig2_south_regions}} \caption{} \end{subfigure} \caption{Northern and southern hemisphere geographic regions considered.
(a) Northern hemisphere geographic regions. 1: Sea of Okhotsk, 2: Bering Sea, 3: Hudson Bay, 4: Gulf of St Lawrence, 5: Baffin Bay, 6: East Greenland Sea, 7: Barents Sea, 8: Kara Sea, 9: Laptev Sea, 10: East Siberian Sea, 11: Chukchi Sea, 12: Beaufort Sea, 13: Canadian Arctic Archipelago, 14: Central Arctic Ocean. (b) Southern hemisphere geographic regions. 1: Weddell Sea, 2: Indian Ocean, 3: Pacific Ocean, 4: Ross Sea, 5: Bellingshausen and Amundsen Seas.} \label{fig1_regions} \end{figure} Examination of a mode shows the geographic locations where the sea ice concentration has oscillatory, growth, or decay behavior as determined by the associated eigenvalue, and thus allows one to associate particular Koopman modes with aspects of sea ice dynamics of interest. For example, modes with oscillation periods of one year correspond to the annual variation between the sea ice minimum extent in the late summer and the sea ice maximum extent in the late winter, and modes with eigenvalues near zero correspond to the mean sea ice concentration over the time period spanned by the data. Furthermore, the presence of eigenvalues with multi-year oscillatory periods suggests the presence of long-term periodic variations in sea ice behavior, and eigenvalues with real components leading to relatively slow growth or decay time constants suggest the existence of long-term increases or decreases in sea ice concentration, where again the associated modes show the geographic regions where such behaviors occur. Note that while, by definition, the modes resulting from KMD have complex-exponential time dependence, the identification of modes possessing long-term exponential decay is non-trivial; here it is supported by the fact that the identified modes have corresponding eigenvalues that are isolated in the eigenvalue space. Linear growth or decay can be reproduced with a combination of exponential modes with eigenvalues close together in the eigenvalue space; the long-term exponential decay modes identified here have no such clustering of the corresponding eigenvalues. The Koopman modes and eigenvalues also permit prediction of the future sea ice concentration behavior. Data decomposed using KMD can be reconstructed over its original time period, and the same reconstruction equations allow for prediction of future behavior simply by increasing the time variable to values beyond the time period of the original input data. In this work we apply KMD to data over various multi-year time windows ending before a given year and produce prediction results for the years after each window, thus enabling a judgment of the goodness of the KMD predictions by comparison with the true ``future'' values. We applied KMD reconstruction techniques to the prediction of future sea ice concentrations both in the entire northern and southern hemispheres where data was available and in particular geographic regions within each hemisphere. Because the dynamics of the seasonal variation in sea ice concentration differ greatly between high latitude regions, which have significant or complete sea ice coverage in the winter and can retain some sea ice through the summer, and lower latitude regions, which do not necessarily reach complete sea ice coverage in the winter nor retain any sea ice through the summer, it is of interest to examine the KMD prediction results in various seas, bays, and other specific regions. To that end, KMD prediction results for each of the sea regions described previously were examined separately.
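To make the mapping between eigenvalues and the timescales discussed above concrete, the following is a minimal Python sketch (our own illustration, not code from this study) that converts a discrete-time Koopman eigenvalue obtained from monthly snapshots into a growth/decay time constant and an oscillation period, following the conventions ($\tau^{osc}=1/\omega$, $\Re(\lambda)=1/\tau^{decay}$) adopted in the Materials and Methods section below.
\begin{verbatim}
import numpy as np

def mode_timescales(lam_discrete, dt_months=1.0):
    # Discrete-time eigenvalue (one step = dt_months) to continuous time:
    # lambda = log(lam_discrete) / dt.
    lam = np.log(complex(lam_discrete)) / dt_months
    # Growth/decay time constant in months (negative value => decay).
    tau_decay = np.inf if lam.real == 0 else 1.0 / lam.real
    # Oscillation period in months, using the convention tau_osc = 1/omega.
    tau_osc = np.inf if lam.imag == 0 else 1.0 / abs(lam.imag)
    return tau_decay, tau_osc
\end{verbatim}
For example, an eigenvalue inside the unit circle with no imaginary part corresponds to a purely decaying mode, while an eigenvalue on the unit circle corresponds to a purely oscillatory one.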
The primary results of the work are: \begin{itemize} \item The presence of Koopman modes showing the change in the geographic distribution of sea ice over the time period covered by satellite data, specifically the reduction in the mean Arctic and Antarctic sea ice concentration and the increased annual variation near West Antarctica and in the Arctic marginal seas. \item Long-term exponential decay behavior in sea ice concentration in both the Arctic and Antarctic, indicative of feedback mechanisms that accelerate the decline in the extent of the sea ice cover \cite{comisoetal:2008,cohenetal:2014,goosseetal:2018}. \item The ability of Koopman-based reconstruction techniques to predict future sea ice behavior over multi-year timescales. \end{itemize} \section*{Materials and Methods} The objective of this study was to apply Koopman Mode Decomposition (KMD) and analysis techniques to existing satellite image data of sea ice concentration. Existing KMD algorithms were applied to the satellite images, and the resulting Koopman modes and eigenvalues, defined in the following paragraph, showed the temporal and spatial details of the sea ice concentration dynamics. The application of Koopman Mode Decomposition to time series data $g(t,\mathbf{z}_{0})\in \mathbb{R}^n$, $t \in I$, where $n$ is the number of field observations, $I$ the set of time snapshots and ${\bf z}_0$ an initial condition, consists of expanding the time series observables onto the Koopman eigenfunctions to produce a set of Koopman modes and Koopman eigenvalues \cite{Mezic:2005, Mezic:2013}: \begin{equation*} g(t,\mathbf{z}_{0}) = \sum_{j=1}^{\infty} \left( e^{\lambda_{j} t} \mathbf{v}_{j}(\mathbf{z}_{0}) + e^{\bar{\lambda}_{j} t} \bar{\mathbf{v}}_{j}(\mathbf{z}_{0}) \right) +{\bf n}(t) \end{equation*} where $\lambda_{j}$ are the Koopman eigenvalues, $\mathbf{v}_{j}$ are the Koopman modes, the overline indicates complex conjugation, and ${\bf n}(t)$ is the part of the time evolution associated with the continuous spectrum \cite{Mezic:2005,Mezic:2013}. Note that the dependence on the initial state ${\bf z}_0$ is sometimes taken out of the mode itself, when eigenfunctions are used in the expansion. For each eigenvalue and its corresponding mode, the imaginary component of the eigenvalue $\Im(\lambda_{j}) = \omega_{j}$ determines the oscillation frequency $\omega_{j}$ of mode $j$ (and the oscillatory period $\tau^{osc}_{j} = 1/\omega_{j}$), and the real component $\Re(\lambda_{j}) = 1/\tau^{decay}_{j}$ determines the growth or decay time constant $\tau^{decay}_{j}$ of the mode. It has been shown that short-timescale fluctuations can have a significant effect on large-scale features of climate systems, but the standard modeling practice of averaging the outputs of multiple simulations leads to the loss of this effect and thus to incorrect predictions \cite{DingEtAl:2018}. An advantage of KMD as a data-driven analytical technique is that, through the eigenvalues, it retains and uses the entire range of timescales present in the input data. The mode itself determines the spatial structure of the specific dynamical behavior given by the eigenvalue. Data pre-processing consisted of converting the NSIDC Sea Ice Index image data files of average monthly sea ice concentration to numerical arrays, removing the pixels corresponding to land areas, and reshaping the remaining sea pixels into a 1-D array for each month.
Note that there is a ``polar data gap'' in a circular region around the North Pole where concentration measurements are not available due to the coverage of the satellite-based remote sensing instruments used to collect the sea ice concentration data. This region is traditionally either treated as completely ice covered or filled in based on the observed boundary conditions of the region \cite{StrongAndGolden:2016}. For KMD, it is not necessary to fill in this region, and so the points in the polar data gap were excluded from our analysis. Data files were missing for a small number of months in the 1980s (three in the northern hemisphere and two in the southern hemisphere), so it was necessary to interpolate over the missing months to allow use of all data back to 1979, giving 40 full years of data (1979 to 2018). The arrays for each month were then combined into a 2-D data matrix, on which KMD analysis was performed using algorithms based on both Arnoldi \cite{Rowleyetal:2009} and DMD type methods \cite{DMDbook,Drmacetal:2018}. The results from the two categories of algorithms were found to be identical, which was taken to be a strong indication that the results are a good representation of the true Koopman eigenvalues and modes. As described in the text, the calculated Koopman eigenvalues showed the time dependence (oscillatory and growth/decay) of the Koopman modes, which themselves showed the spatial structure of the time dependence of the input data. To capture relatively short time scale dynamics, the analysis was performed on windowed data sets. The windowing consisted of performing KMD on subsets of the sea ice concentration data covering time periods of 5 to 40 years (e.g. five-year windows consisted of 1979-1983, 1980-1984, \dots, 2014-2018). Reconstruction of the $N_{p}$ sea ice concentration pixel values $\mathbf{C}$ at discrete time step $k$ is performed using the Koopman eigenvalues $\lambda_{j}$ and the Koopman modes $\mathbf{v}_{j}$ obtained from applying KMD to the concentration values over $N$ time steps (months, in this case): \begin{equation*} \mathbf{C}_k = \sum_{j=1}^{N} \lambda_{j}^{k-1} \mathbf{v}_{j} \end{equation*} Here, there are $N$ Koopman eigenvalues and Koopman modes, where each Koopman eigenvalue is a single complex number and each Koopman mode has dimensions 1 by $N_{p}$. For $1 \le k \le N$, $\mathbf{C}_{k}$ is termed a reconstruction of the $k$th time step in the original data $\mathbf{C}$, as the Koopman eigenvalues and modes came from a decomposition of the observations over this time range and should simply reproduce the data used as input to the KMD. For $k>N$, $\mathbf{C}_{k}$ is a prediction of the future behavior of the sea ice concentration for the (future) $k$th time step, based on the system dynamics deduced from decomposition of earlier observations. No probability distribution is assumed in the KMD process, so no statistical methods were applied. The deviation between the KMD reconstruction-based predictions of future sea ice concentrations and the actual values is due to two factors: the finite dimensionality of numerical realizations of KMD algorithms, which for relatively high-dimensional data such as that used in this study is not expected to be a major source of error, and the stochastic nature of the underlying climatological processes driving sea ice concentration dynamics, which will produce behavior not predictable in a purely dynamical model such as that produced by KMD reconstruction-based predictions.
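As an illustration of how such a decomposition and reconstruction can be computed, the following is a minimal Python sketch of one standard DMD variant (exact DMD). It is our own illustrative example, under the assumption that the monthly snapshots are stored as the columns of a matrix; it is not the specific Arnoldi- or DMD-type implementation used in this study, and numerically more robust variants are discussed in \cite{Drmacetal:2018}.
\begin{verbatim}
import numpy as np

def dmd(C):
    # C: N_p x N snapshot matrix; each column is one month's vector of
    # sea-pixel concentrations. Assumes X below has full column rank.
    X, Y = C[:, :-1], C[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    lam, W = np.linalg.eig(Atilde)   # discrete-time eigenvalues
    Phi = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W @ np.diag(1.0 / lam)
    return lam, Phi                  # eigenvalues and exact DMD modes

def evaluate(lam, Phi, c0, k):
    # C_k = sum_j b_j lam_j^(k-1) phi_j; k > N yields a prediction.
    b = np.linalg.lstsq(Phi, c0, rcond=None)[0]
    return np.real(Phi @ (b * lam ** (k - 1)))
\end{verbatim}
Here the amplitudes $b_j$ obtained from the first snapshot fold the initial condition into the modes, so that $b_j\boldsymbol{\phi}_j$ plays the role of $\mathbf{v}_{j}$ in the reconstruction equation above.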
\section*{Results} We have performed KMD processing on the sea ice concentration data using both Arnoldi-type and DMD-based KMD algorithms. All of the algorithms used gave very similar results. This suggests that the dynamical behavior of the sea ice concentration data is ``well behaved'' in the sense that the resulting condition number is sufficiently small that any of the various approximations of the Koopman decomposition are valid here \cite{Drmacetal:2018}, and thus supports the conclusion that the KMD results obtained here are physically meaningful and not numerical artifacts. Figs. \ref{modes_north} and \ref{modes_south} show Koopman modes corresponding to the mean and annual variation in sea ice concentration for two 5-year time periods (1979-1983 and 2014-2018), as well as modes corresponding to long-term exponential decay over a 40-year time period. The mean mode in each case is defined as that with the largest L2-norm (taken over the components of the mode) and an eigenvalue with zero or negligible real and imaginary parts. The annual mode in each case is the mode with a $\tau^{osc}$ value closest to 12 months. In all cases the annual mode was unambiguously identifiable as a large L2-norm mode. The long-term exponential decay modes shown for the 40-year window are the two largest L2-norm modes with $\tau^{decay}$ values greater than one year. Note that although the modes have the same units as the input data, the modes can include non-physical values (i.e. concentration values less than 0\% or greater than 100\%) because the modes are mathematical structures resulting from a decomposition of the input observable data and not themselves direct representations of observable quantities. A similar distinction can be made with, e.g., the Fourier coefficients in Fourier analysis, the values of which are not limited to values between the extrema of the analyzed function. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig3_mean_mode_1979to1983_north} \caption{} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig4_annual_mode_1979to1983_north} \caption{} \end{subfigure} \\ \centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig5_mean_mode_2014to2018_north} \caption{} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig6_annual_mode_2014to2018_north} \caption{} \end{subfigure} \caption{Koopman modes representing the mean and annual variation in sea ice concentration over five-year windows for the northern hemisphere. (a) Mean coverage, 1979-1983 period, (b) annual variation, 1979-1983 period, (c) mean coverage, 2014-2018 period, (d) annual variation, 2014-2018 period. The colorbar units are percent concentration.} \label{modes_north} \end{figure} \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig7_mean_mode_1979to1983_south} \caption{} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig8_annual_mode_1979to1983_south} \caption{} \end{subfigure} \\ \centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig9_mean_mode_2014to2018_south} \caption{} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig10_annual_mode_2014to2018_south} \caption{} \end{subfigure} \caption{Koopman modes representing the mean and annual variation in sea ice concentration over five-year windows for the southern hemisphere. (a) Mean coverage, 1979-1983 period, (b) annual variation, 1979-1983 period, (c) mean coverage, 2014-2018 period, (d) annual variation, 2014-2018 period. The colorbar units are percent concentration, but as each mode is one component of a decomposition, concentration values in the modes can take on non-physical values.} \label{modes_south} \end{figure} Comparison of the mean modes in Fig. \ref{modes_north} (a) with Fig. \ref{modes_north} (c), and Fig. \ref{modes_south} (a) with Fig. \ref{modes_south} (c), shows that the mean sea ice concentration is lower in the later time periods, suggesting that the sea ice does not reach as great an extent in the winter. Similarly, examination of the annual variation modes in the two time periods shows a greater annual variation in sea ice concentration in some regions. This greater annual variation is particularly pronounced in the regions of the Beaufort Sea, Kara Sea, and the coastal corridor in between, and in the Bellingshausen and Amundsen Seas near West Antarctica and the Pacific Ocean in East Antarctica, with relatively little change in the Ross Sea area. The combination of the mean and annual variation modes shown in Figs. \ref{modes_north} and \ref{modes_south} can be viewed as first-order models of the sea ice concentration dynamics in each hemisphere over short annual timescales during the respective five-year windows. The observed decreases in sea ice concentration from the earlier to later periods suggest that a mode with primarily long-time decaying behavior is needed to reproduce the long-term loss of sea ice observed in the regions identified above. Such modes are apparent in the analysis of the entire 40-year period of the data set. Fig. \ref{longterm} (a) shows such a mode from the northern hemisphere for the entire 40-year data set, with $\tau^{decay} = 131$ months and no oscillatory component. Consistent with other observations \cite{BeitschEtAl:2014, BoisvertEtAl:2016, RickerEtAl:2017}, these modes show that the decrease in sea ice coverage is most pronounced in the regions of the Beaufort Sea and the Arctic Ocean north of European Russia. This is also consistent with the changes in the mean and annual variation observed in the five-year window cases above. Fig. \ref{longterm} (b) shows an equivalent mode for the southern hemisphere, with slow decay ($\tau^{decay} = 234$ months) and a long oscillation period ($\tau^{osc} = 238$ months), representing a decrease in sea ice concentration that occurs primarily in West Antarctica.
This region is known to be warming more rapidly than Antarctica as a whole \cite{Rignot:2008,GardnerEtAl:2018}, so this KMD mode is consistent with that observation and with the result from the five-year windows above showing decreased mean ice coverage and increased annual variation near West Antarctica. Note that the exponential decay described by the identified modes occurs in the geographic regions indicated by the spatial content of the mode, so the decrease in sea ice concentration on those time scales occurs locally in those regions. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig11_longterm_mode_1979to2018_north} \caption{} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=0.96\textwidth]{fig12_longterm_mode_1979to2018_south} \caption{} \end{subfigure} \caption{Koopman modes representing long-term, exponential decay for the period 1979-2018. (a) Koopman mode representing long-term, exponential decay in the northern hemisphere, corresponding to exponential decay with $\tau^{decay} = 131$ months. (b) Koopman mode representing long-term, exponential decay in the southern hemisphere, corresponding to exponential decay in the Antarctic with $\tau^{decay} = 234$ months. The colorbar units are percent concentration, but as each mode is one component of a decomposition, concentration values in the modes can take on non-physical values.} \label{longterm} \end{figure} The identification of an oscillatory period comparable to or greater than half of the time period of the input observations (that is, 238 months as described above, compared to the 480 months of available input data) does not violate the Nyquist criterion, as the large number of spatial dimensions effectively permits sampling of the underlying system dynamics over a wider range of oscillation phase values than would observation of the time variation of a single spatial point. Fig. \ref{prediction_north} shows example northern hemisphere sea ice concentration prediction results compared with the actual sea ice concentration for the same month. The data shown are the predictions for March, when the sea ice concentration is at its annual maximum, and September, when the sea ice concentration is at its annual minimum, for the four years following the 30-year window 1984-2013 that was used as the input data for KMD, where the prediction calculation included all 360 KMD modes from the decomposition of the 360 months in the 30-year time window. It is seen that the prediction results match the general extent and magnitude of the actual data, as well as capturing many small-scale features such as the shape of the concentration near the east coast of Greenland. \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.8\textwidth} \includegraphics[width=\textwidth]{fig13_prediction_north_1984to2013_3_Mar2014} \caption{} \end{subfigure} \begin{subfigure}[b]{0.8\textwidth} \includegraphics[width=\textwidth]{fig14_prediction_north_1984to2013_9_Sep2014} \caption{} \end{subfigure} \begin{subfigure}[b]{0.8\textwidth} \includegraphics[width=\textwidth]{fig19_prediction_north_1984to2013_39_Mar2017} \caption{} \end{subfigure} \begin{subfigure}[b]{0.8\textwidth} \includegraphics[width=\textwidth]{fig20_prediction_north_1984to2013_45_Sep2017} \caption{} \end{subfigure} \caption{Comparison of actual data and prediction results for the winter sea ice maxima and summer minima in the northern hemisphere for KMD performed on the input data period January 1984 to December 2013 (Left: actual concentration. Middle: predicted concentration. Right: absolute difference between actual and predicted concentration). (a) March 2014, (b) September 2014, (c) March 2017, (d) September 2017.} \label{prediction_north} \end{figure} Fig. \ref{prediction_south} shows example southern hemisphere sea ice concentration prediction results compared with the actual sea ice concentration for the same month. The data shown are the predictions for March, when the sea ice concentration is at its annual minimum, and September, when the sea ice concentration is at its annual maximum, for the four years following the 30-year window 1984-2013 that was used as the input data for KMD. Again, it is seen that the prediction results match the general extent and magnitude of the actual data, as well as capturing many small-scale features, such as the remaining summer sea ice in the smaller bays and seas around East Antarctica and off of Marie Byrd Land in West Antarctica. The predictions tend to overestimate the sea ice concentration in the oceans away from the coast during the summer sea ice minima, and to underestimate the magnitude of the concentration values during the winter sea ice maxima while still capturing the winter sea ice extent well. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.85\textwidth} \includegraphics[width=\textwidth]{fig21_prediction_south_1984to2013_3_Mar2014} \caption{} \end{subfigure} \begin{subfigure}[b]{0.85\textwidth} \includegraphics[width=\textwidth]{fig22_prediction_south_1984to2013_9_Sep2014} \caption{} \end{subfigure} \begin{subfigure}[b]{0.85\textwidth} \includegraphics[width=\textwidth]{fig27_prediction_south_1984to2013_39_Mar2017} \caption{} \end{subfigure} \begin{subfigure}[b]{0.85\textwidth} \includegraphics[width=\textwidth]{fig28_prediction_south_1984to2013_45_Sep2017} \caption{} \end{subfigure} \caption{Comparison of actual data and prediction results for the summer sea ice minima and winter maxima in the southern hemisphere for 1984-2013 input data (Left: actual concentration. Middle: predicted concentration. Right: absolute difference between actual and predicted concentration). (a) March 2014, (b) September 2014, (c) March 2017, (d) September 2017.} \label{prediction_south} \end{figure} Fig. \ref{prediction_mean} shows a different view of the goodness of the predictions, showing the mean of the actual data (blue lines) and of the predictions (red lines) for each entire hemisphere and for each region. The values were computed by averaging the actual or predicted sea ice concentration of each pixel within a given region.
This shows the general trends of sea ice concentration in each region, rather than being a pixel-to-pixel comparison of the actual and predicted results. Here we see that the various Arctic seas mentioned above and other regions with large seasonal variations show good agreement between the actual and predicted results. This implies that while the prediction may not always be geographically precise in predicting the distribution of sea ice concentration within a particular region, it is generally successful at predicting the average sea ice concentration within the region. Close examination of regions of interest shows that even when discrepancies exist between the predicted and actual summer sea ice concentration minima, such as in the Central Arctic, the prediction does match the trend of the actual result (i.e., the summer minimum decreases year to year for the first three years, then increases in the fourth year). For the southern hemisphere it is seen that in the first year the prediction of the maximum sea ice concentration is very good for each region, and even the minimum for the following year is reasonably well predicted in most of the regions. \begin{figure}[h!] \centering \begin{subfigure}[b]{0.65\textwidth} \centering \includegraphics[width=\textwidth]{fig29_predictionMean_north_1984to2013} \caption{} \end{subfigure}\\ \begin{subfigure}[b]{0.65\textwidth} \centering \includegraphics[width=\textwidth]{fig30_predictionMean_south_1984to2013} \caption{} \end{subfigure} \caption{Prediction of mean sea ice concentration for all polar region seas. Comparison of actual (blue) and predicted (red) mean sea ice concentration for each entire hemisphere and for each region. Predictions are based on KMD of a 30-year data range (1984 to 2013) followed by prediction of four future years. Vertical axis units are sea ice concentration in percent. (a) Prediction for all Arctic polar region seas; (b) prediction for all Antarctic polar region seas.} \label{prediction_mean} \end{figure} \section*{Discussion and Conclusions} These results show not only the previously known existence of long-term variation in sea ice concentration \cite{CavalieriAndParkinson:2012, StrongAndRigor:2013}, including long-term decreases in sea ice coverage near West Antarctica and in the Arctic marginal seas, but also that a long-term exponential decrease in sea ice concentration exists and that Koopman Mode Decomposition allows a precise geographic view of where changes occur on annual and multi-year timescales. Such nonlinear trends in dynamics commonly result from positive feedback mechanisms such as those suggested in sea-ice dynamics studies \cite{comisoetal:2008,cohenetal:2014,goosseetal:2018}. Predictions are possible over multi-year periods and capture both large-scale features and trends in sea ice distribution and certain small-scale geographic details. The existence of long-term exponentially decaying modes seems to be of potentially substantial physical significance and warrants further measurement (including of other physical fields) and investigation. A limitation of the application of KMD is that, as a purely data-driven tool, it does not provide the physical insight into the underlying forcing or other drivers of a system's dynamical behavior that numerical or theoretical models can provide.
In this case, the geographic heterogeneity of the sea ice concentration behavior in Antarctica suggests a possible link with a proposed physical driver of the decrease in the Antarctic ice mass balance. Recent work \cite{rignot2019four} suggests the circumpolar deep water (CDW) flow as a physical mechanism for the decline of land-based ice due to increased glacier flow; this decline is largest in the regions listed above, where the decrease in the mean mode and the increase in the annual variation mode are most apparent. The undersea topography of these regions is most consistent with the upwelling of relatively warm water by a strengthening CDW, leading to increased melting of ice shelves and, we suggest, reduced sea ice formation. The accuracy of the predictions of future sea ice behavior by KMD reconstruction depends on the extent to which the sea ice concentration dynamics are governed by underlying nonlinear continuous processes, rather than by stochastic or discontinuous drivers. That is, KMD-based prediction is expected to accurately predict variations in sea ice concentration due to the interactions of both long-term growth or decay and oscillatory behavior on fast or slow time-scales; however, changes due to ``tipping points'', such as the greater mixing between the Barents Sea and North Atlantic \cite{lind2018arctic}, are not predictable from a purely data-driven examination of sea ice concentration. \section*{Data Management} All data used in this work were obtained from the NSIDC Sea Ice Index \cite{nsidc_seaIceIndex}. \section*{Acknowledgments} This work was supported by ONR contracts N00014-18-P-2004 and N00014-19-C-1053 (Program Managers: Dr. Reza Malek-Madani and Dr. Behzad Kamgar-Parsi). \nolinenumbers
\section{Introduction} \label{sec:intro} Speaker embedding refers to a fixed-length continuous-valued vector extracted from a variable-length utterance~\cite{lee2021asvtorch}. In speaker verification, the extracted speaker embedding is forwarded to the backend classifier. It is important that a speaker embedding characterizes well the speaker's individuality. An ideal speaker embedding contains solely the speaker information, so that extracted speaker embeddings are similar for the same speaker and very different between speakers. Traditionally, GMM supervectors~\cite{campbell2006support,kenny2008study} and i-vectors~\cite{dehak2009support} were used to extract speaker embeddings. With the development of deep learning, the d-vector~\cite{variani2014deep} and x-vector~\cite{snyder2018x} were proposed recently. The basic hypothesis of all of these extraction methods is that the speaker embedding contains solely one speaker's information. However, in the presence of background noise and interference, it is difficult to record only the voice of the speaker of interest. \begin{figure}[!t] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip, width=9cm]{./overview.pdf} \caption{A diagram illustrating noise-robust speaker representation using a separate training scheme (top panel), and an end-to-end training scheme (bottom panel).} \label{fig:Overview} \end{figure} In this work, we seek to extract consistent speaker embeddings for a monaural utterance under either noisy or clean conditions. Speech enhancement, which aims to keep the target signal of interest and filter out additive background noise~\cite{loizou2007speech}, is a conventional method for handling noisy utterances. Recently, deep learning based methods have shown significant improvement over the classical methods. These methods usually generate a mask imposed element-wise on the original signal to estimate the underlying clean signal \cite{kjems2009role,wang2014training,yu2020constrained,williamson2015complex,pandey2020learning,liu2020speaker}. Usually the model is trained using a mean square loss between the enhanced output and the clean target. This loss function only guarantees proximity of the enhanced speech to the clean speech and simply accounts for the averaged errors, which leads to artifacts in the output. It has been observed that speaker recognition accuracy is often hurt by first enhancing the speech signal and then performing the recognition~\cite{sadjadi2010assessment,shi2020robust}. In addition to using a classification loss, we can also use a task-specific loss, such as the {\em perceptual loss}, also called {\em deep feature loss}~\cite{johnson2016perceptual}. The perceptual loss is based on the difference of high-level feature representations extracted from a pre-trained auxiliary network. It was first proposed to deal with image style transfer and super-resolution tasks~\cite{johnson2016perceptual}. This idea can be generalized, for example, to speaker verification task-specific enhancement training~\cite{kataria2020feature,kataria2020analysis}. The approach used in~\cite{kataria2020feature,kataria2020analysis} is similar to ours in that the activations of the enhanced and reference signals are compared using an auxiliary network. However, one drawback of \cite{kataria2020feature,kataria2020analysis} is that the auxiliary network is different from the speaker representation network, although these two models are trained on the same task. The total number of parameters in their system is up to 26.1M, which is larger than that of a standard x-vector network.
This also means that they adopt a two-stage training strategy, and the enhancement model is independent of the speaker verification model. Another drawback is that the perceptual loss used in \cite{kataria2020feature,kataria2020analysis} relies heavily on clean speech as a training target, which needs to be selected manually. In this paper, we propose an end-to-end joint training network with a total of 9.8M parameters, and we modify the perceptual loss to make it work for clean utterances as well. Recent research suggests an end-to-end scheme that combines the speech enhancement task with another downstream speech task \cite{hou2020multi}, because doing so reduces the distortion. As for joint training of speech enhancement with speaker verification, a similar work is \cite{shon2019voiceid}, in which the enhancement module is trained using the loss function of the speaker identification task to improve the accuracy of speaker verification in both clean and noisy conditions. Although their method shows improvement for speaker verification, the proposed VoiceID loss based on softmax intuitively only tries to enlarge the inter-class differences among different speakers. However, a robust speaker representation network should not only maximize the inter-class distance but also minimize the intra-class variations of the learned embeddings. For this reason, we propose the PL-EESR model, which focuses on both inter-class and intra-class distances efficiently. The contribution of this paper is twofold. Firstly, we propose a robust end-to-end speaker representation network optimized using perceptual loss and cross entropy loss. Our model contains two parts: a task-specific enhancement module and a speaker embedding extraction module. During training, these two modules are first pre-trained in order: the embedding extraction module is trained on the speaker identification task and fixed; then the enhancement module is trained using cross entropy loss and perceptual loss, with the embedding extraction module as an auxiliary network. Afterwards, the two modules are fine-tuned simultaneously to reduce potential mismatch. The idea is illustrated in Fig. \ref{fig:Overview}. Secondly, to verify the effectiveness of our proposed network, we perform a speaker verification task on the development and evaluation parts of Speakers in the Wild (SITW). The SITW data are viewed as the high SNR condition in our work. We also simulate a noisy SITW by corrupting SITW with background noise; this noisy SITW is set as the low SNR condition. \section{Perceptual Loss} \subsection{Problem description} Let $X$ denote the log-Mel spectrogram of a noisy utterance. The utterance is corrupted by additive background noise, so that the spectrogram can be written as $X=S+N$, where $S$ and $N$ are the log-Mel spectrograms of the clean utterance and the noise, respectively. The target of speech enhancement in our work is to estimate a mask $M$, from which the estimated clean spectrogram $\hat{S}$ is obtained by element-wise multiplication of the mask and the input utterance: $\hat{S}=X\otimes M$. Conventionally, a speech enhancement network is trained using the Euclidean distance between the estimated clean spectrogram and the ground truth: $\mathcal{L}(S,\hat{S})= \lVert S-\hat{S}\rVert_2$. However, this method may cause some loss or distortion of speaker-related information in the resultant enhanced spectrogram.
If we apply the enhanced spectrogram directly to the speaker embedding extraction task, it may cause degraded performance, especially in high SNR conditions. We believe this is because the Euclidean distance cannot capture the perceptual difference between the estimated and ground-truth spectrograms. \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{table}[!t] \caption{Dataset and loss function for each stage.\\} \label{dataset} \centering \begin{tabular}{ @{}cccc@{} } \toprule \textbf{Stage}& \textbf{Clean data}& \textbf{Noisy data}& \textbf{Loss} \\ \midrule Pre-training 1 & \textbf{vox\_clean\_aug} & - & CE loss\\ \midrule \multirow{5}*{\tabincell{c}{Pre-training 2;\\ Finetune}} & \textbf{vox\_clean} & \textbf{vox\_noisy} & \multirow{5}*{\tabincell{c}{CE loss;\\ Percep\\-tual loss}} \\ & \textbf{vox\_clean} & \tabincell{c}{\textbf{vox\_noisy\_aug}\\\textbf{(MUSAN)}} & \\ & \tabincell{c}{\textbf{vox\_clean\_aug}\\\textbf{(RIRs)}} & \tabincell{c}{\textbf{vox\_noisy\_aug}\\\textbf{(RIRs)}} & \\ \bottomrule \end{tabular} \end{table} \subsection{Perceptual loss} It is believed that a trained deep network shows different activations in its hidden layers for different inputs, which helps the model to learn characteristics from the input feature. Based on this assumption, the perceptual loss, which is the distance between the activations in the hidden layers of a trained auxiliary network with the estimated and ground-truth spectrograms as input respectively, was proposed in~\cite{johnson2016perceptual}. According to the principle of the perceptual loss, the auxiliary network decides what information is to be kept. In this paper, our goal is to enhance the speaker-relevant information. Therefore, the speaker verification network can be trained first and then fixed as the auxiliary network to compute the perceptual loss for the enhancement module: \begin{equation} \mathcal{L}(X,S)= \sum_{i=1}^j\lVert \phi_i(S)-\phi_i(\theta(X))\rVert_2 \label{perceptual} \end{equation} where $\theta$ and $\phi$ are the enhancement module and the auxiliary speaker embedding extraction module respectively, and $\phi_i$ denotes the $i$-th layer of the embedding extraction module. Equation (\ref{perceptual}) is what \cite{kataria2020feature, kataria2020analysis} used. \section{Method} \subsection{Architecture} \label{architecture} We now introduce the end-to-end model that incorporates the enhancement process and speaker representation together. The flow chart is illustrated in Fig. \ref{exp} (d). As shown in Fig. \ref{exp} (d), the feature (a 30-dimensional log-Mel spectrogram) is first extracted from the time-domain speech signal. It is used for both the enhancement module and the representation module. To remove the effect of the corpus channel, as mentioned in \cite{pandey2020cross}, we apply channel normalization to help the enhancement module converge rapidly. The mean $\mu$ and standard deviation $\sigma$ of the clean and noisy channels are computed using the clean and noisy training sets respectively. A noisy utterance is normalized before enhancement as follows: \begin{equation} X(k) \leftarrow \frac{X(k)-\mu_n(k)}{\sigma_n(k)^2} \label{normalization} \end{equation} An inverse normalization is then performed on the enhanced speech: \begin{equation} \hat{S}(k) \leftarrow \hat{S}(k)\times \sigma_c(k)^2+\mu_c(k) \label{inverse-normalization} \end{equation} where $\sigma_n$ and $\mu_n$ are the statistical parameters of the noisy training set, and $\sigma_c$ and $\mu_c$ are the corresponding clean versions.
Here, $k$ denotes the index of the Mel-filter and $X(k)$ is the feature corresponding to the $k$-th Mel-filter. After the inverse channel normalization, instance normalization is applied before speaker representation extraction: \begin{equation} \hat{s}_i(k) \leftarrow \frac{\hat{s}_i(k)-\mu_i(k)}{\sigma_i(k)^2} \label{ins-normalization} \end{equation} where $\sigma_i$ and $\mu_i$ are the statistical parameters of each utterance $\hat{s}_i$. To exploit the temporal context information effectively, three bidirectional long short-term memory (BLSTM) layers and one fully-connected layer are stacked as the enhancement module. The number of features in the hidden state of each BLSTM is 128. A sigmoid activation is applied to the output of the fully-connected layer to generate the mask, whose values lie in the range 0 to 1. Our verification module adopts the x-vector architecture. The first five TDNN layers extract frame-level information. Then two fully connected layers extract information at the utterance level. The two phases are connected by a global average pooling layer. Since we do not know the effect of the activations of each layer on our task, all five TDNN layers and the first fully connected layer are used to compute the perceptual loss in the training stage. The speaker embedding is extracted from the first fully connected layer during inference. \subsection{Training strategy} The perceptual loss as in (\ref{perceptual}) was used in \cite{kataria2020feature} to train the speech enhancement network. The idea is illustrated in Fig. \ref{exp}(b): the clean utterance is used as the target, and the noisy utterance is trained to show the same activations of the auxiliary network as the target does. However, the gradient of (\ref{perceptual}) only concerns the enhancement of the noisy utterance. It may be potentially detrimental to clean utterances. Therefore, we propose the training scheme shown in Fig. \ref{exp}(c) and (d): the mask is generated and applied to the clean utterance as well, and the enhancement module is optimized using the gradients of both the noisy utterance and the clean utterance. The goal of the enhancement module is to make sure that the noisy utterance and the clean utterance have consistent representations. So the perceptual loss in our system is modified as: \begin{equation} \mathcal{L}_{pcptl}(\theta(X),\theta(S))= \sum_{i=1}^j\lVert \phi_i(\theta(X))-\phi_i(\theta(S))\rVert_2 \end{equation} where $X$ and $S$ are the noisy and clean utterances respectively. As mentioned in Section \ref{architecture}, $j$ equals $6$ in our case. \begin{figure*}[!t] \centering \includegraphics[trim=0cm 0.5cm 0cm 0cm, clip, width=18.3cm]{./exp.pdf} \caption{Block diagrams of (a) optimizing the speech enhancement module using cross entropy loss; (b) optimizing the speech enhancement module using perceptual loss on noisy speech; (c) optimizing the speech enhancement module using cross entropy loss and perceptual loss on both noisy and clean speech; (d) optimizing the speech enhancement and speaker extraction modules jointly using cross entropy loss and perceptual loss on both noisy and clean speech. ``Ch Norm'' and ``Ch Inv-Norm'' denote channel normalization and channel inverse-normalization, and ``Inst Norm'' is the instance normalization.} \label{exp} \end{figure*} A good speaker representation network should minimize the distance between utterances belonging to the same speaker, while maximizing the distance between different speakers in the embedding vector space.
The cross entropy loss in (\ref{ce}) is a common loss function in classification problems for enlarging the distance between different speakers. \begin{equation} \mathcal{L}_{ce}(\phi(\theta(S)),y)=-\log\frac{\exp(\phi(\theta(S))[y])}{\sum_{y'}\exp(\phi(\theta(S))[y'])} \label{ce} \end{equation} In our case, $\phi(\theta(S))$ is the speaker label posterior estimated by the speaker representation module following the speech enhancement process, and $y$ is the ground-truth speaker label. Intuitively, the softmax in (\ref{ce}) enlarges the inter-class discrimination, but it has no effect on the intra-class distance. To remedy this limitation, we train our system using the cross entropy loss and the perceptual loss jointly: \begin{equation} \mathcal{L}=\lambda\times\mathcal{L}_{pcptl}+(1-\lambda)\times\mathcal{L}_{ce} \end{equation} where the perceptual loss can be viewed as the criterion for the intra-class distance. The $\lambda$ is a constant for balancing the intra-class and inter-class terms. We set the value of $\lambda$ to 0.5 in our work. (Code is available online\footnote{Source code: https://github.com/mmmmayi/PL-EESR}.) \section{Experiments} \subsection{Datasets} \label{data} \textbf{Training set:} we combine VoxCeleb1 and VoxCeleb2 \cite{chung2018voxceleb2} as our training set. Background noise in this dataset is inevitable because it was collected from YouTube, but we still treat the samples in this dataset as clean utterances in our experiments and name the set \textbf{vox\_clean}. It should be noted that speakers who appear in both VoxCeleb2 and Speakers in the Wild (SITW) \cite{mclaren2016speakers} are removed from VoxCeleb2, because we use SITW as our test set. The \textbf{vox\_clean} set is augmented with noise drawn from MUSAN \cite{snyder2015musan} and is convolved with simulated RIRs \cite{ko2017study}. The augmentation process is based on the Kaldi SITW/v2 recipe \cite{povey2011kaldi}. The augmented \textbf{vox\_clean} is called \textbf{vox\_clean\_aug}. To simulate speech in noisy conditions, we corrupt \textbf{vox\_clean} with noise randomly selected from MUSAN to form the noisy utterances. The SNR of each noisy utterance is randomly selected from the range 0 to 20, excluding $\{0,5,10,15,20\}$. This data set is named \textbf{vox\_noisy}. Each sample in \textbf{vox\_noisy} has a corresponding clean version in \textbf{vox\_clean}. To be consistent with \textbf{vox\_clean\_aug}, \textbf{vox\_noisy} is augmented by the same procedure, and we call the result \textbf{vox\_noisy\_aug}.\\ \textbf{Test set:} We use two datasets to evaluate the performance of our method in noisy and clean conditions. The first dataset is SITW. Similar to VoxCeleb, the speech in SITW is collected from open-source media channels. We use the core-core condition of the SITW development and evaluation parts to evaluate our network in the clean condition. We generated the noisy Speakers in the Wild (NSITW) as our second test set. The noise used to corrupt SITW is provided by the DNS-challenge \cite{reddy2021icassp}. We manually select the noise categories which are similar to MUSAN (babble, music and plain noise). In total, 282 categories are used, and the SNR of each noisy utterance in NSITW is randomly selected from the set $\{0,5,10,15,20\}$. NSITW is used to evaluate the performance of our model in noisy conditions. \subsection{Feature} In VoxCeleb1 and VoxCeleb2, multiple scenarios are given for each speaker, and there are several utterances in each scenario.
\subsection{Feature} In VoxCeleb1 and VoxCeleb2, multiple scenarios are given for each speaker, and there are several utterances in each scenario. We concatenate the utterances in the same scenario into one training utterance. We generate a 30-D log Mel-spectrogram as the feature from each training utterance. The sample rate of all utterances in our work is 16 kHz. The feature is extracted with a Hann window of 400 samples and a hop size of 160 samples. This feature is used for both the speech enhancement module and the speaker representation module, but we note that mean and variance normalization (MVN) is performed only for speaker representation. Training utterances shorter than 500 frames and speakers with fewer than 10 utterances are removed; this leaves 5916 speakers in the training set. We randomly clip a consecutive 300-frame segment from each training utterance to optimize our network during training. We do not use voice activity detection in the training stage because the silence segments have already been removed from VoxCeleb1 and VoxCeleb2, but energy-based voice activity detection is performed in the test stage. \subsection{Training} \label{training} Our training process includes two stages: a pre-training stage and a finetune stage. In the pre-training stage, we first train our speaker representation module (Pre-training 1) with a batch size of 512 and the stochastic gradient descent (SGD) optimizer. The initial learning rate is 0.2 and it is halved whenever the loss decrease ratio in an epoch is less than 0.01. Early stopping is applied as soon as the learning rate decreases twice in succession. As shown in Table \ref{dataset}, \textbf{vox\_clean\_aug} is used to pre-train the speaker representation module, and only the cross entropy loss on the speaker label is used in this stage. We also use the trained speaker representation module as one of our baseline models to show the effect of speech enhancement. The trained speaker representation module is then fixed as the auxiliary network to pre-train our speech enhancement module (Pre-training 2). The enhancement module is optimized using the perceptual loss and the cross entropy loss jointly. As summarized in Table \ref{dataset}, three pairs of clean and noisy utterances are used in this stage. For \textbf{vox\_noisy}, the clean data are the corresponding samples in \textbf{vox\_clean}. For \textbf{vox\_noisy\_aug}, two types of augmentation are included, just like \textbf{vox\_clean\_aug}. When utterances in \textbf{vox\_noisy} are augmented with MUSAN, we expect our enhancement module to remove the effect of the augmentation as well, so the clean data for these utterances are their corresponding utterances in \textbf{vox\_clean}. However, for the utterances in \textbf{vox\_noisy\_aug} which are convolved with RIRs, we do not expect our enhancement to perform dereverberation, so their corresponding clean utterances come from the samples in \textbf{vox\_clean\_aug} that are augmented with RIRs. The batch size of this stage is 128, which includes 64 noisy utterances and their clean versions. We use the Adadelta optimizer with an initial learning rate of 0.3 in this stage. The learning rate decrease and early stopping scheme are identical to those in Pre-training 1. After pre-training, the speech enhancement module and the speaker representation module are finetuned together to reduce potential mismatch. Except that the initial learning rate is 0.0001 in this stage, both the training settings and the data set are the same as in Pre-training 2.
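The learning rate schedule and early stopping rule described above can be summarized in a short sketch; \texttt{train\_one\_epoch} is a hypothetical callback that runs one epoch at the given learning rate and returns the training loss:
\begin{verbatim}
def train_with_schedule(train_one_epoch, lr=0.2, min_ratio=0.01):
    """Halve the learning rate when the per-epoch loss decrease ratio
    drops below 1%, and stop after two halvings in succession."""
    prev_loss, successive_halvings = None, 0
    while successive_halvings < 2:
        loss = train_one_epoch(lr)
        if prev_loss is not None:
            if (prev_loss - loss) / prev_loss < min_ratio:
                lr *= 0.5
                successive_halvings += 1
            else:
                successive_halvings = 0  # halvings must be successive
        prev_loss = loss
    return lr
\end{verbatim}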
\subsection{Evaluation} To evaluate the ability of our system to extract robust speaker representations in both clean and noisy environments, we conduct the speaker verification task on both SITW and NSITW. In the inference stage, \textbf{vox\_noisy\_aug} passed through the enhancement module is used to train the PLDA backend. The Equal Error Rate (EER) and minimum Detection Cost Function (minDCF) with target prior $p = 0.05$ are used to evaluate our system.
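For reference, the EER reported below can be computed from scored verification trials as in the generic sketch that follows (assuming numpy; the PLDA scoring itself happens upstream of this step, and this is not the exact evaluation code used here):
\begin{verbatim}
import numpy as np

def equal_error_rate(scores, labels):
    """EER from verification scores; labels: 1 = target, 0 = non-target."""
    order = np.argsort(scores)[::-1]       # sweep thresholds high -> low
    labels = np.asarray(labels)[order]
    n_tar, n_non = labels.sum(), (1 - labels).sum()
    fa = np.cumsum(1 - labels) / n_non     # false-acceptance rate
    fr = 1.0 - np.cumsum(labels) / n_tar   # false-rejection rate
    i = np.argmin(np.abs(fa - fr))
    return (fa[i] + fr[i]) / 2
\end{verbatim}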
\subsection{Baseline} As mentioned in Section \ref{training}, one of our baseline models is our proposed model without the speech enhancement module (PL-EESR w/o enh), which is used to assess the effect of speech enhancement on our system. We then use the standard x-vector as our second baseline model (x-vector), trained with the Kaldi SITW/v2 recipe; we regard the x-vector as the state-of-the-art model for the speaker verification task. We trained PL-EESR w/o enh using the setting of Pre-training 1 introduced in Section \ref{training}; this is the only difference between it and the x-vector. \section{Results} \begin{table}[!t] \caption{Comparison of our PL-EESR with two baseline models on SITW and NSITW.} \label{result1} \centering \setlength\tabcolsep{3pt} \begin{tabular}{ p{1cm}ccclcc} \toprule \multicolumn{1}{c}{Test set}& \multicolumn{1}{c}{System}& \multicolumn{2}{c}{Dev} & & \multicolumn{2}{c}{Eval} \\ \cline{3-4} \cline{6-7} & & EER & DCF & &EER&DCF\\ \hline \multirow{3}*{SITW}& x-vector& 2.965&0.1946 & &3.417 & 0.2241\\ &PL-EESR w/o enh &3.656&0.2215&&3.442 & 0.2210\\ & \textbf{PL-EESR} &\textbf{2.851} & \textbf{0.1805}& &\textbf{2.460}& \textbf{0.1690}\\ \hline \multirow{3}*{NSITW} & x-vector& 6.816&0.3685 & &7.190& 0.4131\\ &PL-EESR w/o enh &7.470&0.3835&& 8.066& 0.4550\\ & \textbf{PL-EESR} &\textbf{4.698} & \textbf{0.3007}& & \textbf{5.272} &\textbf{0.3334} \\ \bottomrule \end{tabular} \end{table} \begin{table*}[h] \caption{Comparison of our end-to-end training scheme with different training settings. ``Joint'' and ``Separate'' denote whether the enhancement module and the speaker representation module are fine-tuned together. ``Noisy'' and ``Clean'' denote whether the network is optimized on the gradient of the noisy utterance or the clean utterance.\\} \label{result2} \centering \begin{tabular}{ ccccccclcc} \toprule \multicolumn{1}{c}{Test set}& \multicolumn{1}{c}{System}& \multicolumn{1}{c}{Training}& \multicolumn{1}{c}{CE Loss Objective} &\multicolumn{1}{c}{Perceptual Loss Objective} &\multicolumn{2}{c}{Dev} & & \multicolumn{2}{c}{Eval} \\ \cline{6-7} \cline{9-10} &&& &&EER & DCF & &EER&DCF\\ \hline \multirow{4}*{SITW}&(a)&Separate &Noisy &- &\textbf{2.734}&0.1906 && 2.788 & 0.1833 \\ &(b) &Separate&-&Noisy&3.812 & 0.2274& &3.827&0.2400 \\ &(c) &Separate&Noisy; Clean&Noisy; Clean&2.811 &0.1919 & &2.570&0.1804 \\ &(d) &Joint&Noisy; Clean&Noisy; Clean& 2.851&\textbf{0.1805}& &\textbf{2.460}& \textbf{0.1690}\\ \hline \multirow{4}*{NSITW}&(a)&Separate &Noisy &- &5.930 & 0.3351&&5.931& 0.3706\\ &(b) &Separate&-&Noisy&7.624 &0.3951& &8.124&0.4611\\ &(c) &Separate&Noisy; Clean&Noisy; Clean&5.160 &0.3161 & &5.632& 0.3539\\ &(d) &Joint&Noisy; Clean&Noisy; Clean& \textbf{4.698}&\textbf{0.3007} & &\textbf{5.272}& \textbf{0.3334}\\ \bottomrule \end{tabular} \end{table*} \subsection{Baseline results} Table \ref{result1} compares the baselines and our PL-EESR on the development and evaluation sets of SITW (top) and NSITW (bottom), respectively. Bold text indicates the best performance for each metric in each condition. In the clean condition, compared with the system without enhancement processing, our method achieves 28.5\% and 23.5\% relative improvements in terms of EER and minDCF respectively on the evaluation set, and 22.0\% and 18.5\% on the development set. In the noisy environment, the relative improvements in terms of EER and minDCF are 34.6\% and 26.7\% for the evaluation set, and 37.1\% and 21.6\% for the development set. Compared to the standard x-vector, which can be seen as the state-of-the-art model for speaker representation, the relative improvements on SITW are 28.0\% and 24.6\% in terms of EER and minDCF for the evaluation set, and 3.8\% and 7.2\% for the development set. For NSITW, the improvements on the evaluation and development sets are 26.7\%, 19.3\% and 31.1\%, 18.4\% respectively. We conclude from this comparison that our proposed method is beneficial to speaker representation extraction from both clean and noisy utterances. \subsection{Overall comparisons} Table \ref{result2} shows the performance of our end-to-end model with different training settings. First, we trained the speech enhancement module and the speaker verification module separately to verify the effect of the perceptual loss; that is, there is no finetune stage for Systems (a)-(c) in Table \ref{result2}, and all three systems are optimized in the stage of Pre-training 2. In the first experiment, we use only the feedback of the speaker classification task, i.e., the cross entropy loss on the predicted speaker ID with the noisy utterance as input. The system flow chart is shown in Fig. \ref{exp}(a), and this training scheme is based on the basic idea in \cite{shon2019voiceid}. The results are summarized as System (a) in Table \ref{result2}. System (b) uses the original perceptual loss of \cite{kataria2020feature}: the clean utterance is set as the target and the module is trained on the gradient of the noisy utterance, as shown in Fig. \ref{exp}(b); the results are summarized as System (b) in Table \ref{result2}. Then we modified the perceptual loss function. Since the cross entropy loss on the speaker label can be used to enlarge the inter-class distance, the perceptual loss in our system focuses on decreasing the intra-class distance. Specifically, after computing the perceptual loss, the module is optimized on the gradients of both the noisy utterance and the clean utterance. To make sure this idea works for utterances in clean environments as well, the cross entropy is computed for both the noisy utterance and the clean utterance. The flow chart of this experiment is shown in Fig. \ref{exp}(c) and the results are summarized as System (c) in Table \ref{result2}. System (d) is used to verify the benefit of joint training in decreasing the mismatch between the two modules; the only difference between System (c) and System (d) in Table \ref{result2} is that System (d) is trained with the finetune stage. Comparing the results of Systems (a)-(c) in Table \ref{result2} with the baseline performance in Table \ref{result1}, we find that System (b) is harmful in both clean and noisy environments. Perhaps this degradation arises because \textbf{vox\_clean} still contains some background noise although we set it as the target in training. However, both System (a) and System (c) outperform the baseline. This shows the necessity of end-to-end speech enhancement to reduce distortion and information loss.
Finally, System (d) gains the largest improvement among these four systems in most conditions, the exception being the EER on the development set of SITW. \section{Conclusion} Motivated by the unsatisfactory performance of speech enhancement applied to the speaker embedding extraction task, we proposed an end-to-end training scheme for robust speaker representation. The model is trained with loss functions that aim at mapping the noisy and clean utterances to an identical representation as well as classifying speaker labels. The experimental results show that our method outperforms the baseline models in both clean and noisy environments. \bibliographystyle{IEEEbib}
\section{Introduction} We consider the first order relativistic equations [1--12], which describe the motion of an elementary particle having spin $s=3/2$. A comparative characteristic of such equations is given. The second order equations of the Klein--Gordon type are not discussed. The known first order relativistic equations of motion of an elementary particle with spin $s=3/2$ [1--9] can be divided into two groups. The first group consists of partial cases of the corresponding equations for arbitrary spin. The second group consists of equations specially suggested for the description of the properties of an elementary particle having spin $s=3/2$. Recently, in order to describe the motion of a particle of arbitrary spin better, a new equation [10--12] was introduced. Such equation has an interesting special case for the spin $s=3/2$. The first step in the analysis of the equation [10--12] is a comparison with the known approaches to the problem. Below, in order to compare the main properties of the equations [1--12] with each other, the partial case $s=3/2$ is chosen. The first equation for an arbitrary half-integer spin was proposed by P. Dirac in 1936 [1]. After that, step by step, the main equations for an arbitrary spin were suggested by M. Fierz and W. Pauli [2,3], by H. Bhabha [4--6] and by W. Bargmann and E. Wigner [7]. In the first group the Bhabha [4--6] and Bargmann--Wigner [7] equations are the best known. The Rarita--Schwinger [8] and Fisk--Tait [9] equations are the best known among the special equations (second group) for the particle having spin 3/2. Note that hadrons with spin 3/2 have long been known in particle physics. In particular, there is a multiplet of $\Delta$-baryons, or $\Delta$-resonances ($\Delta^{++}, \Delta^{+},\Delta^{0},\Delta^{-}$), consisting of three quarks of the types u and d. The $\Omega^{-}$ hyperon consists of three strange quarks (sss) and has a lifetime of $10^{-10}$ s. Effective quantum field theory describes some properties of such elementary particles on the basis of the Rarita--Schwinger equation [13--15] and its minor modifications [16]. A fundamentally different physical example arises in supersymmetry. It is the gravitino, which is the superpartner of the graviton. However, contrary to hyperons (baryons with spin 3/2), superpartners have not been observed experimentally. Moreover, the graviton itself generates many more questions than clear answers. Registration of $\Delta$-resonances in LHC experiments continues, so we have new candidates for particles with spin 3/2 [17, 18]. Furthermore, such elementary particles are regularly proposed as carriers of dark matter [19--21]. Finally, spin 3/2 is found not only in elementary particles but also in other physical systems [22, 23]. It should be especially noted that the theoretical description of elementary particles with spin 3/2 in the framework of field theory still faces a number of fundamental problems (see, e.g., [24--31], as well as [13--16]). This is typical for all fields with spins higher than $s=1$. The main difficulties are caused by the fact that, in addition to the components related to the spin $s=3/2$, the corresponding quantum field also has redundant components. Moreover, in equations [1--6, 8, 9] the representations of the Lorentz group, not of the Poincaré group, are applied. Nevertheless, the Poincaré symmetry of the Rarita--Schwinger system of equations is sometimes mentioned, see, e.g., [32, 33].
\section{The Pauli--Fierz equation} In [3] both the field of arbitrary spin and the partial cases of spin 3/2 and spin 2 were considered. Tensors are used to describe the case of an integer spin, and spinors are used to describe a half-integer spin. W. Pauli and M. Fierz started from the Dirac equation [1]. Therefore, this object is sometimes called the Dirac--Pauli--Fierz equation. In the absence of external forces, the wave field corresponding to the particles with spin 3/2 is described by the following spinors (below we use the notation from [3]) \begin{equation} \label{Eg.1} a^{\dot{\alpha}}_{\beta\gamma}=a^{\dot{\alpha}}_{\gamma\beta}, \quad b^{\dot{\alpha}\dot{\beta}}_{\gamma}=b^{\dot{\beta}\dot{\alpha}}_{\gamma}. \end{equation} Spinors (1) go into one another under reflections and satisfy the equations \begin{equation} \label{Eg.2} p^{\dot{\beta}\rho}a^{\dot{\alpha}}_{\rho\gamma}+p^{\dot{\alpha}\rho}a^{\dot{\beta}}_{\rho\gamma}=2mb^{\dot{\alpha}\dot{\beta}}_{\gamma}, \quad p_{\alpha\dot{\rho}}b^{\dot{\rho}\dot{\gamma}}_{\beta}+p_{\beta\dot{\rho}}b^{\dot{\rho}\dot{\gamma}}_{\alpha}=2ma^{\dot{\gamma}}_{\alpha\beta}, \end{equation} together with the conditions \begin{equation} \label{Eg.3} p_{\dot{\alpha}}\mbox{}^{\beta}a^{\dot{\alpha}}_{\beta\gamma}=0, \quad p_{\dot{\alpha}}\mbox{}^{\gamma}b^{\dot{\alpha}\dot{\beta}}_{\gamma}=0. \end{equation} Here the operator of momentum has the specific form $p_{\dot{\alpha}\beta}=-i\sigma_{\dot{\alpha}\beta}^{k}\frac{\partial}{\partial x_{k}}$, and $\sigma_{\dot{\alpha}\beta}^{k}=\left(\sigma^{1},\sigma^{2},\sigma^{3},iI\right)_{\dot{\alpha}\beta}$ are the Pauli matrices in the standard representation. The matrix $\sigma_{k}^{\dot{\alpha}\beta}$ is the Hermitian conjugate of $-\sigma_{\alpha\dot{\beta}}^{k}$. The second order wave equation for $a^{\dot{\gamma}}_{\alpha\beta}$ and $b^{\dot{\alpha}\dot{\beta}}_{\gamma}$ follows from these equations. The additional conditions (3) ensure that the formalism contains no spin-1/2 components. Equations (1) and (2) were derived in [3] from the variation principle as well. The corresponding Lagrangian formalism leads to the differential relationship between the spinors $a^{\dot{\gamma}}_{\alpha\beta}$ and $b^{\dot{\alpha}\dot{\beta}}_{\gamma}$ in the form of the equations $p^{\dot{\beta}\rho}a^{\dot{\alpha}}_{\gamma\rho}=mb^{\dot{\alpha}\dot{\beta}}_{\gamma}, \, p_{\alpha\dot{\rho}}b^{\dot{\rho}\dot{\gamma}}_{\beta}=ma^{\dot{\gamma}}_{\alpha\beta}$. The wave function in the system of equations (2) and (3) is a matrix-column with 16 rows. The Pauli--Fierz equation for the spin 3/2 case is equivalent to the Rarita--Schwinger equation, which we consider below. Such equivalence has been noted by a number of authors; see, for example, [34]. Note that in [35] the Rarita--Schwinger equation was derived directly from the Pauli--Fierz equation. Unfortunately, the author of [35] forgot to refer to the Rarita--Schwinger article [8], which at that time was already well known. \section{The Bhabha equation} This equation is known from [4], see also [5]. This matrix first-order partial differential equation has one general form for arbitrary spin. Nevertheless, for each particular value of the spin the main matrices have different explicit forms, which depend on the different explicit types of generators of the Lie algebra of the corresponding representation of the Lorentz group.
The Bhabha equation has the form \begin{equation} \label{Eg.4} \left(p_{\mu}\alpha^{\mu}+m\right)\psi\left(x\right)=0, \quad \mu=\overline{0,3}, \end{equation} where $p_{\mu}\equiv i\partial/\partial x^{\mu}$, $m$ is an arbitrary constant (the mass of the particle; different mass values for particles of different spins are possible), and $\alpha^{\mu}$ are four matrices that satisfy a different set of commutation relations in each case. Such equation is invariant with respect to arbitrary transformations of the Lorentz group if the matrices $\alpha^{\mu}$ obey the commutation relations \begin{equation} \label{Eg.5} \left[\alpha^{\mu},S^{\rho\sigma}\right]=\alpha^{\mu}S^{\rho\sigma}-S^{\rho\sigma}\alpha^{\mu}=g^{\mu\rho}\alpha^{\sigma}-g^{\mu\sigma}\alpha^{\rho}, \end{equation} where the metric tensor $g^{\mu\nu}$ is defined by $g^{00}=-g^{11}=-g^{22}=-g^{33}=1$, and the matrices $S^{\rho\sigma}=-S^{\sigma\rho}$ determine the six generators of transformations of a concrete representation of the Lie algebra of the Lorentz group, which satisfy the commutation relations \begin{equation} \label{Eg.6} \left[S^{\mu\nu},S^{\rho\sigma}\right]=-g^{\mu\rho}S^{\nu\sigma}+g^{\mu\sigma}S^{\nu\rho}+g^{\nu\rho}S^{\mu\sigma}-g^{\nu\sigma}S^{\mu\rho}. \end{equation} The condition $\left[\alpha^{\mu},\alpha^{\nu}\right]=S^{\mu\nu}$ is necessary as well. The number of components of the wave function in equation (4) is different for different values of the spin. In [6] Bhabha described the partial case $s=3/2$ and proved that in this case his equation coincides with the Rarita--Schwinger equation; in particular, the wave function has 16 components. Note also that Bhabha and his followers used the representations of the Lorentz group, not the representations of the Poincaré group. \section{The Bargmann--Wigner equation} The equation from [7], as is evident, e.g., from [36], is a Dirac-like equation in spaces of arbitrary dimensions, in which the gamma matrix representations of the Clifford algebra over the field of complex numbers are determined. The Bargmann--Wigner wave function for a particle of arbitrary spin $s$ is a multispinor $\psi_{\alpha_{1}\alpha_{2}...\alpha_{2s}}$ with $2s$ spinor indices, each running independently from 1 to 4. Such wave function is completely symmetric under permutations of its indices. The wave equation is given by \begin{equation} \label{Eg.7} \left(\gamma^{(n)}_{\mu}p^{\mu}-m\right)\psi\left(x\right)=0, \quad n=1,...,2s, \end{equation} where the $\gamma^{(n)}_{\mu}$ are the set of Dirac $\gamma$-matrices operating on the $n$-th spinor index of $\psi$. More precisely, if $\psi$ is regarded as a vector in the direct product of $2s$ four-dimensional spaces (corresponding to the $2s$ spinor indices), and if we denote by $\mathrm{I}$ and $\gamma_{\mu}$ the unit matrix and the usual Dirac matrices in any one of these factor spaces, then $\gamma^{(n)}_{\mu}$ is the Kronecker product $\gamma^{(n)}_{\mu}=\mathrm{I}\times ... \times \gamma_{\mu}\times ... \times \mathrm{I}$, with $\gamma_{\mu}$ occurring as the $n$-th factor. The dimension of such spaces is $2s\times4$; hence, for $s=3/2$ the wave function has 12 components. Note the special development of the Bargmann--Wigner formalism for the case of a free spin-3/2 particle, which was published in [37]. A comparison with the Rarita--Schwinger theory is discussed there, and it is shown that the theories are equivalent.
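Although equation (7) is symbolic, its building blocks are easy to realize explicitly. The following numerical sketch (assuming Python with numpy; an illustration, not part of the original formalism) constructs the $\gamma^{(n)}_{\mu}$ for $2s=3$ as Kronecker products and checks that each set satisfies the Clifford relation, while matrices acting on different spinor indices commute:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
O2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
# Dirac matrices in the standard representation
gammas = [np.block([[I2, O2], [O2, -I2]])] + \
         [np.block([[O2, s], [-s, O2]]) for s in sigma]
g = np.diag([1, -1, -1, -1])  # metric signature (+,-,-,-)

def gamma_n(n, mu, N=3):
    """gamma^{(n)}_mu = I x ... x gamma_mu x ... x I, factor n out of N = 2s."""
    out = np.eye(1)
    for f in range(N):
        out = np.kron(out, gammas[mu] if f == n else np.eye(4))
    return out

# Each set {gamma^{(n)}_mu} satisfies the Clifford relation ...
for n in range(3):
    for mu in range(4):
        for nu in range(4):
            A, B = gamma_n(n, mu), gamma_n(n, nu)
            assert np.allclose(A @ B + B @ A, 2 * g[mu, nu] * np.eye(64))
# ... and matrices acting on different spinor indices commute.
A, B = gamma_n(0, 1), gamma_n(2, 2)
assert np.allclose(A @ B, B @ A)
print("gamma^{(n)}_mu relations verified")
\end{verbatim}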
\section{The Rarita--Schwinger equation} This equation describes a particle with arbitrary half-integer spin, but the best known application is to the particle having spin 3/2. In [8] a fundamentally new object was introduced, which has a four-component spinor index and additional indices that define a symmetric tensor of arbitrary rank $k$. It is this object that acts as the wave function in the equation from [8]. This is a combination of two different elements of the Pauli--Fierz formalism [3] introduced in order to simplify it. For $k=0$ one has the ordinary Dirac equation; for $k=1$ it gives the 16-component equation for the particle having spin 3/2. Such equation is related to the description of fermions with spin 3/2 only after additional conditions are imposed. It is this equation that is widespread and is called the Rarita--Schwinger equation [8] for a particle with spin 3/2. In addition to the existence of redundant components, problems naturally arise with the transformational properties of the 16-component vector-spinor, causality, relativistic invariance and quantization. The main problem is the interaction with external fields. These difficulties of the Rarita--Schwinger equation have been mentioned by many researchers, see, for example, [24--31]. Hence, the Rarita--Schwinger equation without any loss of generality can be written as \begin{equation} \label{Eg.8} \left(i\gamma^{\mu}\partial_{\mu}-m\right)\psi_{\nu}-\frac{1}{3}\left(\gamma_{\nu}i\partial_{\mu}+\gamma_{\mu}i\partial_{\nu}\right)\psi^{\mu}+\frac{1}{3}\gamma_{\nu}\left(i\gamma^{\alpha}\partial_{\alpha}+m\right)\left(\gamma^{\mu}\psi_{\mu}\right)=0. \end{equation} After imposing additional conditions the system of equations is obtained as follows \begin{equation} \label{Eg.9} \left(i\gamma^{\mu}\partial_{\mu}-m\right)\psi_{\nu}\left(x\right)=0, \quad \partial^{\mu}\psi_{\mu}=0, \quad \gamma^{\mu}\psi_{\mu}=0. \end{equation} \section{The Fisk--Tait equation} This equation was suggested in [9] for the particle having spin 3/2. It is a Dirac-like equation with a 24-component wave function in the form of an antisymmetric tensor-spinor of the Lorentz group \begin{equation} \label{Eg.10} \Psi^{\mu\nu}=\left( \begin{array}{cccc} 0 & \Psi^{01} & \Psi^{02} & \Psi^{03} \\ -\Psi^{01} & 0 & \Psi^{12} & \Psi^{13} \\ -\Psi^{02} & -\Psi^{12} & 0 & \Psi^{23} \\ -\Psi^{03} & -\Psi^{13} & -\Psi^{23} & 0 \end{array} \right) \end{equation} Here each component of the tensor (10) is a 4-component Dirac spinor. In the notations \begin{equation} \label{Eg.11} \psi=\left(\Psi^{01},\Psi^{02},\Psi^{03}\right), \quad \chi=\left(\Psi^{23},\Psi^{31},\Psi^{12}\right), \end{equation} the Fisk--Tait equation has the form \begin{equation} \label{Eg.12} \left(\gamma^{\rho}p_{\rho}-m\right)\psi^{\mu\nu}\left(x\right)=0, \quad \gamma_{\mu}\gamma_{\nu}\psi^{\mu\nu}=0, \quad \varepsilon^{\mu\nu}{}_{\sigma\rho}p_{\nu}\psi^{\sigma\rho}=0, \end{equation} $$\left(\gamma^{\rho}p_{\rho}-m\right)\chi^{\mu\nu}\left(x\right)=0, \quad \gamma_{\mu}\gamma_{\nu}\chi^{\mu\nu}=0, \quad p_{\mu}\chi^{\mu\nu}=0.$$ The system of equations (12) was introduced in order to overcome the difficulties mentioned in [24--31]. Only partial and rather conditional success was achieved. The equation is criticized in [38] for doubling the parity and for the presence of negative energy. Nevertheless, this approach still has followers today [39]. Improvement of the models [3, 5, 7--9] considered here is still relevant, see, e.g., [16, 39].
\section{Equation without redundant components} The relativistic quantum-mechanical equation [10, 11] for the spin $s=3/2$ fermion-antifermion doublet is given by \begin{equation} \label{Eg.13} i\partial_{0}f\left(x\right)=\sqrt{-\Delta +m^{2}}f\left(x\right), \quad f=\mathrm{column}\left|f^{1},f^{2},f^{3},f^{4},f^{5},f^{6},f^{7},f^{8}\right|. \end{equation} The general solution has the form \begin{equation} \label{Eg.14} f(x)= \left| {{\begin{array}{*{20}c} f_{\mathrm{part}} \hfill \\ f_{\mathrm{antipart}} \hfill \\ \end{array} }} \right| =\frac{1}{\left(2\pi\right)^{\frac{3}{2}}}\int d^{3}k e^{-ikx}b^{\mathrm{A}}(\overrightarrow{k})\mathrm{d}_{\mathrm{A}}, \quad \mathrm{A}=\overline{1,8}, \end{equation} where $\mathrm{d}_{\mathrm{A}}$ are the orts of the 8-component Cartesian basis: $\mathrm{d}_{\mathrm{A}}=\left\{\delta_{\mathrm{A}\dot{\mathrm{B}}}\right\},\, \mathrm{A},\dot{\mathrm{B}}=\overline{1,8}$. The functions $b^{1}(\overrightarrow{k}), \, b^{2}(\overrightarrow{k}), \, b^{3}(\overrightarrow{k}), \, b^{4}(\overrightarrow{k})$ in the solution (14) are the momentum-spin amplitudes of the massive fermion with spin $s=3/2$ and spin projections $(3/2,1/2,-1/2,-3/2)$, respectively; $b^{5}(\overrightarrow{k}), \, b^{6}(\overrightarrow{k}),$ $b^{7}(\overrightarrow{k}), \, b^{8}(\overrightarrow{k})$ are the momentum-spin amplitudes of the antiparticle (antifermion) with spin $s=3/2$ and spin projections $(-3/2,-1/2,1/2,3/2)$, respectively. Such interpretation of the amplitudes follows directly from the eigenvalue equations for the momentum and spin operators (the explicit form of the spin operator is given just below). Equation (13) is considered in the rigged Hilbert space $\mathrm{S}^{3,8}\subset\mathrm{H}^{3,8}\subset\mathrm{S}^{3,8*}$, where $\mathrm{H}^{3,8}$ is the Hilbert space of 8-component functions, $\mathrm{S}^{3,8}$ is the corresponding space of Schwartz test functions, which is dense in the Schwartz generalized function space $\mathrm{S}^{3,8*}$ (the space $\mathrm{S}^{3,8*}$ is conjugated to $\mathrm{S}^{3,8}$ by the corresponding topology). We call the model of physical reality based on equation (13) the relativistic canonical quantum mechanics of the spin $s=3/2$ fermion-antifermion doublet. Indeed, equation (13) directly demonstrates the relativistic relationship between the energy, momentum and mass of the particle, does not lead to negative energies, and describes the SU(2) spin of the antiparticle as a mirror reflection of the particle SU(2) spin: \begin{equation} \label{Eg.15} s^{1}_{8}=\left( \begin{array}{cccc} s^{1} & 0 \\ 0 & -s^{1} \\ \end{array} \right), \quad s^{2}_{8}=\left( \begin{array}{cccc} s^{2} & 0 \\ 0 & s^{2} \\ \end{array} \right), \quad s^{3}_{8}=\left( \begin{array}{cccc} s^{3} & 0 \\ 0 & -s^{3} \\ \end{array} \right), \end{equation} where \begin{equation} \label{Eg.16} s^{1}=\frac{1}{2}\left( \begin{array}{cccc} 0 & \sqrt{3} & 0 & 0 \\ \sqrt{3} & 0 & 2 & 0 \\ 0 & 2 & 0 & \sqrt{3} \\ 0 & 0 & \sqrt{3} & 0 \end{array} \right), \, s^{2}=\frac{i}{2}\left( \begin{array}{cccc} 0 & -\sqrt{3} & 0 & 0 \\ \sqrt{3} & 0 & -2 & 0 \\ 0 & 2 & 0 & -\sqrt{3} \\ 0 & 0 & \sqrt{3} & 0 \end{array} \right), \, s^{3}=\frac{1}{2}\left( \begin{array}{cccc} 3 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -3 \end{array} \right). \end{equation} Here the antiparticle is related to the four lower components of the column in (13).
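As an elementary cross-check (a numerical illustration, not part of the original derivation), one can verify that the matrices (16) realize the spin $s=3/2$ representation of SU(2), i.e., $[s^{j},s^{k}]=i\varepsilon^{jkl}s^{l}$ and $\overrightarrow{s}^{2}=s(s+1)\mathrm{I}_{4}$. The sketch below assumes Python with numpy:
\begin{verbatim}
import numpy as np

r3 = np.sqrt(3)
# Spin s = 3/2 SU(2) generators from Eq. (16).
s1 = 0.5 * np.array([[0, r3, 0, 0], [r3, 0, 2, 0],
                     [0, 2, 0, r3], [0, 0, r3, 0]], dtype=complex)
s2 = 0.5j * np.array([[0, -r3, 0, 0], [r3, 0, -2, 0],
                      [0, 2, 0, -r3], [0, 0, r3, 0]], dtype=complex)
s3 = 0.5 * np.diag([3, 1, -1, -3]).astype(complex)
S = [s1, s2, s3]

# su(2) commutation relations [s^j, s^k] = i eps_{jkl} s^l (cyclic triples).
for (j, k, l) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    assert np.allclose(S[j] @ S[k] - S[k] @ S[j], 1j * S[l])

# Casimir operator: s^2 = s(s+1) I with s = 3/2.
casimir = sum(s @ s for s in S)
assert np.allclose(casimir, (3 / 2) * (5 / 2) * np.eye(4))
print("Eq. (16) matrices realize spin 3/2")
\end{verbatim}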
For the particle having spin $s=3/2$, we put into consideration an equation that is an analogue of the Dirac equation for the fermion with spin 1/2 and nonzero mass. In Hamiltonian form this equation is given by \begin{equation} \label{Eg.17} \left[i\partial_{0}-\Gamma_{8}^{0}(\overrightarrow{\Gamma}_{8}\cdot \overrightarrow{p}+m)\right]\psi(x)=0, \quad \psi=\mathrm{column}\left|\psi^{1},\psi^{2},\psi^{3},\psi^{4},\psi^{5},\psi^{6},\psi^{7},\psi^{8}\right|, \end{equation} where the $8\times 8$ gamma matrices have the form \begin{equation} \label{Eg.18} \Gamma_{8}^{0}=\left| {{\begin{array}{*{20}c} \mathrm{I}_{4} \hfill & 0 \\ 0 \hfill & -\mathrm{I}_{4} \\ \end{array} }} \right|, \quad \Gamma_{8}^{j}=\left| {{\begin{array}{*{20}c} 0 \hfill & \Sigma^{j} \\ -\Sigma^{j} \hfill & 0 \\ \end{array} }} \right|, \end{equation} with \begin{equation} \label{Eg.19} \Sigma^{j}=\left| {{\begin{array}{*{20}c} \sigma^{j} \hfill & 0 \\ 0 \hfill & \sigma^{j} \\ \end{array} }} \right|, \end{equation} where $\sigma^{j}$ are the standard $2\times 2$ Pauli matrices. Equation (17) is derived from the relativistic canonical quantum mechanics (13) on the basis of the transformation given by the extended Foldy--Wouthuysen operator $V$: \begin{equation} \label{Eg.20} V\left(\partial_{0}+i\omega\right)V^{-1}=\partial_{0}+i\Gamma_{8}^{0}(\overrightarrow{\Gamma}_{8}\cdot \overrightarrow{p}+m), \quad \psi = Vf; \quad \omega=\sqrt{-\Delta +m^{2}}=\sqrt{\overrightarrow{p}^{2}+m^{2}}. \end{equation} The operator $V$ is given by \begin{equation} \label{Eg.21} V=\frac{ i\Gamma^{j}_{8}\partial_{j}+\omega+m}{\sqrt{2\omega(\omega+m)}}\left| {{\begin{array}{*{20}c} \mathrm{I}_{4} \hfill & 0 \\ 0 \hfill & \mathrm{I}_{4}C \\ \end{array} }} \right|, \quad V^{-1}=\left| {{\begin{array}{*{20}c} \mathrm{I}_{4} \hfill & 0 \\ 0 \hfill & \mathrm{I}_{4}C \\ \end{array} }} \right|\frac{ -i\Gamma^{j}_{8}\partial_{j}+\omega+m}{\sqrt{2\omega(\omega+m)}}, \quad VV^{-1}=V^{-1}V=\mathrm{I}_{8}, \end{equation} where $\mathrm{I}_{4}$ is the $4 \times 4$ unit matrix and $C$ is the operator of complex conjugation, $C \psi = \psi^{*}$ (the operator of involution in $\mathrm{H}^{3,8}$). The transformation inverse to (20), (21) is valid as well. Note that the transformation (20), (21) can be applied only to operators (of the equation, energy, momentum, spin, etc.) taken in anti-Hermitian form.
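The algebraic content of the matrices (18) can be checked directly: they satisfy the Clifford relation $\Gamma^{\mu}_{8}\Gamma^{\nu}_{8}+\Gamma^{\nu}_{8}\Gamma^{\mu}_{8}=2g^{\mu\nu}\mathrm{I}_{8}$. A minimal numerical verification (assuming Python with numpy; an illustration, not part of the original text) is as follows:
\begin{verbatim}
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I4 = np.eye(4)
O4 = np.zeros((4, 4), dtype=complex)
Sigma = [np.kron(np.eye(2), s) for s in sigma]   # Sigma^j = diag(sigma^j, sigma^j), Eq. (19)
Gamma0 = np.block([[I4, O4], [O4, -I4]])         # Gamma^0_8, Eq. (18)
Gamma = [Gamma0] + [np.block([[O4, S], [-S, O4]]) for S in Sigma]

g = np.diag([1, -1, -1, -1])                     # metric signature (+,-,-,-)
for mu in range(4):
    for nu in range(4):
        anti = Gamma[mu] @ Gamma[nu] + Gamma[nu] @ Gamma[mu]
        assert np.allclose(anti, 2 * g[mu, nu] * np.eye(8))
print("8x8 matrices (18) satisfy the Clifford relation")
\end{verbatim}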
The general solution of equation (17) is given by \begin{equation} \label{Eg.22} \psi(x)=\frac{1}{\left(2\pi\right)^{\frac{3}{2}}}\int d^{3}k\left[e^{-ikx}c^{\mathrm{A}}(\overrightarrow{k})\mathrm{v}^{-}_{\mathrm{A}}(\overrightarrow{k})+e^{ikx}c^{*\mathrm{B}}(\overrightarrow{k})\mathrm{v}^{+}_{\mathrm{B}}(\overrightarrow{k})\right], \end{equation} where $\mathrm{A}=\overline{1,4}, \, \mathrm{B}=\overline{5,8}$ and the 8-component spinors $(\mathrm{v}^{-}_{\mathrm{A}}(\overrightarrow{k}), \, \mathrm{v}^{+}_{\mathrm{B}}(\overrightarrow{k}))$ are given by \small $$\mathrm{v}^{-}_{1}(\overrightarrow{k}) = N\left| \begin{array}{c} \widetilde{\omega}+m \\ 0 \\ 0 \\ 0 \\ k^{3} \\ k^{1}+ik^{2} \\ 0 \\ 0 \\ \end{array} \right|, \quad \mathrm{v}^{-}_{2}(\overrightarrow{k}) = N\left| \begin{array}{c} 0 \\ \widetilde{\omega}+m \\ 0 \\ 0 \\ k^{1}-ik^{2} \\ -k^{3} \\ 0 \\ 0 \\ \end{array} \right|,$$ $$ \mathrm{v}^{-}_{3}(\overrightarrow{k}) = N \left| \begin{array}{c} 0 \\ 0 \\ \widetilde{\omega}+m \\ 0 \\ 0 \\ 0 \\ k^{3} \\ k^{1}+ik^{2} \\ \end{array} \right|, \quad \mathrm{v}^{-}_{4}(\overrightarrow{k}) = N\left| \begin{array}{c} 0 \\ 0 \\ 0 \\ \widetilde{\omega}+m \\ 0 \\ 0 \\ k^{1}-ik^{2} \\ -k^{3} \\ \end{array} \right|,$$ \begin{equation} \label{Eg.23} \mathrm{v}^{+}_{5}(\overrightarrow{k}) = N\left| \begin{array}{c} k^{3} \\ k^{1}+ik^{2} \\ 0 \\ 0 \\ \widetilde{\omega}+m \\ 0 \\ 0 \\ 0 \\ \end{array} \right|, \quad \mathrm{v}^{+}_{6}(\overrightarrow{k}) = N\left| \begin{array}{c} k^{1}-ik^{2} \\ -k^{3} \\ 0 \\ 0 \\ 0 \\ \widetilde{\omega}+m \\ 0 \\ 0 \\ \end{array} \right|, \end{equation} $$\mathrm{v}^{+}_{7}(\overrightarrow{k}) = N\left| \begin{array}{c} 0 \\ 0 \\ k^{3} \\ k^{1}+ik^{2} \\ 0 \\ 0 \\ \widetilde{\omega}+m \\ 0 \\ \end{array} \right|, \quad \mathrm{v}^{+}_{8}(\overrightarrow{k}) = N\left| \begin{array}{c} 0 \\ 0 \\ k^{1}-ik^{2} \\ -k^{3} \\ 0 \\ 0 \\ 0 \\ \widetilde{\omega}+m \\ \end{array} \right|,$$ \normalsize where \begin{equation} \label{Eq.24} N\equiv \frac{1}{\sqrt{2\widetilde{\omega}(\widetilde{\omega}+m)}}, \quad \widetilde{\omega}\equiv \sqrt{\overrightarrow{k}^{2}+m^{2}}. \end{equation} The 8-component spinors (23) are derived from the orts of the Cartesian basis with the help of the transformation (20), (21). The spinors (23) satisfy orthonormalization and completeness relations similar to the corresponding relations for the standard 4-component Dirac spinors. The functions $c^{\mathrm{A}}(\overrightarrow{k})$, $c^{*\mathrm{B}}(\overrightarrow{k})$ in (22) are the momentum-spin amplitudes. A quantum-mechanical interpretation of these amplitudes can be given only in the framework of the relativistic canonical quantum mechanics based on equation (13). As we can see, equation (17), in contrast to the other equations considered here, does not contain redundant components. \section{Brief conclusions} A comparison of the proposed new equation with known approaches to the description of a particle with spin 3/2 has been presented. Equation (17), whose wave function is 8-component, has fundamental advantages in comparison with the other models discussed above, where the wave functions have 12, 16, or 24 components. In addition to the absence of redundant components, an obvious advantage is the direct operator link (20), (21) to the relativistic canonical quantum mechanics, which allows a clear quantum-mechanical interpretation of all statements, results and consequences. Equation (13) itself is of independent interest.
We briefly note the Poincaré invariance of equations (13), (17). While for equation (13) the corresponding representation of the Poincaré group is relatively simple, for the Dirac-like equation (17) the operators that define the Poincaré symmetry take a somewhat cumbersome form (see [12], Chapter 7). The Poincaré, not merely Lorentz, symmetry of equation (17) is a further advantage (the only exceptions are the Bargmann--Wigner equation and some special considerations of the Rarita--Schwinger equation), so an important task for further research is to simplify the explicit form of the symmetry operators found in [12]. The proof of the Poincaré invariance of equation (17) in a simple, explicitly covariant form (no more cumbersome than for the usual Dirac equation, where one has 4-component spinors) is performed on the basis of the 256-dimensional representations of the Clifford algebras $\textit{C}\ell^{\mathbb{R}}$(0,8), $\textit{C}\ell^{\mathbb{R}}$(1,7) [40] in terms of $8 \times 8$ Dirac $\gamma$ matrices. On this basis we can prove not only the spin 1 properties of the corresponding Dirac-like equation (17) but the spin 3/2 Poincaré symmetries as well. The method is known from [41--44]. A detailed consideration of the Poincaré invariance will be the task of a forthcoming paper. We have reason to expect interesting and unexpected applications in high-energy physics and nuclear physics, especially for the problem of particle-antiparticle asymmetry at the beginning of the evolution of the Universe after the Big Bang. Indeed, equation (17) describes the spin $s=3/2$ fermion-antifermion doublet.
\section{Introduction} It is now an established fact that neutrinos are massive and leptonic flavors are not symmetries of Nature~\cite{Pontecorvo:1967fh, Gribov:1968kq}. In the last decade this picture has become fully established thanks to the advent of a set of precise experiments. In particular, the results obtained with solar~\cite{Cleveland:1998nv,Kaether:2010ag,Abdurashitov:2009tn, Hosaka:2005um,Aharmim:2007nv,Aharmim:2005gt,Aharmim:2008kc,Collaboration:2009gd,Arpesella:2008mt,Collaboration:2008mr} and atmospheric neutrinos~\cite{Ashie:2005ik,Wendell:2010md} have been confirmed in experiments using terrestrial beams: neutrinos produced in nuclear reactor~\cite{Shimizu:2008zz,CHOOZ} and accelerator~\cite{Ahn:2006zza,Adamson:2008zt,Collaboration:2009yc,minapp70} facilities have been detected at distances of the order of hundreds of kilometers~\cite{ourrep}. The minimum joint description of all the neutrino data requires mixing among all three known neutrinos ($\nu_e$, $\nu_\mu$, $\nu_\tau$), which can be expressed as quantum superpositions of three massive states $\nu_i$ ($i=1,2,3$) with masses $m_i$. This implies the presence of a leptonic mixing matrix in the weak charged current interactions~\cite{Maki:1962mu, Kobayashi:1973fv} which can be parametrized as: \begin{equation} U = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & {s_{23}} \\ 0 & -s_{23} & {c_{23}} \end{pmatrix} \cdot \begin{pmatrix} c_{13} & 0 & s_{13} e^{-i\delta_\text{CP}} \\ 0 & 1 & 0 \\ -s_{13} e^{i\delta_\text{CP}} & 0 & c_{13} \end{pmatrix} \cdot \begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} e^{i \eta_1} & 0 & 0 \\ 0 & e^{i \eta_2} & 0 \\ 0 & 0 & 1 \end{pmatrix}, \label{eq:U3m} \end{equation} where $c_{ij} \equiv \cos\theta_{ij}$ and $s_{ij} \equiv \sin\theta_{ij}$. In addition to the Dirac-type phase $\delta_\text{CP}$, analogous to that of the quark sector, there are two physical phases $\eta_i$ associated with the Majorana character of neutrinos, which are not relevant for neutrino oscillations~\cite{Bilenky:1980cx, Langacker:1986jv}. Given the observed hierarchy between the solar and atmospheric mass-squared splittings there are two possible non-equivalent orderings for the mass eigenvalues, which are conventionally chosen as \begin{align} \label{eq:normal} m_1< m_2< m_3 \;\;\; {\rm with} \;\;\; \Delta m^2_{21} &\ll (\Delta m^2_{32} \simeq \Delta m^2_{31}) \text{ with } (\Delta m^2_{31} > 0) \,; \\ \label{eq:inverted} m_3< m_1< m_2 \;\;\; {\rm with} \;\;\; \Delta m^2_{21} &\ll |\Delta m^2_{31} \simeq \Delta m^2_{32}| \text{ with } (\Delta m^2_{31} < 0) \,. \end{align} As is customary we refer to the first option, Eq.~\eqref{eq:normal}, as the \emph{normal} (N) scheme, and to the second one, Eq.~\eqref{eq:inverted}, as the \emph{inverted} (I) scheme; in this form they correspond to the two possible choices of the sign of $\Delta m^2_{31}$. In this convention the angles $\theta_{ij}$ can be taken without loss of generality to lie in the first quadrant, $\theta_{ij} \in [0, \pi/2]$, and the phases $\delta_\text{CP},\; \eta_i\in [0,2\pi]$. Within this context, $\Delta m^2_{21}$, $|\Delta m^2_{31}|$, $\theta_{12}$, and $\theta_{23}$ are relatively well determined from oscillation experiments~\cite{ourfit,Fogli:2009zza,Schwetz:2008er,Maltoni:2008ka}, while only an upper bound is derived for the mixing angle $\theta_{13}$ and essentially nothing is known about the phases or the sign of $\Delta m^2_{31}$.
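As a concrete illustration (ours; the angle and phase values below are indicative, not the fitted ones of the global analysis), the parametrization of Eq.~\eqref{eq:U3m} can be coded directly, and its unitarity holds for any choice of angles and phases:
\begin{verbatim}
import numpy as np

def pmns(th12, th13, th23, delta_cp, eta1=0.0, eta2=0.0):
    """Leptonic mixing matrix U = R23 . R13(delta) . R12 . P(eta)."""
    c12, s12 = np.cos(th12), np.sin(th12)
    c13, s13 = np.cos(th13), np.sin(th13)
    c23, s23 = np.cos(th23), np.sin(th23)
    R23 = np.array([[1, 0, 0],
                    [0, c23, s23],
                    [0, -s23, c23]], dtype=complex)
    R13 = np.array([[c13, 0, s13 * np.exp(-1j * delta_cp)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta_cp), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0],
                    [-s12, c12, 0],
                    [0, 0, 1]], dtype=complex)
    P = np.diag([np.exp(1j * eta1), np.exp(1j * eta2), 1.0])
    return R23 @ R13 @ R12 @ P

U = pmns(0.59, 0.15, np.pi / 4, delta_cp=1.0, eta1=0.3, eta2=0.7)
assert np.allclose(U @ U.conj().T, np.eye(3))   # unitarity
print(abs(U[0, 2]))                             # |U_e3| = sin(theta_13)
\end{verbatim}
The Majorana phase matrix drops out of $U U^\dagger$ and of the oscillation probabilities, in accordance with the statement that the $\eta_i$ are not relevant for oscillations.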
Furthermore, the only information that neutrino oscillation data provides on the absolute neutrino mass scale is the lower bound \begin{eqnarray} &&\sum m_i \gtrsim \sqrt{|\Delta m^2_{31}|}\;\; {\rm for}\; {\rm N},\\ &&\sum m_i \gtrsim 2 \sqrt{|\Delta m^2_{31}|} \;\; {\rm for}\; {\rm I}. \end{eqnarray} Conversely the neutrino mass scale is constrained in laboratory experiments searching for its kinematic effects in tritium $\beta$ decay, which are sensitive to the so-called effective electron neutrino mass~\cite{betamix,vissanitrit,smirtrit} \begin{equation} m^2_{\nu_e} \equiv \sum_i m^2_i |U_{ei}|^2=c_{13}^2 c_{12}^2 m_1^2 +c_{13}^2 s_{12}^2 m_2^2+s_{13}^2 m_3^2 \,. \label{eq:mb} \end{equation} At present the most precise determinations, from the Mainz~\cite{tritmainz} and Troitsk~\cite{trittroitsk} experiments, give no indication in favor of $m_{\nu_e}\neq 0$ and one sets the upper limit \begin{equation} \label{eq:nuelim} m_{\nu_e}<2.2~\text{eV} \; , \end{equation} at 95\% confidence level (CL). A new experimental project, KATRIN~\cite{katrin}, is under construction with an estimated sensitivity limit of $m_{\nu_e} \sim 0.2$ eV. Direct information on neutrino masses can also be obtained from neutrinoless double beta decay ($0\nu\beta\beta$) searches, provided neutrinos are Majorana particles. In the absence of other sources of lepton number violation in the low energy lagrangian, the $0\nu\beta\beta$ decay amplitude is proportional to the effective Majorana mass of $\nu_e$, $m_{ee}$, \begin{equation} m_{ee} = \left| \sum_i m_i U_{ei}^2 \right| = \left| c_{13}^2 c_{12}^2 m_1\, {e}^{i\eta_1} + c_{13}^2 s_{12}^2 m_2\, {e}^{i\eta_2} + s_{13}^2 m_3\, {e}^{-2i\delta_\text{CP}} \right| , \label{eq:mbb} \end{equation} which, in addition to the masses and mixing parameters that affect the tritium beta decay spectrum, depends also on the phases of the leptonic mixing matrix. The strongest bound from $0\nu\beta\beta$ decay was imposed by the Heidelberg-Moscow group~\cite{hmlimit}, \begin{equation} m_{ee} < 0.26~(0.34)~\text{eV} \quad \text{at 68\% (90\%) CL,} \end{equation} which holds for a given prediction of the nuclear matrix element. However, there are large uncertainties in those predictions which may considerably weaken the bound~\cite{bbteoreview}. A series of new experiments is planned with sensitivities of up to $m_{ee} \sim 0.01$ eV~\cite{bbreview}. Neutrinos, like any other particles, contribute to the total energy density of the Universe. Furthermore, within what we presently know of their masses, the three Standard Model (SM) neutrinos are relativistic through most of the evolution of the Universe, and they are very weakly interacting, which means that they decoupled early in cosmic history. Depending on their exact masses they can impact the cosmic microwave background (CMB) spectra, in particular by altering the value of the redshift of matter-radiation equality. More importantly, their free streaming suppresses the growth of structures on scales smaller than the horizon at the time when they become non-relativistic and therefore affects the matter power spectrum, which is probed by surveys of the large scale structure (LSS) distribution (see \cite{pastor} for a detailed review of the cosmological effects of neutrino mass). Within their present precision, cosmological observations are sensitive to neutrinos only via their contribution to the energy density in our Universe, $\Omega_\nu h^2$ (where $h$ is the Hubble constant normalized to $H_0 = 100 ~\text{km} ~\text{s}^{-1} ~\text{Mpc}^{-1}$).
$\Omega_\nu h^2$ is related to the total mass in the form of neutrinos, \begin{equation} \Omega_{\nu}h^2 = \sum_i m_i / (94~\text{eV}) \,. \end{equation} Therefore cosmological data mostly give information on the sum of the neutrino masses and have very little to say on their mixing structure and on the ordering of the mass states (see Ref.~\cite{jkpv} for a recent update on the sensitivity of future cosmological observations to the mass ordering). There is a growing literature on the information extracted from cosmological observations on the neutrino mass scale, starting with the analyses performed by the different experimental collaborations~\cite{WMAP7,WMAP5,WMAP3,BAO,SDSS}. The basic observation is that, besides variations due to the observables considered, the bounds on the neutrino mass obtained depend on the assumptions made on the history of the cosmic expansion or, in other words, on how many parameters besides those of the $\Lambda$CDM model are allowed to vary when analyzing the cosmological data. Additionally, depending on those assumptions, some observables may or may not need to be included in order to account for degeneracies among the parameters (for some recent analyses see~\cite{hannestad,hamann,reid}). In this article we present the results of a global analysis of cosmological observables in $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ cosmologies, which depart from $\Lambda$CDM models by allowing, besides neutrino masses $\Omega_\nu\neq 0$, non-vanishing curvature $\Omega_k\neq 0$ and dark energy with equation of state $\omega\neq -1$, together with the presence of new particle physics whose effect on the present cosmological observations can be parametrized in terms of additional relativistic degrees of freedom $\Delta N_{\rm rel}$. In particular this extends the most general analysis of Ref.~\cite{hamann} by accounting also for non-flatness effects. We adopt a purely phenomenological approach in analyzing the effect of a non-vanishing spatial curvature, without addressing its origin. However it is worth mentioning that, within inflationary models which produce the simple initial conditions considered here, it is difficult to end up with a significant $\Omega_k$~\cite{infla}. We describe in Sec.~\ref{sec:inputs} the different cosmological observables included in this 10-parameter analysis as well as our statistical treatment of them. The results of the analysis are presented in Sec.~\ref{sec:cosmoana}, where we discuss the differences obtained when the full shape information from the LSS matter power spectrum is included versus when only the corresponding distance measurement from the baryon acoustic oscillations is accounted for. We also compare the bounds on the neutrino mass scale in these $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ scenarios with those obtained from the 6+1 parameter analysis in $\Lambda{\rm CDM}+m_\nu$ models, and we study the dependence of both on the set of observables included in the analysis. These results are combined with the information on neutrino mass differences and mixing from the global analysis of neutrino oscillation experiments in Sec.~\ref{sec:mbb} to derive the presently allowed ranges for the two laboratory probes of the absolute scale of neutrino mass: the effective neutrino mass in single beta decay, $m_{\nu_e}$, and the effective Majorana neutrino mass in neutrinoless $\beta\beta$ decay, $m_{ee}$. We summarize our conclusions in Sec.~\ref{sec:conclu}.
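To make the mass observables defined above concrete, the following short sketch (our illustration, with indicative rather than fitted oscillation parameters) evaluates $m_{\nu_e}$ of Eq.~\eqref{eq:mb}, $m_{ee}$ of Eq.~\eqref{eq:mbb}, $\sum m_i$ and $\Omega_\nu h^2$ for a normal-ordering spectrum with lightest mass $m_0$:
\begin{verbatim}
import numpy as np

def mass_observables(m0, dm21_sq, dm31_sq, s12_sq, s13_sq,
                     eta1=0.0, eta2=0.0, delta_cp=0.0):
    """m_nue, m_ee and sum(m_i) for normal ordering; masses in eV,
    splittings in eV^2, mixing entered as sin^2(theta_ij)."""
    m = np.array([m0,
                  np.sqrt(m0**2 + dm21_sq),
                  np.sqrt(m0**2 + dm31_sq)])
    c13_sq = 1.0 - s13_sq
    Ue_sq = np.array([c13_sq * (1.0 - s12_sq),   # |U_e1|^2
                      c13_sq * s12_sq,           # |U_e2|^2
                      s13_sq])                   # |U_e3|^2
    m_nue = np.sqrt(np.sum(Ue_sq * m**2))
    phases = np.exp(1j * np.array([eta1, eta2, -2.0 * delta_cp]))
    m_ee = abs(np.sum(Ue_sq * m * phases))
    return m_nue, m_ee, m.sum()

# indicative inputs (not the fitted values of the global analysis)
m_nue, m_ee, sum_m = mass_observables(
    m0=0.01, dm21_sq=7.6e-5, dm31_sq=2.4e-3, s12_sq=0.31, s13_sq=0.01)
print(f"m_nue = {m_nue:.4f} eV, m_ee = {m_ee:.4f} eV, sum = {sum_m:.4f} eV")
print(f"Omega_nu h^2 = {sum_m / 94.0:.2e}")
\end{verbatim}
Varying $m_0$ and the phases in such a sketch reproduces the qualitative shape of the oscillation-allowed bands discussed in Sec.~\ref{sec:mbb}.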
\section{Cosmological Inputs and Data Analysis} \label{sec:inputs} \TABLE{ \begin{tabular}{|l|c|} \hline Parameter & Symbol\\ \hline Hubble Constant & $H_0$ \\ Baryon density & $\Omega_b h^2$ \\ Dark matter density & $\Omega_c h^2$\\ Scalar spectral index & $n_s$ \\ Optical Depth at Reionization & $\tau$\\ Amplitude of scalar power spectrum at $k=0.05$ Mpc$^{-1}$& $A_S$ \\ \hline Total neutrino mass & ${\displaystyle \sum_{i=1,3} m_{\nu,i}} $ \\ Dark energy equation of state parameter & $\omega$ \\ Effective number of extra relativistic degrees of freedom & $\Delta N_{\rm rel}$\\ Spatial curvature density & $\Omega_k$ \\ \hline \end{tabular} \caption{Cosmological parameters used in our most general analysis. $h=H_0/100$. We denote the cosmology characterized by these parameters as $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$.} \label{tab:param} } We consider cosmologies $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ characterized by the free parameters listed in Table~\ref{tab:param}. All parameters are as usually defined in the literature, with the exception of $\Delta N_{\rm rel}$. Our definition of the extra relativistic degrees of freedom accounts for the fact that we have evidence of the existence of three and only three standard neutrino species which mix due to mass oscillations \cite{ourrep}; their contribution to the energy budget of the universe is included in $\Omega_\nu h^2$. $\Delta N_{\rm rel}$ parametrizes the contribution of additional relativistic massless states of any spin to the radiation energy density; for convenience that contribution is normalized to the one from a single spin-$1/2$ weakly interacting massless state, which defines $\Delta N_{\rm rel}$. In these cosmologies several parameter degeneracies appear in any of the cosmological observables. First, any experiment that measures the angular diameter or luminosity distance to a single redshift is not able to constrain $\Omega_k$, because the distance depends not only on $\Omega_k$ but also on the expansion history of the universe. Thus for a universe containing matter and vacuum energy one needs to combine at least two absolute distance indicators, or the expansion rates out to different redshifts, to break this degeneracy. Furthermore, when dark energy is dynamical, $\omega\neq-1$, a third distance indicator is required. Finally, the presence of extra relativistic degrees of freedom $\Delta N_{\rm rel}$ changes the matter-radiation equality epoch, a change that can be compensated by a corresponding modification of the matter density $\Omega_m h^2$. As a result, $\Delta N_{\rm rel}$ and $\Omega_mh^2$ are strongly degenerate unless a fourth distance indicator provides us with an independent constraint on $\Omega_mh^2$. In our analysis we include the results from the 7-year data of WMAP \cite{WMAP7} on the temperature and polarization anisotropies in the form of the temperature (TT), E-mode polarization (EE), B-mode polarization (BB), and temperature-polarization cross-correlation (TE) power spectra, for which we use the likelihood function as provided by the collaboration \footnote{We notice that, although the models considered do not generate any B-mode polarization, in order to account for the information from the EE and low-$\ell$ TE data, the BB power spectrum must also be included in the analysis because WMAP provides the combined likelihood for the low-$\ell$ TE, EE and BB spectra~\cite{larson}}. A number of CMB experiments have probed smaller angular scales than WMAP.
In particular we consider the results from the temperature power spectra of the Cosmic Background Imager (CBI) \cite{CBI}, the Very Small Array (VSA) \cite{VSA}, BOOMERANG \cite{BOOMERANG} and the Arcminute Cosmology Bolometer Array Receiver (ACBAR) \cite{ACBAR}. In order to avoid redundancies among the CMB data sets, we follow the procedure in Refs.~\cite{WMAP5,finelli}. We use seven band powers for CBI (in the range $948<\ell<1739$), five for VSA ($894<\ell<1407$), seven for BOOMERANG ($924<\ell<1370$), and sixteen band powers of ACBAR in the range $900<\ell<2000$. We do not include the results of the Background Imaging of Cosmic Extragalactic Polarization (BICEP) \cite{BICEP} experiment, whose bands overlap excessively with WMAP, nor those of QUaD \cite{QUAD}, which observes the same region of sky as ACBAR and is less precise \cite{finelli}. Furthermore we do not include in the analysis the polarization results of these experiments. As mentioned above, in the analysis of WMAP we use the likelihood function as provided by the collaboration. For the other CMB experiments we build the corresponding likelihood functions from the data, covariance matrices and window functions given by each experiment. We compute the theoretical CMB predictions using the fast Boltzmann code CAMB~\cite{CAMB,CMBFAST}. Following the procedure outlined in Ref.~\cite{WMAP3}, whenever it is required we account for the Sunyaev-Zel'dovich (SZ) effect by marginalizing over the amplitude of the SZ contribution, parametrized by the model of Ref.~\cite{KS}. We assume a uniform prior on the amplitude, $0< A_{\rm SZ}< 2$. We also include the results from Ref.~\cite{H0} on the present-day Hubble constant, $H_0=74.2\pm 3.6~{\rm km~s^{-1}~Mpc^{-1}}$, where the quoted error includes both statistical and systematic errors. This measurement of $H_0$ is obtained from the magnitude-redshift relation of 240 low-$z$ Type Ia supernovae at $z<0.1$. We include this result as a Gaussian prior and neglect the slight cosmology dependence \cite{reid} of this constraint. The results from luminosity measurements of high-$z$ Type Ia supernovae are included as presented in the compilation called the ``Constitution'' sample in Ref.~\cite{SN09}, which consists of 397 supernovae and is an extension of the ``Union'' sample \cite{SN08}. With these data we build the corresponding likelihood function without including systematic errors, whose precise values are still under debate \cite{SN09}. In our analysis we marginalize over the absolute magnitude of the supernovae with a uniform prior. Finally we also include the results on the matter power spectrum as derived from large scale structure surveys, in two different forms. In one case we use the measurement of the BAO scale obtained from the Two-Degree Field Galaxy Redshift Survey (2dFGRS) and the Sloan Digital Sky Survey Data Release 7 (SDSS DR7) \cite{BAO}. In the other we include the full power spectrum of the SDSS DR7 survey \cite{SDSS} (which we label LSSPS). For the analysis including the BAO scale, we use as input data the two distance ratios at $z=0.2$ and $z=0.35$ presented in Ref.~\cite{BAO} and build the corresponding likelihood function using the covariance matrix given in that reference. As discussed in Ref.~\cite{BAO}, the distance ratios can be considered as measurements of $d_z \equiv r_s(z_d)/D_V(z)$ and apply to any of the considered models.
$r_s(z_d)$ is the comoving sound horizon at the baryon drag epoch and $D_V(z) = [(1+z)^2 D_A^2 c z/H(z)]^{1/3}$, where $D_A$ is the angular diameter distance and $H(z)$ is the Hubble parameter. However, in their fitting procedure the value of $d_z$ is obtained by first assuming some fiducial cosmology, extracting the value of $D_V(z)$, and then computing $r_s(z_d) / D_V(z)$ with $r_s(z_d)$ evaluated in that fiducial cosmology using the approximate formula of Eisenstein \& Hu \cite{z_d} for $z_d$. As discussed in Ref.~\cite{hamann}, this approximate formula is not strictly valid for the extended cosmologies which we are considering. We correct for this effect by a) evaluating exactly the redshift of the baryon drag epoch in the extended cosmologies by using Eq.~(B.5) of Ref.~\cite{hamann}, and b) correcting for the use of the approximate formula in the presentation of the data by rescaling the predictions by a factor $r^{\rm fid}_s(z_{d\, \rm approx})/r^{\rm fid}_s(z_{d\, \rm exact})$ (we prefer to rescale the predictions since the covariance matrix is given for the data as presented). In our second analysis we include the full SDSS DR7 data, which consist of 45 bins covering wavenumbers from $k_{\rm min} = 0.02\ h {\rm Mpc}^{-1}$ to $k_{\rm max} = 0.2\ h {\rm Mpc}^{-1}$ (where $k_{\rm min}$ and $k_{\rm max}$ denote the wavenumbers at which the window functions of the first and last data points have their maxima). In this analysis we use the likelihood function as provided by the experiment; together with the linear matter power spectrum, it requires a smooth version of the spectrum with the baryon oscillations removed. We construct such a no-wiggle spectrum for the extended cosmologies considered here from the linear matter power spectrum computed by CAMB, using the method based on the discrete spectral analysis of the power spectrum described in Appendix A.1 of Ref.~\cite{hamann}. We also perform comparative analyses including only the $\Lambda$CDM parameters plus massive neutrinos (the first seven in Table~\ref{tab:param}), fixing $\omega=-1$ and $\Delta N_{\rm rel}=\Omega_k=0$, for different combinations of the above observables. Additional constraints on the cosmological parameters could be obtained by including in the analysis information on the growth of structure from other low-redshift data, among others the small-scale primordial spectrum determined from Lyman-alpha forest clouds or the priors on the amplitude of mass fluctuations derived from different galaxy cluster samples. We have conservatively chosen not to include those in our analysis because these results are generically subject to model-dependent assumptions which render them not directly applicable to the most general cosmologies considered here. With the data from the different samples included in a given analysis and the theoretical predictions for them in terms of the relevant parameters $\vec x$, we construct the corresponding combined likelihood function. In Bayesian statistics our knowledge of $\vec x$ is summarized by the posterior probability distribution function (p.d.f.) \begin{equation} p(\vec x|\mathrm{D},\mathcal{P}) = \dfrac{\mathcal{L}(\mathrm{D} | \vec x)\, \pi(\vec x | \mathcal{P})} {\int \mathcal{L}(\mathrm{D} | \vec x')\, \pi(\vec x' | \mathcal{P})\, d\vec x'} \,, \label{eq:ppdf} \end{equation} where $\pi(\vec x | \mathcal{P})$ is the prior probability density for the parameters. In our analysis we assume a uniform prior probability for the $\vec x$ parameters in Table~\ref{tab:param}.
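The reconstruction of the posterior \eqref{eq:ppdf} by Markov Chain Monte Carlo, described next, can be illustrated schematically as follows (our toy sketch: a two-parameter Gaussian target with non-negativity priors stands in for the actual 10-parameter likelihood, and the proposal kernel is fixed rather than adaptive):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(x):
    # toy stand-in for log L(D|x) + log pi(x|P); the real analysis uses
    # the full CMB+H0+SN+LSS likelihood over the parameters of Table 1
    if x[0] < 0.0 or x[1] < 0.0:     # uniform priors with both parameters >= 0
        return -np.inf
    return -0.5 * np.sum((x - 1.0) ** 2)

def metropolis_hastings(x0, n_steps, step=0.3):
    x = np.asarray(x0, dtype=float)
    lp, chain = log_posterior(x), [x.copy()]
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=x.size)   # symmetric Gaussian kernel
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

chains = [metropolis_hastings(rng.uniform(0, 2, size=2), 20000)
          for _ in range(4)]                        # several parallel chains

# Gelman-Rubin R for the first parameter, after discarding burn-in
draws = np.array([c[10000:, 0] for c in chains])
n = draws.shape[1]
W = draws.var(axis=1, ddof=1).mean()                # within-chain variance
B = n * draws.mean(axis=1).var(ddof=1)              # between-chain variance
R = np.sqrt(((n - 1) / n * W + B / n) / W)
print(f"R - 1 = {R - 1:.2e}")                       # convergence: R - 1 << 1
\end{verbatim}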
For $\sum m_{\nu}$ and $\Delta N_{\rm rel}$ we impose in addition that both be $\geq 0$. Following standard techniques, in order to reconstruct the posterior p.d.f.\ of Eq.~\eqref{eq:ppdf} we have developed a Markov Chain Monte Carlo (MCMC) generator which employs the Metropolis--Hastings algorithm, including an adaptive kernel function to increase the efficiency. Full details are given in Appendix B of Ref.~\cite{oursolar}. For each combination of data we generate ${\cal O} (50)$ chains in parallel and verify their convergence by studying the variation of the Gelman-Rubin $R$-parameter \cite{rubin}, imposing as convergence criterion $R-1\lesssim 5\times 10^{-2}$. \FIGURE[!h]{ \includegraphics[width=0.8\textwidth]{1dim.eps} \caption{ Constraints from our global analysis on the cosmological parameters of $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ for the analysis including CMB+H0+SN+LSSPS (solid red) and CMB+H0+SN+BAO (dotted blue). The different panels show the marginalized one-dimensional probability distributions for all parameters. For the neutrino mass see also Fig.~\ref{fig:mnu}.} \label{fig:1dim} } \TABLE{ \begin{tabular}{|c||c|c|c||c|c|c|} \hline &\multicolumn{3}{c|} {CMB+H0+SN+BAO} & \multicolumn{3}{c|} {CMB+H0+SN+LSSPS} \\ \hline & best & 1$\sigma$ & 95\% CL & best & 1$\sigma$ & 95\% CL \\[+0.2cm] \hline &&&&&& \\[-0.3cm] $H_0$ km/s/Mpc & 76.2 & $^{ + 3.0} _ {- 2.8}$ & $^{ + 5.7} _ {- 5.6} $ & 74.4 & $^{ + 2.8} _ {- 2.9}$ & $^{ + 5.6} _ {- 5.6} $ \\[+0.2cm] $\Omega_b h^2\times 100$ & 2.205 & $^{ + 0.057} _ {- 0.050} $ & $^{ + 0.103} _ {- 0.105} $ & 2.239 & $^{ + 0.059} _ {- 0.046} $ &$^{ + 0.095} _ {- 0.108} $ \\ [+0.2cm] $\Omega_c h^2$ & 0.131 & $^{ + 0.018} _ {- 0.013} $ & $^{ + 0.036} _ {- 0.023} $ &0.128 & $^{ + 0.024} _ {- 0.009} $ &$^{ + 0.042} _ {- 0.018} $ \\[+0.2cm] $n_S$ & 0.961&$^{ + 0.021} _ {- 0.015} $&$^{ + 0.040} _ {- 0.030} $ & 0.971&$^{ + 0.019} _ {- 0.017} $&$^{ + 0.037} _ {- 0.033} $ \\ [+0.2cm] $\tau$ & 0.086& $^{ + 0.011} _ {- 0.015}$ & $^{ + 0.026} _ {- 0.028} $ & 0.083& $^{ + 0.016} _ {- 0.011}$ & $^{ + 0.030} _ {- 0.023} $ \\[+0.2cm] $\sigma_8$ & 0.787& $^{ + 0.091} _ {- 0.073} $& $^{ + 0.135} _ {- 0.179} $ &0.824& $^{ + 0.051} _ {- 0.048} $ &$^{ + 0.097} _ {- 0.105} $ \\ [+0.2cm] $\Omega_k$ & -0.006& $^{ + 0.010} _ {- 0.009} $ & $ -0.022\leq \Omega_k\leq 0.016 $ & -0.011& $^{ + 0.008} _ {- 0.009} $ & $ -0.028\leq \Omega_k\leq 0.007 $ \\[+0.2cm] $\omega$ & -1.17& $^{ + 0.19} _ {- 0.21} $ & $ -0.62\leq\omega+1\leq 0.18 $ & -1.12 & $^{ + 0.21} _ {- 0.20} $ & $ -0.57\leq\omega+1\leq 0.26 $ \\[+0.2cm] $\Delta N_{\rm rel}$ &1.2& $^{ + 1.1} _ {- 0.61} $ & $0.08\leq \Delta N_{\rm rel}\leq 3.2$ &1.3& $^{ + 1.4} _ {- 0.54} $ & $0.21\leq \Delta N_{\rm rel}\leq 3.6$ \\[+0.2cm] $\sum m_\nu$ (eV) & &$\leq 0.77$ & $\leq 1.5$ & & $\leq 0.37$ & $\leq 0.76$ \\ \hline \end{tabular} \caption{Constraints from our global analysis for $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ cosmologies. We show the values for the best fit parameters and the corresponding 1$\sigma$ (68\%) and 2$\sigma$ (95\%) allowed intervals.} \label{tab:1dim} } \FIGURE[!h]{ \includegraphics[height=0.7\textheight]{cont_10par.eps} \caption{Constraints from our global analysis for $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ cosmologies for the analysis including CMB+H0+SN+BAO (full regions) and for the analysis including CMB+H0+SN+LSSPS (void regions).
We show the 68\% and 95\% CL two-dimensional credibility regions for the last four parameters in Table~\ref{tab:param}.} \label{fig:fit10} } \section{Results of the Cosmological Fits} \label{sec:cosmoana} Our results for the two analyses in $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ cosmologies are presented in Figs.~\ref{fig:1dim}--\ref{fig:fit10} and in Table~\ref{tab:1dim}. In Fig.~\ref{fig:1dim} we show the marginalized one-dimensional probability distributions for the ten independent parameters, obtained from Eq.~\eqref{eq:ppdf} as \footnote{Technically this is obtained from the MCMC chain by discretizing the parameter space and counting the fraction of points in each cell.} \begin{equation} p_{\rm 1-dim}(x_i) =\int dx_{k\neq i} \, p(\vec x|\mathrm{D},\mathcal{P}) \; . \end{equation} For convenience we show the information on the normalization of the scalar power spectrum in terms of the derived $\sigma_8$ parameter, which parametrizes the expected root mean square amplitude of the matter fluctuations in spheres of radius $R=8 h^{-1}$ Mpc. The best fit values given in the second and fourth columns of Table~\ref{tab:1dim} are those for which $p_{\rm 1-dim}(x^{\rm best}_i)$ is maximum. The allowed ranges at a given CL, $x^{\rm CL}_{i,\rm min}\leq x_i\leq x^{\rm CL}_{i,\rm max}$, are obtained from the condition \begin{eqnarray} && {\rm CL} \left[x^{\rm CL}_{i,\rm min}\leq x_i\leq x^{\rm CL}_{i,\rm max}\right] =\int_{x^{\rm CL}_{i,\rm min}}^{x^{\rm CL}_{i,\rm max}} p_{\rm 1-dim}(x_i) \label{eq:1dimrange} \\ && {\rm with} \;\;\; p_{\rm 1-dim}(x^{\rm CL}_{i,\rm min})=p_{\rm 1-dim}(x^{\rm CL}_{i,\rm max}) \label{eq:2side}\\ &&{\rm or} \;\;\; x^{\rm CL}_{i, \rm min}=0 \;\;\;{\rm for}\;\; x_i=\Delta N_{rel}, \sum m_\nu \label{eq:1side} \end{eqnarray} where \eqref{eq:1side} is used when there is no solution of condition \eqref{eq:2side}. Equivalently we define the marginalized two-dimensional probability distribution functions \begin{equation} p_{\rm 2-dim}(x_i,x_j) =\int dx_{k\neq i,j} \, p(\vec x|\mathrm{D},\mathcal{P}) \; , \end{equation} and from these we obtain the two-dimensional credibility regions at a given CL as the regions of smallest area whose integrated posterior probability equals the CL. In practice they are obtained as the regions surrounded by a two-dimensional isoprobability contour which contains the point of highest posterior probability and within which the integrated posterior probability is the CL. We plot in Fig.~\ref{fig:fit10} the 68\% and 95\% CL two-dimensional credibility regions for the last four parameters in Table~\ref{tab:param}, for the analysis including CMB+H0+SN+BAO (full regions) and for that including CMB+H0+SN+LSSPS (void regions). Because of the degeneracies present in these cosmologies one finds, as expected, a degradation of the constraints on the {\sl standard} parameters (i.e.\ those of the $\Lambda$CDM model) when compared with the analysis performed within the $\Lambda$CDM priors for the same set of observables (see for example Table 1 in Ref.~\cite{WMAP7}). As seen in the figure this particularly affects the determination of $\Omega_c$ (or equivalently $\Omega_m$), as a consequence of the well-known degeneracy in the predictions of the CMB spectra between $\Delta N_{\rm rel}$ and $\Omega_m$. This is so because a simultaneous change of both can leave untouched the redshift of matter-radiation equality, which is well constrained by the ratio of the heights of the third and first peaks in the CMB spectra.
This degeneracy is broken by the addition of the H0 prior as well as by the independent determination of $\Omega_m$ from the distance information from LSS, using either only BAO or the full power spectrum. It is interesting to notice that the data are better described by allowing a non-zero amount of extra radiation, even though this is at most a 2$\sigma$ effect. This implies, for example, that models with extra light sterile neutrinos are favoured by the data (as discussed in Ref.~\cite{steriles} in the context of flat cosmologies with a cosmological constant) even for these $o\omega$CDM models. Most conservatively we can read the results as a 2$\sigma$ upper bound $\Delta N_{\rm rel}\leq 3.2$ (3.6) for the analysis including CMB+H0+SN+BAO (CMB+H0+SN+LSSPS). We also find a widening of the allowed range of $n_S$ as a consequence of its degeneracy with $\Delta N_{\rm rel}$ and with the dark energy equation of state $\omega$. Both change the ratio of generated power at small versus large angular scales in the CMB spectrum, an effect that can be offset by a change in the spectral index. Conversely we find that in these cosmologies $\omega$ is considerably less constrained than in $\omega$CDM scenarios, for which a 95\% range $-0.089\leq\omega+1\leq 0.12$ is obtained from the analysis of CMB+BAO+SN~\cite{WMAP7}. The normalization of the power spectrum as parametrized by $\sigma_8$ is mostly affected by the presence of neutrino masses. Their main effect is to reduce the amplitude of the power spectrum on free-streaming scales, therefore decreasing $\sigma_8$. There is also a residual degeneracy between $\sigma_8$ and $\Omega_k$, mostly associated with the fact that allowing for non-flat cosmologies permits an increase in the amount of dark matter in the form of neutrinos without affecting $\Omega_c$, thereby minimizing their indirect impact on the CMB spectra. As a consequence we find that there is a correlation between the allowed range of the neutrino mass and $\Omega_k$, as seen in the corresponding panel of Fig.~\ref{fig:fit10}. This leads to a somewhat wider allowed range of $\Omega_k$ when compared to the results obtained from the analysis of CMB+BAO+SN in o$\omega$CDM scenarios, $-0.019\leq\Omega_k\leq 0.0072$~\cite{WMAP7}. Figures~\ref{fig:1dim} and \ref{fig:fit10} also display clearly the differences in the results obtained when the full shape information from the LSS matter power spectrum is included versus when only the corresponding distance measurement from BAO is accounted for. We see that, with the expected exception of the neutrino mass (and correspondingly of $\sigma_8$), both sets of data lead to comparable precision in the determination of the cosmological parameters. Concerning the neutrino masses, we find that neither of the two analyses shows any evidence for neutrino mass and the best fit point is obtained for $\sum m_\nu=0$. However the 95\% upper bound obtained when using BAO, $\sum m_\nu \leq 1.5$, is tightened by about a factor of 2, to $\sum m_\nu \leq 0.76$, by considering instead the full LSSPS.
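For completeness, the extraction of the one-dimensional bounds according to Eqs.~\eqref{eq:1dimrange}--\eqref{eq:1side} from the MCMC samples can be sketched as follows (a simplified, histogram-based version of the procedure; the bin number and the toy chain are illustrative choices of ours):
\begin{verbatim}
import numpy as np

def credible_interval(samples, cl=0.95, bins=200, lower_bound=None):
    """1-dim interval with equal-density endpoints (2-side condition),
    falling back to a one-sided bound starting at lower_bound (1-side)."""
    p, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    dx = edges[1] - edges[0]
    # lower an iso-probability threshold until the enclosed mass reaches CL
    order = np.argsort(p)[::-1]
    mass = np.cumsum(p[order]) * dx
    sel = order[: np.searchsorted(mass, cl) + 1]
    lo, hi = centers[sel].min(), centers[sel].max()
    if lower_bound is not None and lo <= centers[0]:
        lo = lower_bound          # one-sided case, e.g. sum m_nu, dN_rel
    return lo, hi

# toy chain for a parameter bounded below by zero
rng = np.random.default_rng(1)
toy = np.abs(rng.normal(0.0, 0.3, size=200_000))
print(credible_interval(toy, cl=0.95, lower_bound=0.0))  # ~ (0, 0.59)
\end{verbatim}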
\FIGURE[!h]{ \includegraphics[width=0.9\textwidth]{mnu.eps} \caption{\label{fig:mnu} Constraint on $\Sigma m_\nu$ as a function of the CL for the different analyses as labeled in the figure.} } \TABLE{ \begin{tabular}{|c|c||c|} \hline Model & Observables & $\Sigma m_\nu$ (eV) 95\% Bound \\ \hline $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ & {\footnotesize CMB+H0+SN+BAO} & $\leq 1.5$ \\ $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ & {\footnotesize CMB+H0+SN+LSSPS} & $\leq 0.76$ \\ $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+H0+SN+BAO} & $\leq 0.61$ \\ $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+H0+SN+LSSPS} & $\leq 0.36$ \\ $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB (+SN)} & $\leq 1.2$ \\ $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+BAO} & $\leq 0.75$ \\ $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+LSSPS} & $\leq 0.55$ \\ $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+H0} & $\leq 0.45$ \\ \hline \end{tabular} \caption{95\% upper bound on the sum of the neutrino masses from the different cosmological analyses. The analyses within $\Lambda{\rm CDM} +m_\nu$ models including only CMB data or CMB in combination with SN yield the same 95\% bound. } \label{tab:sigma} } We plot in Fig.~\ref{fig:mnu} the bound on $\sum m_\nu$ as a function of the CL for the two analyses in the $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ cosmologies, together with the corresponding results from different analyses performed in the framework of $\Lambda{\rm CDM}+m_\nu$ models. The corresponding 95\% CL bounds are listed in Table~\ref{tab:sigma}. We find that for the same combination of observables CMB+H0+SN+BAO (CMB+H0+SN+LSSPS) the bound for a $\Lambda{\rm CDM}+m_\nu$ scenario is $\sum m_\nu \leq 0.61$ ($\sum m_\nu \leq 0.36$), which is a factor $\sim 3$ (2) tighter than the corresponding one obtained in $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ cosmologies. However, we also find that at lower CL $\Lambda{\rm CDM}+m_\nu$ scenarios are better fitted with a non-vanishing $m_\nu$ when the information from CMB (and H0) is combined with the information from LSS surveys. This is, however, at most a 1$\sigma$ effect, associated with the slight mismatches between the best fit values of the cosmological parameters obtained in the analyses of the observables in the context of $\Lambda$CDM. This illustrates the well-known fact that overconstrained scenarios are more ``sensitive'' to small fluctuations in the data, due either to statistics or to an optimistic estimate of the systematic uncertainties. Consequently, even if more conservative, the bounds derived in more general scenarios are more robust against these effects. \section{Combination with Oscillation Data} \label{sec:mbb} We present in this section the allowed ranges for the sum of the neutrino masses and the two laboratory probes of the absolute scale of neutrino mass, the effective neutrino mass in single beta decay $m_{\nu_e}$ and the effective Majorana neutrino mass in neutrinoless $\beta\beta$ decay $m_{ee}$, obtained from the combination of the cosmological analyses discussed above with the information from the global analysis of solar, atmospheric, reactor and accelerator long-baseline (LBL) neutrino experiments in terms of flavour oscillations between the three neutrinos~\cite{ourfit}.
Our starting point is the $\chi^2$ function from the oscillation analysis, \begin{eqnarray} \chi^2_{\rm O}(\Delta m^2_{21},\Delta m^2_{31},\theta_{12},\theta_{13}, \theta_{23},\delta_{\rm CP})&=& \chi^2_{\rm Solar+KamLAND} (\Delta m^2_{21},\theta_{12},\theta_{13}) +\chi^2_{\rm CHOOZ} (\Delta m^2_{31},\theta_{13}) \nonumber\\ && +\chi^2_{\rm ATM+LBL} (\Delta m^2_{21},\Delta m^2_{31},\theta_{12},\theta_{13}, \theta_{23},\delta_{\rm CP}) \label{eq:chiosc} \\ &\Rightarrow& \chi^2_{\rm O} (m_{\nu_e},m_{ee}, \sum m_{\nu_i}) \end{eqnarray} where the last step is obtained after marginalization over $\Delta m^2_{31}$ and $\theta_{23}$ and allowing for variation of the two phases $\eta_1$ and $\eta_2$ within their full ranges. In Fig.~\ref{fig:mbeta} we plot the 95\% allowed regions (for 2 dof) in the planes ($m_{\nu_e}$,$\sum m_\nu$) and ($m_{ee}$,$\sum m_\nu$) as obtained from the marginalization of $\chi^2_{\rm O} (m_{\nu_e},m_{ee}, \sum m_{\nu_i})$ with respect to the undisplayed parameter in each plot. In the figure we also show superimposed the single parameter 95\% bounds on $\sum m_{\nu_i}$ from the different cosmological analyses described in the previous section. The figure illustrates the well-known fact that currently, for either mass ordering, the results from neutrino oscillation experiments imply a lower bound on $m_{\nu_e}$. On the contrary, $m_{ee}$ is only bounded from below in the case of the inverted ordering, while full cancellation due to the unknown Majorana phases is still allowed for the normal ordering. \TABLE{ \begin{tabular}{|c|c||c|c|c|} \hline & & \multicolumn{3}{c|}{Cosmo+Oscillations}\\ & &\multicolumn{3}{c|}{95\% Ranges} \\ \hline Model & Observables & $m_{\nu_e}$ (eV) & $m_{ee}$ (eV) & $\Sigma m_\nu$ (eV) \\ \hline $\begin{array}{c} o\omega{\rm CDM} \\ +\Delta N_{\rm rel}+m_\nu \end{array}$ & {\footnotesize CMB+H0+SN+BAO} & $\begin{array} {l} {\rm N} \; [0.0047- 0.51]\\ {\rm I}\; [0.047- 0.51] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.00- 0.51]\\ {\rm I}\; [0.014- 0.51] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.056- 1.5]\\ {\rm I}\; [0.098- 1.5] \end{array}$ \\[+0.5cm] $\begin{array}{c} o\omega{\rm CDM} \\ +\Delta N_{\rm rel}+m_\nu \end{array}$ & {\footnotesize CMB+H0+SN+LSSPS} & $\begin{array} {l} {\rm N} \; [0.0047- 0.27]\\ {\rm I}\; [0.047- 0.27] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.00- 0.25]\\ {\rm I}\; [0.014- 0.25] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.056- 0.75]\\ {\rm I}\; [0.098- 0.76] \end{array}$ \\[+0.5cm] $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+H0+SN+BAO} & $\begin{array} {l} {\rm N} \; [0.0047- 0.20]\\ {\rm I}\; [0.048- 0.21] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.00- 0.20]\\ {\rm I}\; [0.014- 0.21] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.056- 0.61]\\ {\rm I}\; [0.097- 0.61] \end{array}$ \\[+0.5cm] $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+H0+SN+LSSPS} & $\begin{array} {l} {\rm N} \; [0.0047- 0.12]\\ {\rm I}\; [0.047- 0.12] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.00- 0.12]\\ {\rm I}\; [0.014- 0.12] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.056- 0.36]\\ {\rm I}\; [0.098- 0.36] \end{array}$ \\[+0.5cm] $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB (+SN)} & $\begin{array} {l} {\rm N} \; [0.0047- 0.40]\\ {\rm I}\; [0.047- 0.40] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.00- 0.40]\\ {\rm I}\; [0.014- 0.41] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.056- 1.2]\\ {\rm I}\; [0.098- 1.2] \end{array}$ \\[+0.5cm] $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+BAO} & $\begin{array} {l}
{\rm N} \; [0.0052- 0.25]\\ {\rm I}\; [0.047- 0.25] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.00- 0.25]\\ {\rm I}\; [0.014- 0.25] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.056- 0.75]\\ {\rm I}\; [0.099- 0.75] \end{array}$ \\[+0.5cm] $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+LSSPS} & $\begin{array} {l} {\rm N} \; [0.0047- 0.18]\\ {\rm I}\; [0.048- 0.19] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.00- 0.18]\\ {\rm I}\; [0.014- 0.19] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.056- 0.55]\\ {\rm I}\; [0.099- 0.55] \end{array}$ \\[+0.5cm] $\Lambda{\rm CDM} +m_\nu$ & {\footnotesize CMB+H0} & $\begin{array} {l} {\rm N} \; [0.0047- 0.14]\\ {\rm I}\; [0.047- 0.16] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.00- 0.14]\\ {\rm I}\; [0.014- 0.16] \end{array}$ & $\begin{array} {l} {\rm N} \; [0.056- 0.44]\\ {\rm I}\; [0.097- 0.45] \end{array}$ \\[+0.5cm] \hline \end{tabular} \caption{95\% allowed ranges for the different probes of the absolute neutrino mass scale from the global analysis of the cosmological data combined with the results from oscillation experiments. The analyses within $\Lambda{\rm CDM} +m_\nu$ models including only CMB data or CMB in combination with SN yield the same 95\% ranges. } \label{tab:mbeta} } \FIGURE[!h]{ \includegraphics[width=0.7\textwidth]{mbb.eps} \caption{ 95\% allowed regions (for 2 dof) in the planes ($m_{\nu_e}$,$\sum m_\nu$) and ($m_{ee}$,$\sum m_\nu$) from the global analysis of oscillation data (full regions). We also show superimposed the 95\% upper bounds on $\sum m_\nu$ from cosmological constraints for the different analyses as labeled in the figure.} \label{fig:mbeta} } In order to obtain the global combined ranges we first define a one-parameter equivalent $\chi^2_{\rm C} (\sum m_{\nu})$ function~\cite{foglimbeta} for a given cosmological analysis from the condition that it leads to the same CL intervals as the corresponding marginalized one-dimensional probability distribution function: \begin{equation} {\rm CL} =\frac{1}{\sqrt{2\pi}}\int_0^{\chi^2_{\rm C} (\sum m_{\nu})} \frac{e^{-x/2}}{\sqrt{x}}\, dx \end{equation} where CL is obtained from Eq.~\eqref{eq:1dimrange} with $\sum m_{\nu_i}= x^{\rm CL}_{i,\rm max}$ and $x^{\rm CL}_{i,\rm min}=0$ when the lower bound for that CL is $0$ (which implies that the function $\chi^2_{\rm C} (\sum m_{\nu})$ is single valued). When $x^{\rm CL}_{i,\rm min}\neq 0$ the function $\chi^2_{\rm C} (\sum m_{\nu})$ takes the same value at $\sum m_{\nu_i}= x^{\rm CL}_{i,\rm max}$ and $\sum m_{\nu_i}= x^{\rm CL}_{i,\rm min}$. Finally we construct \begin{equation} \chi^2_{\rm O+C} (m_{\nu_e},m_{ee}, \sum m_{\nu_i})= \chi^2_{\rm O} (m_{\nu_e},m_{ee}, \sum m_{\nu_i})+ \chi^2_{\rm C} (\sum m_{\nu}) \;, \end{equation} from which we obtain the 2$\sigma$ one-dimensional allowed ranges for $m_{\nu_e}$, $m_{ee}$, and $\sum m_{\nu_i}$ given in Table~\ref{tab:mbeta} from the condition \begin{equation} \Delta\chi^2_{\rm O+C}(m_{\nu_e})={\rm Min}_{(m_{ee}, \sum m_{\nu_i})} \left[ \chi^2_{\rm O+C}(m_{\nu_e},m_{ee}, \sum m_{\nu_i})\right] -\chi^2_{\rm O+C,min}<4 \; , \end{equation} and equivalently for $m_{ee}$ and $\sum m_{\nu_i}$. The results show that, even for the most restrictive analysis including LSSPS, part of the allowed range for $m_{\nu_e}$ in the context of the $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ cosmologies is within the reach of the KATRIN experiment. On the contrary this is not the case for $\Lambda{\rm CDM}+m_\nu$ models unless only the information from CMB and BAO (or SN) is included.
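Schematically, this profiling can be illustrated as follows (our toy sketch: quadratic $\chi^2$ surfaces stand in for the real tabulated $\chi^2_{\rm O}$ and $\chi^2_{\rm C}$ functions, which are not reproduced here; $\Delta\chi^2<4$ corresponds to $2\sigma$ for one parameter):
\begin{verbatim}
import numpy as np

def range_2sigma(chi2_O, chi2_C, grid_me, dchi2=4.0):
    """2-sigma range for m_nue: profile chi2_{O+C} over (m_ee, sum m_nu)
    at fixed m_nue, then keep the grid points with Delta chi2 < dchi2."""
    chi2 = chi2_O + chi2_C[None, None, :]   # chi2_O indexed (m_nue, m_ee, sum)
    delta = chi2.min(axis=(1, 2)) - chi2.min()
    allowed = grid_me[delta < dchi2]
    return allowed.min(), allowed.max()

# toy quadratic surfaces in place of the real tabulated functions
me = np.linspace(0.0, 0.5, 101)     # m_nue grid (eV)
mee = np.linspace(0.0, 0.5, 101)    # m_ee grid (eV)
sm = np.linspace(0.0, 1.5, 151)     # sum m_nu grid (eV)
ME, MEE, SM = np.meshgrid(me, mee, sm, indexing="ij")
chi2_O = ((MEE - 0.8 * ME) / 0.05) ** 2 + ((SM - 3.0 * ME) / 0.10) ** 2
chi2_C = (sm / 0.40) ** 2           # toy cosmological chi2_C(sum m_nu)
print(range_2sigma(chi2_O, chi2_C, me))
\end{verbatim}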
We also find that near-future neutrinoless double beta decay experiments can test some of the allowed ranges in all these scenarios. This will be complementary to the improvement in the expected sensitivity from upcoming cosmological probes such as the Planck mission \cite{planck}. \section{Summary} \label{sec:conclu} In this work we have studied the information on the absolute value of the neutrino mass which can be obtained from the analysis of the cosmological data in $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ cosmologies, in which, besides neutrino masses, one allows for non-vanishing curvature, dark energy with equation of state $\omega\neq -1$, and the presence of new particle physics whose effect on the present cosmological observations can be parametrized in terms of additional relativistic degrees of freedom. To break the degeneracies in these models, the information from at least four different cosmological probes must be combined. Thus we have performed analyses including the data from CMB experiments, the measurement of the present-day Hubble constant H0, the high-redshift Type Ia SN results, and the information from LSS surveys. We have compared the results of the analysis when the full shape information from the LSS matter power spectrum is included versus when only the corresponding distance measurement from the baryon acoustic oscillations is considered. Our results are summarized in Table~\ref{tab:1dim}. Because of the degeneracies present in these cosmologies one finds a degradation of the constraints on the {\sl standard} parameters (i.e.\ those of the $\Lambda$CDM model) when compared with the analysis performed within the $\Lambda$CDM priors for the same set of observables. Concerning the neutrino masses, we find that neither of the two analyses shows any evidence for neutrino mass and the best fit is obtained for $\sum m_\nu=0$. However the 95\% upper bound obtained when using BAO, $\sum m_\nu \leq 1.5$, is tightened by about a factor of 2, to $\sum m_\nu \leq 0.76$, by considering instead the full LSSPS. We have compared these results with those obtained from different analyses performed in the framework of $\Lambda{\rm CDM}+m_\nu$ models. The corresponding 95\% CL bounds are listed in Table~\ref{tab:sigma}. We find that for the same combination of observables CMB+H0+SN+BAO (CMB+H0+SN+LSSPS) the bound for a $\Lambda{\rm CDM}+m_\nu$ scenario is $\sum m_\nu \leq 0.61$ ($\sum m_\nu \leq 0.36$), which is a factor $\sim 3$ (2) tighter than the corresponding one obtained in $o\omega{\rm CDM}+\Delta N_{\rm rel}+m_\nu$ cosmologies. Finally we have statistically combined these results with the information on neutrino mass differences and mixing from the global analysis of neutrino oscillation experiments, and we have derived the presently allowed ranges for the two laboratory probes of the absolute scale of neutrino mass: the effective neutrino mass in single beta decay, $m_{\nu_e}$, and the effective Majorana neutrino mass in neutrinoless $\beta\beta$ decay, $m_{ee}$. These results can be used to directly address the capabilities of future $\beta$ and neutrinoless-$\beta\beta$ decay experiments to probe the allowed parameter space. \section*{Acknowledgments} We are especially indebted to J. Taron for his collaboration in the early stages of this project and for comments on the final version. We thank R. Jimenez, E. Nardi and L. Verde for comments.
This work is supported by Spanish MICINN grants 2007-66665-C02-01, FPA-2009-08958 and FPA-2009-09017 and consolider-ingenio 2010 grant CSD-2008-0037, by CSIC grant 200950I111, by CUR Generalitat de Catalunya grant 2009SGR502, by Comunidad Autonoma de Madrid through the HEPHACOS project P-ESP-00346, by USA-NSF grant PHY-0653342 and by EU grant EURONU. \bibliographystyle{JHEP}
\section{Introduction\label{sec1}} Assume $M$ and $N$ are smooth compact Riemannian manifolds without boundary and they are embedded into $\mathbb{R}^{l}$ and $\mathbb{R}^{\overline{l}}$ respectively. The following spaces are of interest in the calculus of variations:% \begin{align*} W^{1,2}\left( M,N\right) & =\left\{ u\in W^{1,2}\left( M,\mathbb{R}% ^{\overline{l}}\right) :u\left( x\right) \in N\text{ a.e. }x\in M\right\} ,\\ H_{W}^{1,2}\left( M,N\right) & =\left\{ u\in W^{1,2}\left( M,N\right) :\text{ there exists a sequence }u_{i}\in C^{\infty}\left( M,N\right) \right. \\ & \left. \text{such that }u_{i}\rightharpoonup u\text{ in }W^{1,2}\left( M,N\right) \right\} . \end{align*} For a brief history and detailed references on the study of analytical and topological issues related to these spaces, one may refer to \cite{HL1,HL2,PR}. In particular, it follows from theorem 7.1 of \cite{HL2} that a necessary condition for $H_{W}^{1,2}\left( M,N\right) =W^{1,2}\left( M,N\right) $ is that $M$ satisfies the $1$-extension property with respect to $N$ (see section 2.2 of \cite{HL2} for a definition). It was conjectured in section 7 of \cite{HL2} that the $1$-extension property is also sufficient for $H_{W}% ^{1,2}\left( M,N\right) =W^{1,2}\left( M,N\right) $. In \cite{H,PR}, it was shown that $H_{W}^{1,2}\left( M,N\right) =W^{1,2}\left( M,N\right) $ when $\pi_{1}\left( M\right) =0$ or $\pi_{1}\left( N\right) =0$. Note that if $\pi_{1}\left( M\right) =0$ or $\pi_{1}\left( N\right) =0$, then $M$ satisfies the $1$-extension property with respect to $N$. In section 8 of \cite{HL3}, it was proved that the above conjecture is true under the additional assumption that $N$ satisfies the $2$-vanishing condition. The main aim of the present article is to confirm the conjecture in its full generality. More precisely, we have \begin{theorem} \label{thm1.1}Let $M^{n}$ and $N$ be smooth compact Riemannian manifolds without boundary ($n\geq3$). Take a Lipschitz triangulation $h:K\rightarrow M$; then% \begin{align*} & H_{W}^{1,2}\left( M,N\right) \\ & =\left\{ u\in W^{1,2}\left( M,N\right) :u_{\#,2}\left( h\right) \text{ has a continuous extension to }M\text{ w.r.t. }N\right\} \\ & =\left\{ u\in W^{1,2}\left( M,N\right) :u\text{ may be connected to some smooth map}\right\} . \end{align*} In addition, if $\alpha\in\left[ M,N\right] $ satisfies $\left. \alpha\circ h\right\vert _{\left\vert K^{1}\right\vert }=u_{\#,2}\left( h\right) $, then we may find a sequence of smooth maps $u_{i}\in C^{\infty}\left( M,N\right) $ such that $u_{i}\rightharpoonup u$ in $W^{1,2}\left( M,N\right) $, $\left[ u_{i}\right] =\alpha$ and $du_{i}\rightarrow du$ a.e. \end{theorem} Here $u_{\#,2}\left( h\right) $ is the $1$-homotopy class defined by White \cite{W} (see also section 4 of \cite{HL2}) and $\left[ M,N\right] $ means all homotopy classes of maps from $M$ to $N$. It follows from Theorem \ref{thm1.1} that \begin{corollary} \label{cor1.1}Let $M^{n}$ and $N$ be smooth compact Riemannian manifolds without boundary and $n\geq3$. Then smooth maps are weakly sequentially dense in $W^{1,2}\left( M,N\right) $ if and only if $M$ satisfies the $1$-extension property with respect to $N$. \end{corollary} For a natural number $p\in\left[ 3,n-1\right] $, it remains a challenging open problem whether the weak sequential density of smooth maps in $W^{1,p}\left( M,N\right) $ is equivalent to the condition that $M$ satisfies the $(p-1)$-extension property with respect to $N$.
This was verified to be true under further topological assumptions on $N$ (see section 8 of \cite{HL3}). However, even for $W^{1,3}\left( S^{4},S^{2}\right) $, it is still not known whether smooth maps are weakly sequentially dense. Some very interesting recent work on this space can be found in \cite{HR}. The paper is organized as follows. In Section \ref{sec2}, we will present some technical lemmas. In Section \ref{sec3}, we will prove the above theorem and corollary. \textbf{Acknowledgments. }The research of the author is supported by National Science Foundation Grant DMS-0209504. \section{Some preparations\label{sec2}} The following local result, which was proved by Pakzad and Rivi\`{e}re in \cite{PR}, plays an important role in our discussion. \begin{theorem} [\cite{PR}]\label{thmPR}Let $N$ be a smooth compact Riemannian manifold. Assume $n\geq3 $, $B_{1}=B_{1}^{n}$, $f\in W^{1,2}\left( \partial B_{1},N\right) \cap C\left( \partial B_{1},N\right) $, $f\sim \operatorname{const}$, $u\in W^{1,2}\left( B_{1},N\right) $, $\left. u\right\vert _{\partial B_{1}}=f$; then there exists a sequence $u_{i}\in W^{1,2}\left( B_{1},N\right) \cap C\left( \overline{B}_{1},N\right) $ such that $\left. u_{i}\right\vert _{\partial B_{1}}=f$, $u_{i}\rightharpoonup u$ in $W^{1,2}\left( B_{1},N\right) $ and $du_{i}\rightarrow du$ a.e. In addition, if $v\in W^{1,2}\left( B_{2}\backslash B_{1},N\right) \cap C\left( \overline{B}_{2}\backslash B_{1},N\right) $ satisfies $\left. v\right\vert _{\partial B_{1}}=f$ and $\left. v\right\vert _{\partial B_{2}% }\equiv\operatorname{const}$, then we may estimate% \[ \int_{B_{1}}\left\vert du_{i}\right\vert ^{2}d\mathcal{H}^{n}\leq c\left( n,N\right) \left( \int_{B_{1}}\left\vert du\right\vert ^{2}d\mathcal{H}% ^{n}+\int_{B_{2}\backslash B_{1}}\left\vert dv\right\vert ^{2}d\mathcal{H}% ^{n}\right) . \] \end{theorem} For convenience, we will use the notation and concepts of sections 2, 3 and 4 of \cite{HL2}. The following lemma is a rough version of Luckhaus's lemma \cite{L}. For the reader's convenience, we sketch a proof of this simpler version using results from section 3 of \cite{HL2}. \begin{lemma} \label{lem2.1}Assume $M^{n}$ and $N$ are smooth compact Riemannian manifolds without boundary. Let $e>0$, $0<\delta<1$, $A>0$; then there exists an $\varepsilon=\varepsilon\left( e,\delta,A,M,N\right) >0$ such that for any $u,v\in W^{1,2}\left( M,N\right) $ with $\left\vert du\right\vert _{L^{2}\left( M\right) },\left\vert dv\right\vert _{L^{2}\left( M\right) }\leq A$ and $\left\vert u-v\right\vert _{L^{2}\left( M\right) }% \leq\varepsilon$, we may find a $w\in W^{1,2}\left( M\times\left( 0,\delta\right) ,N\right) $ such that, in the trace sense, $w\left( x,0\right) =u\left( x\right) $, $w\left( x,\delta\right) =v\left( x\right) $ a.e. $x\in M$ and% \[ \left\vert dw\right\vert _{L^{2}\left( M\times\left( 0,\delta\right) \right) }\leq c\left( M\right) \sqrt{\delta}\left( \left\vert du\right\vert _{L^{2}\left( M\right) }+\left\vert dv\right\vert _{L^{2}\left( M\right) }+e\right) . \] \end{lemma} \begin{proof} Let $\varepsilon_{M}>0$ be a small positive number such that% \[ V_{2\varepsilon_{M}}\left( M\right) =\left\{ x\in\mathbb{R}^{l}:d\left( x,M\right) <2\varepsilon_{M}\right\} \] is a tubular neighborhood of $M$. Let $\pi_{M}:V_{2\varepsilon_{M}}\left( M\right) \rightarrow M$ be the nearest point projection. Similarly we have $\varepsilon_{N}$, $V_{2\varepsilon_{N}}\left( N\right) $ and $\pi_{N}$ for $N$. Choose a Lipschitz cubeulation $h:K\rightarrow M$.
We may assume each cell in $K$ is a cube of unit size. For $\xi\in B_{\varepsilon_{M}}^{l}$, $x\in\left\vert K\right\vert $, let $h_{\xi}\left( x\right) =\pi_{M}\left( h\left( x\right) +\xi\right) $. Assume $\varepsilon_{M}$ is small enough such that all $h_{\xi}$'s are bi-Lipschitz maps. Set $m=\left[ \frac {1}{\delta}\right] +1$, using $\left[ 0,1\right] =\cup_{i=1}^{m}\left[ \frac{i-1}{m},\frac{i}{m}\right] $, we may divide each $k$-cube in $K$ into $m^{k}$ small cubes. In particular, we get a subdivision of $K$, called $K_{m}$. It follows from section 3 of \cite{HL2} that for a.e. $\xi\in B_{\varepsilon_{M}}^{l}$, $u\circ h_{\xi},v\circ h_{\xi}\in\mathcal{W}% ^{1,2}\left( K_{m},N\right) $. Applying the estimates in section 3 of \cite{HL2} to each unit size $k$-cube in $\left\vert K_{m}^{k}\right\vert $, we get% \begin{align*} \int_{B_{\varepsilon_{M}}^{l}}d\mathcal{H}^{l}\left( \xi\right) \int_{\left\vert K_{m}^{k}\right\vert }\left\vert d\left( \left. u\circ h_{\xi}\right\vert _{\left\vert K_{m}^{k}\right\vert }\right) \right\vert ^{2}d\mathcal{H}^{k} & \leq c\left( M\right) \delta^{k-n}\left\vert du\right\vert _{L^{2}\left( M\right) }^{2},\\ \int_{B_{\varepsilon_{M}}^{l}}d\mathcal{H}^{l}\left( \xi\right) \int_{\left\vert K_{m}^{k}\right\vert }\left\vert d\left( \left. v\circ h_{\xi}\right\vert _{\left\vert K_{m}^{k}\right\vert }\right) \right\vert ^{2}d\mathcal{H}^{k} & \leq c\left( M\right) \delta^{k-n}\left\vert dv\right\vert _{L^{2}\left( M\right) }^{2}, \end{align*} and% \begin{align*} & \left( \int_{B_{\varepsilon_{M}}^{l}}\left\vert u\circ h_{\xi}-v\circ h_{\xi}\right\vert _{L^{\infty}\left( \left\vert K_{m}^{1}\right\vert \right) }^{2}d\mathcal{H}^{l}\left( \xi\right) \right) ^{\frac{1}{2}}\\ & \leq c\left( \delta,M\right) \left( \left\vert d\left( u-v\right) \right\vert _{L^{2}\left( M\right) }^{\frac{3}{4}}\left\vert u-v\right\vert _{L^{2}\left( M\right) }^{\frac{1}{4}}+\left\vert u-v\right\vert _{L^{2}\left( M\right) }\right) \\ & \leq c\left( \delta,A,M\right) \varepsilon^{\frac{1}{4}}. \end{align*} By the mean value inequality, we may find a $\xi\in B_{\varepsilon_{M}}^{l} $ such that $u\circ h_{\xi},v\circ h_{\xi}\in\mathcal{W}^{1,2}\left( K_{m},N\right) $,% \[ \left\vert u\circ h_{\xi}-v\circ h_{\xi}\right\vert _{L^{\infty}\left( \left\vert K_{m}^{1}\right\vert \right) }\leq c\left( \delta,A,M\right) \varepsilon^{\frac{1}{4}}<\varepsilon_{N}\quad\text{when }\varepsilon\text{ is small enough,}% \] and% \begin{align*} & \int_{\left\vert K_{m}^{k}\right\vert }\left[ \left\vert d\left( \left. u\circ h_{\xi}\right\vert _{\left\vert K_{m}^{k}\right\vert }\right) \right\vert ^{2}+\left\vert d\left( \left. v\circ h_{\xi}\right\vert _{\left\vert K_{m}^{k}\right\vert }\right) \right\vert ^{2}\right] d\mathcal{H}^{k}\\ & \leq c\left( M\right) \delta^{k-n}\left( \left\vert du\right\vert _{L^{2}\left( M\right) }^{2}+\left\vert dv\right\vert _{L^{2}\left( M\right) }^{2}\right) \end{align*} for $1\leq k\leq n$. Fix a $\eta\in C^{\infty}\left( \mathbb{R}% ,\mathbb{R}\right) $ such that $0\leq\eta\leq1$, $\left. \eta\right\vert _{\left( -\infty,\frac{1}{3}\right) }=1$ and $\left. \eta\right\vert _{\left( \frac{2}{3},\infty\right) }=0$. Letting $f=u\circ h_{\xi}$, $g=v\circ h_{\xi}$, we will define $\phi:\left\vert K\right\vert \times\left[ 0,\delta\right] \rightarrow N$ inductively. First set $\phi\left( x,0\right) =f\left( x\right) $ and $\phi\left( x,\delta\right) =g\left( x\right) $ for $x\in\left\vert K\right\vert $. 
For $\Delta\in K_{m}% ^{1}\backslash K_{m}^{0}$, on $\Delta\times\left[ 0,\delta\right] $, we let \[ \phi\left( x,t\right) =\pi_{N}\left( \eta\left( \frac{t}{\delta}\right) f\left( x\right) +\left( 1-\eta\left( \frac{t}{\delta}\right) \right) g\left( x\right) \right) \quad x\in\Delta,0\leq t\leq\delta. \] For $\Delta\in K_{m}^{2}\backslash K_{m}^{1}$, let $y_{\Delta}$ be the center of $\Delta$, and define $\phi$ on $\Delta\times\left[ 0,\delta\right] $ as the homogeneous degree zero extension of $\left. \phi\right\vert _{\partial\left( \Delta\times\left[ 0,\delta\right] \right) } $ with respect to $\left( y_{\Delta},\frac{\delta}{2}\right) $. Next we handle each $3$-cube, $4$-cube, $\cdots$, $n$-cube in a similar way. Calculations show that% \begin{align*} & \int_{\left\vert K\right\vert \times\left[ 0,\delta\right] }\left\vert d\phi\right\vert ^{2}d\mathcal{H}^{n+1}\\ & \leq c\left( n\right) \sum_{k=1}^{n}\delta^{n+1-k}\int_{\left\vert K_{m}^{k}\right\vert }\left[ \left\vert d\left( \left. u\circ h_{\xi }\right\vert _{\left\vert K_{m}^{k}\right\vert }\right) \right\vert ^{2}+\left\vert d\left( \left. v\circ h_{\xi}\right\vert _{\left\vert K_{m}^{k}\right\vert }\right) \right\vert ^{2}\right] d\mathcal{H}% ^{k}+c\left( \delta,A,M\right) \varepsilon^{\frac{1}{2}}\\ & \leq c\left( M\right) \delta\left( \left\vert du\right\vert _{L^{2}\left( M\right) }^{2}+\left\vert dv\right\vert _{L^{2}\left( M\right) }^{2}+e^{2}\right) \end{align*} when $\varepsilon$ is small enough. Finally $w:M\times\left[ 0,\delta\right] \rightarrow N$, defined by $w\left( x,t\right) =\phi\left( h_{\xi}% ^{-1}\left( x\right) ,t\right) $, is the needed map. \end{proof} \begin{lemma} \label{lem2.2}Assume $N$ is a smooth compact Riemannian manifold, $n\geq2$, $B_{1}=B_{1}^{n}$, $u,v\in W^{1,2}\left( B_{1},N\right) $ such that $\left. u\right\vert _{\partial B_{1}}=\left. v\right\vert _{\partial B_{1}}$. Define $w:B_{1}\times\left( 0,1\right) \rightarrow N$ by% \[ w\left( x,t\right) =\left\{ \begin{array} [c]{cc}% u\left( x\right) , & x\in B_{1}\backslash B_{t};\\ u\left( \frac{t^{2}}{\left\vert x\right\vert }\frac{x}{\left\vert x\right\vert }\right) , & x\in B_{t}\backslash B_{t^{2}};\\ v\left( \frac{x}{t^{2}}\right) , & x\in B_{t^{2}}; \end{array} \right. \] then $w\in W^{1,2}\left( B_{1}\times\left( 0,1\right) ,N\right) $ and% \[ \left\vert dw\right\vert _{L^{2}\left( B_{1}\times\left( 0,1\right) \right) }\leq c\left( n\right) \left( \left\vert du\right\vert _{L^{2}\left( B_{1}\right) }+\left\vert dv\right\vert _{L^{2}\left( B_{1}\right) }\right) . \] \end{lemma} \begin{proof} Note that% \[ \left\vert dw\left( x,t\right) \right\vert \leq\left\{ \begin{array} [c]{cc}% \left\vert du\left( x\right) \right\vert , & t<\left\vert x\right\vert ;\\ c\left( n\right) \left\vert du\left( \frac{t^{2}}{\left\vert x\right\vert }\frac{x}{\left\vert x\right\vert }\right) \right\vert \frac{t^{2}% }{\left\vert x\right\vert ^{2}}, & t^{2}<\left\vert x\right\vert <t;\\ c\left( n\right) \left\vert dv\left( \frac{x}{t^{2}}\right) \right\vert \frac{1}{t^{2}}, & \left\vert x\right\vert <t^{2}. \end{array} \right. 
\] Hence% \begin{align*} & \int_{\substack{0<t<1 \\t^{2}<\left\vert x\right\vert <t}}\left\vert dw\left( x,t\right) \right\vert ^{2}d\mathcal{H}^{n+1}\left( x,t\right) \\ & \leq c\left( n\right) \int_{0}^{1}dt\int_{t^{2}}^{t}dr\int_{\partial B_{r}}\left\vert du\left( \frac{t^{2}}{r^{2}}x\right) \right\vert ^{2}% \frac{t^{4}}{r^{4}}d\mathcal{H}^{n-1}\left( x\right) \\ & =c\left( n\right) \int_{0}^{1}dt\int_{t}^{1}ds\int_{\partial B_{s}}% \frac{t^{2\left( n-2\right) }}{s^{2\left( n-2\right) }}\left\vert du\left( y\right) \right\vert ^{2}d\mathcal{H}^{n-1}\left( y\right) \\ & \leq c\left( n\right) \left\vert du\right\vert _{L^{2}\left( B_{1}\right) }^{2}, \end{align*} and% \begin{align*} & \int_{\substack{0<t<1 \\\left\vert x\right\vert <t^{2}}}\left\vert dw\left( x,t\right) \right\vert ^{2}d\mathcal{H}^{n+1}\left( x,t\right) \\ & \leq c\left( n\right) \int_{0}^{1}dt\int_{B_{t^{2}}}\left\vert dv\left( \frac{x}{t^{2}}\right) \right\vert ^{2}\frac{1}{t^{4}}d\mathcal{H}^{n}\left( x\right) \\ & \leq c\left( n\right) \left\vert dv\right\vert _{L^{2}\left( B_{1}\right) }^{2}. \end{align*} The lemma follows. \end{proof} \section{Identifying weak limits of smooth maps\label{sec3}} In this section, we shall prove Theorem \ref{thm1.1} and Corollary \ref{cor1.1}. \begin{proof} [Proof of Theorem \ref{thm1.1}]Let $h:K\rightarrow M$ be a Lipschitz cubeulation. We may assume each cell in $K$ is a cube of unit size. Let $\varepsilon_{M}>0$ be a small number such that% \[ V_{2\varepsilon_{M}}\left( M\right) =\left\{ x\in\mathbb{R}^{l}:d\left( x,M\right) <2\varepsilon_{M}\right\} \] is a tubular neighborhood of $M$. Let $\pi_{M}:V_{2\varepsilon_{M}}\left( M\right) \rightarrow M$ denote the nearest point projection. For $\xi\in B_{\varepsilon_{M}}^{l}$, we let $h_{\xi}\left( x\right) =\pi_{M}\left( h\left( x\right) +\xi\right) $ for $x\in\left\vert K\right\vert $, the polytope of $K$. We may assume $\varepsilon_{M}$ is small enough such that all $h_{\xi}$ are bi-Lipschitz maps. Replacing $h$ by $h_{\xi}$ when necessary, we may assume $f=u\circ h\in\mathcal{W}^{1,2}\left( K,N\right) $. Then we may find a $g\in C\left( \left\vert K\right\vert ,N\right) \cap\mathcal{W}% ^{1,2}\left( K,N\right) $ such that $\left[ g\circ h^{-1}\right] =\alpha$ and $\left. g\right\vert _{\left\vert K^{1}\right\vert }=\left. f\right\vert _{\left\vert K^{1}\right\vert }$ (see the proof of theorem 5.5 and theorem 6.1 in \cite{HL3}). For each cell $\Delta\in K$, let $y_{\Delta}$ be the center of $\Delta$. For $x\in\Delta$, let $\left\vert x\right\vert _{\Delta}$ be the Minkowski norm with respect to $y_{\Delta}$, that is% \[ \left\vert x\right\vert _{\Delta}=\inf\left\{ t>0:y_{\Delta}+\frac {x-y_{\Delta}}{t}\in\Delta\right\} . \] \noindent\textbf{Step 1: }For every $\Delta\in K^{2}\backslash K^{1}$, we may find a sequence $\phi_{i}\in C\left( \Delta,N\right) \cap W^{1,2}\left( \Delta,N\right) $ such that $\left. \phi_{i}\right\vert _{\partial\Delta }=\left. g\right\vert _{\partial\Delta}$, $\phi_{i}\rightarrow\left. f\right\vert _{\Delta}$ in $W^{1,2}\left( \Delta,N\right) $ and $d\phi _{i}\rightarrow d\left( \left. f\right\vert _{\Delta}\right) $ a.e. (see lemma 4.4 in \cite{HL2}).
For $x\in\Delta$, let% \[ f_{i}\left( x\right) =\left\{ \begin{array} [c]{cc}% \phi_{i}\left( x\right) , & \left\vert x\right\vert _{\Delta}\geq\frac {1}{2^{i}};\\ \phi_{i}\left( y_{\Delta}+\frac{1}{2^{2i}\left\vert x\right\vert _{\Delta}% }\frac{x-y_{\Delta}}{\left\vert x\right\vert _{\Delta}}\right) , & \frac {1}{2^{2i}}\leq\left\vert x\right\vert _{\Delta}\leq\frac{1}{2^{i}};\\ g\left( y_{\Delta}+2^{2i}\left( x-y_{\Delta}\right) \right) , & \left\vert x\right\vert _{\Delta}\leq\frac{1}{2^{2i}}. \end{array} \right. \] It is clear that $f_{i}\rightharpoonup\left. f\right\vert _{\Delta}$ in $W^{1,2}\left( \Delta,N\right) $, $df_{i}\rightarrow d\left( \left. f\right\vert _{\Delta}\right) $ a.e. on $\Delta$,% \[ \left\vert df_{i}\right\vert _{L^{2}\left( \Delta\right) }\leq c\cdot\left( \left\vert d\phi_{i}\right\vert _{L^{2}\left( \Delta\right) }+\left\vert d\left( \left. g\right\vert _{\Delta}\right) \right\vert _{L^{2}\left( \Delta\right) }\right) \leq c\left( f,g\right) \] and $f_{i}\in C\left( \left\vert K^{2}\right\vert ,N\right) $. In addition, if we define $h_{2,i}:\Delta\times\left[ 0,1\right] \rightarrow N$ by% \[ h_{2,i}\left( x,t\right) =\left\{ \begin{array} [c]{cc}% \phi_{i}\left( x\right) , & \left\vert x\right\vert _{\Delta}\geq\frac {1}{2^{i}}+\frac{2^{i}-1}{2^{i}}t;\\ \phi_{i}\left( y_{\Delta}+\frac{\left( \frac{1}{2^{i}}+\frac{2^{i}-1}{2^{i}% }t\right) ^{2}}{\left\vert x\right\vert _{\Delta}}\frac{x-y_{\Delta}% }{\left\vert x\right\vert _{\Delta}}\right) , & \left( \frac{1}{2^{i}}% +\frac{2^{i}-1}{2^{i}}t\right) ^{2}\leq\left\vert x\right\vert _{\Delta}% \leq\frac{1}{2^{i}}+\frac{2^{i}-1}{2^{i}}t;\\ g\left( y_{\Delta}+\frac{x-y_{\Delta}}{\left( \frac{1}{2^{i}}+\frac{2^{i}% -1}{2^{i}}t\right) ^{2}}\right) , & \left\vert x\right\vert _{\Delta}% \leq\left( \frac{1}{2^{i}}+\frac{2^{i}-1}{2^{i}}t\right) ^{2}. \end{array} \right. \] Then by Lemma \ref{lem2.2}, we know $h_{2,i}\in W^{1,2}\left( \Delta \times\left[ 0,1\right] ,N\right) $, \[ \left\vert dh_{2,i}\right\vert _{L^{2}\left( \Delta\times\left[ 0,1\right] \right) }\leq c\cdot\left( \left\vert d\phi_{i}\right\vert _{L^{2}\left( \Delta\right) }+\left\vert d\left( \left. g\right\vert _{\Delta}\right) \right\vert _{L^{2}\left( \Delta\right) }\right) \leq c\left( f,g\right) \] and\ $h_{2,i}\in C\left( \left\vert K^{2}\right\vert \times\left[ 0,1\right] ,N\right) $. \noindent\textbf{Step 2: }Assume for some $2\leq k\leq n-1$, we have a sequence $f_{i}\in C\left( \left\vert K^{k}\right\vert ,N\right) \cap\mathcal{W}^{1,2}\left( K^{k},N\right) $ and $h_{k,i}\in C\left( \left\vert K^{k}\right\vert \times\left[ 0,1\right] ,N\right) $ such that for each $\Delta\in K^{k}$, $f_{i}\rightharpoonup\left. f\right\vert _{\Delta}$ in $W^{1,2}\left( \Delta,N\right) $, $h_{k,i}\in W^{1,2}\left( \Delta\times\left[ 0,1\right] ,N\right) $,% \begin{equation} \left\vert d\left( \left. f_{i}\right\vert _{\Delta}\right) \right\vert _{L^{2}\left( \Delta\right) }\leq c\left( f,g\right) ,\quad\left\vert dh_{k,i}\right\vert _{L^{2}\left( \Delta\times\left[ 0,1\right] \right) }\leq c\left( f,g\right) \label{eq3.1}% \end{equation} and $h_{k,i}\left( x,0\right) =f_{i}\left( x\right) $, $h_{k,i}\left( x,1\right) =g\left( x\right) $ for $x\in\left\vert K^{k}\right\vert $. Since for every $\Delta\in K^{k+1}\backslash K^{k}$, $f_{i}\rightharpoonup \left. 
f\right\vert _{\partial\Delta}$ in $W^{1,2}\left( \partial \Delta,N\right) $, for fixed $j$, by Lemma \ref{lem2.1} we may find an $n_{j}\geq j$ such that for each $\Delta\in K^{k+1}\backslash K^{k}$, there exists a $w_{j}\in W^{1,2}\left( \partial\Delta\times\left[ 0,2^{-j}\right] ,N\right) $ with $w_{j}\left( x,0\right) =f\left( x\right) $, $w_{j}\left( x,\frac{1}{2^{j}}\right) =f_{n_{j}}\left( x\right) $ and% \[ \left\vert dw_{j}\right\vert _{L^{2}\left( \partial\Delta\times\left( 0,\frac{1}{2^{j}}\right) \right) }\leq\frac{c\left( n\right) }{2^{\frac {j}{2}}}\left( \left\vert d\left( \left. f\right\vert _{\partial\Delta }\right) \right\vert _{L^{2}\left( \partial\Delta\right) }+\left\vert df_{n_{j}}\right\vert _{L^{2}\left( \partial\Delta\right) }+1\right) \leq\frac{c\left( f,g\right) }{2^{\frac{j}{2}}}. \] Without loss of generality, we may replace $f_{i}$ by $f_{n_{i}}$ and $h_{k,i}$ by $h_{k,n_{i}}$. Fix a $\Delta\in K^{k+1}\backslash K^{k}$. For $x\in\Delta$, let% \[ \psi_{i}\left( x\right) =\left\{ \begin{array} [c]{cc}% f\left( y_{\Delta}+\frac{2^{i}\left( x-y_{\Delta}\right) }{2^{i}-1}\right) , & \left\vert x\right\vert _{\Delta}\leq\frac{2^{i}-1}{2^{i}};\\ w_{i}\left( y_{\Delta}+\frac{x-y_{\Delta}}{\left\vert x\right\vert _{\Delta}% },\left\vert x\right\vert _{\Delta}-\frac{2^{i}-1}{2^{i}}\right) , & \frac{2^{i}-1}{2^{i}}\leq\left\vert x\right\vert _{\Delta}\leq1. \end{array} \right. \] Then $\left. \psi_{i}\right\vert _{\left\vert K^{k}\right\vert }=f_{i}$ and $\psi_{i}\rightarrow\left. f\right\vert _{\Delta}$ in $W^{1,2}\left( \Delta,N\right) $ as $i\rightarrow\infty$ for each $\Delta\in K^{k+1}% \backslash K^{k}$. By Theorem \ref{thmPR} and (\ref{eq3.1}) (use $h_{k,i}$ and $g$ for the needed \textquotedblleft$v$\textquotedblright\ in Theorem \ref{thmPR}; one may refer to lemma 9.8 of \cite{HL3}), for every $\Delta\in K^{k+1}\backslash K^{k}$, we may find $\phi_{i}\in C\left( \Delta,N\right) \cap W^{1,2}\left( \Delta,N\right) $ such that $\left. \phi_{i}\right\vert _{\partial\Delta}=\left. f_{i}\right\vert _{\partial\Delta}$, $\left\vert \phi_{i}-\psi_{i}\right\vert _{L^{2}\left( \Delta\right) }<\frac{1}{2^{i}}$, $\left\vert d\phi_{i}\right\vert _{L^{2}\left( \Delta\right) }\leq c\left( f,g\right) $ and \[ \int_{M}\frac{\left\vert d\phi_{i}-d\psi_{i}\right\vert }{1+\left\vert d\phi_{i}-d\psi_{i}\right\vert }d\mathcal{H}^{k+1}\leq\frac{1}{2^{i}}. \] After passing to a subsequence, we may assume $d\phi_{i}\rightarrow d\left( \left. f\right\vert _{\Delta}\right) $ a.e. on $\Delta$. Fix a $\Delta\in K^{k+1}\backslash K^{k}$; for any $x\in\Delta$, define% \begin{align*} g_{k+1,i}\left( x\right) & =\left\{ \begin{array} [c]{cc}% h_{k,i}\left( y_{\Delta}+\frac{x-y_{\Delta}}{\left\vert x\right\vert _{\Delta}},1+2\left( \frac{1}{2}-\left\vert x\right\vert _{\Delta}\right) \right) , & \frac{1}{2}\leq\left\vert x\right\vert _{\Delta}\leq1;\\ g\left( y_{\Delta}+2\left( x-y_{\Delta}\right) \right) , & \left\vert x\right\vert _{\Delta}\leq\frac{1}{2}, \end{array} \right. \\ f_{i}\left( x\right) & =\left\{ \begin{array} [c]{cc}% \phi_{i}\left( x\right) , & \left\vert x\right\vert _{\Delta}\geq\frac {1}{2^{i}};\\ \phi_{i}\left( y_{\Delta}+\frac{1}{2^{2i}\left\vert x\right\vert _{\Delta}% }\frac{x-y_{\Delta}}{\left\vert x\right\vert _{\Delta}}\right) , & \frac {1}{2^{2i}}\leq\left\vert x\right\vert _{\Delta}\leq\frac{1}{2^{i}};\\ g_{k+1,i}\left( y_{\Delta}+2^{2i}\left( x-y_{\Delta}\right) \right) , & \left\vert x\right\vert _{\Delta}\leq\frac{1}{2^{2i}}, \end{array} \right.
\\ \widetilde{h}_{k+1,i}\left( x,t\right) & =\left\{ \begin{array} [c]{cc}% \phi_{i}\left( x\right) , & \left\vert x\right\vert _{\Delta}\geq\frac {1}{2^{i}}+\frac{2^{i}-1}{2^{i}}t;\\ \phi_{i}\left( y_{\Delta}+\frac{\left( \frac{1}{2^{i}}+\frac{2^{i}-1}{2^{i}% }t\right) ^{2}}{\left\vert x\right\vert _{\Delta}}\frac{x-y_{\Delta}% }{\left\vert x\right\vert _{\Delta}}\right) , & \left( \frac{1}{2^{i}}% +\frac{2^{i}-1}{2^{i}}t\right) ^{2}\leq\left\vert x\right\vert _{\Delta}% \leq\frac{1}{2^{i}}+\frac{2^{i}-1}{2^{i}}t;\\ g_{k+1,i}\left( y_{\Delta}+\frac{x-y_{\Delta}}{\left( \frac{1}{2^{i}}% +\frac{2^{i}-1}{2^{i}}t\right) ^{2}}\right) , & \left\vert x\right\vert _{\Delta}\leq\left( \frac{1}{2^{i}}+\frac{2^{i}-1}{2^{i}}t\right) ^{2}, \end{array} \right. \\ \widetilde{\widetilde{h}}_{k+1,i}\left( x,t\right) & =\left\{ \begin{array} [c]{cc}% h_{k,i}\left( y_{\Delta}+\frac{x-y_{\Delta}}{\left\vert x\right\vert _{\Delta}},1+2\left( \frac{1+t}{2}-\left\vert x\right\vert _{\Delta}\right) \right) , & \frac{1+t}{2}\leq\left\vert x\right\vert _{\Delta}\leq1;\\ g\left( y_{\Delta}+\frac{2}{1+t}\left( x-y_{\Delta}\right) \right) , & \left\vert x\right\vert _{\Delta}\leq\frac{1+t}{2}, \end{array} \right. \end{align*} and% \[ h_{k+1,i}\left( x,t\right) =\left\{ \begin{array} [c]{cc}% \widetilde{h}_{k+1,i}\left( x,2t\right) , & 0\leq t\leq\frac{1}{2};\\ \widetilde{\widetilde{h}}_{k+1,i}\left( x,2t-1\right) , & \frac{1}{2}\leq t\leq1. \end{array} \right. \] Simple calculations show that for any $\Delta\in K^{k+1}\backslash K^{k}$, $f_{i}\rightharpoonup\left. f\right\vert _{\Delta}$ in $W^{1,2}\left( \Delta,N\right) $, $df_{i}\rightarrow d\left( \left. f\right\vert _{\Delta }\right) $ a.e. on $\Delta$, $h_{k+1,i}\in W^{1,2}\left( \Delta\times\left[ 0,1\right] ,N\right) $,% \[ \left\vert df_{i}\right\vert _{L^{2}\left( \Delta\right) }\leq c\left( f,g\right) ,\quad\left\vert dh_{k+1,i}\right\vert _{L^{2}\left( \Delta \times\left[ 0,1\right] \right) }\leq c\left( f,g\right) \] and $h_{k+1,i}\left( x,0\right) =f_{i}\left( x\right) $, $h_{k+1,i}\left( x,1\right) =g\left( x\right) $ for $x\in\left\vert K^{k+1}\right\vert $. Hence we finish when we reach $f_{i}\in C\left( \left\vert K\right\vert ,N\right) \cap\mathcal{W}^{1,2}\left( K,N\right) $ and $h_{n,i}\in C\left( \left\vert K\right\vert \times\left[ 0,1\right] ,N\right) $. Let $v_{i}=f_{i}\circ h^{-1}$. Then it is clear that $v_{i}\in C\left( M,N\right) \cap W^{1,2}\left( M,N\right) $, $\left[ v_{i}\right] =\alpha $, $\left\vert v_{i}-u\right\vert _{L^{2}\left( M\right) }\rightarrow0$, $\left\vert dv_{i}\right\vert _{L^{2}\left( M\right) }\leq c\left( u,g\right) $ and $dv_{i}\rightarrow du$ a.e. on $M$. Hence, we may find $u_{i}\in C^{\infty}\left( M,N\right) $ such that $\left\vert u_{i}% -u\right\vert _{L^{2}\left( M\right) }\rightarrow0$, $\left\vert du_{i}\right\vert _{L^{2}\left( M\right) }\leq c\left( u,g\right) $, $\left[ u_{i}\right] =\alpha$ and $du_{i}\rightarrow du$ a.e. on $M$. In particular, this shows% \[ H_{W}^{1,2}\left( M,N\right) \supset\left\{ u\in W^{1,2}\left( M,N\right) :u_{\#,2}\left( h\right) \text{ has a continuous extension to }M\text{ w.r.t. }N\right\} . \] The other direction of inclusion was proved in section 7 of \cite{HL2}. 
To see% \[ H_{W}^{1,2}\left( M,N\right) =\left\{ u\in W^{1,2}\left( M,N\right) :u\text{ may be connected to some smooth maps}\right\} , \] we only need to use the equality proved above and proposition 5.2 of \cite{HL2}, which shows% \begin{align*} & \left\{ u\in W^{1,2}\left( M,N\right) :u_{\#,2}\left( h\right) \text{ has a continuous extension to }M\text{ w.r.t. }N\right\} \\ & =\left\{ u\in W^{1,2}\left( M,N\right) :u\text{ may be connected to some smooth maps}\right\} . \end{align*} \end{proof} We remark that many constructions above are motivated by sections 5 and 6 of \cite{HL3}. \begin{proof} [Proof of Corollary \ref{cor1.1}]This follows from Theorem \ref{thm1.1} and corollary 5.4 of \cite{HL2}. \end{proof}
\section{Introduction} The value of explicit software architecture has been increasingly recognized for software maintenance and evolution activities \cite{link2019value}. In particular, architecture descriptions relating coarse-grained programming elements have been found to be a useful tool to effectively communicate system functionality and architectural decisions \cite{pacheco2018designing,venters2018software}. These descriptions also support the dependency analysis which drives the task of software modernization \cite{escobar2016towards}. Despite the numerous benefits, a legacy or open-source software system often lacks such architecture descriptions. Moreover, when such architecture descriptions are available, they are often not aligned with the latest version of the system implementation \cite{shahbazian2018recovering}. In such situations, a light-weight architecture recovery approach which approximately represents the true architecture of a system may be more convenient than sophisticated architecture recovery techniques. Such a light-weight approach should quickly extract the information necessary to build architecture descriptions so that it can provide much-needed assistance to software architects dealing with re-engineering and modernization of existing systems, thus increasing their productivity. With this intent, this paper presents an architecture recovery approach based on {\em centrality measures} from the theory of Social Network Analysis \cite{papacharissi2009virtual,albert2002statistical}. Three observations drove the rationale behind using centrality measures for architecture extraction: (i) Most of these measures provide a highly intuitive and computationally simple way to analyze interactions when a graph represents the structure of a system. (ii) These measures quantify the structure of a system at multiple levels, i.e., at a particular node level, in relation to other nodes in the graph, and at a group of nodes or communities. (iii) These measures support the development of data-driven approaches to architecture recovery. The centrality measures-based approach presented in this paper recovers architecture descriptions in two phases. In the first phase, a centrality score is assigned to each program element. We assume that the system functionality is decomposed among multiple layers, and so in the second phase, a layer is assigned to each program element. The paper primarily contributes to the existing knowledge base of the architecture recovery domain in the following ways. (1) The paper demonstrates the use of centrality measures in recovering high-level architecture descriptions. (2) The paper describes a data-driven approach to architecture recovery using supervised classification algorithms. (3) The paper presents an evaluation of supervised classification algorithms in extracting architectural descriptions. The rest of the paper is organized as follows: The centrality measures used in the paper are defined in Section II. Section III describes the central elements of the approach. The algorithmic and data-driven approaches to the problem of layer assignment are explained in Section IV. The results and evaluation of the approach are presented in Section V. Section VI puts our approach in the context of existing approaches by discussing its features in relation to them. Finally, the paper concludes in Section VII.
\begin{figure*}[t] \centering \caption{An example of class dependencies and their centrality scores (ind: in-degree, outd: out-degree, deg: degree, bet: betweenness, clos: closeness, eig: eigenvector).} \label{f1} \begin{tabular}{c} \includegraphics[scale=0.11]{ExampleLayering.png} \end{tabular} \begin{tabular}{|c|c|c|c|c|c|c|} \hline PID & ind & outd & deg & bet & clos & eig \\ \hline A & 0 & 3 & 3 & 0 & 0.71 & 0 \\ \hline B & 0 & 1 & 1 & 0 & 0.5 & 0 \\ \hline C & 1 & 1 & 2 & 2 & 0.6 & 0.055 \\ \hline D & 2 & 2 & 4 & 2.5 & 1 & 0.27 \\ \hline E & 2 & 3 & 5 & 5.5 & 0.8 & 0.0055 \\ \hline F & 3 & 0 & 3 & 0 & 0 & 1 \\ \hline G & 2 & 0 & 2 & 0 & 0 & 0.99 \\ \hline L2 & 1 & 5 & 6 & 0 & 1 & 0.5 \\ \hline L1 & 4 & 5 & 9 & 1 & 1 & 0.5 \\ \hline L0 & 5 & 0 & 5 & 0 & 0 & 1 \\ \hline \end{tabular} \end{figure*} \section{Social Network Analysis Measures} \label{measures} The theory of Social Network Analysis (SNA) provides a generic framework to analyze the structure of complex systems. This framework includes a rich set of measures, models, and methods to extract the patterns of interactions among a system's elements. A complex system is expressed as a network of nodes and edges to support the analysis of systems from diverse application domains. Examples of complex systems that have been analyzed with the help of SNA include communities on social media platforms \cite{papacharissi2009virtual} and neural systems \cite{albert2002statistical}. The techniques from SNA have been applied to control the spread of disease \cite{watts1999networks}, to understand biological systems \cite{silva2015methodology}, to investigate protein interactions \cite{AMITAI20041135}, and to examine animal behavior \cite{wey2008social}. These diverse applications of SNA show that complex systems exhibit certain common graph-theoretic properties such as centrality, the scale-free and small-world properties, community structure, and power-law degree distribution \cite{newman2002random,newman2001random,newman2003structure,albert2002statistical,borgatti2005centrality,freeman1979centrality}. Some of the SNA measures relevant to our study are described below. SNA provides a range of measures at varying levels: some are applied at the node level, while others are applied at the network level. The node-level measures are fine-grained measures that are calculated from the nodes which are directly connected to a given node. {\em Centrality measures} \cite{singh2020centrality} are examples of node-level measures that quantify the importance of an individual node in the network. A central node is an influential node having significant potential to communicate and access information. There exist different {\em centrality measures}, derived from the connections to a node, the position of a node in the network, the distance of a node from others, and the relative importance of nodes. \subsection{Degree centrality} This measure determines the central node based on the connections to the individual node. A node with a higher degree in the network is considered more influential. In a directed graph, two different centrality measures exist, {\em in-degree} and {\em out-degree}, based on the number of incoming and outgoing edges respectively. The degree centrality of a node $v$ is equal to the number of its connections, normalized to the maximum possible degree of the node.
\begin{equation} C_{D}(v) = deg(v) \end{equation} \begin{equation} NC_{D}= \frac{C_{D}(v)}{n - 1}=\frac{deg(v)}{n - 1} \end{equation} \subsection{Closeness centrality} This measure aims to identify an influential node in terms of faster and wider spread of information in the network. The influential nodes are characterized by a smaller inter-node distance, which signifies faster transfer of information. The closeness centrality is derived from the average distance from a node to all the connected nodes at different depths. However, the distance between the disconnected components of the network is infinite and hence it is excluded. For a central node, the average distance would be small, and closeness is calculated as the inverse of the sum of the distances to all other nodes. The normalized closeness ($NC_{C}$) is in the range from 0 to 1, where 0 represents an isolated node and 1 indicates a strongly connected node. \begin{equation} C_{C}(v) = \frac{1}{\sum_{w \neq v} d_{vw}} \end{equation} \begin{equation} NC_{C}(v) = \frac{n-1}{\sum_{w \neq v} d_{vw}} \end{equation} \subsection{Betweenness centrality} This measure aims to identify those central nodes which are responsible for connecting two or more components of the network. Removal of such a central node would mean a disconnection of the complete network. Hence, these nodes act as a bridge to pass the information \cite{borgatti2005centrality, white1994betweenness}. Betweenness centrality is defined as the number of shortest paths passing through a node. \begin{equation} C_{B}(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}} \end{equation} where $\sigma_{st}$ is the total number of shortest paths from a node $s$ to $t$ and $\sigma_{st}(v)$ is the number of those paths that pass through $v$. The relative betweenness centrality of any node in the graph, with respect to the maximum possible centrality of a node, is calculated from $C_{B}(v)$. \begin{equation} C_{B}^{'}(v) = \frac {2 C_{B}(v)} { n ^{2} - 3n + 2} \end{equation} \subsection{Eigenvector centrality} The eigenvector centrality is a relative centrality measure, unlike the previous three measures, which are absolute ones. The eigenvector centrality calculation depends on the largest real eigenvalue of the symmetric adjacency matrix. The centrality of a node $v$ is proportional to the sum of the centralities of the nodes connected to it \cite{bonacich2007some,borgatti2005centrality}. \begin{equation} \lambda v_i =\sum_{j=1}^{n}{a_{ij}v_j} \end{equation} \par In general, it requires the solution of the equation $Av = \lambda v$ where $A$ is the adjacency matrix. Figure \ref{f1} shows the centrality scores of various programming elements calculated from the dependencies shown in the figure. Note that centrality scores can be calculated at different granularity levels, i.e., at the object, method, class, package, or logical-layer level. In the figure, centrality scores are calculated at the class and layer levels. Here, we have considered a layer as a logical encapsulation unit loosely holding multiple classes. \section{Approach} The broad objective of the approach is to extract high-level architecture descriptions from the implementation artefacts so that analyses specific to an architecture style can be performed. In this paper, we demonstrate the approach with the help of implementation artefacts available in a Java-based system such as Java and JAR files. However, the method can be extended to other language-specific artefacts.
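To make these definitions concrete, the following minimal Python sketch builds a small directed class-dependency graph and computes the measures of Section \ref{measures} with the \texttt{networkx} library. The edge list is hypothetical (the exact graph of Figure \ref{f1} is defined by the image), and \texttt{networkx} uses slightly different normalization conventions, so the absolute scores need not coincide with the table in Figure \ref{f1}.

\begin{verbatim}
# Minimal sketch of computing centrality scores of a class-dependency
# graph. The edge list is hypothetical; an edge (u, v) reads "class u
# depends on class v".
import networkx as nx

edges = [("A", "D"), ("A", "E"), ("B", "C"), ("C", "E"),
         ("D", "F"), ("D", "G"), ("E", "F"), ("E", "G")]
G = nx.DiGraph(edges)

scores = {
    "in-degree":   dict(G.in_degree()),
    "out-degree":  dict(G.out_degree()),
    "betweenness": nx.betweenness_centrality(G, normalized=False),
    "closeness":   nx.closeness_centrality(G),
    # Dependency graphs are often acyclic, where directed eigenvector
    # centrality degenerates, so it is computed on the undirected copy.
    "eigenvector": nx.eigenvector_centrality_numpy(G.to_undirected()),
}
for node in sorted(G.nodes()):
    print(node, {m: round(s[node], 3) for m, s in scores.items()})
\end{verbatim}

In the approach described next, such per-element scores are exactly what the first phase exports for further analysis.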
The approach further assumes that a system under study is implemented around the Layered architecture style, and the analyses demonstrated here are specific to this style. \begin{figure}[h] \centering \includegraphics[scale=0.3]{Architecture} \caption{Block diagram of a tool implemented in Java to discover layered architecture using centrality.} \label{fig:architecture} \end{figure} As shown in Figure \ref{fig:architecture}, the approach consists of the following two phases. \begin{enumerate} \item \textbf{Dependency Network Builder and Analysis [Phase 1]:} The purpose of this phase is to retrieve the dependencies present in the implementation artefacts. For a Java-based system, this phase takes Java or JAR files as input and generates a dependency network. The programming elements such as {\em Classes, Interfaces}, and {\em Packages} are the nodes of the network, and the Java relationships such as $extends$, $implements$, and $imports$ are the edges in the dependency network. The output of this stage is represented as a graph in Graph Modeling Language (GML) notation. In the second stage, a centrality score is assigned to each node. The centrality scores include the different measures described in Section \ref{measures}, and they are calculated at the Class and Interface levels. The output of this stage is a data file in the CSV format describing the centrality score assigned to each program element. \item {\bf Architecture Style Recovery and Analysis [Phase 2]:} The purpose of this phase is to perform architecture style-specific activities. In this paper, the activities of this phase are illustrated by assuming the Layered architecture style, for which we define a sub-activity called layer assignment. The layer assignment activity aims to assign the most appropriate layer to a program element. Additional style-specific analyses such as the analysis of layer violations and performance modeling can be supported once the programming elements are assigned to appropriate layers. \end{enumerate} The {\em Phase 1} activities, which include building a dependency network and calculating centrality scores, are straightforward to realize when tools such as $JDependency$ are available. The {\em Phase 2} activities, i.e., the style-specific analyses, can be realized in multiple ways. Two such techniques are described in the following section. \begin{table}[] \centering \begin{tabular}{|l|c|c|c|} \hline \textbf{Centrality} & \textbf{Upper} & \textbf{Middle} & \textbf{Lower} \\ \hline In-degree & low & - & high \\ \hline Out-degree & high & - & low \\ \hline Betweenness & low & high & low \\ \hline Closeness & high & high & low \\ \hline Eigenvector & low & low & high \\ \hline \end{tabular}\\ \caption{Relative significance of centrality measures with respect to layers} \label{relation} \end{table} \begin{table}[] \centering \begin{tabular}{|p{0.75in}|p{2.25in}|} \hline $\delta_{il}$ and $\delta_{iu}$ & Lower and upper bounds for in-degree centrality values. \\ \hline $\delta_{ol}$ and $\delta_{ou}$ & Lower and upper bounds for out-degree centrality values. \\ \hline
$\delta_{b}$ & Critical value for betweenness centrality \\ \hline $\delta_{c}$ & Critical value for closeness centrality \\ \hline $\delta_{e}$ & Critical value for eigenvector centrality \\ \hline \end{tabular} \caption{Configuration Parameters} \label{cp} \end{table} \section{Layer Assignment} The objective of the layer assignment stage is to identify the most appropriate layer for each program element based on the centrality measures. We assume a {\em three-layer} decomposition. Here, we use the term {\em layer} in the loose sense that a {\em layer} is a coarse-grained logical unit encapsulating program elements, and not in the strict sense used for the {\em Layered architecture style} \cite{buschmann2007pattern}. The decision to decompose all the system responsibilities into three layers is based on the observation that the functionality of the majority of applications can be cleanly decomposed into three coarse-grained layers. For example, many applications typically use architectural styles such as Model-View-Controller (MVC), Presentation-Abstraction-Control (PAC) \cite{buschmann2007pattern}, and a 3-tier style, i.e., Presentation, Business Logic, and Data Storage. \begin{algorithm} \caption{\textbf{: primaryLabel(inDegree, outDegree, n)} \newline \textbf{Input:} inDegree[1:n],outDegree[1:n]: Vector, n:Integer\newline \textbf{Output:} inPartition[1:n], outPartition[1:n] Vector } \label{algo_CentLayer1} \begin{algorithmic}[1] \State Initialize $\delta_{iu}$, $\delta_{il}$, $\delta_{ou}$ and $\delta_{ol}$ \For{\textit{node} in 1 to n} \If{$in(node) = 0 $ and $out(node) = 0$} \State $inPartition[node] \leftarrow lower $ \State $outPartition[node] \leftarrow lower $ \Else \If{$in(node) > \delta_{il} $} \State $inPartition[node] \leftarrow lower $ \Else \If{$in(node) < \delta_{iu} $} \State $inPartition[node] \leftarrow upper $ \Else \State $inPartition[node] \leftarrow middle $ \EndIf \EndIf \If{$out(node) > \delta_{ou} $} \State $outPartition[node] \leftarrow upper $ \Else \If{$out(node) < \delta_{ol} $} \State $outPartition[node] \leftarrow lower $ \Else \State $outPartition[node] \leftarrow middle $ \EndIf \EndIf \EndIf \EndFor \end{algorithmic} \end{algorithm} Two different techniques are developed to assign layers to program elements based on centrality measures. The first technique uses a set of pre-defined rules. The second technique automatically learns the assignment rules from pre-labelled layer assignments using a supervised classification algorithm. \subsection{Rule-Driven Layer Assignment} Dependencies among the program elements are used to identify logical units of decomposition. These dependencies are quantified in terms of the centrality measures described in Section \ref{measures}. The measure of {\em degree centrality} from Section II-A is further divided into the {\em in-degree ($inDeg$)} and {\em out-degree ($outDeg$)} measures, which count the number of incoming and outgoing edges of a node. In total, five centrality measures are used. A set of configuration parameters, as shown in Table \ref{cp}, is defined. These parameters provide flexibility while mapping program elements to a specific layer. Five accessor functions, namely $in$, $out$, $between$, $closeness$ and $eigen$, are defined to get the values of the in-degree, out-degree, betweenness, closeness and eigenvector centralities associated with a specific node. These functions are used to assign a program element to one of three layers, i.e.
{\em upper}, {\em middle} and {\em lower}. Table \ref{relation} describes the relative significance of the various centrality measures with respect to the upper, middle and lower layers. Algorithm \ref{algo_CentLayer1} operates on a dependency network in which nodes represent program elements and edges represent dependencies. The objective of this algorithm is to partition the node space into three segments corresponding to the lower, middle and upper layers. The algorithm calculates two different partitions: the first partition, i.e. $inPartition$, is calculated using the {\em in-degree} centrality measure, while the second partition is calculated using the {\em out-degree} centrality measure. \begin{algorithm} \caption{\textbf{refineLabel(inPartition, outPartition, n)}\newline \textbf{Input:} inPartition[1:n], outPartition[1:n]: Vector \newline n: Integer \newline \textbf{Output:} nodeLabels[1:n]: Vector} \label{algo_CentLayer2} \begin{algorithmic}[1] \State Initialize $\delta_{b}$, $\delta_{c}$ and $\delta_{e}$ \For{\textit{node} in 1 to n} \If{$inPartition[node] = outPartition[node] $} \State $nodeLabels[node] \leftarrow outPartition[node] $ \Else \State $nodeLabels[node] \leftarrow \newline upDown(inPartition[node], outPartition[node]) $ \hfill \EndIf \EndFor \end{algorithmic} \end{algorithm} After the execution of Algorithm \ref{algo_CentLayer1}, each node carries two labels corresponding to layers. The possible combinations of labels include {\em (lower, lower), (middle, middle), (upper, upper), (middle, upper),} and {\em (middle, lower)}. Out of these labellings, {\em (middle, upper)} and {\em (middle, lower)} are conflicting because two different labels are assigned to a node. This conflict needs to be resolved. Algorithm \ref{algo_CentLayer2} resolves the conflicting labels and assigns a unique label to each node. The conflicting labels are resolved using the rules described in the decision table, Table \ref{dt}; the function $upDown$ called in Algorithm \ref{algo_CentLayer2} uses these rules. The rules in Table \ref{dt} resolve the conflicting assignments using the {\em closeness}, {\em betweenness} and {\em eigenvector} centrality measures, while the primary layer assignment is done with the {\em in-degree} and {\em out-degree} centrality measures. When Algorithm \ref{algo_CentLayer2} is executed, some of the nodes from the middle layer bubble up to the upper layer, and some nodes fall to the lower layer; some nodes remain at the middle layer. The vector $nodeLabels$ holds the unique labelling of each node in the dependency network after resolving all conflicts. \begin{table}[t] \centering \caption{Decision Table used to Refine Layering} \label{dt} \begin{tabular}{|c|c|c|p{10.0cm}|} \hline \multicolumn{1}{|l|}{Layer} & Measure & \multicolumn{1}{l|}{Significance} & \multicolumn{1}{c|}{Rationale} \\ \hline \multirow{3}{*}{upper} & in & 0 & Classes with in-degree value equal to 0 are placed in the upper layer. \\ \cline{2-4} & out & high & Classes with high out-degree are placed in the upper layer because they use services from layers beneath them. \\ \cline{2-4} & closeness & high & Classes with high closeness value are placed in the upper layer because of the large average distance from the top layer to the bottom layer. \\ \hline middle & between & high & Classes with high betweenness value are placed in the middle layer as they fall on the path from the top layer to the bottom layer. \\ \hline \multirow{4}{*}{lower} & in & high & Classes with high in-degree value are placed in the bottom layer because they are highly used. \\ \cline{2-4} & out & 0 & Classes with out-degree value equal to zero are placed in the bottom layer because they only provide services. \\ \cline{2-4} & eigen & 1 & Classes with eigenvector centrality equal to 1 are placed in the bottom layer because they are highly reused. \\ \cline{2-4} & in & - & Classes with in-degree and out-degree values equal to 0 are placed in the bottom layer because they are isolated classes. \\ \hline \end{tabular} \end{table}
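A minimal Python sketch of this rule-driven assignment is given below. The threshold values and the precedence order inside \texttt{up\_down} are illustrative assumptions: Table \ref{cp} leaves the critical values to the architect's tuning, and the decision table constrains but does not fully fix the tie-breaking order.

\begin{verbatim}
# Minimal sketch of the rule-driven layer assignment; thresholds and the
# tie-breaking order in up_down() are hypothetical illustrations.
LOWER, MIDDLE, UPPER = "lower", "middle", "upper"

def primary_label(ind, outd, d_il, d_iu, d_ol, d_ou):
    """Algorithm 1: label a node from its in-degree and out-degree."""
    if ind == 0 and outd == 0:      # isolated node
        return LOWER, LOWER
    if ind > d_il:
        in_part = LOWER             # heavily used -> lower layer
    elif ind < d_iu:
        in_part = UPPER
    else:
        in_part = MIDDLE
    if outd > d_ou:
        out_part = UPPER            # heavy user of services -> upper layer
    elif outd < d_ol:
        out_part = LOWER
    else:
        out_part = MIDDLE
    return in_part, out_part

def up_down(bet, clos, eig, d_b, d_c, d_e):
    """Tie-breaker loosely following the decision table."""
    if bet > d_b:                   # on many shortest paths -> middle
        return MIDDLE
    if clos > d_c:                  # far-reaching caller -> upper
        return UPPER
    if eig > d_e:                   # highly reused -> lower
        return LOWER
    return MIDDLE

def refine_label(in_part, out_part, bet, clos, eig, d_b, d_c, d_e):
    """Algorithm 2: resolve conflicting primary labels."""
    if in_part == out_part:
        return in_part
    return up_down(bet, clos, eig, d_b, d_c, d_e)
\end{verbatim}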
\subsection{Supervised Classification based Layer Assignment} The {\em configuration parameters} need to be suitably initialized for the correct functioning of the {\em algorithmic-centric} approach discussed in the previous section. The system architect responsible for architecture recovery needs to fine-tune the parameters to get a layering at the desired level of abstraction. To overcome this drawback, a {\em data-driven} approach is developed to assign labels to the programming elements. \begin{table*}[t] \centering \caption{Sample observations from the Datasets used for Supervised Learning} \label{ds} \begin{tabular}{|p{0.25in}|p{1.5in}|p{0.5in}|p{0.6in}|p{0.5in}|p{0.6in}|p{0.5in}|p{0.3in}| } \hline Id & Label & In-Degree& Out-Degree& Closeness & Betweenness & Eigenvector & Layer \\ \hline \multicolumn{8}{|p{5.25in}|}{\centering \text{HealthWatcher}} \\ \hline 1 & ComplaintRecord &1& 10& 1.714& 19& 0.0056& 2 \\ \hline 2 & ObjectAlreadyInsertedException& 37& 0& 0& 0& 0.347& 1 \\ \hline 3 & ObjectNotFoundException& 53& 0& 0& 0& 0.943&1 \\ \hline 4& ObjectNotValidException &41& 0& 0& 0& 0.883& 1 \\ \hline 5& RepositoryException& 60& 0& 0& 0& 1& 1 \\ \hline \multicolumn{8}{|p{5.25in}|}{\centering \text{ConStore}} \\ \hline 1& Cache& 2& 1& 1& 0& 0.0162& 2 \\ \hline 2& CacheObject &4& 0& 0& 0& 0.053& 2 \\ \hline 3& LRUCache& 0& 2& 1& 0& 0& 2 \\ \hline 4& MRUCache& 1& 2& 1& 7& 0.0246& 2 \\ \hline 5& ItemQuery& 1& 20& 0.412& 47.166& 0.0388& 2 \\ \hline \end{tabular} \end{table*} In the data-driven approach, the problem of layer assignment is modeled as a multi-class classification problem with three labels, i.e. {\em lower}, {\em middle} and {\em upper}, numerically encoded as 1, 2, and 3 respectively. The classification model is trained on a labeled data set. The data set, as shown in Table \ref{ds}, includes program element identifiers, the values of all the centrality measures, and layering labels as specified by the system architect responsible for architecture recovery. The layering labels can be taken from a previous version of the system under study, or they can be guessed by the system architect to explore different alternatives for system decomposition. We implement three supervised classification algorithms, namely K-Nearest Neighbour, Support Vector Machine, and Decision Tree. These are machine learning algorithms commonly used for multi-class classification problems. A detailed comparison of these algorithms can be found in \cite{hassan2018comparison}. Python's Scikit-Learn \cite{hao2019machine} library is used to develop classification models based on these algorithms. Table \ref{ds} shows the format of the sample dataset used to train the classification models. The developed models are evaluated against classification metrics such as accuracy, precision, recall, and F1-score.
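A minimal sketch of this training setup is shown below; the CSV file name, the column names, and the default hyperparameters are hypothetical placeholders rather than the exact settings behind the results reported in Section V.

\begin{verbatim}
# Minimal sketch of the data-driven layer assignment; file name, column
# names and hyperparameters are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

data = pd.read_csv("centrality_scores.csv")
features = ["in_degree", "out_degree", "closeness",
            "betweenness", "eigenvector"]
X, y = data[features], data["layer"]   # 1=lower, 2=middle, 3=upper

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test),
                                zero_division=0))
\end{verbatim}

The \texttt{classification\_report} call prints the per-layer precision, recall and F1-score used in the evaluation below.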
\begin{figure}[h] \centering \includegraphics[scale=0.3]{samplenet.png} \caption{An Architecture of a System Designed to Test the Approach} \label{tc} \end{figure} \begin{table*} \caption{Accuracy and Confusion Matrix for Data-Driven and Algorithmic Approach} \label{acc} \begin{center} \begin{tabular}{|p{0.3in}|p{0.3in}|p{0.3in}|p{0.3in}||p{0.3in}|p{0.3in}|p{0.3in}||p{0.3in}|p{0.3in}|p{0.3in}||p{0.3in}|p{0.3in}|p{0.3in}|} \hline \multicolumn{13}{|p{3.9in}|}{\centering {\bf ConStore (Size: 66 classes or interfaces) Confusion Matrix} } \\ \hline \multicolumn{4}{|p{1.2in}||}{\centering SVM} & \multicolumn{3}{|p{0.9in}||}{\centering Decision Tree} & \multicolumn{3}{|p{0.9in}||}{\centering KNN classifier} & \multicolumn{3}{|p{0.9in}|}{\centering Rule based} \\ \hline Layer & lower & middle & upper & lower & middle & upper & lower & middle & upper & lower & middle & upper \\ \hline lower & 43 & 0 & 0 & 43 & 0 & 0 & 40 & 3 & 0 & 27 & 16 & 0 \\ \hline middle & 14 & 0 & 1 & 13 & 2 & 0 & 12 & 3 & 0 & 9 & 5 & 1 \\ \hline upper & 6 & 0 & 2 & 6 & 0 & 2 & 7 & 1 & 0 & 4 & 2 & 2 \\ \hline \multicolumn{4}{|p{1.2in}||}{\centering Accuracy = 0.68} & \multicolumn{3}{|p{0.9in}||}{\centering Accuracy = 0.71} & \multicolumn{3}{|p{0.9in}||}{\centering Accuracy = 0.65} & \multicolumn{3}{|p{0.9in}|}{\centering Accuracy = 0.52} \\ \hline \multicolumn{13}{|p{3.9in}|}{\centering {\bf Recall (R), Precision (P), F1-Score (F1) Evaluation } } \\ \hline \hline & R & P & F-1 & R & P & F-1 & R & P & F-1 & R & P & F-1 \\ \hline lower & 0.68 & 1.00 & 0.81 & 0.69 & 1.00 & 0.82 & 0.68 & 0.93 & 0.78 & 0.68 & 0.63 & 0.65 \\ \hline middle & 0.00 & 0.00 & 0.00 & 1.00 & 0.13 & 0.24 & 0.43 & 0.20 & 0.27 & 0.22 & 0.33 & 0.26 \\ \hline upper & 0.67 & 0.25 & 0.36 & 1.00 & 0.25 & 0.40 & 0.00 & 0.00 & 0.00 & 0.67 & 0.25 & 0.36 \\ \hline \multicolumn{13}{|p{3.9in}|}{\centering {\bf HealthWatcher (Size: 135 classes or interfaces) Confusion Matrix}} \\ \hline lower & 47 & 1 & 9 & 49 & 4 & 4 & 41 & 8 & 8 & 28 & 16 & 13 \\ \hline middle & 20 & 5 & 12 & 15 & 20 & 2 & 7 & 28 & 2 & 6 & 30 & 1 \\ \hline upper & 5 & 1 & 35 & 7 & 0 & 34 & 6 & 6 & 29 & 3 & 9 & 29 \\ \hline \multicolumn{4}{|p{1.2in}||}{\centering Accuracy = 0.64} & \multicolumn{3}{|p{0.9in}||}{\centering Accuracy = 0.76} & \multicolumn{3}{|p{0.9in}||}{\centering Accuracy = 0.72} & \multicolumn{3}{|p{0.9in}|}{\centering Accuracy = 0.63} \\ \hline \hline \multicolumn{13}{|p{3.9in}|}{\centering {\bf Recall (R), Precision (P), F1-Score (F1) Evaluation } } \\ \hline \hline & R & P & F-1 & R & P & F-1 & R & P & F-1 & R & P & F-1 \\ \hline lower & 0.65 & 0.82 & 0.73 & 0.69 & 0.86 & 0.77 & 0.76 & 0.72 & 0.74 & 0.76 & 0.49 & 0.60 \\ \hline middle & 0.71 & 0.14 & 0.23 & 0.83 & 0.54 & 0.66 & 0.67 & 0.76 & 0.71 & 0.55 & 0.81 & 0.65 \\ \hline upper & 0.62 & 0.85 & 0.72 & 0.85 & 0.83 & 0.84 & 0.74 & 0.71 & 0.72 & 0.66 & 0.66 & 0.66 \\ \hline \multicolumn{13}{|p{3.9in}|}{\centering {\bf Test Architecture System (Size = 16 Classes) Confusion Matrix}} \\ \hline lower & 5 & 2 & 0 & 4 & 3 & 0 & 5 & 2 & 0 & 5 & 2 & 0 \\ \hline middle & 1 & 4 & 0 & 0 & 5 & 0 & 1 & 4 & 0 & 1 & 4 & 0 \\ \hline upper & 1 & 0 & 3 & 0 & 1 & 3 & 4 & 0 & 0 & 1 & 0 & 3 \\ \hline \multicolumn{4}{|p{1.2in}||}{\centering Accuracy = 0.75} & \multicolumn{3}{|p{0.9in}||}{\centering Accuracy = 0.75} & \multicolumn{3}{|p{0.9in}||}{\centering Accuracy = 0.56} & \multicolumn{3}{|p{0.9in}|}{\centering Accuracy = 0.75} \\ \hline \multicolumn{13}{|p{3.9in}|}{\centering {\bf Recall (R), Precision (P), F1-Score (F1) Evaluation } } \\ \hline & R & P & F-1 & R & P & F-1 & R & P & F-1 & R & P & F-1 \\ \hline
lower & 0.71 & 0.71 & 0.71 & 1.00 & 0.57 & 0.73 & 0.50 & 0.71 & 0.59 & 0.71 & 0.71 & 0.71 \\ \hline middle & 0.67 & 0.80 & 0.73 & 0.56 & 1.00 & 0.71 & 0.67 & 0.80 & 0.73 & 0.67 & 0.80 & 0.73 \\ \hline upper & 1.00 & 0.75 & 0.86 & 1.00 & 0.75 & 0.86 & 0.00 & 0.00 & 0.00 & 1.00 & 0.75 & 0.86 \\ \hline \end{tabular} \end{center} \end{table*} \section{Evaluation} \subsection{Test cases} The following software systems are used to evaluate the performance of the architecture recovery approach developed in this paper: \begin{enumerate} \item {\bf Test Architecture system}: A small-scale test architecture system, as shown in Figure \ref{tc}, has been specially designed to test the approach. It is a simulated architecture test case consisting of 16 classes without the implementation of any functionality; it includes only the dependencies among the classes. The classes named {\em SEC, TX, SERI} in the figure represent crosscutting concerns. \item {\bf ConStore:} ConStore is a small-scale Java-based library designed to manage concept networks. The concept network is a mechanism used to represent the meta-model of an application domain consisting of concepts and connections between them. ConStore is a framework for detailing out the concepts and creating a domain model for a given application. It provides services to store, navigate and retrieve the concept network \cite{constore}. \item {\bf HealthWatcher:} HealthWatcher is a web-based application providing healthcare-related services \cite{greenwood2007impact}. This application provides services for users to communicate health-related issues. Users can register, update and query their health-related complaints and problems. The application follows a client-server, layered architecture style. \end{enumerate} All these applications are selected as test cases because the layering of their program elements was known in advance. \subsection{Results and Evaluation} The performance of classification models is typically evaluated against measures such as accuracy, precision, recall, and F1-score \cite{goutte2005probabilistic}. These four metrics are derived from a confusion matrix, which compares the actual class labels of the observations in a given data set with the class labels predicted by a classification model. Table \ref{acc} shows the performance analysis against these metrics and compares the performance of the algorithmic-centric and data-driven approaches on all the test cases. \subsubsection{Accuracy Analysis} Accuracy is the rate of correct classification; the higher the accuracy, the better the model. From the accuracy point of view, one can observe from Table \ref{acc} that the data-driven approach performs better than the algorithmic-centric approach. The decision tree-based classifier performs better on all the test cases, with an average accuracy of 74\%. This is because the performance of the algorithmic approach depends on the proper tuning of the various configuration parameters. The results shown in Table \ref{acc} are obtained with the values of the configuration parameters shown in Table \ref{cpu}. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|} \hline & ConStore & HealthWatcher & Test Arch. \\ \hline
$\delta_{il}$ & 4& 10& 2 \\ \hline $\delta_{iu}$& 1& 1& 1 \\ \hline $\delta_{ol}$& 4& 2& 2 \\ \hline $\delta_{ou}$& 1& 5& 2 \\ \hline $\delta_{b}$ & 6& 9& 6 \\ \hline $\delta_{c}$& 0.8& 0.8& 0.6 \\ \hline $\delta_{e}$& 0.6& 0.5& 0.6 \\ \hline \end{tabular} \end{center} \caption{Configuration parameters used during layer recovery} \label{cpu} \end{table} The machine learning models automatically learn and adjust the model parameters for better accuracy. In the case of the algorithmic approach, configuration parameter tuning is an iterative process in which different combinations need to be tried. \subsubsection{Recall, Precision, F1-Score Analysis} Recall indicates the proportion of actual positives that are correctly identified, while precision is the proportion of positive identifications that are correct. High values of both recall and precision are desired, but it is not easy to achieve high values for both simultaneously. Hence, the F1-score combines recall and precision into one metric. From the recall, precision, and F1-score point of view, one can observe from Table \ref{acc} that the decision tree-based classifier performs better, with the highest F1-score of 0.86 for the upper-layer classification of the test architecture system. Recalling class labels with high precision for the {\em middle layer} is a challenging task for all the models described in this paper. This is because many functionalities are not cleanly encapsulated in the modules at the middle layer, and because crosscutting concerns are hard to map to one of the three layers.
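To make the derivation of the entries in Table \ref{acc} concrete, consider the decision tree classifier on ConStore and read its confusion matrix with rows as predicted labels and columns as actual labels (the orientation that reproduces the reported values; the table itself does not state it). The upper layer then has $0+0+2=2$ actual elements, of which $2$ are predicted correctly, while $6+0+2=8$ elements are predicted as upper, so
\[
R=\frac{2}{2}=1.00,\qquad P=\frac{2}{8}=0.25,\qquad F1=\frac{2PR}{P+R}=\frac{2\cdot 0.25\cdot 1.00}{1.25}=0.40,
\]
matching the corresponding row of the table.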
\section{Earlier Approaches and Discussion} Recovering architecture descriptions from code has been one of the most widely and continuously explored problems in software architecture research. This has resulted in a large number of techniques \cite{maqbool2007hierarchical}, survey papers \cite{garcia2013comparative} and books \cite{isazadeh2017source} devoted to the topic. In the context of these earlier approaches, this section provides the rationale behind the implementation decisions taken while developing our approach. \subsection{Include Dependencies vs Symbolic Dependencies} The recent study reported in \cite{lutellier2017measuring} recognized that the quality of a recovered architecture depends on the type of dependencies analyzed. The study analyzes the impact of {\em symbolic dependencies}, i.e. dependencies at the program identifier level, versus {\em include dependencies}, i.e. dependencies at the level of importing files or including packages. Further, it emphasizes that symbolic dependencies are a more accurate way to recover structural information; the use of include dependencies is error-prone owing to the fact that a programmer may include a package without using it. We used {\em include dependencies} in our approach because extracting and managing include dependencies is simpler than doing so for symbolic dependencies. Further, we mitigated the risk of unused packages by excluding these relationships from further analysis. Many programming environments facilitate the removal of unused packages. One of our objectives was to develop a data-driven approach, and cleaning data in this way is an established practice in the field of data engineering. \subsection{Unsupervised Clustering vs Supervised Classification} The techniques of unsupervised clustering have been widely adopted to extract high-level architectures through the analysis of dependencies between implementation artefacts \cite{maqbool2007hierarchical}. These approaches use hierarchical and search-based methods for clustering, and they usually take substantial search time while still finding architectures of limited quality \cite{mohammadi2019new}. One of the advantages of clustering methods is that they are driven by unlabelled data sets. However, the identified clusters of program elements still need to be labelled appropriately. Our choice of a {\em supervised classification method} is driven by the fact that {\em centrality measures} quantify the structural properties of a node and its relations with respect to other nodes, and processing such quantified values efficiently is one of the strengths of many supervised classification methods. Further, assigning layering labels to program elements is not an issue if such information is available from a previous version of the software, which is the case for many re-engineering and modernization projects. In the absence of such a labelled data set, the approach presented in the paper can still be adopted in two stages: in the first stage, a tentative layer labelling can be done through the algorithmic approach, followed by labelling through the supervised classification method. The architecture descriptions extracted by the approach can be viewed as multiple ways of decomposing a system rather than as a single ground-truth architecture, which is often difficult to agree upon and laborious to discover \cite{garcia2013comparative}. One of these extracted architectures can be selected by assessing them for properties such as minimal layering violations \cite{sarkar2009discovery,sarkar2010architecture}, satisfaction of a particular quality attribute \cite{isazadeh2017source}, or any other project-specific criteria. \subsection{Choice of Number of Layers} We described the working of the approach by assuming a three-layer decomposition, but this is not a strict restriction. The algorithmic-centric method can be adapted by redesigning the rules for additional layers, while the supervised classification method can be adjusted by relabelling program elements with the number of layers considered. \section{Conclusion} The paper presents an approach to recover high-level architecture descriptions from system implementations. The main highlights of the approach presented in the paper include: (i) The approach uses centrality measures from the field of Social Network Analysis to quantify the structural properties of an implementation. (ii) The dependency graph formed by the programming units (i.e. classes in Java) is treated as a network, and centrality measures are applied to extract structural properties. (iii) The paper treats a {\em layer} as a coarsely granular abstraction encapsulating system functionalities, and maps a group of programming elements sharing common structural properties, manifested through centrality measures, to a layer. (iv) The paper describes two mapping methods for this purpose, called algorithmic-centric and data-driven. (v) Overall, the data-driven methods perform better than the algorithmic-centric method in mapping a program element to a layer. The paper makes particular assumptions, such as the availability of a Java-based system implementation, a decomposition of the system into three layers, and the availability of a pre-labelled data set for supervised classification. These assumptions were made to simplify the demonstration of the approach and its realization; hence, they do not make the approach a restrictive one.
These assumptions can be relaxed, and the approach is flexible enough to be extended. Exploring the impact of fusing structural properties with semantic features, such as the dominant concern addressed by a programming element, would be an exciting exercise for future exploration. \bibliographystyle{plain}
\section{Introduction} Spin qubits in hole quantum dots are frontrunner candidates to process quantum information and implement large-scale universal quantum computers~\cite{scappucci2020germanium,Gonzalez-Zalba2021,hendrickx2020fast,hendrickx2020four,Jirovec2021,maurand2016cmos,camenzind2021spin,piot2022single}. The key advantages of holes stem from their reduced sensitivity to the noise caused by hyperfine interactions with defects with non-zero nuclear spin~\cite{PhysRevB.78.155329,prechtel2016decoupling,warburton2013single,PhysRevLett.127.190501}, and from their strong effective spin-orbit interaction (SOI), which enables ultrafast and all-electric qubit control at low power~\cite{PhysRevLett.98.097202,froning2020ultrafast,Wang2022,watzinger2018germanium}. The emergence of a large SOI in hole nanostructures is tightly linked to the design of the quantum dot, and it is maximized in systems where the hole is tightly confined in two directions and electrically driven by a field aligned to the softer confinement~\cite{DRkloeffel1,DRkloeffel2,DRkloeffel3,bosco2021squeezed}. Rabi frequencies exceeding 400~MHz have been measured in germanium/silicon (Ge/Si) core/shell nanowires~\cite{froning2020ultrafast} and Ge hut nanowires~\cite{Wang2022}. A major issue in these systems, however, is the large coupling to charge noise, which limits the coherence time of the qubit to tens of nanoseconds. Moreover, the strong SOI in long hole quantum dots is predicted to enable a strong transversal~\cite{DRkloeffel2} and longitudinal~\cite{bosco2022fully} coupling to high-impedance microwave resonators~\cite{PhysRevX.7.011030,PhysRevApplied.5.044004,Grunhaupt2019,Maleeva2018,PhysRevApplied.11.044014,PhysRevLett.121.117001}, where the strength of the interaction exceeds the decay rates of the qubits and the photons. Reaching the strong coupling regime in hole quantum dots will enable long-range connectivity of distant qubits~\cite{VandersypenInterfacingspinqubits2017} as well as quantum error correcting architectures~\cite{PhysRevLett.118.147701}, and will be a major step towards scaling up spin-based quantum computers. In contrast to state-of-the-art experiments, where multiple quantum dots encode a single qubit~\cite{Landig2018,mi2018coherent,doi:10.1126/science.aaa3786,harvey2021circuit,PhysRevB.100.125430,bottcher2021parametric}, hole spin qubits only need a single quantum dot~\cite{DRkloeffel2,bosco2022fully,PhysRevB.102.205412,michal2022tunable}, significantly diminishing the complexity of the architecture. However, to enhance the spin-photon interactions, the zero-point field of the photon needs to be aligned to the long direction of the dot~\cite{DRkloeffel2,bosco2022fully}, and thus the plunger electrode capacitively coupling the dot and the resonator has to be misaligned from the center of the dot, reducing the geometric lever arm of this gate and limiting the maximal spin-resonator coupling strength.\\ In this work, we propose a different type of hole spin qubit that is defined in a thin curved quantum well (CQW). This architecture not only benefits from the large SOI of hole quantum dots, which can be reached even at lower values of the electric field, but it can also be designed to be free of charge noise.
In striking contrast to alternative proposals to reduce the charge noise~\cite{bosco2020hole, Wang2021,Malcok2022}, in CQWs charge noise is suppressed for a wide range of electric fields and not only at fine-tuned points in parameter space, providing a critical technological advantage compared to competing architectures. This enhancement could push spin-based quantum information processing towards new speed and coherence standards. The smallest coupling to charge noise occurs when the magnetic field is aligned to the well, where the effective Zeeman energy and the $g$-factor are widely tunable by external electric fields and by engineering the strain of the well. Strikingly, in an annular CQW, the maximal $g$-factor can reach rather large values, in analogy to topological insulator nanowires~\cite{PhysRevB.104.165405,legg2021giant}. For this reason, CQWs can also be effective architectures in the search for exotic topological particles, such as Majorana fermions~\cite{doi:10.1063/5.0055997}, where the low value of the $g$-factor along the hole nanowire is a critical issue~\cite{PhysRevB.90.195421}. Moreover, because of the large dipole moment of holes confined in CQWs, even spin qubits in short quantum dots can be driven ultrafast at low power by electric fields perpendicular to the smooth confinement direction. This mechanism not only relaxes the stringent technological constraints on the design of ultrafast spin qubits, but it also offers a different way to interface spin qubits in single quantum dots with microwave photons in high-impedance resonators, thus pushing spin-photon hybrid architectures towards new performance standards and paving the way towards the implementation of a large-scale quantum processor.\\ This work is organized as follows. In Sec.~\ref{sec:theory}, we present the system and we introduce the theoretical model used to describe it, including an analysis of the strain, a crucial ingredient in hole nanostructures~\cite{PhysRevB.90.115419,Niquet2012,PhysRevB.103.245304}. We discuss two different setups: an annular CQW, where a semiconducting shell fully covers an inner core~\cite{Lauhon2002}, and a CQW grown on top of a planar substrate. The latter architecture, in particular, is compatible with current CMOS processes, and it holds particular promise for scaling up quantum computers. As the complete theoretical model is rather complicated, to gain valuable insights into the response of the system, in Sec.~\ref{sec:Effective-theory} we derive an effective low-energy theory for the CQW and discuss its key features in the presence of electric and magnetic fields. The effective model introduced in this section describes a wide range of devices with reasonable accuracy, even when additional valence bands as well as lattice and cross-section anisotropies are included. These effects are addressed in Apps.~\ref{app:SOHs} and~\ref{sec:deviation_LK}, respectively. In Secs.~\ref{sec:SQD} and~\ref{sec:Elongated-QDs}, we analyze in detail hole spin qubits in these systems, and we discuss qubits in quantum dots that are short and long compared to the radius of the well, respectively. We highlight the advantages of these qubits compared to alternative designs and we also examine the differences between annular and planar CQWs. In Sec.~\ref{sec:spin-photon}, we analyze the coupling between these qubits and photons in microwave resonators.
We estimate that the interaction strength in state-of-the-art devices can exceed a few hundred MHz, much larger than the decay rates of qubits and photons, thus enabling a strong qubit-photon coupling, with far-reaching consequences for spin-based quantum computing. \section{Holes in curved quantum wells} \label{sec:theory} \subsection{Setup} \label{sec:Setup} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Sketch_Setup_2} \caption{\label{fig:sketch} Sketch of a hole spin qubit in a curved quantum well. The hole wavefunction is confined in the pink region in the $(x,y)$-plane, and it extends for a harmonic length $l_z$ in the $z$-direction. External electric and magnetic fields are also indicated. The blue regions highlight the interface with a different material. In particular, an experimentally relevant example comprises a thin germanium quantum well (pink) surrounded by silicon (blue). In a) we show an annular CQW, where a thin semiconducting shell fully surrounds an inner core. In b) we show a CQW in a planar setup, where the semiconducting shell is grown on top of a planar substrate. } \end{figure} In this work, we analyze the CQWs sketched in Fig.~\ref{fig:sketch}. We examine setups where the hole wavefunction is confined in an annular CQW (pink) sandwiched between a core of radius $R_1$ and an outer shell (blue) extending from a radius $R_2$ to $R_3$. We introduce the thickness $\tau=R_2-R_1$ and the average radius $R=(R_1+R_2)/2$ of the quantum well. The well extends along the $z$-direction, and the $x=r\cos(\varphi)$ and $y=r\sin(\varphi)$ directions define the cross-section; $r$ and $\varphi$ are polar coordinates. The spin qubit is defined by confining the hole in a quantum dot along $z$ and by further applying a magnetic field $\textbf{B}$ that splits the spin states. The quantum dot can be defined electrostatically by metallic gates~\cite{froning2020ultrafast,hendrickx2020four}, or by growing thin insulating regions in the quantum well~\cite{Jia2019}. While the latter approach could provide better control over the size and shape of the dot, for simplicity, we focus here on the former case~\footnote{We note that in the annular CQW of Fig.~\ref{fig:sketch}a), the technologically challenging gate-all-around technology can give excellent control over the quantum dot dimension and shape.}. For most of our analysis, we explicitly study annular CQWs where a thin semiconducting shell fully surrounds an inner core, as shown in Fig.~\ref{fig:sketch}a). This structure is compatible with current technology~\cite{Lauhon2002}. However, our theory also describes well architectures comprising planar CQWs grown on top of a planar substrate, as shown in Fig.~\ref{fig:sketch}b), as well as CQWs with less symmetric cross-sections, e.g. square or hexagonal. In particular, planar architectures grown on top of a silicon substrate hold particular promise for scaling up quantum processors because of their compatibility with CMOS technology~\cite{Veldhorst2017}. In this context, a material that is attracting much attention is strained Ge, which presents a large SOI and a small effective mass, and can be grown epitaxially with high purity on top of Si~\cite{scappucci2020germanium}. Moreover, in nanowires, the mismatch of the energy bands between Si and Ge and the alignment of their Fermi energies ensure that, even without external gating, the charge carriers are holes confined in Ge~\cite{scappucci2020germanium}.
For these reasons, although our theory is applicable to a wide range of semiconducting materials, we restrict ourselves to the analysis of a Ge CQW (where the hole is confined) surrounded by Si. The confinement of the holes in the shell instead of in the core has far-reaching consequences for the resulting spin qubits, and sets our setup apart from current Ge/Si core/shell and hut nanowires~\cite{froning2020ultrafast,PhysRevResearch.3.013081, Wang2022}. We now discuss in detail the key ingredients required to define these qubits, including a description of the physics of the valence band of semiconductors, of the electric and magnetic fields, and of strain. \subsection{Theoretical model} \label{sec:Model} The physics of hole nanostructures is accurately described by the Hamiltonian \begin{equation} \label{eq:Hamiltonian} H=H_\text{LK}+V_\text{C}(r,z) - e \textbf{E}\cdot \textbf{r} + H_\textbf{B}+ H_\text{BP} \ . \end{equation} The kinetic energy of the holes is modelled by the isotropic Luttinger-Kohn (LK) Hamiltonian~\cite{WinklerSpinOrbitCoupling2003} \begin{equation} \label{eq:LK-Hamiltonian} H_\text{LK}=\left(\gamma_1+\frac{5}{2}\gamma_s\right)\frac{p^2}{2m} -\frac{\gamma_s}{m}(\textbf{p}\cdot \textbf{J})^2 \ , \end{equation} where $\gamma_1$ and $\gamma_s$ are material-dependent LK parameters parametrizing the mixture of heavy holes (HHs) and light holes (LHs) at the top of the valence band of cubic semiconductors, $\textbf{p}=-i\hbar\nabla$ is the canonical momentum [$p^2=-\hbar^2\nabla^2$], and $\textbf{J}=(J_x,J_y,J_z)$ is the vector of spin-3/2 matrices. In particular, in Ge $\gamma_1\approx 13.35$ and $\gamma_s\approx 4.96$~\cite{WinklerSpinOrbitCoupling2003}. Deviations from this model, including the contributions of the additional valence band and of cubic anisotropies, are addressed in Apps.~\ref{app:SOHs} and~\ref{sec:deviation_LK}, respectively. The quantum dot is defined by the confinement potential $V_\text{C}(r,z) =V_r(r)+\hbar\omega_z z^2/2l_z^2$, comprising an abrupt potential $V_r(r)$ in the radial direction modelling the boundary of the CQW and a smooth harmonic potential parametrized by a characteristic length $l_z$ and frequency $\omega_z=\hbar\gamma_1/ml_z^2$. We also include an electric field $\textbf{E}$ produced by the gates and a magnetic field $\textbf{B}$, which couples to the spin of the hole by the Hamiltonian \begin{equation} \frac{H_\textbf{B}}{2\mu_B}=\kappa \textbf{B}\cdot \textbf{J}+\frac{2\gamma_s}{\hbar}\left[\left(\frac{\gamma_1}{\gamma_s}+\frac{5}{2}\right)\left\{\textbf{A}, \textbf{p} \right\}- 2\left\{\textbf{A}\cdot\textbf{J}, \textbf{p}\cdot \textbf{J} \right\}\right] \ , \end{equation} where $\{A,B\}=(AB+BA)/2$ and $\mu_B$ is the Bohr magneton. Here, $H_\textbf{B}$ includes the Zeeman field and the orbital magnetic field effects coming from the Peierls substitution $\textbf{p}\to \pmb{ \pi}=\textbf{p}+e\textbf{A}$, with $\textbf{A}=(zB_y-yB_z/2,-zB_x+xB_z/2,0)$. We neglect small corrections $\mathcal{O}(\textbf{B}^2)$ and anisotropies $\propto J_i^3$ of the Zeeman interactions. \subsection{Strain in a Ge curved quantum well} \label{sec:strain} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Strain} \caption{\label{fig:strain} Strain energy of a Ge CQW as a function of the thickness $\tau$ of the well.
With dashed lines we show the approximate results in Eq.~\eqref{eq_strain_pars}, while with solid lines we show a more general result, including differences in the elastic constants of Si and Ge, as discussed in App.~\ref{app:strain}. The radial strain $\epsilon_r$ is to good approximation independent of the outer Si shell, while the longitudinal strain $\epsilon_z$ depends strongly on the outer shell thickness $R_3$, measured with respect to $R_2$, see Fig.~\ref{fig:sketch}. For Ge CQWs, the radial strain dominates, while the longitudinal strain plays a significant role in thicker Ge wells, especially when covered by a thick outer Si layer. } \end{figure} Another key feature to understand the behaviour of hole spin qubits is the strain, which strongly hybridizes HHs and LHs~\cite{PhysRevB.90.115419,Niquet2012,PhysRevB.103.245304}. We now restrict ourselves to the analysis of the technologically relevant scenario where a thin Ge CQW is surrounded by Si. In this case, the strain is mainly determined by the mismatch of the lattice constants of Si and Ge, $a_\text{Si}=0.543$~nm and $a_\text{Ge}=0.566$~nm, respectively. In the annular Ge CQW sketched in Fig.~\ref{fig:sketch}a), the strain is strikingly different from that in other devices~\cite{PhysRevB.90.115419,doi:10.1002/adma.201906523,PhysRevB.103.125201,bosco2021squeezed}, and it is accurately modelled by the Bir-Pikus (BP) Hamiltonian \begin{equation} \label{eq:strain} H_\text{BP}=J_z^2\epsilon_z- J_r^2\epsilon_r \ , \end{equation} where $J_r=J_x\cos(\varphi)+J_y\sin(\varphi)$ is the spin-3/2 matrix aligned to the radial direction. The strain energies $\epsilon_{r}$ and $\epsilon_z$ can be approximated as \begin{subequations} \label{eq_strain_pars} \begin{align} \epsilon_r & \approx \left(1-\frac{\tau}{2R}\right)^2|b| \varepsilon_0 >0\ , \\ \epsilon_z& \approx \left(\frac{1}{2}-\frac{\tau}{8R}-\frac{R^2}{R_3^2}\right) \frac{\tau}{R}|b|\varepsilon_0 \ , \end{align} \end{subequations} where $\varepsilon_0\approx 1.6 \varepsilon_\parallel$ is the typical strain in planar heterostructures~\cite{doi:10.1063/1.1601686}, $\varepsilon_\parallel=(a_\text{Ge}-a_\text{Si})/a_\text{Ge}\approx 4\%$ is the relative mismatch of the lattice constants of Si and Ge, and $b=-2.2$~eV~\cite{WinklerSpinOrbitCoupling2003}. If Ge is grown on pure Si, the typical strain energy is $|b|\varepsilon_0\approx 140$~meV; indeed, $|b|\varepsilon_0\approx 2.2~\text{eV}\times 1.6\times 0.04\approx 0.14$~eV. However, if a Si$_{1-x}$Ge$_x$ compound~\cite{scappucci2020germanium} substitutes pure Si in the core and outer shell, the strain decreases as $\varepsilon_\parallel\to (1-x)\varepsilon_\parallel$. The dependence of these quantities on the thickness $\tau$ of the Ge well is shown in Fig.~\ref{fig:strain}. By comparing to the results obtained by a more general analysis analogous to Ref.~\cite{PhysRevB.90.115419}, we observe that the simple expressions in Eq.~\eqref{eq_strain_pars} accurately describe the system. A more detailed derivation of Eqs.~\eqref{eq:strain} and~\eqref{eq_strain_pars}, also including a discussion of inhomogeneous strain, is provided in App.~\ref{app:strain}. From Eq.~\eqref{eq:strain}, we observe that the strain energy can be decomposed into two different components, parametrized by the competing energies $\epsilon_{r,z}$. We emphasize that these energies can be designed independently because they depend on different design parameters of the cross-section.
In particular, the energy $\epsilon_z$ favours holes with the quantization axis aligned to the $z$-direction, and it strongly depends on the thickness $\tau$ of the Ge well and on the radius $R_3$ of the outer Si shell. When $R_3$ is sufficiently large (small), then $\epsilon_z>0$ ($\epsilon_z<0$) and LHs (HHs) have a lower energy. When $\epsilon_z>0$, this term is qualitatively analogous to the typical strain in Ge/Si core/shell nanowires~\cite{PhysRevB.90.115419}, where the BP Hamiltonian is~\footnote{The strain of a Ge/Si core/shell nanowire with inner radius $R_1$ and outer radius $R_2$ is straightforwardly related to the strain in the thin Ge CQW [Eq.~\eqref{eq_strain_pars}] by the substitutions $\tau/R\to 2$ and $R/R_3\to R_1/2R_2$.} \begin{equation} H_\text{BP}^\text{c/s}\approx J_z^2 \frac{|b|\varepsilon_0}{2}\left(1- \frac{R_1^2}{R_2^2}\right) \ . \end{equation} In contrast, the energy $\epsilon_r$ favours HHs in the radial direction, and to good approximation it is independent of the presence of the outer Si shell. This type of strain is not present in usual Ge/Si core/shell nanowires, but it emerges in planar heterostructures~\cite{PhysRevB.103.125201,Wang2021,bosco2021squeezed} and hut wires~\cite{doi:10.1002/adma.201906523}, where the ground states are expected to be HHs with the quantization axis aligned to the strong confinement direction. When $\tau\ll R$, we recover this expected limit and \begin{equation} H_\text{BP}^\text{PH}=-J_r^2 |b|\varepsilon_0 \ . \end{equation} From Fig.~\ref{fig:strain}, we observe that while $\epsilon_z$ is dominant at large values of the ratio $\tau/R$, in thin Ge CQWs $\epsilon_r$ is the dominant contribution and the ground state comprises to good approximation radial HHs. In the following, we restrict ourselves to the analysis of devices with thin Ge CQWs and a thick outer Si layer, such that $\epsilon_r>\epsilon_z>0$. \subsection{Hamiltonian in cylindrical coordinates } \label{sec:frame} In a thin Ge CQW, the radial confinement energy \begin{equation} \epsilon_c=\frac{\hbar^2\pi^2\gamma_1}{m\tau^2}\approx 100 \times \left(\frac{10~\text{nm}}{\tau}\right)^2~\text{meV} \end{equation} is large compared to the quantization energy $\hbar\omega_\varphi=\hbar^2\gamma_1/mR^2$ of the total angular momentum and to the confinement energy $\hbar\omega_z=\hbar^2\gamma_1/ml_z^2$ along the quantum well, both in the meV range. In this case, where $\tau\ll R, l_z$, the dynamics of the confined holes is well described by an effective low-energy theory where the radial degrees of freedom are frozen. However, because the radial strain $\epsilon_r$ is comparable to the radial confinement, i.e. $\epsilon_c\sim \epsilon_r\sim 100$~meV, an accurate low-energy model of the system needs to account exactly for $\epsilon_r$. For this reason, we rotate the Hamiltonian in Eq.~\eqref{eq:Hamiltonian} to a cylindrical coordinate system where the spin quantization axis is aligned to the radial direction and the radial strain $-\epsilon_r J_r^2$ is diagonal. This rotation is generated by the $\varphi$-dependent unitary operator $U=e^{-i J_3 \varphi} e^{-i J_2 \pi/2}$, comprising a first rotation that aligns the spin matrices $J_x$ and $J_y$ to the radial and angular directions, respectively, and a second transformation that aligns the spin quantization axis to the radial direction, i.e. $J_r\to J_3$ and $J_z\to -J_1$. To avoid confusion, in the new frame we label the spin-3/2 matrices by 1,2,3 instead of $x,y,z$.
The effect of $U$ on the most relevant operators is \begin{subequations} \label{eq:rot_polar} \begin{align} \label{eq:mom-rot} U^\dagger \textbf{p} U & = \textbf{p}+ \hbar \textbf{e}_\varphi J_1 \ , \\ \label{eq:J-rot} U^\dagger \textbf{J} U &= \textbf{e}_r J_3 + \textbf{e}_\varphi J_2 - \textbf{e}_z J_1 \ . \end{align} \end{subequations} We note that in products of the rotated operators some care must be taken, because $\textbf{e}_{r,\varphi}$ are unit vectors in cylindrical coordinates that depend on $\varphi$, and do not commute with $p_\varphi=-i\hbar\partial_\varphi$. For this reason, we report in App.~\ref{sec:LK-BP-cyl} the explicit expressions of the LK and BP Hamiltonians in this coordinate system. Importantly, from Eq.~\eqref{eq:rot_polar} it follows that the total angular momentum $F_z=p_\varphi+\hbar J_z$ in the original coordinate system transforms as \begin{equation} U^\dagger F_z U = p_\varphi \ , \end{equation} in the new frame. Consequently, the physical eigensolutions of the Hamiltonian in the rotated frame are antiperiodic in $\varphi$, and $p_\varphi$ takes the quantized values $(2l-1)\hbar/2$, with $l\in\mathbb{Z}$. \section{Effective low-energy theory} \label{sec:Effective-theory} We first derive an effective model describing the band dispersion in the absence of external fields ($\textbf{E}=\textbf{B}=0$) and then generalize our results to account for finite values of $\textbf{E}$ and $\textbf{B}$. In the frame introduced in Sec.~\ref{sec:frame} and when $\textbf{E}=\textbf{B}=0$, $p_\varphi$ and $p_z$ are good quantum numbers of the total isotropic Hamiltonian $H$ in Eq.~\eqref{eq:Hamiltonian}. To construct an effective Hamiltonian that acts only on these two degrees of freedom, we trace out the radial direction by projecting the rotated $H$ onto the basis states \begin{equation} \label{eq:basis-states} \psi_n(r)=\sqrt{\frac{2}{\tau r}}\sin\left[\frac{\pi n}{\tau} \left(r-R-\frac{\tau}{2}\right)\right] \ , \ n\in \mathbb{N} \ , \end{equation} satisfying hard-wall boundary conditions at $r=R\pm \tau/2$. These functions are the eigenstates of the operator $\gamma_1 p_r^2/m$ with eigenvalues $\epsilon_c n^2$, where $p_r=-i\hbar(\partial_r+1/2r)$ is the hermitian radial momentum, and they provide a complete basis for the radial degree of freedom. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{pars} \caption{\label{fig:pars} Parameters of the effective theory in Eq.~\eqref{eq:effective_H} of a thin Ge CQW. We show with blue and red lines the inverse effective mass and the spin-orbit velocities, see Eq.~\eqref{eq:pars}, obtained at $\epsilon_z=0$ and at $\epsilon_z=0.5\epsilon_r$, respectively. While the effective mass is not strongly modified by strain, the SOI can acquire a large strain dependence, and it varies significantly as a function of the longitudinal strain $\epsilon_z$. } \end{figure} The quasi-degenerate ground state of the system comprises two HH Kramers partners with quantum number $n=1$, energetically separated from the first (HH) excited states by the energy $3\epsilon_c (1-2\gamma_s/\gamma_1)/2$, and from the second (LH) excited state by $2\gamma_s\epsilon_c/\gamma_1+2\epsilon_r+\epsilon_z$.
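To put these energy scales in perspective, it is instructive to evaluate them for an illustrative set of parameters (the specific numbers here are our choice and serve only as an example): taking $\tau=10$~nm, so that $\epsilon_c\approx 100$~meV, together with $\epsilon_r\approx\epsilon_c$, $\epsilon_z\ll\epsilon_c$, and the Ge values $\gamma_1\approx 13.35$ and $\gamma_s\approx 4.96$, one finds
\begin{equation*}
\frac{3\epsilon_c}{2}\left(1-\frac{2\gamma_s}{\gamma_1}\right)\approx 40~\text{meV} \ , \qquad \frac{2\gamma_s\epsilon_c}{\gamma_1}+2\epsilon_r+\epsilon_z\approx 270~\text{meV} \ ,
\end{equation*}
so both excited manifolds lie far above the ground-state doublet, justifying the projection onto the lowest radial eigenstates used below.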
By considering only the first two radial eigenstates, with $n=1$ and $n=2$, and by a second-order Schrieffer-Wolff transformation~\cite{WinklerSpinOrbitCoupling2003,BRAVYI20112793}, we find that the dynamics of the ground state is captured by the quadratic Hamiltonian \begin{equation} \label{eq:effective_H} H_\text{GS}=\frac{p_+p_-}{2m^*} -\frac{p_+^2+p_-^2}{4\delta m}+\sigma_+ \left( v_- p_--v_+ p_+ \right) + \text{h.c.} \ , \end{equation} where $p_\pm=\frac{p_\varphi}{R}\pm i p_z$ and $\text{h.c.}$ denotes the hermitian conjugate. The parameters of the effective theory to second order in $\tau/R$ are given by \begin{subequations} \label{eq:pars} \begin{align} \frac{1}{m^*}&\approx \frac{1}{m}\left(\gamma _1+\gamma _s -3 \tilde{\gamma}\right)\ , \\ v_-&\approx \frac{3}{2}\frac{\hbar}{m R}\left[\left(\gamma_s-\tilde{\gamma}\right) -\frac{1}{2}(\gamma_1+2\gamma_s)\tilde{\epsilon}_z \right]\ , \\ \frac{1}{\delta m}&\approx \frac{3}{m} \gamma_s \tilde{\epsilon}_z \ , \ \ \text{and} \ \ \ v_+ \approx \frac{3}{4}\frac{\hbar}{m R}\gamma_1\tilde{\epsilon}_z \ ; \end{align} \end{subequations} we introduce here the dimensionless quantities \begin{subequations} \begin{align} \tilde{\gamma}&=\frac{256}{9 \pi^2}\frac{ \gamma _s }{10 + \gamma _1 \left(3 \epsilon _c+4 \epsilon _r+2 \epsilon _z\right)/\gamma_s \epsilon _c } \ , \\ \tilde{\epsilon}_z&= \frac{ \epsilon _z}{\epsilon _z+2 \epsilon _r+2 \gamma _s\epsilon _c/\gamma _1 } \ . \end{align} \end{subequations} While $\tilde{\gamma}$ only quantitatively modifies the parameters of the effective Hamiltonian, the longitudinal strain $\tilde{\epsilon}_z$ introduces the qualitatively different terms $v_+$ and $ \delta m$, which modify the SOI and the effective mass along the quantum well and in the angular direction. The dependence of these parameters on strain is shown in Fig.~\ref{fig:pars}. We observe that while the effective mass is not strongly affected by $\delta m$, the effective SOI can be largely modified by $\epsilon_z$, and $v_+$ can become comparable to $v_-$. The effective theory in Eq.~\eqref{eq:effective_H} can also be generalized to include the high-energy valence band, which is separated from the HH-LH subspace by an energy $\Delta_\text{S}\approx 300$~meV. These high-energy states do not qualitatively alter the physics described here. In fact, in this case, Eq.~\eqref{eq:effective_H} is still valid, but the effective parameters in Eq.~\eqref{eq:pars} acquire corrections that scale as $\epsilon_c/\Delta_\text{S}$. The effect of the additional valence band is discussed in detail in App.~\ref{app:SOHs}, and in particular, the generalized version of Eq.~\eqref{eq:pars} is given in Eq.~\eqref{eq:pars_SOH}. \\ \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Band_Comparison_hom_zoom} \caption{\label{fig:band} Energy dispersion of a strained thin Ge CQW. We compare the dispersion calculated from the effective theory in Eq.~\eqref{eq:effective_H} (blue lines) and by numerically diagonalizing the Hamiltonian $H$ in Eq.~\eqref{eq:Hamiltonian} (red dashed lines). In a) and b), we show results obtained at $E_x=0$ and $E_x=\epsilon_c/eR$, respectively. We consider $\textbf{B}=0$, $\tau=R/2$, $\epsilon_r=2\epsilon_z=3\epsilon_c$. In the insets at the bottom right of the figures, we show the hole density of the ground state at $p_z=0$ in the lab frame. At $p_z=0$, in a) the ground and first excited states are split by the small SOI gap $\Delta$ [Eq.~\eqref{eq:delta}].
In contrast, in b) the gap $\hbar\omega_E$ [Eq.~\eqref{eq:omegaE}] is large and $E_x$ pins the hole to the top of the well. } \end{figure} In Fig.~\ref{fig:band}a) we compare the energy dispersion of the lowest energy levels as a function of $p_z$ derived from the effective model in Eq.~\eqref{eq:effective_H} to the dispersion calculated by numerically diagonalizing the Hamiltonian $H$ in Eq.~\eqref{eq:Hamiltonian}. For the numerical calculations, we project $H$ onto the lowest 20 states $\psi_{n}(r)$ in the radial direction, see Eq.~\eqref{eq:basis-states}, and consider 40 total angular momentum eigenstates. We observe excellent agreement between the effective theory and the numerics, even at rather large values of the momentum $p_z$, the strain $\epsilon_{r,z}$, and the thickness $\tau$. At $p_z=0$, the eigenstates of the effective Hamiltonian $H_\text{GS}$ in Eq.~\eqref{eq:effective_H} are the degenerate Kramers partners \begin{equation} \label{eq:eigenstates_pz0} |g_{1,2}^l\rangle= \frac{|\uparrow\rangle\mp|\downarrow\rangle}{\sqrt{2}}|\pm l\rangle \ , \ \text{and} \ |e_{1,2}^l\rangle=\frac{|\uparrow\rangle\pm|\downarrow\rangle}{\sqrt{2}}|\pm l\rangle \ , \end{equation} where $|\uparrow\downarrow\rangle$ are the pseudospins and $|\pm l\rangle$ are the total angular momentum eigenstates with eigenvalues $p_\varphi|\pm l\rangle=\pm\hbar (l-1/2)|\pm l\rangle$, with $l\in\mathbb{N}$. The eigenenergies are \begin{equation} \epsilon_{g,e}= \frac{\hbar^2(2l-1)^2}{8m_\varphi R^2}\mp\frac{\hbar}{2R}\left(v_--v_+\right)\left(2l-1\right) \ , \end{equation} where the angular mass is $m_\varphi=\left({1}/{m^*}-{1}/{\delta m}\right)^{-1}= m /\left[\gamma_1+\gamma_s-3(\tilde{\gamma}+\gamma_s\tilde{\epsilon}_z) \right ]$, see Eq.~\eqref{eq:pars}; neglecting the small effect of strain, we find $m_\varphi \approx 0.06 m$, corresponding to a variation of the angular mass of $20\%$ from the average value $m/\gamma_1$. The ground state of the system comprises the Kramers doublet $|g^{l=1}_{1,2}\rangle$, and is separated from the first excited doublet $|e^{l=1}_{1,2}\rangle$ by the SOI energy \begin{equation} \label{eq:delta} \Delta=\frac{\hbar }{R}v_\varphi= \frac{3}{2}\frac{\tilde{\kappa}}{\gamma_1}\hbar \omega_\varphi \ , \end{equation} where we introduce the angular SOI velocity $v_\varphi=v_--v_+$ and the dimensionless quantity \begin{equation} \label{eq:tildekappa} \tilde{\kappa}= \gamma _s-\tilde{\gamma}- \left(\gamma _1+\gamma _s\right)\tilde{\epsilon}_z \ . \end{equation} The amplitude of the energy splitting is set by the characteristic angular momentum quantization energy $\hbar\omega_\varphi=\hbar^2\gamma_1/mR^2\approx 10\times (10~\text{nm}/R)^2$~meV, but it is reduced by the strain-dependent SOI factor $v_\varphi m R/\hbar\gamma_1=3\tilde{\kappa}/2\gamma_1$, which in the parameter range considered varies from $\sim 0.5$ at $\epsilon_z=0$ to $\sim 0.15$ at $\epsilon_z=0.5\epsilon_r$, see Fig.~\ref{fig:pars}. Importantly, the longitudinal strain $\epsilon_z$ \textit{decreases} the subband gap $\Delta$. This trend is in striking contrast to Ge/Si core/shell nanowires, where the strain $\epsilon_z$ induced by the Si shell \textit{increases} the energy gap at $p_z=0$, resulting in a gap $\Delta_\text{c/s}\approx 0.5 \epsilon_z\sim 10$~meV~\cite{PhysRevB.90.115419,DRkloeffel1,DRkloeffel2,DRkloeffel3}.\\ To define spin qubits, a subband energy gap $\Delta$ in the meV range is convenient because it reduces the leakage outside of the computational subspace.
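For a rough sense of scale (an illustrative estimate with parameters of our choosing), at $R=20$~nm one has $\hbar\omega_\varphi\approx 2.5$~meV, so Eq.~\eqref{eq:delta} gives $\Delta\approx 0.5\times 2.5~\text{meV}\approx 1.2$~meV at weak longitudinal strain, decreasing to $\Delta\approx 0.15\times 2.5~\text{meV}\approx 0.4$~meV at $\epsilon_z=0.5\epsilon_r$; since $\Delta\propto R^{-2}$, doubling the radius reduces these values by a factor of four.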
In strongly strained thin Ge CQWs, this requirement would constrain the maximal values of the radius $R$ to 10-20~nm. While technologically feasible, this requirement can also be relaxed by including an external electric field $E_x$, e.g. produced by a back or top gate. This field breaks the rotational symmetry and yields the additional energy \begin{equation} \label{eq:E-field_H} H_E= -eE_xR\cos(\varphi) \ , \end{equation} which introduces matrix elements with amplitude $eE_xR/2$ coupling eigenstates with total angular momentum quantum number $l$ to states with $l\pm 1$. As shown in Fig.~\ref{fig:band}b), the effective Hamiltonian $H_\text{GS}+H_E$ nicely reproduces the energy dispersion of the low-energy states even in the presence of rather large electric fields. As anticipated, $E_x$ also induces a large energy splitting between the ground state and the first excited state at $p_z=0$. At large values of $E_x$, the angular coordinate $\varphi$ is pinned in the vicinity of $\varphi=0$ and thus the wavefunction is confined at the top of the well. In this case, one can expand $H_E$ close to $\varphi=0$, resulting in \begin{equation} \label{eq:HO_expansion} H_\text{GS}(p_z=0)+H_E\approx \frac{p_\varphi^2}{2m_\varphi R^2}+\frac{eE_xR}{2}\varphi^2 + \Delta p_\varphi \sigma_x \ . \end{equation} Because the mass $m_\varphi\approx 0.06 m$ is to good approximation independent of strain, see Fig.~\ref{fig:pars}, the harmonic confinement frequency \begin{equation} \label{eq:omegaE} \omega_E=\sqrt{\frac{eE_x}{m_\varphi R}} \ \end{equation} is independent of strain and dominates over the smaller gap $\Delta$. In this case, the subband gap between the ground and first excited doublets is to good approximation $\hbar\omega_E\approx 8$~meV at $E_x=1$~V/$\mu$m and $R=20$~nm, and it decreases slowly, $\propto R^{-1/2}$, compared to the faster $R^{-2}$ decay of $\Delta$. To facilitate the comparison to Fig.~\ref{fig:band}b), we note that the ratio $\frac{\hbar\omega_E}{\epsilon_c} = \frac{\tau}{\pi R} \sqrt{\frac{eE_xR}{\epsilon_c}} \sqrt{\frac{m}{\gamma_1 m_\varphi}}\approx 0.17$ at $\tau=R/2$ and $eE_xR=\epsilon_c$, in good agreement with the figure. We remark that the electric field $E_x$ originates from a gate that breaks the rotational symmetry, and it cannot come from gates completely wrapped around the CQW. Such all-around gates will instead produce a radial electric field $E_r$ that does not break the rotational symmetry, but that can quantitatively modify the effective parameters in Eq.~\eqref{eq:pars}. We also note that, by a third-order Schrieffer-Wolff transformation, this electric field generates a \textit{cubic} SOI term \begin{equation} H_{r}= \frac{-4 \tilde{\gamma} \gamma _s eE_r\tau}{\pi ^4 \left(\gamma _1-2 \gamma _s\right) \left( 2\gamma _s+\gamma _1 \left(2 \epsilon _r+\epsilon _z\right)/\epsilon _c\right)}\frac{p_-^3\tau^3}{\hbar^3}\sigma_+ +\text{h.c.} \ , \end{equation} in analogy to planar Ge/SiGe heterostructures~\cite{PhysRevB.103.125201,Wang2021,bosco2021squeezed}. While this term can be of interest at large values of $R$, in the regime of parameters studied in this work it only adds a small correction and will not be discussed further.\\ Finally, we introduce the magnetic field $\textbf{B}$ in the effective theory.
By writing the Hamiltonian $H_\textbf{B}$ in the radial basis in Eq.~\eqref{eq:basis-states}, and with a second-order Schrieffer-Wolff transformation, we find to linear order in $\textbf{B}$ \begin{equation} \label{eq:Zeeman} \begin{split} \frac{H_B}{3\kappa\mu_B}&= B_x \left\{\cos(\varphi)\left[\left(1-\frac{\tilde{\gamma}}{\kappa}\right)\sigma_z-\frac{\tilde{\kappa}}{\kappa}\frac{z}{R} \sigma_x \right]+\tilde{\epsilon}_z\sin(\varphi)\sigma_y\right\}\\ &+B_y \left\{\sin(\varphi)\left[\left(1-\frac{\tilde{\gamma}}{\kappa}\right)\sigma_z-\frac{\tilde{\kappa}}{\kappa}\frac{z}{R} \sigma_x \right]-\tilde{\epsilon}_z\cos(\varphi)\sigma_y\right\}\\ &+ B_z \left[\left(\tilde{\epsilon}_z+\frac{1}{2}\frac{\tilde{\kappa}}{\kappa}\right)\sigma_x+\frac{1}{3\kappa} \frac{m}{m_\varphi}p_\varphi \right] \ . \end{split} \end{equation} We omit the negligible spin-independent shift of the dot $-2\mu_B z\left\{p_\varphi,[B_x\cos(\varphi)+B_y\sin(\varphi)]\right\}/m_\varphi R$. We emphasize that the magnetic interactions have an angular dependence caused by the transformation $U$ and that the origin of the coordinate system coincides with the center of mass of the quantum dot. In addition, the corrections to the Zeeman energy caused by the high-energy holes are discussed in detail in App.~\ref{app:SOHs}, see in particular Eq.~\eqref{eq:Zeeman_SOH}. \section{Spin qubits in short quantum dots} \label{sec:SQD} We now study the properties of a spin qubit confined in a quantum dot in the thin CQWs sketched in Fig.~\ref{fig:sketch}. The behaviour of the spin qubit strongly depends on the length $l_z$ of the dot. In particular, we examine two different qubit designs, where the dot is long or short compared to the radius $R$, i.e. $l_z\gg R$ or $l_z\lesssim R$, respectively. Both regimes can be described by the effective theory introduced in Sec.~\ref{sec:Effective-theory}. First, we restrict ourselves to the analysis of the annular CQW shown in Fig.~\ref{fig:sketch}a) and described by the isotropic LK Hamiltonian in Eq.~\eqref{eq:LK-Hamiltonian}. We then show that our theory also describes well the planar CQW in Fig.~\ref{fig:sketch}b). Moreover, our theory models a wide range of devices with general cross-sections and is valid even when cubic anisotropies of the LK Hamiltonian are included. A detailed analysis of anisotropic corrections, including results obtained for quantum wells grown along a main crystallographic axis and with square cross-sections, is given in App.~\ref{sec:deviation_LK}. The dynamics of a spin qubit can be mapped to the effective quantum dot Hamiltonian~\cite{doi:10.1063/1.4858959,PhysRevLett.120.137702,PhysRevApplied.16.054034,VenitucciElectricalmanipulationsemiconductor2018,doi:10.1126/science.1080880} \begin{equation} \label{eq:QD-effective-theory} H_\text{QD}=\frac{\mu_B}{2} \pmb{\sigma}\cdot\left(\underline{g}-\sum_{i=x,y,z}\frac{eE_i(t)R}{\hbar\omega_\varphi}\delta\underline{ g}^i\right)\cdot\textbf{B} \ , \end{equation} parametrized by a tensor $\underline{g}$ of $g$-factors and a tensor $ \delta\underline{g}^i$ driving spin transitions via the ac field $E_i(t)$.
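As a point of reference (a back-of-the-envelope example, with the field value chosen for illustration only), Eq.~\eqref{eq:QD-effective-theory} yields a static qubit splitting $\hbar\omega_Q=\mu_B|\underline{g}\cdot\textbf{B}|$; for an effective $g$-factor of order one, $\mu_B/h\approx 14$~GHz/T gives $\omega_Q/2\pi\approx 7$~GHz at $B=0.5$~T, within the microwave band considered below.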
An accurate model for short quantum dots, with $l_z\lesssim R$, is provided by the Hamiltonian \begin{equation} \label{eq:SQD_theory} H_\text{SD}=\frac{\hbar\bar{\omega}_\varphi}{2}p_\varphi^2+\Delta p_\varphi\sigma_x -\frac{\hbar\bar{\omega}_z}{2} \lambda_z+\frac{\hbar v_z}{\sqrt{2}\bar{l}_z} \lambda_y\sigma_y-eE_xR\cos(\varphi) \ , \end{equation} obtained by projecting Eqs.~\eqref{eq:effective_H} and~\eqref{eq:E-field_H} onto the first two eigenstates of the longitudinal confinement, energetically separated by $\hbar\bar{\omega}_z\propto 1/\bar{l}_z^2$. We introduce the longitudinal mass $m_z=\left({1}/{m^*}+{1}/{\delta m}\right)^{-1}\approx 0.06m$ and the SOI velocity $v_z=v_-+v_+\approx 6.5 \hbar/mR$, see Eq.~\eqref{eq:pars} and Fig.~\ref{fig:pars}. Strain weakly affects these parameters, and in the regimes studied it results in variations of $\lesssim 15\%$ from the numerical values provided here. We also introduce the exact longitudinal and angular frequencies $ \bar{\omega}_z=\omega_z\sqrt{m/\gamma_1 m_z} $ and $ \bar{\omega}_\varphi=\omega_\varphi\sqrt{m/\gamma_1 m_\varphi}$, which include the small corrections of the longitudinal and angular masses from the average value $m/\gamma_1$; in analogy, we define the exact harmonic length $\bar{l}_z=l_z(\gamma_1 m_z/m)^{1/4}$. Here, the Pauli matrices $\lambda_{x,y,z}$ act on these orbital states, while $\sigma_{x,y,z}$ act on the pseudospin. We remark that $p_\varphi$ is the total angular momentum, and that at $E_x=0$ the degenerate Kramers partners are the eigenstates of $p_\varphi \sigma_x$ (and not of $\sigma_x$) with the same eigenvalue, see Eq.~\eqref{eq:eigenstates_pz0}. To proceed further, it is convenient to eliminate the term $\propto\lambda_y\sigma_y$ by the rotation $H_\text{SD}\to e^{i \theta_z \lambda_x\sigma_y/2} H_\text{SD}e^{-i \theta_z \lambda_x\sigma_y/2} $, where $\theta_z=\arctan(\sqrt{2}v_z/\bar{\omega}_z\bar{l}_z)$. After this transformation, the low-energy Hamiltonian acting on the ground-state Kramers partners is \begin{equation} \label{eq:gs-SQD_theory} H_\text{SD}^\text{GS}=\frac{\hbar\bar{\omega}_\varphi}{2}p_\varphi^2+\Delta\cos(\theta_z) p_\varphi\sigma_x -eE_xR\cos(\varphi) \ . \end{equation} At weak electric fields, the subband gap that energetically separates the spin qubit from the non-computational subspace is \begin{equation} \label{eq:Eg_SQD} E_g=\sqrt{\Delta^2 \cos(\theta_z)^2+e^2E_x^2R^2} \ . \end{equation} In quantum wells with $R=20$~nm, the gap is $E_g\approx 1$~meV at $E_x=0$ and it increases with $E_x$. At large values of $E_x\gtrsim 1$~V/$\mu$m, the energy gap approaches $\hbar \omega_E\gtrsim 8$~meV, see Eq.~\eqref{eq:omegaE}. Also, in this basis, Eq.~\eqref{eq:Zeeman} reduces to \begin{equation} \label{eq:Zeeman_SQD} \begin{split} H_\text{SD}^{B}=& 3\kappa\mu_B B_z \left[\left(\tilde{\epsilon}_z+\frac{1}{2}\frac{\tilde{\kappa}}{\kappa}\right)\cos(\theta_z)\sigma_x+\frac{1}{3\kappa} \frac{m}{m_\varphi}p_\varphi \right]\\ &+ 3\kappa\mu_B\left[B_x\cos(\varphi)+B_y\sin(\varphi) \right] q_0 \sigma_z\\ &+3\kappa\mu_B\left[B_x\sin(\varphi)-B_y\cos(\varphi) \right]\tilde{\epsilon}_z\sigma_y \ , \end{split} \end{equation} where we introduce the size-dependent quantity \begin{equation} q_0=\left(1-\frac{\tilde{\gamma}}{\kappa}\right)\cos(\theta_z)-\frac{\tilde{\kappa}}{\kappa}\frac{\bar{l}_z}{\sqrt{2}R}\sin(\theta_z) \ . \end{equation} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{SQD-g.pdf} \caption{\label{fig:gfact_SQD} Matrix of $g$-factors of a hole spin qubit in a thin Ge CQW.
We show the diagonal elements $g_{xx}$, $g_{yy}$, $g_{zz}$ in blue, black and red, respectively; the off-diagonal elements are zero. We compare the results of a numerical simulation of a three-dimensional quantum dot, obtained by discretizing the Hamiltonian~\eqref{eq:Hamiltonian} with an annular cross-section (dots), against the effective theory~\eqref{eq:SQD_theory} (solid lines) and the approximate formulas in Eq.~\eqref{eq:g-tens_app} (dashed lines). In a), we show $g_{ii}$ as a function of the length $l_z$ of the quantum dot at $E_x=0$; in this case, $|g_{xx}|=|g_{yy}|$. In b), we show $g_{ii}$ as a function of the electric field $E_x$ at $l_z=R$. In both cases, we consider $\epsilon_r=3\epsilon_z=\epsilon_c$ and $\tau=R/2$. At $R=20$~nm, the electric field is $E_x\in[0,0.26]$~V/$\mu$m. } \end{figure} \subsection{Defining the spin qubit } We now discuss the matrix $\underline{g}$ of $g$-factors, which determines the energy splitting of the spin states in the quantum dot. From Eqs.~\eqref{eq:gs-SQD_theory} and~\eqref{eq:Zeeman_SQD}, $\underline{g}$ can be derived by projecting $H_\text{SD}^{B}$ onto the degenerate ground state of $H_\text{SD}^\text{GS}$. The resulting $g$-tensor is diagonal, $(\underline{g})_{ij}=g_{ii}\delta_{ij}$. The diagonal elements $g_{ii}$ are shown in Fig.~\ref{fig:gfact_SQD}. We note that the effective theory in Eq.~\eqref{eq:SQD_theory} is reasonably accurate and approximates well a numerical simulation of a three-dimensional quantum dot obtained from Eq.~\eqref{eq:Hamiltonian}. In Fig.~\ref{fig:gfact_SQD}a), we show the dependence of $g_{ii}$ on the size of the dot at $E_x=0$. In this case, the eigenstates of Eq.~\eqref{eq:gs-SQD_theory} coincide with the eigenstates of $p_\varphi \sigma_x$ given in Eq.~\eqref{eq:eigenstates_pz0}, and one obtains $g_{xx}=-g_{yy}= -3\kappa (q_0-\tilde{\epsilon}_z)$ and $g_{zz}= 6\kappa(\tilde{\epsilon}_z +{\tilde{\kappa}}/2\kappa)\cos(\theta_z)-{m}/{m_\varphi}$. Because of the orbital magnetic field, $g_{zz}$ is rather large, and only weakly dependent on the length $l_z$ of the dot. A similar enhancement of the effective Zeeman energy emerges in topological insulator nanowires~\cite{PhysRevB.104.165405}, where the leading contribution is the SOI-induced term $\propto \tilde{\kappa}$. Because in competing hole-based architectures, such as Ge/Si core/shell nanowires, $g_{zz}\sim 1$ is rather small~\cite{adelsberger2}, the large value of $g_{zz}$ in thin CQWs is particularly advantageous for topological quantum computing in the search for exotic particles, such as Majorana bound states~\cite{PhysRevB.86.085408,PhysRevB.90.195421}. At small values of $E_x$, one can still describe the system with a few eigenstates from Eq.~\eqref{eq:eigenstates_pz0}. By using the two energetically lowest quasi-degenerate Kramers partners and by introducing the $E_x$-dependent angle $\theta_E=\arctan\left[eE_xR/\Delta\cos(\theta_z)\right]$, one finds to second order in perturbation theory \begin{subequations} \label{eq:g-tens_app} \begin{align} g_{xx}&=3\kappa\left[ \tilde{\epsilon}_z \cos(\theta_E)-q_0-q_0\frac{m_\varphi}{m}eE_xR \sin(\theta_E)\right]\\ g_{yy}&=3\kappa\left[ q_0 \cos(\theta_E)-\tilde{\epsilon}_z-\tilde{\epsilon}_z\frac{m_\varphi}{m}eE_xR \sin(\theta_E)\right] \\ g_{zz}&=6\kappa\left(\tilde{\epsilon}_z +\frac{\tilde{\kappa}}{2\kappa}\right)\cos(\theta_z)-\frac{m}{m_\varphi}\cos(\theta_E) \ .
\end{align} \end{subequations} These equations qualitatively capture the trend of the $g$-factors as a function of $E_x$, but they are not quantitatively accurate at large values of $E_x$, as shown in Fig.~\ref{fig:gfact_SQD}b). At large values of $E_x$, Eq.~\eqref{eq:gs-SQD_theory} is approximated by a harmonic oscillator with harmonic frequency $\omega_E$, see Eqs.~\eqref{eq:HO_expansion} and~\eqref{eq:omegaE}, resulting in $g_{xx}=-6\kappa e^{-\varphi_E^2(\varphi_{S}^{-2}+\frac{1}{4})}[q_0\cosh(\varphi_E^2/\varphi_{S})-\tilde{\epsilon}_z\sinh(\varphi_E^2/\varphi_{S})]$, $g_{yy}=6\kappa e^{-\varphi_E^2(\varphi_{S}^{-2}+\frac{1}{4})}[q_0\sinh(\varphi_E^2/\varphi_{S})-\tilde{\epsilon}_z\cosh(\varphi_E^2/\varphi_{S})]$ and $g_{zz}=6\kappa\tilde{\epsilon}_z\cos(\theta_z)$, with the angular width $\varphi_E=\sqrt{\omega _{\varphi } m/ \gamma _1 m_{\varphi }\omega _E}$ and the SOI angle $\varphi_{S}^{-1}=m_\varphi v_\varphi\cos(\theta_z)R/\hbar$~\footnote{The signs of the $g$-factors can be understood from Eq.~\eqref{eq:Zeeman_SQD} by considering a rotation of $\pi/2$ around $y$, which transforms $\sigma_x\to \sigma_z$ and $\sigma_z\to -\sigma_x$. In addition, at large $E_x$, because of the SOI $\Delta \cos(\theta_z)\sigma_x p_\varphi $ in the effective Hamiltonian~\eqref{eq:SQD_theory}, the expectation value of $m p_\varphi/m_\varphi$ in the harmonic oscillator ground state exactly cancels the SOI-induced Zeeman energy $\propto \tilde{\kappa}$ in the estimation of $g_{zz}$. One also finds that $\cos(\varphi)\sigma_{y,z}\to \pm e^{-\varphi_E^2(\varphi_{S}^{-2}+\frac{1}{4})}\cosh(\varphi_E^2/\varphi_{S})\sigma_{y,x} $ and $\sin(\varphi)\sigma_{y,z}\to e^{-\varphi_E^2(\varphi_{S}^{-2}+\frac{1}{4})}\sinh(\varphi_E^2/\varphi_{S})\sigma_{x,y} $.}. Because, depending on the amplitude of the longitudinal strain $\epsilon_z$, the limiting values of the $g$-factor for $E_x=0$ and $E_x\to \infty$ can have opposite signs, there can be points at finite values of $E_x$ where the $g_{ii}$ vanish, as shown in Fig.~\ref{fig:gfact_SQD}b). We note that the $g$-tensor is strongly anisotropic and that it is tunable by the electric field and by designing the strain, especially via the outer shell thickness. This behaviour is typical of hole nanostructures~\cite{PhysRevLett.110.046602,PhysRevLett.127.190501,adelsberger2021hole,doi:10.1063/1.5025413,PhysRevB.87.161305,qvist2021anisotropic}. In contrast to Ge/Si core/shell nanowires~\cite{adelsberger2021hole,froning2020ultrafast}, however, in a thin CQW the $g$-factor is only strongly modulated at weak values of the electric field $E_x$; at $E_x\gtrsim 1$~V/$\mu$m, $g$ becomes weakly dependent on $E_x$. In particular, $g$ is \textit{independent} of $E_x$ when $\textbf{B}$ is aligned to the quantum well, and it only varies as $E_x^{-1/2}$ when $\textbf{B}$ is perpendicular to it. This property suggests that spin qubits in these devices can have a low susceptibility to charge noise, enabling long coherence times. \subsection{Decoherence of the qubit} \label{sec:decoh} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{deph} \caption{\label{fig:deph} Dephasing time $T_2^*$ of a hole spin qubit in a thin CQW as a function of the electric field $E_x$. In a), we show with full and empty dots the time $T_2^*$ when the magnetic field is aligned to the $x$- and $y$-directions, respectively; in b), we show the result when $\textbf{B}\parallel z$.
Blue, black and red dots indicate longitudinal strain $\epsilon_z=0$, $\epsilon_z=\epsilon_c/10$, and $\epsilon_z=\epsilon_c/3$, respectively. For the simulation, we consider a free induction decay experiment in a qubit with frequency $\omega_Q^i/2\pi=5$~GHz and $\alpha \sqrt{\langle\delta V^2\rangle}=0.3$~$\mu$eV. We also use $R=l_z=\tau/2=20$~nm, and $\epsilon_r=\epsilon_c$. } \end{figure} We now discuss the coherence time of these hole spin qubits. We examine a free induction decay experiment, where the spin is prepared in a superposition of the qubit eigenstates and left idle. In this case, $1/f$ charge noise causes random fluctuations of the electric potential, with spectral function $S(\omega)=\langle\delta V^2\rangle/|\omega|$. We consider here magnetic fields aligned to the main confinement axes $i=(x,y,z)$. Because of the dependence of $g$ on the external electric field, the charge noise causes dephasing, with decay rate~\cite{MAKHLIN2004315,bosco2021squeezed,bosco2020hole} \begin{equation} \label{eq:t2star} \frac{1}{T_2^*}\approx \frac{\omega_Q^i}{\sqrt{2\pi}} \frac{1}{g_{ii}} \frac{\partial g_{ii}}{\partial V} \sqrt{ \langle \delta V^2\rangle} \ . \end{equation} We neglect small logarithmic corrections caused by the divergence of the spectral function at low frequency~\cite{MAKHLIN2004315}. The qubit frequency $\omega_Q^i=g_{ii}\mu_B B_i/\hbar$ depends on the external magnetic and electric fields, and we restrict our analysis to microwave frequencies $\omega_Q^i/2\pi\sim 1-20$~GHz. To estimate the sensitivity of the $g$-factor to the potential fluctuations, we assume that the noise comes from the electrodes, such that ${\partial g_{ii}}/{\partial V} \approx \alpha\delta g_{ii}^x /\hbar\omega_\varphi$, see Eq.~\eqref{eq:QD-effective-theory}. Here, $\alpha\sim 0.1-0.5$ is the lever arm of the gate~\footnote{If $\mu$ is the chemical potential of the dot, $\partial_V g_{ii} = \alpha \partial_\mu g_{ii}\approx \alpha\, \delta g^x_{ii}/\hbar\omega_\varphi $. We use here the definition of the lever arm $\alpha=\partial\mu/\partial V$~\cite{burkard2021semiconductor} and of $\delta g^x_{ii}=\hbar\omega_\varphi\partial_{E_x} g_{ii}/eR$, and we assume that the variations of the chemical potential are caused only by $E_x$, i.e. $\Delta\mu\approx eE_xR$.}, whose precise value depends on the device design, resulting in the typical values $\alpha \sqrt{\langle\delta V^2\rangle}\sim 0.1-10$~$\mu$eV~\cite{Yonedaquantumdotspinqubit2018,burkard2021semiconductor}. In Fig.~\ref{fig:deph}, we show the decay time $T_2^*$ of a typical hole spin qubit in a thin CQW as a function of the applied electric field. When the magnetic field is applied in the $(x,y)$ plane, $T_2^*$ is generally in the $\mu$s range, and its precise value depends on $E_x$ and on the strain $\epsilon_z$. In particular, when $\textbf{B}\parallel E_x$ the strain dependence of $T_2^*$ is to good approximation negligible, and $T_2^*$ increases monotonically with $E_x$. In contrast, when $\textbf{B}\parallel y$, the electric field dependence is weaker, and $T_2^*$ is strongly affected by $\epsilon_z$. The decrease of $T_2^*$ with increasing strain originates from the reduced value of $g_{yy}$ at large $\epsilon_z$ and large $E_x$, see Eq.~\eqref{eq:t2star}. For example, at $\epsilon_z=0.33\epsilon_c$ and $E_x= \hbar\omega_\varphi/eR\approx 0.13$~V/$\mu$m at $R=20$~nm, one finds $T_2^*\approx 750$~ns for the parameters considered. This dephasing time can be further improved by echo sequences~\cite{PhysRevLett.100.236802,Bluhm2011}.
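To make the magnitude of Eq.~\eqref{eq:t2star} explicit, it is useful to spell out one illustrative case (the numbers are those of Fig.~\ref{fig:deph}, combined with a sensitivity of order one that we assume for simplicity): with $\omega_Q^i/2\pi=5$~GHz, $\alpha\sqrt{\langle\delta V^2\rangle}=0.3~\mu$eV, $\hbar\omega_\varphi=2.5$~meV, and $|\delta g^x_{ii}/g_{ii}|\approx 1$,
\begin{equation*}
\frac{1}{T_2^*}\approx\frac{2\pi\times 5~\text{GHz}}{\sqrt{2\pi}}\times\frac{0.3~\mu\text{eV}}{2.5~\text{meV}}\approx 1.5~\mu\text{s}^{-1} \ ,
\end{equation*}
i.e. $T_2^*\approx 0.7~\mu$s, consistent with the scale of the dephasing times shown in Fig.~\ref{fig:deph}.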
Strikingly, the flatness of the $g$-factor in the $z$-direction as a function of $E_x$ results in rather long coherence times of qubits in strained CQWs, with an enhanced value of $T_2^*$ when $E_x\gtrsim 2 \hbar\omega_\varphi/eR\approx 0.26$~V/$\mu$m at $R=20$~nm. At low values of $\epsilon_z$, the small value of $g_{zz}\propto\tilde{\epsilon}_z$ at large $E_x$ reduces $T_2^*$. Because $g_{zz}$ is only weakly dependent on the quantum dot length $l_z$, see Fig.~\ref{fig:gfact_SQD}a), this enhancement occurs also in long quantum dots. We emphasize that, in contrast to alternative proposals, where the lifetime of the qubit is enhanced only at fine-tuned sweet spots~\cite{bosco2020hole, Wang2021,Malcok2022}, in thin Ge CQWs the qubit is to good approximation insensitive to charge noise in a wide range of $E_x$, thus enabling highly coherent qubits; such a low sensitivity to charge noise addresses a major issue in state-of-the-art hole spin quantum processors~\cite{froning2020ultrafast}. \subsection{Driving the qubit} \label{sec:driving} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{SQD-drive_er11.pdf} \caption{\label{fig:drive_SQD} Driving of a hole spin qubit in a thin Ge CQW by an ac electric field $E_y(t)$. In a) and b), we show the non-zero elements $\delta g_{yx}^y$ and $\delta g_{xy}^y$ of the matrix $\delta \underline{g}^y$ as a function of the dc electric field $E_x$, see Eq.~\eqref{eq:QD-effective-theory}. With blue, black, and red curves, we show different values of the longitudinal strain, $\epsilon_z=0$, $\epsilon_z=\epsilon_r/10$, and $\epsilon_z=\epsilon_r/3$, respectively. We use $\tau=R/2$, $l_z=R$ and $\epsilon_r=\epsilon_c$. The dots, the solid and the dashed lines represent results obtained by Eqs.~\eqref{eq:Hamiltonian},~\eqref{eq:SQD_theory}, and~\eqref{eq:delta_g_app}, respectively. For the solid and dashed lines, we use the two lowest Kramers partners at $E_x=0$. The green lines show the results obtained by simulating a Ge/Si core/shell nanowire with $\epsilon_z=\pi^2\hbar\omega_\varphi/4$, where this effect is negligible. } \end{figure} We now discuss two qualitatively different mechanisms to drive the hole spin qubit. First, we note that in short quantum dots, where $l_z\lesssim R$, shaking the hole wavefunction along the quantum well~\cite{PhysRevB.74.165319} does not result in ultrafast Rabi oscillations. As discussed in Sec.~\ref{sec:Elongated-QDs}, this mechanism is more convenient in long quantum dots because the Rabi frequency $\omega_R\propto l_z^4$~\cite{bosco2021squeezed}, see Eq.~\eqref{eq:Rabi_EQD}. However, as shown in Fig.~\ref{fig:band}b), in a CQW the electric field $E_x$ confines the hole to the top of the cross-section, and thus fast Rabi oscillations are enabled when the hole is periodically driven in the angular direction by an ac field $E_y(t)=E_y^{ac}\sin(\omega_D t)$ perpendicular to the dc field $E_x$. These spin transitions are parametrized by the matrix $\delta\underline{g}^y$ of effective $g$-tensors, see Eq.~\eqref{eq:QD-effective-theory}, and they are fastest when the dc confinement energy $eE_xR$ is comparable to the SOI gap $\Delta$; for this reason, we restrict ourselves to the analysis of moderately weak electric fields.
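To quantify what moderately weak means here (a rough estimate on our part), the condition $eE_xR\sim\Delta$ with $\Delta\approx 1$~meV and $R=20$~nm corresponds to $E_x\sim 0.05$~V/$\mu$m, well below the field $E_x\approx 1$~V/$\mu$m required to open the large orbital gap $\hbar\omega_E$ of Eq.~\eqref{eq:omegaE}.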
By including the small potential $-eE_y(t)R\sin(\varphi)$ in Eq.~\eqref{eq:gs-SQD_theory} and by using second-order perturbation theory, we find that the driving term is $\propto E_y(t)(\sigma_x\delta g_{xy}^yB_y +\sigma_y\delta g_{yx}^yB_x)$, with \begin{subequations} \label{eq:delta_g_app} \begin{align} \delta g_{xy}^y&=3\kappa \sin(\theta_E) \left[ \frac{\hbar\omega_\varphi}{E_g}q_0+ \frac{m_\varphi\gamma_1}{m}\Big(q_0+\tilde{\epsilon}_z \cos(\theta_E)\Big)\right] \ , \\ \delta g_{yx}^y&=3\kappa \sin(\theta_E) \left[ \frac{\hbar\omega_\varphi}{E_g}\tilde{\epsilon}_z+ \frac{m_\varphi\gamma_1}{m}\Big(\tilde{\epsilon}_z +q_0\cos(\theta_E)\Big)\right] \ . \end{align} \end{subequations} We show these terms in Fig.~\ref{fig:drive_SQD}, and in particular we highlight their dependence on $E_x$ and on the longitudinal strain $\epsilon_z$, which can be designed via the thickness of the outer Si shell, see Eq.~\eqref{eq_strain_pars}. We note that at small values of $E_x$, Eq.~\eqref{eq:delta_g_app} approximates well the driving term $\delta g_{xy}^y$ when $\textbf{B}\parallel y$, but it is quantitatively inaccurate for $\delta g_{yx}^y$ when $\textbf{B}\parallel x$, especially at small $\epsilon_z$. The latter term is generally smaller, and thus to enhance the Rabi frequency it is convenient to align $\textbf{B}$ with the ac drive. Moreover, $\epsilon_z$ strongly speeds up the driving, in sharp contrast to elongated dots in Ge/Si core/shell nanowires, where the Rabi frequency at small electric fields is $\propto 1/\epsilon_z$~\cite{DRkloeffel1,DRkloeffel2,DRkloeffel3}. We also note that while in principle a similar driving mechanism can also occur in core/shell nanowires, in the parameter range considered the terms $\delta g_{ij}^y$ are negligible there. When $\textbf{B}\parallel z$, this effect vanishes. We now estimate the frequency $\omega_R$ of the Rabi oscillations generated by these driving terms when $\textbf{B}$ is aligned to the $i=\{x, y\}$ directions. In this case, the spin states are split by $\hbar \omega_Q^i=g_{ii} \mu_B B_i$, and when the qubit is in resonance with the drive, i.e. $\omega_D=\omega_Q^i$, we find \begin{equation} \label{eq:Rabi_new-SQD} \frac{\omega_R^i}{2\pi}=\frac{\omega_D}{2\pi} \frac{eE_y^{ac}R}{2\hbar\omega_\varphi}\frac{\delta g_{j i}^y}{g_{ii}}\approx 0.2~\text{GHz}\times\frac{\delta g_{j i}^y}{g_{ii}} \ . \end{equation} The numerical prefactor is obtained by considering a CQW with radius $R=20$~nm, such that $\hbar\omega_\varphi=2.5$~meV, and the typical experimental values $\omega_D/2\pi= 5$~GHz and $E_y^{ac}=10$~mV/$\mu$m~\cite{froning2020ultrafast}. For quantum wells with larger radius, this factor increases as $R^3$, but the energy gap at $E_x=0$ also decreases as $1/R^2$. By comparing Figs.~\ref{fig:gfact_SQD} and~\ref{fig:drive_SQD}, we also observe that ${\delta g_{y x}^y}\lesssim g_{xx}$ when $\textbf{B}\parallel x$, and ${\delta g_{ xy}^y}\gtrsim g_{yy}$ when $\textbf{B}\parallel y$. For example, in a strongly strained device with a thick outer shell, at $E_x= \hbar\omega_\varphi/eR\approx 0.13$~V/$\mu$m and $R=20$~nm, one obtains $\delta g_{xy}^y/g_{yy}\approx 6$, resulting in $\omega_R^y/2\pi\approx 1.2$~GHz. At this value of $E_x$, we also find $g_{yy}=1.35$, such that $\omega_Q^y/2\pi=5$~GHz at $B=0.25$~T, and the subband energy gap is $E_g=2.3$~meV.
At this working point, we also expect a dephasing time of a few hundreds of nanoseconds, see Fig.~\ref{fig:deph}, much longer than the spin-flipping time, thus enabling highly coherent and ultrafast qubit operations at low power. \\ The $g$-factor can also be modulated by an ac field $E_x(t)=E_x^{ac}\cos(\omega_D t)$ applied in the $x$-direction, resulting in the variation $\delta \underline{g}^x$ of the $g$-factors in Eq.~\eqref{eq:g-tens_app}: \begin{subequations} \label{eq:delta_g_app_Ex} \begin{align} \frac{\delta g_{xx}^x}{3\kappa \sin(\theta_E)}&= \frac{\hbar\omega_\varphi}{E_g}\tilde{\epsilon}_z\cos(\theta_E)+ \frac{m_\varphi\gamma_1}{2m}\Big(3+ \cos(2\theta_E)\Big)q_0\ , \\ \frac{\delta g_{yy}^x}{3\kappa \sin(\theta_E)}&= \frac{\hbar\omega_\varphi}{E_g}q_0\cos(\theta_E)+ \frac{m_\varphi\gamma_1}{2m}\Big(3+ \cos(2\theta_E)\Big)\tilde{\epsilon}_z \ , \\ \delta g_{zz}^x&=\frac{m}{2m_\varphi}\frac{\hbar\omega_\varphi}{E_g}\sin(2\theta_E)\ . \end{align} \end{subequations} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{SQD-drive_gmod_er1.pdf} \caption{\label{fig:drive_SQD_gmod} Driving of a hole spin qubit in a thin Ge CQW by $g$-tensor modulation via an ac electric field $E_x^{ac}$. In a) and b), we show the Rabi frequency as a function of the direction of the applied magnetic field for two different values of the dc field $E_x$. The results shown here use Eq.~\eqref{eq_rabi-gmod} with parameters obtained by discretizing the Hamiltonian Eq.~\eqref{eq:Hamiltonian}. The prefactor of $\omega_R$ is $200$~MHz in typical dots, see Eq.~\eqref{eq:Rabi_new-SQD}. } \end{figure} When the magnetic field is aligned with the $x$, $y$, or $z$ direction, these terms do not induce spin transitions and only modulate the qubit energy. However, at arbitrary orientations of $\textbf{B}$, Rabi oscillations can be induced by making use of the tunable anisotropy of the $g$-factor~\cite{doi:10.1126/science.1080880,PhysRevLett.120.137702,doi:10.1063/1.4858959,PhysRevApplied.16.054034,VenitucciElectricalmanipulationsemiconductor2018}. In particular, when the qubit energy $\hbar\omega_B=\mu_B\sqrt{g_{xx}^2B_x^2+g_{yy}^2B_y^2+g_{zz}^2B_z^2}$ is at resonance with $\omega_D$, the Rabi frequency induced by $E_x^{ac}$ is \begin{equation} \label{eq_rabi-gmod} \frac{\omega_R}{2\pi}=\frac{\omega_D}{2\pi} \frac{eE_x^{ac}R}{2\hbar\omega_\varphi}\left|\frac{\delta \underline{g}^x\cdot \textbf{B}}{\hbar\omega_B/\mu_B}-\frac{(\delta \underline{g}^x\cdot \textbf{B})\cdot(\underline{g}\cdot \textbf{B})}{\hbar^3\omega_B^3/\mu_B^3} (\underline{g}\cdot \textbf{B}) \right| \ . \end{equation} In Fig.~\ref{fig:drive_SQD_gmod}, we analyze the dependence of this driving mechanism on the direction of the magnetic field for different values of the dc field $E_x$. We consider here an arbitrary field $\textbf{B}=B (\cos(\theta_B)\sin(\varphi_B),\cos(\theta_B)\cos(\varphi_B),\sin(\theta_B))$, see Fig.~\ref{fig:sketch}. We observe that $\omega_R=0$ when $\textbf{B}$ is aligned with a confinement axis, but in certain parameter regimes it becomes comparable to the values obtained by driving with $E_y^{ac}$. Comparing Fig.~\ref{fig:drive_SQD_gmod}a) to Fig.~\ref{fig:gfact_SQD}, we note that when $\textbf{B}$ is slightly misaligned from the $z$-direction, a large $\omega_R$ can be reached close to the value of $E_x$ where $g_{zz}$ vanishes; in particular, at $E_x=0.4\hbar\omega_\varphi/eR$, $g_{zz}\approx 1$.
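Eq.~\eqref{eq_rabi-gmod} states that only the component of $\delta\underline{g}^x\cdot\textbf{B}$ transverse to the qubit axis $\underline{g}\cdot\textbf{B}$ drives transitions. A minimal Python sketch of this expression, assuming diagonal tensors and a placeholder value for the dimensionless drive ratio $eE_x^{ac}R/2\hbar\omega_\varphi$:
\begin{verbatim}
import numpy as np

def rabi_gmod(dg, g, B, f_D=5e9, drive=0.02):
    """Rabi frequency [Hz] from g-tensor modulation, Eq. (eq_rabi-gmod).

    dg, g : 3x3 tensors delta_g^x and g (here assumed diagonal)
    B     : magnetic field vector [T]
    drive : e*E_x^ac*R / (2*hbar*omega_phi), a placeholder value
    """
    h = g @ B                        # g.B sets the qubit axis
    dh = dg @ B                      # delta_g^x . B
    n = h / np.linalg.norm(h)        # unit vector along the qubit axis
    transverse = dh - (dh @ n) * n   # only this part drives transitions
    return f_D * drive * np.linalg.norm(transverse) / np.linalg.norm(h)

# B along a confinement axis gives omega_R = 0, as stated in the text:
g = np.diag([2.0, 1.35, 1.0]); dg = np.diag([0.5, 0.3, 0.1])
print(rabi_gmod(dg, g, np.array([0.25, 0.0, 0.0])))   # 0.0
\end{verbatim}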
Away from these sweet spots in both electric field and magnetic field direction, $\omega_R$ is strongly reduced, and the optimal direction of $\textbf{B}$ changes, see Fig.~\ref{fig:drive_SQD_gmod}b). In contrast, when the qubit is driven by $E_y^{ac}$, the optimal values of $\omega_R$ occur at the fixed direction $\textbf{B}\parallel y$ and persist in a wide range of $E_x$. For this reason, the $E_y^{ac}$ driving is more convenient in experiments comprising chains of quantum dots in the CQW subjected to a global fixed $\textbf{B}$ field, and we focus on it in the following. \subsection{Planar Ge curved quantum wells} \label{sec:planar} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{SQD-g-half.pdf} \caption{\label{fig:drive_SQD_half} Hole spin qubit in a planar thin shell quantum dot. In a), we show the $|g|$ factor as a function of the electric field applied perpendicular to the substrate and in b), we show the driving term $\delta g_{yx}^y$, obtained by an ac field $E_y(t)$ aligned with the magnetic field $\textbf{B}$. For typical experimental parameters the Rabi frequency is $\omega_R/2\pi\approx 200$~MHz$\times \delta g_{yx}^y/g_{yy}$. For the plots, we consider $l_z=R=\tau/2=20$~nm, see Fig.~\ref{fig:sketch}b), and $\epsilon_r=\epsilon_c$. The values of $\epsilon_z$ are given in units of $\epsilon_c$. } \end{figure} Similar qubits can also be designed in planar systems, providing a technologically competitive and scalable architecture for hole-based quantum computers. A possible example of a planar CQW is sketched in Fig.~\ref{fig:sketch}b) and can be manufactured by growing a Ge quantum well over a Si substrate. Because the spin qubit proposed in Sec.~\ref{sec:SQD} works well at $E_x\approx\hbar\omega_\varphi/eR$, where the hole wavefunction is confined to the top half of the shell, see Fig.~\ref{fig:band}b), we expect an analogous behaviour in a spin qubit defined in planar CQWs. In Fig.~\ref{fig:drive_SQD_half}, we show the results of a numerical simulation of the Hamiltonian in Eq.~\eqref{eq:Hamiltonian} defined in the planar CQW sketched in Fig.~\ref{fig:sketch}b). In Fig.~\ref{fig:drive_SQD_half}a), we show the $g$-factor of a quantum dot of length $l_z=R$ as a function of the electric field $E_x$ perpendicular to the substrate. We observe that when the magnetic field is perpendicular to the well, there is a critical electric field above which the behaviour of $g_{xx}$ and $g_{yy}$ as a function of $E_x$ resembles the one shown in Fig.~\ref{fig:gfact_SQD}b). The position of this critical electric field depends on the longitudinal strain $\epsilon_z$ and is shifted to more negative values as $\epsilon_z$ increases. Moreover, in analogy to annular CQWs, $g_{zz}$ is a rather flat function of $E_x$ and increases with $\epsilon_z$, approaching the high field limiting value $g_{zz}\approx 6 \kappa \tilde{\epsilon}_z\cos(\theta_z)$. These results indicate that qubits in planar CQWs can be as insensitive to charge noise as qubits in annular CQWs. In analogy to Fig.~\ref{fig:drive_SQD}b), we also find that these qubits can be driven fast by an ac electric field $E_y(t)$ applied parallel to the substrate. In particular, we observe that the driving strength can easily exceed the GHz range and becomes even stronger than in annular CQWs as the dc electric field $E_x$ decreases. At lower values of $E_x$, however, the subband gap $E_g$ separating computational and non-computational states also decreases. In the range of parameters examined here, $E_g\gtrsim 0.5$~meV.
We note that at $E_x\lesssim -2.5$~V/$\mu$m, $E_g$ drops to zero and one obtains two degenerate quantum dots at the two edges of the quantum well. We envision protocols where these two quantum dots can be exchange-coupled and used to perform fast read-out schemes in analogy to the corner dots in Si FETs~\cite{Voisin2014,doi:10.1063/1.4950976,PhysRevX.10.041010}, but we do not investigate these intriguing possibilities here. \section{Spin qubits in long quantum dots} \label{sec:Elongated-QDs} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{EQD} \caption{\label{fig:SOI_EQD} Electric field dependence of subband energy gap and Rabi frequency in long quantum dots. We compare thin Ge CQWs (blue) and Ge/Si core/shell nanowires (red). The lines are obtained by the effective theory in Eq.~\eqref{eq:long-dot-model}. The dots show a numerical simulation of a three-dimensional quantum dot obtained by discretizing the Hamiltonian~\eqref{eq:Hamiltonian} with an annular (cylindrical) cross section, a harmonic potential along $z$, and with $\epsilon_r=3\epsilon_z=3\epsilon_c$ ($\epsilon_z=\pi^2\hbar\omega_\varphi/4$) for the CQW (core/shell nanowire). We use $\tau=R/2$ and $l_z=3R$. In a), we show the energy gap between the ground and first excited doublets. The dashed blue line is obtained by Eq.~\eqref{eq:gap-corrected} and the dashed black line shows $\bar{\omega}_z$. In b), we show the Rabi frequency $\omega_R$ at resonance, see Eqs.~\eqref{eq:Rabi_EQD},~\eqref{eq:SOI-long}. For realistic experimental parameters $\omega_D/2\pi=5$~GHz, $E_z^{ac}=10$~mV/$\mu$m, $l_z=30$~nm and $\hbar\omega_z=1.1$~meV, the prefactor of the Rabi frequency is $\omega_D eE_z^{ac}l_z/\hbar\omega_z\approx 1.4$~GHz. For the Ge/Si core/shell nanowire, we use the mass ${m}_z=0.06 m$ and the SOI velocity $v=2\times 0.15\times 6.5 eE_xR/\Delta_\text{c/s}$, with gap $\Delta_\text{c/s}= 0.5 \epsilon_z$. } \end{figure} When $l_z\gg R$, the longitudinal confinement energy $\hbar\omega_z=\hbar^2\gamma_1/ml_z^2$ is much smaller than the angular momentum quantization energy $\hbar\omega_\varphi$. However, because of longitudinal strain, $\hbar\omega_z$ can still be comparable to the SOI energy $\Delta$, which is smaller than $\hbar\omega_\varphi$ by a factor $3\tilde{\kappa}/2\gamma_1\sim 0.5-0.15$, see Eq.~\eqref{eq:delta}. For this reason, a simple description of the system for moderate electric fields is obtained by projecting the effective theory in Eqs.~\eqref{eq:effective_H} and~\eqref{eq:E-field_H} onto the low-energy states $|g_{1,2}^{l=1}\rangle$ and $|e_{1,2}^{l=1}\rangle$ in Eq.~\eqref{eq:eigenstates_pz0}, resulting in the one-dimensional Hamiltonian \begin{equation} \label{eq:long-dot-model} H_\text{LD}=\frac{p_z^2}{2m_z}-\frac{\Delta}{2} \lambda_z-\frac{eE_xR}{2} \lambda_x+v_z p_z \lambda_x\sigma_y+\frac{m_z\bar{\omega}_z^2}{2}z^2 \ , \end{equation} where $\lambda_{x,y,z}$ and $\sigma_{x,y,z}$ are Pauli matrices acting on the orbital and pseudospin subspaces, respectively. In this regime, the Ge CQW mimics a Ge/Si core/shell nanowire, and Eq.~\eqref{eq:long-dot-model} is qualitatively analogous to the well-known model discussed in detail in Refs.~\cite{DRkloeffel1,DRkloeffel2,DRkloeffel3,adelsberger2021hole}. The direct Rashba SOI velocity~\cite{DRkloeffel3} \begin{equation} \label{eq:SOI-long} v= \frac{eE_xR}{\sqrt{\Delta^2+(eE_xR)^2}}v_z \ , \end{equation} is particularly important for spin qubits.
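Since $v$ controls both the qubit drive and the gap renormalization discussed below, it is useful to diagonalize Eq.~\eqref{eq:long-dot-model} directly. The following finite-difference sketch is a minimal illustration with assumed SI inputs; it is not the code used for Fig.~\ref{fig:SOI_EQD}.
\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34  # [J s]

def doublet_gap(Delta, eExR, m_z, v_z, omega_z, L=200e-9, N=200):
    """Gap [J] between the two lowest Kramers doublets of H_LD,
    Eq. (eq:long-dot-model), by finite differences on a real-space grid."""
    z, dz = np.linspace(-L / 2, L / 2, N, retstep=True)
    # kinetic energy and harmonic confinement along z
    T = (HBAR**2 / (2 * m_z * dz**2)) * (2 * np.eye(N)
                                         - np.eye(N, k=1) - np.eye(N, k=-1))
    U = np.diag(0.5 * m_z * omega_z**2 * z**2)
    # p_z as a Hermitian central difference
    P = (-1j * HBAR / (2 * dz)) * (np.eye(N, k=1) - np.eye(N, k=-1))
    s0, sx = np.eye(2), np.array([[0, 1], [1, 0]])
    sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])
    # ordering: orbital (lambda) x pseudospin (sigma) x position
    H = (np.kron(np.kron(s0, s0), T + U)
         - 0.5 * Delta * np.kron(np.kron(sz, s0), np.eye(N))
         - 0.5 * eExR * np.kron(np.kron(sx, s0), np.eye(N))
         + v_z * np.kron(np.kron(sx, sy), P))
    E = np.linalg.eigvalsh(H)   # each level is two-fold degenerate
    return E[2] - E[0]
\end{verbatim}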
In fact, in EDSR experiments~\cite{PhysRevB.74.165319,froning2020ultrafast,bosco2021squeezed} where an ac field $E_z^{ac}\cos(\omega_D t)$ is applied along the quantum well, $v$ is directly related to the Rabi frequency $\omega_R$ by \begin{equation} \label{eq:Rabi_EQD} \frac{\omega_R}{2\pi}= \frac{\omega_D}{2\pi} \frac{v eE_z^{ac}}{\hbar\bar{\omega}_z^2} \ . \end{equation} Moreover, the subband energy gap $\Delta$ is renormalized by $E_x$ and by the SOI velocity $v_z$, and it is approximately corrected as \begin{equation} \label{eq:gap-corrected} \Delta\to E_g\approx e^{-\frac{m_z^2 v_z^2 \bar{l}_z^2}{\hbar^2[\Delta^2+(eE_xR)^2]}} \sqrt{\Delta^2+(eE_xR)^2} \ . \end{equation} Eq.~\eqref{eq:SOI-long} and the square root in Eq.~\eqref{eq:gap-corrected} are obtained by rewriting Eq.~\eqref{eq:long-dot-model} in the basis that diagonalizes $H_\text{LD}$ at $p_z=0$. The overall Gaussian suppression of the subband gap is most relevant at weak electric field values. It can be derived by first removing the direct couplings between the ground state and the first excited states by performing a spin and orbital dependent shift $e^{- i m_z v_z z \lambda_x\sigma_y/ \hbar \sqrt{\Delta^2+(eE_xR)^2}}$ and then by averaging the resulting potential $ \sqrt{\Delta^2+(eE_xR)^2} \cos\left[2 m_z v_z z/\hbar \sqrt{\Delta^2+(eE_xR)^2}\right]$ over the harmonic oscillator ground state. This correction closely resembles the SOI-induced $g$-factor renormalization in nanowires with a large SOI~\cite{PhysRevB.98.165403, adelsberger2021hole,DRkloeffel2, PhysRevLett.127.190501}, and in the short dot case it approaches Eq.~\eqref{eq:Eg_SQD}. As shown in Fig.~\ref{fig:SOI_EQD}, the simple approximate Eqs.~\eqref{eq:SOI-long} and~\eqref{eq:gap-corrected} are in reasonable agreement with a complete three-dimensional simulation of a quantum dot even when the dot is moderately short and $E_g\sim \hbar\bar{\omega}_z$. There are some noteworthy quantitative differences between CQWs and core/shell nanowires that significantly impact the performance of the spin qubit; these differences are highlighted in Fig.~\ref{fig:SOI_EQD}. First, as discussed in Sec.~\ref{sec:Effective-theory}, because of strain the energy gap $\Delta$ is much smaller in a CQW than in core/shell nanowires, where $\Delta_\text{c/s}\approx 0.5 \epsilon_z\sim 10$~meV. For this reason, in moderately long quantum dots defined in a CQW the energy gap to the first excited doublets is $E_g$ at $E_x=0$ and it approaches $\hbar\omega_z$ only at finite values of $E_x$. Moreover, the dipole energy $-e E_x R\lambda_x/2$ in Eq.~\eqref{eq:long-dot-model} is 3.3 times larger in a CQW than in a core/shell nanowire, where this term is $-0.15 e E_x R\lambda_x$~\cite{DRkloeffel2}. The significantly different ratio of dipole energy and subband gap strongly impacts the dependence of the Rabi frequency $\omega_R$ on $E_x$, see Fig.~\ref{fig:SOI_EQD}b). In fact, in a CQW the optimal direct Rashba SOI is obtained at values of $E_x$ that are $\sim 10$ times smaller than in core/shell nanowires, and $v$ remains constant over a wide range of $E_x$, enabling ultrafast qubit operations at low power even at small $E_x$. We also remark that, as discussed in Sec.~\ref{sec:decoh}, in strained CQWs at sufficiently large $E_x$, when the magnetic field is applied along the quantum well, the $g$-factor is to good approximation independent of $E_x$, suppressing the sensitivity of the spin to charge noise and significantly boosting the coherence time and fidelity of these qubits.
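A minimal sketch of Eqs.~\eqref{eq:SOI-long} and~\eqref{eq:Rabi_EQD}, together with a check of the $1.4$~GHz Rabi prefactor quoted in the caption of Fig.~\ref{fig:SOI_EQD} (all numbers are the illustrative values from the text):
\begin{verbatim}
import numpy as np
import scipy.constants as sc

def soi_velocity(eExR, Delta, v_z):
    """Direct Rashba SOI velocity v of Eq. (eq:SOI-long)."""
    return eExR / np.hypot(Delta, eExR) * v_z

def rabi_long_dot(v, E_ac, omega_z, f_D=5e9):
    """Resonant Rabi frequency omega_R/2pi [Hz] of Eq. (eq:Rabi_EQD)."""
    return f_D * v * sc.e * E_ac / (sc.hbar * omega_z**2)

# Prefactor quoted in the caption of Fig. (fig:SOI_EQD):
# (omega_D/2pi) e E_z^ac l_z / (hbar omega_z), with hbar*omega_z = 1.1 meV.
f_D, E_ac, l_z = 5e9, 10e-3 / 1e-6, 30e-9
hw_z = 1.1e-3 * sc.e
print(f"{f_D * sc.e * E_ac * l_z / hw_z / 1e9:.1f} GHz")  # ~1.4 GHz
\end{verbatim}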
\section{Strong spin-photon coupling} \label{sec:spin-photon} The large dipole moment of these quantum dots and their large SOI make this architecture well suited for strongly coupling the hole spin qubit to superconducting resonators. One viable approach is to couple a long quantum dot to a high-impedance resonator by shaking the dot along the smooth confinement direction. In analogy to Ge/Si core/shell nanowires~\cite{DRkloeffel2,bosco2022fully}, this approach enables a strong spin-photon interaction in a single quantum dot, and it is especially appealing in our system when the magnetic field is applied along the $z$-direction, where the qubit is insensitive to charge noise, see Sec.~\ref{sec:decoh}. In this case, the large direct Rashba SOI linear in momentum $p_z$, see Eqs.~\eqref{eq:long-dot-model} and~\eqref{eq:SOI-long}, enhances the strength of the spin-photon interaction compared to alternative proposals based on cubic SOI~\cite{PhysRevB.102.205412}, potentially resulting in orders of magnitude larger coupling strengths~\cite{bosco2021squeezed}. However, this approach requires a plunger gate misaligned from the center of the quantum dot, reducing the geometric lever arm between the electrode and the dot, and potentially risking the screening of the driving gate by the electrode defining the dot. In contrast, we focus here on a different setup that makes use of the alternative driving mechanism discussed in Sec.~\ref{sec:driving}, where a short quantum dot is shaken in the angular direction. Because this approach relies on a driving field applied in the $y$-direction, perpendicular to the smooth confinement, the plunger electrode can be aligned to the center of the dot, enhancing the lever arm and potentially enabling higher coupling strengths. While we now restrict our analysis to short quantum dots in annular CQWs, we emphasize that the results shown here are valid also for planar CQWs, see Sec.~\ref{sec:planar}. We remark that our system only requires a \textit{single short quantum dot}, in contrast to different approaches in hole systems where the dipole moment is enlarged by delocalizing the hole over more dots~\cite{PhysRevResearch.3.013194}. State-of-the-art high-impedance resonators can be made rather resilient against magnetic fields, with quality factors $Q\sim 10^5$ at the small fields $B\lesssim 0.5$~T considered here, and at the same time they reach rather high values of the zero-point-fluctuation potential $V_\text{ZPF}=e\omega_C\sqrt{\hbar Z}\sim 10-100$~$\mu$eV~\cite{PhysRevApplied.5.044004,PhysRevApplied.11.044014,Grunhaupt2019,Maleeva2018,PhysRevLett.121.117001}. Here, $Z\sim 1-10$~k$\Omega$ is the characteristic impedance of a cavity with resonant frequency $\omega_C/2\pi\approx 5$~GHz. The Hamiltonian describing this cavity is $H_C=\hbar\omega_C a^\dagger a$, where $a$ and $a^\dagger$ are bosonic ladder operators annihilating and creating a microwave photon in the resonator. At the antinode, the quantized electric potential of a single boson is $\hat{V}=V_\text{ZPF} (a^\dagger+a)$~\cite{girvin2014circuit,RevModPhys.93.025005}. If the plunger electrode is connected to the antinode of the resonator instead of an external power source, then the qubit Hamiltonian in Eq.~\eqref{eq:QD-effective-theory} is still valid, while the ac drive is replaced as $eE_y(t)R\to \alpha V_\text{ZPF} (a^\dagger +a)$, where $\alpha$ is the lever arm of the electrode.
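To make the coupling explicit, the qubit-resonator system can be written in a truncated Fock space. The sketch below is schematic and anticipates the transverse interaction $\nu(a^\dagger+a)\sigma_x$ derived next; the frequencies and the value of the coupling are placeholders.
\begin{verbatim}
import numpy as np

def spin_photon_hamiltonian(f_Q, f_C, nu, n_max=10):
    """H/h [Hz] for a qubit coupled to one cavity mode:
    f_C a^dag a + (f_Q/2) sigma_z + nu (a^dag + a) sigma_x,
    with the drive replaced by the quantized potential of the text."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)  # photon annihilation
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.diag([1.0, -1.0])
    I2, IC = np.eye(2), np.eye(n_max)
    return (f_C * np.kron(I2, a.T @ a) + 0.5 * f_Q * np.kron(sz, IC)
            + nu * np.kron(sx, a + a.T))

# At resonance f_Q = f_C, the two lowest excited states are split by
# ~2*nu (vacuum Rabi splitting).
E = np.linalg.eigvalsh(spin_photon_hamiltonian(5e9, 5e9, 50e6))
print((E[2] - E[1]) / 1e6, "MHz")   # ~100 MHz
\end{verbatim}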
We neglect here the variations of the quantum dot size caused by the gate and assume that the electrode only produces an electric field $E_y$. To maximize the qubit-resonator interactions, we consider $\textbf{B}\parallel y$, resulting in the coupling Hamiltonian $H_\text{int}=\nu (a^\dagger+a)\sigma_x$, with interaction strength \begin{equation} \frac{\nu}{2\pi}= \frac{\omega_B}{2\pi} \frac{\alpha V_\text{ZPF}}{2\hbar\omega_\varphi}\frac{\delta g_{xy}^y}{g_{yy}} \ . \end{equation} By considering as in Eq.~\eqref{eq:Rabi_new-SQD} a strained CQW with radius $R=20$~nm, $\omega_B/2\pi= 5$~GHz and $E_x=0.13$~V/$\mu$m, such that $\delta g_{xy}^y/g_{yy}\approx 6$, $g_{yy}=1.35$ and $B=0.25$~T, we find $\nu\approx 50$~MHz for the realistic values of the lever arm $\alpha=0.4$ and $V_\text{ZPF}= 20$~$\mu$eV~\cite{PhysRevApplied.5.044004}. This interaction strength is comparable to that reported in charge qubits~\cite{PhysRevX.7.011030} and in spin qubits defined in multiple quantum dots~\cite{mi2018coherent,Landig2018}. Moreover, we note that this system is well within the strong coupling regime. In fact, $\nu$ is about 40 times larger than the dephasing rate $1/T_2^*\approx 1.3$~MHz of the qubit, see Sec.~\ref{sec:decoh} and Fig.~\ref{fig:deph}, and three orders of magnitude larger than the decay rate of the photon in state-of-the-art cavities, $\omega_C/2\pi Q\approx 50 $~kHz. We emphasize that the values of $\nu$ reported here can be further optimized in different ways. Larger values of the ratio $\delta g_{xy}^y/(g_{yy}\omega_\varphi)$ can be reached by tuning $E_x$, by increasing the radius $R$, and, in our device, also by optimizing the electrostatic gate design to maximize the lever arm. Moreover, a stronger coupling strength can be reached at larger cavity and qubit frequencies and higher impedances because $\nu\propto \omega_B\omega_C\sqrt{Z}$. By considering $B=1$~T~\cite{PhysRevApplied.5.044004} and by reducing the length of the resonator by a factor of $4$, one obtains a coupling more than an order of magnitude larger, $\nu/2\pi\approx 800$~MHz, at $\omega_B/2\pi\approx 20$~GHz, a frequency still compatible with microwave technology~\cite{mills2021two,zwerver2021qubits}. Moreover, resonators with a higher characteristic impedance, approaching the resistance quantum $25$~k$\Omega$, could also be conceived, e.g. by using carbon nanotubes~\cite{doi:10.1063/1.4868868,1406008,Chudow2016} or quantum Hall edge states~\cite{PhysRevB.100.035416,PhysRevApplied.12.014030, PhysRevB.96.115407,PhysRevResearch.2.043383}, further enhancing the coupling strength. The latter approach is particularly appealing for our system, because in Ge/SiGe heterostructures well-developed quantum Hall plateaus have been recently observed at magnetic fields below 1~T~\cite{lodari2021lightly}. For these reasons, in our devices strong spin-photon couplings with interaction strengths exceeding a few hundreds of MHz are realistically achievable with current state-of-the-art technology, opening up new possibilities for entangling distant qubits, as well as for high fidelity single-shot readout schemes. \section{Conclusions} In conclusion, in this work we discussed annular and planar curved quantum wells, focusing on their application for spin-based quantum information processing.
This architecture takes full advantage of the large SOI of hole nanostructures: the curvature of the cross-section enhances the electric dipole moment of the system and guarantees that the maximal value of the SOI is reached at low values of the externally applied electric field. We presented a detailed model of these devices, discussing several possible implementations and highlighting their key features and their differences from current state-of-the-art hole spin qubits, including their peculiar response to strain and electric field. Strikingly, in a wide range of electric fields, CQWs are to good approximation insensitive to charge noise, a critical issue in current devices, enabling ultrafast, highly coherent qubit gates at low power, and pushing hole spin qubits towards new speed and coherence standards. We also find that in CQWs ultrafast operations can be realized in short quantum dots, with ac driving fields perpendicular to the well. This feature enables a strong interaction between a hole spin confined in a single quantum dot and microwave photons, with interaction strengths that can realistically exceed a few hundreds of MHz with current technology. CQWs can thus relax the many technological constraints and challenges to reach the strong hole spin-photon coupling regime, and will constitute an effective building block to scale up the next generation of quantum processors. \begin{acknowledgments} We thank C. Adelsberger, B. Hetenyi, and H. Legg for useful discussions, and T. Patlatiuk and G. Katsaros for valuable comments and feedback on the manuscript. We are also grateful to G. Gadea Diez and I. Zardo for drawing our attention towards curved quantum wells. This work was supported as a part of NCCR SPIN funded by the Swiss National Science Foundation (grant number 51NF40-180604). \end{acknowledgments}
\section{Introduction} \subsection{Main results} Many financial contracts offering guaranteed minimum benefits (GMxBs) are often posed as control problems \cite{bauer2008universal}, in which the control is able to take any one of an uncountable number of values from the \emph{admissible set} at each point in its domain. For example, a contract featuring regular withdrawals may allow holders to withdraw any portion of their account. In the following, we consider a control which maximizes losses for the writer of the contract, hereafter referred to as an \emph{optimal control}. A typical example is a \emph{guaranteed minimum withdrawal benefit} (GMWB). If withdrawals are allowed at any time (i.e. ``continuously''), then the pricing problem can be formulated as a singular control \cite{milevsky2006financial,dai2008guaranteed,huang:2010,huang2013analysis} or an impulse control \cite{chen08a} problem. In practice, the contract usually specifies that the control can only be exercised at a finite number of deterministic \emph{exercise times} $t_{0}<t_{1}<\cdots<t_{N-1}$ \cite{bauer2008universal,chen2008effect}. The procedure for pricing such a contract using dynamic programming proceeds backwards from the expiry time $t_{N}$ as follows: \begin{enumerate} \item Given the solution as $t\rightarrow t_{n+1}^{-}$, the solution as $t\rightarrow t_{n}^{+}$ is acquired by solving an initial value problem. \item The solution as $t\rightarrow t_{n}^{-}$ is then determined by applying an optimal control, which is found by considering a collection of optimization problems. \end{enumerate} If, for example, a finite difference method is used to solve the initial value problem from $t_{n+1}^{-}$ to $t_{n}^{+}$, an optimal control is determined by solving an optimization problem at each grid node, in order to advance the solution to $t_{n}^{-}$. Continuing in this way, we determine the solution at the initial time. If there exists an \emph{optimal bang-bang} control, an optimal control taking on only a finite subset of values from the admissible set, the numerical algorithm simplifies considerably. The existence of such a control is a common assumption in insurance applications \cite{bacinello2011variable,ngai2011longevity,holz2012gmwb}, although no rigorous treatment is present in the literature. In this paper, we will also consider a weaker condition, a \emph{bang-bang principle}. In this case, although an optimal control does not necessarily take values in a finite subset of the admissible set, we will see that a control having this property can result in a large reduction in computational complexity. Our main result in this paper is the specification of sufficient conditions which can be used to guarantee the existence of an optimal bang-bang control. This result relies on the convexity and monotonicity of the solution and follows from a combination of basic results in convex analysis and parabolic partial differential equations (PDEs). We demonstrate our results on two common contracts in the GMxB family: \begin{itemize} \item The \emph{guaranteed lifelong withdrawal benefit} (GLWB) (a.k.a. \emph{guaranteed minimum lifelong withdrawal benefits (GMLWB)}) admits an optimal bang-bang control. In particular, we prove that a holder can maximize the writer's losses by only ever performing \begin{itemize} \item nonwithdrawal, \item withdrawal at the contract rate (i.e. never subject to a penalty), or \item a full surrender (i.e. maximal withdrawal; may be subject to a penalty).
\end{itemize} \item On the other hand, the \emph{guaranteed minimum withdrawal benefit} (GMWB) is not necessarily convexity preserving, and does not satisfy a bang-bang principle other than in certain degenerate cases. \end{itemize} In the event that it is not possible to determine an optimal control analytically, numerical methods are required. Standard techniques in optimization are not always applicable, since these methods cannot guarantee convergence to a global extremum. In particular, without a priori knowledge about the objective functions appearing in the family of optimization problems corresponding to optimal holder behavior at the exercise times, a numerical method needs to resort to a linear search over a discretization of the admissible set. Convergence to a desired tolerance is achieved by refining this partition \cite{wang2008maximal}. Only with this approach can we be assured of a convergent algorithm. However, if an optimal bang-bang control exists, discretizing the control set becomes unnecessary. Theoretically, this simplifies convergence analysis. More importantly, in practice, this reduces the amount of work per local optimization problem, often the bottleneck of any numerical method. \subsection{Insurance applications} The GLWB is a response to a general reduction in the availability of defined benefit pension plans \cite{butrica2009disappearing}, allowing the buyer to replicate the security of such a plan via a substitute. The GLWB is bootstrapped via a lump sum payment $w_{0}$ to an insurer, which is invested in risky assets. We term this the \emph{investment account}. Associated with the GLWB contract is the \emph{guaranteed withdrawal benefit account}, referred to as the withdrawal benefit for brevity. This account is also initially set to $w_{0}$. At a finite set of deterministic \emph{withdrawal times}, the holder is entitled to withdraw a predetermined fraction of the withdrawal benefit (or any lesser amount), even if the investment account diminishes to zero. This predetermined fraction is referred to as the \emph{contract withdrawal rate}. If holders wish to withdraw in excess of the contract withdrawal rate, they can do so upon the payment of a penalty. Typical GLWB contracts include penalty rates that are decreasing functions of time. These contracts are often bundled with \emph{ratchets} (a.k.a. step-ups), a contract feature that periodically increases the withdrawal benefit to the level of the investment account, provided that the latter has grown larger than the former. Moreover, \emph{bonus} (a.k.a. roll-up) provisions are also often present, in which the withdrawal benefit is increased if the holder does not withdraw at a given withdrawal time. Upon death, the holder's estate receives the entirety of the investment account. We show that a holder can maximize the writer's costs by only ever performing \emph{nonwithdrawal}, \emph{withdrawal at exactly the contract rate}, or \emph{surrendering the entirety of their account}. Such a holder will never withdraw a nonzero amount strictly below the contract rate or perform a partial surrender. However, this result requires a special form for the penalty and lapsation functions, which does not hold for all contracts. Pricing GLWB contracts has previously been considered in \cite{piscopo2011valuation,holz2012gmwb,forsyth2013risk,azimzadeh2013hedging}.
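The ratchet and bonus provisions described above can be summarized schematically. In the following Python fragment, the rates and the update order are purely illustrative; the precise state transition used in the pricing problem is given in $\S$\ref{sec:GMxBs}.
\begin{verbatim}
def benefit_update(x1, x2, withdraws, beta=0.05, ratchet=True):
    """Withdrawal benefit after a GLWB anniversary (schematic).

    x1, x2    -- investment account and withdrawal benefit before the event
    withdraws -- True if the holder withdraws at this anniversary
    beta      -- bonus (roll-up) rate credited upon nonwithdrawal
    ratchet   -- True if a step-up is prescribed at this anniversary
    """
    if not withdraws:
        x2 *= 1.0 + beta       # bonus provision
    if ratchet:
        x2 = max(x2, x1)       # step-up to the investment account
    return x2
\end{verbatim}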
Much like the GLWB contract, a GMWB is composed of an investment account and withdrawal benefit initially set to $w_{0}$, in which $w_{0}$ is a lump sum payment to an insurer. At a finite set of withdrawal times, the holder is entitled to withdraw up to a predetermined amount. Note that this amount is not a fraction of the withdrawal benefit, as in the GLWB, but rather a constant amount irrespective of the withdrawal benefit's size. Furthermore, unlike the GLWB, the action of withdrawing decreases both the investment account and withdrawal benefit on a dollar-for-dollar basis. The GMWB promises to return at least the entire original investment, regardless of the performance of the underlying risky investment. The holder may withdraw more than the predetermined amount subject to a penalty. Upon death, the contract is simply transferred to the holder's estate, and hence mortality risk need not be considered. Pricing GMWB contracts has been previously considered in \cite{milevsky2006financial,dai2008guaranteed,chen2008effect,huang:2010,huang2013analysis}. \subsection{Overview} In $\S$\ref{sec:GMxBs}, we introduce the GLWB and GMWB contracts. In $\S$\ref{sec:Model}, we generalize this to model a contract that can be controlled at finitely many times, a typical case in insurance practice (e.g. yearly or quarterly exercise). In $\S$\ref{sec:ControlReduction}, we develop sufficient conditions for the existence of an optimal bang-bang control and show that the GLWB satisfies these conditions. $\S$\ref{sec:NumericalResults} discusses a numerical method for finding the cost of funding GLWB and GMWB contracts, demonstrating the bang-bang principle for the former and providing an example of where it fails for the latter. \section{Guaranteed minimum benefits (GMxBs)\label{sec:GMxBs}} We introduce mathematical models for the GLWB and GMWB contracts in this section. Since most GMxB contracts offer withdrawals on anniversary dates, to simplify notation, we restrict our attention to annual withdrawals occurring at \[ \mathscr{T}\equiv\left\{ 0,1,\ldots,N-1\right\} . \] $0$ and $N$ are referred to as the \emph{initial} and \emph{expiry} times, respectively (no withdrawal occurs at $N$). In order to ensure that the writer can, at least in theory, hedge a short position in a GMxB with no risk, we assume that the holder will employ a loss-maximizing strategy. That is, the holder will act so as to maximize the cost of funding the GMxB. This represents the worst-case hedging cost for the writer. This worst-case cost is a function of the holder's investment account and withdrawal benefit. As such, we write $\mathbf{x}\equiv\left(x_{1},x_{2}\right)$, where $x_{1}$ is the value of the investment account and $x_{2}$ is the value of the withdrawal benefit. Both of these quantities are nonnegative. Let $\alpha$ denote the \emph{hedging fee}, the rate continuously deducted from the investment account $X_{1}$ (while $x_{1}$ is used to denote a particular value of the investment account, the capital symbol $X_{1}$ is reserved for the corresponding stochastic process) to provide the premium for the contract. We assume that between exercise times, the investment account of the GMxBs follows geometric Brownian motion (GBM) as per \[ \frac{dX_{1}}{X_{1}}=\left(\mu-\alpha\right)dt+\sigma dZ \] tracking the index $\hat{X}_{1}$ satisfying \[ \frac{d\hat{X}_{1}}{\hat{X}_{1}}=\mu dt+\sigma dZ \] where $Z$ is a Wiener process under the \emph{real-world measure}. 
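Between withdrawal times, the investment account can be simulated exactly under this dynamic. The following Python sketch is illustrative only; the drift, fee, and volatility are placeholder parameters.
\begin{verbatim}
import numpy as np

def gbm_step(x1, drift, alpha, sigma, dt=1.0, rng=None):
    """Exact one-period step of dX1/X1 = (drift - alpha) dt + sigma dZ.

    Use drift = mu under the real-world measure (or the risk-free rate
    under the risk-neutral measure introduced below); alpha is the
    hedging fee and dt the time between exercise dates.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(np.shape(x1))
    return x1 * np.exp((drift - alpha - 0.5 * sigma ** 2) * dt
                       + sigma * np.sqrt(dt) * z)
\end{verbatim}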
We assume that it is not possible to short the investment account $X_{1}$ for fiduciary reasons \cite{chen2008effect}, so that the obvious arbitrage opportunity is prohibited. The worst-case cost of a GMxB is posed as the solution to an initial value problem (IVP) specified by three conditions: \begin{enumerate} \item the worst-case cost of funding the contract at the expiry time (posed as a Cauchy boundary condition; see, for example, (\ref{eq:GMWB-Initial}) and (\ref{eq:GLWB-Initial2})); \item the evolution of the worst-case cost \emph{across} withdrawals (posed as a supremum over the holder's actions, corresponding to the holder acting so as to maximize the writer's losses; see, for example, (\ref{eq:GMWB-Supremum}) and (\ref{eq:GLWB-Supremum})); \item the evolution of the worst-case cost \emph{between} withdrawals (posed as a conditional expectation; see, for example, (\ref{eq:GMWB-ValueFunction}) and (\ref{eq:GLWB-ValueFunction})). \end{enumerate} We begin by introducing the IVP for the GMWB before moving to the GLWB for ease of exposition. To distinguish the two contracts, we use the superscripts $\text{L}$ and $\text{M}$ to denote quantities that pertain to the GLWB and GMWB, respectively. In the following, we denote by $\tilde{\mathbb{E}}$ the expectation and by $\tilde{Z}$ a Wiener process under the \emph{risk-neutral measure, }that which renders the discounted index $\hat{X}_{1}$ into a martingale. For a function $g$ whose domain is a subset of $\mathbb{R}$, we use the notations $g\left(t^{-}\right)\equiv\lim_{s\uparrow t}g\left(s\right)$ and $g\left(t^{+}\right)\equiv\lim_{s\downarrow t}g\left(s\right)$ to denote the one-sided limits at $t$. \subsection{Guaranteed minimum withdrawal benefit (GMWB)} Since the GMWB is transferred to the holder's estate upon death, mortality risk is not considered. The worst-case cost of funding a GMWB at time $N$ (the expiry) is \cite{dai2008guaranteed} \[ \varphi^{\text{M}}\left(\mathbf{x}\right)\equiv\max\left(x_{1},\left(1-\kappa_N\right)x_{2}\right), \] corresponding to the greater of the entirety of the investment account or a full surrender at the \emph{penalty rate} at the $N$th anniversary, $\kappa_N \in \left[0,1\right]$. The worst-case cost of funding a GMWB at previous times is derived by a hedging argument in which the writer takes a position in the index $\hat{X}_{1}$ \cite{chen2008effect}. Equivalently, it is given by finding $V$ (within the relevant space of functions; see Appendix \ref{app:Preservation}) such that (s.t.) \begin{align} V\left(\mathbf{x},N\right) & =\varphi^{\text{M}}\left(\mathbf{x}\right) & \text{on }\left[0,\infty\right)^{2}\label{eq:GMWB-Initial}\\ V\left(\mathbf{x},n^{-}\right) & =\sup_{\lambda\in\left[0,1\right]}\left[V\left(\mathbf{f}_{\mathbf{x},n}^{\text{M}}\left(\lambda\right),n^{+}\right)+f_{\mathbf{x},n}^{\text{M}}\left(\lambda\right)\right] & \text{on }\left[0,\infty\right)^{2}\times\mathscr{T}\label{eq:GMWB-Supremum}\\ V\left(\mathbf{x},t\right) & =\tilde{\mathbb{E}}\Bigl[e^{-\int_{t}^{n+1}r\left(\tau\right)d\tau}V\left(X_{1}\left(\left(n+1\right)^{-}\right),x_{2},\left(n+1\right)^{-}\right)\nonumber \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\mid X_{1}\left(n^{+}\right)=x_{1}\Bigr] & \text{ on } \left[0,\infty\right)^{2}\times\left(n,n+1\right)\,\forall n\label{eq:GMWB-ValueFunction} \end{align} where between exercise times \begin{equation} \frac{dX_{1}}{X_{1}}=\left(r-\alpha\right)dt+\sigma d\tilde{Z}. 
\label{eq:GMWB-XRiskNeutral} \end{equation} $r$ is the risk-free rate, $f^{\text{M}}:\left[0,1\right]\rightarrow\mathbb{R}$ represents the cash flow from the writer to the holder, and $\mathbf{f}^{\text{M}}:\left[0,1\right]\rightarrow\left[0,\infty\right)^{2}$ represents the state of the contract postwithdrawal. The construction of $f^{\text{M}}$ and $\mathbf{f}^{\text{M}}$ is outlined below. The holder is able to withdraw a fraction $\lambda\in\left[0,1\right]$ of the withdrawal benefit at each exercise time. Intuitively, $V\left(\mathbf{x},n^{-}\right)$ and $V\left(\mathbf{x},n^{+}\right)$ can be thought of as the value of the contract ``immediately before'' and ``immediately after'' the exercise time $n$. Let $G\geq0$ denote the \emph{predetermined contract withdrawal amount} associated with the GMWB so that $G\wedge x_{2}$ ($a\wedge b\equiv\min\left(a,b\right)$, $a\vee b\equiv\max\left(a,b\right)$) is the maximum the holder can withdraw without incurring a penalty (both $\wedge$ and $\vee$ are understood to have lower operator precedence than the arithmetic operations). Consider the point $\left(x_{1},x_{2},n\right)$ with $n\in\mathscr{T}$. \begin{itemize} \item The maximum a holder can withdraw without incurring a penalty is $G\wedge x_{2}$. If the holder withdraws the amount $\lambda x_{2}$ with $\lambda x_{2}\in\left[0,G\wedge x_{2}\right]$, \begin{equation} V\left(\mathbf{x},n^{-}\right)=V(\underbrace{x_{1}-\lambda x_{2}\vee0,\, x_{2}-\lambda x_{2}}_{\mathbf{f}^{\text{M}}},\, n^{+})+\underbrace{\lambda x_{2}}_{f^{\text{M}}}.\label{eq:GMWB-Withdrawal} \end{equation} \item Let $\kappa_{n}\in\left[0,1\right]$ denote the \emph{penalty rate} at the $n$th anniversary. If the holder withdraws the amount $\lambda x_{2}$ with $\lambda x_{2}\in\left(G\wedge x_{2},x_{2}\right]$, \begin{equation} V\left(\mathbf{x},n^{-}\right) =V( \underbrace{x_{1}-\lambda x_{2} \vee0,\, x_{2}-\lambda x_{2}}_{\mathbf{f}^{\text{M}}},\, n^{+})+\underbrace{\lambda x_{2}-\kappa_{n}\left(\lambda x_{2}-G\right)}_{f^{\text{M}}}.\label{eq:GMWB-Surrender} \end{equation} Here, $\lambda x_{2}\in\left(G\wedge x_{2},x_{2}\right)$ corresponds to a partial surrender and $\lambda x_{2}=x_{2}$ (i.e. $\lambda=1$) corresponds to a full surrender. \end{itemize} We can summarize (\ref{eq:GMWB-Withdrawal}) and (\ref{eq:GMWB-Surrender}) by taking \begin{equation} f_{\mathbf{x},n}^{\text{M}}\left(\lambda\right)\equiv\begin{cases} \lambda x_{2} & \text{if }\lambda x_{2}\in\left[0,G\wedge x_{2}\right]\\ G+\left(1-\kappa_{n}\right)\left(\lambda x_{2}-G\right) & \text{if }\lambda x_{2}\in\left(G\wedge x_{2},x_{2}\right] \end{cases}\label{eq:GMWB-CashFlow} \end{equation} and \[ \mathbf{f}_{\mathbf{x},n}^{\text{M}}\left(\lambda\right)\equiv\left( x_{1}-\lambda x_{2}\vee0,\left(1-\lambda\right)x_{2}\right) . \] It can be shown from (\ref{eq:GMWB-ValueFunction}) that the cost to fund the GMWB (between exercise times) satisfies% \footnote{We discuss what it means for a function to satisfy this PDE in Appendix \ref{app:Preservation}.% } \cite{chen2008effect} \begin{equation} \partial_{t}V+\mathcal{L}V=0\text{ on }\left(0,\infty\right)^{2}\times\left(n,n+1\right) \text{ } \forall n\label{eq:GMWB-PDE} \end{equation} where \begin{equation} \mathcal{L}\equiv\frac{1}{2}\sigma^{2}x_{1}^{2}\partial_{x_{1}x_{1}}+\left(r-\alpha\right)x_{1}\partial_{x_{1}}-r.\label{eq:GMWB-L} \end{equation} \subsection{Guaranteed lifelong withdrawal benefit (GLWB)} Let $\mathcal{M}\left(t\right)$ be the mortality rate at time $t$ (i.e. 
$\int_{t_{1}}^{t_{2}}\mathcal{M}\left(t\right)dt$ is the fraction of the original holders who pass away in the interval $\left[t_{1},t_{2}\right]$), so that the survival probability at time $t$ is \[ \mathcal{R}\left(t\right)=1-\int_{0}^{t}\mathcal{M}\left(s\right)ds. \] We assume $\mathcal{M}$ is continuous and nonnegative, along with $\mathcal{R}\left(t\right)\geq0$ for all times $t$. We assume that mortality risk is diversifiable. Furthermore, we assume the existence of a time $t^{\star}>0$ s.t. $\mathcal{R}\left(t^{\star}\right)=0$. That is, survival beyond $t^{\star}$ is impossible (i.e. no holder lives forever). $N$ is chosen s.t. $N\geq t^{\star}$ to ensure that all holders have passed away at the expiry of the contract. As is often the case in practice, we assume ratchets are prescribed to occur on a subset of the anniversary dates (e.g. triennially). As usual, we assume that the holder of a GLWB will employ a loss-maximizing strategy. Since $N$ was picked sufficiently large, the insurer has no obligations at the $N$th anniversary and the worst-case cost of funding a GLWB at time $N$ is \begin{equation} \varphi^{\text{L}}\left(\mathbf{x}\right)\equiv0.\label{eq:GLWB-Initial} \end{equation} As with the GMWB, the worst-case cost of funding a GLWB is derived by a hedging argument in which the writer takes a position in the index $\hat{X}_{1}$ \cite{forsyth2013risk}. Equivalently, it is given by finding $V$ (within the relevant space of functions; see Appendix \ref{app:Preservation}) s.t. \begin{align} V\left(\mathbf{x},N\right) & =\varphi^{\text{L}}\left(\mathbf{x}\right) & \text{on }\left[0,\infty\right)^{2}\label{eq:GLWB-Initial2}\\ V\left(\mathbf{x},n^{-}\right) & =\sup_{\lambda\in\left[0,2\right]}\left[V\left(\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(\lambda\right),n^{+}\right)+f_{\mathbf{x},n}^{\text{L}}\left(\lambda\right)\right] & \text{on }\left[0,\infty\right)^{2}\times\mathscr{T}\label{eq:GLWB-Supremum}\\ V\left(\mathbf{x},t\right) & =\tilde{\mathbb{E}}\Bigl[e^{-\int_{t}^{n+1}r\left(\tau\right)d\tau}V\left(X_{1}\left(\left(n+1\right)^{-}\right),x_{2},\left(n+1\right)^{-}\right)\nonumber \\ & \qquad+\int_{t}^{n+1}e^{-\int_{t}^{s}r\left(\tau\right)d\tau}\mathcal{M}\left(s\right)X_{1}\left(s\right)ds\mid X_{1}\left(n^{+}\right)=x_{1}\Bigr] & \text{ on }\left[0,\infty\right)^{2}\times\left(n,n+1\right)\,\forall n\label{eq:GLWB-ValueFunction} \end{align} where between exercise times, $X_{1}$ is specified by (\ref{eq:GMWB-XRiskNeutral}). $f^{\text{L}}:\left[0,2\right]\rightarrow\mathbb{R}$ represents the (mortality-adjusted \cite{forsyth2013risk}) cash flow from the writer to the holder and $\mathbf{f}^{\text{L}}\colon\left[0,2\right]\rightarrow\left[0,\infty\right)^{2}$ represents the state of the contract postwithdrawal. In particular, $\lambda=0$ corresponds to nonwithdrawal, $\lambda\in\left(0,1\right]$ corresponds to withdrawal at or below the contract rate, and $\lambda\in\left(1,2\right]$ corresponds to a partial or full surrender. \begin{remark}We remark that the admissible set of actions $\left[0,2\right]$ is undesirably large (i.e. a continuum). We will apply the results established in $\S$\emph{\ref{sec:ControlReduction}} to show that an optimal strategy taking on values only from $\left\{ 0,1,2\right\} $ exists. In other words, an equivalent problem can be constructed by substituting the set $\left\{ 0,1,2\right\} $ for the original $\left[0,2\right]$ in the optimization problem \emph{(\ref{eq:GLWB-Supremum})}. 
The resulting problem has smaller computational complexity than the original one (i.e. successive refinements of $\left[0,2\right]$ need not be considered to attain convergence).\end{remark} The construction of $f^{\text{L}}$ and $\mathbf{f}^{\text{L}}$ is guided by the specification of the contract: \begin{itemize} \item Let $\beta$ denote the \emph{bonus rate}: if the holder does not withdraw, the withdrawal benefit is amplified by a factor of $1+\beta$. \item Let $\delta$ denote the \emph{contract withdrawal rate}; that is, $\delta x_{2}$ is the maximum a holder can withdraw without incurring a penalty. \item Let $\kappa_{n}\in\left[0,1\right]$ denote the \emph{penalty rate} at the $n$th anniversary, incurred if the holder withdraws above the contract withdrawal rate. \item Let \[ \mathbb{I}_{n}=\begin{cases} 1 & \text{if a ratchet is prescribed to occur on the }n\text{th anniversary}\\ 0 & \text{otherwise} \end{cases}. \] \end{itemize} Then, \begin{equation} f_{\mathbf{x},n}^{\text{L}}\left(\lambda\right)\equiv\mathcal{R}\left(n\right)\cdot\begin{cases} 0 & \text{if }\lambda=0\\ \lambda\delta x_{2} & \text{if }\lambda\in\left(0,1\right]\\ \delta x_{2}+\left(\lambda-1\right)\left(1-\kappa_{n}\right)\left(x_{1}-\delta x_{2}\vee0\right) & \text{if }\lambda\in\left(1,2\right] \end{cases}\label{eq:GLWB-CashFlow} \end{equation} and \begin{equation} \mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(\lambda\right)\equiv\begin{cases} \left( x_{1},\, x_{2}\left(1+\beta\right)\vee\mathbb{I}_{n}x_{1}\right) & \text{if }\lambda=0\\ \left( x_{1}-\lambda\delta x_{2}\vee0,\, x_{2}\vee\mathbb{I}_{n}\left[x_{1}-\lambda\delta x_{2}\right]\right) & \text{if }\lambda\in\left(0,1\right]\\ \left(2-\lambda\right)\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(1\right) & \text{if }\lambda\in\left(1,2\right] \end{cases}.\label{eq:GLWB-StateTransition} \end{equation} It can be shown that the cost to fund the GLWB (between exercise times) satisfies \cite{forsyth2013risk} \begin{equation} \partial_{t}V+\mathcal{L}V+\mathcal{M}x_{1}=0\text{ on }\left(0,\infty\right)^{2}\times\left(n,n+1\right) \text{ } \forall n\label{eq:GLWB-PDE} \end{equation} where $\mathcal{L}$ is defined in (\ref{eq:GMWB-L}). \section{General formulation\label{sec:Model}} We now generalize the above IVPs. Let $\mathscr{T}\equiv\left\{ t_{0},\ldots,t_{N-1}\right\} $ along with the order $0\equiv t_{0}<\cdots<t_{N}\equiv T$, in which $T$ is referred to as the expiry time. Let $\Omega$ be a convex subset of $\mathbb{R}^{m}$. The set of all actions a holder can perform at an exercise time $t_{n}$ is denoted by $\Lambda_{n}\subset\mathbb{R}^{m^{\prime}}$, assumed to be nonempty, and referred to as an \emph{admissible set}. For brevity, let \begin{equation} v_{\mathbf{x},n}\left(\lambda\right)\equiv V\left(\mathbf{f}_{\mathbf{x},n}\left(\lambda\right),t_{n}^{+}\right)+f_{\mathbf{x},n}\left(\lambda\right)\label{eq:Model-LittleV} \end{equation} where $f_{\mathbf{x},n}\colon\Lambda_{n}\rightarrow\mathbb{R}$ and $\mathbf{f}_{\mathbf{x},n}\colon\Lambda_{n}\rightarrow\Omega$. We write $v_{\mathbf{x},n}\left(\lambda\right)$ to stress that for each fixed $\left(\mathbf{x},n\right)$, we consider an optimization problem in the variable $\lambda$.
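As a concrete instance of (\ref{eq:Model-LittleV}), the following Python sketch evaluates $v_{\mathbf{x},n}$ for the GLWB using (\ref{eq:GLWB-CashFlow}) and (\ref{eq:GLWB-StateTransition}). The interpolant \texttt{V\_plus} of the solution at $t_{n}^{+}$ and all parameter values are placeholders, and the candidate set $\left\{ 0,1,2\right\} $ anticipates the results of $\S$\ref{sec:ControlReduction}.
\begin{verbatim}
def cash_flow(x1, x2, lam, R_n, delta, kappa_n):
    # f^L of Eq. (GLWB-CashFlow)
    if lam == 0:
        return 0.0
    if lam <= 1:
        return R_n * lam * delta * x2
    return R_n * (delta * x2
                  + (lam - 1) * (1 - kappa_n) * max(x1 - delta * x2, 0.0))

def transition(x1, x2, lam, beta, delta, ratchet):
    # bold f^L of Eq. (GLWB-StateTransition); ratchet is I_n
    if lam == 0:
        return x1, max(x2 * (1 + beta), ratchet * x1)
    if lam <= 1:
        return (max(x1 - lam * delta * x2, 0.0),
                max(x2, ratchet * (x1 - lam * delta * x2)))
    y1, y2 = transition(x1, x2, 1.0, beta, delta, ratchet)
    return (2 - lam) * y1, (2 - lam) * y2

def v(x1, x2, lam, V_plus, R_n=1.0, delta=0.05, kappa_n=0.02,
      beta=0.06, ratchet=1):
    # v_{x,n}(lambda) of Eq. (Model-LittleV), specialized to the GLWB
    return (V_plus(*transition(x1, x2, lam, beta, delta, ratchet))
            + cash_flow(x1, x2, lam, R_n, delta, kappa_n))

# optimal holder action over the bang-bang candidate set {0, 1, 2}
best = max((0.0, 1.0, 2.0),
           key=lambda lam: v(100.0, 100.0, lam, lambda a, b: 0.9 * (a + b)))
\end{verbatim}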
The general problem is to find $V$ satisfying the conditions \begin{align} V\left(\mathbf{x},T\right) & =\varphi\left(\mathbf{x}\right) & \text{on }\Omega\label{eq:Model-Initial}\\ V\left(\mathbf{x},t_{n}^{-}\right) & =\sup v_{\mathbf{x},n}\left(\Lambda_{n}\right) & \text{on }\Omega\times\mathscr{T}\label{eq:Model-Supremum} \end{align} along with a condition specifying the evolution of $V$ from $t_{n}^{+}$ to $t_{n+1}^{-}$ (see, for example, (\ref{eq:GMWB-ValueFunction}) and (\ref{eq:GLWB-ValueFunction})). \begin{remark}\label{Model-AdmissibleSetIndependentOfX}Convexity preservation, a property that helps establish the bang-bang principle, depends on each admissible set $\Lambda_{n}$ being independent of the state of the contract, $\mathbf{x}$. This is discussed in Remark \emph{\ref{rmk:ControlReduction-Statelessness}}.\end{remark} \section{Control reduction}\label{sec:ControlReduction} \begin{definition}[Optimal bang-bang control] \label{def:OptimalBangBang}$V$, a solution to the general IVP introduced in $\S$\ref{sec:Model}, is said to admit an optimal bang-bang control at time $t_{n}\in\mathscr{T}$ whenever \[ V\left(\mathbf{x},t_{n}^{-}\right)=\max v_{\mathbf{x},n}\left(\hat{\Lambda}_{n}\right)\text{ on }\Omega, \] where $\hat{\Lambda}_{n}$ denotes a finite set independent of $\mathbf{x}$. \end{definition} The above condition is inherently simpler than (\ref{eq:Model-Supremum}), in which there are no guarantees on the cardinality of $\Lambda_{n}$. $\S$\ref{sub:ControlReduction-BangBangPrinciple} develops Corollary \ref{cor:OptimalBangBang}, establishing sufficient conditions for the existence of an optimal bang-bang control. This result requires that the relevant solution $V$ be convex and monotone (CM). Given a CM initial condition (\ref{eq:Model-Initial}), we seek to ensure that $V$ preserves the CM property at all previous times. $\S$\ref{sub:ControlReduction-PreservationAcross} develops conditions on the functions $f$ and $\mathbf{f}$ to ensure that the supremum (\ref{eq:Model-Supremum}) preserves the CM property. Similarly, $\S$\ref{sub:ControlReduction-Between} develops conditions on the dynamics of $V$ (and hence the underlying stochastic process(es)) to ensure that the CM property is preserved between exercise times. For the remainder of this work, we use the shorthand $V_{n}^{+}\left(\mathbf{x}\right)\equiv V\left(\mathbf{x},t_{n}^{+}\right)$ and $V_{n}^{-}\left(\mathbf{x}\right)\equiv V\left(\mathbf{x},t_{n}^{-}\right)$. \subsection{Preliminaries} In an effort to remain self-contained, we provide the reader with several elementary (but useful) definitions. In practice, we consider only vector spaces over $\mathbb{R}$ and hence restrict our definitions to this case. \begin{definition}[convex set] Let $W$ be a vector space over $\mathbb{R}$. $X\subset W$ is a convex set if for all $x,x^{\prime}\in X$ and $\theta\in\left(0,1\right)$, $\theta x+\left(1-\theta\right)x^{\prime}\in X$. \end{definition} \begin{definition}[convex function] \label{def:ControlReduction-Convex}Let $X$ be a convex set and $Y$ be a vector space over $\mathbb{R}$ equipped with a partial order $\leq_{Y}$. $h\colon X\rightarrow Y$ is a convex function if for all $x,x^{\prime}\in X$ and $\theta\in\left(0,1\right)$, \[ h\left(\theta x+\left(1-\theta\right)x^{\prime}\right)\leq_{Y}\theta h\left(x\right)+\left(1-\theta\right)h\left(x^{\prime}\right).
\] \end{definition} \begin{definition}[extreme point] An extreme point of a convex set $X$ is a point $x\in X$ which cannot be written $x=\theta x^{\prime}+\left(1-\theta\right)x^{\prime\prime}$ for any $\theta\in\left(0,1\right)$ and $x^{\prime},x^{\prime\prime}\in X$ with $x^{\prime}\neq x^{\prime\prime}$. \end{definition} \begin{definition}[convex polytope] Let $Y$ be a topological vector space over $\mathbb{R}$. $P\subset Y$ is a convex polytope if it is a compact convex set with finitely many extreme points. The extreme points of a convex polytope are referred to as its vertices. \end{definition} \begin{definition}[monotone function] \label{def:ControlReduction-Monotone}Let $X$ and $Y$ be sets equipped with partial orders $\leq_{X}$ and $\leq_{Y}$, respectively. $h\colon X\rightarrow Y$ is monotone if for all $x,x^{\prime}\in X$, $x\leq_{X}x^{\prime}$ implies $h\left(x\right)\leq_{Y}h\left(x^{\prime}\right)$.\end{definition} \begin{lemma} \label{lem:ControlReduction-ConvexComposition}Let $A$ be a convex set, and let $B$ and $C$ be vector spaces over $\mathbb{R}$ equipped with partial orders $\leq_{B}$ and $\leq_{C}$, respectively. If $h_{1}\colon A\rightarrow B$ and $h_{2}\colon B\rightarrow C$ are convex functions with $h_{2}$ monotone, then $h_{2}\circ h_{1}$ is a convex function. \end{lemma} \begin{remark}\label{rmk:ControlReduction-OrderOnRN}For the remainder of this work, we equip $\mathbb{R}^{m}$ with the order $\leq$ defined as follows: if $\mathbf{x},\mathbf{y}\in\mathbb{R}^{m}$, $\mathbf{x}\leq\mathbf{y}$ whenever $x_{i}\leq y_{i}$ for all $i$.\end{remark} \subsection{Bang-bang principle\label{sub:ControlReduction-BangBangPrinciple}} Consider a particular exercise time $t_n$. Suppose the following: \begin{enumerate}[label=(A\arabic*)] \item \label{itm:ControlReduction-VConvexAndMonotone} $\mathbf{x} \mapsto V_{n}^{+}\left(\mathbf{x}\right)$ is CM. \item \label{itm:ControlReduction-BoundedFromAbove}For each fixed $\mathbf{x}\in\Omega$, $v_{\mathbf{x},n}\left(\Lambda_n\right)$ is bounded above. \end{enumerate} Throughout this section, we consider a particular point $\mathbf{y}\in\Omega$ in order to establish our result pointwise. For the results below, we require the following propositions: \begin{enumerate}[label=(B\arabic*)] \item \label{itm:ControlReduction-ConvexCollection}There exists a collection $\mathcal{P}_{n}\left(\mathbf{y}\right)\subset2^{\Lambda_{n}}$ s.t. $\bigcup_{P\in\mathcal{P}_{n}\left(\mathbf{y}\right)}P=\Lambda_{n}$ and each $P\in\mathcal{P}_{n}\left(\mathbf{y}\right)$ is compact convex. \item \label{itm:ControlReduction-FlowAndStateConvexity}For each $P\in\mathcal{P}_{n}\left(\mathbf{y}\right)$, the restrictions $\lambda \mapsto f_{\mathbf{y},n}|_{P}\left(\lambda\right)$ and $\lambda \mapsto \mathbf{f}_{\mathbf{y},n}|_{P}\left(\lambda\right)$ are convex. \item \label{itm:ControlReduction-FinitePolytopes}$\mathcal{P}_{n}\left(\mathbf{y}\right)$ is a finite collection of convex polytopes. \end{enumerate} \begin{remark}\emph{\ref{itm:ControlReduction-ConvexCollection}} simply states that we can ``cut up'' the admissible set $\Lambda_{n}$ into (possibly overlapping) compact convex sets.
\emph{\ref{itm:ControlReduction-FlowAndStateConvexity}} states that the restrictions of $f_{\mathbf{y},n}$ and $\mathbf{f}_{\mathbf{y},n}$ on each of these sets are convex functions of $\lambda$.\end{remark} \begin{lemma} \label{lma:ControlReduction-LittleVConvex}Suppose \emph{\ref{itm:ControlReduction-VConvexAndMonotone}}, \emph{\ref{itm:ControlReduction-ConvexCollection}}, and \emph{\ref{itm:ControlReduction-FlowAndStateConvexity}}. For each $P\in\mathcal{P}_{n}\left(\mathbf{y}\right)$, the restriction $\lambda \mapsto v_{\mathbf{y},n}|_{P}\left(\lambda\right)$ is convex.\end{lemma} \begin{proof} The proof is by (\ref{eq:Model-LittleV}), \ref{itm:ControlReduction-VConvexAndMonotone}, \ref{itm:ControlReduction-FlowAndStateConvexity}, and Lemma \ref{lem:ControlReduction-ConvexComposition}.\end{proof} \begin{lemma} \label{lem:ControlReduction-LittleVSupremum}Suppose \emph{\ref{itm:ControlReduction-VConvexAndMonotone}}, \emph{\ref{itm:ControlReduction-BoundedFromAbove}}, \emph{\ref{itm:ControlReduction-ConvexCollection}}, and \emph{\ref{itm:ControlReduction-FlowAndStateConvexity}}. Let $P\in\mathcal{P}_{n}\left(\mathbf{y}\right)$. Then, \[ \sup v_{\mathbf{y},n}\left(P\right)=\sup v_{\mathbf{y},n}\left(E\left(P\right)\right) \] where $E\left(P\right)$ denotes the set of extreme points of $P$.\end{lemma} \begin{proof} Let $w\equiv v_{\mathbf{y},n}|_{P}$. Note that $w\left(P\right)=v_{\mathbf{y},n}\left(P\right)$, and hence no generality is lost in considering $w$. Lemma \ref{lma:ControlReduction-LittleVConvex} establishes the convexity of $w$. Naturally, $\sup w\left(P\right)$ exists (and hence $\sup w\left(E\left(P\right)\right)$ exists too) due to \ref{itm:ControlReduction-BoundedFromAbove}. Finally, it is well known from elementary convex analysis that the supremum of a convex function on a compact convex set $P$ lies on the extreme points of $P$, $E\left(P\right)$. See \cite[Chap. 32]{rockafellar1997convex}. \end{proof} \begin{theorem}[bang-bang principle] \label{thm:ControlReduction-SupremumEverywhere}Suppose \emph{\ref{itm:ControlReduction-VConvexAndMonotone}}, \emph{\ref{itm:ControlReduction-BoundedFromAbove}}, \emph{\ref{itm:ControlReduction-ConvexCollection}}, and \emph{\ref{itm:ControlReduction-FlowAndStateConvexity}}. Then, \[ \sup v_{\mathbf{y},n}\left(\Lambda_{n}\right)=\sup v_{\mathbf{y},n}\left(\bigcup_{P\in\mathcal{P}_{n}\left(\mathbf{y}\right)}E\left(P\right)\right) \] where $E\left(P\right)$ denotes the set of extreme points of $P$.\end{theorem} \begin{proof} By \ref{itm:ControlReduction-ConvexCollection}, we have that $\Lambda_{n}=\bigcup_{P\in\mathcal{P}_{n}\left(\mathbf{y}\right)}P$. We can, w.l.o.g., assume that all members of $\mathcal{P}_{n}\left(\mathbf{y}\right)$ are nonempty (otherwise, remove all empty sets). $\sup v_{\mathbf{y},n}\left(\Lambda_{n}\right)$ exists due to \ref{itm:ControlReduction-BoundedFromAbove}. Since for each $P\in\mathcal{P}_{n}\left(\mathbf{y}\right)$, $\sup v_{\mathbf{y},n}\left(P\right)=\sup v_{\mathbf{y},n}\left(E\left(P\right)\right)$ (Lemma \ref{lem:ControlReduction-LittleVSupremum}), two applications of Lemma \ref{lem:Commutativity-ResultNonEmpty} allow us to ``commute'' the supremum with the union to get \begin{align*} \sup v_{\mathbf{y},n}\left(\Lambda_{n}\right) & =\sup v_{\mathbf{y},n}\left(\bigcup_{P\in\mathcal{P}_{n}\left(\mathbf{y}\right)}P\right)\\ & =\sup v_{\mathbf{y},n} \left(\bigcup_{P\in\mathcal{P}_{n}\left(\mathbf{y}\right)}E\left(P\right)\right). 
\end{align*} \end{proof} Theorem \ref{thm:ControlReduction-SupremumEverywhere} reduces the region over which to search for an optimal control. When $\mathcal{P}_{n}\left(\mathbf{y}\right)$ is a finite collection of convex polytopes, the situation is even nicer, as $\bigcup_{P\in\mathcal{P}_{n}\left(\mathbf{y}\right)}E\left(P\right)$ is a finite set (a finite union of finite sets). If, in addition, $\mathcal{P}_{n}$ is chosen independent of $\mathbf{y}$, we arrive at an optimal bang-bang control: \begin{corollary}[optimal bang-bang control] \label{cor:OptimalBangBang}Suppose \emph{\ref{itm:ControlReduction-VConvexAndMonotone}} and \emph{\ref{itm:ControlReduction-BoundedFromAbove}}. Furthermore, suppose \emph{\ref{itm:ControlReduction-ConvexCollection}}, \emph{\ref{itm:ControlReduction-FlowAndStateConvexity}}, and \emph{\ref{itm:ControlReduction-FinitePolytopes}} for all $\mathbf{y}\in\Omega$. Finally, suppose that there exists $\mathcal{P}_{n}$ s.t. $\mathcal{P}_{n}=\mathcal{P}_{n}\left(\mathbf{y}\right)$ for all $\mathbf{y}\in\Omega$. Then, the general IVP introduced in $\S$\ref{sec:Model} admits an optimal bang-bang control at time $t_{n}$ (Definition \emph{\ref{def:OptimalBangBang}}) with \[ V\left(\mathbf{x},t_{n}^{-}\right)=\sup v_{\mathbf{x},n}\left(\Lambda_{n}\right)=\max v_{\mathbf{x},n}\left(\hat{\Lambda}_{n}\right)\text{ on }\Omega \] and \[ \hat{\Lambda}_{n}\equiv\bigcup_{P\in\mathcal{P}_{n}}E\left(P\right). \] \end{corollary} \begin{example}\label{exm:ControlReduction-GLWBBangBang}Let $\mathbf{y}\in\left[0,\infty\right)^{2}$. We now find $\mathcal{P}_{n}^{\text{L}}\left(\mathbf{y}\right)$ s.t. \emph{\ref{itm:ControlReduction-ConvexCollection}}, \emph{\ref{itm:ControlReduction-FlowAndStateConvexity}}, and \emph{\ref{itm:ControlReduction-FinitePolytopes}} are satisfied for the GLWB. Take $P_{1}\equiv\left[0,1\right]$, $P_{2}\equiv\left[1,2\right]$, and $\mathcal{P}_{n}^{\text{L}}\left(\mathbf{y}\right)\equiv\left\{ P_{1},P_{2}\right\} $, satisfying \emph{\ref{itm:ControlReduction-FinitePolytopes}}. Note that $\bigcup_{P\in\mathcal{P}_{n}^{\text{L}}\left(\mathbf{y}\right)}P=\left[0,2\right]$, satisfying \emph{\ref{itm:ControlReduction-ConvexCollection}}. It is trivial to show that the functions $f_{\mathbf{y},n}^{\text{L}}|_{P_{j}}$ and $\mathbf{f}_{\mathbf{y},n}^{\text{L}}|_{P_{j}}$ defined in \emph{(\ref{eq:GLWB-CashFlow})} and \emph{(\ref{eq:GLWB-StateTransition})} are convex as functions of $\lambda$ (the maximum of convex functions is a convex function), thereby satisfying \emph{\ref{itm:ControlReduction-FlowAndStateConvexity}}. 
Since $\mathbf{y}$ was arbitrary and $\mathcal{P}_{n}^{\text{L}}$ was chosen independent of $\mathbf{y}$, we conclude (whenever \emph{\ref{itm:ControlReduction-VConvexAndMonotone}} and \emph{\ref{itm:ControlReduction-BoundedFromAbove}} hold), by Corollary \emph{\ref{cor:OptimalBangBang}}, that the supremum of $v_{\mathbf{y},n}^{\text{L}}$ occurs at \[ \hat{\Lambda}_{n}^{\text{L}}=E\left(P_{1}\right)\cup E\left(P_{2}\right)=E\left(\left[0,1\right]\right)\cup E\left(\left[1,2\right]\right)=\left\{ 0,1\right\} \cup\left\{ 1,2\right\} =\left\{ 0,1,2\right\} \] (corresponding to nonwithdrawal, withdrawal at exactly the contract rate, and a full surrender).\end{example} \begin{remark} If all the conditions required for Corollary \emph{\ref{cor:OptimalBangBang}} hold, with the exception that $\mathcal{P}_{n}\left(\mathbf{y}\right)$ depends on $\mathbf{y}$, then an optimal control is not necessarily bang-bang, but does satisfy the bang-bang principle, Theorem \emph{\ref{thm:ControlReduction-SupremumEverywhere}}. In many cases, this still results in considerable computational simplification (see Remark \emph{\ref{rmk:PrincipleVersusControl}}). \end{remark} \subsection{Preservation of convexity and monotonicity across exercise times\label{sub:ControlReduction-PreservationAcross}} Since the convexity and monotonicity of $V$ are desirable properties upon which the bang-bang principle depends (i.e. \ref{itm:ControlReduction-VConvexAndMonotone}), we would like to ensure that they are preserved ``across'' exercise times (i.e. from $t_{n}^{+}$ to $t_{n}^{-}$). Consider the $n$th exercise time, $t_{n}$. Suppose the following: \begin{enumerate}[label=(C\arabic*)] \item \label{ass:ControlReduction-FlowAndStateConvexityInX}For each fixed $\lambda\in\Lambda_{n}$, $\mathbf{x}\mapsto\mathbf{f}_{\mathbf{x},n}\left(\lambda\right)$ and $\mathbf{x}\mapsto f_{\mathbf{x},n}\left(\lambda\right)$ are convex.% \footnote{Note that this is not the same as \ref{itm:ControlReduction-FlowAndStateConvexity}. Here, we mean that for each fixed $\lambda\in\Lambda_{n}$ and for all $\mathbf{x},\mathbf{x}^{\prime}\in\Omega$ and $\theta\in\left(0,1\right)$, \[ f_{\theta\mathbf{x}+\left(1-\theta\right)\mathbf{x}^{\prime},n}\left(\lambda\right)\leq\theta f_{\mathbf{x},n}\left(\lambda\right)+\left(1-\theta\right)f_{\mathbf{x}^{\prime},n}\left(\lambda\right) \] and \begin{equation} \mathbf{f}_{\theta\mathbf{x}+\left(1-\theta\right)\mathbf{x}^{\prime},n}\left(\lambda\right)\leq\theta\mathbf{f}_{\mathbf{x},n}\left(\lambda\right)+\left(1-\theta\right)\mathbf{f}_{\mathbf{x}^{\prime},n}\left(\lambda\right).\label{eq:Footnote-Convexity} \end{equation} The order $\leq$ used in (\ref{eq:Footnote-Convexity}) is that on $\Omega\subset\mathbb{R}^{m}$, inherited from the order on $\mathbb{R}^{m}$ established in Remark \ref{rmk:ControlReduction-OrderOnRN}.% } \item \label{ass:ControlReduction-ControlEnsuresMonotone}For each $\mathbf{x},\mathbf{x}^{\prime}\in\Omega$ s.t. $\mathbf{x}\leq\mathbf{x}^{\prime}$, there exist sequences $\left\{ \lambda_{k}\right\} ,\left\{ \lambda_{k}^{\prime}\right\} \in\Lambda_n^{\mathbb{N}}$ s.t. $v_{\mathbf{x},n}\left(\lambda_{k}\right)\rightarrow V_{n}^{-}\left(\mathbf{x}\right)$, and for all $k$, $f_{\mathbf{x},n}\left(\lambda_{k}\right)\leq f_{\mathbf{x}^{\prime},n}\left(\lambda_{k}^{\prime}\right)$ and $\mathbf{f}_{\mathbf{x},n}\left(\lambda_{k}\right)\leq\mathbf{f}_{\mathbf{x}^{\prime},n}\left(\lambda_{k}^{\prime}\right)$.
\end{enumerate} \begin{remark}\emph{\ref{ass:ControlReduction-ControlEnsuresMonotone}} simplifies greatly if for all $\mathbf{x}$, $v_{\mathbf{x},n}\left(\Lambda_{n}\right)$ contains its supremum.% \footnote{It is worthwhile to note that in practice, this is often the case; for fixed $n$, consider $\Lambda_{n}$ compact and $\lambda \mapsto v_{\mathbf{x},n}\left(\lambda\right)$ continuous for all $\mathbf{x}$.% } Denote this supremum $v_{\mathbf{x},n}\left(\lambda_{\mathbf{x}}\right)$, where $\lambda_{\mathbf{x}}\in\Lambda_{n}$ is an optimal action at $\mathbf{x}$. In this case, the following simpler assumption yields \emph{\ref{ass:ControlReduction-ControlEnsuresMonotone}}: for each $\mathbf{x},\mathbf{x}^{\prime}\in\Omega$ s.t. $\mathbf{x}\leq\mathbf{x}^{\prime}$, there exists $\lambda^{\prime}\in\Lambda_{n}$ s.t. $f_{\mathbf{x},n}\left(\lambda_{\mathbf{x}}\right)\leq f_{\mathbf{x}^{\prime},n}\left(\lambda^{\prime}\right)$ and $\mathbf{f}_{\mathbf{x},n}\left(\lambda_{\mathbf{x}}\right)\leq\mathbf{f}_{\mathbf{x}^{\prime},n}\left(\lambda^{\prime}\right)$ (take $\lambda_{k}=\lambda_{\mathbf{x}}$ and $\lambda_{k}^{\prime}=\lambda^{\prime}$ for all $k$ to arrive at \emph{\ref{ass:ControlReduction-ControlEnsuresMonotone}}). This simpler condition states that for each pair of positions $\mathbf{x}\leq\mathbf{x}^{\prime}$, there is an action $\lambda^{\prime}$ s.t. the position and cash flow after the event at $\mathbf{x}^{\prime}$ under action $\lambda^{\prime}$ are greater than (or equal to) the position and cash flow after the event at $\mathbf{x}$ under an optimal action $\lambda_{\mathbf{x}}$. Intuitively, this guarantees us that the position $\mathbf{x}^{\prime}$ is more desirable than $\mathbf{x}$ (from the holder's perspective). This is not a particularly restrictive assumption, and it should hold true for any model of a contract in which a larger position is more desirable than a smaller one.\end{remark} \begin{lemma} \label{lem:ControlReduction-ConvexPreservingAcross}Suppose \emph{\ref{itm:ControlReduction-VConvexAndMonotone}}, \emph{\ref{itm:ControlReduction-BoundedFromAbove}}, and \emph{\ref{ass:ControlReduction-FlowAndStateConvexityInX}}. Then, $\mathbf{x}\mapsto V_{n}^{-}\left(\mathbf{x}\right)$ is convex.\end{lemma} \begin{proof} Fix $\mathbf{x},\mathbf{x}^{\prime}\in\Omega$ and $\theta\in\left(0,1\right)$, and let $\mathbf{z}\equiv\theta\mathbf{x}+\left(1-\theta\right)\mathbf{x}^{\prime}$.
Then, by \ref{itm:ControlReduction-VConvexAndMonotone} and \ref{ass:ControlReduction-FlowAndStateConvexityInX}, \begin{align*} V_{n}^{-}\left(\mathbf{z}\right) & =\sup v_{\mathbf{z},n}\left(\Lambda_{n}\right)\\ & =\sup_{\lambda\in\Lambda_{n}}\left[V_{n}^{+}\left(\mathbf{f}_{\mathbf{z},n}\left(\lambda\right)\right)+f_{\mathbf{z},n}\left(\lambda\right)\right]\\ & \leq\sup_{\lambda\in\Lambda_{n}}\left[V_{n}^{+}\left(\theta\mathbf{f}_{\mathbf{x},n}\left(\lambda\right)+\left(1-\theta\right)\mathbf{f}_{\mathbf{x}^{\prime},n}\left(\lambda\right)\right)+\theta f_{\mathbf{x},n}\left(\lambda\right)+\left(1-\theta\right)f_{\mathbf{x}^{\prime},n}\left(\lambda\right)\right]\\ & \leq\theta\sup_{\lambda\in\Lambda_{n}}\left[V_{n}^{+}\left(\mathbf{f}_{\mathbf{x},n}\left(\lambda\right)\right)+f_{\mathbf{x},n}\left(\lambda\right)\right]+\left(1-\theta\right)\sup_{\lambda\in\Lambda_{n}}\left[V_{n}^{+}\left(\mathbf{f}_{\mathbf{x}^{\prime},n}\left(\lambda\right)\right)+f_{\mathbf{x}^{\prime},n}\left(\lambda\right)\right]\\ & =\theta\sup v_{\mathbf{x},n}\left(\Lambda_{n}\right)+\left(1-\theta\right)\sup v_{\mathbf{x}^{\prime},n}\left(\Lambda_{n}\right)\\ & =\theta V_{n}^{-}\left(\mathbf{x}\right)+\left(1-\theta\right)V_{n}^{-}\left(\mathbf{x}^{\prime}\right). \end{align*} The first inequality uses \ref{ass:ControlReduction-FlowAndStateConvexityInX} together with the monotonicity of $V_{n}^{+}$; the second uses the convexity of $V_{n}^{+}$ and the subadditivity of the supremum. \end{proof} \begin{remark}\label{rmk:ControlReduction-Statelessness}Note that the proof of Lemma \emph{\ref{lem:ControlReduction-ConvexPreservingAcross}} involves using $V_{n}^{-}\left(\mathbf{y}\right)=\sup v_{\mathbf{y},n}\left(\Lambda_{n}\right)$ for $\mathbf{y}=\mathbf{x},\mathbf{x}^{\prime}$. If $\Lambda_{n}$ is instead a function of the contract state (i.e. $\Lambda_{n}\equiv\Lambda_{n}\left(\mathbf{x}\right)$), then the above proof methodology does not work since it is not necessarily true that $V_{n}^{-}\left(\mathbf{y}\right)=\sup v_{\mathbf{y},n}\left(\Lambda_{n}\left(\mathbf{z}\right)\right)$ for $\mathbf{y}=\mathbf{x},\mathbf{x}^{\prime}$.\end{remark} \begin{lemma} \label{lem:ControlReduction-MonotonePreservingAcross}Suppose \emph{\ref{itm:ControlReduction-VConvexAndMonotone}}, \emph{\ref{itm:ControlReduction-BoundedFromAbove}}, and \emph{\ref{ass:ControlReduction-ControlEnsuresMonotone}}. Then, $\mathbf{x}\mapsto V_{n}^{-}\left(\mathbf{x}\right)$ is monotone.\end{lemma} \begin{proof} Let $\mathbf{x},\mathbf{x}^{\prime}\in\Omega$ s.t. $\mathbf{x}\leq\mathbf{x}^{\prime}$. By \ref{itm:ControlReduction-VConvexAndMonotone} (specifically, since $V_{n}^{+}$ is monotone) and \ref{ass:ControlReduction-ControlEnsuresMonotone}, for each $k$, \begin{align*} v_{\mathbf{x},n}\left(\lambda_{k}\right) & =V_{n}^{+}\left(\mathbf{f}_{\mathbf{x}^{\phantom{\prime}},n}\left(\lambda_{k}^{\phantom{\prime}}\right)\right)+f_{\mathbf{x}^{\phantom{\prime}},n}\left(\lambda_{k}^{\phantom{\prime}}\right)\\ & \leq V_{n}^{+}\left(\mathbf{f}_{\mathbf{x}^{\prime},n}\left(\lambda_{k}^{\prime}\right)\right)+f_{\mathbf{x}^{\prime},n}\left(\lambda_{k}^{\prime}\right)\\ & =v_{\mathbf{x}^{\prime},n}\left(\lambda_{k}^{\prime}\right). \end{align*} Then, \[ V_{n}^{-}\left(\mathbf{x}\right)=\lim_{k\rightarrow\infty}v_{\mathbf{x},n}\left(\lambda_{k}\right)\leq\limsup_{k\rightarrow\infty}v_{\mathbf{x}^{\prime},n}\left(\lambda_{k}^{\prime}\right)\leq\sup v_{\mathbf{x}^{\prime},n}\left(\Lambda_{n}\right)=V_{n}^{-}\left(\mathbf{x}^{\prime}\right), \] as desired.
\end{proof} \begin{example}\label{exm:ControlReduction-GLWBConvexMonotoneBefore}We now show that the GLWB satisfies \emph{\ref{ass:ControlReduction-FlowAndStateConvexityInX}} and \emph{\ref{ass:ControlReduction-ControlEnsuresMonotone}} given \emph{\ref{itm:ControlReduction-VConvexAndMonotone}} and \emph{\ref{itm:ControlReduction-BoundedFromAbove}}. It is trivial to show that the functions $f_{\mathbf{x},n}^{\text{L}}\left(\lambda\right)$ and $\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(\lambda\right)$ defined in \emph{(\ref{eq:GLWB-CashFlow})} and \emph{(\ref{eq:GLWB-StateTransition})} are convex in $\mathbf{x}$ (the maximum of convex functions is a convex function), thereby satisfying \emph{\ref{ass:ControlReduction-FlowAndStateConvexityInX}}. \emph{\ref{ass:ControlReduction-ControlEnsuresMonotone}} is slightly more tedious to verify. Let $\mathbf{x},\mathbf{x}^{\prime}\in\Omega$ s.t. $\mathbf{x}\leq\mathbf{x}^{\prime}$. By \emph{\ref{itm:ControlReduction-VConvexAndMonotone}}, \emph{\ref{itm:ControlReduction-BoundedFromAbove}} and the argument in Example \emph{\ref{exm:ControlReduction-GLWBBangBang}}, we can, w.l.o.g., assume $\lambda_{\mathbf{x}}\in\left\{ 0,1,2\right\} $, where $\lambda_{\mathbf{x}}$ denotes an optimal action at $\mathbf{x}$. Hence, we need only consider three cases: \begin{enumerate} \item Suppose $\lambda_{\mathbf{x}}=0$. Take $\lambda^{\prime}=0$ to get $f_{\mathbf{x},n}^{\text{L}}\left(0\right)=f_{\mathbf{x}^{\prime},n}^{\text{L}}\left(\lambda^{\prime}\right)$ and $\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(0\right)\leq\mathbf{f}_{\mathbf{x}^{\prime},n}^{\text{L}}\left(\lambda^{\prime}\right)$. \item Suppose $\lambda_{\mathbf{x}}=1$. W.l.o.g., we can assume $x_{2}^{\prime}\geq x_{2}>0$. Take $\lambda^{\prime}=x_{2}/x_{2}^{\prime}$ to get $f_{\mathbf{x},n}^{\text{L}}\left(1\right)=f_{\mathbf{x}^{\prime},n}^{\text{L}}\left(\lambda^{\prime}\right)$ and $\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(1\right)\leq\mathbf{f}_{\mathbf{x}^{\prime},n}^{\text{L}}\left(\lambda^{\prime}\right)$. \item Suppose $\lambda_{\mathbf{x}}=2$. If $x_{1}\leq\delta x_{2}$, then $f_{\mathbf{x},n}^{\text{L}}\left(2\right)=f_{\mathbf{x},n}^{\text{L}}\left(1\right)$ and $\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(2\right)=\left( 0,0\right) \leq\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(1\right)$; in this case, we can w.l.o.g. assume $x_{2}^{\prime}\geq x_{2}>0$ and once again take $\lambda^{\prime}=x_{2}/x_{2}^{\prime}$ to get $f_{\mathbf{x},n}^{\text{L}}\left(2\right)=f_{\mathbf{x}^{\prime},n}^{\text{L}}\left(\lambda^{\prime}\right)$ and $\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(2\right)=\left( 0,0\right) \leq\mathbf{f}_{\mathbf{x}^{\prime},n}^{\text{L}}\left(\lambda^{\prime}\right)$. Therefore, we can safely assume that $x_{1}>\delta x_{2}$ so that \begin{equation} f_{\mathbf{x},n}^{\text{L}}\left(2\right)=\mathcal{R}\left(n\right)\left[\left(1-\kappa\right)x_{1}+\kappa\delta x_{2}\right]\leq\mathcal{R}\left(n\right)x_{1}.\label{eq:ControlReduction-BoundOnOptimalCashFlow} \end{equation} \begin{enumerate} \item Suppose $x_{1}^{\prime}\leq\delta x_{2}^{\prime}$. Take $\lambda^{\prime}=1$ to get $\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(2\right)=\left( 0,0\right) \leq\mathbf{f}_{\mathbf{x}^{\prime},n}^{\text{L}}\left(1\right)$ and \[ f_{\mathbf{x},n}^{\text{L}}\left(2\right)\leq\mathcal{R}\left(n\right)x_{1}\leq\mathcal{R}\left(n\right)\delta x_{2}^{\prime}=f_{\mathbf{x}^{\prime},n}^{\text{L}}\left(1\right) \] by \emph{(\ref{eq:ControlReduction-BoundOnOptimalCashFlow})}.
\item Suppose $x_{1}^{\prime}>\delta x_{2}^{\prime}$. Take $\lambda^{\prime}=2$ to get $\mathbf{f}_{\mathbf{x},n}^{\text{L}}\left(2\right)=\left( 0,0\right) =\mathbf{f}_{\mathbf{x}^{\prime},n}^{\text{L}}\left(2\right)$ and \[ f_{\mathbf{x},n}^{\text{L}}\left(2\right)\leq\mathcal{R}\left(n\right)\left[\left(1-\kappa\right)x_{1}^{\prime}+\kappa\delta x_{2}^{\prime}\right]=f_{\mathbf{x}^{\prime},n}^{\text{L}}\left(2\right). \] \end{enumerate} \end{enumerate} \end{example} \subsection{Preservation of convexity and monotonicity between exercise times\label{sub:ControlReduction-Between}} As previously mentioned, to apply Theorem \ref{thm:ControlReduction-SupremumEverywhere}, we need to check the validity of \ref{itm:ControlReduction-VConvexAndMonotone} (i.e. that the solution is CM at $t_{n}^{+}$). In light of this, we would like to identify scenarios in which $V_{n}^{+}$ is CM provided that $V_{n+1}^{-}$ is CM (i.e. convexity and monotonicity are preserved between exercise times). \begin{example}\label{exm:ControlReduction-GLWBMonotoneConvexBetweenExercise}If we assume that both GLWB and GMWB are written on an asset that follows GBM, then Appendix \emph{\ref{app:Preservation}} establishes the convexity and monotonicity (under sufficient regularity) of $V_{n}^{+}$ given the convexity and monotonicity of $V_{n+1}^{-}$. The general argument is applicable to contracts written on assets whose returns follow multidimensional drift-diffusions with parameters independent of the level of the asset (a local volatility model, for example, is not included in this class). Convexity and monotonicity preservation are retrieved directly from a property of the corresponding Green's function. Although the methodology in Appendix \emph{\ref{app:Preservation}} relates convexity and monotonicity to a general property of the Green's function (including the class of contracts driven by GBM), in the interest of intuition, we provide the reader with an alternate proof below using the linearity of the expectation operator along with the linearity of the stochastic process w.r.t. its initial value. Consider, in particular, the GLWB. Equation \emph{(\ref{eq:GLWB-ValueFunction})} stipulates \begin{align*} V_{n}^{+}\left(\mathbf{x}\right) & =\tilde{\mathbb{E}}\Bigl[e^{-\int_{n}^{n+1}r\left(\tau\right)d\tau}V_{n+1}^{-}\left(X_{1}\left(\left(n+1\right)^{-}\right),x_{2}\right)\\ & \qquad\qquad+\int_{n}^{n+1}e^{-\int_{n}^{s}r\left(\tau\right)d\tau}\mathcal{M}\left(s\right)X_{1}\left(s\right)ds\mid X_{1}\left(n^{+}\right)=x_{1}\Bigr] & \text{ on }\left[0,\infty\right)^{2}\times\mathscr{T}. \end{align*} Linearity allows us to consider the two terms appearing in the sum inside the conditional expectation separately. If each is convex in $\mathbf{x}$, so too is the entire expression. If $X_{1}\left(n^{+}\right)=x_{1}$, \[ X_{1}\left(s\right)=x_{1}Y\left(s\right) \] between $n$ and $n+1$, where \[ Y\left(s\right)\equiv\exp\left(\int_{n}^{s}\left[r\left(\tau\right)-\alpha\left(\tau\right)-\frac{1}{2}\sigma^{2}\left(\tau\right)\right]d\tau+\int_{n}^{s}\sigma\left(\tau\right)d\tilde{Z}\left(\tau\right)\right), \] from which it is evident that $X_{1}\left(s\right)$ is linear (and hence convex) in $x_{1}$, since $Y$ depends only on time (note that the parameters appearing in $Y$ are independent of the level of the asset, precluding a local volatility model). It remains to show that the first term is also convex.
Fix $\mathbf{y},\mathbf{y}^{\prime}\in\left[0,\infty\right)^{2}$, $\theta\in\left(0,1\right)$, and let $\mathbf{x}\equiv\theta\mathbf{y}+\left(1-\theta\right)\mathbf{y}^{\prime}$. Then, since $V_{n+1}^{-}$ is assumed convex in $\mathbf{x}$, \begin{align*} V_{n+1}^{-}\left(x_{1}Y\left(\left(n+1\right)^{-}\right),x_{2}\right) & =V_{n+1}^{-}\left(\left(\theta y_{1}+\left(1-\theta\right)y_{1}^{\prime}\right)Y\left(\left(n+1\right)^{-}\right),\theta y_{2}+\left(1-\theta\right)y_{2}^{\prime}\right)\\ & \leq\theta V_{n+1}^{-}\left(y_{1}Y\left(\left(n+1\right)^{-}\right),y_{2}\right)+\left(1-\theta\right)V_{n+1}^{-}\left(y_{1}^{\prime}Y\left(\left(n+1\right)^{-}\right),y_{2}^{\prime}\right). \end{align*} One can use the same technique to show that monotonicity is preserved. An identical argument can be carried out for the GMWB.\end{example} Convexity and monotonicity preservation are established for a stochastic volatility model in \cite{bergman1996general}. For the case of general parabolic equations, convexity preservation is established in \cite{janson2004preservation}. This result is further generalized to parabolic integro-differential equations, arising from problems involving assets whose returns follow jump-diffusion processes \cite{bian2008convexity}. \subsection{Existence of an optimal bang-bang control} Once we have established that convexity and monotonicity are preserved across and between exercise times (i.e. $\S$\ref{sub:ControlReduction-PreservationAcross} and $\S$\ref{sub:ControlReduction-Between}, respectively), we need only apply our argument inductively to establish the existence of an optimal bang-bang control. Instead of providing a proof for the general case, we simply focus on the GLWB contract here. For the case of a general contract, assuming the dynamics followed by the assets preserve the convexity and monotonicity of the cost of funding the contract between exercise times (e.g. GBM, as in Appendix \ref{app:Preservation}), the reader can apply the same techniques to establish the existence of a bang-bang control. \begin{example}Consider the GLWB. Suppose that for some $n$ s.t. $0\leq n<N$, $V_{n+1}^{-}$ is CM. By Example \emph{\ref{exm:ControlReduction-GLWBMonotoneConvexBetweenExercise}}, $V_{n}^{+}$ is also CM. Under sufficient regularity (see Appendix \emph{\ref{app:Preservation}}), for fixed $\mathbf{x}$, $v_{\mathbf{x},n}$ is bounded above (satisfying \emph{\ref{itm:ControlReduction-BoundedFromAbove}}). Since \emph{\ref{itm:ControlReduction-VConvexAndMonotone}} and \emph{\ref{itm:ControlReduction-BoundedFromAbove}} are satisfied, we can use Example \emph{\ref{exm:ControlReduction-GLWBBangBang}} to conclude that the supremum of $v_{\mathbf{x},n}$, for each $\mathbf{x}\in\Omega$, occurs on $\left\{ 0,1,2\right\} $. By Example \emph{\ref{exm:ControlReduction-GLWBConvexMonotoneBefore}}, $V_{n}^{-}$ is convex and monotone. By \emph{(\ref{eq:GLWB-Initial})} and \emph{(\ref{eq:GLWB-Initial2})}, $V\left(\mathbf{x},N\right)=0$. Since $V\left(\mathbf{x},N\right)$ is trivially CM as a function of $\mathbf{x}$, we can apply the above argument inductively to establish the existence of an optimal bang-bang control. \end{example} \section{Numerical Examples}\label{sec:NumericalResults} To demonstrate the bang-bang principle in practice, we implement a numerical method to solve the GLWB and GMWB problems and examine loss-maximizing withdrawal strategies.
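Before describing the pricing algorithm, we illustrate the reduction afforded by Theorem \ref{thm:ControlReduction-SupremumEverywhere} with a minimal Python sketch. The sketch is illustrative only (it is not the pricing code used to produce the results below): the objective $v$ is a hypothetical stand-in for $\lambda\mapsto v_{\mathbf{x},n}\left(\lambda\right)$, chosen to be convex on each of $P_{1}=\left[0,1\right]$ and $P_{2}=\left[1,2\right]$ from Example \ref{exm:ControlReduction-GLWBBangBang}, but not convex on all of $\Lambda_{n}=\left[0,2\right]$.
\begin{verbatim}
import numpy as np

def v(lam):
    """Hypothetical stand-in for lambda -> v_{x,n}(lambda): convex on
    P1 = [0,1] and on P2 = [1,2], with a concave kink at lambda = 1."""
    lam = np.asarray(lam, dtype=float)
    return np.where(lam <= 1.0,
                    lam,  # affine (hence convex) on [0,1]
                    1.0 - 0.8 * (lam - 1.0) + 0.3 * (lam - 1.0) ** 2)

# Bang-bang principle: the supremum over [0,2] is attained on
# E(P1) U E(P2) = {0, 1, 2}, so three evaluations suffice.
candidates = np.array([0.0, 1.0, 2.0])
v_reduced = v(candidates).max()

# Brute-force linear search over a fine partition of [0,2].
grid = np.linspace(0.0, 2.0, 10001)
v_search = v(grid).max()

assert abs(v_reduced - v_search) < 1e-9  # both equal v(1) = 1.0

# Checking only the endpoints {0, 2} of [0,2] would return 0.5 and miss
# the optimum: lambda = 1 is interior to [0,2] but extreme in P1 and P2.
\end{verbatim}
Note that the optimizer $\lambda=1$ is interior to $\Lambda_{n}$; it is found only because extreme points are collected polytope by polytope, which is precisely why \ref{itm:ControlReduction-FlowAndStateConvexity} is imposed on each $P\in\mathcal{P}_{n}\left(\mathbf{y}\right)$ rather than on $\Lambda_{n}$ itself.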
\subsection{Contract pricing algorithm} Algorithm \ref{alg:NumericalMethod-Algorithm} highlights the usual dynamic programming approach to pricing contracts with finitely many exercise times. Note that line 2 is purposely non-specific; the algorithm does not presume anything about the underlying dynamics of the stochastic process(es) on which $V$ depends, and as such does not prescribe a particular numerical method for determining $V_{n}^{+}$ from $V_{n+1}^{-}$. Establishing that the control is bang-bang for a particular contract allows us to replace $\Lambda_{n}$ appearing on line 4 with a finite subset of itself. \begin{algorithm} \caption{Dynamic programming for pricing contracts with finitely many exercise times.\label{alg:NumericalMethod-Algorithm}} \vskip\belowcaptionskip \LinesNumbered \KwData{payoff at the expiry, $V_N=\varphi$} \KwResult{price of the contract at time zero, $V_0\equiv V_0^-$} \For{$n\leftarrow N - 1$ \KwTo $0$}{ use $V_{n+1}^-$ to determine $V_n^+$ \label{alg:Between} \\ \For{$\mathbf{x} \in \Omega$}{ $ V_n^-\left(\mathbf{x}\right) \equiv \sup_{\lambda\in\Lambda_{n}}V_{n}^+\left(\mathbf{f}_{\mathbf{x},n}\left(\lambda\right)\right)+f_{\mathbf{x},n}\left(\lambda\right) $ \label{alg:Across} } } \end{algorithm} \subsection{Numerical method} The numerical method discussed here applies to both GLWB and GMWB contracts. Each contract is originally posed on $\Omega=\left[0,\infty\right)^{2}$. We employ Algorithm \ref{alg:NumericalMethod-Algorithm} but instead approximate the solution using a finite difference method on the truncated domain $\left[0,x_{1}^{\text{max}}\right]\times\left[0,x_{2}^{\text{max}}\right]$. As such, since $\mathbf{f}_{\mathbf{x},n}\left(\lambda\right)$ will not necessarily land on a mesh node, linear interpolation is used to approximate $V_{n}^{+}\left(\mathbf{f}_{\mathbf{x},n}\left(\lambda\right)\right)$ on line 4. A local optimization problem is solved for each point on the finite difference grid. Details of the numerical scheme can be found in \cite{azimzadeh2013hedging,forsyth2013risk}. Between exercise times, the cost of funding each contract satisfies one of (\ref{eq:GMWB-PDE}) or (\ref{eq:GLWB-PDE}). Corresponding to line 2 of Algorithm \ref{alg:NumericalMethod-Algorithm}, we determine $V_{n}^{+}$ from $V_{n+1}^{-}$ using an implicit finite difference discretization. No additional boundary condition is needed at $x_{1}=0$ or $x_{2}=0$ (since (\ref{eq:GMWB-PDE}) and (\ref{eq:GLWB-PDE}) hold along $\partial\Omega\times\left[t_{n},t_{n+1}\right)$). The same is true at $x_{2}=x_{2}^{\text{max}}\gg0$. At $x_{1}=x_{1}^{\text{max}}\gg0$, we impose \begin{equation} V\left(x_{1}^{\text{max}},x_{2},t\right)=g\left(t\right)x_{1}^{\text{max}}\label{eq:NumericalResults-Asymptotic} \end{equation} for some function $g$ that is differentiable everywhere except possibly at the exercise times $n$. Substituting the above into (\ref{eq:GMWB-PDE}) or (\ref{eq:GLWB-PDE}) yields an ordinary differential equation which is solved numerically alongside the rest of the domain. Errors introduced by the above approximations are small in the region of interest, as verified by numerical experiments. \begin{remark}Since we advance the numerical solution from $n^{-}$ to $\left(n-1\right)^{+}$ using a convergent method, the numerical solution converges pointwise to a solution $V$ that is convexity and monotonicity preserving.
Although it is possible to show---for special cases---that convexity and monotonicity are preserved for finite mesh sizes, this is not necessarily true unconditionally.\end{remark} \begin{remark}Although we have shown that an optimal bang-bang control exists for the GLWB problem, we do not replace $\Lambda_{n}$ with $\left\{ 0,1,2\right\} $ on line \emph{4} of Algorithm \emph{\ref{alg:NumericalMethod-Algorithm}} when computing the cost to fund a GLWB in $\S$\emph{\ref{sub:Results-GLWB}} so as to demonstrate that our numerical method, having preserved convexity and monotonicity, selects an optimal bang-bang control. For both the GLWB and the GMWB, we assume that nothing is known about $v_{\mathbf{x},n}$ and hence form a partition \[ \lambda_{1}<\lambda_{2}<\cdots<\lambda_{p} \] of the admissible set and perform a linear search.% \end{remark} \subsection{Results} \subsubsection{Guaranteed Lifelong Withdrawal Benefits\label{sub:Results-GLWB}} Figure \ref{fig:Results-GLWBWithdrawalStrategies} shows withdrawal strategies for the holder under the parameters in Table \ref{tab:Results-GLWBParameters} on the first four contract anniversaries. From the figures, we can clearly see that the optimal control is bang-bang. At any point $\left(\mathbf{x},n\right)$, we see that the holder performs one of nonwithdrawal, withdrawal at exactly the contract rate, or a full surrender (despite being afforded the opportunity to withdraw any amount between nonwithdrawal and a full surrender). When the withdrawal benefit is much larger than the investment account, the optimal strategy is withdrawal at the contract rate (the guarantee is in the money). Conversely, when the investment account is much larger than the withdrawal benefit, the optimal strategy is surrender (the guarantee is out of the money), save for when the holder is anticipating the triennial ratchet (times $n=2$ and $n=3$). Otherwise, the optimal strategy includes nonwithdrawal (to receive a bonus) or withdrawal at the contract rate. Note that the strategy is constant along any straight line through the origin since the solution is homogeneous of order one in $\mathbf{x}$, as discussed in \cite{forsyth2013risk}. \begin{table} \protect\caption{GLWB parameters.\label{tab:Results-GLWBParameters}} \centering{}% \begin{tabular}{lcr} \toprule \multicolumn{2}{l}{\textbf{Parameter}} & \textbf{Value}\tabularnewline \midrule Volatility & $\sigma$ & 0.20\tabularnewline \midrule Risk-free rate & $r$ & 0.04\tabularnewline \midrule Hedging fee & $\alpha$ & 0.015\tabularnewline \midrule Contract rate & $\delta$ & 0.05\tabularnewline \midrule Bonus rate & $\beta$ & 0.06\tabularnewline \midrule Expiry & $N$ & 57\tabularnewline \midrule Initial investment & $w_{0}$ & 100\tabularnewline \midrule Initial age at time zero & & 65\tabularnewline \midrule Mortality data & & \cite{pasdika2005coping}\tabularnewline \midrule Ratchets & & Triennial\tabularnewline \midrule Withdrawals & & Annual\tabularnewline \bottomrule \end{tabular} \quad{} \begin{tabular}{lr} \toprule \textbf{Anniversary $n$} & \textbf{Penalty $\kappa_{n}$}\tabularnewline \midrule 1 & 0.03\tabularnewline \midrule 2 & 0.02\tabularnewline \midrule 3 & 0.01\tabularnewline \midrule $\geq4$ & 0.00\tabularnewline \bottomrule \end{tabular} \end{table} \begin{figure} \caption{Optimal control for the GLWB for data in Table \emph{\ref{tab:Results-GLWBParameters}}.
As predicted, there exists an optimal control consisting only of nonwithdrawal, withdrawal at the contract rate, and a full surrender.\label{fig:Results-GLWBWithdrawalStrategies}} \vskip\belowcaptionskip \centering \includegraphics[height=0.175in]{glwb_legend} \subfigure[$n=1$]{ \includegraphics[width=2.9in]{glwb_time_1} } \subfigure[$n=2$]{ \includegraphics[width=2.9in]{glwb_time_2} } \subfigure[$n=3$]{ \includegraphics[width=2.9in]{glwb_time_3} } \subfigure[$n=4$]{ \includegraphics[width=2.9in]{glwb_time_4} } \end{figure} \subsubsection{\label{sub:Results-GMWB}Guaranteed minimum withdrawal benefit} For the GMWB, \ref{ass:ControlReduction-FlowAndStateConvexityInX} is violated. In particular, for $\kappa_{n}>0$, the function $f_{\mathbf{x},n}^{\text{M}}\left(\lambda\right)$ is concave as a function of $\mathbf{x}$. However, when $\kappa_{n}=0$ or $G=0$ ($G=0$ is considered in \cite{huang2013analysis}), the function $f_{\mathbf{x},n}^{\text{M}}\left(\lambda\right)$ (see (\ref{eq:GMWB-CashFlow})) is linear in $\mathbf{x}$, and hence the convexity of $V_{n}^{-}$ can be guaranteed given that $V_{n}^{+}$ is CM. In this case, it is possible to use the same machinery as was used in the GLWB case to arrive at a bang-bang principle (see Theorem \ref{thm:ControlReduction-SupremumEverywhere}). The case of $\kappa_{n}=0$ corresponds to zero surrender charges at the $n$th anniversary, while $G=0$ corresponds to enforcing that all withdrawals (regardless of size) be charged at the penalty rate. Now, consider the data in Table \ref{tab:Results-GMWBParameters}. Since $\kappa_{n}=0$ for all $n\geq7$, the convexity of $V$ in $\mathbf{x}$ is preserved for all $t\in\left(6,N\right]$. However, since $\kappa_{6}>0$, the convexity is violated as $t\rightarrow6^{-}$. Figure \ref{fig:Results-GMWBConvexity} demonstrates this preservation and violation of convexity. As a consequence, $V$ will not necessarily be convex in $\mathbf{x}$ as $t\rightarrow5^{+}$, and the bang-bang principle is no longer guaranteed at the anniversary dates $n\le5$. \begin{figure} \caption{$V\left(\mathbf{x},t\right)$ for fixed $x_{1}=100$ under the data in Table \emph{\ref{tab:Results-GMWBParameters}}. Points where $V\left(\mathbf{x},n^{-}\right)=V\left(\mathbf{x},n^{+}\right)$ correspond to nonwithdrawal. To the left of these points, the holder performs withdrawal (see Figure \emph{\ref{fig:Results-GMWBWithdrawalStrategies}}).\label{fig:Results-GMWBConvexity}} \centering \subfigure[Convexity is not preserved from $t\rightarrow6^{+}$ to $t\rightarrow6^{-}$.]{ \includegraphics[width=2.9in]{gmwb_value_time_6} } \subfigure[Convexity is preserved from $t\rightarrow7^{+}$ to $t\rightarrow7^{-}$.]{ \includegraphics[width=2.9in]{gmwb_value_time_7}} \end{figure} Note that for $x_{2}>0$, the conditions $\lambda x_{2}\in\left[0,G\wedge x_{2}\right]$ and $\lambda x_{2}\in\left(G\wedge x_{2},x_{2}\right]$ appearing in (\ref{eq:GMWB-CashFlow}) are equivalent to $\lambda\in\left[0,G/x_{2}\wedge1\right]$ and $\lambda\in\left(G/x_{2}\wedge1,1\right]$, respectively. Assuming that $V_{n}^{+}$ is CM and taking $\mathcal{P}_{n}^{\text{M}}\left(\mathbf{x}\right)\equiv\left\{ P_{1},P_{2}\right\} $ with $P_{1}\equiv\left[0,G/x_{2}\wedge1\right]$ and $P_{2}\equiv\left[G/x_{2}\wedge1,1\right]$ yields that there exists an optimal control taking on one of the values in $\left\{ 0,G/x_{2}\wedge1,1\right\} $ at any point $\left(\mathbf{x},n\right)$ with $x_{2}>0$ (see the sketch below).
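To make the state dependence concrete, the following minimal sketch (in the same spirit as the earlier one; the helper is hypothetical and not part of our implementation) assembles the nodal candidate set:
\begin{verbatim}
def gmwb_candidates(x2, G):
    """Extreme points E(P1) U E(P2) of P1 = [0, G/x2 ^ 1] and
    P2 = [G/x2 ^ 1, 1] at a node with x2 > 0 (illustrative only)."""
    kink = min(G / x2, 1.0)  # G/x2 ^ 1
    return sorted({0.0, kink, 1.0})

print(gmwb_candidates(x2=200.0, G=10.0))  # [0.0, 0.05, 1.0]
print(gmwb_candidates(x2=5.0, G=10.0))    # [0.0, 1.0] (kink merges with 1)
\end{verbatim}
Since the candidate set varies with $x_{2}$, the resulting optimal control need not be bang-bang (cf. Remark \ref{rmk:PrincipleVersusControl} below), although each nodal optimization is still over at most three actions.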
These three actions correspond to nonwithdrawal, withdrawing the predetermined amount $G$, or performing a full surrender. This is verified by Figure \ref{fig:Results-GMWBWithdrawalStrategies}, which shows withdrawal strategies under the parameters in Table \ref{tab:Results-GMWBParameters} at times $n=6$ and $n=7$. As predicted, along any line $x_{2}=\text{const.}$, the optimal control takes on one of a finite number of values. Since $\kappa_{n}>0$ at $n=6$, we see that the holder is more hesitant to surrender the contract whenever $x_{1}\gg x_{2}$ (compare with the same region at $n=7$). Control figures for GMWBs not satisfying the bang-bang principle can be seen in the numerical results in \cite{dai2008guaranteed,chen08a}. \begin{remark}\label{rmk:PrincipleVersusControl}Consider a GMWB with $\kappa_{n}=0$ for all withdrawal times $n$. As suggested by the above, this contract satisfies the bang-bang principle (in particular, Theorem \emph{\ref{thm:ControlReduction-SupremumEverywhere}} is satisfied) everywhere. However, the GMWB does not necessarily yield an optimal bang-bang control since $\mathcal{P}_{n}^{\text{M}}\left(\mathbf{x}\right)$ depends on $x_{2}$ (the hypotheses of Corollary \emph{\ref{cor:OptimalBangBang}} are not satisfied). For example, consider an optimal control for the GMWB taking on the value $G/x_{2}\wedge1$ at each $\mathbf{x}$ with $x_{2}>0$. Such a control's range is the interval $\left(0,1\right]$, which is not a finite set. However, in this case, the bang-bang principle guarantees that, for fixed $x_{2}$, only a finite subset of the admissible set need be considered in the corresponding optimization problem. Computationally, this is just as desirable as the case of an optimal bang-bang control. \end{remark} \begin{table} \protect\caption{GMWB parameters \emph{\cite{chen2008effect}}.\label{tab:Results-GMWBParameters}} \centering{}% \begin{tabular}{lcr} \toprule \multicolumn{2}{l}{\textbf{Parameter}} & \textbf{Value}\tabularnewline \midrule Volatility & $\sigma$ & 0.15\tabularnewline \midrule Risk-free rate & $r$ & 0.05\tabularnewline \midrule Hedging fee & $\alpha$ & 0.01\tabularnewline \midrule Contract rate & $G$ & 10\tabularnewline \midrule Expiry & $N$ & 10\tabularnewline \midrule Initial investment & $w_{0}$ & 100\tabularnewline \midrule Withdrawals & & Annual\tabularnewline \bottomrule \end{tabular} \quad{}% \begin{tabular}{lr} \toprule \textbf{Anniversary $n$} & \textbf{Penalty $\kappa_{n}$}\tabularnewline \midrule 1 & 0.08\tabularnewline \midrule 2 & 0.07\tabularnewline \midrule 3 & 0.06\tabularnewline \midrule 4 & 0.05\tabularnewline \midrule 5 & 0.04\tabularnewline \midrule 6 & 0.03\tabularnewline \midrule $\geq7$ & 0.00\tabularnewline \bottomrule \end{tabular} \end{table} \begin{figure} \caption{Optimal control $\lambda_{\mathbf{x}}$ scaled by $x_{2}$ for the data in Table \emph{\ref{tab:Results-GMWBParameters}}.\label{fig:Results-GMWBWithdrawalStrategies}} \vskip\belowcaptionskip \begin{center} \includegraphics[height=0.175in]{gmwb_legend} \par\end{center} \centering \subfigure[$n=6$]{ \includegraphics[width=2.9in]{gmwb_time_6} } \subfigure[$n=7$]{ \includegraphics[width=2.9in]{gmwb_time_7} } \end{figure} \section{Conclusion} Although it is commonplace in the insurance literature to assume the existence of optimal bang-bang controls, there does not appear to be a rigorous statement of this result. We have rigorously derived sufficient conditions which guarantee the existence of optimal bang-bang controls for GMxB guarantees.
These conditions require that the contract features be such that the optimal control problem can be formulated as the maximization of a convex objective function, and that the underlying stochastic process assumed for the risky assets preserves convexity and monotonicity. These conditions are non-trivial: they are satisfied for the GLWB contract but not for the GMWB contract under typical contract parameters. From a practical point of view, the existence of optimal bang-bang controls allows for the use of very efficient numerical methods. Although we have focused specifically on the application of our results to GMxB guarantees, the reader will have no difficulty in applying the sufficient conditions to other optimal control problems in finance. We believe that we can also use an approach similar to that used here to establish the existence of optimal bang-bang controls for general impulse control problems. In the impulse control case, these conditions will require that the intervention operator have a particular form and that the stochastic process (without intervention) preserve convexity and monotonicity. We leave this generalization for future work.